A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. https://t.co/rM77LJejuk
Thank you for your attention to this matter. cc: @AnthropicAI @DarioAmodei https://t.co/FLCByLHF73

A statement on the comments from Secretary of War Pete Hegseth. https://t.co/Gg7Zb09IMR
"We will challenge any supply chain risk designation in court" - Anthropic They are saying Department of War cannot restrict customers' use of Claude outside of Dep of War contract work. https://t.co/3FDsXmmcZi
huh, why hasn't Wikipedia been updated in more than 2 years on HuggingFace? https://t.co/DxbO8RpM6G
Marco Rubio finding out he has to run Anthropic now too. https://t.co/Ffc5jsvzLi
I know we don't do facts anymore, but here's the "dangerous and collapsing" EU that Elon Musk and MAGA influencers keep warning you about. https://t.co/RZrT5p9p9D
This has been the plan all along. - Foment violence and chaos in the streets of MN - Implement the Insurrection Act - Declare Martial Law - Suspend elections https://t.co/Ky3Jf3awqc
As a three-time combat veteran, I get pretty damn hot when a five-time draft dodger like @realDonaldTrump pounds his chest and bangs the war drums. America is over it. No more sending our sons & daughters to fight for oil. https://t.co/sgyXKme3h3
Where's the outrage? https://t.co/Hd0Br38yOF
The headquarters of AMI Labs, the company of Yann Le Cun, who was for a long time chief scientist of Meta (Facebook, WhatsApp and Instagram), will be in Paris. The researcher @ylecun, who clearly has top reading material, remains a professor at @nyuniversity https://t.co/g7cAHoF7zw
We told you the Venezuela invasion was just corruption. It took one whole week to get the proof. Trump took Venezuela's oil at gunpoint, and gave it to one of his biggest campaign donors. 1/ But when you learn the details, it's even worse. A short thread on this corruption story. https://t.co/ZExM5S89VK
Nostalgia has tricked people into thinking the 1990s and early 2000s were a time of cutesy and comfortable digital disruption. https://t.co/yPMO0Ab3Ga

From the country that lectures Europe daily on its supposed "lack of free speech": https://t.co/6nLs1Ykfwt
Citizen: "Why are you asking for my paperwork?" Border patrol agent: "Because of your accent." Citizen: "You have an accent too!" Jesus Christ. This might be the funniest form of fascism in history. https://t.co/ocvFDeQzof
1st and 34th author of @GoogleDeepMind's paper [1] each got 1/4 of a Nobel Prize for protein structure prediction through AlphaFold. Who invented that? (Disclaimer: a student from my lab co-founded DeepMind.)

The 2021 paper [1] failed to cite important prior work [2] by Baldi and Pollastri (2002): at a time when compute was roughly ten thousand times more expensive than in 2021, [2] introduced a pipeline very similar to the one of AlphaFold 2, using multiple sequence alignment (MSA) to predict the secondary protein structure with the help of a position-specific scoring matrix (PSSM) or a profile matrix, going beyond even earlier work of 1988 [5][6][10]. The extra step (absent in AlphaFold 2) was to predict the protein's topology, too. See also the follow-up work of 2012 [3].

[1] didn't cite @HochreiterSepp et al.'s first successful application [7] of deep learning to protein folding (2007, using LSTM instead of MSA to construct a profile). [1] also failed to cite the essential prior work by Golkov et al. (2016) [4][8], which had crucial aspects of AlphaFold: (1) identify homologous sequences in a database of proteins with known structure, (2) compute the co-evolution statistics using the homologous sequences, (3) train a graph NN to predict the protein contact map (that determines its 3D structure) directly from the co-evolution statistics, (4) demonstrate experimentally a significant boost in performance on the CASP dataset [4][9]. See the attached image!

Instead of the contact map, DeepMind (2021) predicted the distance map, and instead of graph CNNs, they used the quadratic Transformer published in 2017 (the unnormalized linear Transformer had existed since 1991 [11]). DeepMind also used more training data and much more compute for hyperparameter tuning etc. Image credits: [4][8]

REFERENCES

[1] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Zidek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli & D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589, 2021.

[2] P. Baldi, G. Pollastri. A machine learning strategy for protein analysis. IEEE Intelligent Systems 17.2 (2002): 28-35.

[3] P. Di Lena, K. Nagata, and P. Baldi. Deep architectures for protein contact map prediction. Bioinformatics, 28, 2449-2457, 2012.

[4] V. Golkov, M. J. Skwark, A. Golkov, A. Dosovitskiy, T. Brox, J. Meiler, D. Cremers. Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images. NeurIPS, Barcelona, 2016.

[5] N. Qian and T. J. Sejnowski. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol., 202, 865-884, 1988.

[6] H. Bohr, J. Bohr, S. Brunak, R. M. J. Cotterill, B. Lautrup, L. Norskov, O. H. Olsen, S. B. Petersen. Protein secondary structure and homology by neural networks. The α-helices in rhodopsin. FEBS Lett., 241, 223-228, 1988.

[7] S. Hochreiter, M. Heusel, K. Obermayer. Fast model-based protein homology detection without alignment. Bioinformatics 23(14):1728-36, 2007. Successful application of deep learning to protein folding problems, through an LSTM that was orders of magnitude faster than competing methods.

[8] D. Cremers (July 2025). LinkedIn post on the Nobel Prize for AlphaFold.

[9] A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24, 2024 (updated 2025). https://t.co/u9YxfBuqNf . Popular tweets on this: https://t.co/heYSuPQDxp https://t.co/QQU9FKpqAh

[10] The Nobel Committee for Chemistry (2024). Scientific Background to the Nobel Prize in Chemistry 2024.

[11] Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Switzerland, 2022 (updated 2025). Preprint https://t.co/YZrEphq1qx. This extends the 2015 award-winning deep learning survey in the journal "Neural Networks."
The answer turns out to be "yes, kinda". After spending a few minutes clicking "like" on posts I liked and "show less like this" on those I didn't, here's what my feed now contains. https://t.co/4E96OQKee6
Is this true?

So there was quite a sensational rant post titled "DuckDB beats Polars for 1TB of data" and the video "Polars Got Destroyed by DuckDB in this 1TB Test" that was shared a lot. No code was shared for Polars, and our request for it was ignored. These posts were conveniently shared in posts and newsletters because they fit a narrative.

In any case, I went through the effort to reproduce the dataset and run the exact benchmark. The post mentioned 64GB RAM, so I ran on a 5a.8xlarge (32 vCPU / 64GB RAM). Polars did not go OOM, but finished the query in 14 minutes, never exceeding 14GB RAM usage. On the same machine, DuckDB also took 14 minutes. Both tools hit the bandwidth limit: 1 TB / 10 Gbps = 13.3 min, but that makes less of a title.

The whole benchmark was just hard to reproduce; the 1TB part made it unwieldy but didn't matter. It could have been done with a 100GB benchmark, as the cardinality of the groups was just ~1800. Here is the Polars query: https://t.co/62oLSctcSd

So I guess... code or it didn't happen.
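For reference, a minimal sketch of what a streaming Polars group-by over Parquet on S3 can look like; the column names, bucket path, and aggregations below are hypothetical placeholders, since the actual benchmark query is only in the linked image.

```python
import polars as pl

# Hypothetical sketch of a streaming group-by over Parquet files on S3.
# Column names ("id", "value") and the bucket path are placeholders,
# not the benchmark's actual query.
lazy = pl.scan_parquet("s3://benchmark-bucket/data/*.parquet")

result = (
    lazy
    .group_by("id")                              # low-cardinality key (~1800 groups)
    .agg(
        pl.col("value").sum().alias("value_sum"),
        pl.col("value").mean().alias("value_mean"),
        pl.len().alias("rows"),
    )
    .collect(streaming=True)                     # stream batches so peak RAM stays low
)
print(result)
```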

CuTe algebra is extremely elegant. I can't believe we've all been writhing around on the floor like Terence Tao to do tile indexing. https://t.co/4puH9HSefw
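As an aside for readers who haven't met CuTe: a toy Python sketch of the shape/stride layout algebra it formalizes. This is a conceptual illustration, not the CuTe C++ API; the 8x8 matrix and the function names are made up.

```python
# Toy illustration of shape/stride layout algebra (the idea CuTe formalizes),
# not the CuTe C++ API itself.
def layout(stride):
    """A layout maps a logical coordinate to a linear index via its strides."""
    return lambda *coord: sum(c * s for c, s in zip(coord, stride))

# View an 8x8 row-major matrix as a 2x2 grid of 4x4 tiles:
within_tile = layout((8, 1))       # offset of (r, c) inside one tile
tile_origin = layout((4 * 8, 4))   # offset of each tile's top-left corner

def tiled_index(tile_row, tile_col, r, c):
    """Compose the two layouts: tile coordinate plus within-tile coordinate."""
    return tile_origin(tile_row, tile_col) + within_tile(r, c)

# Element (6, 7) of the matrix lives in tile (1, 1) at local position (2, 3).
assert tiled_index(1, 1, 2, 3) == 6 * 8 + 7   # == 55
```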
Meta paper 2601.10639 is exactly the same as the RWKV DeepEmbed original version. We have an improved version in https://t.co/28vbKfGXdn (lines 261-262)
RWKV-8 "Heron" preview (1) - DeepEmbed. Seems Gemma3n is trying similar tricks (Per-Layer Embedding), so I will discuss it first. It's essentially free performance: lots of params, but they can be offloaded to RAM/SSD, and it is simple to train and deploy. https://t.co/UY1rhh0JLQ
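A rough sketch of the general idea as the post describes it: each layer owns a large per-token embedding table whose rows are gathered on demand, so the table can live in RAM/SSD. The elementwise-gating form, names, and shapes below are assumptions, not the RWKV-8 or Gemma3n implementation.

```python
import numpy as np

# Sketch of a per-layer, per-token embedding table that modulates the hidden state.
# The gating form, names, and shapes are assumptions, not the actual DeepEmbed code.
vocab, d_model = 65536, 1024
deep_embed_table = np.random.randn(vocab, d_model).astype(np.float32)  # offloadable to RAM/SSD

def deep_embed(hidden, token_ids):
    """Gather only the rows for the current tokens, then gate the hidden state."""
    rows = deep_embed_table[token_ids]   # cheap gather: a few rows per step
    return hidden * rows                 # elementwise modulation, no large matmul

hidden = np.random.randn(8, d_model).astype(np.float32)   # 8 positions in a sequence
token_ids = np.array([3, 17, 3, 99, 256, 1024, 7, 7])
print(deep_embed(hidden, token_ids).shape)                 # (8, 1024)
```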
wait this is actually big. this deepseek research used LogitLens (lets you see what the model is 'thinking' at each layer) and CKA (compares what different layers are actually learning) to figure out why the new Engram architecture works. this is the first time i have seen mech interpretability research being used in a capabilities paper. feels like a shift.
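For context on LogitLens, a toy illustration of the idea with random stand-in weights (not the paper's models): decode each layer's hidden state through the unembedding matrix and see which token the model currently favors.

```python
import numpy as np

# Toy LogitLens: project each layer's hidden state through the unembedding matrix.
# Weights and hidden states are random stand-ins for illustration only.
d_model, vocab, n_layers = 64, 1000, 4
W_unembed = np.random.randn(d_model, vocab)
hidden_per_layer = [np.random.randn(d_model) for _ in range(n_layers)]

for layer, h in enumerate(hidden_per_layer):
    logits = h @ W_unembed                     # decode the intermediate representation
    print(f"layer {layer}: top token id = {int(logits.argmax())}")
```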
DeepSeek is back! "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models" They introduce Engram, a module that adds an O(1) lookup-style memory based on modernized hashed N-gram embeddings Mechanistic analysis suggests Engram reduces the need
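A hedged sketch of what a hashed N-gram embedding lookup can look like in general; the bucket count, hash function, and N below are assumptions, not DeepSeek's Engram module.

```python
import numpy as np

# General sketch of an O(1) hashed N-gram embedding lookup. Bucket count, hash
# function, and N are assumptions, not DeepSeek's Engram implementation.
num_buckets, d_model, N = 1 << 20, 512, 3
table = np.random.randn(num_buckets, d_model).astype(np.float32)

def ngram_bucket(tokens, i):
    """Hash the up-to-N token ids ending at position i into a bucket index."""
    ngram = tuple(tokens[max(0, i - N + 1): i + 1])
    return hash(ngram) % num_buckets

def ngram_memory(tokens):
    """One memory vector per position via constant-time table lookups."""
    return np.stack([table[ngram_bucket(tokens, i)] for i in range(len(tokens))])

print(ngram_memory([5, 42, 42, 7, 99]).shape)   # (5, 512)
```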
OpenAI is nothing without its people!!!! This is incredible to witness from so many OpenAI employees https://t.co/wmcKMC8gIJ