We told you the Venezuela invasion was just corruption. It took one whole week to get the proof. Trump took Venezuela's oil at gunpoint, and gave it to one of his biggest campaign donors. 1/ But when you learn the details, it's even worse. A short 🧵 on this corruption story. https://t.co/ZExM5S89VK
Nostalgia has tricked people into thinking the 1990s and early 2000s were a time of cutesy and comfortable digital disruption. https://t.co/yPMO0Ab3Ga

From the country that lectures Europe daily on its supposed "lack of free speech": https://t.co/6nLs1Ykfwt
Citizen: "Why are you asking for my paperwork?" Border patrol agent: "Because of your accent." Citizen: "You have an accent too!" Jesus Christ. This might be the funniest form of fascism in history. https://t.co/ocvFDeQzof
The 1st and 34th authors of @GoogleDeepMind's paper [1] each received 1/4 of a Nobel Prize for protein structure prediction through AlphaFold. Who invented that? (Disclaimer: a student from my lab co-founded DeepMind.)

The 2021 paper [1] failed to cite important prior work [2] by Baldi and Pollastri (2002): at a time when compute was roughly ten thousand times more expensive than in 2021, [2] introduced a pipeline very similar to that of AlphaFold 2, using multiple sequence alignment (MSA) to predict the secondary protein structure with the help of a position-specific scoring matrix (PSSM) or a profile matrix, going beyond even earlier work of 1988 [5][6][10]. The extra step (absent in AlphaFold 2) was to predict the protein's topology, too. See also the follow-up work of 2012 [3].

[1] didn't cite @HochreiterSepp et al.'s first successful application [7] of deep learning to protein folding (2007, using LSTM instead of MSA to construct a profile).

[1] also failed to cite the essential prior work by Golkov et al. (2016) [4][8], which had crucial aspects of AlphaFold: (1) identify homologous sequences in a database of proteins with known structure, (2) compute co-evolution statistics from the homologous sequences, (3) train a graph NN to predict the protein contact map (which determines its 3D structure) directly from the co-evolution statistics, (4) demonstrate experimentally a significant boost in performance on the CASP dataset [4][9]. See the attached image!

Instead of the contact map, DeepMind (2021) predicted the distance map, and instead of graph CNNs, they used the quadratic Transformer published in 2017 (the unnormalized linear Transformer had existed since 1991 [11]). DeepMind also used more training data and much more compute for hyperparameter tuning etc. Image credits: [4][8]

REFERENCES
[1] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Zidek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli & D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589, 2021.
[2] P. Baldi, G. Pollastri. A machine learning strategy for protein analysis. IEEE Intelligent Systems 17(2):28-35, 2002.
[3] P. Di Lena, K. Nagata, P. Baldi. Deep architectures for protein contact map prediction. Bioinformatics 28:2449-2457, 2012.
[4] V. Golkov, M. J. Skwark, A. Golkov, A. Dosovitskiy, T. Brox, J. Meiler, D. Cremers. Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images. NeurIPS, Barcelona, 2016.
[5] N. Qian, T. J. Sejnowski. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202:865-884, 1988.
[6] H. Bohr, J. Bohr, S. Brunak, R. M. J. Cotterill, B. Lautrup, L. Norskov, O. H. Olsen, S. B. Petersen. Protein secondary structure and homology by neural networks. The α-helices in rhodopsin. FEBS Lett. 241:223-228, 1988.
[7] S. Hochreiter, M. Heusel, K. Obermayer. Fast model-based protein homology detection without alignment. Bioinformatics 23(14):1728-1736, 2007. Successful application of deep learning to protein folding, through an LSTM that was orders of magnitude faster than competing methods.
[8] D. Cremers. LinkedIn post on the Nobel Prize for AlphaFold. July 2025.
[9] A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24, 2024 (updated 2025). https://t.co/u9YxfBuqNf Popular tweets on this: https://t.co/heYSuPQDxp https://t.co/QQU9FKpqAh
[10] The Nobel Committee for Chemistry (2024). Scientific Background to the Nobel Prize in Chemistry 2024.
[11] Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Switzerland, 2022 (updated 2025). Preprint: https://t.co/YZrEphq1qx This extends the 2015 award-winning deep learning survey in the journal "Neural Networks."
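[Editor's illustration] To make the Golkov et al. pipeline above concrete, here is a toy sketch of step (2), the co-evolution statistics, computed as pairwise mutual information between columns of a synthetic MSA; step (3)'s learned contact-map predictor is replaced by a simple argmax, so this illustrates the idea rather than reproducing anyone's actual code.

```python
# Toy illustration of co-evolution statistics for contact prediction
# (step 2 of the pipeline above). The MSA is synthetic; the learned
# graph-NN predictor of step 3 is replaced here by an argmax.
import numpy as np

def mutual_information(msa: np.ndarray) -> np.ndarray:
    """Pairwise MI between MSA columns, a classic co-evolution signal.
    msa: (n_sequences, n_residues) array of amino-acid indices in 0..19."""
    n_seq, n_res = msa.shape
    mi = np.zeros((n_res, n_res))
    for i in range(n_res):
        for j in range(i + 1, n_res):
            joint = np.zeros((20, 20))
            np.add.at(joint, (msa[:, i], msa[:, j]), 1.0)  # joint counts
            joint /= n_seq
            marg = np.outer(joint.sum(1), joint.sum(0))    # product of marginals
            nz = joint > 0
            mi[i, j] = mi[j, i] = np.sum(joint[nz] * np.log(joint[nz] / marg[nz]))
    return mi

rng = np.random.default_rng(0)
msa = rng.integers(0, 20, size=(2048, 30))  # 2048 homologs, 30 residues
msa[:, 25] = msa[:, 5]                      # make residues 5 and 25 co-evolve
scores = mutual_information(msa)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (5, 25)
```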
The answer turns out to be "yes, kinda". After spending a few minutes clicking "like" on posts I liked and "show less like this" on those I didn't, here's what my feed now contains. https://t.co/4E96OQKee6
Is this true?

So there was quite a sensational rant post titled "DuckDB beats Polars for 1TB of data" and a video, "Polars Got Destroyed by DuckDB in this 1TB Test", that were shared a lot. No code was shared for Polars, and requests for it were ignored. These posts were conveniently shared in posts and newsletters because they fit a narrative.

In any case, I went through the effort of reproducing the dataset and running the exact benchmark. The post mentioned 64 GB RAM, so I ran on a 5a.8xlarge (32 vCPU / 64 GB RAM). Polars did not go OOM, but finished the query in 14 minutes, never exceeding 14 GB of RAM. On the same machine, DuckDB also took 14 minutes. Both tools hit the bandwidth limit: 1 TB / 10 Gbps = 13.3 min, but that makes less of a title.

The whole benchmark was just hard to reproduce; the 1TB part made it unwieldy but didn't matter. It could have done with a 100GB benchmark, as the cardinality of the groups was just ~1800. Here is the Polars query: https://t.co/62oLSctcSd So I guess... Code or it didn't happen.
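[Editor's illustration] For context, the bandwidth arithmetic plus a hypothetical shape of such a query. The real query is only behind the link above, so the source path and column names here are placeholders, not the benchmark's code.

```python
# Back-of-envelope: both engines were network-bound, not compute-bound.
print(1e12 * 8 / 10e9 / 60)  # 1 TB over 10 Gbps -> ~13.3 minutes

# Hypothetical reconstruction of the kind of group-by benchmarked above.
# Polars' streaming engine aggregates batch by batch, which is how peak
# RAM stays ~14 GB on a 1 TB input (~1800 groups fit easily in memory).
import polars as pl

result = (
    pl.scan_parquet("s3://bucket/dataset/*.parquet")  # placeholder path
    .group_by("key")                                  # placeholder columns
    .agg(
        pl.col("value").sum().alias("value_sum"),
        pl.col("value").mean().alias("value_mean"),
        pl.len().alias("rows"),
    )
    .collect(engine="streaming")  # recent Polars; older: collect(streaming=True)
)
print(result)
```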

CuTe algebra is extremely elegant. I can't believe we've all been writhing around on the floor like Terence Tao to do tile indexing. https://t.co/4puH9HSefw
Meta paper 2601.10639 is exactly the same as the original version of RWKV DeepEmbed. We have an improved version in https://t.co/28vbKfGXdn (lines 261-262)
RWKV-8 "Heron" preview (1) - DeepEmbed. Seems Gemma3n is trying similar tricks (Per-Layer Embedding), so I will discuss it first πͺΆ It's essentially free performance - lots of params, but can be offloaded to RAM/SSD, and simple to train and deployπ https://t.co/UY1rhh0JLQ
Wait, this is actually big. This DeepSeek research used LogitLens (lets you see what the model is 'thinking' at each layer) and CKA (compares what different layers are actually learning) to figure out why the new Engram architecture works. This is the first time I have seen mech interpretability research used in a capabilities paper. Feels like a shift.
DeepSeek is back! "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models" They introduce Engram, a module that adds an O(1) lookup-style memory based on modernized hashed N-gram embeddings. Mechanistic analysis suggests Engram reduces the need…
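[Editor's illustration] A bare-bones sketch of what a hashed n-gram lookup memory of the kind the abstract describes can look like. The hash function, n, table size, and how the output would be mixed into the residual stream are all my assumptions, not DeepSeek's code.

```python
# Sketch of an O(1) hashed n-gram embedding lookup: the trailing n-gram
# at each position is hashed into a fixed table, one gather per token.
import torch
import torch.nn as nn

class HashedNgramMemory(nn.Module):
    def __init__(self, n: int, buckets: int, dim: int):
        super().__init__()
        self.n, self.buckets = n, buckets
        self.table = nn.Embedding(buckets, dim)
        # Odd random multipliers for a simple multiplicative hash.
        self.register_buffer("mult", torch.randint(1, 2**31 - 1, (n,)) | 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        b, s = token_ids.shape
        pad = torch.zeros(b, self.n - 1, dtype=token_ids.dtype,
                          device=token_ids.device)
        # (b, s, n): the trailing n-gram ending at each position.
        grams = torch.cat([pad, token_ids], dim=1).unfold(1, self.n, 1)
        h = (grams * self.mult).sum(-1) % self.buckets  # O(1) per position
        return self.table(h)                            # (b, s, dim)

mem = HashedNgramMemory(n=3, buckets=100_003, dim=256)
ids = torch.randint(0, 32000, (2, 16))
print(mem(ids).shape)   # torch.Size([2, 16, 256])
```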
OpenAI is nothing without its people!!!! This is incredible to witness from so many OpenAI employees https://t.co/wmcKMC8gIJ
The final step of the FinePDFs saga is here! The FinePDFs BOOK. We put everything we know about PDFs inside:
- How to make a SoTA PDFs dataset?
- How much of the old internet is dead now?
- Why we chose RolmOCR for OCR
- What is https://t.co/i3PivBI9hh
And many more 🤗 https://t.co/m8mC0Xjksc
Really great work by @Aflah02101 documenting the achievable performance on a variety of hardware. https://t.co/WMgpV9uYDU
🧵 Thread: Introducing MAMF Explorer 🧵 A practical way to understand real matmul performance on GPUs, not just theoretical peaks. https://t.co/j4R3LHRAFt
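[Editor's illustration] The gist of such a measurement, boiled down: time a large matmul and convert to achieved TFLOPS for comparison against the datasheet peak. The sizes, dtype, and iteration counts below are arbitrary choices of mine, not MAMF Explorer's methodology.

```python
# Bare-bones achieved-matmul-TFLOPS measurement: time a big bf16 matmul
# and compare the result against the GPU's theoretical peak.
import time
import torch

def achieved_tflops(m: int, n: int, k: int, iters: int = 20) -> float:
    a = torch.randn(m, k, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(k, n, device="cuda", dtype=torch.bfloat16)
    for _ in range(3):          # warm-up (cuBLAS autotuning, clocks)
        a @ b
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    dt = (time.perf_counter() - t0) / iters
    return 2 * m * n * k / dt / 1e12   # a matmul does 2*M*N*K FLOPs

print(f"{achieved_tflops(8192, 8192, 8192):.1f} TFLOPS achieved")
```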
🚨 Ever wondered how much you can ace popular MCQ benchmarks without even looking at the questions? 🤯 Turns out, you can often get significant accuracy just from the choices alone. This is true even on recent benchmarks with 10 choices (like MMLU-Pro) and their vision counterparts like MMMU-Pro (yes, even without images!) 😱 Such choice-only shortcuts are hard to fix: we find that prior attempts at fixing them, GoldenSwag (for HellaSwag) and TruthfulQA v2, still suffer from similar problems.
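[Editor's illustration] One cheap way to probe for such a shortcut (my sketch, not the paper's method): fit a classifier that sees only the concatenated answer options, never the question, and compare held-out accuracy to the 1/k chance rate. Any MCQ set with per-question options and a gold answer index works as input here.

```python
# Choices-only probe: if a model that never sees the question beats
# chance by a wide margin, the benchmark leaks answers via its options.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def choices_only_accuracy(options, answer_idx):
    """options: list of per-question option lists; answer_idx: gold indices."""
    texts = [" [SEP] ".join(opts) for opts in options]   # no question text!
    x = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
    xtr, xte, ytr, yte = train_test_split(x, answer_idx, random_state=0)
    return LogisticRegression(max_iter=1000).fit(xtr, ytr).score(xte, yte)

# Chance on a 10-option benchmark like MMLU-Pro is 0.10; accuracy well
# above that from the choices alone indicates a shortcut.
```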
We pretrained multiple 7B LLMs from scratch and found that natural exposure to AI misalignment discourse causes models to become more misaligned. Optimistically, we also find that adding positive synthetic documents in pretraining reduces misalignment. Thread 🧵 https://t.co/ACMsC1qkV9
@JerryWeiAI Since you provided no evidence for your claims, I went to see what Anthropic has released. The only work on this I can find seems to contradict your tweet: https://t.co/sAyKIooNXj Has there been subsequent unreleased work leading to different conclusions? https://t.co/FkBrsAJ1ow
Comments that aged like milk https://t.co/0pGXUsuKk9
Recent contributions by NVIDIA engineers and llama.cpp collaborators have resulted in significant performance gains for local AI https://t.co/NFkopVZaFz
https://t.co/bDmZM6codm
https://t.co/bEF4FKjqv8
LFM2.5-Audio-1.5B
> Real-time text-to-speech and ASR
> Running locally on a CPU with llama.cpp
> Interleaves speech and text
It's super elegant; I'm bullish on local audio models https://t.co/Fw8RWAg4bG
⏰️ Last chance: register now for Modeling Molecules - The Bio Foundation Model Breakfast from #AWS at #JPM2026. https://t.co/3JqFNGirjI 🔬 Don't miss the opportunity to network with peers & learn from pioneers shaping the future of #AI-powered drug discovery. Spaces are limited, so secure your spot now.
📱🏥 AI is critical for medical startup @montugroup to match its large patient base with limited clinical staff. We spoke to Montu about how it transforms the patient experience with Amazon Connect. Join AWS Activate to build, scale & equip your startup. https://t.co/LD9RiRzHNH https://t.co/JkWqxsJOVt