Your curated collection of saved posts and media

Showing 24 posts · last 7 days · quality filtered
@ChrisMurphyCT · Jan 17, 2026

We told you the Venezuela invasion was just corruption. It took one whole week to get the proof. Trump took Venezuela's oil at gunpoint, and gave it to one of his biggest campaign donors. 1/ But when you learn the details, it's even worse. A short 🧵 on this corruption story. https://t.co/ZExM5S89VK

[1 media attachment]

@PessimistsArc · Jan 17, 2026

Nostalgia has tricked people into thinking the 1990s and early 2000s were a time of cutesy and comfortable digital disruption. https://t.co/yPMO0Ab3Ga

[4 media attachments]

@Daractenus · Jan 17, 2026

From the country that lectures Europe daily on its supposed "lack of free speech": https://t.co/6nLs1Ykfwt

[media attached]
πŸ”ylecun retweeted
D
Daractenus
@Daractenus
πŸ“…
Jan 17, 2026
58d ago
πŸ†”21143693

From the country that lectures Europe daily on its supposed "lack of free speech": https://t.co/6nLs1Ykfwt

❀️2,859
likes
πŸ”683
retweets
πŸ–ΌοΈ Media
@RichardHanania · Jan 18, 2026

Citizen: “Why are you asking for my paperwork?” Border patrol agent: “Because of your accent.” Citizen: “You have an accent too!” Jesus Christ. This might be the funniest form of fascism in history. https://t.co/ocvFDeQzof

[media attached]

@SchmidhuberAI · Jan 15, 2026

1st and 34th author of @GoogleDeepMind's paper [1] each got 1/4 Nobel Prize for protein structure prediction through AlphaFold. Who invented that? (Disclaimer: a student from my lab co-founded DeepMind.)

The 2021 paper [1] failed to cite important prior work [2] by Baldi and Pollastri (2002): at a time when compute was roughly ten thousand times more expensive than in 2021, [2] introduced a pipeline very similar to the one of AlphaFold 2, using multiple sequence alignment (MSA) to predict the secondary protein structure with the help of a position-specific scoring matrix (PSSM) or a profile matrix, going beyond even earlier work of 1988 [5][6][10]. The extra step (absent in AlphaFold 2) was to predict the protein's topology, too. See also the follow-up work of 2012 [3].

[1] didn't cite @HochreiterSepp et al.'s first successful application [7] of deep learning to protein folding (2007, using LSTM instead of MSA to construct a profile).

[1] also failed to cite the essential prior work by Golkov et al. (2016) [4][8], which had crucial aspects of AlphaFold: (1) identify homologous sequences in a database of proteins with known structure, (2) compute the co-evolution statistics using the homologous sequences, (3) train a graph NN to predict the protein contact map (which determines its 3D structure) directly from the co-evolution statistics, (4) demonstrate experimentally a significant boost in performance on the CASP dataset [4][9]. See the attached image!

Instead of the contact map, DeepMind (2021) predicted the distance map, and instead of graph CNNs, they used the quadratic Transformer published in 2017 (the unnormalized linear Transformer had existed since 1991 [11]). DeepMind also used more training data and much more compute for hyperparameter tuning etc. Image credits: [4][8]

REFERENCES

[1] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Zidek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli & D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589, 2021.
[2] P. Baldi, G. Pollastri. A machine learning strategy for protein analysis. IEEE Intelligent Systems 17.2 (2002): 28-35.
[3] P. Di Lena, K. Nagata, P. Baldi. Deep Architectures for Protein Contact Map Prediction. Bioinformatics 28, 2449-2457, 2012.
[4] V. Golkov, M. J. Skwark, A. Golkov, A. Dosovitskiy, T. Brox, J. Meiler, D. Cremers. Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images. NeurIPS, Barcelona, 2016.
[5] N. Qian, T. J. Sejnowski. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202, 865-884, 1988.
[6] H. Bohr, J. Bohr, S. Brunak, R. M. J. Cotterill, B. Lautrup, L. Norskov, O. H. Olsen, S. B. Petersen. Protein secondary structure and homology by neural networks. The α-helices in rhodopsin. FEBS Lett. 241, 223-228, 1988.
[7] S. Hochreiter, M. Heusel, K. Obermayer. Fast model-based protein homology detection without alignment. Bioinformatics 23(14):1728-36, 2007. Successful application of deep learning to protein folding problems, through an LSTM that was orders of magnitude faster than competing methods.
[8] D. Cremers (July 2025). LinkedIn post on the Nobel Prize for AlphaFold.
[9] A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24, 2024 (updated 2025). https://t.co/u9YxfBuqNf Popular tweets on this: https://t.co/heYSuPQDxp https://t.co/QQU9FKpqAh
[10] The Nobel Committee for Chemistry (2024). Scientific Background to the Nobel Prize in Chemistry 2024.
[11] Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Switzerland, 2022 (updated 2025). Preprint https://t.co/YZrEphq1qx This extends the 2015 award-winning deep learning survey in the journal "Neural Networks."
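Step (2) of the Golkov et al. pipeline described above, computing co-evolution statistics from homologous sequences, can be illustrated with a toy sketch. Here the statistic is mutual information between MSA columns; this is an illustrative simplification on a synthetic alignment, not Golkov et al.'s actual method.

```python
import math
from collections import Counter

def column_mutual_information(msa, i, j):
    """Mutual information (bits) between alignment columns i and j of an MSA,
    a simple co-evolution statistic: high MI suggests the two residue
    positions vary together across homologous sequences."""
    n = len(msa)
    ci = Counter(s[i] for s in msa)
    cj = Counter(s[j] for s in msa)
    cij = Counter((s[i], s[j]) for s in msa)
    mi = 0.0
    for (a, b), nab in cij.items():
        p_ab = nab / n
        mi += p_ab * math.log2(p_ab / ((ci[a] / n) * (cj[b] / n)))
    return mi

# Tiny synthetic alignment: columns 0 and 1 co-vary perfectly,
# column 2 is independent of column 0.
msa = ["AAC", "AAG", "TTC", "TTG"]
print(column_mutual_information(msa, 0, 1))  # 1.0 (perfectly coupled)
print(column_mutual_information(msa, 0, 2))  # 0.0 (independent)
```

In the real pipeline these pairwise statistics form the input "image" from which a network predicts the contact map.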

[1 media attachment]

@jeremyphoward · Jan 16, 2026

The answer turns out to be "yes, kinda". After spending a few minutes clicking "like" on posts I liked and "show less like this" on those I didn't, here's what my feed now contains. https://t.co/4E96OQKee6

@jeremyphoward • Sun Jan 11 03:06

Is this true?

[4 media attachments]

@RitchieVink · Jan 16, 2026

So there was quite a sensational rant post titled "DuckDB beats Polars for 1TB of data" and the video "Polars Got Destroyed by DuckDB in this 1TB Test" that was shared a lot. No code was shared for Polars, and our requests for it were ignored. These posts were conveniently shared in posts and newsletters because they fit a narrative. In any case, I went through the effort of reproducing the dataset and running the exact benchmark. The post mentioned 64GB RAM, so I ran on a 5a.8xlarge (32 vCPU / 64GB RAM). Polars did not go OOM; it finished the query in 14 minutes, never exceeding 14GB of RAM. On the same machine, DuckDB also took 14 minutes. Both tools hit the bandwidth limit: 1 TB / 10 Gbps = 13.3 min, but that makes for less of a title 😉. The whole benchmark was just hard to reproduce; the 1TB part made it unwieldy but didn't matter. It could have been done with a 100GB benchmark, as the cardinality of the groups was just ~1800. Here is the Polars query: https://t.co/62oLSctcSd So I guess... code or it didn't happen.
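The bandwidth bound quoted above checks out; a quick sketch of the arithmetic, assuming decimal units (1 TB = 10^12 bytes):

```python
def transfer_minutes(bytes_total, link_gbps):
    """Lower bound on wall time to pull data over a network link:
    bytes -> bits, divided by the link rate in bits per second."""
    seconds = bytes_total * 8 / (link_gbps * 1e9)
    return seconds / 60

# 1 TB over a 10 Gbps link, as on the benchmark machine
print(round(transfer_minutes(1e12, 10), 1))  # 13.3
```

Once both engines saturate the link, any further query-engine speedup is invisible at this scale.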

[2 media attachments]

@fleetwood___ · Jan 16, 2026

CuTe algebra is extremely elegant. I can't believe we've all been writhing around on the floor like Terence Tao to do tile indexing. https://t.co/4puH9HSefw
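The core object in CuTe's algebra is a layout: a (shape, stride) pair that maps a logical tile coordinate to a linear index via a dot product with the strides. A toy Python illustration of that mapping (a sketch of the concept, not CuTe's actual API):

```python
def layout_index(coord, shape, stride):
    """CuTe-style layout: map a logical coordinate to a linear offset as
    sum(coord[k] * stride[k]); the shape only bounds the coordinates."""
    assert all(0 <= c < s for c, s in zip(coord, shape))
    return sum(c * d for c, d in zip(coord, stride))

# An 8x4 tile stored row-major: stride (4, 1)
print(layout_index((2, 3), (8, 4), (4, 1)))  # 11
# The same tile viewed column-major: just swap the strides to (1, 8)
print(layout_index((2, 3), (8, 4), (1, 8)))  # 26
```

The elegance is that transposing, tiling, and swizzling all become stride manipulations instead of hand-written index arithmetic.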

[1 media attachment]

@BlinkDL_AI · Jan 16, 2026

Meta paper 2601.10639 is exactly the same as the original version of RWKV DeepEmbed 😂 We have an improved version in https://t.co/28vbKfGXdn (lines 261-262)

@BlinkDL_AI • Mon May 26 10:00

RWKV-8 "Heron" preview (1) - DeepEmbed. Seems Gemma3n is trying similar tricks (Per-Layer Embedding), so I will discuss it first 🪶 It's essentially free performance - lots of params, but they can be offloaded to RAM/SSD, and simple to train and deploy 🚀 https://t.co/UY1rhh0JLQ
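As a rough sketch of the per-layer-embedding idea described in the thread (each layer owns a token-indexed table whose rows are gathered per batch and can be offloaded to RAM/SSD), here is a hypothetical NumPy illustration; the gating form and all names are assumptions for illustration, not RWKV's or Meta's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, n_layers = 1000, 64, 4

# One extra embedding table per layer. Only the rows for the tokens in the
# current batch are gathered, so the tables can live off-GPU (RAM/SSD):
# lots of parameters, but negligible compute per token.
per_layer_emb = [rng.standard_normal((vocab, d_model)) for _ in range(n_layers)]

def apply_deep_embed(hidden, token_ids, layer):
    """Hypothetical sketch: modulate the hidden state with a per-layer,
    per-token embedding (here via a cheap multiplicative gate)."""
    gate = per_layer_emb[layer][token_ids]   # gather rows for batch tokens
    return hidden * (1.0 + np.tanh(gate))

h = rng.standard_normal((8, d_model))        # 8 tokens in the batch
tokens = rng.integers(0, vocab, size=8)
for layer in range(n_layers):
    h = apply_deep_embed(h, tokens, layer)
print(h.shape)  # (8, 64)
```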

[1 media attachment]

@himanshustwts · Jan 17, 2026

wait this is actually big. this deepseek research used LogitLens (lets you see what the model is 'thinking' at each layer) and CKA (compares what different layers are actually learning) to figure out why the new Engram architecture works. apparently this is the first time i have seen mech interpretability research being used in a capabilities paper. feels like a shift.
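CKA, mentioned above as a way to compare what different layers learn, has a simple linear variant; a minimal sketch of the standard formula with centered features (the textbook definition, not the paper's code):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (samples x features):
    ||Xc^T Yc||_F^2 / (||Xc^T Xc||_F * ||Yc^T Yc||_F), columns centered.
    1.0 means the two layers encode the same (linear) structure."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Xc.T @ Yc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 16))
print(round(linear_cka(X, X), 6))      # 1.0: a layer matches itself
print(round(linear_cka(X, 2 * X), 6))  # 1.0: invariant to isotropic scaling
```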

@scaling01 • Mon Jan 12 16:19

DeepSeek is back! "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models" They introduce Engram, a module that adds an O(1) lookup-style memory based on modernized hashed N-gram embeddings Mechanistic analysis suggests Engram reduces the need
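An O(1) hashed N-gram embedding lookup of the kind the quoted tweet describes can be sketched as follows; the hashing scheme and all names are hypothetical illustrations, not DeepSeek's Engram implementation:

```python
import hashlib
import numpy as np

table_size, dim, n = 2**16, 32, 3
rng = np.random.default_rng(0)
table = rng.standard_normal((table_size, dim))  # shared hashed-embedding table

def ngram_bucket(tokens):
    """Deterministically hash an n-gram of token ids into a table slot."""
    h = hashlib.blake2b(",".join(map(str, tokens)).encode(), digest_size=8)
    return int.from_bytes(h.digest(), "big") % table_size

def ngram_memory(token_ids):
    """For each position, an O(1) lookup of the embedding for the trailing
    n-gram -- a fixed-cost memory, independent of context length."""
    out = np.zeros((len(token_ids), dim))
    for t in range(n - 1, len(token_ids)):
        out[t] = table[ngram_bucket(token_ids[t - n + 1 : t + 1])]
    return out

ctx = [5, 9, 42, 5, 9, 42]
mem = ngram_memory(ctx)
# identical trailing trigrams (positions 2 and 5) hit the same table row
```

The sparsity axis here is that each token touches one row of a huge table, rather than activating the whole network.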

[1 media attachment]

@kyliebytes · Nov 20, 2023

OpenAI is nothing without its people!!!! This is incredible to witness from so many OpenAI employees https://t.co/wmcKMC8gIJ

[media attached]

@HKydlicek · Jan 06, 2026

The final step of the FinePDFs saga is here! The FinePDFs 📃 BOOK
We put everything we know about PDFs inside:
- How to make the SoTA PDFs dataset?
- How much old internet is dead now?
- Why we chose RolmOCR for OCR
- What is https://t.co/i3PivBI9hh
And many more 🤗 https://t.co/m8mC0Xjksc

[1 media attachment]

@BlancheMinerva · Jan 08, 2026

Really great work by @Aflah02101 documenting the achievable performance on a variety of hardware. https://t.co/WMgpV9uYDU

@Aflah02101 • Thu Jan 08 19:12

🧵 Thread: Introducing MAMF Explorer 🧵 A practical way to understand real matmul performance on GPUs, not just theoretical peaks. https://t.co/j4R3LHRAFt
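Measuring achieved (rather than theoretical peak) matmul throughput boils down to timing a GEMM and dividing the standard 2*M*N*K flop count by wall time. A minimal NumPy sketch of that measurement, as an illustration of the idea rather than MAMF Explorer's code:

```python
import time
import numpy as np

def achieved_tflops(M, N, K, repeats=10):
    """Time an (M x K) @ (K x N) matmul and report achieved TFLOP/s,
    using the standard 2*M*N*K flop count for a dense matmul."""
    A = np.random.default_rng(0).standard_normal((M, K), dtype=np.float32)
    B = np.random.default_rng(1).standard_normal((K, N), dtype=np.float32)
    A @ B                                    # warm-up run
    t0 = time.perf_counter()
    for _ in range(repeats):
        A @ B
    dt = (time.perf_counter() - t0) / repeats
    return 2 * M * N * K / dt / 1e12

print(f"{achieved_tflops(1024, 1024, 1024):.2f} TFLOP/s achieved")
```

The gap between this number and the hardware's datasheet peak is exactly what such explorers document across shapes and devices.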

[1 media attachment]

@nikhilchandak29 · Jul 04, 2025

🚨 Ever wondered how much you can ace popular MCQ benchmarks without even looking at the questions? 🤯 Turns out, you can often get significant accuracy just from the choices alone. This is true even on recent benchmarks with 10 choices (like MMLU-Pro) and their vision counterparts like MMMU-Pro (yes, even without images!) 😱📉 Such choice-only shortcuts are hard to fix: we find that prior attempts at fixing them, GoldenSwag (for HellaSwag) and TruthfulQA v2, still suffer from similar problems.
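One classic choice-only shortcut is that correct options tend to be longer or more detailed than distractors. A toy illustration on synthetic data (the length artifact is an assumption for illustration, not the paper's benchmarks or method):

```python
import random

random.seed(0)

# Synthetic MCQ items where correct answers are, on average, wordier --
# the kind of artifact a choice-only baseline can exploit.
def make_item():
    correct = " ".join(["word"] * random.randint(6, 12))
    wrongs = [" ".join(["word"] * random.randint(2, 8)) for _ in range(3)]
    options = wrongs + [correct]
    random.shuffle(options)
    return options, options.index(correct)

items = [make_item() for _ in range(1000)]

# A "model" that never reads the question: always pick the longest option.
hits = sum(max(range(4), key=lambda i: len(opts[i])) == ans
           for opts, ans in items)
print(hits / len(items))  # far above the 25% random-guess baseline
```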

[1 media attachment]

@GeodesResearch · Jan 16, 2026

We pretrained multiple 7B LLMs from scratch and found that natural exposure to AI misalignment discourse causes models to become more misaligned. Optimistically, we also find that adding positive synthetic documents in pretraining reduces misalignment. Thread 🧵 https://t.co/ACMsC1qkV9

[1 media attachment]

@BlancheMinerva · Jan 18, 2026

@JerryWeiAI Since you provided no evidence for your claims, I went to see what Anthropic has released. The only work on this I can find seems to contradict your tweet: https://t.co/sAyKIooNXj Has there been subsequent unreleased work leading to different conclusions? https://t.co/FkBrsAJ1ow

[1 media attachment]

@BlancheMinerva · Jan 18, 2026

Comments that aged like milk https://t.co/0pGXUsuKk9

[1 media attachment]

@ggerganov · Jan 06, 2026

Recent contributions by NVIDIA engineers and llama.cpp collaborators resulting in significant performance gains for local AI https://t.co/NFkopVZaFz

[1 media attachment]

@ggerganov · Jan 06, 2026

https://t.co/bDmZM6codm

[1 media attachment]

@ggerganov · Jan 06, 2026

https://t.co/bEF4FKjqv8

[1 media attachment]

@maximelabonne · Jan 06, 2026

LFM2.5-Audio-1.5B
> Real-time text-to-speech and ASR
> Running locally on a CPU with llama.cpp
> Interleave speech and text
It's super elegant. I'm bullish on local audio models https://t.co/Fw8RWAg4bG

[media attached]

@AWSstartups · Jan 05, 2026

⏰ Last chance: register now for the Modeling Molecules - The Bio Foundation Model Breakfast from #AWS at #JPM2026. https://t.co/3JqFNGirjI 🔬 Don't miss the opportunity to network with peers & learn from pioneers shaping the future of #AI-powered drug discovery. Spaces are limited - secure your spot now.

[media attached]

@AWSstartups · Jan 08, 2026

🌱πŸ₯ AI is critical for medical startup @montugroup to match its large patient base with limited clinical staff. We spoke to Montu about how it transforms the patient experience with Amazon Connect. Join AWS Activate to build, scale & equip your startup. https://t.co/LD9RiRzHNH https://t.co/JkWqxsJOVt

[media attached]