New paper: We train Activation Oracles: LLMs that decode their own neural activations and answer questions about them in natural language. We find surprising generalization. For instance, our AOs uncover misaligned goals in fine-tuned models, without training to do so. https://t.co/ePK2gG5Rik
@TaylorLorenz wtf is a radicalized soccer mom? https://t.co/STDfYIpLwT
New paper, open access, feedback very welcome. Any complaints will be handled in the fullness of time, by the appropriate authorities, once you have located the correct form etc https://t.co/PZqBxc9QK3 https://t.co/5XthCYMvuh
"ChatGPT detector" catches AI-generated papers with unprecedented accuracy https://t.co/WoaQH8uIjF
Ok this is kinda neat: If you give https://t.co/XTEx6kY7EN a scientific paper it will generate a network of references and citations! https://t.co/E93tAG5vMc
Clear evidence that employers discriminate against nonbinary individuals. All-else-equal, disclosing that one uses "they/them" pronouns substantially lowers the chances of getting a job interview. https://t.co/VJ8LlQR77H https://t.co/raLGvf2h4v

Claude Code can now browse the web! You can build some insane things with this. I just built a little subagent that tracks & reports AI-related posts on X. Broken X algo? No worries! Just build a personal agent and let it help you filter out and track what you want. https://t.co/XGOtdF8WiT
A new 25-year study found that higher intake of high-fat cheese and cream was associated with a lower risk of all-cause dementia. Moo. https://t.co/Eoa256jsNB
"Re-inventing the wheel" should mean exactly the opposite of what it means, given that there is a decent chance that the wheel for transportation was invented just once, in the Carpathian Mountains around 3900 BC & was never independently re-invented by other societies afterwards https://t.co/lXYBSnlqfP

Is XRP crashing? The sustained break below $2 signals trouble https://t.co/n94cPbGD5a @godbole17 @coindesk
Last week @Shopify released what might be the most significant AI tool set the world has seen so far in terms of AIs that will have immediate impact on generating revenue. They released many new tools. We dig into what we think are the three most important ones: 1) Simgym, 2) Shopify Product Network, and 3) Sidekick Pulse. Thanks to @tobi @RichardSSutton @m_sendhil @suzannegildert and Niamh Gavin for a fascinating discussion. Video link below.
In collaboration with @AIatMeta, we added support for Pixio in the Transformers library! It proposes 4 changes to Masked AutoEncoders (MAE), including scaling it to 2B images. It outperforms/matches DINOv3 trained at similar scales. Find the models here: https://t.co/iUE8fQbmOp https://t.co/cJssb2s7FZ

https://t.co/1igzeDZGqw

People use AI for a wide variety of reasons, including emotional support. Below, we share the efforts we've taken to ensure that Claude handles these conversations both empathetically and honestly. https://t.co/P2BmTDEDge
MiMo-V2-Flash FREE API is now live on ModelScope! The first major release since Fuli Luo joined Xiaomi, and it's built for real-world agentic AI. MiMo-V2-Flash: an open, high-performance MoE model with 309B total / 15B active parameters, a 256K context window, and 150+ tokens/s generation thanks to native Multi-Token Prediction. Key wins for developers: hybrid attention (5:1 SWA + global) gives 6× less KV cache with full long-context recall; 73.4% on SWE-Bench Verified, a new SOTA for open-source models; matches DeepSeek-V3.2 on reasoning, but much faster in practice. API-ready, perfect for building smart, responsive agents. Try it now: https://t.co/7hQIhBC25Y
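The 6× KV-cache figure in the post above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a hypothetical 2K sliding-window span (the post does not state the actual window size), comparing one 5:1 group of SWA + global layers against six full-context layers:

```python
# Sanity-check the claimed ~6x KV-cache saving from a 5:1
# sliding-window-to-global attention layout. The window size is an
# assumption (the post does not state it); 2K is used here.
CONTEXT = 256 * 1024   # 256K-token context window, per the post
WINDOW = 2 * 1024      # hypothetical sliding-window span

def kv_tokens_per_group(window: int, context: int) -> int:
    """KV-cache entries for one group of 5 SWA layers + 1 global layer."""
    return 5 * window + 1 * context

full_global = 6 * CONTEXT                          # six full-context layers
hybrid = kv_tokens_per_group(WINDOW, CONTEXT)       # the 5:1 hybrid layout
print(f"reduction: {full_global / hybrid:.1f}x")    # -> reduction: 5.8x
```

Under these assumptions the saving comes out just under 6×, and it approaches the quoted figure as the window shrinks relative to the 256K context.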

We're open-sourcing Perception Encoder Audiovisual (PE-AV), the technical engine that helps drive SAM Audio's state-of-the-art audio separation. Built on our Perception Encoder model from earlier this year, PE-AV integrates audio with visual perception, achieving state-of-the-art results across a wide range of audio and video benchmarks. Its native multimodal support can assist people in everyday tasks, including sound detection and richer audio-visual scene understanding. Read the paper: https://t.co/RLWJOgG2uz Download the code: https://t.co/1L5ZqCZlxq

You can train the VyvoTTS model using just a 200MB audio file with Unsloth in 5 minutes to replicate someone's voice with high accuracy. Next week, we'll release the base model and many fine-tuned models trained with extensive data. Which other voices should we train? https://t.co/A6AgGbMnBW
Skills allow more people to build what's lacking in AI Agents. It's about building the capabilities/knowledge missing in AI agents. Everyone should learn to build Skills. I am hosting a 2hr workshop for our academy members in Jan: https://t.co/64C8UmZq7j Join us! https://t.co/eRJB5vxOPn

Every year, I put together a year-in-review of what's going on in the crazily evolving world of brain chips and #BCIs. 2025 was huge for #Neuralink, but we saw many other trends. Here are my top-10 trends of the year and what to watch for in 2026: https://t.co/VOIc4fXC94
Alrighty. The Toad is out of the bag. Install toad to work with a variety of #AI coding agents with one beautiful terminal interface. Check out the blog post for more information... https://t.co/KpQu5cYZzR I've been told I'm very authentic on camera. You just can't fake that kind of awkwardness.
Using the extension, Claude Code can test code directly in the browser to validate its work. Claude can also see client-side errors via console logs. Try it out by running /chrome in the latest version of Claude Code. https://t.co/jcb21qr5Y5
This looks extremely useful for improving design and debugging workflows in Claude Code. Sometimes it's hard to explain to Claude Code the exact changes you want in your app. This provides better visual cues and more direct, useful context for Claude Code. https://t.co/y7nbm3CwRV

@eunifiedworld @bytebot girl. https://t.co/Q3ODvrvnxu
i swear to god if i hear a normie say «knowledge graph» one more time… https://t.co/3MUzxTMVvD
excited to have the vp of ai at algolia give us a talk on how to build self-improving ai systems https://t.co/6ah81HZm7L
Google just released FunctionGemma https://t.co/PwXVblLxnr
Nice! https://t.co/IkI5tVo2fM
icymi I work at the emoji company (new feature!) https://t.co/4tYkaTg3KG

Introducing T5Gemma 2, the next generation of encoder-decoder models. Built on top of Gemma 3, we were able to build compact models at 270M-270M, 1B-1B, and 4B-4B sizes. While most models today are decoder-only, T5Gemma 2 is the first (I'm aware of) multimodal, long-context, and heavily multilingual (140 languages) encoder-decoder model out there. We hope this model enables the model research community as well as the community of devs ready to experiment with new architectures. Blog: https://t.co/12ScxYcjxa Models: https://t.co/D38wNFo5Bc Paper: https://t.co/2rypSQ7Bf6
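The encoder-decoder vs decoder-only contrast in the post above comes down to cross-attention: the decoder attends over an encoder's bidirectional reading of the input, a pathway decoder-only models lack. A toy NumPy sketch of that pattern (all shapes and weights here are illustrative, not T5Gemma 2's actual architecture):

```python
import numpy as np

# Toy sketch of the encoder-decoder pattern: an encoder reads the source
# bidirectionally, and the decoder combines causal self-attention with
# cross-attention over the encoder output.
rng = np.random.default_rng(0)
D = 16           # model width (illustrative)
SRC, TGT = 7, 5  # source and target sequence lengths

def attention(q, k, v, causal=False):
    """Scaled dot-product attention with an optional causal mask."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if causal:
        mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)  # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Encoder: bidirectional self-attention over the source sequence.
src = rng.normal(size=(SRC, D))
enc_out = attention(src, src, src)

# Decoder: causal self-attention over the target, then cross-attention
# reading the encoder output -- the piece decoder-only models omit.
tgt = rng.normal(size=(TGT, D))
dec_self = attention(tgt, tgt, tgt, causal=True)
dec_out = attention(dec_self, enc_out, enc_out)

print(enc_out.shape, dec_out.shape)  # (SRC, D) and (TGT, D)
```

The point of the sketch is the data flow: the encoder's output has the source length, while the decoder's output has the target length, and the cross-attention step is where the two meet.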
