Your curated collection of saved posts and media
Worth mentioning that two of the AI players basically blew themselves up this week:
* Musk said he would basically start brand new on xAI after most of his cofounders left and his product is basically crap.
* Meta also shared that its AI is not working properly, after Zuckerberg spent hundreds of billions hiring away from other companies and internal staff didn't like it. The company now plans to lay off 20% of its employees.
Scaling is not all you need, illustration #3761: two of the richest guys in the world tried to build AGI with basically unlimited budgets … and failed.
Insane how leaky OpenAI is smh
@amirrahnama_ Wow! There are a lot of books that came out in 2025-2026... so, big thanks!!
@TanayVasishtha I meant to say we don't need IDEs.
Today we celebrate the most dangerous number in the universe. Starring @neiltyson. π = 3.1415926535… It never ends. Teaser trailer below https://t.co/8zbQby8V2S
@bridgemindai Will do
@TanayVasishtha IDEs on their way out
Built a step-race mini-game for Apple Vision Pro: your headset tracks your real-world steps and you race your friends over SharePlay. First to 50 steps wins! @himelstech as a guest (he won because he was running). Available in Party Games https://t.co/rvZ6gbqoXE
New research on LLM agent generalization. RL fine-tuning makes agents strong in familiar environments, but it struggles to transfer to unseen ones.

This paper systematically studies RL generalization for LLM agents across three axes: within-environment transfer across task difficulty, cross-environment transfer to unseen settings, and sequential multi-environment training.

Within an environment, RL delivers massive gains. Training on easy WebShop tasks improves hard-task performance by 60+ points. Easy-to-hard curriculum learning adds another 2-3 points on top.

Across environments, transfer is weak. Agents average only 3.3-3.4 point improvements on unseen environments. Training on BabyAI actually drops WebShop from 28.6 to 10.3.

Sequential training is where it gets interesting. Training across five environments sequentially achieves performance comparable to joint training, with minimal forgetting.

The authors claim that RL fine-tuning doesn't produce generally capable agents out of the box, but sequential training across diverse environments offers a practical path to broad competence.

Paper: https://t.co/BYfVK3DPoH
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
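The two training schedules the post contrasts can be sketched in a few lines. This is a toy illustration, not the paper's implementation: environment names, task IDs, and difficulty scores are all made up, and "training" is reduced to the order in which batches would be fed to the agent. It shows the difference between an easy-to-hard curriculum applied per environment sequentially versus environments interleaved jointly.

```python
# Hypothetical sketch of two batch-ordering strategies for multi-environment
# RL fine-tuning: sequential (finish one env's curriculum before the next)
# vs. joint (interleave environments round-robin). All names are illustrative.

def curriculum(tasks):
    """Order one environment's tasks easy-to-hard by a difficulty score."""
    return sorted(tasks, key=lambda t: t["difficulty"])

def sequential_schedule(envs):
    """Consume each environment's full curriculum before moving on."""
    batches = []
    for env_name, tasks in envs.items():
        batches.extend((env_name, t["id"]) for t in curriculum(tasks))
    return batches

def joint_schedule(envs):
    """Interleave environments round-robin, easiest tasks first."""
    per_env = [[(name, t["id"]) for t in curriculum(tasks)]
               for name, tasks in envs.items()]
    batches = []
    for step in zip(*per_env):  # stops at the shortest curriculum
        batches.extend(step)
    return batches

# Invented example data; difficulty is an arbitrary toy score.
envs = {
    "webshop": [{"id": "ws-hard", "difficulty": 3},
                {"id": "ws-easy", "difficulty": 1}],
    "babyai":  [{"id": "ba-easy", "difficulty": 1},
                {"id": "ba-hard", "difficulty": 3}],
}

print(sequential_schedule(envs))
# [('webshop', 'ws-easy'), ('webshop', 'ws-hard'),
#  ('babyai', 'ba-easy'), ('babyai', 'ba-hard')]
print(joint_schedule(envs))
# [('webshop', 'ws-easy'), ('babyai', 'ba-easy'),
#  ('webshop', 'ws-hard'), ('babyai', 'ba-hard')]
```

The paper's finding, as summarized above, is that the sequential ordering ends up comparable to joint training with minimal forgetting, which is why it is the interesting result.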

VC investments typically take 5-8 years to exit. That means almost every AI VC investment right now is essentially a bet against the vision Anthropic, OpenAI, and Gemini have laid out.
Maybe 5% of the replies to this post are human written.
The AI boom may be hitting the gaming industry in unexpected ways. From a global RAM shortage pushing up console costs to growing fears of job losses in development studios, gamers are starting to feel the ripple effects of the broader AI infrastructure race. The irony is striking. Technology built to create new experiences may end up reshaping the very industry that helped drive digital innovation for decades. https://t.co/1GJwxk4XaC @wired @helen
@PatrickMoorhead Hardware 4 is better than anything Rivian has coming. Way better.
Allergies so bad the first thing I'm doing after I land is buying sunglasses
@BacLeodiv If you want to maximize compute advantage and you're skilled, then Claude. It has many surfaces with good enough quality but lacks image gen. That can be fixed by asking Claude to generate ASCII art and prompts for other tools. If you want it only for coding, use Codex.
@BacLeodiv Alternatively, build a webpage for your idea, link it to a Calendly meeting invite, and get your first customers to subsidize it. Use Gemini 3.1 in AI Studio for free for this. Hustle, do outreach, get customers, then start with Codex and a code review tool and you're off to the races.
@michael_nielsen How do you tell who's using AI? Overly verbose vs super dense? https://t.co/qwyZ4Txdx9
@steipete Awesome, here's a list of everyone on X that my AI was able to find about who is going to GTC: https://t.co/k0TrQ29626 See you there!
this is so fucking wholesome

guy used AI to save his cancer-ridden dog by sequencing its DNA and creating a CUSTOM cure. the tech behind this is fucking awesome (well done @demishassabis and the google team):
- used CHATGPT to analyze the dog's DNA sequence and discover mutations
- ran the mutations through Google's AlphaFold (AI protein-structure model), which helped CREATE A CUSTOM VACCINE TO TREAT THEM
- treated the dog and reduced the tumour by 50% in WEEKS. dog is alive and well.
- this is the 1st time AI has been used to create a custom vaccine for a dog (and it worked)
- dude is now working on similar vaccines for humans using AI!

2026 is definitely the year we see AI change personalised medicine in a HUGE way

so sick

Okay, it seems like OpenAI has fixed the looping problem in 5.4, it's working for me now whereas before it would not follow instructions for things like polling CI. After extensively using it, I can say that it's a really freaking good model, I'm really enjoying it. I had some big refactors to do this week and it crushed them. We're spoiled by the amount of good models that are out right now, no lie.
@amirrahnama_ No, that's my code. But I used LLMs to help expand unit tests and run benchmarks outside the book etc.