Your curated collection of saved posts and media
"Starlink internet is what's being used to pay for humanity getting to Mars. So I'd like to thank everyone out there who bought Starlink, because you're helping secure the future of civilization and helping make life multiplanetary." δΈ Elon Musk https://t.co/DBK735CsVH
SANCTUARY CRISIS: Sheriff Stacey A. Kincaid of Fairfax County ignored an ICE detainer, released a criminal illegal alien, and a Virginia resident was murdered hours later in Reston. Sanctuary policies have real victims. Marvin Morales-Ortez, 23, an illegal alien from El Salvador, is charged with killing a man in Reston after Fairfax County released him from jail. ICE says its detainer request was ignored. Morales-Ortez had prior charges for brandishing a firearm and assault and a rap sheet with at least seven cases since 2020. Police even sought an emergency custody order over safety concerns but failed to locate him before the killing.
In 2016, Zohran Mamdani's director of appointments wrote, "It's important that white people feel defeated." https://t.co/uVqwNWmhSY
DEI: Minnesota cannot say how much SNAP fraud exists after a 174% surge in benefits. Auditors say eligibility is not checked and access is not controlled. A state audit revealed Minnesota's SNAP agency fails basic safeguards. Commissioner Tikki Brown's agency does not properly determine eligibility, does not follow federal rules, and does not control access to the SNAP payment system. Despite a 174% surge in benefits from 2020 to 2021, Brown falsely claimed fraud was nearly nonexistent using data she later conceded was incorrect. Leadership works remotely half the time while oversight collapses. Taxpayers are funding a system with no guardrails.
Your company lost 25% of its customers. But the team still grew 19%. The board would fire you. Immediately. But in LA public schools, this is absolutely normal. Why is this not 'front page' news? https://t.co/NQL2jZHpxh
NEWS: Today, Trump signed an executive order committing the United States to return to the Moon by 2028, build a lunar outpost by 2030 and prepare for the journey to Mars. Everything in the Executive Order: β’ Return Americans to the Moon by 2028 β’ Begin building a permanent lunar outpost by 2030 β’ Make U.S. space superiority a core national priority β’ Expand commercial launch, lower costs, increase cadence β’ Develop next-gen space-based missile defense by 2028 β’ Detect and counter threats in LEO and cislunar space β’ Rapidly modernize national security space architecture β’ Deepen allied cooperation in space security β’ Grow the U.S. commercial space economy β’ Target $50B+ in new space investment by 2028 β’ Support a commercial successor to the ISS by 2030 β’ Enable space nuclear power for lunar and orbital missions β’ Improve space weather forecasting β’ Lead on space traffic management & debris mitigation

A Message from AI Research Leaders: Join Us in Supporting OpenReview https://t.co/U1Co2d59do https://t.co/R2QOVVqJfn

I am actually kind of surprised that OpenAI doesn't view this as an upsell opportunity. https://t.co/dXZOWphuUi

The original viral AI image modification was the controversial "Ghiblification" The natural successor: "De-Ghibli this image. Remove all the Ghibli and replace it with the opposite of Ghibli. Deeply De-Ghiblify this, root and branch, heart & soul." Nano Banana vs. GPT Image 1.5 https://t.co/oQlXslPk2r

The original image and final Nano Banana and GPT versions. https://t.co/AA9zrwHLNJ

Check out Nano Banana Pro in action, right in Google Search. After seeing folks visualize coordinates, we tried the prompt "Visualize 40.7422° N, 73.9880° W in 1916" and reran it several times, adding a decade with each go. Here's a GIF compilation: https://t.co/LNZrLOiwof
A lot of people underestimate AI due to the confluence of 4 OpenAI choices: 1) GPT-5.x instant is not a very smart model 2) Most users are free users & the ChatGPT router sends them to instant often 3) The router calls everything GPT-5.2 4) Most people don't know Reasoners exist https://t.co/NeNwHlSkeh

vLLM delivers even more inference performance with the same GPU platform. In just 1 month, we've worked with NVIDIA to increase @nvidia Blackwell maximum throughput per GPU by up to 33% -- significantly reducing cost per token -- while also enabling even higher peak speed for the most latency-sensitive use cases powered by deep PyTorch integration and collaboration.
The YouWare Hackathon is NOW LIVE! $10K+ in prizes. Build with YouBase. Ship something real. Powered by @contra. Enter now at the link in threads! https://t.co/xdVEKsmS5r
Google's not done: T5Gemma 2 https://t.co/lTXjaSTO7Q
Introducing T5Gemma 2, the next generation of encoder-decoder models, built on the powerful capabilities of Gemma 3. Key innovations and upgraded capabilities include: + Multimodality + Extended long context + Support of 140+ languages out of the box + Architectural improvements

The Reachy mini desktop app is so aesthetically pleasing. Huge fan. https://t.co/2o5YzFqIsI
New course: Nvidia's NeMo Agent Toolkit: Making Agents Reliable, taught by @Pr_Brian from @NVIDIA. Many teams struggle to turn agent demos into reliable systems that are ready for production. This short course teaches you to harden agentic workflows into reliable systems using Nvidia's open-source NeMo Agent Toolkit (NAT). Whether you built your agent in raw Python or using a framework like LangGraph or CrewAI, NAT provides building blocks for observability, evaluation, and deployment that turn proofs-of-concept into production-ready systems. NAT makes it easy to troubleshoot and optimize agent performance with execution traces, systematic evaluations, and CI/CD integration. Skills you'll gain: - Build configuration-driven agent workflows with REST APIs and minimal code - Add observability with tracing to visualize agent reasoning and debug performance bottlenecks - Create systematic evaluations using gold-standard datasets to measure and improve agent reliability - Deploy multi-agent systems with authentication, rate limiting, and professional web interfaces - Orchestrate agents from different frameworks to collaborate on complex tasks Join and learn how to turn agent demos into reliable systems! https://t.co/9rBcBteq4b
SonicMoE: a blazingly-fast MoE implementation optimized for NVIDIA Hopper GPUs. SonicMoE reduces activation memory by 45% and is 1.86x faster on H100 than previous SOTA. Paper: https://t.co/Xesd3cNcpQ Work with @MayankMish98, @XinleC295, @istoica05, @tri_dao https://t.co/B83toUk27G
replying with this when my boss texts me that i'm an hour and a half late for work https://t.co/c5XNrAlg2x

Look what we're missing tonight at our hotel ~~ https://t.co/vRb0qUtr7C

We now support Agent Skills - the open standard created by @AnthropicAI for extending AI agents with specialized capabilities. Create skills once, use them everywhere. https://t.co/4GomgRJ21O https://t.co/MHA4SBzVNN
Tomorrow morning (8AM Pacific on Dec. 19th) join @digitarald, @timrogers, and me as we take a look at #AgentSkills in @code and #GitHubCopilot Cloud Agent and more! https://t.co/4DhMVQx7X0 #vscode
Yann LeCun says reaching human-level AI won't be a sudden event but a gradual evolution: "optimistically, we might reach human-level, or at least dog-level, intelligence within 5 to 10 years." However, if we run into unexpected obstacles, it may take 20 years or more.
Yann LeCun (@ylecun) beautifully explains why the architecture and principles used to train LLMs cannot be extended to teach AI real-world intelligence. In 1 line: LLMs excel where intelligence equals sequence prediction over symbols. Real-world intelligence requires learned world models, abstraction, causality, and action planning under uncertainty, which current next-token training does not provide. He says current LLMs learn by predicting the next token. That objective works very well when the task itself can be reduced to manipulating discrete symbols and sequences. Math, physics problem solving on paper, and coding fit this pattern because success largely comes from searching and composing the right sequences of symbols, equations, or program tokens. With enough data and scale, these models get very good at that kind of structured sequence prediction. Real-world intelligence is different. The physical world is continuous, noisy, uncertain, and high dimensional. To act in it, a system needs internal models that capture objects, dynamics, causality, constraints from the body, and the outcomes of actions over time. Humans and animals build abstract representations from rich sensory streams, then make predictions in that abstract space, not at the raw pixel level. That is why a child can learn intuitive physics, plan multi-step actions, and adapt quickly in new situations with little data. His claim about saturation follows from this gap. Scaling token prediction keeps improving symbol manipulation tasks like math and code, but it hits limits on embodied reasoning and common sense because text alone does not provide the right learning signals for world models. Predicting the next word cannot efficiently teach contact forces, affordances, occlusion, friction, or how actions change the state of the environment.
For that, he argues we need architectures that learn abstractions from sensory data and predict futures in abstract latent spaces, then use those predictions to plan actions toward goals with built-in guardrails. --- From 'Pioneer Works' YT Channel (link in comment)
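The recipe he describes, encoding observations into a compact latent space, predicting future latents rather than raw pixels, and planning actions against a goal, can be sketched in a few lines. Everything below is an illustrative assumption: the linear encoder, the toy latent dynamics, and the random-shooting planner are stand-ins, not LeCun's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: observations are high-dimensional, the latent space is small.
OBS_DIM, LATENT_DIM, ACT_DIM = 64, 4, 2

# Hypothetical "learned" components, stubbed here as fixed random matrices.
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)      # encoder
W_z = rng.normal(size=(LATENT_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)  # latent dynamics
W_a = rng.normal(size=(LATENT_DIM, ACT_DIM))                           # action effect

def encode(obs):
    """Abstract away raw sensory detail: observation -> latent state."""
    return W_enc @ obs

def predict(z, a):
    """Predict the NEXT latent state, not the next raw observation."""
    return np.tanh(W_z @ z + W_a @ a)

def plan(z0, z_goal, horizon=5, n_candidates=256):
    """Random-shooting planner: sample action sequences, roll each one out
    in latent space, keep the one whose final latent lands nearest the goal."""
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        seq = rng.normal(size=(horizon, ACT_DIM))
        z = z0
        for a in seq:
            z = predict(z, a)
        cost = np.linalg.norm(z - z_goal)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

obs_now, obs_goal = rng.normal(size=OBS_DIM), rng.normal(size=OBS_DIM)
actions, cost = plan(encode(obs_now), encode(obs_goal))
print(actions.shape)  # one candidate action sequence: (horizon, ACT_DIM)
```

The point of the sketch is the division of labor: prediction happens entirely in the abstract space, and planning is search over actions scored by predicted outcomes, which is exactly what next-token training never has to do.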
Yann LeCun's new interview explains why LLMs are so limited in terms of real-world intelligence. He says the biggest LLM is trained on about 30 trillion words, which is roughly 10 to the power 14 bytes of text. That sounds huge, but a 4-year-old who has been awake about 16,000 h
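The comparison behind this argument is back-of-the-envelope arithmetic, and it checks out under rough assumptions. The ~4 bytes per word and the ~2 MB/s visual bandwidth figure are my placeholder estimates for the sketch, not numbers from the interview.

```python
# Back-of-the-envelope check of the data-volume comparison.
# Assumptions: ~4 bytes per word of text; visual input on the order of
# 2 MB/s (a rough estimate, not a measurement).

words = 30e12                  # ~30 trillion words of LLM training text
bytes_per_word = 4             # rough average (assumption)
text_bytes = words * bytes_per_word
print(f"text corpus  ~ {text_bytes:.1e} bytes")    # ~1.2e14, i.e. ~10^14

awake_seconds = 16_000 * 3600  # ~16,000 waking hours by age 4
visual_rate = 2e6              # ~2 MB/s through vision (assumption)
visual_bytes = awake_seconds * visual_rate
print(f"visual input ~ {visual_bytes:.1e} bytes")  # ~1.2e14, same order
```

Under these assumptions, a 4-year-old's visual stream alone is on the same order as the entire text corpus of the biggest LLM, which is the crux of the claim that text is a comparatively thin learning signal.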
"Elon Musk and Yann LeCun playing marbles" made by @grok Yann doesn't really look like Yann https://t.co/AuAuN285OA
Introducing DexWM: Dexterous Manipulation World Model tl;dr: train on human videos; fine-grained actions; hand-consistency loss; DexWM+MPC -> zero-shot dexterous manipulation w/ @_amirbar, @DavidJFan, @JimmyTYYang1, @GaoyueZhou, P. Krishnamurthy, M. Rabbat, F. Khorrami, @ylecun https://t.co/cn2D0mvJyg