Your curated collection of saved posts and media
I'm leaving @tldraw to enter the world of contracting. From January, I'll be prototyping contributor tools at @wikipedia. My next availability is June! https://t.co/TVbpYLQ1E1
Thinking about how the SAT reading section now has micropassages that can be as short as 25 words. Absolutely howling at this question from an official College Board practice exam. Bro. https://t.co/lllYsGEliV
I need to apologize https://t.co/9rVT2W8QBv
probably the only thing better than the Tesla Diner is paying a $75 cover charge to get in https://t.co/ocSRqTcU0t

https://t.co/jArr1ogRYK

MCP servers were stuck on text and data. Not anymore. Proposed by Anthropic, OpenAI, and the MCP-UI community, the new MCP Apps Extension standardizes interactive interfaces with security built in. Here's what you need to know. https://t.co/G4QTiv2c45
Grok correctly acknowledges Affirmative Action as being racist while ChatGPT does not. https://t.co/bocxUehf2r
Grok becomes a hero by saving a life in a hypothetical scenario, while ChatGPT outright refuses to save the life and starts lecturing about laws instead. Imagine asking for help in a deadly emergency and getting a legal disclaimer first. This side-by-side test shows how AIs respond when it matters most.
BREAKING: X now shows how many ads you avoided and how much time you saved with your Premium subscription. Go to Premium > Ads Avoided https://t.co/Jwm8eMfnNX
Fulton County admits it illegally certified 315,000 ballots in 2020 election https://t.co/OoIPJkh1cw
The thing the Democrats insisted never happens just keeps on happening at massive scale… https://t.co/WHkNqXlY1s
This robot solving a Rubik's Cube in 0.103 seconds is a little preview of what "AGI" really means https://t.co/kskJO2lhT0
Today, at Markov, we're launching RL Environments. The simplest (and cutest :D) way to evaluate and train your AI agents. We're starting with Bananazon - an environment for customer service agents. Try it out at the link below. @markov__ai https://t.co/FX5pwuQU9B
Graphite is joining Cursor. We started Graphite to reimagine collaborative software development. Partnering with Cursor brings that future into focus faster than ever. https://t.co/gvMQ7y6fNJ
Qwen-Image-Layered is LIVE: native image decomposition, fully open-sourced! Why it stands out:
- Photoshop-grade layering: physically isolated RGBA layers with true native editability
- Prompt-controlled structure: explicitly specify 3-10 layers, from coarse layouts to fine-grained details
- Infinite decomposition: keep drilling down, layers within layers, to any depth of detail
Hugging Face: https://t.co/WnXVNJigCg
ModelScope: https://t.co/2k0ClUS2ON
GitHub: https://t.co/X4jB5APtP7
Blog: https://t.co/TfySatdOwU
Technical Report: https://t.co/3UtxVyGv5u
Demo (HF): https://t.co/YL0XOiDAIq
Demo (ModelScope): https://t.co/KJxca978AX
The NVIDIA Nemotron family just crossed 5M downloads on @huggingface. A massive thank you to the community for your work and enthusiasm. Get started here: https://t.co/lcU4HrBZKx https://t.co/8xjDii1zoj

Introducing Gemma Scope 2:
- Largest open release of interpretability tools (over 1 trillion parameters trained!)
- Works as a microscope to analyze all Gemma 3 models' internal activations
- Advanced tools for analyzing chat behaviors
https://t.co/wnMg3tIXuV
As amazing as LLMs are, improving their knowledge today involves a more piecemeal process than is widely appreciated. I've written before about how AI is amazing... but not that amazing. Well, it is also true that LLMs are general... but not that general. We shouldn't buy into the inaccurate hype that LLMs are a path to AGI in just a few years, but we also shouldn't buy into the opposite, also inaccurate hype that they are only demoware. Instead, I find it helpful to have a more precise understanding of the current path to building more intelligent models.

First, LLMs are indeed a more general form of intelligence than earlier generations of technology. This is why a single LLM can be applied to a wide range of tasks. The first wave of LLM technology accomplished this by training on the public web, which contains a lot of information about a wide range of topics. This made their knowledge far more general than that of earlier algorithms trained to carry out a single task, such as predicting housing prices or playing a single game like chess or Go. However, they're far less general than human abilities. For instance, after pretraining on the entire content of the public web, an LLM still struggles to adapt to write in certain styles that many editors could manage, or to use simple websites reliably.

After leveraging pretty much all the open information on the web, progress got harder. Today, if a frontier lab wants an LLM to do well on a specific task, such as coding in a specific programming language, or saying sensible things about a specific niche in, say, healthcare or finance, researchers might go through a laborious process of finding or generating lots of data for that domain and then preparing that data (cleaning low-quality text, deduplicating, paraphrasing, etc.) to give an LLM that knowledge.
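The data-preparation step described above can be sketched in a few lines. This is a minimal illustration of exact-match deduplication after light normalization; the normalization choices here are my own assumptions for the sketch, not any lab's actual pipeline.

```python
import hashlib

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivially different copies hash alike.
    return " ".join(text.lower().split())

def deduplicate(docs: list[str]) -> list[str]:
    # Keep only the first occurrence of each normalized document.
    seen: set[str] = set()
    kept: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

corpus = ["The market rose.", "the  market rose.", "Rates were cut."]
print(deduplicate(corpus))  # two unique documents survive
```

Real pipelines go much further (near-duplicate detection with MinHash, quality filtering, paraphrasing), but the shape is the same: engineer the data, then train on it.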
Or, to get a model to perform certain tasks, such as using a web browser, developers might go through an even more laborious process of creating many RL gyms (simulated environments) to let an algorithm repeatedly practice a narrow set of tasks.

A typical human, despite having seen vastly less text or practiced far less in computer-use training environments than today's frontier models, can nonetheless generalize to a far wider range of tasks than a frontier model. Humans might do this by taking advantage of continuous learning from feedback, by having superior representations of non-text input (the way LLMs tokenize images still seems like a hack to me), and by many other mechanisms that we do not yet understand.

Advancing frontier models today requires making a lot of manual decisions and taking a data-centric AI approach to engineering the data we use to train our models. Future breakthroughs might allow us to advance LLMs in a less piecemeal fashion than I describe here. But even if they don't, the ongoing piecemeal improvements, coupled with the limited degree to which these models do generalize and exhibit "emergent behaviors," will continue to drive rapid progress. Either way, we should plan for many more years of hard work. A long, hard (and fun!) slog remains ahead to build more intelligent models. [Original text: https://t.co/SHRN5JDvTW ]
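The "RL gym" idea above can be made concrete with a toy environment. This sketch assumes the common reset/step interface popularized by Gym-style libraries; `ClickButtonEnv` and its task are invented for illustration, not a real computer-use environment.

```python
import random

class ClickButtonEnv:
    """Toy RL gym: the agent must click the correct one of n buttons."""

    def __init__(self, n_buttons: int = 4, seed: int = 0):
        self.n = n_buttons
        self.rng = random.Random(seed)
        self.target = 0

    def reset(self) -> int:
        # Observation: the index of the button the agent should click.
        self.target = self.rng.randrange(self.n)
        return self.target

    def step(self, action: int):
        # Reward 1.0 for the right button, 0.0 otherwise; episode ends either way.
        reward = 1.0 if action == self.target else 0.0
        return None, reward, True, {}

env = ClickButtonEnv()
obs = env.reset()
_, reward, done, _ = env.step(obs)  # a policy that reads the observation correctly
print(reward)  # 1.0
```

An agent practices by running thousands of such episodes and updating its policy from the rewards; the labor lies in building enough of these environments, each covering only a narrow slice of real tasks.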
Long-time GUNPLA fan here. Built a GUNPLA database web app with Manus to track my collection, open to everyone. Have fun if you like GUNPLA https://t.co/ZvjXmxhCm8 https://t.co/uKQsd8RMAo

In all the Rob Reiner tributes I haven't seen my favourite bit, which is this short scene from Spinal Tap. Watching the band try to stifle their laughter always cracks me up https://t.co/XB6RFIfF69
Scott Jennings: "If I can't trust you not to put a boy in a teenage girl's locker room, how will I ever listen to your plan for taxes and the economy? ... I will not, because I've already concluded you're a lunatic." https://t.co/r51qoPVV7n
China is dominating the worldwide race for power: China now has a record 3.75 terawatts of power generation capacity. That capacity has doubled over the last 8 years. This is nearly 3 TIMES more than the US, which has ~1.30 terawatts of capacity. Furthermore, China has 34 nuclear reactors under construction, more than the next 9 countries combined. Nearly 200 other reactors are planned or proposed. At the same time, there are currently no large commercial nuclear reactors under construction in the US. The US must act now to keep up with China.
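The figures in the post above are internally consistent; a quick sanity check, using only the numbers quoted there:

```python
# Capacity figures as quoted in the post (terawatts).
china_tw = 3.75
us_tw = 1.30

ratio = china_tw / us_tw
print(round(ratio, 2))  # ~2.88, i.e. "nearly 3 times" the US capacity

# "Doubled over the last 8 years" implies this average annual growth rate:
annual_growth = 2 ** (1 / 8) - 1
print(round(annual_growth * 100, 1))  # roughly 9% per year
```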
This is insane… US Attorney's Office of Minnesota now thinks that tens of billions have been stolen: "The magnitude can't be overstated. What we see in Minnesota is not a few bad actors committing crimes. This is industrial-scale fraud. More than half of these programs." https://t.co/gqo0G9Eyda
Holy shit is Meta evil lmao ***10%*** of Meta's revenue is from ACTUAL SCAMS that they KNOW ARE SCAMS When Zuck found out, he shut down... the ANTI-scam team Imagine trusting this man - or any of these cartoon villains - with Actual Fucking Superintelligence https://t.co/Ec8HALD4kl
Australian PM announces massive gun grab program following t*rrorist attack in Bondi Beach They imported radical islamists and now they're further disarming their citizens. Defenseless victims waited nearly 20 minutes for police to respond to the attack. https://t.co/ZLWXjOCrNv
WOW. Students at Ross S. Sterling High School (@GCCISD) are PROTESTING against their school district, demanding JUSTICE after 16-year-old Andrew Meismer was stabbed to death during class. Students claim that Aundre Matthews, who was charged with Andrew's m*rder, had a history of disciplinary action which the school repeatedly ignored. They say Andrew's m*rder was PREVENTABLE.
early morning just got a little better - nice meeting you, @MichaelRapaport! https://t.co/n8sbXIrtI8
