Your curated collection of saved posts and media
Everyone's excited about Karpathy's autoresearch that automates the experiment loop. We automated the whole damn thing.

Meet AutoResearchClaw: one message in, full conference paper out. Real experiments. Real citations. Real code. No human in the loop.

One message in → full paper out. Here's what happens in between:

- Raids arXiv & Semantic Scholar, digests 50+ papers in minutes
- Three AI agents FIGHT over the best hypothesis (one swings big, one sanity-checks, one tries to kill every idea)
- Writes experiment code from scratch, adapting to your hardware
- Code crashes at 3am? It reads the stack trace, writes the fix, and keeps going
- Results weak? It pivots to entirely new hypotheses and starts over
- Drafts a full paper with citations, every single one verified against live databases

No babysitting. No Slack messages. No "hey, can you re-run this?"

Karpathy built the experiment loop. We built the whole lab. Chat an idea. Get a paper.

Try it: https://t.co/KLOcnzFYaD

Kudos to the team @JiaqiLiu835914, @richardxp888, @lillianwei423, @StephenQS0710, @Xinyu2ML, @HaoqinT, @zhengop, @cihangxie, @dingmyu, and we are looking for more contributors.
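To make the control flow in that pipeline concrete, here's a toy, runnable Python sketch of a search → debate → experiment → self-repair → pivot → draft loop. Every function here is a stub I invented for illustration; this is NOT AutoResearchClaw's actual code.

```python
import random

def search_literature(idea):            # stands in for arXiv/Semantic Scholar
    return [f"paper-{i} on {idea}" for i in range(50)]

def debate(idea, papers):               # one bold proposer, one sanity-checker,
    candidates = [f"{idea} variant {i}" for i in range(3)]   # one falsifier
    return max(candidates, key=lambda h: random.random())

def run_experiment(hypothesis):
    # Simulate the crashes and weak results the real system must handle.
    roll = random.random()
    if roll < 0.2:
        raise RuntimeError("simulated crash")
    return roll                          # pretend this is an effect size

def research_loop(idea, max_pivots=5):
    papers = search_literature(idea)
    for _ in range(max_pivots):
        hypothesis = debate(idea, papers)
        for attempt in range(3):         # self-repair loop: retry on crash
            try:
                score = run_experiment(hypothesis)
                break
            except RuntimeError:
                continue                 # "reads the stack trace, writes the fix"
        else:
            continue                     # unfixable: pivot to a new hypothesis
        if score > 0.7:                  # strong result: draft the paper
            return f"Draft: {hypothesis} (score={score:.2f}, {len(papers)} refs)"
    return "No publishable result; all hypotheses pivoted out."

print(research_loop("curvature regularization for world models"))
```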

This is a very cool experiment, but we need to get AIs to do good science. The modern scientific method & Mertonian norms are critical for a reason, and a failure to follow them has led to many of our current scientific crises. We don't want p-hacking at scale https://t.co/YEqzVDmTpH
Also:

*EXPOSURE DOES NOT MEAN THREAT OF DISPLACEMENT*
*EXPOSURE DOES NOT MEAN THREAT OF DISPLACEMENT*
*EXPOSURE DOES NOT MEAN THREAT OF DISPLACEMENT*

It can literally mean the opposite: AI-exposed jobs may see increased hiring and attract higher wages. It all depends on (a) the elasticity of consumer demand and (b) the number of AI-exposed tasks in a job.
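A toy back-of-the-envelope version of that dependence, with all numbers invented for illustration:

```python
# Invented numbers to illustrate the elasticity argument above: automating
# some tasks cuts unit cost, the price falls, and if demand is elastic
# enough, output (and hiring for the remaining human tasks) can rise.

ai_task_share = 0.30     # fraction of a job's tasks automated
price_cut = 0.20         # pass-through of cost savings to price
elasticity = -2.0        # elastic demand: quantity is price-sensitive

demand_change = -elasticity * price_cut          # +40% quantity demanded
human_labor_per_unit = 1 - ai_task_share         # 70% of tasks stay human
hiring_change = (1 + demand_change) * human_labor_per_unit - 1

print(f"Change in demand for human labor: {hiring_change:+.0%}")
# 1.4 * 0.7 - 1 = -2%: roughly flat here; with elasticity -3 it's +12%.
```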
@eigenron It has the same problems RL ran into, maybe worse. Collapsing branches to win benchmarks doesn't improve real capability. It mostly compresses output variance; by shifting the weights, it distorts the latent space and hurts performance elsewhere. Careful what you optimize for. Benchmaxxing isn't the path forward.
Today I'm officially getting ready to switch from Claude Code to Codex.

When I was on Claude Code, I had no official Anthropic API access, so I kept switching between APIs like Minimax and Kimi.

Lately, @OpenAIDevs' determination and follow-through on Codex has visibly been getting more and more solid: heavyweights who are very active in open source and AI education, like OpenClaw founder @steipete and Instructor author @jxnlco, have joined Codex, and there's also @thsottiaux, who resets the usage limits from time to time.

For a start, I've subscribed to Plus and will use it as my main AI. I'm not very familiar with the Codex commands yet, so first I'm making a cheatsheet for friends who are just getting to know Codex, myself included.
Top AI papers on @huggingface this week: language feedback for RL, training agents by talking, and fixing LLM story consistency.

- Bootstrapping Exploration with Group-Level Natural Language Feedback in Reinforcement Learning
- Geometry-Guided Reinforcement Learning for Multi-view Consistent 3D Scene Editing
- Penguin-VL by Tencent: Exploring the Efficiency Limits of VLM with LLM-based Vision Encoders
- OpenClaw-RL: Train Any Agent Simply by Talking
- Lost in Stories: Consistency Bugs in Long Story Generation by LLMs
- Holi-Spatial: Evolving Video Streams into Holistic 3D Spatial Intelligence
- Spatial-TTT: Streaming Visual-based Spatial Intelligence with Test-Time Training
- Flash-KMeans: Fast and Memory-Efficient Exact K-Means
- Thinking to Recall: How Reasoning Unlocks Parametric Knowledge in LLMs
- LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory
Latent world models learn differentiable dynamics in a learned representation space, which should make planning as simple as gradient descent. But it almost never works.

What I mean is: at test time, you can treat the action sequence as learnable parameters, roll out the frozen world model, measure how far the predicted final state is from the goal, and backprop through the entire unrolled chain to optimize the actions directly. Yet many of the systems that work (Dreamer, TD-MPC2, DINO-WM) abandon this and fall back to sampling-based search instead.

That's why I really like this new paper by @yingwww_, @ylecun, and @mengyer, which gives a clean diagnosis of why, and a principled fix.

The reason everyone abandons gradient descent on actions is that the planning objective is highly non-convex in the learned latent space. So instead, most systems use CEM (cross-entropy method) or MPPI (model predictive path integral), both derivative-free. CEM samples batches of action sequences, evaluates them by rolling out the world model, keeps the top-k, and refits the sampling distribution. MPPI does something similar but weights trajectories by exponentiated negative cost instead of hard elite selection. These work when gradients are unreliable, but the compute cost is substantial: hundreds of candidate rollouts per planning step versus a single forward-backward pass.

This paper asks what exactly makes the latent planning landscape so hostile to gradients, and what you can do about it.

The diagnosis. Their baseline is DINO-WM, a JEPA-style world model with a ViT predictor planning in frozen DINOv2 feature space, minimizing terminal MSE between predicted and goal embeddings. The problem is that DINOv2 latent trajectories are highly curved. When you use MSE as the planning cost, you're implicitly assuming Euclidean distance approximates geodesic distance along feasible transitions. For curved trajectories this breaks badly: gradient-based planners get trapped, and straight-line distances in embedding space misrepresent actual reachability.

The fix draws from the perceptual straightening hypothesis in neuroscience: the idea that biological visual systems transform complex video into internally straighter representations. So they add a curvature regularizer during world model training. Given consecutive encoded states z_t, z_{t+1}, z_{t+2}, define velocity vectors v_t = z_{t+1} - z_t, measure how aligned consecutive velocities are via cosine similarity, and minimize L_curv = 1 - cos(v_t, v_{t+1}). The total loss is then L_pred + λ * L_curv, with stop-gradient on the target branch to prevent collapse.

The theory backs this up cleanly: they prove that reducing curvature directly bounds how well-conditioned the planning optimization is, so straighter latent trajectories guarantee faster convergence of gradient descent over longer horizons.

Worth noting that even without the curvature loss, training the encoder with a prediction objective alone produces some "implicit straightening": the JEPA loss naturally favors representations whose temporal evolution is predictable. Explicit regularization simply pushes this much further.

Empirical results across four 2D goal-reaching environments are consistently strong. Open-loop success improves by 20-50%, and gradient descent with straightening matches or beats CEM at a fraction of the compute.
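For concreteness, here's a minimal PyTorch sketch of that curvature term as I read the description; the tensor shapes and the λ value are my assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def curvature_loss(z: torch.Tensor) -> torch.Tensor:
    """z: (batch, T, d) encoded latent trajectory, T >= 3.

    Penalizes bending between consecutive velocity vectors:
    L_curv = mean over t of 1 - cos(v_t, v_{t+1}), with v_t = z_{t+1} - z_t.
    """
    v = z[:, 1:] - z[:, :-1]                                 # (batch, T-1, d)
    cos = F.cosine_similarity(v[:, :-1], v[:, 1:], dim=-1)  # (batch, T-2)
    return (1.0 - cos).mean()

def total_loss(z_pred, z_target, z_traj, lam=0.1):
    # Stop-gradient on the target branch, as described above;
    # lam is a hyperparameter I made up for the sketch.
    l_pred = F.mse_loss(z_pred, z_target.detach())
    return l_pred + lam * curvature_loss(z_traj)
```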
The most convincing evidence is the distance heatmaps: after straightening, latent Euclidean distance closely matches the shortest distance between states, even though the model was trained only on suboptimal random trajectories.

What I find interesting beyond the specific method is that the planning algorithm didn't change. The dynamics model didn't change. A single regularization term on the embedding geometry turned gradient descent from unreliable to competitive with sampling methods.

The field has largely treated representation learning and planning as separate concerns: learn good features, then figure out how to plan in them. This paper makes a concrete case that the representation geometry is itself the bottleneck.

This connects to a broader pattern in ML. When optimization fails, the instinct is to fix the optimizer (better search, more samples, adaptive schedules). But often the real lever is the shape of the space you're optimizing in. The same principle shows up in RL post-training, where reward landscape shaping matters as much as the algorithm itself. Shape the space so simple optimization works, rather than building complex optimization to handle a bad space.

Their paper: https://t.co/NLPGxqbP2x
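To make the test-time procedure from the top of the post concrete, here's a hedged sketch of planning by gradient descent through a frozen world model. `world_model`, the shapes, and the hyperparameters are my stand-ins, not the paper's API.

```python
import torch

def plan(world_model, z0, z_goal, horizon=20, steps=100, lr=0.1, action_dim=2):
    # Treat the action sequence itself as the learnable parameters.
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        z = z0
        for t in range(horizon):                 # unroll the frozen dynamics
            z = world_model(z, actions[t])
        loss = ((z - z_goal) ** 2).mean()        # terminal MSE in latent space
        opt.zero_grad()
        loss.backward()                          # backprop through the unroll
        opt.step()
    return actions.detach()
```

This is exactly the procedure that fails on curved latent spaces and that the straightening regularizer is meant to rescue.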
@zhuokaiz Nice summary
The Top AI Papers of the Week (March 9 - March 15)

- KARL
- OpenDev
- SkillNet
- Memex(RL)
- AutoHarness
- FlashAttention-4
- The Spike, the Sparse, and the Sink

Read on for more:
https://t.co/0lQ8NXM4M3
I've been building panlabel, a fast Rust CLI that converts between dataset annotation formats, and I'm a few releases behind on sharing updates. Here's a quick catch-up.

v0.3.0 added Hugging Face ImageFolder support, including remote Hub import via --hf-repo. You can point it at a HF dataset repo and it figures out the layout (metadata.jsonl, parquet shards, even zip-style splits that contain YOLO or COCO inside).

v0.4.0 overhauled auto-detection so it gives you concrete evidence when format detection is ambiguous ("found YOLO labels/ but missing images/") instead of a generic error. Also added Docker images.

v0.5.0 brought split-aware YOLO reading for Roboflow/Ultralytics Hub exports, plus conversion report explainability: every adapter now explains its deterministic policies, so you know exactly what happens to your data.

v0.6.0 is the big one. Five new format adapters:

- LabelMe JSON (per-image, with polygon-to-bbox envelope)
- Apple CreateML JSON (center-based coords)
- KITTI (autonomous driving standard, 15 fields per line)
- VGG Image Annotator (VIA) JSON
- RetinaNet Keras CSV

That brings panlabel to 13 supported formats with full read, write, and auto-detection. Also in v0.6.0: YOLO confidence token support, dry-run mode for previewing conversions, and content-based CSV detection.

Single binary, no Python dependencies. Install via pip, brew, cargo, or grab a pre-built binary from GitHub releases.

This is the kind of project I enjoy just steadily plodding away at, ticking off one format at a time until every common object detection annotation format is covered. Still sticking with detection bboxes for now, but the format list keeps growing.

#ObjectDetection #Rust #MachineLearning #ComputerVision #OpenSource
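Not panlabel's code, but to make concrete what "converting between annotation formats" means in practice, here's the classic YOLO-to-COCO bbox coordinate change in plain Python, following the public conventions of both formats:

```python
# YOLO stores boxes as normalized (cx, cy, w, h); COCO stores absolute
# pixel-space (x_min, y_min, w, h). A converter has to apply this kind of
# deterministic coordinate policy for every adapter pair.

def yolo_to_coco(cx: float, cy: float, w: float, h: float,
                 img_w: int, img_h: int) -> list[float]:
    """Convert one normalized YOLO box to a COCO pixel-space box."""
    abs_w, abs_h = w * img_w, h * img_h
    x_min = cx * img_w - abs_w / 2
    y_min = cy * img_h - abs_h / 2
    return [x_min, y_min, abs_w, abs_h]

# YOLO line "0 0.5 0.5 0.25 0.4" on a 640x480 image:
print(yolo_to_coco(0.5, 0.5, 0.25, 0.4, 640, 480))  # [240.0, 144.0, 160.0, 192.0]
```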
I (finally) put together a new LLM Architecture Gallery that collects the architecture figures all in one place! https://t.co/NO7z6XSRHS https://t.co/X41FrK4i94
@theo @DavidOndrej1 I have one good use case to save on token burn: use Gemini 3.1 flash for mock & seed data.
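A quick sketch of that tip: route cheap bulk generation to a small, fast model instead of your main agent. This uses the google-generativeai SDK (real API); the model id is taken from the post and may not match an actual released name.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-3.1-flash")  # assumed id, per the post

resp = model.generate_content(
    "Generate 20 rows of realistic seed data for a `users` table as JSON: "
    "id, name, email, signup_date (ISO 8601). Output only the JSON array."
)
print(resp.text)  # paste into a fixtures file or pipe into your seed script
```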
// Continual Learning from Experience and Skills //

Skills are so good when you combine them properly with MCP & CLIs. I have found that Skills can significantly improve the tool usage of my coding agents. The best way to improve them is to regularly document improvements, patterns, and things to avoid. Self-improving skills don't work that well (yet).

Check out this related paper on the topic. It introduces XSkill, a dual-stream continual learning framework. Agents distill two types of reusable knowledge from past trajectories: experiences for action-level tool selection, and skills for task-level planning and workflows. Both are grounded in visual observations.

During accumulation, agents compare successful and failed rollouts via cross-rollout critique to extract high-quality knowledge. During inference, they retrieve and adapt relevant experiences and skills to the current visual context.

Evaluated across five benchmarks with four backbone models, XSkill consistently outperforms baselines. On Gemini-3-Flash, the average success rate jumps from 33.6% to 40.3%, and skills reduce overall tool errors from 29.9% to 16.3%.

Agents that accumulate and reuse knowledge from their own trajectories get better over time, without parameter updates. I have now seen two papers this week with similar ideas.

Paper: https://t.co/YXrHcJ6Zim

Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
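A loose, runnable sketch of the dual-stream accumulate-then-retrieve idea as I read the abstract; every name here is hypothetical, and a real system would use an LLM critic and visual grounding where I use toy string matching.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    experiences: list[str] = field(default_factory=list)  # tool-selection tips
    skills: list[str] = field(default_factory=list)       # workflow recipes

    def accumulate(self, success_rollout: str, failure_rollout: str) -> None:
        # Cross-rollout critique: contrast a success with a failure and keep
        # only the lesson. Stubbed here; the paper uses agent trajectories.
        self.experiences.append(
            f"prefer steps in {success_rollout!r} over {failure_rollout!r}")
        self.skills.append(f"plan template distilled from {success_rollout!r}")

    def retrieve(self, context: str, k: int = 3) -> list[str]:
        # Toy relevance via word overlap; the paper grounds retrieval in
        # visual observations instead.
        pool = self.experiences + self.skills
        scored = sorted(pool, key=lambda x: -sum(w in x for w in context.split()))
        return scored[:k]

store = KnowledgeStore()
store.accumulate("open file -> edit -> run tests", "edit -> run tests (file missing)")
print(store.retrieve("edit config file and run tests"))
```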
in the next claw release (~Sunday), you can always ask your agents, even while they are busy working. https://t.co/TVX9o6ciKo