Your curated collection of saved posts and media

Showing 14 posts · last 7 days · quality filtered
karpathy (@karpathy) · 📅 Mar 15, 2026 · 1h ago · 🆔 16992999 · score 0.40

@Ignaci0m_ @_kaitodev This was a Saturday morning two-hour vibe-coded project inspired by a book I’m reading. I thought the code/data might be helpful to others to explore the BLS dataset visually, or color it in different ways or with different prompts, or add their own visualizations. It’s been wildly misinterpreted (which I should have anticipated, even despite the README docs), so I took it down.

Mid0 (@Mid0) · 📅 Mar 15, 2026 · 2h ago · 🆔 10757761 · score 0.38

@Bhavani_00007 Trick question: there is no Windsurf. I vote for Augment + Codex + Claude Code. I rarely use Gemini CLI, but I like it for research and test/mock data creation. All three. VS Code just for reading. Previously it supported other

omarsar0 (@omarsar0) · 📅 Mar 15, 2026 · 2h ago · 🆔 66759535
15K stars already!? Great idea. CLIs work amazingly well with coding agents. Worth playing around with. Do run a lot of tests if you are planning to use this to build tools. https://t.co/Aigh3uAI5Y

🖼️ 1 media attachment
🔁 dair_ai retweeted · ❤️8 likes · 🔁2 retweets
HuaxiuYaoML (@HuaxiuYaoML) · 📅 Mar 15, 2026 · 13h ago · 🆔 53405308

Everyone's excited about Karpathy's autoresearch that automates the experiment loop. We automated the whole damn thing. 🦞

Meet AutoResearchClaw: one message in, full conference paper out. Real experiments. Real citations. Real code. No human in the loop. One message in → full paper out. Here's what happens in between:

📚 Raids arXiv & Semantic Scholar, digests 50+ papers in minutes
🥊 Three AI agents FIGHT over the best hypothesis (one swings big, one sanity-checks, one tries to kill every idea)
💻 Writes experiment code from scratch, adapts to your hardware
💥 Code crashes at 3am? It reads the stack trace, rewrites the fix, keeps going
🔄 Results weak? It pivots to entirely new hypotheses and starts over
📝 Drafts a full paper with citations, every single one verified against live databases

No babysitting. No Slack messages. No "hey can you re-run this." Karpathy built the experiment loop. We built the whole lab. Chat an idea. Get a paper. 🦞

Try it 👉: https://t.co/KLOcnzFYaD

Kudos to the team @JiaqiLiu835914, @richardxp888, @lillianwei423, @StephenQS0710, @Xinyu2ML, @HaoqinT, @zhengop, @cihangxie, @dingmyu, and we are looking for more contributors.

🖼️ 2 media attachments
gerardsans (@gerardsans) · 📅 Mar 15, 2026 · 3h ago · 🆔 52086940 · score 0.36

@eigenron It has the same problems RL ran into, maybe worse. Collapsing branches to win benchmarks doesn’t improve real capability. It mostly compresses output variance; by shifting the weights, it distorts the latent space and hurts performance elsewhere. Careful what you optimize for. Benchmaxxing isn’t the path forward.

shao__meng (@shao__meng) · 📅 Mar 15, 2026 · 5h ago · 🆔 73171514

Officially getting ready to switch from Claude Code to Codex today. Back when I was using Claude Code, since there was no official Anthropic API, I kept alternating between APIs like MiniMax and Kimi. Lately you can plainly see @OpenAIDevs pushing harder and harder on Codex: OpenClaw founder @steipete, Instructor author @jxnlco, and other prolific open-source and AI-education folks have joined Codex, and @thsottiaux occasionally resets the limits 😄 I've subscribed to Plus to use it as my main AI. Since I'm not yet familiar with the Codex commands, I made a cheatsheet for people just getting to know Codex, myself included.

🖼️ 1 media attachment
🔁 jxnlco retweeted · ❤️48 likes · 🔁7 retweets
HuggingPapers (@HuggingPapers) · 📅 Mar 15, 2026 · 4h ago · 🆔 83475344

Top AI papers on @huggingface this week: language feedback for RL, training agents by talking, and fixing LLM story consistency.

- Bootstrapping Exploration with Group-Level Natural Language Feedback in Reinforcement Learning
- Geometry-Guided Reinforcement Learning for Multi-view Consistent 3D Scene Editing
- Penguin-VL by Tencent: Exploring the Efficiency Limits of VLM with LLM-based Vision Encoders
- OpenClaw-RL: Train Any Agent Simply by Talking
- Lost in Stories: Consistency Bugs in Long Story Generation by LLMs
- Holi-Spatial: Evolving Video Streams into Holistic 3D Spatial Intelligence
- Spatial-TTT: Streaming Visual-based Spatial Intelligence with Test-Time Training
- Flash-KMeans: Fast and Memory-Efficient Exact K-Means
- Thinking to Recall: How Reasoning Unlocks Parametric Knowledge in LLMs
- LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory

🖼️ 1 media attachment
🔁 _akhaliq retweeted · ❤️3 likes · 🔁3 retweets
zhuokaiz (@zhuokaiz) · 📅 Mar 15, 2026 · 12h ago · 🆔 07654255 · score 0.46

Latent world models learn differentiable dynamics in a learned representation space, which should make planning as simple as gradient descent. But it almost never works.

What I mean is: at test time you can treat the action sequence as learnable parameters, roll out the frozen world model, measure how far the predicted final state is from the goal, and backprop through the entire unrolled chain to optimize the actions directly. Yet many of the systems that work (Dreamer, TD-MPC2, DINO-WM) abandon this and fall back to sampling-based search instead. That's why I really like this new paper by @yingwww_, @ylecun, and @mengyer, which gives a clean diagnosis of why, and a principled fix.

The reason everyone abandons gradient descent on actions is that the planning objective is highly non-convex in the learned latent space. So most systems instead use CEM (cross-entropy method) or MPPI (model predictive path integral), both derivative-free. CEM samples batches of action sequences, evaluates them by rolling out the world model, keeps the top-k, and refits the sampling distribution. MPPI does something similar but weights trajectories by exponentiated negative cost instead of hard elite selection. These work when gradients are unreliable, but the compute cost is substantial: hundreds of candidate rollouts per planning step versus a single forward-backward pass. This paper asks what exactly makes the latent planning landscape so hostile to gradients, and what you can do about it.

The diagnosis: their baseline is DINO-WM, a JEPA-style world model with a ViT predictor planning in frozen DINOv2 feature space, minimizing terminal MSE between predicted and goal embeddings. The problem is that DINOv2 latent trajectories are highly curved. When you use MSE as the planning cost, you're implicitly assuming Euclidean distance approximates geodesic distance along feasible transitions. For curved trajectories this breaks badly: gradient-based planners get trapped, and straight-line distances in embedding space misrepresent actual reachability.

The fix draws on the perceptual straightening hypothesis from neuroscience, the idea that biological visual systems transform complex video into internally straighter representations. So they add a curvature regularizer during world model training. Given consecutive encoded states z_t, z_{t+1}, z_{t+2}, define velocity vectors v_t = z_{t+1} - z_t, measure curvature as the cosine similarity between consecutive velocities, and minimize L_curv = 1 - cos(v_t, v_{t+1}). The total loss is then L_pred + λ·L_curv, with stop-gradient on the target branch to prevent collapse.

The theory backs this up cleanly: they prove that reducing curvature directly bounds how well-conditioned the planning optimization is, so straighter latent trajectories guarantee faster convergence of gradient descent over longer horizons. Worth noting that even without the curvature loss, training the encoder with a prediction objective alone produces some "implicit straightening"; the JEPA loss naturally favors representations whose temporal evolution is predictable. Explicit regularization simply pushes this much further.

Empirical results across four 2D goal-reaching environments are consistently strong. Open-loop success improves by 20-50%, and gradient descent with straightening matches or beats CEM at a fraction of the compute.
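For concreteness, here is a minimal PyTorch sketch of that regularizer as described above. The encoder, predictor, and weight lam (the λ) are hypothetical stand-ins for a generic JEPA-style setup, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def curvature_loss(z):
    """L_curv = 1 - cos(v_t, v_{t+1}), averaged over a latent trajectory.

    z: (B, T, D) encoded states with T >= 3.
    """
    v = z[:, 1:] - z[:, :-1]                                # velocities v_t = z_{t+1} - z_t
    cos = F.cosine_similarity(v[:, :-1], v[:, 1:], dim=-1)  # cos between consecutive velocities
    return (1.0 - cos).mean()

def training_loss(encoder, predictor, frames, actions, lam=0.1):
    # encoder, predictor, and lam are assumed names for the JEPA-style
    # world model pieces described in the post.
    z = encoder(frames)                      # (B, T, D) latent states
    z_pred = predictor(z[:, :-1], actions)   # predict z_{t+1} from (z_t, a_t)
    # Stop-gradient on the target branch to prevent representational collapse.
    l_pred = F.mse_loss(z_pred, z[:, 1:].detach())
    return l_pred + lam * curvature_loss(z)  # total loss: L_pred + λ·L_curv
```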
The most convincing evidence is the distance heatmaps: after straightening, latent Euclidean distance closely matches the shortest distance between states, even though the model was trained only on suboptimal random trajectories.

What I find interesting beyond the specific method is that the planning algorithm didn't change. The dynamics model didn't change. A single regularization term on the embedding geometry turned gradient descent from unreliable to competitive with sampling methods. The field has largely treated representation learning and planning as separate concerns: learn good features, then figure out how to plan in them. This paper makes a concrete case that the representation geometry is itself the bottleneck.

This connects to a broader pattern in ML. When optimization fails, the instinct is to fix the optimizer (better search, more samples, adaptive schedules). But often the real lever is the shape of the space you're optimizing in. The same principle shows up in RL post-training, where reward landscape shaping matters as much as the algorithm itself. Shape the space so simple optimization works, rather than building complex optimization to handle a bad space.

Their paper: https://t.co/NLPGxqbP2x
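And to make the "simple optimization" concrete, here is a sketch of the gradient-descent planner the thread opens with: treat the action sequence as learnable parameters, unroll a frozen dynamics model, and backprop the terminal MSE. The world_model(z, a) signature and hyperparameters are assumptions for illustration, not the paper's code:

```python
import torch

def plan_by_gradient_descent(world_model, z0, z_goal, horizon=16,
                             action_dim=2, steps=100, lr=0.1):
    """Treat actions as parameters; backprop through the unrolled rollout.

    world_model(z, a) -> next latent state, assumed frozen. One
    forward-backward pass per optimizer step, versus hundreds of
    candidate rollouts per planning step for CEM/MPPI.
    """
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        z = z0
        for t in range(horizon):
            z = world_model(z, actions[t])  # roll out the frozen dynamics
        loss = ((z - z_goal) ** 2).mean()   # terminal MSE to goal embedding
        opt.zero_grad()
        loss.backward()
        opt.step()
    return actions.detach()
```

This loop is exactly what gets trapped when the latent space is curved, and what the curvature regularizer above makes viable again.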

🔁 ylecun retweeted · ❤️132 likes · 🔁15 retweets
ylecun (@ylecun) · 📅 Mar 15, 2026 · 4h ago · 🆔 45027353 · score 0.32

@zhuokaiz Nice summary 😊

🔁 omarsar0 retweeted
DAIR.AI (@dair_ai) · 📅 Mar 15, 2026 · 4h ago · 🆔 04105379 · score 0.38

The Top AI Papers of the Week (March 9 - March 15):

- KARL
- OpenDev
- SkillNet
- Memex(RL)
- AutoHarness
- FlashAttention-4
- The Spike, the Sparse, and the Sink

Read on for more:

❤️7 likes · 🔁1 retweet
Page 108 of 211