Your curated collection of saved posts and media

Showing 24 posts · last 7 days · quality filtered
jasteinerman (@jasteinerman) · Mar 16, 2026 · 2h ago · ⭐0.32

Love this submission from our world models hackathon this weekend - a generative FPS!

@AnshulDhawan001 • Mon Mar 16 21:08

Spent the weekend hacking at the Worlds in Action hackathon at @fdotinc by @SensAIHackademy. It was so much fun playing with the world models by @theworldlabs . I believe generative games are the future where characters, rules and even parts of the world can be generated and ad

πŸ”Scobleizer retweeted
J
Jake Steinerman πŸ”œ GDC & GTC
@jasteinerman
πŸ“…
Mar 16, 2026
2h ago
πŸ†”75976987
⭐0.34

Love this submission from our world models hackathon this weekend - a generative FPS!

❀️5
likes
πŸ”1
retweets
PyTorch (@PyTorch) · Mar 16, 2026 · 49m ago

#ExecuTorch addresses fragmented native deployment for #AI agents as a #PyTorch native platform. It enables voice models across CPU, GPU, and NPU on Android, iOS, Linux, macOS & Windows 🔗 https://t.co/NeQQyUniL4 https://t.co/O3itnoQFoG

🖼️ Media
πŸ”jxnlco retweeted
edwin (@edwinarbus) · Mar 16, 2026 · 4h ago · ⭐0.34

Matt Maher tested frontier models in Cursor vs. other harnesses. Cursor boosted model performance by 11% on average:
Gemini: 52% → 57%
GPT-5.4: 82% → 88%
Opus: 77% → 93%
His benchmark measures how well models implement a 100-feature PRD. @cursor_ai consistently outperformed. https://t.co/hrjCmWMNKN

❤️176 likes · 🔁17 retweets

_akhaliq (@_akhaliq) · Mar 16, 2026 · 2h ago

Mistral Small 4 is out https://t.co/IdAowSpHpN

🖼️ Media
DeryaTR_ (@DeryaTR_) · Mar 16, 2026 · 20h ago · ⭐0.38

32× efficiency improvement in just the last 3 months, that's the crazy jump from GPT-5.2 to GPT-5.4! 37 cents/task is essentially almost at human-level efficiency (target was 24 cents/task). This was inconceivable a year ago when o3 cost $4500/task on ARC-AGI-1, 12,000x improved!

@PoliticalKiwi • Sun Mar 15 11:24

GPT-5.4 (High) has now cleared 90% on this benchmark at a cost of just $0.37/task So that's a 32x efficiency improvement in the last three months, or 12000x since December 2024
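The quoted ratios hold up to simple arithmetic. A quick sketch (costs are as stated in the posts above, not independently verified):

```python
# Costs as quoted in the posts above (not independently verified).
o3_cost = 4500.00   # $/task, o3 on ARC-AGI-1, Dec 2024
gpt54_cost = 0.37   # $/task, GPT-5.4 (High), Mar 2026

print(round(o3_cost / gpt54_cost))   # 12162 -- consistent with "12,000x"

# A 32x gain over the last three months implies GPT-5.2 cost roughly:
print(round(gpt54_cost * 32, 2))     # 11.84 dollars/task
```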

πŸ”omarsar0 retweeted
elvis (@omarsar0) · Mar 16, 2026 · 10h ago · ⭐0.38

Banger report from the Kimi team: Attention Residuals

Residual connections made deep Transformers trainable. But they also force uncontrolled hidden-state growth with depth. This work proposes a cleaner alternative.

It introduces Attention Residuals, which replace fixed residual accumulation with softmax attention over previous layer outputs. Instead of blindly summing everything, each layer selectively retrieves the earlier representations it actually needs. To keep this practical at scale, they add a blockwise version that compresses layers into block summaries, recovering most of the gains with minimal systems overhead.

Why does it matter? Residual paths have barely changed across modern LLMs, even though they govern how information moves through depth. This paper shows that making the mixing content-dependent improves scaling laws, matches a baseline trained with 1.25x more compute, boosts GPQA-Diamond by +7.5 and HumanEval by +3.1, while keeping inference overhead under 2%.

Paper: https://t.co/04IG6FDiVr
Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX

❤️116 likes · 🔁15 retweets
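The mechanism described in the post can be sketched in a few lines. This is a toy illustration of the idea (softmax attention over earlier layer outputs in place of a fixed residual sum), not the Kimi team's actual implementation; every name and shape here is invented for the sketch:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_residual(history, w_q):
    """Content-dependent mixing of earlier layer outputs.

    history: (L, d) array, one hidden vector per layer so far.
    w_q:     (d, d) toy query projection.
    A plain residual stream would just sum `history`; here the current
    layer attends over it and retrieves what it needs.
    """
    q = history[-1] @ w_q                             # query from current layer
    scores = history @ q / np.sqrt(history.shape[1])  # (L,) one score per layer
    weights = softmax(scores)                         # selective, sums to 1
    return weights @ history                          # (d,) mixed representation

rng = np.random.default_rng(0)
history = rng.normal(size=(5, 16))                    # outputs of 5 earlier layers
out = attention_residual(history, rng.normal(size=(16, 16)))
print(out.shape)  # (16,)
```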
HuggingPapers (@HuggingPapers) · Mar 16, 2026 · 3h ago

OmniForcing unlocks real-time joint audio-visual generation. Achieves ~25 FPS with 0.7s latency—a 35× speedup over offline diffusion models—by distilling bidirectional LTX-2 into a causal streaming generator with maintained multi-modal fidelity. https://t.co/UGYGMyTQOs

🖼️ Media
πŸ”_akhaliq retweeted
H
DailyPapers
@HuggingPapers
πŸ“…
Mar 16, 2026
3h ago
πŸ†”83694046
⭐0.36

OmniForcing unlocks real-time joint audio-visual generation Achieves ~25 FPS with 0.7s latencyβ€”a 35Γ— speedup over offline diffusion modelsβ€”by distilling bidirectional LTX-2 into a causal streaming generator with maintained multi-modal fidelity. https://t.co/UGYGMyTQOs

❀️9
likes
πŸ”1
retweets
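Back-of-envelope on the quoted numbers (all figures taken from the post, not independently measured):

```python
fps = 25        # quoted streaming frame rate
latency = 0.7   # seconds, quoted
speedup = 35    # quoted, vs. offline diffusion

print(1000 / fps)                   # 40.0 -- ms per generated frame
print(round(latency * speedup, 1))  # 24.5 -- implied offline latency, seconds
```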
OpenAIDevs (@OpenAIDevs) · Mar 16, 2026 · 4h ago

Subagents are now available in Codex. You can accelerate your workflow by spinning up specialized agents to:
• Keep your main context window clean
• Tackle different parts of a task in parallel
• Steer individual agents as work unfolds
https://t.co/QJC2ZYtYcA

πŸ–ΌοΈ Media
πŸ”jxnlco retweeted
O
OpenAI Developers
@OpenAIDevs
πŸ“…
Mar 16, 2026
4h ago
πŸ†”48174967
⭐0.34

Subagents are now available in Codex. You can accelerate your workflow by spinning up specialized agents to: β€’ Keep your main context window clean β€’ Tackle different parts of a task in parallel β€’ Steer individual agents as work unfolds https://t.co/QJC2ZYtYcA

❀️798
likes
πŸ”74
retweets
RaiaHadsell (@RaiaHadsell) · Mar 16, 2026 · 6h ago · ⭐0.38

It's been about 20 years since I first started working on embeddings with Yann LeCun (siamese networks!), and I've been fascinated ever since. Gemini Embeddings 2 approaches the platonic ideal: native embedding of text, image, video, audio, and docs to a single space.

@GoogleAIStudio • Tue Mar 10 17:25

https://t.co/mIXzM657cR

πŸ”jeremyphoward retweeted
R
raia hadsell
@RaiaHadsell
πŸ“…
Mar 16, 2026
6h ago
πŸ†”56989392
⭐0.36

It's been about 20 years since I first started working on embeddings with Yann LeCun (siamese networks!), and I've been fascinated ever since. Gemini Embeddings 2 approaches the platonic ideal: native embedding of text, image, video, audio, and docs to a single space.

❀️277
likes
πŸ”24
retweets
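For readers who haven't seen the siamese-network setup the post alludes to, here is a minimal sketch of the classic margin-based contrastive loss from that era. It is a toy illustration of the pairwise objective, not how Gemini Embeddings 2 is trained:

```python
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    """Classic siamese-pair objective: pull matching pairs together,
    push non-matching pairs at least `margin` apart."""
    d = np.linalg.norm(z1 - z2)
    if same:
        return d ** 2                    # penalize any separation
    return max(0.0, margin - d) ** 2     # only penalize if closer than margin

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])                 # Euclidean distance 5.0

print(contrastive_loss(a, b, same=True))   # 25.0 -- positive pair, far apart
print(contrastive_loss(a, b, same=False))  # 0.0  -- negative pair beyond margin
```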
PyTorch (@PyTorch) · Mar 16, 2026 · 3h ago · ⭐0.38

@Nvidiadev 🗓️ MONDAY @ Booth #338
2PM: Shaping the Future w/ @matthew_d_white
3PM: TensorRT + PyTorch w/ Angela Yi & @narendasan
4PM: DeepSpeed Trillion-Param Training w/ @PKUWZP
5PM: PyTorch Export w/ Angela Yi
6PM: Ray Distributed Computing w/ @robertnishihara
#AI #GTC2025

πŸ”_akhaliq retweeted
PixVerse (@PixVerse_) · Mar 16, 2026 · 11h ago · ⭐0.34

Your AI agent can now generate videos. PixVerse CLI ships today — JSON output, 6 deterministic exit codes, full PixVerse v5.6, Sora2 and Veo 3.1, Nano Banana access from terminal. Same account. Same credits. No new signup. -> Follow + Reply + RT = 300 Creds (72H ONLY)

❤️432 likes · 🔁155 retweets
alex_peys (@alex_peys) · Mar 16, 2026 · 5h ago · ⭐0.40

this was one of the things i co-led at fair. at the time fb had ~2b users, so embeddings of ~128d made it a 300b-1T parameter model depending on how you count entities (e.g. ad campaigns). back then this was big; now it's medium. we trained it purely on distributed cpus

@ylecun • Mon Mar 16 18:09

@RaiaHadsell Universal embeddings FTW 😊 One of the flagship projects at FAIR was to "embed the world" (i.e. represent every entity on Facebook). The name was soon changed to "Filament", deployed internally, and eventually open-sourced as "PyTorch-BigGraph" The techniques were m
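The parameter count in the post is just embedding-table multiplication. A quick check (entity count and width are as quoted; the four-table figure is an illustrative assumption, not from the post):

```python
entities = 2e9   # ~2B users, as quoted
dim = 128        # embedding width, as quoted

table = entities * dim
print(f"{table / 1e9:.0f}B")       # 256B parameters for one entity table

# Counting several entity types (pages, ads, campaigns, ...) multiplies the
# number of tables; e.g. four such tables already approach the 1T figure:
print(f"{4 * table / 1e12:.2f}T")  # 1.02T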

AdinaYakup (@AdinaYakup) · Mar 16, 2026 · 10h ago

Covo Audio 🔊 An end-to-end audio language model from @TencentAI_News https://t.co/tic5cH1A39
✨ 7B
✨ Audio → Audio in one model
✨ Multi-speaker + voice transfer
✨ Real-time full duplex conversations
https://t.co/hFrsxQgzkT

🖼️ Media
πŸ”ai_fast_track retweeted
A
Adina Yakup
@AdinaYakup
πŸ“…
Mar 16, 2026
10h ago
πŸ†”41999406

Covo Audio πŸ”ŠA end-to-end audio language model from @TencentAI_News https://t.co/tic5cH1A39 ✨ 7B ✨ Audio β†’ Audio in one model ✨ Multi-speaker + voice transfer ✨ Real-time full duplex conversations https://t.co/hFrsxQgzkT

Media 1
❀️77
likes
πŸ”11
retweets
πŸ–ΌοΈ Media
TeksEdge (@TeksEdge) · Mar 14, 2026 · 2d ago

🚨 Want to parse complex PDFs with SOTA accuracy, 100% locally? 📄🔍 At just 0.9B parameters, you can drop GLM-OCR straight into LM Studio and run it on almost any machine! 🥔
🧠 0.9B total parameters
💾 Runs on < 1.5GB VRAM (or ~1GB quantized!)
💸 Zero API costs
🔒 Total data privacy
Desktop document AI is officially here. 💻⚡

🖼️ Media
πŸ”ai_fast_track retweeted
T
David Hendrickson
@TeksEdge
πŸ“…
Mar 14, 2026
2d ago
πŸ†”30554364
⭐0.34

🚨 Want to parse complex PDFs with SOTA accuracy, 100% locally? πŸ“„πŸ” At just 0.9B parameters, you can drop GLM-OCR straight into LM Studio and run it on almost any machine! πŸ₯” 🧠 0.9B total parameters πŸ’Ύ Runs on < 1.5GB VRAM (or ~1GB quantized!) πŸ’Έ Zero API costs πŸ”’ Total data privacy Desktop document AI is officially here. πŸ’»βš‘

❀️2,365
likes
πŸ”218
retweets
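The VRAM figures follow from bytes-per-parameter arithmetic. A sketch covering weights only (activations and runtime buffers add overhead on top; the parameter count is as quoted):

```python
params = 0.9e9              # 0.9B parameters, as quoted

fp16_gb = params * 2 / 1e9  # 2 bytes per weight at half precision
int8_gb = params * 1 / 1e9  # 1 byte per weight after 8-bit quantization

print(round(fp16_gb, 1))    # 1.8
print(round(int8_gb, 1))    # 0.9 -- matching the "~1GB quantized" figure
```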
askalphaxiv (@askalphaxiv) · Mar 16, 2026 · 23h ago

Yann LeCun is pumping out papers recently: “Temporal Straightening for Latent Planning”. This paper shows that by straightening latent trajectories in a world model, Euclidean distance starts to reflect true reachable progress, so it's closer to geodesic/minimum-step distance. This makes gradient-based planning far more stable and effective without relying as heavily on expensive search.

🖼️ Media
πŸ”ylecun retweeted
A
alphaXiv
@askalphaxiv
πŸ“…
Mar 16, 2026
23h ago
πŸ†”49397718
⭐0.36

Yann LeCun is pumping out papers recently β€œTemporal Straightening for Latent Planning” This paper shows that by straightening latent trajectories in a world model, Euclidean distance starts to reflect true reachable progress, so it's closer to geodesic/minimum-step distance. This makes gradient-based planning far more stable and effective without relying as heavily on expensive search.

❀️702
likes
πŸ”115
retweets
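A toy sketch of what "straightening" a latent trajectory can mean: a simple curvature penalty that encourages consecutive latent steps to point the same way, so Euclidean distance tracks path length. This is an illustrative objective, not the paper's actual method, and all names are invented:

```python
import numpy as np

def straightness_penalty(latents):
    """Penalize curvature of a latent trajectory.

    latents: (T, d) sequence of latent states. Returns the mean of
    1 - cos(angle) between successive step directions: 0 for a
    straight path, larger for sharper turns.
    """
    steps = np.diff(latents, axis=0)                              # (T-1, d)
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)  # unit steps
    cos = (steps[:-1] * steps[1:]).sum(axis=1)                    # per-turn cosine
    return float((1.0 - cos).mean())

line = np.array([[0.0, 0], [1, 0], [2, 0], [3, 0]])  # straight path
bend = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])  # two right-angle turns

print(straightness_penalty(line))  # 0.0 -- already straight
print(straightness_penalty(bend))  # 1.0 -- each turn costs 1 - cos(90 deg)
```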
jxnlco (@jxnlco) · Mar 16, 2026 · 5h ago · ⭐0.38

codex app automations: slack pending replies

Review Slack for the current user and update today's daily summary note in /Users/jasonliu/vault at agent/daily-summary-YYYY-MM-DD.md with a single section titled ## Pending Slack Replies.

Use Slack search and thread reads across public channels, private channels, DMs, and group DMs to find conversations where the current user is mentioned, directly addressed, or has already participated, and where the latest substantive message is from someone else and the current user has not replied. Focus on recent activity, prioritizing today and the last 36 hours. Read candidate threads before including them. Exclude resolved threads, FYIs that do not need a response, and anything the user already answered later.

Rewrite the ## Pending Slack Replies section on each run instead of appending duplicates. For each pending item include: who is waiting, channel or DM name, last message time in America/Los_Angeles, a one-line summary of the ask or blocker, and a short snippet. If a stable Slack link is available, include it. If nothing is pending, keep the section and write - None right now. Keep the rest of the note unchanged.
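The "rewrite the section on each run instead of appending" requirement is the interesting part of this prompt: it asks for an idempotent update. A minimal sketch of that pattern in Python (a hypothetical helper, not how Codex implements it):

```python
import re

def upsert_section(note: str, heading: str, body: str) -> str:
    """Replace the named ## section's body in a markdown note, or append
    the section if missing. Idempotent across repeated runs."""
    block = f"{heading}\n\n{body}\n"
    pattern = re.compile(
        rf"^{re.escape(heading)}\n.*?(?=^## |\Z)", re.M | re.S
    )
    if pattern.search(note):
        return pattern.sub(block, note)
    return note.rstrip("\n") + "\n\n" + block

note = (
    "# Daily summary\n\n"
    "## Pending Slack Replies\n\n- old item\n\n"
    "## Done\n\n- shipped\n"
)
updated = upsert_section(note, "## Pending Slack Replies", "- None right now")
print("- old item" in updated)        # False -- old body replaced, not appended
print("- None right now" in updated)  # True
```

Running the helper again on its own output leaves the note unchanged, which is exactly the behavior the prompt asks the agent for.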