Your curated collection of saved posts and media

Showing 24 posts · last 30 days · by score
GordonWetzstein (@GordonWetzstein) · 📅 Mar 04, 2026 (7d ago) · 🆔 59376026

Video world models today have a very limited context length. Mode Seeking meets Mean Seeking (MMM) unlocks long-context, persistent video world models through a unified representation. 1/8 🧵 https://t.co/XXMic82qoc

🖼️ Media
Xianbao_QIAN (@Xianbao_QIAN) · 📅 Mar 02, 2026 (8d ago) · 🆔 61966034

New model updates from iquestlab. If you're trying to find an inference model that you can run offline, this is probably the one you're looking for.
- 7B and 14B coding models
- Optimized for tool use, CLI agents, and HTML generation
- 128k context length
- Explicit and detailed prompting works best
- MIT license with a logo-display requirement
- Available on @huggingface

🖼️ Media
RisingSayak (@RisingSayak) · 📅 Mar 05, 2026 (5d ago) · 🆔 15248724

Diffusers 0.37.0 is out 🔥 New models, including LTX-2, Helios, GLM-Image, and more. We're proud to be shipping the wild hot RAEs in this release, too! New CP backends, caching methods, and more are in as well! Check out the release notes for details 🧨 https://t.co/fzwmRDgk80

🖼️ Media
๐Ÿ”huggingface retweeted
R
Sayak Paul
@RisingSayak
๐Ÿ“…
Mar 05, 2026
5d ago
๐Ÿ†”15248724
โญ0.34

Diffusers 0.37.0 is out ๐Ÿ”ฅ New models, including LTX-2, Helios, GLM-Image, and more. We're proud to be shipping the wild hot RAEs in this release, too! New CP backends, caching methods, etc., are in too! Check out the release notes for more details ๐Ÿงจ https://t.co/fzwmRDgk80

โค๏ธ108
likes
๐Ÿ”19
retweets
rasbt (@rasbt) · 📅 Mar 03, 2026 (7d ago) · 🆔 54649395

@DnuLkjkjh This one doesn't have MoE, but I have the larger Qwen3 models with MoE if you're interested: https://t.co/IcyLHmP4dz

🖼️ Media
OpenAI (@OpenAI) · 📅 Mar 05, 2026 (5d ago) · 🆔 43219811

GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model. https://t.co/1hy6xXLAmJ

🖼️ Media
๐Ÿ”ivanleomk retweeted
J
jytan
@jyt4n
๐Ÿ“…
Mar 06, 2026
5d ago
๐Ÿ†”06175859
โญ0.32

Wrote a blog post about my journey here. Has some scalability limitations & will fix them soon. Appreciate any pointers/feedback! https://t.co/javKm9ebYa

โค๏ธ11
likes
๐Ÿ”3
retweets
OpenAI (@OpenAI) · 📅 Mar 05, 2026 (5d ago) · 🆔 23189283 · ⭐ 0.38

GPT-5.4 Thinking and Pro are rolling out gradually starting today across ChatGPT, the API, and Codex. https://t.co/LukYM9v1vk

code (@code) · 📅 Mar 04, 2026 (6d ago) · 🆔 78515372

Agents, for real work. The latest @code release gives you better agent orchestration, extensibility, and continuity. Here's what's new:
🪝 Hooks support
🎯 Message steering and queueing
🌐 Agentic integrated browser
🧠 Shared memory
And more... https://t.co/F5NTXXYjsZ

🖼️ Media
Mid0 (@Mid0) · 📅 Mar 03, 2026 (7d ago) · 🆔 85090257 · ⭐ 0.36

I have a new workflow for automated bug fixing and minor enhancements:
1. Push bugs & enhancements to GitHub issues
2. Ask your agent to comment and @-mention @coderabbitai to plan the fix
3. Schedule agents to work on those
4. Run agents to review & fix feedback from the PR
5. Approve

SipeedIO (@SipeedIO) · 📅 Mar 03, 2026 (7d ago) · 🆔 11570455

✨ Can you imagine your personal assistant running in a bottle cap? The tiny #PicoClaw has done it! 🦞 It's not an RPi0 connected to a remote #openclaw server, but #RISCV open-source hardware truly running a local PicoClaw. It also supports voice interaction, all for $20! Want to adopt one? 🏠 https://t.co/PqeIELNyQi

🖼️ Media
tom_doerr (@tom_doerr) · 📅 Mar 05, 2026 (6d ago) · 🆔 47190086

Terminal session manager for AI coding agents https://t.co/oWlUMAxM5q https://t.co/bBG9bPb2XH

🖼️ Media ×2
๐Ÿ”Sanemavcil retweeted
T
Tom Dรถrr
@tom_doerr
๐Ÿ“…
Mar 05, 2026
6d ago
๐Ÿ†”47190086

Terminal session manager for AI coding agents https://t.co/oWlUMAxM5q https://t.co/bBG9bPb2XH

Media 1Media 2
โค๏ธ146
likes
๐Ÿ”19
retweets
๐Ÿ–ผ๏ธ Media
max_a_schwarzer (@max_a_schwarzer) · 📅 Mar 03, 2026 (7d ago) · 🆔 44585989 · ⭐ 0.42

I've decided to leave OpenAI. I'm incredibly proud of all the work I've been part of here, from helping create the reasoning paradigm with @MillionInt, scaling up test-time compute with @polynoamial, working on RL algorithms with my fellow strawberries, and shipping o1-preview (which started life as one of my derisking runs), to post-training o1 and o3 with @ericmitchellai, @yanndubs, and many others. I'm most proud of having led the post-training team here for the last year -- the team has done incredible work and shipped some really smart models, including GPT-5, 5.1, 5.2, and 5.3-Codex.

OpenAI genuinely has some of the most talented researchers I have ever met, and I have learned more than I could have imagined since I joined as a new grad. I want to thank @markchen90, @FidjiSimo, @sama, and @merettm for all their support over my time here, and too many collaborators to name for the insights, ideas, and just plain fun we have had working together.

After leading post-training for a year, though, I'm longing to start fresh and return to IC research work. I've been thinking about going back to technical research for quite some time, and I genuinely believe my colleagues and team here are set up to succeed going forward without me.

I'm personally very excited for my next chapter -- I'm proud to be joining @AnthropicAI to get back into the weeds in RL research, and I'm looking forward to supporting my friends there at this important time. Many of the people I most trust and respect have joined Anthropic over the last couple of years, and I'm excited to work with them again. I have also been very impressed with Anthropic's talent, research taste, and values, and I'm excited to be part of what the company does next!

addyosmani (@addyosmani) · 📅 Mar 05, 2026 (6d ago) · 🆔 67805081

Introducing the Google Workspace CLI: https://t.co/8yWtbxiVPp - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.

🖼️ Media
๐Ÿ”ai_fast_track retweeted
A
Qwen
@Alibaba_Qwen
๐Ÿ“…
Mar 02, 2026
8d ago
๐Ÿ†”10965160
โญ0.34

🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B
✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation: native multimodal, improved architecture, scaled RL.
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models
And yes, we're also releasing the Base models. We hope this better supports research, experimentation, and real-world industrial innovation.
Hugging Face: https://t.co/wFMdX5pDjU
ModelScope: https://t.co/9NGXcIdCWI

โค๏ธ21,268
likes
๐Ÿ”2,919
retweets
MLflow (@MLflow) · 📅 Mar 03, 2026 (7d ago) · 🆔 70951961

Back in January, the MLflow team sat down with @mlopscommunity to discuss why MLflow is being rebuilt for the "AI Engineer" era. As more teams move toward autonomous agents, this conversation is more relevant than ever. The highlights:
🔹 The GenAI Pivot: Why MLflow is being rebuilt for agents and real production systems.
🔹 The Messy Reality: Tackling evals, risky memory management, and governance that actually works.
🔹 The Future: Why MLflow remains the leading open-source standard for the next generation of AI.
Don't build the next generation of AI on a legacy stack.
📺 Watch: https://t.co/TpLzUGNei0
🎧 Listen: https://t.co/VABLK7jqcC
#MLflow #GenAI #LLMOps #AgenticAI

🖼️ Media ×2
elvissun (@elvissun) · 📅 Mar 03, 2026 (8d ago) · 🆔 19107687

zoe was burning 24M+ opus tokens/day monitoring agents that weren't running. replaced her cron with a 2-layer system:
- bash pre-check, zero tokens when idle
- webhook fires opus only when needed
~95% token reduction and more reliable output. details below. (set up a cron to watch this performance; if it works well I'll double down on this event-driven stack, seems like the future)

🖼️ Media
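The two-layer gate described in the post can be sketched roughly as follows. This is a minimal illustration under my own assumptions; `pending_work`, the status labels, and `invoke_llm` are hypothetical stand-ins, not elvissun's actual setup.

```python
# Layer 1: a cheap, token-free pre-check decides whether any agent needs
# attention. Layer 2 (the expensive LLM call) fires only when it does.
# All names here are hypothetical stand-ins for illustration.

def pending_work(agent_states):
    """Token-free pre-check: return only the agents that need attention."""
    return [a for a in agent_states if a.get("status") in {"stuck", "failed"}]

def monitor_tick(agent_states, invoke_llm):
    """One cron/webhook tick: call the model only when the pre-check finds work."""
    work = pending_work(agent_states)
    if not work:
        return None          # idle path: zero tokens spent
    return invoke_llm(work)  # expensive path: fires only when needed
```

With mostly-idle agents, almost every tick takes the zero-token branch, which is where a reduction like the ~95% the post reports would come from.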
steipete (@steipete) · 📅 Mar 05, 2026 (5d ago) · 🆔 99042332

TIL: There's a whole bunch of interesting skills in the oss codex repo: https://t.co/gNFHV3MD2j $skill-installer playwright-interactive (also /fast is sweeeeet, 1.5x codex makes a huge diff!) https://t.co/XTENPuZ9Ie

🖼️ Media ×2
LiorOnAI (@LiorOnAI) · 📅 Mar 02, 2026 (8d ago) · 🆔 52031145 · ⭐ 0.42

Someone just bypassed Apple's Neural Engine restrictions to train models. The Neural Engine inside every M-series Mac was designed for inference: run models, don't train them. No public API, no documentation, and certainly no backpropagation.

A researcher reverse-engineered the private APIs anyway and built a transformer training loop that runs forward and backward passes directly on the ANE hardware. The method bypasses CoreML entirely. Instead of using Apple's official tools, the project constructs programs in MIL (Model Intermediate Language), compiles them in-memory using undocumented `_ANEClient` APIs, and feeds data through IOSurface shared-memory buffers. Weights get baked into the compiled programs as constants.

Each training step dispatches six custom kernels: attention forward, feedforward forward, then four backward passes that compute gradients with respect to inputs. Weight gradients still run on the CPU using Accelerate's matrix libraries, but the heavy lifting (matrix multiplies, softmax, activation functions) happens on the ANE.

This makes three things possible that weren't before:
1. Training small models locally without burning through your battery
2. Fine-tuning on-device without sending data to a server or spinning up the GPU
3. Research into what the ANE hardware can actually do when you ignore Apple's guardrails

If this approach scales, the next wave of on-device AI stops being about running someone else's frozen model.
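The gradient split the post describes (input gradients on the accelerator, weight gradients on the CPU) can be sketched in plain Python. This is my own illustrative stand-in for a single linear layer, not the project's MIL/ANE code; the "devices" are just functions so the data flow of one training step is visible.

```python
# One training step of y = x @ W, split the way the post describes:
# forward and dL/dx on the "ANE", dL/dW on the "CPU" (Accelerate).

def matmul(a, b):
    """Naive matrix multiply; stands in for an ANE or Accelerate kernel."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

def linear_step(x, w, grad_y, lr=0.1):
    """Forward + input gradient on the 'ANE'; weight gradient on the 'CPU'."""
    y = matmul(x, w)                       # forward pass ("ANE kernel")
    grad_x = matmul(grad_y, transpose(w))  # dL/dx = dL/dy @ W^T ("ANE kernel")
    grad_w = matmul(transpose(x), grad_y)  # dL/dW = x^T @ dL/dy ("CPU")
    w_new = [[wij - lr * gij for wij, gij in zip(wr, gr)]
             for wr, gr in zip(w, grad_w)]
    return y, grad_x, w_new
```

The point of the split is that the large matmuls (forward and input-gradient) sit on the accelerator, while the weight update, which needs to write back into the baked-in constants, stays on the CPU.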

omarsar0 (@omarsar0) · 📅 Mar 03, 2026 (7d ago) · 🆔 96343923

Can AI agents agree? Communication is one of the biggest challenges in multi-agent systems. New research tests LLM-based agents on Byzantine consensus games: scenarios where agents must agree on a value even when some participants behave adversarially.

The main finding: valid agreement is unreliable even in fully benign settings, and degrades further as group size grows. Most failures come from convergence stalls and timeouts, not subtle value corruption.

Why does it matter? Multi-agent systems are being deployed in high-stakes coordination tasks. This paper is an early signal that reliable consensus is not an emergent property you can assume. It needs to be designed explicitly.

Paper: https://t.co/3fllhchiKX
Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX

🖼️ Media
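To see why adversarial participants break naive agreement, here is a toy one-round majority-vote simulation. It is my own illustration, not the paper's protocol: honest agents all vote the same value, while Byzantine agents equivocate, telling different receivers different things.

```python
# Toy Byzantine setting: each of n_honest receivers tallies a vote from
# every agent and decides by majority. Byzantine agents equivocate,
# sending 1 to even-indexed receivers and 0 to odd-indexed ones.

def run_round(n_honest, n_byz, honest_value=1):
    decisions = []
    for receiver in range(n_honest):
        votes = [honest_value] * n_honest  # honest agents vote consistently
        votes += [receiver % 2] * n_byz    # Byzantine agents equivocate
        ones = sum(votes)
        decisions.append(1 if ones * 2 > len(votes) else 0)
    return decisions

def agreement(decisions):
    """Valid agreement: every honest agent decided the same value."""
    return len(set(decisions)) == 1
```

With few adversaries the honest majority dominates and everyone decides the same value; once the Byzantine votes can swing individual majorities, the honest agents' decisions diverge, which is why real protocols need explicit quorum bounds rather than simple voting.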
omarsar0 (@omarsar0) · 📅 Mar 03, 2026 (7d ago) · 🆔 22674842

MCP is dead? What are your thoughts? I mostly use Skills and CLI lately. I still use a few MCP tools for orchestrating agents more efficiently. https://t.co/o6saSxNQ9s

🖼️ Media
๐Ÿ”dair_ai retweeted
O
elvis
@omarsar0
๐Ÿ“…
Mar 03, 2026
7d ago
๐Ÿ†”22674842
โญ0.32

MCP is dead? What are your thoughts? I mostly use Skills and CLI lately. I still use a few MCP tools for orchestrating agents more efficiently. https://t.co/o6saSxNQ9s

โค๏ธ239
likes
๐Ÿ”17
retweets
ph_singer (@ph_singer) · 📅 Feb 26, 2026 (12d ago) · 🆔 83063300 · ⭐ 0.38

@alex_prompter Without opening the paper: how did they gather the ground truth? My naive assumption is that if they were able to gather the ground truth, it's already out there somewhere.