Your curated collection of saved posts and media
Introducing Runway Characters: real-time intelligent avatars that turn the internet into a conversation. Deployable anywhere via the Runway API, Runway Characters can be customized across every style, with the ability to embed bespoke knowledge banks, custom voices and instructions. Start integrating Runway Characters directly into your apps, websites, products and services today. Available now at the link below.
The good/bad part about agentic coding is that the barrier to getting nerdsniped is now much lower https://t.co/CiGerRgM8H https://t.co/z6p0W229YM

Penguin-VL: Exploring the Efficiency Limits of VLM with LLM-based Vision Encoders. app: https://t.co/VZ8IvEdjN3 paper: https://t.co/XSM2GGVcCz https://t.co/ovxWSRJG0n

C++ devs: your AI-assisted flows just got even smarter! With the new symbol-level context and CMake-aware build tools, your agents now have access to rich C++ specific intelligence directly in VS Code. Learn more: https://t.co/ErgApqTzZc
shout-out to @nicopreme and @jxnlco for being based gods and hooking me up with a ChatGPT Pro subscription for my OSS contributions! cheers https://t.co/HACBr3Dvme
KARL: Knowledge Agents via Reinforcement Learning. paper: https://t.co/sTeBtxk5Ls
MatAnyone 2 is out on Hugging Face: Scaling Video Matting via a Learned Quality Evaluator. paper: https://t.co/KPMaG8teJ2 app: https://t.co/wkMpaOdoCh https://t.co/ZSQNrOKcv4
A major legal line has been drawn in the AI creativity debate. A U.S. court ruling reinforced that copyright law protects works created by humans, not machines. For "AI artists," the message is clear: without meaningful human authorship, there may be no copyright. https://t.co/g1Nferwkod @futurism
A Threads user named Laushi Liu posted dashcam footage from his Tesla Model 3 on Sunday, March 8, showing the vehicle on "Full Self-Driving" mode at 23 mph near West Covina, California. In the video, the car approaches a railroad crossing where barriers have just come down, and drives straight through them. The timing is almost poetic: this video drops the day Tesla is supposed to finally hand NHTSA the data from its FSD violation investigation, after two deadline extensions. We'll be watching to see whether Tesla actually delivers, and what that data reveals about just how common these railroad crossing failures really are.
They haven't even discovered the sacred texts yet. https://t.co/aQE41MqaWn
Learn how to run Qwen3.5 locally using Claude Code. Our guide shows you how to run Qwen3.5 on your server for local agentic coding. We then build a Qwen 3.5 agent that autonomously fine-tunes models using Unsloth. Works on 24GB RAM or less. Guide: https://t.co/JDPtuIJAZC
Announcing Copilot Cowork, a new way to complete tasks and get work done in M365. When you hand off a task to Cowork, it turns your request into a plan and executes it across your apps and files, grounded in your work data and operating within M365's security and governance boundaries.
New on Lovart: Multi-Angles. Drag to rotate, tilt, and scale. One image, every angle, no prompt needed.
- Subject Mode: move the subject directly
- Camera Mode: move the virtual camera
Like + reply + follow: 30 lucky winners get 300 credits each! https://t.co/n6VDphWw5Q
Is the rise of coding agents surprising or consistent with our predictions? Thanks for the question, @_NathanCalvin. https://t.co/fLdWDgSRAL

The answer is: both surprising and consistent. AI as Normal Technology (AINT) doesn't give us a way to predict the timing of specific capability advances, and we haven't tried to do that. But when it comes to understanding why coding agents work so well and what their impacts are likely to be, AINT is extremely helpful (and its predictions are consistent with what we observe so far).

1. Products, not just models. One key prediction is that model capability advances are generally not useful by themselves; building products is still necessary in order to meet people where they are, instead of forcing people to contort their workflows to fit the affordances of raw LLMs. That's exactly what we see with Claude Code and other agents. If we try to understand the success of coding agents as the result of model capability leaps, it doesn't make sense. Rather, coding agents have dozens if not hundreds of features, both big (like memory) and small (like rewinding or interruptibility), that allow software engineers to integrate them into workflows.

2. Early adoption. Despite everything we hear on X, we're still in the early adoption phase. The median programmer (keep in mind that they may work in a regulated industry like finance or healthcare) has barely heard of coding agents and is not yet using them in any serious way.

3. The speed of diffusion. As I've written before, the software industry has uniquely low diffusion barriers, and programmers have a long history of embracing productivity improvements to continually migrate up the abstraction chain (machine code -> assembly -> compiled languages -> high-level languages -> frameworks -> AI-assisted programming). Because of this, software "has never had time or the cultural inclination to ossify institutional processes around particular ways of doing things."
I highly doubt that we are going to see the same speed of diffusion in other sectors. For example, see our analysis of AI in legal services here: https://t.co/0kYIaT2UJJ

4. Labor market impacts. AINT predicted that in most cognitive jobs the result of AI adoption won't be replacing humans but shifting the role of humans to supervising AI systems. Of course we were hardly alone in making that prediction, but it's good to see that this is what is happening in software. There's also the fact that in most white-collar jobs, if it gets cheaper to produce a unit of work, we will simply produce more of it, orders of magnitude more in the case of software (related to "Jevons paradox"). This is another factor that mitigates job loss risks.

People are lying to you. These agents don't work as they promised. https://t.co/3Oyoi7i4zh
Microsoft seems to be launching its own branded version of Cowork (though I hesitate to discuss products I haven't tried). A big question is whether it will continue to use lower-end models without telling you. Also whether it will keep up as the space evolves, or is it a one-off. https://t.co/9ZkHEfZ6zr
@FrankieIsLost This diagram by @trychroma shows how accuracy crashes past ~5K tokens, dropping below 50/50. Let that sink in: you might need ~50 attempts to get the same result (if it exists). If not, you could be heading toward 100 tries with zero chance of success. https://t.co/qG2vWoAQBo https://t.co/OuSMrnUL3q
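The retry arithmetic in the post above follows from basic probability: if a single attempt succeeds with probability p, the expected number of attempts until the first success is 1/p, and the chance of at least one success in n tries is 1 - (1-p)^n. A quick sketch with toy numbers (illustrative only, not figures from the Chroma diagram):

```python
# Expected attempts and cumulative success odds for a given
# per-attempt success probability p (toy numbers for illustration).

def expected_attempts(p: float) -> float:
    """Mean number of independent trials until the first success."""
    return 1.0 / p

def success_within(p: float, n: int) -> float:
    """Probability of at least one success in n independent attempts."""
    return 1.0 - (1.0 - p) ** n

# A ~2% per-attempt success rate implies ~50 expected attempts,
# which is the regime the post is describing.
print(expected_attempts(0.02))             # 50.0
print(round(success_within(0.02, 50), 3))  # ~0.64: even 50 tries are no guarantee
print(success_within(0.0, 100))            # 0.0: no retries help if success is impossible
```

The last line is the post's "zero chance of success" case: if the answer isn't recoverable at all, no number of retries changes the odds.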

mlx-audio v0.4.0 is here. What's new:
- Qwen3-TTS: fastest generation on Apple silicon and first batch support.
  > Sequential (<80 ms TTFB at 2.75x realtime)
  > Batch support (<210 ms TTFB at 4.12x for batch of 4-8)
- Audio separation UI & server
- nvfp4, mxfp4, mxfp8 quantization
- Streaming /v1/audio/speech endpoint
- Realtime STT streaming toggle
New models:
- Echo TTS
- Voxtral Mini 4B
- MingOmni TTS (MoE + Dense)
- KittenTTS
- Parakeet v3
- MedASR
- Spoken language identification (MMS-LID)
- Sortformer diarization + Smart Turn v3 semantic (VAD)
Plus fixes for Kokoro Chinese TTS, Pocket TTS, Whisper, Qwen3-ASR, and more. Thank you very much to @lllucas, @beshkenadze, @KarnikShreyas, @andimarafioti, @mnoukhov, and welcome the 13 new contributors.
Get started today:
> pip install -U mlx-audio
Leave us a star: https://t.co/bQ5WBLR6FK

Even AI experts aren't immune to the disruption they helped create. A machine learning engineer who thought his role was safe from automation was told AI could eventually replace much of his work. The lesson is becoming clear: no profession is entirely insulated from the technology it builds. https://t.co/Xlr7qqoB79 @futurism
New research on scaling agent memory for long-horizon tasks. One of the biggest challenges with AI agents is memory. As tasks get longer and more complex, agents lose track of what they've learned, what they've tried, and what worked. This paper, from Accenture, introduces Memex(RL), a system that gives agents indexed experience memory. Instead of relying on raw context windows, agents build a structured, searchable index of past experiences and retrieve relevant memories as needed. Long-horizon agent tasks like deep research, multi-step coding, and complex planning all require persistent memory. Memex(RL) shows how to scale this without blowing up context length. Paper: https://t.co/TWMF5HC6Qe Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
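As a rough illustration of the general idea (not Memex(RL)'s actual implementation, whose index structure and retrieval policy are described in the paper): an agent can keep past episodes in a small keyword-indexed store and pull back only the relevant ones, instead of replaying its full history into the context window.

```python
# Toy sketch of an indexed experience memory: store past episodes,
# index them by keyword, and retrieve only entries relevant to the
# current task. Illustrates the concept, not Memex(RL) itself.
from collections import defaultdict

class ExperienceMemory:
    def __init__(self):
        self.experiences = []          # full records, in insertion order
        self.index = defaultdict(set)  # keyword -> set of experience ids

    def add(self, text: str) -> None:
        exp_id = len(self.experiences)
        self.experiences.append(text)
        for word in set(text.lower().split()):
            self.index[word].add(exp_id)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Score stored experiences by keyword overlap with the query.
        scores = defaultdict(int)
        for word in set(query.lower().split()):
            for exp_id in self.index.get(word, ()):
                scores[exp_id] += 1
        top = sorted(scores, key=lambda i: (-scores[i], i))[:k]
        return [self.experiences[i] for i in top]

memory = ExperienceMemory()
memory.add("retry failed api calls with exponential backoff")
memory.add("parse csv files with the standard csv module")
memory.add("api rate limits require backoff and caching")

# Only the backoff-related experiences come back for a backoff question.
print(memory.retrieve("how to handle api backoff", k=2))
```

The payoff is the same as in the paper's framing: memory grows with the number of episodes, but the context the agent actually sees stays bounded at the k retrieved entries. Real systems would use embedding similarity rather than keyword overlap.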

Do people become more conservative as they age? If they were born between 1940 and 1954, the answer is clearly "yes." Among people born from 1955 to 1979, there's really been no change. For those born in 1980 or later, it looks like they are becoming more liberal as they age. https://t.co/cE3iJayMWH
New research from Databricks on training enterprise search agents via RL. KARL introduces a multi-task RL approach where agents are trained across heterogeneous search behaviors: constraint-driven entity search, cross-document synthesis, and tabular reasoning. It generalizes substantially better than agents optimized for any single benchmark. KARL is Pareto-optimal on both cost-quality and latency-quality trade-offs compared to Claude 4.6 and GPT 5.2. With sufficient test-time compute, it surpasses the strongest closed models while being more cost efficient. Paper: https://t.co/CToEmDU89J Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
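"Pareto-optimal on the cost-quality trade-off" means no other model is simultaneously cheaper and at least as good. Given (cost, quality) points, the frontier is easy to compute; a toy illustration with made-up numbers (not measurements from the paper):

```python
# Toy Pareto frontier over (cost, quality) points: a model is
# Pareto-optimal if no other model dominates it, i.e. has
# cost <= its cost and quality >= its quality, with at least one
# strict inequality. All numbers below are invented for illustration.

def pareto_frontier(points: dict[str, tuple[float, float]]) -> list[str]:
    """Return the names whose (cost, quality) point is not dominated."""
    frontier = []
    for name, (cost, quality) in points.items():
        dominated = any(
            other != name
            and oc <= cost and oq >= quality
            and (oc < cost or oq > quality)
            for other, (oc, oq) in points.items()
        )
        if not dominated:
            frontier.append(name)
    return frontier

models = {
    "model_a": (1.0, 70.0),  # cheap, decent
    "model_b": (5.0, 85.0),  # pricier, better
    "model_c": (6.0, 80.0),  # dominated by model_b: costs more, scores less
}
print(pareto_frontier(models))  # → ['model_a', 'model_b']
```

Being on the frontier for both cost-quality and latency-quality, as KARL claims, means no competitor beats it on either pair of axes at once.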

Block cut nearly half its workforce citing AI productivity gains. But current and former employees say the reality is more complicated, arguing many of their roles can't simply be automated. The gap between AI expectations and operational reality is becoming a recurring theme in tech layoffs. https://t.co/YiM5Cxo6fl