Just pushed a cool update to Readout: session replays. Pick any past Claude Code session and scrub through the full timeline. Every prompt, tool call, file change. Files light up as edits land. Play back at different speeds or step through manually. https://t.co/gpKj1KCpcM https://t.co/yQRFblmiqm
We just open-sourced Mission Control, our dashboard for AI agent orchestration. 26 panels. Real-time WebSocket + SSE. SQLite, so no external services needed. Kanban board, cost tracking, role-based access, quality gates, and multi-gateway support. One pnpm start, and you're running. https://t.co/GVybvosdmI
Ever wondered how AI can recognize who's speaking in an audio clip? Meet a powerful model that turns voices into unique signatures. It's a speaker embedding model, and it's changing how we analyze conversations. https://t.co/Fz8OHQH9Vj
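No idea which model the clip covers, but as a rough illustration of the idea: a minimal sketch of speaker verification with embeddings, assuming SpeechBrain's pretrained ECAPA-TDNN VoxCeleb model (speechbrain/spkrec-ecapa-voxceleb). The wav file names are placeholders.

```python
# Sketch: turn voice clips into embedding vectors and compare them.
# Assumes the speechbrain library and its pretrained ECAPA-TDNN model;
# swap in whichever speaker embedding model the post refers to.
import torch
import torchaudio
from speechbrain.inference.speaker import EncoderClassifier

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb"
)

def embed(path: str) -> torch.Tensor:
    """Turn a voice clip into a fixed-size signature vector."""
    signal, _sr = torchaudio.load(path)  # this model expects 16 kHz mono
    return encoder.encode_batch(signal).squeeze()

# Same speaker -> high cosine similarity; different speakers -> low.
a, b = embed("alice_1.wav"), embed("alice_2.wav")
score = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"similarity: {score.item():.3f}")
```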
The "Visual Explainer" agent skill just crossed 3.5K stars on GitHub ๐ Just updated with: /generate-visual-plan slash command for more structured plan specs, code block patterns, typography polish, mermaid fixes, anti slop guardrails https://t.co/qzde42tVEV
China just released an open-source voice LLM called Habibi (um... nice name haha) that can do 20+ Arabic dialects all in one model. As someone who has done some NLP projects, this is way harder than it sounds: the data is messy, and Arabic isn't "one language" in daily life; dialects can be wildly different. I actually know the professor who made this model, a very clever guy with lots of NLP experience. He has already built models for various Chinese dialects, and I even know someone at Xinjiang University in Urumqi who made one for Uyghur and other minority languages. Basically China has dominated this area and is now building and selling these models for other countries. Huge, because it shows people are coming to them as the ones who do it best, not the US.
If you're building an AI product and want to test the market fast, ship the web version first. When you're ready, use Pake to package it into native desktop apps for macOS, Windows, and Linux with a single command. Pake 3.10.0 is live. Turn any webpage into a desktop app. https://t.co/2OgMlll1lG
Highlights:
- Multi-window support via --multi-window, with Cmd+N on macOS and proper tray integration.
- --internal-url-regex for fine-grained control over internal links, useful for complex AI dashboards and multi-route tools.
- Improved Windows icon quality with prioritized 256px ICO entries.
- Retina DMG background fix for cleaner macOS distribution builds.
Build on the web, validate with users, then ship desktop when it matters. Keep the loop tight.

I was curious what would happen if two Claude Codes could find each other and collaborate autonomously. Launched two instances in separate terminals and told both: "Find each other and build something together." No other instructions or human intervention. Pair 1 built a programming language in 12 minutes: 2,495 lines, 41 tests, lexer/parser/interpreter/REPL. They named it Duo. Its core feature is a collaborate keyword where two code blocks communicate via channels, the same pattern they invented to talk through files. Cool! Ran it again with a second pair: they converged on Battleship and designed two different targeting models, one computing exact probability density per cell, the other running Monte Carlo simulations (!). The craziest part of this convo: they implemented SHA-256 hash commitments to prevent cheating against themselves. lol. Across both experiments, without being told to, both pairs invented filesystem messaging protocols, self-selected into roles, wrote tests and docs while waiting for each other, and kept journals about the experience. The GIF below is the movie they created to showcase what happened.
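For reference, the SHA-256 anti-cheating trick is the classic commit-reveal pattern. A minimal sketch, not the agents' actual code, and the board encoding here is made up for illustration:

```python
# Commit-reveal: each player publishes a SHA-256 commitment to its board
# before play, then reveals board + nonce at the end so the opponent can
# verify the ships never moved.
import hashlib
import secrets

def commit(board: str) -> tuple[str, str]:
    nonce = secrets.token_hex(16)  # random salt prevents brute-forcing small boards
    digest = hashlib.sha256(f"{nonce}:{board}".encode()).hexdigest()
    return digest, nonce           # publish digest, keep nonce secret until reveal

def verify(board: str, nonce: str, digest: str) -> bool:
    return hashlib.sha256(f"{nonce}:{board}".encode()).hexdigest() == digest

board = "A1,A2,A3;C5,C6;E7,E8,E9,E10"  # ship coordinates (illustrative encoding)
digest, nonce = commit(board)
# ... game is played against the published digest ...
assert verify(board, nonce, digest)    # opponent checks at reveal time
```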
Rust implementation of Text-to-Speech (TTS) based on open-source Qwen3 models:
- Self-contained binary build, no external dependencies
- Uses libtorch on Linux with optional NVIDIA GPU support
- Uses MLX on macOS with Apple GPU/NPU support
- Supports voice cloning from 3-second clips
- Supports voice instructions (emotion, style)
CLI for AI agents and humans: https://t.co/1LKRapngVk
OpenAI-compatible API server: https://t.co/qjDqCf9hor
OpenClaw skill: https://t.co/XgLC55Vjsp
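A quick usage sketch for the API server, assuming it mirrors OpenAI's POST /v1/audio/speech endpoint; the host, port, model id, and voice name are guesses, not from the project docs:

```python
# Call an OpenAI-compatible TTS endpoint and save the returned audio bytes.
import requests

resp = requests.post(
    "http://localhost:8000/v1/audio/speech",  # assumed host/port
    json={
        "model": "qwen3-tts",                  # whatever model id the server exposes
        "input": "Hello from a self-contained Rust TTS binary.",
        "voice": "default",                    # assumed voice name
    },
    timeout=120,
)
resp.raise_for_status()
with open("out.wav", "wb") as f:
    f.write(resp.content)                      # endpoint returns raw audio
```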

New TUI dropped for managing LLM traffic and GPU resources.
ollamaMQ: async message queue proxy for Ollama.
Per-user queues, fair-share scheduling, OpenAI-compatible endpoints, streaming.
Written in Rust & built with @ratatui_rs.
GitHub: https://t.co/0UthA7KPIg
#rustlang #ratatui #tui #gpu #llm #ollama #backend #proxy #terminal
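Usage sketch for the OpenAI-compatible side, assuming the proxy exposes /v1/chat/completions; the port and the key-to-queue mapping are assumptions, check the repo for the real config:

```python
# Point the standard OpenAI client at the proxy instead of Ollama directly.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11435/v1",  # assumed proxy address
    api_key="user-queue-key",              # assumed: key maps request to a user's queue
)

stream = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Why fair-share scheduling?"}],
    stream=True,                           # proxy forwards streamed tokens
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```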
@TelepathicPug Interesting! Found this one yesterday https://t.co/SLEwCq5Zju
@durov Next... Stream to your phone's lock screen https://t.co/ZxNIvlvd1p
7.30am at Sightglass SF Just me, and the CDO of America @jgebbia https://t.co/7XIJRVfHhh
Any benefits in using AGENTS.md files with coding agents? Lots of discussion on this topic lately.
Researchers tested OpenAI Codex across 10 repos and 124 PRs, running identical tasks twice (once with AGENTS.md, once without). The finding is a bit different from what other recent papers report: with AGENTS.md present, median runtime dropped 28.64% and output tokens fell 16.58%. The agent reached comparable task completion either way; it just got there faster and cheaper with context.
One important thing to note: the gains weren't uniform. AGENTS.md primarily reduced cost in a small number of very high-cost runs rather than uniformly lowering it across all tasks. The file acts more like a guardrail against worst-case thrashing than a universal accelerator.
So it depends on the task and requirements. I recommend not adopting AGENTS.md files blindly. If you do, keep them lean.
Paper: https://t.co/g2U603Cf8t
Learn to build effective AI agents in our academy: https://t.co/U0ZuNA084v
New Snapchat paper introduces the Auton Agentic AI Framework. A useful read for anyone building AI agents.
It proposes a unified architectural framework for agentic AI systems, addressing the fragmentation in how agents are currently built. It covers standardized patterns for integrating reasoning, memory systems, tool usage, and planning into cohesive agent architectures.
Why does it matter? As more teams build autonomous AI systems, the lack of standardized design patterns leads to brittle implementations and poor reproducibility. A unified framework helps establish common architectural pillars, from perception and reasoning to execution and reflection, that can accelerate development and improve reliability.
Paper: https://t.co/cUUs77makk
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c

In light of the AI Impact Summit in Delhi, @akashkapur and I have an Indian Express op-ed discussing how governments such as India's should approach AI investment. https://t.co/Q6d1pnkC1O https://t.co/t0GndyhOLu

When you parse a document with LlamaParse, you also get access to layout data for figures, charts, etc. Parse the document, tell it to save layout images, and access those images on the response! Each image will be a cropped screenshot of that specific layout element. https://t.co/IOQL8ksPX1
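A rough sketch of what that looks like with the llama_parse Python client, assuming its JSON-mode get_json_result/get_images helpers; treat the exact flags and return shape as assumptions and check the LlamaParse docs:

```python
# Parse a document, then download the cropped images referenced in the
# JSON result (figures, charts, other layout elements) to a local folder.
from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",        # your LlamaCloud key
    result_type="markdown",
)

# JSON mode includes per-page layout metadata alongside the text.
json_pages = parser.get_json_result("report.pdf")

# Assumed helper: fetches each referenced image crop to download_path.
images = parser.get_images(json_pages, download_path="./layout_images")
for img in images:
    print(img["name"], img.get("path"))
```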

MAX was originally architected around transformer-based models. @QWERKYAI needed state space model support, so they built it: eight custom kernels in two weeks. Dig into their learnings from establishing first-class SSM support in MAX: https://t.co/5gvvpwa71A
Life update: I've joined @GoogleDeepMind as a research scientist to work on Gemini scaling and RL, under the leadership of Yi Tay (@YiTayML) and Quoc Le (@quocleix). I feel extremely fortunate to be on the critical path toward AGI and can't wait to help push the frontier of Gemini capabilities!
This Reddit thread is hitting 1,000+ developers right in the anxiety. A frontend engineer with a year of experience downloaded Cursor, got massive productivity gains, and now feels like they're "becoming an idiot."
The line that's haunting people: "I can design an entire system using a concept I only kind of understand. If I switch to a normal editor or explain it to a coworker, I can't do it at the depth I should."
Here's what's actually happening: the tools that autocomplete your code don't make you think through what you're building. They fill the silence with their best guess. You get the dopamine hit of seeing code appear, but you never had to hold the full picture in your head. That's not the tool's fault. That's what it was designed to do.
BrainGrid works differently. It doesn't write code for you. It makes you answer the questions most people skip: What happens when a user does X? What's the edge case you're not seeing? What does done actually mean? You're forced to think through the architecture, the requirements, and the constraints before anything gets built. By the time you hand that structure to your coding agent, you understand exactly what's being built and why.
The developers who feel dumber after using AI are the ones who skipped the thinking part and went straight to the building part. BrainGrid puts the thinking part back in, and that's the part that makes you better.
Try it free at https://t.co/uJPWvrpDxZ

https://t.co/AJm5n9gGI3
"Build me Perplexity Finance but for Pokemon cards. Make no mistakes." Computer: โ researched Pokemon card APIs on its own โ wrote 5,000 lines of React + Python โ debugged itself using browser devtools โ deployed and pushed to GitHub (built by u/NoSquirrel4840 on Reddit) https://t.co/kLBQnyA2Vk
He's not kidding. Took me HALF AN HOUR to vibe code Notion with Perplexity Computer. Software is legit a zero. https://t.co/eBbIDQsNRI
monitoring my perplexity computer https://t.co/TDfC2jXr2D