Your curated collection of saved posts and media
Every Company Building Your AI Assistant Is Now an Ad Company https://t.co/pSBtV7nfDZ #devtalk
Announcing a new Claude Code feature: Remote Control. It's rolling out now to Max users in research preview. Try it with /remote-control. Start local sessions from the terminal, then continue them from your phone. Take a walk, see the sun, walk your dog without losing your flow.
I miss this kind of forced compression in tweets. High communication bandwidth is the essence of intellectual transfer. https://t.co/QsHVceIYYT
when it still felt like magic https://t.co/o6RRnC6DVJ
thinking about summer days ☀️ https://t.co/I9Bd2waQuK https://t.co/jNpgDmtWt7
It's been just delightful to discover that I can now use old tricks from improv (like animating a character with specific details and controlled randomness) in order to make AI more alive. Longer post with prompts etc: https://t.co/AbB9luG1y8
Haha my Claude Code decided to cheat..... https://t.co/69QA9UvOk3
Very cool that you can paste shadertoy code and have it automatically ported! And the inspect tool let me poke at the internals of this SIREN network in a way I never thought to before, fun to see how the final output builds up through the layers :D https://t.co/uOhJXQcS8p https://t.co/wKGRzRLi3W
Introducing FragCoord: My ultimate shader editing tool! https://t.co/ZdGm8992iZ

Dark breakfast review: really good! Texture reminds me of mupotohayi, improved in this case by crispy bacon and buttery hollandaise. Think I prefer a regular Benedict though. Recipe: 3 eggs, 1/2c flour, 1/4c milk, cook like an omelette. Final egg w/ lemon and butter for sauce. https://t.co/Y0fBjw1DWo
Inspiration: @moultano 's delightful recent post: https://t.co/bfwT0FrlTj (Addition of food coloring, bacon, hollandaise ingredients improvised)
@ATinyGreenCell Nice! Are these the ones you got from that place which has a bunch of known strains? Mine yesterday, from a few sad fronds last month, magic stuff https://t.co/VodTGLaqio
Fun with my pipette bot 🧬 https://t.co/ad3nSrXgCy
@TelepathicPug Ha, I have the same one just at larger volumes. I use a 20-200uL for the bot ATM, but nothing besides a lack of patience preventing an attempt at much tinier pixel art with this https://t.co/pB4lQWBRMr
Mercury 2 is live. The world's first reasoning diffusion LLM: 5x faster than leading speed-optimized autoregressive models. Built for production: multi-step agents without delays, voice AI with tight latency budgets, instant coding feedback. Diffusion-based generation enables parallel refinement, not sequential tokens. Faster. More controllable. Dramatically lower inference cost. Available today on the Inception API. @dinabass has the story in @business.
https://t.co/Uy6qo3WzVy
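The Mercury post above contrasts parallel refinement with sequential token generation. A toy sketch of the difference, assuming nothing about the real Mercury architecture: the "denoiser" here is a stand-in function that resolves each masked position with some probability per step, so all positions are refined in parallel, while the autoregressive baseline necessarily takes one step per token.

```python
import random

def diffusion_step(seq, target, rng, p=0.5):
    # Toy stand-in for one diffusion step: every still-masked (None) position
    # is refined in parallel; each resolves this step with probability p.
    out = []
    for s, t in zip(seq, target):
        if s is not None:
            out.append(s)           # already resolved, keep it
        elif rng.random() < p:
            out.append(t)           # "denoised" to the right token
        else:
            out.append(None)        # still masked, try again next step
    return out

def diffusion_generate(target, rng):
    seq = [None] * len(target)
    steps = 0
    while None in seq:
        seq = diffusion_step(seq, target, rng)
        steps += 1                  # steps grow ~log(len) rather than len
    return seq, steps

def autoregressive_generate(target):
    # One token per forward pass: always exactly len(target) steps.
    seq = []
    for t in target:
        seq.append(t)
    return seq, len(target)

target = ["tok%d" % i for i in range(20)]
dseq, dsteps = diffusion_generate(target, random.Random(0))
aseq, asteps = autoregressive_generate(target)
```

With p=0.5 the parallel loop typically finishes in roughly log2(N) steps for an N-token sequence, which is the intuition behind the latency claim; the real model's step count and refinement rule are of course different.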
Introducing the Qwen 3.5 Medium Model Series: Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B. More intelligence, less compute. • Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B: a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts. • Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios. • Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring 1M context length by default and official built-in tools. Hugging Face: https://t.co/wFMdX5pDjU ModelScope: https://t.co/9NGXcIdCWI Qwen3.5-Flash API: https://t.co/82ESSpaqAF Try in Qwen Chat: Flash: https://t.co/UkTL3JZxIK 27B: https://t.co/haKxG4lETy 35B-A3B: https://t.co/Oc1lYSTbwh 122B-A10B: https://t.co/hBMODXmh1o Would love to hear what you build with it.

Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end. https://t.co/dZUybl6VkY
Meet Hermes Agent, the open source agent that grows with you. Hermes Agent remembers what it learns and gets more capable over time, with a multi-level memory system and persistent dedicated machine access. https://t.co/Xe55wBbUuo
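The Hermes post describes a multi-level memory system with persistence across sessions. A minimal sketch of that idea, assuming nothing about the actual Hermes implementation: a fast in-session scratchpad plus a long-term store that survives restarts (a JSON file here stands in for the agent's dedicated machine; the class and method names are hypothetical).

```python
import json
import os
import tempfile

class AgentMemory:
    """Two-level memory: per-run scratchpad + persistent long-term store."""

    def __init__(self, path):
        self.path = path
        self.session = []                 # short-term: cleared every run
        if os.path.exists(path):
            with open(path) as f:
                self.long_term = json.load(f)
        else:
            self.long_term = {}

    def note(self, text):
        # Session-level observation; discarded when the run ends.
        self.session.append(text)

    def learn(self, key, value):
        # Promote a durable lesson to long-term memory and flush to disk.
        self.long_term[key] = value
        with open(self.path, "w") as f:
            json.dump(self.long_term, f)

    def recall(self, key):
        return self.long_term.get(key)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
run1 = AgentMemory(path)
run1.note("user prefers make over npm scripts")
run1.learn("deploy_cmd", "make deploy")

# A fresh session on the same machine still remembers the promoted lesson,
# while the scratchpad starts empty:
run2 = AgentMemory(path)
lesson = run2.recall("deploy_cmd")
```

The design choice being illustrated: only deliberately promoted facts persist, so the long-term store accumulates capability over time without dragging every transient observation along.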
Most language models only read forward. Perplexity just open-sourced 4 models that read text in both directions. They used a technique from image generation to retrain Qwen3 so every word can see every other word in a passage. That changes how well a model understands meaning. They built four models from this: 1. Two sizes: 0.6B and 4B parameters 2. Two types: standard search embeddings and context-aware embeddings The context-aware version is the interesting one. It processes an entire document at once, so each small chunk "knows" what the full document is about. Standard embeddings treat each chunk in isolation. > Tops benchmarks for models of similar size > Works in multiple languages out of the box > MIT licensed, free for commercial use If you're building search over large document collections, you can now get document-level understanding without running a massive model. Small enough to actually deploy.
https://t.co/Li76j2d3L3
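The contrast described above, between isolated chunk embeddings and context-aware ones, can be sketched in a few lines. This toy uses hash-derived vectors as a stand-in for a real encoder, and blends each token with the document-wide mean as a crude stand-in for bidirectional attention; none of this is Perplexity's actual model.

```python
import hashlib
import numpy as np

def token_vec(tok):
    # Deterministic pseudo-embedding for a token (toy stand-in for an encoder).
    h = hashlib.sha256(tok.encode()).digest()
    return np.frombuffer(h, dtype=np.uint32).astype(np.float64) / 2**32

def standard_chunk_embedding(chunk):
    # Each chunk embedded in isolation: no knowledge of the rest of the doc.
    return np.mean([token_vec(t) for t in chunk.split()], axis=0)

def contextual_chunk_embeddings(doc_chunks, alpha=0.5):
    # Encode the whole document at once: every token vector is blended with
    # the document-wide mean, so each chunk "knows" what the doc is about.
    all_tokens = [t for c in doc_chunks for t in c.split()]
    doc_mean = np.mean([token_vec(t) for t in all_tokens], axis=0)
    out = []
    for chunk in doc_chunks:
        vecs = [(1 - alpha) * token_vec(t) + alpha * doc_mean
                for t in chunk.split()]
        out.append(np.mean(vecs, axis=0))
    return out

doc_a = ["the bank raised rates", "loans got pricier overnight"]
doc_b = ["the bank raised rates", "the river flooded the banks anyway"]

# Same chunk, identical standard embedding regardless of document:
same = np.allclose(standard_chunk_embedding(doc_a[0]),
                   standard_chunk_embedding(doc_b[0]))

# Same chunk, different contextual embedding in each document:
ctx_a = contextual_chunk_embeddings(doc_a)[0]
ctx_b = contextual_chunk_embeddings(doc_b)[0]
differ = not np.allclose(ctx_a, ctx_b)
```

The point of the toy: identical text produces identical standard embeddings everywhere, while the context-aware version disambiguates it by the surrounding document, which is exactly what retrieval over large collections wants.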
Imbue just open-sourced Evolver. A tool that uses LLMs to automatically optimize code and prompts. They hit 95% on ARC-AGI-2 benchmarks. That's GPT-5.2-level performance from an open model. Evolver works like natural selection for code. You give it three things: 1. Starting code or prompt 2. A way to score results 3. An LLM that suggests improvements Then it runs in a loop. It picks high-scoring solutions. Mutates them. Tests the mutations. Keeps what works. The key difference from random mutation: LLMs propose targeted fixes. When a solution fails on specific inputs, the LLM sees those failures. It suggests changes to fix them. Most suggestions don't help. But some do. Those survivors become parents for the next generation. Evolver adds smart optimizations: > Batch mutations: fix multiple failures at once > Learning logs: share discoveries across branches > Post-mutation filters: skip bad mutations before scoring The verification step alone cuts costs 10x. This works on any problem where LLMs can read the code and you can score the output. You can now auto-optimize: - Agentic workflows - Prompt templates - Code performance - Reasoning chains No gradient descent needed. No differentiable functions required.
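The loop described above (score, select, mutate with failure-aware fixes, repeat) can be sketched concretely. This toy evolves a string toward a target; the `mutate` function is a stand-in for the LLM mutator, in that it sees *where* the candidate fails and proposes a targeted edit at a failing position rather than a random one. It is an illustration of the selection loop, not Imbue's actual Evolver code.

```python
import random

TARGET = "hello world"

def score(candidate):
    # Fitness: number of positions matching the target (the "way to score").
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rng):
    # Stand-in for the LLM mutator: it is shown the failing positions and
    # proposes a fix at one of them. A real LLM would propose a plausible
    # repair; here we just guess a character.
    failures = [i for i, (a, b) in enumerate(zip(candidate, TARGET)) if a != b]
    if not failures:
        return candidate
    i = rng.choice(failures)
    c = rng.choice("abcdefghijklmnopqrstuvwxyz ")
    return candidate[:i] + c + candidate[i + 1:]

def evolve(seed="aaaaaaaaaaa", pop_size=8, generations=1000, rng=None):
    rng = rng or random.Random(0)
    population = [seed] * pop_size
    for _ in range(generations):
        # Select: high scorers survive and become parents.
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]
        # Mutate: children are targeted edits of the parents.
        population = parents + [mutate(p, rng) for p in parents]
        if score(max(population, key=score)) == len(TARGET):
            break
    return max(population, key=score)

best = evolve()
```

Most mutations don't help, and selection discards them; the targeted failure feedback is what makes this converge far faster than blind random search. The post's batching, learning logs, and pre-scoring filters are optimizations layered on top of this same loop.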
"AI is not in a bubble, because you are fundamentally automating the boring part of businesses like accounting or billing or product design or delivery, or inventory. If anything it is underhyped" ~ Former Google CEO Eric Schmidt https://t.co/dnWdNJ5ffd
Latest newsletter with 3 world models, the moltbook analyses, a bunch of iso cities, a not bad web design gen site, m2-her roleplaying, a fun painting-to-blender paper, a concept for plot description I didn't know about, and more... (games, useful document models, some claude-ing) 1/2 https://t.co/guk3M7hqQc