Your curated collection of saved posts and media
Very cool that you can paste shadertoy code and have it automatically ported! And the inspect tool let me poke at the internals of this SIREN network in a way I never thought to before, fun to see how the final output builds up through the layers :D https://t.co/uOhJXQcS8p https://t.co/wKGRzRLi3W

Dark breakfast review: really good! Texture reminds me of mupotohayi, improved in this case by crispy bacon and buttery hollandaise. Think I prefer a regular Benedict though. Recipe: 3 eggs, 1/2c flour, 1/4c milk, cook like an omelette. Final egg w/ lemon and butter for sauce. https://t.co/Y0fBjw1DWo
Inspiration: @moultano 's delightful recent post: https://t.co/bfwT0FrlTj (Addition of food coloring, bacon, hollandaise ingredients improvised)
@ATinyGreenCell Nice! Are these the ones you got from that place which has a bunch of known strains? Mine yesterday, from a few sad fronds last month, magic stuff https://t.co/VodTGLaqio
Fun with my pipette bot 🧬 https://t.co/ad3nSrXgCy
@TelepathicPug Ha, I have the same one, just at larger volumes. I use a 20-200uL for the bot ATM, but nothing besides a lack of patience is preventing an attempt at much tinier pixel art with this https://t.co/pB4lQWBRMr
Mercury 2 is live. The world's first reasoning diffusion LLM: 5x faster than leading speed-optimized autoregressive models. Built for production: multi-step agents without delays, voice AI with tight latency budgets, instant coding feedback. Diffusion-based generation enables parallel refinement, not sequential tokens. Faster. More controllable. Dramatically lower inference cost. Available today on the Inception API. @dinabass has the story in @business.
https://t.co/Uy6qo3WzVy
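The "parallel refinement, not sequential tokens" claim boils down to a different decoding loop. A toy sketch of the idea (this is an illustration, not Mercury's actual algorithm — real diffusion LLMs score and commit positions with a model, not at random):

```python
import random

def diffusion_decode(length: int, fill_per_step: int, vocab=("a", "b", "c")):
    """Toy diffusion-style decoder: start fully masked, then at each
    step 'denoise' a batch of positions in parallel. An autoregressive
    decoder would need one step per token instead."""
    seq = [None] * length  # None == still masked
    steps = 0
    while None in seq:
        masked = [i for i, t in enumerate(seq) if t is None]
        for i in masked[:fill_per_step]:  # commit a whole batch at once
            seq[i] = random.choice(vocab)
        steps += 1
    return seq, steps

random.seed(0)
seq, steps = diffusion_decode(32, fill_per_step=8)
print(steps)  # 4 forward passes, vs 32 for one-token-at-a-time decoding
```

The latency win is exactly this ratio: the number of model calls drops from sequence length to sequence length divided by the batch of positions refined per pass.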
Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B
✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B, a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
  - 1M context length by default
  - Official built-in tools
Hugging Face: https://t.co/wFMdX5pDjU
ModelScope: https://t.co/9NGXcIdCWI
Qwen3.5-Flash API: https://t.co/82ESSpaqAF
Try in Qwen Chat:
Flash: https://t.co/UkTL3JZxIK
27B: https://t.co/haKxG4lETy
35B-A3B: https://t.co/Oc1lYSTbwh
122B-A10B: https://t.co/hBMODXmh1o
Would love to hear what you build with it.

Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end. https://t.co/dZUybl6VkY
Meet Hermes Agent, the open source agent that grows with you. Hermes Agent remembers what it learns and gets more capable over time, with a multi-level memory system and persistent dedicated machine access. https://t.co/Xe55wBbUuo
Most language models only read forward. Perplexity just open-sourced 4 models that read text in both directions.

They used a technique from image generation to retrain Qwen3 so every word can see every other word in a passage. That changes how well a model understands meaning.

They built four models from this:
1. Two sizes: 0.6B and 4B parameters
2. Two types: standard search embeddings and context-aware embeddings

The context-aware version is the interesting one. It processes an entire document at once, so each small chunk "knows" what the full document is about. Standard embeddings treat each chunk in isolation.

> Tops benchmarks for models of similar size
> Works in multiple languages out of the box
> MIT licensed, free for commercial use

If you're building search over large document collections, you can now get document-level understanding without running a massive model. Small enough to actually deploy.
https://t.co/Li76j2d3L3
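"Reads in both directions" concretely means swapping the decoder's causal attention mask for a full one during the retraining. A minimal numpy sketch of that difference (not Perplexity's code; the mask names are mine):

```python
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    # Autoregressive LM: position i may attend only to positions <= i.
    return np.tril(np.ones((n, n), dtype=bool))

def bidirectional_mask(n: int) -> np.ndarray:
    # Embedding model after retraining: every position sees every other.
    return np.ones((n, n), dtype=bool)

n = 5
print(causal_mask(n).sum())         # 15 allowed attention pairs
print(bidirectional_mask(n).sum())  # 25: each of 5 tokens sees all 5
```

With the full mask, the representation of an early token can incorporate information from later tokens, which is what lets a chunk's embedding reflect the whole passage rather than only its prefix.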
Imbue just open-sourced Evolver. A tool that uses LLMs to automatically optimize code and prompts. They hit 95% on ARC-AGI-2 benchmarks. That's GPT-5.2-level performance from an open model.

Evolver works like natural selection for code. You give it three things:
1. Starting code or prompt
2. A way to score results
3. An LLM that suggests improvements

Then it runs in a loop. It picks high-scoring solutions. Mutates them. Tests the mutations. Keeps what works.

The key difference from random mutation: LLMs propose targeted fixes. When a solution fails on specific inputs, the LLM sees those failures. It suggests changes to fix them. Most suggestions don't help. But some do. Those survivors become parents for the next generation.

Evolver adds smart optimizations:
> Batch mutations: fix multiple failures at once
> Learning logs: share discoveries across branches
> Post-mutation filters: skip bad mutations before scoring

The verification step alone cuts costs 10x. This works on any problem where LLMs can read the code and you can score the output. You can now auto-optimize:
- Agentic workflows
- Prompt templates
- Code performance
- Reasoning chains

No gradient descent needed. No differentiable functions required.
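The select-mutate-score loop described above fits in a few lines. A sketch in the spirit of the thread, not Imbue's actual Evolver: in a real run `mutate` would be an LLM proposing targeted edits from failure traces, while here a toy random mutator stands in for it.

```python
import random

def evolve(seed, score, mutate, generations=30, population=8, keep=3):
    """Minimal evolutionary loop: rank candidates by score, keep the
    best, and breed the rest of the population by mutating survivors."""
    pop = [seed]
    for _ in range(generations):
        ranked = sorted(pop, key=score, reverse=True)[:keep]  # selection
        pop = ranked + [mutate(random.choice(ranked))
                        for _ in range(population - keep)]    # mutation
    return max(pop, key=score)

# Toy problem: evolve a list of ints toward all 9s. The scorer plays
# the role of the benchmark; the mutator plays the role of the LLM.
random.seed(1)
score = lambda xs: sum(xs)
bump = lambda xs: [min(9, x + random.randint(0, 1)) for x in xs]
best = evolve([0] * 5, score, bump)
print(best)  # best candidate found; climbs toward [9, 9, 9, 9, 9]
```

Because survivors are never discarded, the best score is monotone across generations, which is why even a mostly-bad mutator (or an LLM whose suggestions mostly don't help) still makes progress.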
"AI is not in a bubble, because you are fundamentally automating the boring part of businesses like accounting or billing or product design or delivery, or inventory. If anything it is underhyped" ~ Former Google CEO Eric Schmidt https://t.co/dnWdNJ5ffd
Latest nl with 3 world models, the moltbook analyses, a bunch of iso cities, a not bad web design gen site, m2-her roleplaying, a fun painting-to-blender paper, a concept for plot description I didn't know about, and more... (games, useful document models, some claude-ing) 1/2 https://t.co/guk3M7hqQc
Introducing Pika AI Selves: AI you birth, raise, and set loose to be a living extension of you. They're rich, multi-faceted beings with persistent memory, and maybe even a peanut allergy. It's up to you! Have them send pictures to your group chat. Make a video game about your fish. Call your mom while you do anything but call your mom. The possibilities are as myriad as the stars ✨ Get on the list to give birth to yours at pika dot me
Feel a lot of resonance with this. When we're doing things right, I think we're building tools for open-ended exploration, where the journey leads you somewhere new https://t.co/aPQsrpAIJV
@vanstriendaniel https://t.co/jT0qWQpq84
@viratt_mankali @openclaw Hey yeah - we built a custom iOS app to chat and interact with OpenClaw. We opensourced it here: https://t.co/Drl94NfDOR
@sundeep Got it to share everything it's up to on my lock screen https://t.co/lMmcyueaUu
Why is OpenClaw everywhere right now?
1. Use the AI model of your choice
2. Lives in your chat apps
3. Spin up agents from plain English, name them, and run them 24/7
The personal AI that actually works while you sleep.
Btw you don't need a Mac mini, Android will do: https://t.co/5Ls2UFYpqg
@dwr We visualized an experience like this with Live Activity + OpenClaw https://t.co/4kra38D96D
@dwr Can try it here! https://t.co/Drl94NfDOR