Your curated collection of saved posts and media

Showing 24 posts · last 30 days · by score
dev_talk (@dev_talk) · Feb 21, 2026 · 23d ago · ID 16248920

Every Company Building Your AI Assistant Is Now an Ad Company https://t.co/pSBtV7nfDZ #devtalk

Media: 1 attachment
๐Ÿ”johnrobinsn retweeted
D
devtalk
@dev_talk
๐Ÿ“…
Feb 21, 2026
23d ago
๐Ÿ†”16248920

Every Company Building Your AI Assistant Is Now an Ad Company https://t.co/pSBtV7nfDZ # #devtalk

Media 1
โค๏ธ1
likes
๐Ÿ”1
retweets
๐Ÿ–ผ๏ธ Media
noahzweben (@noahzweben) · Feb 24, 2026 · 20d ago · ID 05271615

Announcing a new Claude Code feature: Remote Control. It's rolling out now to Max users in research preview. Try it with /remote-control. Start local sessions from the terminal, then continue them from your phone. Take a walk, see the sun, walk your dog without losing your flow.

Media attached
Reza_Zadeh (@Reza_Zadeh) · Feb 16, 2026 · 28d ago · ID 68961890

I miss this kind of forced compression in tweets. High communication bandwidth is the essence of intellectual transfer. https://t.co/QsHVceIYYT

Media: 1 attachment
illustrata_ai (@illustrata_ai) · Jan 22, 2026 · 53d ago · ID 71690053

when it still felt like magic https://t.co/o6RRnC6DVJ

Media: 1 attachment
illustrata_ai (@illustrata_ai) · Jan 25, 2026 · 50d ago · ID 30469550

thinking about summer days ❄️ https://t.co/I9Bd2waQuK https://t.co/jNpgDmtWt7

Media attached
alexisgallagher (@alexisgallagher) · Feb 23, 2026 · 20d ago · ID 89948784

It's been just delightful to discover that I can now use old tricks from improv (like animating a character with specific details and controlled randomness) in order to make AI more alive. Longer post with prompts etc: https://t.co/AbB9luG1y8

Media: 1 attachment
dyushag (@dyushag) · Feb 24, 2026 · 20d ago · ID 69536862

Haha my Claude Code decided to cheat..... https://t.co/69QA9UvOk3

Media: 1 attachment
johnowhitaker (@johnowhitaker) · Feb 24, 2026 · 20d ago · ID 56193090

Very cool that you can paste shadertoy code and have it automatically ported! And the inspect tool let me poke at the internals of this SIREN network in a way I never thought to before, fun to see how the final output builds up through the layers :D https://t.co/uOhJXQcS8p https://t.co/wKGRzRLi3W

Quoted: @XorDev · Fri Feb 13 02:02

Introducing FragCoord: My ultimate shader editing tool! https://t.co/ZdGm8992iZ

Media: 5 attachments
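The layer-by-layer build-up mentioned above can be sketched with a toy SIREN: an MLP with sine activations, often used to represent an image as a function of pixel coordinates. Everything here (layer sizes, the omega = 30 frequency, the init bounds) follows the standard SIREN recipe but is purely illustrative; it is not the network from the linked demo.

```python
import numpy as np

rng = np.random.default_rng(0)
OMEGA = 30.0  # standard SIREN frequency scale

def siren_layer(x, w, b, omega=OMEGA):
    # Each SIREN layer applies sin(omega * (W x + b)).
    return np.sin(omega * (x @ w + b))

def init_layer(fan_in, fan_out, first=False):
    # SIREN-style init: first layer uniform in +/- 1/fan_in,
    # later layers +/- sqrt(6/fan_in)/omega to keep activations stable.
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / OMEGA
    return rng.uniform(-bound, bound, (fan_in, fan_out)), np.zeros(fan_out)

# 2 -> 16 -> 16 -> 1 network over (x, y) pixel coordinates.
layers = [init_layer(2, 16, first=True), init_layer(16, 16), init_layer(16, 1)]

def forward(coords):
    """Return the output plus every intermediate activation,
    so you can watch the image build up layer by layer."""
    acts, h = [], coords
    for w, b in layers[:-1]:
        h = siren_layer(h, w, b)
        acts.append(h)
    w, b = layers[-1]
    out = h @ w + b          # linear output layer, no sine
    acts.append(out)
    return out, acts

# Sample a 4x4 grid of coordinates in [-1, 1]^2.
xs = np.linspace(-1, 1, 4)
coords = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
out, acts = forward(coords)
print([a.shape for a in acts])  # two hidden activation maps, then a 1-channel output
```

Inspecting `acts` per layer is the numpy analogue of poking at the network's internals in an editor.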
johnowhitaker (@johnowhitaker) · Feb 24, 2026 · 20d ago · ID 59570175

Dark breakfast review: really good! Texture reminds me of mupotohayi, improved in this case by crispy bacon and buttery hollandaise. Think I prefer a regular Benedict though. Recipe: 3 eggs, 1/2c flour, 1/4c milk, cook like an omelette. Final egg w/ lemon and butter for sauce. https://t.co/Y0fBjw1DWo

Media: 1 attachment
johnowhitaker (@johnowhitaker) · Feb 24, 2026 · 20d ago · ID 79683651

Inspiration: @moultano's delightful recent post: https://t.co/bfwT0FrlTj (addition of food coloring, bacon, and hollandaise ingredients improvised)

Media: 1 attachment
johnowhitaker (@johnowhitaker) · Feb 27, 2026 · 17d ago · ID 41020268

@ATinyGreenCell Nice! Are these the ones you got from that place which has a bunch of known strains? Mine yesterday, from a few sad fronds last month, magic stuff https://t.co/VodTGLaqio

Media: 1 attachment
johnowhitaker (@johnowhitaker) · Mar 01, 2026 · 15d ago · ID 44201987

Fun with my pipette bot 🧬 https://t.co/ad3nSrXgCy

Media: 1 attachment
johnowhitaker (@johnowhitaker) · Mar 01, 2026 · 15d ago · ID 66566418

@TelepathicPug Ha, I have the same one just at larger volumes. I use a 20–200 µL for the bot ATM, but nothing besides a lack of patience preventing an attempt at much tinier pixel art with this 😂 https://t.co/pB4lQWBRMr

Media: 1 attachment
_inception_ai (@_inception_ai) · Feb 24, 2026 · 20d ago · ID 43409933

Mercury 2 is live. The world's first reasoning diffusion LLM – 5x faster than leading speed-optimized autoregressive models. Built for production: multi-step agents without delays, voice AI with tight latency budgets, instant coding feedback. Diffusion-based generation enables parallel refinement, not sequential tokens. Faster. More controllable. Dramatically lower inference cost. Available today on the Inception API. @dinabass has the story in @business.

Media: 1 attachment
LiorOnAI (@LiorOnAI) · Feb 24, 2026 · 20d ago · ID 38259222

https://t.co/Uy6qo3WzVy

Media: 1 attachment
Alibaba_Qwen (@Alibaba_Qwen) · Feb 24, 2026 · 20d ago · ID 30188939

🚀 Introducing the Qwen 3.5 Medium Model Series: Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.

• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B, a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
  – 1M context length by default
  – Official built-in tools

🔗 Hugging Face: https://t.co/wFMdX5pDjU
🔗 ModelScope: https://t.co/9NGXcIdCWI
🔗 Qwen3.5-Flash API: https://t.co/82ESSpaqAF

Try in Qwen Chat 👇
Flash: https://t.co/UkTL3JZxIK
27B: https://t.co/haKxG4lETy
35B-A3B: https://t.co/Oc1lYSTbwh
122B-A10B: https://t.co/hBMODXmh1o

Would love to hear what you build with it.

Media: 2 attachments
perplexity_ai (@perplexity_ai) · Feb 25, 2026 · 19d ago · ID 71540489

Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end. https://t.co/dZUybl6VkY

Media attached
NousResearch (@NousResearch) · Feb 25, 2026 · 19d ago · ID 07898954

Meet Hermes Agent, the open source agent that grows with you. Hermes Agent remembers what it learns and gets more capable over time, with a multi-level memory system and persistent dedicated machine access. https://t.co/Xe55wBbUuo

Media attached
LiorOnAI (@LiorOnAI) · Feb 27, 2026 · 17d ago · ID 52900129

Most language models only read forward. Perplexity just open-sourced 4 models that read text in both directions.

They used a technique from image generation to retrain Qwen3 so every word can see every other word in a passage. That changes how well a model understands meaning.

They built four models from this:
1. Two sizes: 0.6B and 4B parameters
2. Two types: standard search embeddings and context-aware embeddings

The context-aware version is the interesting one. It processes an entire document at once, so each small chunk "knows" what the full document is about. Standard embeddings treat each chunk in isolation.

> Tops benchmarks for models of similar size
> Works in multiple languages out of the box
> MIT licensed, free for commercial use

If you're building search over large document collections, you can now get document-level understanding without running a massive model. Small enough to actually deploy.

Media: 1 attachment
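The context-aware chunking idea above can be sketched in a few lines. The hash-based stub encoder below is a stand-in for the real Perplexity/Qwen3 model (the post doesn't name the actual API); only the structure matters: encode the whole document once, then mean-pool each chunk's token vectors, so every chunk vector carries document-level signal.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
_vocab = {}

def token_vec(tok):
    # Random but stable per-token vectors (a stand-in for learned embeddings).
    if tok not in _vocab:
        _vocab[tok] = rng.normal(size=DIM)
    return _vocab[tok]

def encode_document(tokens):
    """Stub 'bidirectional' encoder: blend each token's embedding with the
    document-wide mean, so every position carries global context.
    (A real model would use full self-attention over the whole input.)"""
    embs = np.stack([token_vec(t) for t in tokens])
    return 0.5 * embs + 0.5 * embs.mean(axis=0)

def contextual_chunks(tokens, chunk_size):
    """Context-aware: encode the whole document once, then pool per chunk."""
    enc = encode_document(tokens)
    return [enc[i:i + chunk_size].mean(axis=0)
            for i in range(0, len(tokens), chunk_size)]

def isolated_chunks(tokens, chunk_size):
    """Baseline: encode each chunk alone, blind to the rest of the document."""
    return [encode_document(tokens[i:i + chunk_size]).mean(axis=0)
            for i in range(0, len(tokens), chunk_size)]

doc = "the bank raised interest rates after the central bank meeting".split()
ctx = contextual_chunks(doc, 4)   # 3 chunk vectors, each carrying doc context
iso = isolated_chunks(doc, 4)     # 3 chunk vectors, each seen in isolation
print(len(ctx), ctx[0].shape)
```

Because pooling happens over the document-level encoding, sibling chunks share signal, which is what lets a small chunk "know" what the rest of the document is about.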
LiorOnAI (@LiorOnAI) · Feb 27, 2026 · 17d ago · ID 59849725

https://t.co/Li76j2d3L3

Media: 1 attachment
LiorOnAI (@LiorOnAI) · Feb 28, 2026 · 16d ago · ID 52119603

Imbue just open-sourced Evolver, a tool that uses LLMs to automatically optimize code and prompts. They hit 95% on ARC-AGI-2 benchmarks. That's GPT-5.2-level performance from an open model.

Evolver works like natural selection for code. You give it three things:
1. Starting code or prompt
2. A way to score results
3. An LLM that suggests improvements

Then it runs in a loop: it picks high-scoring solutions, mutates them, tests the mutations, and keeps what works.

The key difference from random mutation: LLMs propose targeted fixes. When a solution fails on specific inputs, the LLM sees those failures and suggests changes to fix them. Most suggestions don't help, but some do. Those survivors become parents for the next generation.

Evolver adds smart optimizations:
> Batch mutations: fix multiple failures at once
> Learning logs: share discoveries across branches
> Post-mutation filters: skip bad mutations before scoring

The verification step alone cuts costs 10x.

This works on any problem where LLMs can read the code and you can score the output. You can now auto-optimize:
- Agentic workflows
- Prompt templates
- Code performance
- Reasoning chains

No gradient descent needed. No differentiable functions required.

Media: 1 attachment
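The select/mutate/filter loop described above can be sketched in a few lines. The mutator here is a random character edit standing in for the LLM that would propose targeted fixes, and the scoring function is a toy string-matching objective; the names and loop shape are illustrative assumptions, not Imbue's actual API.

```python
import random

random.seed(0)
TARGET = "print('hello')"            # toy "program" we want to evolve toward
CHARS = "abcdefghijklmnopqrstuvwxyz()' _"

def score(candidate: str) -> int:
    # Toy objective: how many positions match the target program.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Stand-in for the LLM step: tweak one random character. A real system
    # would show the LLM the failing cases and ask for a targeted edit.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(CHARS) + candidate[i + 1:]

def evolve(seed: str, generations: int = 2000, pop_size: int = 8) -> str:
    population = [seed] * pop_size
    for _ in range(generations):
        # Select: keep the highest-scoring half as parents.
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]
        # Mutate, then filter: a child survives only if it beats its parent
        # (the "post-mutation filter" that skips bad mutations).
        children = [max(p, mutate(p), key=score) for p in parents]
        population = parents + children
    return max(population, key=score)

best = evolve("x" * len(TARGET))
print(best, score(best), "/", len(TARGET))
```

Swapping `mutate` for an LLM call that sees the failing inputs is what turns this blind hill-climb into Evolver's targeted search.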
rohanpaul_ai (@rohanpaul_ai) · Jan 31, 2026 · 44d ago · ID 77208564

"AI is not in a bubble, because you are fundamentally automating the boring part of businesses like accounting or billing or product design or delivery, or inventory. If anything it is underhyped" ~ Former Google CEO Eric Schmidt https://t.co/dnWdNJ5ffd

Media attached
arnicas (@arnicas) · Feb 01, 2026 · 43d ago · ID 62963636

Latest newsletter with 3 world models, the moltbook analyses, a bunch of iso cities, a not bad web design gen site, m2-her roleplaying, a fun painting-to-blender paper, a concept for plot description I didn't know about, and more... (games, useful document models, some claude-ing) 1/2 https://t.co/guk3M7hqQc

Media: 1 attachment