Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
Agentic_wooz
@Agentic_wooz
📅
Apr 15, 2026
8d ago
🆔46434088

This dataset was crafted with a fine-tuned @NousResearch Hermes 4.3 36B model run on an RTX 6000 Blackwell Server Edition. (We simply love @NousResearch, but this by no means signifies a partnership.) PMI-relevant results: 60.6% TruthfulQA (Delta: +11.7% vs Qwen3.5-4B) & 71.5% HellaSwag on a fine-tuned 4B model.

Media 1
🖼️ Media
charliejhills
@charliejhills
📅
Apr 15, 2026
8d ago
🆔60424417

Stanford just tested whether LexisNexis and Thomson Reuters' AI legal research tools are really "hallucination-free," as they claim. Spoiler: not even close. Here's what the study found. https://t.co/lb2CekFeWn

Media 1
🖼️ Media
๐Ÿ”johnrobinsn retweeted
S
spark
@sparkjsdev
๐Ÿ“…
Apr 14, 2026
9d ago
๐Ÿ†”82816449
โญ0.32

Spark 2.0 is here! ๐Ÿš€ Weโ€™re redefining whatโ€™s possible on the web with a streamable LoD system for 3D Gaussian Splatting. Built on Three.js, you can now stream massive 100M+ splat worlds to any device from mobile to VR using WebGL2. All open-source. Dive into the tech ๐Ÿ‘‡ https://t.co/VOd6V0Wz1s

โค๏ธ1,504
likes
๐Ÿ”221
retweets
xuanchi13
@xuanchi13
📅
Apr 15, 2026
8d ago
🆔18878229

Thanks @_akhaliq for sharing our work! Try it out and put your robot in today at: https://t.co/i5HkcE1dqs https://t.co/A1P8QRsMhQ

@_akhaliq • Wed Apr 15 15:57

Nvidia released Lyra 2.0 on Hugging Face: Explorable Generative 3D Worlds. Paper: https://t.co/HcxsBD2yEh Model: https://t.co/bC32ADfvDS https://t.co/RwdR7DUEcY

Media 2
🖼️ Media
demishassabis
@demishassabis
📅
Apr 16, 2026
8d ago
🆔90010217
⭐0.38

Our most expressive and steerable TTS model yet! Designed to give builders granular control over AI-generated speech, Gemini 3.1 Flash TTS is really fun to play with! Available in preview today - for devs via the Gemini API & @GoogleAIStudio + for enterprises on Vertex AI

@OfficialLoganK • Wed Apr 15 16:07

Introducing Gemini 3.1 Flash TTS 🗣️, our latest text-to-speech model with scene direction, speaker-level specificity, audio tags, more natural + expressive voices, and support for 70 different languages. Available via our new audio playground in AI Studio and in the Gemini API!

obviyus
@obviyus
📅
Apr 11, 2026
13d ago
🆔51671867

OpenClaw now has end-to-end testing for Telegram 👀 Uses the brand-new Telegram bot-to-bot communication mode: https://t.co/RDEj1i0zwa 🦞 https://t.co/LWSn4PCr6N

Media 2
🖼️ Media
Rigario
@Rigario
📅
Apr 02, 2026
21d ago
🆔84314311

Many are running @NousResearch Hermes Agent now. Here are some practical tips that help a lot, especially if you're coming from OpenClaw:

1. Nightly skill evolution is worth setting up. Link: https://t.co/qPs3QPIYgW Pro tip: add a second cronjob to evaluate the changes so you don't have to. Make sure it stops anything that tries to game the optimization loop.

2. Install Honcho if you're hitting memory issues. It gives proper cross-session recall, memory synthesis, and better long-term storage. Helps avoid repeating the same mistakes or pulling too much context (and wasting tokens).

3. Consider changing the default session timeout and expiry. Especially useful for threads you don't use every day; it prevents the agent from losing context unnecessarily.

For those migrating from OpenClaw:

4. Expose your OpenClaw agents as OpenAI-compatible endpoints. This lets you run both side by side with zero disruption while you transition. Hermes can call them directly, and your existing crons keep working.

5. On day one, start populating your USER.md and MEMORY.md files. Note for OC users: Hermes has a much smaller character limit than OpenClaw (2,200 for memory and 1,375 for user), so populate and curate thoughtfully; don't just dump everything in. Quality over quantity helps it learn you faster.

Hermes works especially well once you integrate it properly into your workflows. Last tip: don't start changing your skin till your agents are actually doing work. You might never stop and go down the rabbit hole... 🤣

Media 1 · Media 2
🖼️ Media
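Tip 4 above (exposing OpenClaw agents as OpenAI-compatible endpoints so Hermes can call them directly) can be sketched in a few lines. This is a minimal, hypothetical sketch: the URL, port, and model name are illustrative assumptions, not documented OpenClaw defaults, and the request is only constructed here, not sent.

```python
# Hypothetical sketch of tip 4: once an OpenClaw agent is exposed as an
# OpenAI-compatible endpoint, any client (including Hermes) can reach it
# with a standard chat-completions request. URL and model name are assumed.
import json
import urllib.request

OPENCLAW_URL = "http://localhost:8080/v1/chat/completions"  # assumed local port

def build_request(prompt: str, model: str = "openclaw-agent") -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completions POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENCLAW_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Only builds the request; sending it would need a running endpoint.
req = build_request("Summarize yesterday's cron output")
```

Because the wire format is the stock chat-completions shape, existing crons and clients keep working against either backend during the migration.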
eranshir
@eranshir
📅
Apr 16, 2026
7d ago
🆔40388611

Most Physical AI models recognize patterns. They don't understand the world. That's why they fail on edge cases. BADAS 2.0 is a V-JEPA2 world model trained by @getnexar on real-world videos. We used the model to find what it didn't understand, then trained on that. It generalizes. And we built lite versions so it runs on edge devices, even CPU. Understanding is the only way this scales. See how it performs on your own videos. Link in first comment.

🖼️ Media
Mid0
@Mid0
📅
Apr 11, 2026
13d ago
🆔99227381

Was able to use ollama with qwen3 14b locally on my laptop to troubleshoot an internet connectivity issue on cruise-ship wifi @ollama @Alibaba_Qwen Found the answer in < 5 mins. I'm running it on a MacBook Air M4 24GB. You all should keep a model handy locally to use https://t.co/xM4LOhzHj6

Media 1
🖼️ Media
friesmakesfries
@friesmakesfries
📅
Apr 16, 2026
7d ago
🆔41802481

hermes agent @NousResearch is fucking insane i know literally NOTHING about coding. ZERO. and i just built a fully functioning web app in minutes http://localhost:3000/ check it out @Teknium https://t.co/H0uvfhoNX5

Media 1
🖼️ Media
0xjiawei
@0xjiawei
📅
Apr 16, 2026
8d ago
🆔98113645
⭐0.42

I've been using @NousResearch Hermes Agent for a while and here are the slash commands that I found super useful:

> Self-improving loop
/skills → browse & install new capabilities mid-session
/reasoning → dial thinking depth (none → xhigh) per task
/model → hot-swap models without restarting

> Power moves
/background "research X" → spawn parallel agent
/queue "also check Y" → stack tasks for next turn
/btw "quick question" → ephemeral Q&A, burns no context
/branch → fork session to explore a different path

> Context management
/compress → manually summarize & free up tokens
/snapshot → save/restore full agent state
/rollback → undo filesystem changes like git

MayukhBagchi4
@MayukhBagchi4
📅
Apr 16, 2026
7d ago
🆔43508155

@Teknium @MiniMax_AI Just shipped a Hermes Agent skill with HuskyLens V2 on a Pi 5: gesture control, face recognition, emotion reading, all local. MiniMax M2.7 as the brain would be wild for this. What's the target hardware? https://t.co/UuBtK7YIyO

Media 1
🖼️ Media
whoiskatrin
@whoiskatrin
📅
Apr 16, 2026
7d ago
🆔40225228

cloudflare just gave agents git. this is one of those changes that will just quietly improve everything: agents with proper version control. @dillon_mulroy @elithrar @mattzcarey @thomas_ankcorn have done something incredible here https://t.co/4dFPie896A

Media 1
🖼️ Media
github
@github
📅
Apr 16, 2026
7d ago
🆔81161744

🆕 @AnthropicAI's Claude Opus 4.7 is now generally available and rolling out in GitHub Copilot. Early testing shows:
➡️ Stronger multi-step task performance and more reliable agentic execution
➡️ Meaningful improvement in long-horizon reasoning and complex workflows
Try it out in @code or Copilot CLI. https://t.co/8QFLkf0RqR

🖼️ Media
RohOnChain
@RohOnChain
📅
Apr 11, 2026
12d ago
🆔83786812
⭐0.44

This 2-hour Stanford lecture shows exactly how Stanford trains its engineers to build AI systems. It's more practical than every Claude tutorial & prompting thread you've seen. Bookmark it & give it 2 hours, no matter what. It'll be the most productive thing you do this weekend. https://t.co/L0poFGEYKe

kr0der
@kr0der
📅
Apr 11, 2026
12d ago
🆔79744845

did you know that the Codex app shows you PR statuses? i.e. CI checks, PR comments, merge status, review status 👀 https://t.co/saguG9YZ4L

Media 1
🖼️ Media
coreyching
@coreyching
📅
Apr 16, 2026
7d ago
🆔18403871

We shipped a bunch of new features in Codex today, but I'm especially excited about plugins. So many new plugins just landed that make Codex even more powerful! https://t.co/I4kVFpm3G3

🖼️ Media
SathinLale
@SahinLale
📅
Apr 16, 2026
7d ago
🆔05174399
⭐0.36

60 tokens/s, 8B 🌳 model, in the browser 🤗 Huge thanks to @mervenoyann @xenovacom and the @huggingface team for Day-0 support!

@mervenoyann • Thu Apr 16 18:13

new open-source Bonsai models are out 🔥
> ternary weights in 8B (1.75 GB), 4B (0.86 GB), and 1.7B (0.37 GB)
> comes in MLX, ONNX weights and a WebGPU browser demo 😍
> apache-2.0 licensed 👍
https://t.co/jo5jbb79dW

UnslothAI
@UnslothAI
📅
Apr 10, 2026
13d ago
🆔60796991

Google DeepMind is hosting a Gemma 4 hackathon with a $10,000 Unsloth prize! 🦥 Show off your best fine-tuned Gemma 4 model built with Unsloth. There's $200,000 in total prizes to be won. Challenge info + notebook: https://t.co/HndHPaXICT https://t.co/cBnNro1fVI

Media 1
🖼️ Media
maruchikim
@maruchikim
📅
Apr 16, 2026
7d ago
🆔30225740

what if your earbuds could see? meet VueBuds: tiny cameras at each ear, streaming low-resolution images over Bluetooth, with an on-device vision language model to understand what's around you. capstone of my PhD 🎓 #CHI2026 honorable mention https://t.co/Go72NzVEb7

🖼️ Media
thsottiaux
@thsottiaux
📅
Apr 16, 2026
7d ago
🆔04196753
⭐0.32

Codex
Compute efficient ✅
Always up, never down ✅
Best at hardcore engineering ✅
Crazy good app, first to escape the terminal ✅

lowesyang
@lowesyang
📅
Apr 12, 2026
11d ago
🆔17862867
⭐0.40

I'm frequently using Claude Code and I love @openclaw. But I would say Hermes Agent from @NousResearch is the best open-source agent I've ever used, especially given that it comes from an independent startup rather than a major LLM giant.

dkundel
@dkundel
📅
Apr 16, 2026
7d ago
🆔54278983
⭐0.32

Because computer use in Codex doesn't take over your own cursor, Codex can work in the background and you can truly cursor-max! 🔥

@priyashah_ • Thu Apr 16 20:13

brb, cursor-maxxing https://t.co/gryiF1MCik

ajambrosino
@ajambrosino
📅
Apr 16, 2026
7d ago
🆔14973711
⭐0.34

Even this thread was done in Codex: Slack + Google Drive plugins to learn about the release and gather visuals. Notion plugin to draft the post content and run the drafts by me. Computer Use to put it into x dot com and upload the videos. Automations to tell me how it's going for the few hours after. Started it only 8 minutes before the launch.

@ajambrosino • Thu Apr 16 17:14

Big Codex update today. Codex can now work across more of your computer, more of your tools, and longer-running projects. It started as a coding agent. It's becoming a teammate for the whole software loop.

sean_t_strong
@sean_t_strong
📅
Apr 16, 2026
7d ago
🆔45390042
⭐0.46

@emollick Hey Ethan! Sean here, PM on https://t.co/KZTlPpbqBQ - thanks for the feedback. This isn't a router; this is the model being trained to decide when to think based on the context -- we've been running this for a while in Sonnet 4.6 in https://t.co/KZTlPpbqBQ as well as Claude Code. Understood that it's not tuned perfectly in https://t.co/3Rk7wAMA7D yet - we're sprinting on tuning this more internally and should have some updates here shortly. Feel free to DM us examples of queries where you expected thinking and didn't see it.

bcherny
@bcherny
📅
Apr 16, 2026
7d ago
🆔35156457
⭐0.32

Dogfooding Opus 4.7 for the last few weeks, I've been feeling incredibly productive. Sharing a few tips to get more out of 4.7 🧵

Mid0
@Mid0
📅
Apr 17, 2026
7d ago
🆔83351487
⭐0.32

@theo Works when you trigger ultrathink mode (I know they deprecated it), but somehow reasoning effort is higher now, like xHigh. Might be a bug…

AkwyZ
@AkwyZ
📅
Apr 15, 2026
8d ago
🆔87065813
⭐0.44

The Strange Origin of AI's 'Reasoning' Abilities https://t.co/lXyZw8U4u4 #TechNews @ArturHabant @elaniazito @IanLJones98 @CurieuxExplorer @Shi4Tech @enilev @Fabriziobustama @mvollmer1 @AnthonyRochand @JolaBurnett @lyakovet @debashis_dutta @3itcom @ahier @Analytics_699 @antgrasso @CathCervoni @chidambara09 @DigitalColmer @dinisguarda @DimitriHommel @EvanKirstel @FrRonconi @GlenGilmore @gvalan @HeinzVHoenen @ipfconline1 @jeancayeux @jorgecunha @kalydeoo @nafisalam @Nicochan33 @pierrepinna @PawlowskiMario @puneetsinghal22 @ralph_ohr @RLDI_Lamy @rshevlin @sarbjeetjohal @SpirosMargaris @StefanoDeCupis @tewoz @thomas_dettling @Ym78200 @aure79lien @jblefevre60

GOROman
@GOROman
📅
Apr 16, 2026
7d ago
🆔64152187

The macOS version of the Codex Desktop app just got the "Computer Use" feature, so I'm installing it and trying it out. https://t.co/xBU891BSUa

Media 1
🖼️ Media
SathvikBil
@SathvikBil
📅
Apr 17, 2026
7d ago
🆔33952756
⭐0.36

THREAD 1/7 Every AI benchmark a lab bragged about this year is compromised. not because labs are cheating. because the game itself is broken.

kenziyuliu
@kenziyuliu
📅
Apr 15, 2026
8d ago
🆔41794496

Sharing a super simple, user-owned memory module we've been playing around with: nanomem

The basic idea is to treat memory as a pure intelligence problem: ingestion, structuring, and (selective) retrieval are all just LLM calls & agent loops on an on-device markdown file tree. Each file lists a set of facts w/ metadata (timestamp, confidence, source, etc.); no embeddings/RAG/training of any kind. For example:
- `nanomem add <fact>` starts an agent loop to walk the tree, read relevant files, and edit.
- `nanomem retrieve <query>` walks the tree and returns a single summary string (possibly assembled from many subtrees) related to the query.

What's nice about this approach is that the memory system is, by construction:
1. partitionable (humans/agents can easily separate `hobbies/snowboard.md` from `tax/residency.md` for data minimization + relevance)
2. portable and user-owned (it's just text files)
3. interpretable (you know exactly what's written and you can manually edit)
4. forward-compatible (future models can read memory files just the same, and memory quality/speed improves as models get better)
5. modularized (you can optimize ingestion/retrieval/compaction prompts separately)

Privacy & utility. I'm most excited about the ability to partition + selectively disclose memory at inference time. Selective disclosure helps with both privacy (principle of least privilege & "need-to-know") and utility (as too much context for a query can harm answer quality).

Composability. An inference-time memory module means: (1) you can run such a module with confidential inference (LLMs on TEEs) for provable privacy, and (2) you can selectively disclose context over unlinkable inference of remote models (demo below).

We built nanomem as part of the Open Anonymity project (https://t.co/fO14l5hRkp), but it's meant to be a standalone module for humans and agents (e.g., you can write a SKILL for using the CLI tool). Still polishing the rough edges!

- GitHub (MIT): https://t.co/YYDCk5sIzc
- Blog: https://t.co/pexZTFdWzz
- Beta implementation in chat client soon: https://t.co/rsMjL3wzKQ

Work done with amazing project co-leads @amelia_kuang @cocozxu @erikchi !!

Media 2
🖼️ Media
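The `add`/`retrieve` loop described above can be sketched in a few lines of Python. This is an illustrative approximation, not the real nanomem code: the actual module drives ingestion and retrieval with LLM calls and agent loops, while this sketch substitutes a plain keyword scan and a fixed fact-line format (`ts`, `conf` metadata) purely to show the markdown file-tree shape.

```python
# Toy sketch of a nanomem-style memory tree: facts live as bullet lines with
# metadata inside markdown files; retrieval assembles a single summary string.
# A keyword match stands in for the real LLM-driven retrieval agent.
import datetime
import pathlib

ROOT = pathlib.Path("memory")  # assumed root of the markdown file tree

def add(topic: str, fact: str, confidence: float = 0.9) -> None:
    """Append a fact line with metadata to memory/<topic>.md."""
    path = ROOT / f"{topic}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {fact} (ts: {stamp}, conf: {confidence})\n")

def retrieve(query: str) -> str:
    """Walk the tree and join matching fact lines into one summary string."""
    hits = []
    for md in sorted(ROOT.rglob("*.md")):
        for line in md.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append(line.lstrip("- "))
    return "; ".join(hits)

add("hobbies/snowboard", "Prefers powder days at resorts near the city")
print(retrieve("powder"))
```

Because memory is just text files, the partitionability and interpretability properties listed in the post fall out for free: you can hand an agent only `hobbies/` without ever exposing `tax/`.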
HeyGen
@HeyGen
📅
Apr 16, 2026
7d ago
🆔60871072

We built our launch video in Claude Code using HyperFrames. Now it's yours. Open-source, agent-native framework. HTML to MP4. $ npx skills add heygen-com/hyperframes RT + comment "HyperFrames" to get the full source code of this launch video (must follow) https://t.co/vsRtZ6gQsb

🖼️ Media