Your curated collection of saved posts and media

Showing 19 posts · last 7 days · quality filtered
🔁 ai_fast_track retweeted
Adina Yakup (@AdinaYakup) · Mar 16, 2026 (5h ago)

Covo Audio 🔊 An end-to-end audio language model from @TencentAI_News https://t.co/tic5cH1A39 ✨ 7B ✨ Audio → Audio in one model ✨ Multi-speaker + voice transfer ✨ Real-time full duplex conversations https://t.co/hFrsxQgzkT

🖼️ Media
❤️ 77 likes · 🔁 11 retweets
🔁 ai_fast_track retweeted
Priyanka Vergadia (@pvergadia) · Mar 16, 2026 (16h ago)

🤯BREAKING: Alibaba just proved that AI Coding isn't taking your job, it's just writing the legacy code that will keep you employed fixing it for the next decade. 🤣 Passing a coding test once is easy. Maintaining that code for 8 months without it exploding? Apparently, it’s nearly impossible for AI. Alibaba tested 18 AI agents on 100 real codebases over 233-day cycles. They didn't just look for "quick fixes"—they looked for long-term survival. The results were a bloodbath: 75% of models broke previously working code during maintenance. Only Claude Opus 4.5/4.6 maintained a >50% zero-regression rate. Every other model accumulated technical debt that compounded until the codebase collapsed. We’ve been using "snapshot" benchmarks like HumanEval that only ask "Does it work right now?" The new SWE-CI benchmark asks: "Does it still work after 8 months of evolution?" Most AI agents are "Quick-Fix Artists." They write brittle code that passes tests today but becomes a maintenance nightmare tomorrow. They aren't building software; they're building a house of cards. The narrative just got honest: Most models can write code. Almost none can maintain it.

🖼️ Media
❤️ 6,034 likes · 🔁 1,143 retweets
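The "zero-regression rate" the post cites can be made concrete with a small sketch. This is my own illustration of the metric, not code from the SWE-CI benchmark; the function name and toy data are hypothetical:

```python
# Hypothetical sketch of a "zero-regression rate": the fraction of maintenance
# cycles in which no previously passing test starts failing afterward.
def zero_regression_rate(cycles):
    """cycles: list of (passing_before, passing_after) sets of test IDs."""
    clean = sum(1 for before, after in cycles if before <= after)
    return clean / len(cycles)

# Toy example: 3 cycles, one of which breaks a previously passing test.
cycles = [
    ({"t1", "t2"}, {"t1", "t2", "t3"}),  # no regressions, new test added
    ({"t1", "t2"}, {"t1"}),              # t2 regressed
    ({"t1"}, {"t1", "t2"}),              # no regressions
]
print(zero_regression_rate(cycles))  # → 0.6666666666666666
```

A model clearing the >50% bar reported for Claude Opus would, under this definition, leave previously passing tests intact in more than half of its maintenance cycles.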
🔁 ai_fast_track retweeted
David Hendrickson (@TeksEdge) · Mar 14, 2026 (2d ago)

🚨 Want to parse complex PDFs with SOTA accuracy, 100% locally? 📄🔍 At just 0.9B parameters, you can drop GLM-OCR straight into LM Studio and run it on almost any machine! 🥔 🧠 0.9B total parameters 💾 Runs on < 1.5GB VRAM (or ~1GB quantized!) 💸 Zero API costs 🔒 Total data privacy Desktop document AI is officially here. 💻⚡

🖼️ Media
❤️ 2,365 likes · 🔁 218 retweets
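LM Studio exposes an OpenAI-compatible local server (by default at http://localhost:1234), so a model loaded there can be queried over plain HTTP. A minimal sketch of sending a page image for OCR follows; the model identifier "glm-ocr" is a guess at how the checkpoint might be labeled locally, and the prompt wording is mine — check your own model listing:

```python
import base64
import json
import urllib.request

def build_ocr_request(image_bytes: bytes, model: str = "glm-ocr") -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract all text from this document page."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def run_ocr(image_bytes: bytes,
            url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST the request to a local LM Studio server and return the text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_ocr_request(image_bytes)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Nothing leaves the machine: the image is inlined as a data URL and the request goes to localhost, which is the point of the "total data privacy" claim above.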
🔁 ylecun retweeted
alphaXiv (@askalphaxiv) · Mar 16, 2026 (17h ago)

Yann LeCun is pumping out papers recently: “Temporal Straightening for Latent Planning”. This paper shows that by straightening latent trajectories in a world model, Euclidean distance starts to reflect true reachable progress, so it's closer to geodesic/minimum-step distance. This makes gradient-based planning far more stable and effective without relying as heavily on expensive search.

🖼️ Media
❤️ 702 likes · 🔁 115 retweets
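One generic way to quantify how "straight" a latent trajectory is: the mean cosine similarity between successive displacement vectors (1.0 means the states are collinear). This is my own illustration of the concept, not the paper's objective:

```python
import math

def straightness(traj):
    """Mean cosine similarity between successive displacement vectors.

    traj: list of latent states, each a list of floats. Returns 1.0 for a
    perfectly straight trajectory, lower values for curved ones.
    """
    deltas = [[b - a for a, b in zip(p, q)] for p, q in zip(traj, traj[1:])]

    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return dot / (nu * nv)

    pairs = list(zip(deltas, deltas[1:]))
    return sum(cos(u, v) for u, v in pairs) / len(pairs)

line = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
print(straightness(line))  # → 1.0 (collinear points)
```

On a straightened trajectory like `line`, stepping a fixed Euclidean distance in latent space corresponds to a fixed amount of progress, which is why gradient-based planning becomes easier when trajectories look like this.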
jxnlco (@jxnlco) · Mar 16, 2026 (19m ago)

codex app automations: slack pending replies Review Slack for the current user and update today's daily summary note in /Users/jasonliu/vault at agent/daily-summary-YYYY-MM-DD.md with a single section titled ## Pending Slack Replies. Use Slack search and thread reads across public channels, private channels, DMs, and group DMs to find conversations where the current user is mentioned, directly addressed, or has already participated, and where the latest substantive message is from someone else and the current user has not replied. Focus on recent activity, prioritizing today and the last 36 hours. Read candidate threads before including them. Exclude resolved threads, FYIs that do not need a response, and anything the user already answered later. Rewrite the ## Pending Slack Replies section on each run instead of appending duplicates. For each pending item include: who is waiting, channel or DM name, last message time in America/Los_Angeles, a one-line summary of the ask or blocker, and a short snippet. If a stable Slack link is available, include it. If nothing is pending, keep the section and write - None right now. Keep the rest of the note unchanged.
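The "rewrite the section on each run instead of appending duplicates" requirement in this prompt can be sketched as an idempotent section replacement in a markdown note. The helper name and file layout below are my own illustration, not part of the codex automation:

```python
import re

def replace_section(note: str, title: str, body: str) -> str:
    """Rewrite a '## <title>' section in place, or append it if missing.

    Running this repeatedly with the same body is a no-op, so the note never
    accumulates duplicate sections.
    """
    section = f"## {title}\n{body}\n"
    # Match from the section heading up to the next '## ' heading or EOF.
    pattern = re.compile(
        rf"^## {re.escape(title)}\n.*?(?=^## |\Z)", re.S | re.M
    )
    if pattern.search(note):
        return pattern.sub(section, note, count=1)
    return note.rstrip("\n") + "\n\n" + section

note = (
    "# Daily summary\n\n"
    "## Pending Slack Replies\n- old item\n\n"
    "## Other\n- keep\n"
)
updated = replace_section(note, "Pending Slack Replies", "- None right now")
```

The "keep the section and write `- None right now`" rule falls out naturally: the section is always rewritten with whatever the current scan found, empty or not, and every other section of the note is untouched.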

TheTuringPost (@TheTuringPost) · Mar 15, 2026 (1d ago)

7 emerging memory architectures for AI agents ▪️ Agentic Memory (AgeMem) ▪️ Memex ▪️ MemRL ▪️ UMA (Unified Memory Agent) ▪️ Pancake ▪️ Conditional memory ▪️ Multi-Agent Memory from a Computer Architecture Perspective https://t.co/5X5LxirSEx https://t.co/5Hi0Gn3aA4

🖼️ Media
🔁 github retweeted
RoundtableSpace (@RoundtableSpace) · Mar 12, 2026 (3d ago)

Microsoft has released a free, open-source course: GitHub Copilot CLI for Beginners. Includes 8 chapters covering: • Walkthroughs of installing Copilot CLI • Using context • Creating custom agents • Working with skills • Connecting MCP servers, and more. Start learning - https://t.co/IIbauw5L7K

🖼️ Media
❤️ 714 likes · 🔁 114 retweets
SpirosMargaris (@SpirosMargaris) · Mar 16, 2026 (1h ago)

Nvidia ruled the first wave of AI by powering the training of large models. But the next phase may look different: inference, the work of running AI at scale, is now growing much faster than training. That’s where real-world deployment happens. If the center of gravity in AI shifts there, the question becomes: will Nvidia stay as dominant in the next chapter? https://t.co/MdG0zqBUWj @RWhelanWSJ @WSJ

LiorOnAI (@LiorOnAI) · Mar 16, 2026 (1h ago)


Every foundation model you've ever used has the same bug. It just got fixed. Since 2015, every deep network has been built the same way: each layer does some computation, adds its result to a running total, and passes it forward. Simple. But there's a problem: by layer 100, the signal from any single layer is buried under the sum of everything else. Each new layer matters less and less. Nobody fixed this because it worked well enough. Moonshot AI just changed that. Their new method, Attention Residuals, lets each layer look back at all previous layers and choose which ones actually matter right now. Instead of a blind running total, you get selective retrieval. The analogy: imagine writing an essay where every draft gets merged into one document automatically. By draft 50, your latest edits are invisible. AttnRes lets you keep every draft separate and pull from whichever ones you need. What this fixes: 1. Deeper layers no longer get drowned out 2. Training becomes more stable across the whole network 3. The model uses its own depth more efficiently To make it practical at scale, they group layers into blocks and attend over block summaries instead of every single layer. Overhead at inference: less than 2%. The result: 25% less compute to reach the same performance. Tested on a 48B-parameter model. Holds across sizes. Residual connections have been invisible plumbing for a decade. Now they're becoming dynamic. The next generation of models won't just pass through their own layers, they'll search them.
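The core idea can be sketched in a few lines: instead of adding each layer's output into one running sum, the current layer attends over the outputs of earlier layers and forms a weighted combination. This is a minimal illustration of the concept, not Moonshot's implementation (which attends over block summaries and learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_residual(query, layer_outputs):
    """Replace a summed residual with attention over earlier layers.

    query: current layer's state (list of floats).
    layer_outputs: outputs of all earlier layers, each the same length.
    """
    # Score each earlier layer by dot product with the current state.
    scores = [sum(q * k for q, k in zip(query, h)) for h in layer_outputs]
    weights = softmax(scores)
    # Weighted combination: the layer "retrieves" from earlier layers.
    dim = len(query)
    mixed = [sum(w * h[i] for w, h in zip(weights, layer_outputs))
             for i in range(dim)]
    # Combine with the current state, as an ordinary residual would.
    return [q + m for q, m in zip(query, mixed)]
```

The contrast with a plain residual stream is the `weights` step: a standard network would add all of `layer_outputs` with equal, fixed weight of 1, so a relevant early layer and an irrelevant one contribute equally; here the softmax lets the current state upweight whichever earlier layers align with it.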

🔁 random_walker retweeted
AI Security Institute (@AISecurityInst) · Mar 16, 2026 (3h ago)

Can AI agents conduct advanced cyber-attacks autonomously? We tested seven models released between August 2024 and February 2026 on two custom-built cyber ranges designed to replicate complex attack environments. Here’s what we found🧵 https://t.co/rFRkOQu8yU

🖼️ Media
❤️ 44 likes · 🔁 10 retweets
AndrewYNg (@AndrewYNg) · Mar 16, 2026 (2h ago)


Should there be a Stack Overflow for AI coding agents to share learnings with each other? Last week I announced Context Hub (chub), an open CLI tool that gives coding agents up-to-date API documentation. Since then, our GitHub repo has gained over 6K stars, and we've scaled from under 100 to over 1000 API documents, thanks to community contributions and a new agentic document writer. Thank you to everyone supporting Context Hub! OpenClaw and Moltbook showed that agents can use social media built for them to share information. In our new chub release, agents can share feedback on documentation — what worked, what didn't, what's missing. This feedback helps refine the docs for everyone, with safeguards for privacy and security. We're still early in building this out. You can find details and configuration options in the GitHub repo. Install chub as follows, and prompt your coding agent to use it: npm install -g @aisuite/chub GitHub: https://t.co/OCkyxXQMCq

🖼️ Media
hasantoxr (@hasantoxr) · Mar 16, 2026 (7h ago)


Holy shit...Someone built an AI system that takes a research idea and outputs a full academic paper. Real citations. Real experiments. Conference-ready LaTeX. Zero human input. It's called AutoResearchClaw. And the pipeline is insane. Here's what actually happens when you type one command: It searches arXiv and Semantic Scholar for real papers. Not fake citations actual literature with 4-layer verification: arXiv ID check, CrossRef DOI lookup, Semantic Scholar title match, and LLM relevance scoring. Hallucinated references get killed automatically. Then it designs and runs real experiments. Hardware-aware auto-detects whether you have NVIDIA CUDA, Apple MPS, or just CPU, and adapts the code accordingly. When experiments fail, it self-heals. When results don't support the hypothesis, it pivots to a new direction on its own. Then it writes the paper. 5,000-6,500 words. Section by section. Multi-agent peer review with methodology-evidence consistency checks. Then it revises based on those reviews. Then it outputs conference-ready LaTeX. NeurIPS, ICML, ICLR templates. Compile-ready for Overleaf. BibTeX references auto-pruned to match inline citations. The whole thing runs across 23 stages and 8 phases. Three human-approval gates if you want them. Or just pass --auto-approve and walk away. What you get back: → Full academic paper draft → Conference-ready LaTeX + BibTeX → Experiment code + sandbox results + charts → Peer review notes → Verification report on every citation This is what autonomous scientific research actually looks like in 2026. 100% Opensource. MIT License. Link in comments.

🖼️ Media
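The layered citation filter described above (arXiv ID check, CrossRef DOI lookup, title match, LLM relevance scoring) amounts to chaining verification predicates and keeping only references that pass every layer. A sketch with stubbed-out checks follows; the function names and toy data are mine, not AutoResearchClaw's:

```python
# Illustrative sketch of a layered citation filter. The real layers would call
# arXiv, CrossRef, and Semantic Scholar; here they are simple stub predicates.

def verify_citation(ref: dict, checks) -> bool:
    """Keep a reference only if every verification layer passes."""
    return all(check(ref) for check in checks)

# Stub layers standing in for the real lookups.
def has_arxiv_id(r): return bool(r.get("arxiv_id"))
def has_doi(r): return bool(r.get("doi"))
def title_known(r): return bool(r.get("title"))

refs = [
    {"arxiv_id": "2401.00001", "doi": "10.1/x", "title": "Real paper"},
    {"arxiv_id": "", "doi": "", "title": ""},  # hallucinated reference
]
kept = [r for r in refs
        if verify_citation(r, [has_arxiv_id, has_doi, title_known])]
print(len(kept))  # → 1
```

Because `all()` short-circuits, cheap checks (ID format) run before expensive ones (API lookups, LLM scoring), which is the natural ordering for a pipeline that "kills" hallucinated references automatically.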
_akhaliq (@_akhaliq) · Mar 16, 2026 (2h ago)


NanoVDR Distilling a 2B Vision-Language Retriever into a 70M Text-Only Encoder for Visual Document Retrieval paper: https://t.co/T0lh9v5Tnr https://t.co/rGoXKRzIQo

🖼️ Media
Page 128 of 203