Your curated collection of saved posts and media

Showing 24 posts · last 7 days · quality filtered
gilmxres (@gilmxres) · Mar 09, 2026 · 6h ago · 🆔74668048

you know what hell yea https://t.co/mTYyoxakZy

🖼️ Media (2)
Weather_West (@Weather_West) · Mar 09, 2026 · 6h ago · 🆔67228435

Lots of buzz online about an upcoming major March heatwave for the American SW & California. And in this case, it does indeed appear increasingly likely that an extremely anomalous and even record-breaking heatwave may envelop much of the SW about a week from now. https://t.co/GByhbmJEZb

🖼️ Media (1)
youwouldntpost (@youwouldntpost) · Mar 09, 2026 · 10m ago · 🆔45586135 · ⭐0.30

no thank you…

gerardsans (@gerardsans) · Mar 09, 2026 · 5m ago · 🆔03887029 · ⭐0.44

@ArthurMacwaters No it won't. Leading AI labs, including Anthropic, know full well that current models are unreliable; third-party tests show a staggering 97% failure rate on digital tasks. Pause and let that sink in. Silicon Valley has always lived in a bubble. Today, its recklessness threatens the entire economy, and our systems aren't ready to cope. Brace yourself. Ask yourself: why do we take AI labs at their word about their own technology? Scrutiny isn't anti-innovation; it's pro-accountability. https://t.co/Ut4hpvTU3C

gerardsans (@gerardsans) · Mar 09, 2026 · 3m ago · 🆔50077083 · ⭐0.44

@MilkRoadAI Nonsense. Leading AI labs, including Anthropic, know full well that current models are unreliable; third-party tests show a staggering 97% failure rate on digital tasks. Pause and let that sink in. Silicon Valley has always lived in a bubble. Today, its recklessness threatens the entire economy, and our systems aren't ready to cope. Brace yourself. Ask yourself: why do we take AI labs at their word about their own technology? Scrutiny isn't anti-innovation; it's pro-accountability. https://t.co/Ut4hpvTU3C

GaryMarcus (@GaryMarcus) · Mar 09, 2026 · 27m ago · 🆔32455207 · ⭐0.32

Bets on Zuck's next bad bet, after Metaverse and AGI/Alexander Wang?

jxnlco (@jxnlco) · Mar 09, 2026 · 15m ago · 🆔61948810 · ⭐0.30

@neil_projects @rubanlah this uses hooks right?

github (@github) · Mar 09, 2026 · 30m ago · 🆔10113489 · ⭐0.36

If you've built a multi-agent workflow, you've probably seen it fail in a way that's hard to explain. 🤔 An agent closes an issue another just opened, or ships a change that fails a downstream check. Why does this happen? And how do we fix it? 🧵⬇️

github (@github) · Mar 09, 2026 · 30m ago · 🆔52290568 · ⭐0.34

The core problem: We treat multi-agent systems like chat interfaces. But the moment agents begin handling related tasks, they start making implicit assumptions about state, ordering, and validation. They are actually distributed systems.
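
A toy sketch of one such implicit-state failure in plain Python (no real agent framework; issue names and the in-memory store are hypothetical): two agents each act on their own snapshot of shared state, and the later write silently undoes the earlier one.

```python
# Hypothetical shared issue store, standing in for a tracker both agents touch.
store = {"issue_42": "open"}

# Agent A snapshots state up front, then acts on that snapshot later.
snapshot_a = dict(store)

# Meanwhile, Agent B closes the issue.
store["issue_42"] = "closed"

# Agent A still "sees" the issue as open in its stale snapshot and re-asserts
# that state, silently overwriting B's close: a classic lost update.
if snapshot_a["issue_42"] == "open":
    store["issue_42"] = "open"

# The close is gone, and neither agent observed a failure.
```

The ordering assumption is invisible in a chat-style view of the agents; it only shows up once you treat their interactions as concurrent operations on shared state.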

github (@github) · Mar 09, 2026 · 30m ago · 🆔66434663 · ⭐0.36

Fix 1: Typed Schemas 🧱 Natural language is messy. Agents need typed interfaces and strict schemas at every boundary. Passing machine-checkable data means invalid messages fail fast, and downstream steps don't have to guess what a payload means.
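
A minimal sketch of that typed-boundary idea, using a stdlib dataclass plus an explicit validator (the `IssueEvent` schema, its fields, and the `validate` helper are hypothetical, not from any GitHub tooling):

```python
from dataclasses import dataclass

# Hypothetical message schema for an agent-to-agent boundary.
@dataclass(frozen=True)
class IssueEvent:
    issue_id: int
    action: str  # expected: "open" or "close"

ALLOWED_ACTIONS = {"open", "close"}

def validate(event: IssueEvent) -> IssueEvent:
    """Reject malformed payloads at the boundary, so downstream
    agents never have to guess what a message means."""
    if not isinstance(event.issue_id, int) or event.issue_id < 0:
        raise ValueError(f"invalid issue_id: {event.issue_id!r}")
    if event.action not in ALLOWED_ACTIONS:
        raise ValueError(f"invalid action: {event.action!r}")
    return event

validate(IssueEvent(issue_id=42, action="close"))  # passes the boundary check
try:
    validate(IssueEvent(issue_id=42, action="reopen"))
except ValueError:
    pass  # fails fast here, instead of confusing a downstream agent
```

The point is where the failure happens: an invalid message dies at the boundary it crossed, not three agents later.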

πŸ”dair_ai retweeted
O
elvis
@omarsar0
πŸ“…
Mar 09, 2026
58m ago
πŸ†”76359887
⭐0.32

This is the part that actually matters for code reviews: Generating code is about output. Verifying code is about skepticism, judgment, and trust. Those are different engineering muscles, and strong coding teams need both, given where things are headed with coding agents.

❤️11 likes · 🔁1 retweets
GaryMarcus (@GaryMarcus) · Mar 09, 2026 · 43m ago · 🆔04172738 · ⭐0.36

@thesandeep_nair see eg https://t.co/HbxB1boF77, though it needs to be updated (again)

GaryMarcus (@GaryMarcus) · Mar 09, 2026 · 41m ago · 🆔56914628 · ⭐0.34

my contempt originates in the kind of behavior I discussed here: https://t.co/HbxB1boF77

AravSrinivas (@AravSrinivas) · Mar 09, 2026 · 40m ago · 🆔70111007 · ⭐0.34

The @AskPerplexity account is now meant exclusively for Perplexity Computer updates. Give it a follow to stay up to date on everything Computer is capable of doing, with consistent updates on new tools, connector capabilities, and workflows.

marouen19 (@marouen19) · Mar 09, 2026 · 36m ago · 🆔02885462 · ⭐0.30

@buntyverse You don't know me

ylecun (@ylecun) · Mar 09, 2026 · 54m ago · 🆔70415206 · ⭐0.32

@ianatmars @ML3democrats There is no voter fraud to speak of. You're a delusional grokon.

dee_bosa (@dee_bosa) · Mar 09, 2026 · 1h ago · 🆔46870386

Oracle is building yesterday's data centers with tomorrow's debt. Frontier labs like OpenAI want the newest chips. But Nvidia is shipping a new generation annually while data centers still take years to get up and running. That's a mismatch for the whole AI trade. Oracle, funding it with $100B in debt, may be first to crack.

🖼️ Media (1)
ZeffMax (@ZeffMax) · Mar 09, 2026 · 1h ago · 🆔46713032

NEW: OpenAI and Google employees, including Google DeepMind Chief Scientist Jeff Dean, filed an amicus brief in support of Anthropic in its lawsuit against the US government. https://t.co/3lQrzlq8BE

🖼️ Media (1)
itamar_mar (@itamar_mar) · Mar 09, 2026 · 2h ago · 🆔30895092 · ⭐0.38

Generating code and verifying code are fundamentally different engineering problems. Good to see Claude Code recognizing the importance of review. And does it cost $15-20 per PR ?!@#? But the real question is: should the same system that generates the code also verify it? In mature systems, we separate concerns:
• Creation systems → generation
• Integrity systems → quality, governance, verification, observability
They operate on different philosophies: generation optimizes for output, verification for skepticism. Speed is easy. Quality is the hard part. And as code generators evolve, teams will want the freedom to switch between them. That's why we believe the winning stack will look like: Claude + Qodo = speed + quality = velocity. (Claude reviewing Claude risks shared blind spots, and even more $$ spend?)

omarsar0 (@omarsar0) · Mar 09, 2026 · 58m ago · 🆔87511396 · ⭐0.36

I've been using @QodoAI for code reviews and their deep expertise in this area is clear. Their recent rule system is brilliant if you want to explore it. Thanks to the team for partnering with me on this post. Get 1 month free of Qodo's Teams plan with promo code: UNBIASED

jxnlco (@jxnlco) · Mar 09, 2026 · 51m ago · 🆔31094678 · ⭐0.30

Applications are still open; distributing and ramping up slowly.

StasBekman (@StasBekman) · Mar 09, 2026 · 3h ago · 🆔63792574

Good news! Ulysses Sequence Parallelism from the Snowflake AI Research and DeepSpeed teams has been integrated into @huggingface Trainer, Accelerate, and TRL. For extensive details please see this writeup: https://t.co/2xDWUk8p3V Thanks a lot to @krasul for helping make it happen, and to the others on the HF team who helped with the integration.

🖼️ Media (2)
πŸ”dair_ai retweeted
D
DAIR.AI
@dair_ai
πŸ“…
Mar 09, 2026
9h ago
πŸ†”70433749
⭐0.38

New research from Databricks on training enterprise search agents via RL. KARL introduces a multi-task RL approach where agents are trained across heterogeneous search behaviors: constraint-driven entity search, cross-document synthesis, and tabular reasoning. It generalizes substantially better than agents optimized for any single benchmark. KARL is Pareto-optimal on both cost-quality and latency-quality trade-offs compared to Claude 4.6 and GPT 5.2. With sufficient test-time compute, it surpasses the strongest closed models while being more cost efficient. Paper: https://t.co/CToEmDU89J Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c

❤️216 likes · 🔁32 retweets