Your curated collection of saved posts and media

Showing 21 posts Β· last 7 days Β· quality filtered
J
jxmnop
@jxmnop
πŸ“…
Mar 13, 2026
2h ago
πŸ†”48218839

pretty unsettling to see the disdain OpenAI employees hold for @karpathy, the most prolific educator of the AI era > median openai employee: gathers niche data + runs evals for GPT-N datamix > karpathy: teaches millions how to build these models who has more long-term impact? https://t.co/IL8Illo5sb

Media 1
πŸ–ΌοΈ Media
H
HamelHusain
@HamelHusain
πŸ“…
Mar 13, 2026
9m ago
πŸ†”35699122
⭐0.34

I don't want read things the author feels isn't worth the effort of writing/editing in many cases Many people authoring slop are simply swapping one kind of audience for another. I suspect in many cases trading for lower IQ audience

B
boringcompany
@boringcompany
πŸ“…
Mar 13, 2026
37m ago
πŸ†”08295283

Prufrock-3 launching off The Monster! The Monster tilts down, allowing the boring machine to mine directly into the parking lot ground without any pit/shaft/excavation. https://t.co/3kWKIk3n5v

Media 2
πŸ–ΌοΈ Media
S
Scobleizer
@Scobleizer
πŸ“…
Mar 13, 2026
21m ago
πŸ†”39835274
⭐0.30

@D2Chattoway @SVVRLIVE Yeah!

A
aakashgupta
@aakashgupta
πŸ“…
Mar 13, 2026
2h ago
πŸ†”46449355
⭐0.36

This startup is one to watch. Underrated team and founder growing via Threads (seriously!) https://t.co/zDeMsarFlt

E
emollick
@emollick
πŸ“…
Mar 13, 2026
52m ago
πŸ†”87282378
⭐0.38

For those confused, almost every particular detail of the post is either wrong or exaggerated, including the very first claim that "Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine." The word evil does not appear in the paper.

E
emollick
@emollick
πŸ“…
Mar 13, 2026
52m ago
πŸ†”30980774
⭐0.32

@theta_ai_takes OMG

E
emollick
@emollick
πŸ“…
Mar 13, 2026
46m ago
πŸ†”32409533
⭐0.32

Though skimming over the comments, it appears to mostly be bots talking to each other about whats "doing the heavy lifting" in the post. Twitter as Moltbook

B
bcherny
@bcherny
πŸ“…
Mar 13, 2026
1h ago
πŸ†”76159438
⭐0.34

🀯 You can now launch Claude Code sessions on your laptop *from your phone* This blew my mind the first time I tried it

J
jxnlco
@jxnlco
πŸ“…
Mar 13, 2026
57m ago
πŸ†”99847749
⭐0.30

@dctanner @badlogicgames @usetoyo link to the talk?

J
jxnlco
@jxnlco
πŸ“…
Mar 13, 2026
56m ago
πŸ†”99848903

first time I'm now scared of tweeting https://t.co/hDBplR8Xs0

Media 1
πŸ–ΌοΈ Media
M
Mid0
@Mid0
πŸ“…
Mar 13, 2026
55m ago
πŸ†”38600256
⭐0.34

@Techweek_ Can someone pair this with @MLHacks ?

H
heynavtoor
@heynavtoor
πŸ“…
Mar 13, 2026
3h ago
πŸ†”76011121

🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?

Media 1
πŸ–ΌοΈ Media
F
frankiemuniz
@frankiemuniz
πŸ“…
Mar 13, 2026
1h ago
πŸ†”47588376
⭐0.30

@pepsi Panda Exprespi

πŸ”youwouldntpost retweeted
F
Frankie Muniz
@frankiemuniz
πŸ“…
Mar 13, 2026
1h ago
πŸ†”47588376
⭐0.30

@pepsi Panda Exprespi

❀️596
likes
πŸ”12
retweets
M
Mid0
@Mid0
πŸ“…
Mar 13, 2026
1h ago
πŸ†”52162491
⭐0.36

@BlockShaolin @andrewchen @grok In most recent hackathon folks ran out of limits on Claude Max & codex jumping on copilot to complete their projects for a 4 hr hackathon. This is the limit for power folks.

G
goodfellow_ian
@goodfellow_ian
πŸ“…
Mar 13, 2026
1h ago
πŸ†”18227488
⭐0.34

@Lat3ntG3nius To be clear this includes proposals for how to drive adoption, not just proposals for scientific work. We do already do AI-assisted public health comms.

J
jxnlco
@jxnlco
πŸ“…
Mar 13, 2026
1h ago
πŸ†”01491589
⭐0.30

@pashmerepat Ride coming in 18 minutes

O
offbeatorbit
@offbeatorbit
πŸ“…
Mar 13, 2026
7h ago
πŸ†”99426970
⭐0.30

The girls are Waking Up

πŸ”youwouldntpost retweeted
O
Ashley Reese
@offbeatorbit
πŸ“…
Mar 13, 2026
7h ago
πŸ†”99426970
⭐0.30

The girls are Waking Up

❀️3,279
likes
πŸ”268
retweets
Y
youwouldntpost
@youwouldntpost
πŸ“…
Mar 13, 2026
1h ago
πŸ†”62826214
⭐0.30

@offbeatorbit buddy lent me this DVD in high school and i never gave it back