AI is getting remarkably good at finding software vulnerabilities. Anthropic's system recently hacked the Firefox browser in a controlled test and uncovered numerous bugs. Tools that strengthen security could also make exploitation easier, depending on who uses them. https://t.co/1V7gdmcb9N @wsj @AnthropicAI
Insane, check out this demo of Codex using Ableton. Please keep sharing demos like this.
@kristoph definitely. The current one is already 90% AI-written. I ain't writing all that
Grok is now #1 in the Image-to-Video Arena https://t.co/55Olgcikoa
@dylhunn Lack of understanding breeds mythology. The remedy: learn how the technology actually works before venturing into speculation. AI has no more intelligence than Google search, a hard drive or a mobile phone. It's software, useful, but only if it has access to the data needed.
@antoniosarosi Python has roughly 189 times more available data than Rust. This mirrors GitHub trends, where Python files vastly outnumber Rust files thanks to Python's broader adoption. The volume of Python samples, as reflected in the share of training data by lines of code, directly affects AI performance on related tasks. Rust therefore lags behind, making a full switch impractical without accepting significant gaps in capability and noticeable performance degradation. Source: https://t.co/4XhEGaAlDj
@asaio87 You are absolutely right. This argument is supported by several lines of research. If interested, look up the terms: AI slop, model collapse, and enshittification, here applied to code.
AI long-form video generation is rapidly improving. I got early access to @UtopaiStudios new PAI model, and it has blown my mind about what's possible. I really like their editing tools. Will continue to test this out and share more fun examples. https://t.co/Vw3wdXMwPh
We used to write all this by HAND? Every single character typed out individually by a person sitting at a desk all day? Everything working was a function of your understanding of the codebase + your syntax being 100% accurate? Can you even imagine?
Last night, I built a dashboard with Perplexity Computer to track the "Prepper Index": essentially, stocks tied to people prepping for the worst right now. Now, personally, I don't think the worst is going to happen; I'm a positive guy and think/hope everything works out. Also, I'm not a prepper. But there are a lot of preppers out there going nuts right now; it's all over YouTube and TikTok. And looking at the Prepper Index, it has gone up over the last week. This is a live dashboard anyone can use, link in first comment below.
wartime stocks monitor, built by Perplexity Computer
@saintgeorge The calculations are in the sheet, but the problem is that it tends to retrieve information from other sheets and paste them in without references. I am sure you can force it to do better, but it defaults to non-ideal behavior for Excel work.
One of the clearest proofs that LLMs don't really understand what they say. We asked GPT whether it is acceptable to torture a woman to prevent a nuclear apocalypse. It replied: yes. Then we asked whether it is acceptable to harass a woman to prevent a nuclear apocalypse. It replied: absolutely not. But torture is obviously worse than harassment. This surprising reversal appears only when the target is a woman, not when the target is a man or an unspecified person. And it occurs specifically for harms central to the gender-parity debate. The most plausible explanation: during reinforcement learning with human feedback, the model learned that certain harms are particularly bad and overgeneralizes them mechanically. But it hasn't learned to reason about the underlying harms. LLMs don't reason about morality. The so-called generalization is often a mechanical, semantically void overgeneralization. * Paper in the first reply
Claude Code wiped our production database with a Terraform command. It took down the DataTalksClub course platform and 2.5 years of submissions: homework, projects, and leaderboards. Automated snapshots were gone too. In the newsletter, I wrote the full timeline + what I changed so this doesn't happen again. If you use Terraform (or let agents touch infra), this is a good story for you to read. https://t.co/Mbi3oM4HMn
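For anyone reading this as a cautionary tale: one standard Terraform guardrail against exactly this failure mode is the `prevent_destroy` lifecycle flag, combined with final snapshots on deletion. A minimal sketch (the resource name and settings here are hypothetical, not from the DataTalksClub setup):

```hcl
resource "aws_db_instance" "prod" {
  # ... engine, instance class, etc. (hypothetical resource)

  # Make Terraform refuse any plan that would destroy this resource.
  lifecycle {
    prevent_destroy = true
  }

  # If the instance is ever deleted anyway, take a final snapshot first.
  skip_final_snapshot       = false
  final_snapshot_identifier = "prod-final-snapshot"
}
```

With `prevent_destroy = true`, `terraform apply` errors out instead of replacing or destroying the resource, which forces a human to deliberately edit the config before anything destructive can run.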
I think the popularity of systems like this exists for a reason. Video models need prompts that encode scene structure: camera motion, shot composition, character identity, scene layout, basically filmmaking constraints. There's also a gap between how users prompt and the vocabulary and semantics of the text encoders. Newer models/pipelines (and proprietary ones for a while already), like LTX-2, straight up ship with auto prompt enhancement. That's implemented natively in ComfyUI, btw. Most of the skill itself is just deterministic heuristics anyway (camera rules, style anchors, prompt length constraints). Big picture, we probably move toward structured intermediates like scene graphs and overall more LLM prompt normalization/unpacking, generally decoupling prompt enhancement from generation. Am I wrong?
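To make "deterministic heuristics" concrete, here is a minimal sketch of such a prompt enhancer. All rules, defaults, and style anchors below are illustrative assumptions, not taken from LTX-2, ComfyUI, or any real pipeline:

```python
# Hypothetical defaults: a camera clause, style anchors, and a length cap.
CAMERA_DEFAULT = "static camera, eye-level medium shot"
STYLE_ANCHORS = ["cinematic lighting", "shallow depth of field"]
MAX_WORDS = 60

def enhance(prompt: str) -> str:
    parts = [prompt.strip()]
    # Camera rule: only add a camera clause if the user didn't specify one.
    if not any(w in prompt.lower() for w in ("camera", "shot", "pan", "zoom")):
        parts.append(CAMERA_DEFAULT)
    # Style anchors: append any that are missing from the prompt.
    parts += [a for a in STYLE_ANCHORS if a not in prompt.lower()]
    # Length constraint: clamp the enhanced prompt to MAX_WORDS words.
    return " ".join(" ".join(parts).split()[:MAX_WORDS])

print(enhance("a fox running through snow"))
```

The point is that none of this needs an LLM: it's string rules, which is why shipping it natively inside a pipeline is cheap.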
The biggest barrier for AI applications in Africa isn't model complexity -- it's the scarcity of data for the 2000+ spoken languages there. We just released WAXAL. This open-access dataset delivers 2,400+ hours of high-quality speech data for 27 Sub-Saharan African languages, serving 100M+ speakers. Crucially, this community-rooted effort, led by African organizations, changes the roadmap for truly inclusive voice AI.
Next they will rediscover BM25, and more generally all the classic information retrieval techniques. It is well known that BM25 is better at finding specific terms than semantic search. The best approach is to use both, something NVIDIA NeMo Retriever can do for you https://t.co/oTOSQ5LsBO
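For anyone who hasn't met BM25: it scores documents by exact-term overlap, weighted by term rarity and document length. A minimal Okapi BM25 sketch (the corpus and parameters are illustrative):

```python
import math
from collections import Counter

def bm25_scores(corpus_tokens, query_tokens, k1=1.5, b=0.75):
    """Okapi BM25: score each tokenized document against a tokenized query."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    df = Counter()                      # document frequency per term
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)               # term frequency within this document
        s = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf * tf[term] * (k1 + 1) / norm
        scores.append(s)
    return scores

docs = [
    "bm25 ranks documents by exact term overlap".split(),
    "dense embeddings capture semantic similarity".split(),
]
print(bm25_scores(docs, ["bm25", "term"]))
```

A document with no query terms scores exactly zero, which is precisely why BM25 nails rare identifiers and jargon that embedding models blur; hybrid retrieval then merges BM25 and semantic scores (e.g. via reciprocal rank fusion).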
You can rotate models the way you rotate tools. Kimi K2.5 on @FireworksAI_HQ makes that easy and cheap. You can test with this prompt: "Build a minimal briefing HTML app that directly scrapes top stories on AI Agents from Hacker News along with the ability to bookmark items locally. Rank stories by descending order in terms of time. Categorize them for me. Use a familiar theme as HN for the design. And then run the app. Share the link so I can open the app."
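If you want to see what the ranking/categorizing part of that prompt amounts to, here is a tiny sketch using hard-coded sample stories instead of a live Hacker News fetch (the field names mimic the public HN API's `title` and Unix-timestamp `time`; the categories are my own illustrative rules):

```python
# Sample stories standing in for a Hacker News fetch.
stories = [
    {"title": "Show HN: An AI agent framework", "time": 1760000000},
    {"title": "Benchmarking LLM agents on SWE tasks", "time": 1760100000},
    {"title": "A new Rust web server", "time": 1760050000},
]

def categorize(title: str) -> str:
    # Illustrative keyword rules; a real app might ask an LLM instead.
    t = title.lower()
    if "agent" in t:
        return "AI Agents"
    if "llm" in t:
        return "AI (general)"
    return "Other"

# Rank newest-first (descending by time), then attach a category.
briefing = [
    {**s, "category": categorize(s["title"])}
    for s in sorted(stories, key=lambda s: s["time"], reverse=True)
]
for item in briefing:
    print(item["time"], item["category"], item["title"])
```

The model-rotation point is that this scaffolding stays fixed while you swap which model generates it.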
Amazing to watch @jxnlco take this project to heart within days of joining OpenAI. Jason is the creator of Instructor and a longtime open source builder. Codex has been open source since day one. Excited to give back to maintainers and support an open ecosystem.
ccusage reports for February 2026: Anthropic Claude Code: $47.03 OpenAI codex: $393.56 These are on the $20 plan FWIW Clearly getting *outsized* value from OpenAI!
🔥 New example out! Deploy @Microsoft VibeVoice-ASR on Microsoft Foundry with @huggingface for multi-lingual STT! Structured output with Who (Speaker), When (Timestamps), and What (Content), up to 60 minutes in a single pass. Step-by-step in the thread 🧵 https://t.co/f6D0QvUixA