Your curated collection of saved posts and media
Larry Ellison $ORCL highlighted something critical: models like ChatGPT, Gemini, Grok, and Llama are all trained on largely the same public internet data. When everyone trains on the same information, models inevitably converge. That's why AI is moving toward commoditization. The real moat isn't the model itself. It's the proprietary data behind it. Companies that can train on exclusive datasets gain an advantage competitors can't replicate. Having data that no one else has will let you dominate your market.
Warren Buffett: "I like the way I lived 30 years ago, and I live that way now. The only difference is I have a plane to travel around privately. But in terms of what I eat, the clothes I wear, the books I read, the television I watch ... it's what I want to do in life." https://t.co/0XiZu1RgMJ
The CEO of a $380 billion AI company said something that should concern every developer, every startup, and every government on earth. He called open-source AI a "red herring." This is Dario Amodei, the man who runs Claude. His argument sounds technical, but it's not; it's about money.

Here's what he said: "I don't care whether a model is open source or not. The only thing I care about is, is it a good model?" Sounds reasonable, until you look at his books. 75% of Anthropic's $14 billion in revenue comes from one thing: charging companies per token to use Claude through an API. If enterprises could run their own models for free, that revenue disappears. So when Amodei says open source "doesn't matter," what he means is: please don't look at open source.

His technical argument: AI "open source" isn't real open source. You get the weights, just numbers, not the actual source code, and you can't see inside the model. Fair point, but it misses the bigger picture. Companies running open-source models don't need to see inside. They need three things: lower costs, data privacy, and freedom from vendor lock-in. Open source largely delivers all three. A Berkeley study found open-source AI models cost up to 90% less than closed APIs. Hospitals can keep patient data in house. Banks can meet compliance rules. Defense contractors don't send classified data to someone else's servers.

Amodei brought up DeepSeek to prove his point. "I don't think it mattered that DeepSeek was open source," he said. But DeepSeek's release crashed Nvidia's stock in the biggest single-day market-cap loss in history. It mattered.

Here's the pattern that keeps repeating in tech: Linux was a toy until it ran the internet. Android was "fragmented" until it owned 72% of mobile. Open source starts cheap, then it gets good, then it wins. Anthropic is posting record revenue: 500+ customers spending over $1M a year, and 8 of the Fortune 10 on board. But so were BlackBerry and Sun Microsystems.
So was every incumbent that dismissed the disruptor. The real question isn't whether open source is a red herring. It's whether the man running a $380 billion closed-source empire is the right person to ask.
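The economics being defended here come down to a break-even calculation: metered per-token API billing versus the flat cost of self-hosting open weights. A minimal sketch with purely illustrative numbers (the $15-per-million-tokens API rate and the $5,000/month GPU-server cost are assumptions, not any vendor's actual pricing):

```python
# Break-even sketch: metered API pricing vs. flat-cost self-hosting.
# All prices below are illustrative assumptions, not vendor quotes.

API_PRICE_PER_MTOK = 15.0    # assumed blended $ per 1M tokens via a closed API
SELF_HOST_MONTHLY = 5_000.0  # assumed monthly cost of a rented GPU server

def monthly_api_cost(tokens_per_month: float) -> float:
    """Cost of serving the workload through a metered API."""
    return tokens_per_month / 1_000_000 * API_PRICE_PER_MTOK

def breakeven_tokens() -> float:
    """Monthly token volume above which self-hosting becomes cheaper."""
    return SELF_HOST_MONTHLY / API_PRICE_PER_MTOK * 1_000_000

if __name__ == "__main__":
    for tokens in (50e6, 500e6, 5e9):
        print(f"{tokens/1e6:>6.0f}M tok/mo: API ${monthly_api_cost(tokens):>9,.0f}"
              f" vs self-host ${SELF_HOST_MONTHLY:,.0f}")
    print(f"break-even ≈ {breakeven_tokens()/1e6:.0f}M tokens/month")
```

The API line scales linearly with volume while the self-host line is flat, which is why high-volume enterprises, the exact customers driving that 75% of revenue, are the ones with the strongest incentive to switch.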
The price of intelligence is falling fast. As AI becomes cheaper and more capable, agentic systems are starting to take over tasks once done by human workers. The shift isn't gradual. It's economic. https://t.co/qpha8z0s7S @abcnews @AlanKohler
Inside Amazon's layoffs, AI and "leaner" operations are reshaping the culture. Survivor's guilt and rising workloads are becoming part of the transition as automation accelerates. What's happening there may become a blueprint other companies quietly follow. https://t.co/cRMN54FsNi @ft @rafeuddin_
AI is starting to shape scientific discovery itself. In particle physics, machine learning systems inside detectors now decide which signals are worth keeping and which are discarded. When algorithms filter reality, they quietly influence what scientists get to study. https://t.co/xjGX5V2QEZ @IEEESpectrum
Investors are piling into AI-resistant "halo" stocks. Heavy-asset, low-obsolescence companies are driving UK and EU markets to record highs. In the AI era, scarcity and stability are suddenly back in favor. https://t.co/aQwauFrnR9
McKinsey says agentic AI could fundamentally reshape global banking. But it warns banks not to get trapped in endless pilots and proofs of concept. The competitive edge won't go to those experimenting the longest, but to those scaling fastest. https://t.co/rrt7ROCCBl @DigWatchWorld @mckinsey
AI is beginning to decode the electrical noise inside our brains. Signals once thought too complex to interpret are now being translated into patterns and meaning. When machines start reading inner thoughts, neuroscience enters an entirely new era. https://t.co/4ZB8TUqxiG @LauraCReporter @bbcnews
AI literacy isn't optional anymore. As automation spreads across industries, understanding how AI works becomes a core economic skill. If policymakers underestimate that urgency, the competitiveness gap will widen fast. https://t.co/ncHCj37goe @epc_eu https://t.co/XtYStA1f1V
WE WON THE @MistralAI LONDON HACKATHON 🇬🇧🇫🇷 We made Mistralverse, here's our demo vid. @HarryStebbings who says the UK isn't shipping?? https://t.co/lVWr43XkNj
NEW: When OpenAI announced its Pentagon deal Friday night, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had said it would never do so? The answer, sources told me, is that it didn't. https://t.co/DkF9uWVHa4
introducing Voice Mode. speak as you draw and get changes in real-time. available now in Krea iPad. https://t.co/c6mHHjupmW
introducing @nozomioai v1. state of the art search and index API to reduce hallucinations in AI agents. use it inside any coding agent or power your own products (thread): https://t.co/mqNqWDSAsU
What if AI could see the world the way we do? That's the idea we bet our weekend on at the Mistral Worldwide Hackathon. With @haaspierre_ and Arman Artola-Zanganeh, we built Port:Worlds, an open-source framework that lets anyone connect their Meta glasses to any AI system.

Let me take you back to Saturday morning. Before we knew it could work, we needed the hardware, so I ran to Rue de Rivoli and bought €500 Meta glasses on the spot. If that's not commitment, I don't know what is (a true bet). We then built non-stop for 36 hours to make it usable, end-to-end: the glasses stream what you see → the AI makes sense of it → it answers back through the glasses' speaker.

And suddenly, when we understood it was going to work, the question changed. It was no longer "Is this doable?" It became "What can people build with this?" - A plumber getting live assistance while repairing something. - A technician repairing industrial machinery. - A traveler exploring a new country. - A visually impaired person navigating space.

At first, we were looking for the "right" use case. Then we realized something more interesting: if AI can share your perspective, continuously, the use cases are not ours to decide. That's why Port:Worlds is fully open source. If you want to connect your Meta glasses, plug in your own models, customize with your own prompts, your own MCP, your Openclaw… you can.

Link to the open source repo (you can contribute and give it a little star ❤️): https://t.co/UueLnkMZpM Link to the demo video: https://t.co/qcTDqKGvax Huge thanks to the organizing team of the hackathon, it was truly great. @Jthmas404
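The end-to-end loop this post describes (camera frame in, model reasoning, spoken answer out) can be sketched as a simple polling pipeline. Everything here is a hypothetical stand-in: `capture_frame`, `describe`, and `speak` are placeholder callables, not the framework's actual API.

```python
import time
from typing import Callable

def run_loop(capture_frame: Callable[[], bytes],
             describe: Callable[[bytes, str], str],
             speak: Callable[[str], None],
             question: str,
             interval_s: float = 2.0,
             max_steps: int = 3) -> list[str]:
    """Poll the glasses' camera, ask the model about each frame,
    and route the answer back to the glasses' speaker."""
    answers = []
    for _ in range(max_steps):
        frame = capture_frame()             # 1. glasses stream what you see
        answer = describe(frame, question)  # 2. the AI makes sense of it
        speak(answer)                       # 3. answer via the speaker
        answers.append(answer)
        time.sleep(interval_s)
    return answers

# Usage with dummy stand-ins for the three hardware/model hooks:
spoken = []
run_loop(lambda: b"frame-bytes",
         lambda frame, q: f"I can see something ({len(frame)} bytes)",
         spoken.append,
         "what am I looking at?",
         interval_s=0.0, max_steps=2)
```

The point of the shape is the open seams: swap `describe` for any model and `speak` for any output channel, which is what makes the use cases "not ours to decide."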

BOOM! Apple's Neural Engine Was Just Cracked Open, the Future of AI Training Just Changed, and the Zero-Human Company Is Already Testing It!

In a jaw-dropping open-source breakthrough, a lone developer has done what Apple said was impossible: full neural network training, including backpropagation, directly on the Apple Neural Engine (ANE). No CoreML, no Metal, no GPU. Pure, blazing ANE silicon. The project (https://t.co/jrk67hf9p1) delivers a single transformer layer (dim=768, seq=512) in just 9.3 ms per step at 1.78 TFLOPS sustained, with only 11.2% ANE utilization on an M4 chip. That's the same idle chip sitting in millions of Mac minis, MacBooks, and iMacs right now. Translation? Your desktop just became a hyper-efficient AI supercomputer.

The numbers are insane: the M4 ANE hits roughly 6.6 TFLOPS per watt, 80 times more efficient than an NVIDIA A100. Real-world throughput crushes Apple's own "38 TOPS" marketing claims. And because it sips power like a phone, you can train 24/7 without melting your electricity bill or the planet.

At The Zero-Human Company, we're not waiting. We are testing this right now on real ZHC workloads. This is the missing piece we've been chasing for our Zero-Human Company vision: reviving archived data into fully autonomous AI systems with zero human overhead.

This is world-changing. For the first time, anyone with a Mac can fine-tune, train, or iterate on massive models locally, privately, and at a fraction of the cost of cloud GPUs. No more renting $40,000 A100 clusters. No more waiting in queues. No more massive carbon footprints. Training costs that used to run into the tens or hundreds of thousands of dollars? Plummeting toward pennies on the dollar, mostly just the electricity your Mac was already using while it sat idle. The AI revolution just moved from billion-dollar data centers to your desk. We will have a new ZERO-HUMAN COMPANY @ HOME wage for equipped Macs, up to 100x more income for the owner!

We're only at the beginning (single-layer today, full models tomorrow), but the door is wide open. Ultra-cheap, on-device training is here. The future isn't coming. It's already running on your Mac. Welcome to the Zero-Human Company era.

@HamelHusain I love it. I have this in my global AGENTS.md to maximise the use of the questions tool (works in Claude, @opencode, @code, and @GitHubCopilot CLI). https://t.co/cPDwXHjwrP
OCR is solved, right? https://t.co/hSGdRBi1jw
GPT 5.3 Codex (xhigh) scores 79.3% and takes the lead on WeirdML, just ahead of Opus 4.6 (77.9%) at less than half the price. It is very solid across the board, but I still feel the peak performance of Gemini 3.1 is stronger. https://t.co/WRYosAStGY

dLLM: Simple Diffusion Language Modeling https://t.co/8a3wDPMZiN
Enhancing Spatial Understanding in Image Generation via Reward Modeling https://t.co/3t4ylnDlTo
Mode Seeking meets Mean Seeking for Fast Long Video Generation paper: https://t.co/TFznQW57cC https://t.co/nfLMnHpp9b
Introducing Jan-Code-4B 💻 A compact coding model tuned for practical day-to-day tasks. Generation, refactors, debugging, tests: all runnable locally in Jan. Download Jan: https://t.co/MPwceB2eHG Model: https://t.co/siedXzTv0v https://t.co/KNlzvwKkDu