Your curated collection of saved posts and media
You're Thinking About AI and Water All Wrong https://t.co/xKTCjRHHKq @mollytaft @wired
Slop, vibe coding and glazing: AI dominates 2025's words of the year https://t.co/FIQJMik8ej @ConversationUS @ConversationUK
Today we're rolling out our new iPad app, optimized for iPadOS. The Perplexity iPad app is designed for real work, offering the same core features you find on desktop, now with you wherever you go. Available on the App Store today. https://t.co/906c3jHcFJ
TurboDiffusion: Accelerating Video Diffusion Models by 100–205 Times https://t.co/66ZYtT20hy
https://t.co/ZU9D3rl1Po
Nemotron 3 is already third trending on @huggingface. @nvidia is really becoming the American powerhouse for open-source AI! https://t.co/uDQWBPJUwL
Gemini 3 Flash across different test-time compute levels (green line below) represents a new score/cost Pareto frontier on ARC-AGI-2. Congrats to @demishassabis and @sundarpichai on the launch! https://t.co/XJCTx5jvuq
Compute enabled our first image generation launch (and a +32% jump in WAU over the following weeks) as well as our latest image generation launch yesterday. We have a lot more coming… and need a lot more compute. https://t.co/rHfQv1aLKS
Gemini 3 Flash is now rolling out in public preview in GitHub Copilot. This model is ideal for tasks where speed is crucial. Try it out in @code https://t.co/UNMXQvQ635
Gemini 3 Flash rolling out to @code now. Try it out and let us know what you think! https://t.co/rutLlNevgO
You can now fine-tune LLMs and deploy them directly on your phone! We collabed with PyTorch so you can export and run your trained model 100% locally on your iOS or Android device. Deploy Qwen3 on Pixel 8 and iPhone 15 Pro at ~40 tokens/sec. Guide: https://t.co/8wyQLJfzeC
Crashes and corrupted data in GPU kernels often come from out-of-bounds memory access. Puzzle 03 in the Mojo🔥 GPU Puzzles series shows how guard conditions prevent this with just a few lines of code. Watch the full tutorial ⬇️ https://t.co/HlWQVGPhRx
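The guard-condition idea above can be sketched in plain Python (not the puzzle's actual Mojo code): a simulated 1D launch runs every thread in every block, including the surplus threads in the last block, so each thread must check its global index against the array length before touching memory. All names here are illustrative.

```python
BLOCK = 4  # threads per block (illustrative)

def launch_1d(kernel, num_blocks, *args):
    """Simulate a 1D GPU launch: every thread runs, even surplus ones."""
    for block_idx in range(num_blocks):
        for thread_idx in range(BLOCK):
            kernel(block_idx, thread_idx, *args)

def scale_kernel(block_idx, thread_idx, out, inp, n):
    i = block_idx * BLOCK + thread_idx  # global thread index
    if i < n:                 # guard: surplus threads do nothing,
        out[i] = inp[i] * 2   # so no out-of-bounds read/write
    # without the guard, threads with i >= n would index past the array

inp = [1, 2, 3, 4, 5, 6]      # n = 6, but 2 blocks of 4 = 8 threads launch
out = [0] * len(inp)
launch_1d(scale_kernel, 2, out, inp, len(inp))
print(out)  # [2, 4, 6, 8, 10, 12]
```

Dropping the `if i < n:` line makes threads 6 and 7 raise an IndexError here; on a real GPU the same access silently corrupts adjacent memory instead.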
We've rewritten the Perplexity iPad app to be native and to support the workflows iPad users rely on: multitasking and wider screen real estate. It gives you the desktop Perplexity experience with the polish and finesse of our iOS app. https://t.co/pV3Pu6GA9k
⚡️Thrilled to introduce @ropedia_ai, where we aim to structure human Xperience (observations + interactions) for world models, robotics foundation models and interactive intelligence! 🔥Check out our blog for more: https://t.co/8284kANOFF Don't forget to join our waitlist for early access to HOMIE: https://t.co/F9iNlCeAUX
🧵1/ Today we're introducing Ropedia. AI needs more human Xperience to unlock interactive intelligence. We're building the system to capture & scale it. Example: tomato & egg stir-fry

already here ⚡️ gemini 3 flash. free to compare. free to decide. (limited time) https://t.co/mXQNL4bhHq
Introducing Particulate: a feed-forward model for 3D object articulation. Particulate gives you a fully articulated 3D object, including part segmentation, kinematic structure & motion constraints, in a single forward pass in ~10 secs. SOTA performance! GenAI compatible: turns AI-generated 3D meshes into fully articulated models! Project page: https://t.co/8yYFpYdEkY Code: https://t.co/CUuubxqbdY
BREAKING: π Chat standalone web app is now live.
• Fully encrypted for maximum privacy
• No third-party dependencies
• No ads or hidden trackers
• Supports file sharing
• π never reads or sells your chats to advertisers.
Live on web: https://t.co/2sZF5QZIzt https://t.co/t4yXz3gcND
Gemini 3.0 Flash surpasses 2.5 Pro while being 3x faster at a fraction of the cost
⚡️ frontier-class performance on PhD-level reasoning and knowledge benchmarks like GPQA Diamond (90.4%) and Humanity's Last Exam (33.7% without tools)
⚡️ features our most advanced visual and spatial reasoning
⚡️ you can now use code execution to zoom, count and edit visual inputs
⚡️ $0.50/1M input tokens and $3/1M output tokens (audio input remains at $1/1M input tokens)
Rolling out to developers in the Gemini API via Google AI Studio, Google Antigravity, Gemini CLI, Android Studio and to enterprises via Vertex AI. https://t.co/4Pn5XwrXqx
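At the quoted rates, per-request cost is simple arithmetic. A quick sketch (the prices come from the post; the request sizes in the example are made up for illustration):

```python
# $/1M tokens, as quoted in the announcement
INPUT_PER_MTOK = 0.50
OUTPUT_PER_MTOK = 3.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted Gemini 3 Flash text rates."""
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# e.g. a 200k-token context producing a 50k-token answer:
print(round(request_cost(200_000, 50_000), 2))  # 0.25
```

Note output tokens dominate: at 6x the input price, the 50k-token answer here costs more ($0.15) than the 200k-token context ($0.10).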
The next scaling frontier isn't bigger models. It's societies of models and tools. That's the big claim made in this concept paper, and it points to something really important in the AI field. Let's take a look: (bookmark for later)

Classical scaling laws relate performance to parameters, tokens, and compute. More of each, better loss. These laws have driven a decade of progress. But they describe a single-agent world: one model, static corpus, one prompt at a time. There is a clear mismatch with how real-world problems actually work.

This new perspective paper argues that scaling must expand along three new axes: population, organization, and institution. Not just how many parameters, but how many agents, how they're connected, and what norms govern their interaction.

Simply adding more agents doesn't monotonically improve performance. Early experiments in multi-agent debate show that naive agent swarms can degenerate into majority herding, where the first plausible-but-wrong answer locks in and gets reinforced through subsequent rounds. Groups of frontier models fail to integrate distributed information, displaying human-like collective failures.

The paper proposes three interaction regimes for multi-agent systems:
1) Competition: debate, adversarial critique, self-play.
2) Collaboration: role specialization, division of labor, complementary expertise.
3) Coordination: orchestrated workflows, planner-worker hierarchies, reliable execution.

Which regime fits which task matters. Competitive regimes suit focused reasoning problems with clear correctness criteria. Collaborative regimes fit open-ended design tasks where diverse skills are needed. Coordinated regimes handle long-horizon, safety-critical workflows.

The architectural implications are significant. Effective multi-agent systems need cognitive diversity: agents with different priors, reasoning styles, and tool access. They need institutional memory: persistent artifacts that outlive individual sessions, analogous to lab notebooks and version control. They need communication topologies: not just broadcast or hub-and-spoke, but structured graphs that balance diversity and coherence.

Training objectives must change, too. Current models optimize individual next-token prediction. Multi-agent systems need collective objectives: group accuracy, calibration, hypothesis diversity, and conflict-resolution quality. The paper proposes "multi-agent pretraining" where debate, peer review, and negotiation become first-class optimization targets.

Paper: https://t.co/OqwIIeJe8T
Learn to build AI agents in my academy: https://t.co/JBU5beHQNs
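The majority-herding failure mode described above can be captured in a toy loop (my own illustration, not the paper's experimental setup): if every agent naively adopts the current majority each round, an early plausible-but-wrong answer held by a bare majority locks in, and the minority's correct answer is erased no matter how many rounds follow.

```python
from collections import Counter

def naive_debate(answers, rounds=3):
    """Each round, every agent switches to the current majority answer."""
    answers = list(answers)
    for _ in range(rounds):
        majority, _ = Counter(answers).most_common(1)[0]
        answers = [majority] * len(answers)  # herding: diversity collapses
    return answers

# Three agents latch onto a wrong answer first; two hold the correct one.
print(naive_debate(["wrong", "wrong", "wrong", "right", "right"]))
# ['wrong', 'wrong', 'wrong', 'wrong', 'wrong']
```

One round already destroys all diversity, which is exactly why the paper argues for structured topologies and collective objectives instead of naive broadcast voting.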
Damn! Gemini 3 Flash is no joke. Faster and cheaper, while demonstrating remarkable reasoning capabilities. Amazing that we have models of this caliber with multimodal and agentic capabilities. Time to build! Stay tuned for more of my thoughts on this model. https://t.co/uNRhz0GhNj
Gemini 3 Flash gives you frontier intelligence at a fraction of the cost. ⚡ Here's how it's built for speed and scale 🧵
last chance to sign up if interested! https://t.co/NY7HCOTEMB
Since Central Park was so beautiful covered in snow, we decided to put it on our website :) https://t.co/bms4JsbdOG
Not many people know this, but the company behind Manus is Butterfly Effect. In that small office, we pressed the launch button, never imagining how one small move could change the course ahead. Keep shipping! https://t.co/5PggD9FXla
$0 → $100M ARR in 8 months. Since we launched in March:
- 147 trillion tokens processed
- 80M+ virtual computers created
- Total revenue run rate over $125M
Thank you to everyone building with us. https://t.co/TZJ3n162zl
Working on a new RL article based on what I've been mucking around with :) Finally cracked these custom visualisations https://t.co/OgXiMnerBw
There's only 2 genders https://t.co/iEtLBYz3Pv
John Carmack on what he admires about Elon Musk

Programming legend John Carmack is asked about his relationship with Elon Musk, to which he replies: "In some ways we have a similar background. We're almost exactly the same age, have backgrounds programming personal computers, and have even read similar books that have turned us into the people we are today."

John first met Elon when he was building Armadillo Aerospace. Elon visited Armadillo with his right-hand propulsion guy, and the three of them talked about rockets.

"I think in many corners [Elon] does not get the respect he should for being a wealthy person who could just retire," John says. "He went all-in, and he could've gone bust. There's plenty of athletes or entertainers who had all the money in the world and blew it. [Elon] could've been the business case example of that with the things he was doing: space exploration, electrification of transportation, and SolarCity-type things. These are big, world-level things. And I have a great deal of admiration that he was willing to throw himself so completely into that."

John contrasts this with the way he approached his own aerospace company: "I was doing Armadillo Aerospace in this tightly-bounded way. It was 'John's crazy money' at the time that had a finite limit on it. It was never going to impact me or my family if it completely failed, and I was still hedging my bets working at id Software at a time when [Elon] had been really all-in. I have a huge amount of respect for that."

It also irritates John when people call Elon "just a business guy": "Elon was deeply involved in a lot of the [technical] decisions. Not all of them were perfect, but he cared very much about engine material selection and propellant selection. For years he'd be telling me to 'Get off that hydrogen peroxide stuff. Liquid oxygen is the only proper oxidizer for this.' And the times that I've gone through the factories with him, we talked about very detailed things like how this weld is made or how this subassembly goes together. He's really in there at a very detailed level . . . I worry a lot that he's stretched too thin. He's got the Boring Company and Neuralink and Twitter too, whereas I know I have limits on how much I can pay attention to."

John continues: "I look back at my aerospace side of things, and I'm like, 'I did not go all-in on that.' I did not commit myself at a level that it would've taken to be successful there. And it's a weird thing having a discussion with him. He's the richest man in the world right now, but he operates on a level that is still very much in my wheelhouse on the technical side of things."

Video source: @lexfridman (2022)
🚨 STARLINK HITS 8.6 MILLION USERS: ELON'S SKY-NET IS OFFICIALLY ONLINE

Starlink just crossed 8.6 million subscribers, turning Elon's space-based internet experiment into a full-blown global ISP juggernaut. What started as a lifeline for remote areas is now delivering high-speed internet to dozens of countries, connecting war zones, deserts, mountains, and basically anyone with a dish and a clear view of the sky.

No government involvement and no telecom monopoly. Just thousands of satellites, beaming WiFi from orbit like it's sci-fi made real. And with 10,000+ satellites expected by 2026, this isn't a network, it's orbital dominance. Next step? Probably streaming Netflix from Mars.

Source: @muskonomy, @Starlink

The Exodus https://t.co/kKP6rcrt7P