@LiorOnAI
Yann just bet a billion dollars that the entire industry is building on the wrong foundation.

Large language models predict the next word. They're trained on text, so they understand language. But the real world isn't made of words. It's made of continuous sensor data: camera feeds, touch, sound. And most of that data is unpredictable. You can't predict every pixel in a video the way you predict the next token in a sentence. Generative models fail here because they try to predict everything, including noise.

AMI Labs is building world models using JEPA, a method LeCun proposed in 2022 that learns abstract representations of reality and predicts in that compressed space, not in raw pixels. Action-conditioned versions let AI simulate the consequences of actions before taking them. That's not generation. That's understanding.

This unlocks AI that can operate in the physical world without hallucinating:

1. Robots that plan multi-step actions
2. Healthcare devices where errors kill patients
3. Industrial process control under safety constraints
4. Wearables that adapt to real-time sensor input

If JEPA works at scale, the next wave of AI companies won't fine-tune LLMs. They'll train world models on sensor data. The CEO at LeCun's startup already predicts every startup will rebrand as a "world model company" within six months.

The architecture war is starting.
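The core JEPA idea, predicting in a compressed representation space instead of pixel space, can be sketched in a few lines. This is a hypothetical toy illustration, not AMI Labs' implementation: the "encoders" are frozen random projections standing in for learned networks, and all dimensions are made up. The point it shows is that a loss computed on latent embeddings stays bounded while a pixel-space loss blows up with unpredictable noise.

```python
# Toy sketch of JEPA-style latent prediction (assumptions throughout:
# random frozen encoders, made-up dimensions; real JEPA trains these nets).
import numpy as np

rng = np.random.default_rng(0)
D_PIXELS, D_LATENT = 1024, 32  # hypothetical frame size and embedding size

# Frozen random projections stand in for learned context/target encoders
# and the latent-space predictor.
W_ctx = rng.normal(size=(D_PIXELS, D_LATENT)) / np.sqrt(D_PIXELS)
W_tgt = rng.normal(size=(D_PIXELS, D_LATENT)) / np.sqrt(D_PIXELS)
W_pred = rng.normal(size=(D_LATENT, D_LATENT)) / np.sqrt(D_LATENT)

def encode(frame, W):
    # Abstract representation: a low-dimensional, bounded embedding that
    # discards most pixel-level detail (tanh keeps values in [-1, 1]).
    return np.tanh(frame @ W)

def jepa_loss(context_frame, target_frame):
    """Predict the target frame's EMBEDDING from the context's embedding
    (the JEPA objective), instead of the target's raw pixels."""
    z_ctx = encode(context_frame, W_ctx)
    z_tgt = encode(target_frame, W_tgt)
    z_hat = z_ctx @ W_pred  # predictor operates entirely in latent space
    return float(np.mean((z_hat - z_tgt) ** 2))

# One frame, plus the "next" frame corrupted by heavy, unpredictable
# pixel noise. A generative (pixel-space) loss must account for the noise;
# the latent-space loss is bounded because the embedding compresses it away.
frame = rng.normal(size=D_PIXELS)
noisy_next = frame + rng.normal(scale=5.0, size=D_PIXELS)

pixel_loss = float(np.mean((noisy_next - frame) ** 2))  # grows with noise
latent_loss = jepa_loss(frame, noisy_next)              # stays bounded
print(f"pixel-space loss: {pixel_loss:.2f}, latent-space loss: {latent_loss:.2f}")
```

An action-conditioned version would simply feed a candidate action into the predictor alongside `z_ctx`, letting a planner score actions by their predicted latent consequences before executing any of them.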