@rohanpaul_ai
📢 BREAKING: FT reports that Yann LeCun’s startup AMI Labs has raised $1.03bn to build world models, at a pre-money valuation of $3.5bn. Congratulations @ylecun 🚀

The financing positions the company as a test of LeCun’s belief that today’s large language models fall short of human-level reasoning and autonomy. LeCun has said AMI aims to build systems capable of reasoning and planning in complex real-world settings.

AMI Labs (Advanced Machine Intelligence Labs) aims to address the limitations of standard language models by building world models with the Joint Embedding Predictive Architecture (JEPA), which learns from spatial data. This visual framework helps the AI internalize how objects behave so it can safely plan complex actions. Relying exclusively on text limits AI to human linguistic output while ignoring the massive bandwidth of unspoken physical laws; building predictive spatial architectures is, in this view, the mandatory leap required to achieve reliable autonomous agents.

The round drew backing from a global group of investors, including France’s Cathay Innovation, Amazon founder Jeff Bezos’s Bezos Expeditions, Singapore’s Temasek, Seoul-based SBVA, and US chip giant Nvidia.

The company’s near-term target customers are organizations operating complex systems: manufacturers, automakers, aerospace companies, biomedical firms, and pharmaceutical groups. Over time, LeCun added, the technology could also support consumer applications: "What consumers could be interacting with is a domestic robot. You need a domestic robot to have some level of common sense to really understand the physical world."

LeCun said he was also talking with Meta about potentially deploying the technology in its Ray-Ban Meta smart glasses. "That's probably one of the shorter term potential applications," he said.

---
ft.com/content/e5245ec3-1a58-4eff-ab58-480b6259aaf1
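The core JEPA idea referenced above — predict the *embedding* of a masked or future observation from the embedding of the context, rather than reconstructing raw pixels or text — can be sketched as a toy. This is a minimal illustration with random linear maps standing in for trained networks; the dimensions, weight matrices, and the use of a copied target encoder (in real JEPA training it is typically an EMA copy) are assumptions for demonstration, not AMI Labs’ actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observation: a "context" view and a masked/future "target" view,
# each flattened to a vector (stand-ins for video patches).
dim_in, dim_emb = 16, 4
context = rng.normal(size=dim_in)
target = rng.normal(size=dim_in)

# Context encoder and target encoder as linear projections.
# (In practice the target encoder is an EMA copy of the context encoder.)
W_enc = rng.normal(size=(dim_emb, dim_in))
W_tgt = W_enc.copy()

# Predictor maps the context embedding to a predicted target embedding.
W_pred = rng.normal(size=(dim_emb, dim_emb))

z_ctx = W_enc @ context   # embed the observed context
z_tgt = W_tgt @ target    # embed the hidden target
z_hat = W_pred @ z_ctx    # predict the target's *embedding*

# JEPA-style loss: distance in embedding space, not pixel space --
# the model only has to capture the abstract state, not every pixel.
loss = float(np.mean((z_hat - z_tgt) ** 2))
print(loss >= 0.0)
```

The design choice this illustrates is why JEPA is pitched as a world-model architecture: by scoring predictions in a learned latent space, the model can ignore unpredictable surface detail and focus on how the scene evolves.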