What LeCun Believes Is Wrong
LeCun has been publicly critical of large language models for years. His position: LLMs are impressive pattern matchers but don't genuinely understand the world. They lack persistent models of physical reality, can't plan the way humans do, and are limited by autoregressive token prediction.
This isn't a minor disagreement about architecture. LeCun thinks the current path leads to a ceiling, not to general intelligence.
What AMI Is Building
AMI is pursuing three capabilities current AI systems lack:
World models: internal representations of how physical reality works, not just what words follow other words. When you reach for a glass, you model its weight, position, and how it will respond. Current LLMs model none of this.
Hierarchical planning: breaking goals into sub-goals across different time scales and abstraction levels. Current agents struggle to plan reliably beyond a few steps.
Persistent memory: actual memory systems that update from experience — not context windows that disappear when the session ends.
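The three capabilities above can be made concrete with a toy sketch. Everything here is invented for illustration (the goal names, the decomposition table, the memory class); AMI has not published its methods. The sketch shows hierarchical planning as recursive goal decomposition, and persistent memory as statistics that accumulate across episodes rather than vanishing with a context window.

```python
from dataclasses import dataclass, field

# Hypothetical hand-written decompositions: a high-level goal maps to
# ordered sub-goals at a finer time scale. Goals absent from the table
# are treated as primitive actions.
DECOMPOSITIONS = {
    "make_tea": ["boil_water", "steep_tea"],
    "boil_water": ["fill_kettle", "heat_kettle"],
}

def plan(goal):
    """Recursively expand a goal into a flat sequence of primitive actions."""
    subgoals = DECOMPOSITIONS.get(goal)
    if subgoals is None:  # primitive action: no further decomposition
        return [goal]
    steps = []
    for sub in subgoals:
        steps.extend(plan(sub))
    return steps

@dataclass
class PersistentMemory:
    """Outcome statistics that survive across episodes — unlike a context
    window, nothing here is discarded when a session ends."""
    outcomes: dict = field(default_factory=dict)

    def record(self, action, success):
        succ, total = self.outcomes.get(action, (0, 0))
        self.outcomes[action] = (succ + int(success), total + 1)

    def success_rate(self, action):
        succ, total = self.outcomes.get(action, (0, 0))
        return succ / total if total else None

memory = PersistentMemory()
steps = plan("make_tea")
print(steps)  # ['fill_kettle', 'heat_kettle', 'steep_tea']
for step in steps:
    memory.record(step, success=True)
```

The point of the sketch is the structure, not the content: real systems would learn the decompositions and ground them in a world model instead of a lookup table, which is exactly the research gap AMI is targeting.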
Why a $1B Seed Round
A seed round at this scale is nearly unheard of. It signals that credible investors believe the current AI trajectory is missing something significant — that there's a prize worth billions at the end of a different research path.
For builders working with today's tools: AMI's work, if it succeeds, would produce AI systems fundamentally different from what exists now. The timelines are research timelines — years, not months. But this is the kind of foundational work that reshapes the tool landscape.