Yann LeCun, Meta’s chief AI scientist and one of the most influential figures in AI, plans to leave the company to launch his own startup.
According to a Financial Times report, the Turing Award winner is already in talks to raise capital for a venture focused on his "world models" approach, a dramatic departure from the large language model obsession gripping Silicon Valley. A left turn, not a lap in the same race.
This news lands at a brutal moment for Meta. Just as CEO Mark Zuckerberg scrambles to keep pace with OpenAI and Google, his most prestigious researcher is peeling off to chase a different future.
The planned exit spotlights turbulence inside Meta’s AI division. Three months ago, Meta underwent its fourth AI reorganization in six months, a reshuffle that hinted at deeper structural strain.
The culture problem is harder to reorg away. Meta’s retention rate for AI talent has dropped to 64 percent, well below Google DeepMind at 78 percent. Employees describe a “culture of fear” and dysfunction fueled by aggressive performance reviews and constant layoffs. Not exactly the vibe that keeps researchers tinkering late into the night.
Following the underwhelming reception of Meta's Llama 4 model, Zuckerberg steered the company in a new direction. He brought in Scale AI's Alexandr Wang to lead a "superintelligence" team backed by a $14.3 billion investment. LeCun, who previously reported to Chief Product Officer Chris Cox, now reports directly to Wang, a clear sign his long-term research has been sidelined. Cue a pivot.
One vision
While the industry keeps scaling LLMs, the Meta scientist has been developing “world models” that teach AI to understand and predict the physical world.
He famously believes today’s AI systems are less intelligent than house cats. His approach aims to give machines the common sense needed to navigate reality, the kind of intuition you get from bumping into things, watching cause and effect, learning what falls, what rolls, what breaks.
The co-inventor of convolutional neural networks, the backbone of image and speech recognition across the industry, rejects the idea that simply scaling language models will produce human-level AI. His startup could validate that stance and rewrite the field, or it could prove that the industry’s current path holds.
AI’s future
Meta is spending over $600 billion through 2028 to become an "AI leader," yet its most visionary scientist is walking away. More than a talent loss, it reads as a vote of no confidence in Zuckerberg's strategy.
For the wider AI landscape, the new venture is a live experiment. If world models work, the language model arms race could look like last season’s playbook. Big tech would be forced to pivot, fast.
The move also underscores Meta’s internal strain. When even a co-winner of the 2018 Turing Award, often called the Nobel Prize of computing, cannot thrive, questions about leadership and culture get louder. Others may take note, and then take calls.
Could this accelerate the path to artificial general intelligence? Possibly, just not in the way most expect. Instead of ever bigger language models, we might see systems that understand the physical world, reason more like humans, and develop genuine common sense. If that happens, today's AI will feel quaint. Like dial-up in a fiber world.