Why Can't AI Make Its Own Discoveries? — With Yann LeCun

Big Technology Podcast

Published: Wed Mar 19 2025


Summary

Big Technology Podcast: Why Can't AI Make Its Own Discoveries? — With Yann LeCun

Host: Alex Kantrowitz
Guest: Yann LeCun, Chief AI Scientist at Meta and Turing Award Winner
Release Date: March 19, 2025


Introduction

In this episode of the Big Technology Podcast, host Alex Kantrowitz engages in a deep and insightful conversation with Yann LeCun, a luminary in the field of artificial intelligence (AI). The discussion centers around the limitations of current AI systems, particularly Large Language Models (LLMs), in making genuine scientific discoveries, and explores the future paradigms necessary for AI to achieve human-like understanding and innovation.


The Limitation of Large Language Models (LLMs)

Timestamp: [00:52]

Alex opens the dialogue with a critical question: "Why has generative AI ingested all the world's knowledge but not been able to come up with scientific discoveries of its own?" This query, inspired by Dwarkesh Patel's thoughts, sets the stage for unraveling the inherent constraints of current AI architectures.

Yann LeCun responds by distinguishing between different types of AI:

"LLMs are trained on an enormous amount of knowledge which is purely text, and they're trained to basically regurgitate... they are incapable of inventing new things." ([02:47])

He emphasizes that while LLMs like ChatGPT can retrieve and generate text based on existing data, they lack the capability to form new connections or innovate independently.


The Need for Question Generation in AI

Timestamp: [02:47] - [06:33]

Alex introduces the perspective of Thomas Wolf, co-founder of Hugging Face, highlighting the necessity for AI to not just know answers but to ask novel questions that lead to discoveries:

"To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought or dared to ask." ([03:21])

LeCun agrees, asserting that current LLMs are fundamentally limited to retrieval-based tasks and cannot engage in the innovative questioning that drives scientific discovery. He elaborates on the multifaceted nature of problem-solving in humans, which involves asking the right questions, framing problems creatively, and applying diverse skills—all of which are areas where LLMs fall short.


Diminishing Returns and the Future of AI Investment

Timestamp: [11:49] - [24:19]

The conversation shifts to the economic and developmental aspects of AI. Alex points out the significant investments pouring into LLM-centric companies, questioning the sustainability given the diminishing returns:

"If that money is being invested into something that is at the point of diminishing returns, requiring a new paradigm to progress, that sounds like a real problem." ([11:49])

LeCun acknowledges the issue, explaining that the vast amounts of data required to train LLMs are reaching practical limits:

"We've kind of run out of natural text data to train those LLMs... we need a new paradigm." ([12:22])

He argues that moving forward, the AI field must develop new architectures capable of understanding and interacting with the physical world, beyond mere data retrieval. This entails creating systems that can reason, plan, and learn from sensory inputs like video, akin to human cognitive processes.


Timeline Mismatches and Potential AI Winter

Timestamp: [29:50] - [34:13]

Alex expresses concern over a potential mismatch between AI investment and the emergence of breakthrough technologies, drawing parallels to past AI winters. He questions whether the current investment surge, primarily focused on LLMs, might lead to a stagnation if new paradigms don't materialize swiftly enough.

LeCun responds by emphasizing the gradual and collective nature of AI advancements:

"It's not going to be a day before which there is no AGI, and after which we have AGI. This is not going to be an event." ([34:13])

He posits that the transition to more advanced AI systems will be a continuous process involving diverse research efforts globally, rather than a sudden breakthrough from a single entity. This collaborative approach, he suggests, mitigates the risk of an AI winter by ensuring sustained progress across multiple fronts.


Understanding Physics and the World with AI

Timestamp: [35:50] - [44:35]

A significant portion of the discussion delves into AI's comprehension of physical laws. Alex references a past experiment in which ChatGPT incorrectly predicted the behavior of a falling sheet of paper, contrasting it with recent improvements in video generation systems like Sora.

"When you let go of the paper with your left hand, gravity will cause the left side of the paper to drop while the right side remains in place..." ([35:50])

LeCun explains that while AI can now generate more plausible physical simulations, this improvement doesn't equate to genuine understanding. He underscores that AI's ability to predict outcomes is still surface-level and relies heavily on data patterns rather than an intrinsic grasp of physical realities.

Introducing Joint Embedding Predictive Architecture (JEPA), LeCun outlines a new approach:

"Together, this is what JEPA is... it's not generative because the system is not trying to regenerate the part of the input. It's trying to generate a representation, an abstract representation of the input." ([43:06])

JEPA focuses on abstracting and predicting representations rather than reconstructing raw data, enabling AI to understand and reason about the physical world more effectively.
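The core distinction LeCun draws, prediction in representation space rather than pixel or token space, can be sketched in a few lines. The following toy example is purely illustrative: the encoder, predictor, dimensions, and training pair are hypothetical stand-ins, not Meta's actual JEPA implementation, but the loss being computed between abstract representations (never against the raw input) is the idea the quote describes.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_IN, DIM_REP = 16, 4  # raw input size, abstract representation size

# Illustrative weights: a shared encoder and a latent-space predictor.
W_enc = rng.normal(size=(DIM_IN, DIM_REP)) * 0.1
W_pred = rng.normal(size=(DIM_REP, DIM_REP)) * 0.1

def encode(x):
    """Map a raw input to an abstract representation (one linear layer here)."""
    return np.tanh(x @ W_enc)

def predict(s_context):
    """Predict the target's representation from the context's representation."""
    return s_context @ W_pred

# One training pair: context = one part of a signal, target = another part.
signal = rng.normal(size=(2, DIM_IN))
s_context = encode(signal[0])
s_target = encode(signal[1])

# The loss lives entirely in representation space: the system never tries
# to regenerate the raw target input, only its abstract representation.
loss = np.mean((predict(s_context) - s_target) ** 2)
```

A generative model would instead decode back to the 16-dimensional raw signal and score reconstruction error there; keeping the objective in the 4-dimensional representation space is what lets the system discard unpredictable surface detail.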


Open Source vs. Proprietary AI Development

Timestamp: [55:00] - [59:50]

Alex shifts the conversation to the impact of open-source releases like DeepSeek on the AI landscape. He asks whether open-source models have begun to overtake proprietary ones.

LeCun responds affirmatively, highlighting the innovative speed and diversity of ideas within the open-source community:

"Progress is faster in the open source world... it's cheaper to run, more secure, more controllable." ([55:44])

He cites examples of successful open-source contributions, such as the ResNet architecture from Beijing and the Llama model from Paris, illustrating that no single entity holds a monopoly on groundbreaking ideas. LeCun advocates for the global and collaborative nature of open-source development as a catalyst for rapid AI advancement.


Conclusion

Timestamp: [59:50] - [60:08]

As the episode wraps up, Alex appreciates Yann's ability to cut through hype and provide grounded insights into the AI industry's trajectory. Yann emphasizes the necessity for diverse, collaborative research and cautions against expecting instant breakthroughs from current LLM approaches.

"If you think that there is some startup somewhere with five people who has discovered the secret of AGI... you are making a huge mistake." ([34:13])

The conversation leaves listeners with a nuanced understanding of where AI stands today, the challenges it faces in achieving true scientific discovery, and the promising avenues that lie ahead through innovative architectures like JEPA and the open-source movement.


Key Takeaways

  • LLMs' Limitations: Current LLMs excel in data retrieval and text generation but lack the ability to make independent scientific discoveries.
  • Need for New Paradigms: To achieve human-like reasoning and innovation, AI systems must move beyond LLMs to architectures that can understand and interact with the physical world.
  • Economic Implications: The focus on LLMs is reaching a point of diminishing returns, necessitating a shift in investment towards novel AI research.
  • Collaborative Progress: Open-source initiatives are accelerating AI advancements through diverse and rapid innovation compared to proprietary models.
  • JEPA as a Solution: Joint Embedding Predictive Architecture offers a promising path for developing AI that can abstractly represent and reason about the world.

Notable Quotes

  • "LLMs... are incapable of inventing new things." (Yann LeCun, [02:47])
  • "To create an Einstein in a data center... one that can ask questions nobody else has thought or dared to ask." (Thomas Wolf, quoted by Alex Kantrowitz, [03:21])
  • "Progress is faster in the open source world... more secure, more controllable." (Yann LeCun, [55:44])
  • "If you think that there is some startup somewhere with five people who has discovered the secret of AGI... you are making a huge mistake." (Yann LeCun, [34:13])

This comprehensive discussion between Alex Kantrowitz and Yann LeCun sheds light on the current state of AI, its limitations, and the innovative directions required to overcome these challenges. By moving towards architectures that prioritize understanding and reasoning, and fostering open-source collaboration, the AI community can pave the way for systems that not only process information but also drive genuine scientific and technological discoveries.
