
Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or whether we're just myopically focused on LLMs while breakthroughs happen in other modalities. They argue that the effectively infinite capital at the major labs may actually constrain innovation, describe the narrow "Goldilocks zone" where RL actually works, and contend that U.S. chip restrictions may have backfired catastrophically, accelerating China's path to self-sufficiency by a decade. The conversation also covers OpenAI's code-red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas. The episode closes with bold 2026 predictions: Rob forecasts that Sam Altman won't be OpenAI's CEO by year-end, while Ari gives better-than-even odds that a Chinese open-source model will be the world's best at least once next year.
From Unsupervised Learning with Jacob Effron.