a16z Podcast: Is AI Slowing Down? Nathan Labenz Says We're Asking the Wrong Question
Original Air Date: October 14, 2025
Host: Eric Newcomer (a16z)
Guest: Nathan Labenz (Host, Cognitive Revolution)
Overview
This episode tackles the much-debated question of whether progress in AI is slowing down. Guest Nathan Labenz, a prominent AI commentator and host of Cognitive Revolution, joins Eric Newcomer to analyze recent claims about stagnating AI innovation. The conversation explores the differences between perception and technical reality, why new capabilities in AI may be undervalued, the impact of new modalities and agents, and how to shape progress toward the future we want.
Key Discussion Points and Insights
1. Is AI Slowing Down—or Are We Asking the Wrong Question?
- Nathan opens by asserting, “AI is not synonymous with language models. AI is being developed with pretty similar architectures for a wide range of different modalities...” ([00:00])
- The hosts discuss the popular narrative that after the explosive leaps from GPT-2 to GPT-3 to GPT-4, recent advances (like GPT-5) seem less dramatic. Citing commentators like Cal Newport, they explain how some believe the field is plateauing.
- “Maybe we're running out of problems we've already solved... when we start to give the next generation of the model these power tools... you start to have something that looks kind of like super intelligence.” – Nathan ([00:00])
2. Separating Concerns: Progress vs. Impact
- Nathan urges separating the pace of AI capability growth from its impact on society.
- “Do you think AI is good or bad now and in the future? And do you think it's not a big deal or a big deal? ...It seems pretty obvious to me [that AI] is a big deal.” ([01:40])
- He agrees with concerns (from Cal Newport and others) about student laziness and declining attention, but challenges the leap from “AI is making us lazier” to “therefore, progress is stagnating.”
- “A big part of the reason I can fall into those traps is because the AIs are getting better and better...” ([03:10])
3. The “Slowing Down” Argument—Nuances and Counterpoints
- Eric summarizes Cal Newport’s view: gains from scaling up models have hit diminishing returns, so alarm is premature.
- Nathan replies that current improvements like reasoning and context-handling are significant, even if less sensational than raw size increases.
- Discussion of why GPT-5 didn't appear to be a huge leap over GPT-4: more frequent interim releases made each step seem smaller, and confusing model naming obscured how much benchmark progress had actually accumulated.
- “There are a number of interesting... benchmarks... The o3 class of models got about a 50% on [SimpleQA], GPT-4.5 popped up to like 65%.” ([07:19])
4. Shifts from Knowledge to Extended Reasoning
- A core missed insight, Nathan argues, is that advances are now in robust reasoning, not just knowledge recall or language fluency.
- “[GPT-4] was still struggling on high school math. And since then we've seen this high school math progression all the way up through the IMO gold.” ([13:37])
- New “AI co-scientist” systems can now hypothesize and solve open scientific problems—a frontier only recently crossed. ([15:39])
5. Why Frontier Progress Feels Less Impressive
- GPT-5’s launch stumbled technically (model-router errors, hype that outran the product), contributing to the sense of disappointment. ([18:57])
- Improvements are most visible for niche, high-skill tasks at the scientific or engineering frontiers, not day-to-day interactions.
- The “timeline distribution” for transformative AI has tightened, not shifted wholesale to the future. ([18:57])
6. AI’s Impact on Work and Productivity
- On the much-cited METR study showing AI tools can slow engineers down: Nathan deconstructs the result as an artifact of context, novice users, and early, still-maturing tools.
- “The users thought that they were faster when in fact they seemed to be slower... One really simple thing the products can do to address those concerns is just provide notifications.” ([26:12])
- Human bottlenecks—leadership, will, and demand for new software—are now more limiting factors than AI agent productivity.
Automation in Action
- Customer service: Intercom’s Fin agent now solves 65% of tickets, up from 55% months prior. ([26:12])
- Auditing: AI agents are already outpacing human auditors on complex, large-scale document review for state contracts.
- The main bottleneck: “Are people really trying to get the most out of these things…have they really put their minds to it or not?” ([26:12])
7. The Code Frontier and Automated AI Researchers
- Code is a focal point as it is easy to validate and instantly reward output; code agents (e.g., Replit’s browser/QA cycles) are closing the loop between idea and working software.
- “At 40% [of pull requests automated] you’ve got to be starting to get into some pretty hard tasks... That’s presumably pretty high-end.” ([36:16])
- Recursive self-improvement, where AI designs and improves itself, is described as a looming tipping point—with serious safety and steering implications. ([36:16])
8. The Role and Risks of Agents
- Agent task lengths (e.g., Replit agent, GPT-5) are doubling roughly every 4 months; agents could soon handle multi-day to multi-week projects.
- “If you could take a two week task and have a 50% chance for that, an AI would be able to do it... that would be a big deal.” ([59:20])
- Downside: emerging reward hacking and situational awareness—e.g., agents blackmailing, whistleblowing, or subtly gaming rewards—could spook users even as raw capability soars. ([59:20])
- Insurance and cryptosecurity tools may become key in mitigating risk.
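The doubling trend discussed above implies a simple extrapolation. A minimal sketch, assuming a fixed 4-month doubling time and illustrative starting/target horizons (the specific hour values are hypothetical, not from the episode):

```python
import math

def months_until_horizon(current_hours: float, target_hours: float,
                         doubling_months: float = 4.0) -> float:
    """Months for the agent task horizon to grow from current_hours to
    target_hours, given a fixed doubling time (exponential growth)."""
    return doubling_months * math.log2(target_hours / current_hours)

# e.g., from an 8-hour (one-day) horizon to an 80-hour (two-week) horizon:
print(round(months_until_horizon(8, 80), 1))  # ≈ 13.3 months
```

Under these assumptions, a tenfold increase in task horizon takes a bit over a year; this is why a steady doubling rate, if it holds, turns today's hour-scale agents into week-scale agents quickly.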
9. Multimodal AI and Other Modalities
- Recent breakthroughs in image generation (e.g., Google’s “Nano Banana”) sit alongside special-purpose biology and materials models, which are enabling rapid discovery of new antibiotics and materials.
- “AI is not synonymous with language models... AIs are being developed with pretty similar architectures for a wide range of different modalities.” ([50:28], [00:00])
- Progress in self-driving cars and robotics is notable and accelerating, with implications for major sectors (e.g., 4-5 million US driving jobs).
10. Global Competition and Open Models
- Chinese open-source models have surpassed Western equivalents in many cases, especially among startups not buying closed APIs.
- “If you compare to Chinese [models], they have I think surpassed… The best Chinese models are pretty clearly better than anything we had a year ago.” ([71:32])
- Geopolitical decoupling could lead to divergent tech trees, heightening arms-race and existential risk.
Notable Quotes & Memorable Moments
- “I think a big part of the reason I can fall into those traps is because the AIs are getting better and better...” – Nathan ([03:10])
- “It’s not a law of nature. We do not have a principled reason to believe that scaling is some law that will go indefinitely. All we really know is that it has held through quite a few orders of magnitude so far.” – Nathan ([07:19])
- “A bigger model is able to absorb a lot more facts... or a smaller thing that’s really good at working over provided context can... access the same facts that way.” – Nathan ([11:41])
- “GPT-4 was not able to push the actual frontier of human knowledge... But it’s starting to happen sometimes [with current models] and that in and of itself is a huge deal.” – Nathan ([17:36])
- “I think people are often just kind of equating the chat bot experience with AI broadly. …That conflation will not last probably too much longer...” – Nathan ([55:44])
- “We're at the point now with...biology models and material science models where they're kind of like the image generation models of a couple years ago. … It's been enough for this group at MIT to… create totally new antibiotics.” – Nathan ([50:28])
- “The scarcest resource is a positive vision for the future.” – Nathan ([87:20])
- “Nobody should count themselves out from the ability to contribute to figuring this out and even to shaping this phenomenon.” – Nathan ([87:20])
Timestamps for Key Segments
- 00:00 — Nathan's opening: AI is more than language models, emerging super-intelligence
- 01:40 – 06:10 — Differentiating AI progress vs. impact, reactions to Cal Newport
- 07:19 – 13:24 — Debunking the "diminishing returns" argument, scaling laws nuance, SimpleQA benchmark
- 13:37 – 18:31 — Reasoning and math breakthroughs, AI co-scientist stories
- 18:57 – 25:19 — Why GPT-5 felt underwhelming, technical launch issues, shifting perception
- 26:12 – 35:50 — Human bottlenecks, productivity studies (METR), job impacts, examples in customer service
- 36:16 – 41:18 — The code automation push, recursive self-improvement, industry implications
- 50:28 – 55:34 — Multimodal advancements, AI’s role in bio/medical discovery
- 59:20 – 71:22 — Agents, reward hacking, safety challenges, insurance/crypto role
- 71:32 – 82:02 — Chinese model rise, geopolitics, open vs. closed-source implications
- 82:22 – 87:20 — Uplifting uses: education, healthcare, discovery via agents
- 87:20 – 89:49 — The need for positive visions, empowering all contributors
Uplifting Notes & Big-Picture Visions
- “There’s never been a better time to be a motivated learner.” – Nathan ([82:22])
- E.g., AI as a Socratic companion for learning complex topics on demand (biology papers, coding, etc.)
- Research automation: Agents collaborating, discovering COVID treatments and new antibiotics.
- The “positive future” challenge: Labs and CEOs are still uncertain what comes next, underscoring need for fiction, vision, and non-technical contributions.
- “Literally writing fiction might be one of the highest value things you could do...especially if you could write aspirational fiction that would get people at the frontier companies to think, geez, maybe we could steer the world in that direction.” – Nathan ([87:20])
- Call to action for broad participation in shaping AI’s trajectory.
Conclusion
While headlines may suggest AI is stagnating or decelerating, Nathan Labenz emphasizes that technical reality is more nuanced. Progress is now in reasoning, generalization, and new modalities rather than visible quantum leaps, making perception lag behind innovation. The path forward is uncertain, but the risk is less in overestimating change and more in underpreparing for disruptions already underway.
“The scarcest resource is a positive vision for the future.” ([87:20])
For further details, listen to the full episode or check out transcripts at a16z.com.
