Deep Questions with Cal Newport
Ep. 367: What if AI Doesn’t Get Much Better Than This?
Air Date: August 25, 2025
Overview
In this thought-provoking episode, Cal Newport examines the rapidly shifting narrative around artificial intelligence, focusing on the critical question: What if AI doesn’t get much better than it already is? Newport dissects the hype, recent disappointments with AI’s latest iterations, the disconnect between industry claims and real-world impacts, and the technical plateau faced by cutting-edge language models. Through interviews, analysis, and a look at the scaling laws underlying AI progress, Newport urges listeners to recalibrate their expectations and consider a more nuanced, gradual view of how AI will shape society.
Key Discussion Points
1. The Hype and the Awakening
(00:00–24:30)
- AI Hype Narrative: Newport opens with clips and commentary from leading AI CEOs (Dario Amodei, Sam Altman, Mark Zuckerberg) making bold claims about AI's transformational potential and its looming threat to jobs.
- “There’s been this drumbeat from these AI CEOs that you cannot fathom the impact of the disruption that’s coming. And it’s coming soon.” — Cal Newport (07:50)
- Sam Altman compares AI’s development to the Manhattan Project, with existential awe and dread.
- Zuckerberg insists superintelligence is imminent.
- Pivot Point — GPT-5's Disappointing Release: Highlighting user and reviewer letdowns after GPT-5's launch, Newport notes critics like YouTuber Mrwhosetheboss and AI commentator Gary Marcus describing how minimal the progress was over previous releases.
- "GPT-5 is the biggest piece of garbage." — Redditor, paraphrased (16:45)
- Gary Marcus: “Kevin Scott...said GPT5 is a humpback whale compared to GPT4...And there ain’t no humpback whale there. It’s better in a bunch of different ways...but it’s not separated from the pack...It’s a disappointment.” (18:40)
- Media Reaction: The media narrative swings rapidly; panic gives way to skepticism and to questions about whether AI is stalling out.
2. Is AI Already Taking Jobs? Separating Hype from Reality
(24:31–54:30)
- Debunking Job Loss Claims: Newport interrogates headlines claiming that AI is already decimating white-collar and tech jobs.
- "Young people not being able to find work [and] AI being involved...People are conflating that with the idea that someone is being replaced by AI, despite the fact that AI is not replacing a damn person. No numbers, no data, just vibes, baby." — Ed Zitron (36:47)
- Economic Reality: Layoffs at tech companies stem from post-pandemic contraction, not direct AI replacement. AI coding tools are useful, but they are neither revolutionary nor responsible for mass layoffs.
- Newport: "People are embracing AI coding tools in software development…but...majors are down. If you put the fact that people are using AI coding tools in the middle of the sentence, that has to be...to try to make people believe that the jobs are going away because they're being replaced by AI. It's not related." (43:30)
- Real Impact Today: AI's actual revenue is much less impressive than believed.
- Zitron: "The actual revenue is smaller than last year's smartwatch revenue...They're expecting $40 billion max of total revenue in this entire industry — including OpenAI. It's ludicrous." (50:00)
3. The Death of the Scaling Law and the Missed Leap Forward
(54:31–1:33:00)
- Scaling Laws Explained: Newport describes how the dramatic improvements in model ability (from GPT-2 to GPT-3 to GPT-4) were powered by "scaling laws": the idea that bigger models trained on more data and compute always got better. (A rough sketch of this power-law form follows this list.)
- Jensen Huang, Nvidia CEO: "The scaling law says that the more data you have, the larger model you have, and the more compute you apply, the more effective...your model will become." (57:50)
- The Plateau Arrives: With attempts to scale further for GPT-5, Behemoth (Meta), and Grok 3 (xAI), AI companies found that the expected exponential gains had largely vanished, despite massive new hardware and training investments.
- "When they tried to do the same trick for a third time, they didn't get the same applause. The model was like somewhat better...it only got somewhat better." — Newport (1:21:20)
- Ilya Sutskever: “The 2010s were the age of scaling. Now we’re back in the age of wonder and discovery. Once again, everyone is looking for the next thing.” (1:24:15)
- Industry's Response (Change the Narrative): With scaling running aground, industry leaders pivot to touting post-training techniques and small, specific performance improvements (benchmarks, fine-tuning, and system tweaks) rather than true leaps.
- Newport: "Now we have a bar chart showing a 4% increase on some sort of benchmark metric...You didn't need that to know that GPT-4 was amazing. ...We left the world of just 'We trained it twice as long and when we came back the baby was doing quantum physics.'" (1:30:00)
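(Sketch referenced above, not from the episode: the scaling laws Newport describes are usually written as empirical power laws relating loss to model size and training data. A minimal illustration, assuming the Chinchilla-style fit reported by Hoffmann et al. (2022); the constants are approximate and vary across studies.)

```latex
% Rough Chinchilla-style scaling law (Hoffmann et al., 2022); constants are approximate.
% N = model parameters, D = training tokens, L = cross-entropy loss on held-out text.
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34, \quad \beta \approx 0.28, \quad E \approx 1.69
% Because the improvable terms decay as power laws, each further 10x of parameters or data
% buys a smaller absolute drop in loss: the curve keeps improving, but with diminishing
% returns, which is one way to read the plateau Newport describes.
```

Read this way, the GPT-2-to-GPT-4 jumps harvested the steep part of the curve, and comparable further gains would require disproportionately more compute and data; that is broadly consistent with the "somewhat better" verdict on the latest frontier models.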
4. What Happens Now? AI's Realistic Trajectory
(1:33:01–1:42:00)
- A More Modest Future: Newport reads from his New Yorker article, suggesting that AI will bring steady but incremental gains. Some professions will change, a few may disappear, and many will use AI as a tool, but there will be no mass automation crisis.
- "AI tools will make steady but gradual advances. ...A minority of professions such as voice acting and social media copywriting might essentially disappear. But AI may not massively disrupt the job market and more hyperbolic claims like superintelligence may come to seem unserious." — Newport (1:33:42)
- Financial Hype and Bubbles: Massive AI capital expenditures from tech giants ($560 billion) far outstrip industry revenue (~$35–40 billion).
- Zitron: "When you look at these numbers, you feel insane." (1:35:15)
- Post-Scaling Innovation: While Gary Marcus and others are skeptical about the AGI trajectory of current models, Newport points out that AI development may become more fragmented: bespoke, gradual tools, not world-changing singularities.
- Importance of Regulation and Ethics: With time to catch our breath, society should prioritize thoughtful regulation, economic planning, and ethical frameworks for the AI we have, not just the AI we dreamed of.
Notable Quotes & Memorable Moments
- On the AI Plateau: "The emperor is not wearing nearly as much clothes as I once thought that he was." — Cal Newport (1:31:25)
- On Industry Hype: "They waved their hands wildly and hoped we wouldn't notice...that this stopped working last summer of 2024, didn't it? And this new thing is just you polishing up the Camry you already have. It's not the Ferrari you promised." — Newport (1:41:10)
- On AI and the Job Market: "It is not giving us 20% unemployment. It is not like a college educated entry level worker, like Amodei said. Ideas like superintelligence are completely unserious on our current technological trajectory." — Newport (1:39:50)
- On Expertise and Skepticism: "I just call it. I'm interested in the actual technologies. ... I think language model technology is really cool. I just think it's more narrow than they were letting on." — Newport (around 1:45:00)
Additional Listener Q&A Highlights
(Timestamps approximate as ad sections are present in transcript)
- Will AI replace software engineers or white-collar jobs?
- “AI is not about to replace all software developers. Don’t worry about that.” — Newport (1:48:30)
- A master's degree in CS may help with higher starting positions, but AI isn't a reason to leave the field.
- How to manage residual overhead from old creative projects?
- “If it gets to the point that people’s questions prevent you from doing more work now you’re minimizing your impact. ... So eventually, I think, you have to just have a hard rule like: I’m not able to really answer questions for the most part.” — Newport (2:03:40)
- Cites Neal Stephenson’s “Why I’m a Bad Correspondent” essay on creative bandwidth.
Cultural Spotlights & Light Moments
Ed Sheeran on Tech Minimalism (2:22:45)
- Sheeran: “I haven’t had a phone since 2015. ...I got an iPad. I moved everything onto email, which I reply to once a week. No one expects a reply to an email. It’s a cult.”
- Newport lauds this example of reclaiming presence and focus, noting there's nothing forcing us to be constantly accessible.
Conclusion & Takeaways
Newport sums up:
- The “scaling law” miracle is over—AI’s most recent leaps were real, but further exponential progress is not arriving just by tossing more chips and money at the same architectures.
- The practical impact of current AI is incremental; transformational predictions of recent years are proving overblown.
- Future AI progress will likely be gradual, fragmented, and shaped by custom tools, not monolithic AGI or mass unemployment.
- We now have the breathing room to pursue sensible AI regulation and ethics—key for societal adaptation and well-being.
Resources & Further Reading
- Newport’s New Yorker article: What If This Is as Good as AI Is Going to Get?
- Ed Zitron’s “Better Offline” podcast
End of Summary
For a deeper dive, subscribe to Cal Newport’s newsletter and check the New Yorker for expanded coverage of these themes.
