Deep Questions with Cal Newport — Episode 386: "Was 2025 a Great or Terrible Year for AI?" (w/ Ed Zitron)
Date: January 5, 2026
Host: Cal Newport
Guest: Ed Zitron (Better Offline podcast, Where’s Your Ed At Substack)
Episode Overview
Cal Newport invites AI commentator and reporter Ed Zitron to help make sense of the whirlwind year that was 2025 in the AI industry. Together, they walk through the major AI news stories of each month, unravel hype cycles, analyze financial realities, and ultimately seek to answer: was 2025 a great year or a terrible year for AI?
The conversation is wide-ranging, sometimes irreverent, and deeply critical of both the technological and business narratives that dominated headlines. The tone combines sharp skepticism, technical expertise, and biting humor—offering an insider perspective for listeners keen to cut through the noise in AI discourse.
Main Theme
Was 2025 a Great or Terrible Year for AI?
A month-by-month exploration of AI’s most significant stories, missteps, overblown hype, financial realities, and the persistent gap between marketing and substance.
Key Discussion Points & Insights
1. January 2025 — DeepSeek and the ‘Chinese Threat’ Narrative
- DeepSeek, a Chinese AI company, releases the R1 model, trained at a fraction of the cost (reportedly $5.3M vs. $50M+ for US models), and claims GPT-level performance.
- This “spooked” the US market, highlighting both Nvidia’s monopoly and inefficiencies in American AI model development.
- Ed Zitron: “It was kind of the thing that shone a spotlight on the Nvidia problem, which is that Nvidia is like the only company really making money in this era…” (05:09)
- US AI elites (including Sam Altman) respond with attempts to discredit DeepSeek on IP/copyright grounds and stir up xenophobic anxieties.
- The story is quickly “memory holed” by US media and industry, largely because it undermines the narrative that vast, expensive infrastructure is necessary.
- Cal and Ed discuss the underlying issue: the commoditization of AI model training, and whether smaller, domain-specific models running on local/edge devices (sketched just below) might represent the only truly profitable AI future.
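For listeners who want to picture what “smaller, domain-specific models running locally” means in practice, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name and the classification prompt are illustrative stand-ins, not anything discussed on the show.

```python
# Minimal sketch: running a small, domain-specific model entirely on
# local hardware, with no API calls and no per-token cloud inference bill.
# The model name is illustrative; any small instruction-tuned checkpoint
# from the Hugging Face hub would work the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # ~0.5B params: runs on a laptop CPU

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# A narrow, domain-specific task: classify a support ticket.
prompt = "Classify this support ticket as billing, technical, or other:\nMy invoice shows a double charge."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

The economic point: once the weights are downloaded, every additional query costs only local electricity, which is exactly the future that undercuts the case for vast centralized infrastructure.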
2. Agent Hype: Vaporware as Marketing Strategy
- January also brings outsized hype around AI “agents”, sparked by a claim widely attributed to OpenAI’s Chief Product Officer (“2025 is the year of agents”), a paraphrase rather than a verbatim quote.
- “Agents” are positioned by Sam Altman and Marc Benioff as imminent digital labor—technology poised to “revolutionize the workforce”.
- Ed Zitron: “Sam Altman's comment was… agents may join the workforce. Egregious lie. There was no proof at that time that it was even possible to do it. And guess what? Where we are today, it’s not possible either.” (16:47)
- Cal and Ed dissect “coding agents” and the reality that, while LLMs can credibly autocomplete or “vibe code” trivial projects, autonomous software creation is still a fantasy (the agent-loop pattern is sketched after the exchange below).
- Memorable exchange:
- Cal: “These things that were being vibe coded have no economic utility…”
- Ed: “Vibe coding I believe is one of the greatest frauds of all time.” (22:50)
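For context on what “agent” means mechanically, the pattern under all the hype is an LLM called in a loop, with its text output parsed into tool invocations. A minimal sketch follows; every name in it is hypothetical, and no real vendor API is assumed.

```python
# Minimal sketch of the "agent" pattern: an LLM called in a loop, with its
# output parsed into tool calls. Every name here is hypothetical; no real
# vendor API is assumed.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    llm: Callable[[str], str]                # prompt -> model text (stub it for testing)
    tools: dict[str, Callable[[str], str]]   # tool name -> tool function
    history: list[str] = field(default_factory=list)

    def run(self, task: str, max_steps: int = 5) -> str:
        self.history.append(f"TASK: {task}")
        for _ in range(max_steps):
            reply = self.llm("\n".join(self.history))
            self.history.append(reply)
            if reply.startswith("FINAL:"):       # model declares it is done
                return reply.removeprefix("FINAL:").strip()
            if reply.startswith("CALL:"):        # e.g. "CALL: search cheap GPUs"
                name, _, arg = reply.removeprefix("CALL:").strip().partition(" ")
                tool = self.tools.get(name, lambda a: "ERROR: unknown tool")
                self.history.append(f"RESULT: {tool(arg)}")
        return "ERROR: step budget exhausted"    # failures compound per step
```

The reliability problem Cal and Ed point to falls out of this structure: if each step succeeds with probability 0.9, a ten-step task completes at roughly 0.9^10 ≈ 35%, which is why autonomous multi-step agents keep failing on real work.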
3. February–March — The Stagnation: GPT-4.5, Gemini 2.0 & Tuning over Scaling
- GPT-4.5 launches, with Sam Altman promising “magic,” but it’s essentially a bigger model from bigger data centers, a demonstration of diminishing returns on brute-force scaling.
- OpenAI internally pivots from pure scaling to “reasoning models,” focusing on chained, multi-step outputs to goose limited benchmark improvements at the expense of even more compute (cost arithmetic sketched after this list).
- Ed Zitron reading Altman’s old tweet (29:18):
- “Bad news. It’s a giant, expensive model… out of GPUs. We will be adding tens of thousands of GPUs next week… I’m pretty sure y’all will use every one we can rack up.”
- At Nvidia’s GTC conference (March), Jensen Huang reframes the narrative, formally declaring the shift from a “training” (building models) to an “inference” (running models) era—meaning continual, massive compute spending.
- Ed: “The truth is, all those GPUs aren’t building better models. It’s just running the bloody things…” (37:21)
- Nvidia benefits; AI service providers face spiraling unprofitability.
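To make the training-to-inference shift concrete, here is a back-of-the-envelope sketch of why “reasoning” models bend the cost curve the wrong way. Every number in it is an illustrative assumption, not a figure from the episode: hidden chain-of-thought tokens multiply what each answer costs, and serving costs scale with usage instead of amortizing like a one-time training run.

```python
# Back-of-the-envelope sketch of why "reasoning" inference costs scale
# with usage. All numbers are illustrative assumptions, not reported figures.
PRICE_PER_1K_TOKENS = 0.01      # hypothetical serving cost, $ per 1K output tokens

def cost_per_query(visible_tokens: int, reasoning_multiplier: float) -> float:
    """Hidden chain-of-thought tokens multiply what each answer costs."""
    total_tokens = visible_tokens * reasoning_multiplier
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

plain = cost_per_query(visible_tokens=500, reasoning_multiplier=1)
reasoning = cost_per_query(visible_tokens=500, reasoning_multiplier=10)

daily_queries = 100_000_000     # hypothetical scale
print(f"plain:     ${plain * daily_queries:,.0f}/day")      # $500,000/day
print(f"reasoning: ${reasoning * daily_queries:,.0f}/day")  # $5,000,000/day
```

Unlike a training run, this bill never stops: every new user and every extra reasoning step adds to it, which is the structural point behind Ed’s “it’s just running the bloody things” line.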
4. April — The AI Doomer Hysteria (AI 2027)
- AI 2027, a fictional scenario by former OpenAI staff and rationalists, asserts that “superintelligent” AI could annihilate humanity by 2027.
- Ed Zitron: “The whole thing is written, it’s like thousands of terribly written words, lots of scary numbers. But when you read it, it hinges on one idea. Just one. That in 2025… [someone] invents an AI that can do AI research… And they never define it.” (43:25)
- Both Cal and Ed forcefully reject the plausibility of recursive self-improvement through current LLM architectures.
- Cal: “We do not know how to build a language model that can produce software for AI that’s better than any human can produce… That’s not how language models work.” (45:08)
- They critique the “AI safety” movement’s tendency to ignore concrete harms (e.g., exploitation, theft, energy use) in favor of sensationalist science fiction.
5. May–June — Job Loss/Disruption Hype & Academic Backlash
- Dario Amodei (Anthropic CEO) is widely quoted predicting AI will eliminate “half of all entry-level white-collar jobs in five years,” citing AI’s performance on PhD-level tests.
- Ed and Cal deconstruct this position as simplistic, noting that excelling at benchmarks does not translate into generalized job replacement.
- June brings a pivotal MIT study on “AI cognitive debt,” showing that using LLMs for writing leads to poorer output and weaker learning. The narrative of AI as a magical student/teacher or productivity assistant is scrutinized.
6. Summer — The Reality Check: GPT-5 and the End of the Hype Curve
- The release of GPT-5 is underwhelming, with Sam Altman theatrically comparing himself to Oppenheimer, only to quickly pivot away from AGI language post-launch.
- Technical reporting (notably by guest Ed Zitron) uncovers that even headline improvements in reasoning/coding depend on massively cost-inefficient inference, with GPT-5’s “router” design often making results worse instead of better (see the sketch after this list).
- Cal: “Because it was hard to ignore that GPT-5 wasn’t that different… The thing that seemed to really open up was a story that really only you [Ed] had been covering for a year: the numbers don’t make sense on these companies.” (74:21)
- By September, mainstream outlets begin running “is this a bubble?” stories, with financial reporters digging into the unsustainable economics of current AI business models.
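The [66:09] segment digs into GPT-5’s “router” design, which dispatches each prompt to a cheaper or more expensive underlying model. The sketch below shows the general pattern only; the heuristic and model names are made up for illustration and are not OpenAI’s actual routing logic.

```python
# Minimal sketch of a model "router": a front-end classifier decides
# whether a query goes to a cheap fast model or an expensive reasoning
# model. Purely illustrative; not OpenAI's actual routing logic.
HARD_HINTS = ("prove", "step by step", "debug", "optimize", "why does")

def route(prompt: str) -> str:
    """Crude difficulty heuristic standing in for a learned classifier."""
    looks_hard = len(prompt) > 500 or any(h in prompt.lower() for h in HARD_HINTS)
    return "expensive-reasoning-model" if looks_hard else "cheap-fast-model"

# The failure mode discussed on the show: misrouting hurts in either
# direction. Easy queries sent to the big model burn money; hard queries
# sent to the small one produce worse answers than the old default did.
print(route("What's the capital of France?"))           # cheap-fast-model
print(route("Debug this race condition step by step"))  # expensive-reasoning-model
```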
7. Autumn — The Bubble Pops: Hype Deals, Spending, and Sora
- AI startups and hyperscalers announce fantastical future spending deals—e.g. OpenAI signing $300B “agreements” with Oracle for unbuilt data centers, or $100B with Nvidia, all based on projected revenue no one knows how to realize.
- Ed Zitron: “Everyone believed it. Everyone was like, wow, wow. Number go up so big. Number so huge. Well, number didn’t stay big for long and things started to fall apart…” (80:21)
- The Sora app debuts as a TikTok-style video generator powered by OpenAI’s latest model, but is revealed to be incredibly expensive and largely a costly distraction.
- Underneath, core business use cases remain elusive; profit remains out of reach for almost every major player.
8. Year-End Reckoning: Retrenchment, Financial Realities, and the Beginning of the End for the Bubble
- November–December: New model releases (GPT-5.1, Gemini 3, Anthropic’s Claude Opus 4.5) do not excite the market; agents, once hyped, are walked back in leaked internal memos.
- Financial reporting and leaks reveal the magnitude of AI company losses (unit economics sketched after this list):
- OpenAI reportedly spent $8.67B on inference through September, with only ~$4.5B revenue.
- Anthropic consumed similar sums; “efficiency” is a PR myth.
- Ed: “These costs increase with revenue… There’s no real reversing that trend.” (92:23)
- Disney invests $1B into OpenAI’s Sora—viewed as a sign of executive desperation to be “in” on AI, not as a meaningful strategy.
- Oracle, Broadcom, and others revise or postpone massive AI infrastructure expansions; cracks in the “capex as destiny” narrative appear.
- December: AI safety doomer narratives subside; agents are de-emphasized; “code red” internal alarms begin to sound at OpenAI and other major players.
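The unit economics behind Ed’s (92:23) quote are worth spelling out with the figures reported above. This sketch treats inference as the only cost, which if anything understates the losses, since salaries, training runs, and data-center buildout are all excluded.

```python
# The unit-economics point behind "these costs increase with revenue",
# using the figures reported in the episode. Inference is treated as the
# only cost, which understates the real losses (no salaries, training
# runs, or data-center buildout included).
revenue = 4.5e9            # ~$4.5B revenue through September (reported)
inference_cost = 8.67e9    # $8.67B spent on inference (reported)

gross_margin = (revenue - inference_cost) / revenue
print(f"gross margin on inference alone: {gross_margin:.0%}")  # about -93%

# Scaling is no escape: if costs track usage, doubling revenue roughly
# doubles inference spend, and the absolute loss doubles with it.
for scale in (1, 2, 4):
    loss = scale * (inference_cost - revenue)
    print(f"{scale}x usage -> loss of about ${loss / 1e9:.1f}B")
```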
Notable Quotes & Memorable Moments
- “Vibe coding I believe is one of the greatest frauds of all time.” — Ed Zitron (22:50)
- “The returns, the bigger benchmark scores that we love to see, they’re going to come from actually renting more GPUs just to run the models…” — Ed Zitron (37:21)
- “All those GPUs aren’t building better models. It’s just running the bloody things.” — Ed Zitron (37:21)
- “The whole thing [AI 2027 doom scenario] is written, it’s like thousands of terribly written words, lots of scary numbers. But when you read it, it hinges on one idea. Just one. That… [someone] invents an AI that can do AI research.” — Ed Zitron (43:25)
- “If [Hinton] actually wanted to talk about scary bad things that are happening, I don’t know, talk about the Kenyans who are training these models for… $2?” — Ed Zitron (49:23)
- “Is there any conversation about [AI] seriously becoming conscious, AGI, at these companies? …Just no.” — Ed Zitron (107:17)
- “I think [OpenAI] is like an adult summer camp. I think that they’re all just dicking around, doing random projects.” — Ed Zitron (102:02)
- “What have you been doing all year?” — Cal Newport, on OpenAI’s “Code Red” plan (102:18)
- “The era of smiles is beginning. It’s really… It’s dark out there for them. But I’m laughing, I’m having a good time.” — Ed Zitron (105:32)
Timestamps for Key Segments
- [05:09] — DeepSeek, Chinese AI model disrupts the market
- [14:27] — The (mis)quoting and hype around AI “agents”
- [16:47] — Reality: No evidence agents could join the workforce
- [22:50] — The myth and fraudulence of “vibe coding”
- [29:18] — Sam Altman’s “magic” tweet about GPT-4.5
- [37:21] — Nvidia’s GTC conference and post-scaling “inference” era
- [43:25] — The AI 2027 doom scenario dissected
- [51:59] — Critique of Hinton and the “AI safety” movement’s misplaced focus
- [61:46] — MIT study on “AI cognitive debt” makes a splash
- [66:09] — GPT-5’s router model and unintended cost/technical implications
- [74:21] — Media pivots to AI bubble skepticism post-GPT-5
- [80:21] — Examining Oracle, Nvidia, Broadcom megadeals and market delusion
- [92:10] — OpenAI’s financials: $8.7B spent, $4.5B in revenue
- [102:02] — OpenAI’s “Code Red”, media attitude shift, and business reality
- [107:17] — No one building AGI or “conscious” AI; it’s marketing
- [111:00] — Empathy for ordinary people swept up by the hype
- [117:48] — Final answer: “Terrible year. Started off bad, only got worse.” — Ed Zitron
Guest Plug
- Ed Zitron hosts the Webby Award-winning Better Offline podcast and writes Where’s Your Ed At on Substack: deep reporting, technical and financial breakdowns, and media criticism on AI and internet business.
Conclusion
2025 was a watershed year in which AI’s grand narratives collided resoundingly with financial and technical reality. The year began with extravagant promises (autonomous agents, superintelligence on the horizon, economic transformation, job elimination) and ended with those promises unsubstantiated, doomer scenarios fizzling, costs ballooning, and the core technology confronting hard technical and business limits. Analysts, journalists, and even many former boosters have begun to sober up.
Final Verdict:
“Terrible year. Started off bad, only got worse.” — Ed Zitron (117:48)
AI remains an immensely impressive technology, but the era of infinite hype, endless VC money, and world-changing fables—as told by tech CEOs and rationalists—appears to be drawing to a close. The question for 2026: what comes after the bubble?
Recommended Listen:
- Better Offline with Ed Zitron
- “Where’s Your Ed At” Substack
