Deep Questions with Cal Newport
Episode: Is AI Trending Up or Down in 2026? | AI Reality Check
Date: April 23, 2026
Guest: Ed Zitron (AI Commentator & Journalist)
Overview
In this special "AI Reality Check" episode, Cal Newport is joined by AI commentator Ed Zitron to cut through the “whiplash” of AI news in 2026. Together, they revisit three of the year’s biggest AI stories: the OpenClaw "agent" craze, the Anthropic-Department of Defense (DoD) drama, and the reality of the AI data center boom. Their mission: separate media hype from actual impact, and answer whether 2026 has truly been "good or bad for AI so far."
Key Discussion Points & Insights
1. The OpenClaw Agent Craze: Hype vs. Reality
Summary:
OpenClaw, a Python library enabling users to build AI "agents," took 2026 by storm. For a brief period, both media and enthusiasts hyped it as a leap toward AGI—before interest vanished almost overnight.
Breakdown:
- Molt Book Mania ([03:00]–[07:08])
- A social network, "Molt Book," let OpenClaw agents post autonomously, sparking alarmist media coverage about "AI agents taking over."
- Ed: “The moment I saw it… This is just LLMs doing what they think a social network looks like… This is fake.” ([04:04])
- Cal: “I think we memory holed a lot of this coverage… No one talks about Molt Book.” ([05:33])
- Media Sensationalism & Technical Misunderstandings
- Axios ran an overstated headline: “We’re in the singularity.”
- Ed skewers the credulousness: “Anyone who covered this should be ashamed of themselves.” ([09:23])
- What Actually Was OpenClaw? ([09:00]–[14:18])
- Not a new AI brain: it’s a library to help you connect existing LLMs to automate desktop/work routines.
- “Turns out agents are hard to write… plausible stories, not careful plans.” ([12:34], Cal)
- Practical discovery: Users racked up huge cloud bills—prompting a pivot to smaller, cheaper AI models.
- Ed explains how flat-rate pricing loopholes (such as Anthropic’s $200/month “Max” plan) led to losses for AI providers.
- OpenAI Acquisition
- Ed: “I just think they were buying stuff because they thought, crap, we gotta do something. We don’t have an OpenClaw, what if we just bought it? …It’s rich kid syndrome.” ([17:38])
- Key Takeaways
- OpenClaw is fading as a story; lessons learned are about economics and architectural reality—not AGI.
- Cal: “This gave people a taste of [distributed, modular AI]—bespoke systems that do specific things … very different vision than … all-purpose robot overlords.” ([21:12])
2. Anthropic, the Department of Defense & "Ethics"
Summary:
A high-profile spat between Anthropic and the DoD emerged over contract boundaries. Media framed this as an ethical stance—but the real story is murkier.
Breakdown:
- Context & Timeline ([27:34]–[35:55])
- Anthropic, which has held deep military contracts since 2024, made a public show of “refusing” mass surveillance and autonomous weaponization contracts after its LLM was used in the 2026 Iran conflict.
- Ed: “Anthropic has been installed with classified access in the US military since June 2024… They were used in Venezuela, incursion, whatever you call that, … still used in [Iran].” ([28:42])
- The ethical line in the sand? “LLMs are not consistent enough to run autonomous weapons.” ([29:57]) Both agree: that’s a distraction—LLMs have no real role in weapon control.
- Was it Ethics or Optics?
- Anthropic’s stance arrived just as U.S. military operations drew scrutiny and the company courted an IPO.
- Ed: “Anthropic had this swell of good press because people thought they were opposed to the war in Iran, when in fact they were directly part of it.” ([31:41])
- Financial Reality Exposed
- Anthropic, under legal pressure, revealed under oath that it had earned only $5 billion in revenue against $60 billion in investment and debt, a fraction of what media coverage had portrayed.
- Cal: “Anthropic had to, under oath… release their revenues… $5 billion to date on $60 billion of investment and debt.” ([36:42])
- The Media’s Role
- Hot-take coverage often garbles the facts (“Pentagon convinced that Claude has a soul”—an inversion of the actual complaint).
- Ed: "Anthropic is just this wasteful crap pile… But everyone’s like, ‘Anthropic’s capacity is so... their models are so good.’ ...Every time I hear a story like this, I feel like I’m going insane.” ([37:55])
3. The Data Center Boom – How Real is It?
Summary:
Public perception is of a nonstop, resource-devouring AI data center boom. But the numbers—and the logistics—suggest the hype is outpacing reality.
Breakdown:
- Delays and “Phantom” Projects ([48:13]–[54:57])
- A Bloomberg/Sightline report found that only about one-third of the data centers announced in 2026 are actually being built; the rest remain speculative.
- Ed: “Every time you hear someone say we’re building a 2 gigawatt data center, real simple: just say no you’re not. …We do not know how long it takes to build a 1-GW data center because no one has built one.” ([49:06])
- GPU Glut and Accounting Tricks
- Nvidia claims half a trillion dollars in GPU sales—but, as Ed points out, “Where are the GPUs going, Jensen?” ([51:25])
- OEMs/ODMs stockpile chips in warehouses, waiting for data center construction to catch up. Speculative contracts and accounting practices (pre-selling, warehousing, “transfer of ownership”) may be inflating reported value and concealing risk.
- “Nvidia is just pre-selling years of GPUs. …If there's only 15.2 GW of actual capacity being built...Nvidia can’t sell more GPUs unless it wants to put them in a warehouse.” ([54:57])
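Ed’s warehouse argument is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the per-GPU price and all-in power draw are wholly assumed figures chosen for illustration; only the half-trillion sales claim and the 15.2 GW capacity figure come from the episode.

```python
# Back-of-the-envelope check of the "warehouse" argument.
# PRICE_PER_GPU_USD and KW_PER_GPU_ALL_IN are illustrative assumptions,
# not figures from the episode or from Nvidia.

GPU_SALES_USD = 500e9          # "half a trillion" in claimed GPU sales
PRICE_PER_GPU_USD = 40_000     # assumed blended price per data-center GPU
KW_PER_GPU_ALL_IN = 1.2        # assumed power per GPU incl. cooling/overhead
CAPACITY_BUILT_GW = 15.2       # capacity actually being built, per Ed

gpus_sold = GPU_SALES_USD / PRICE_PER_GPU_USD
power_needed_gw = gpus_sold * KW_PER_GPU_ALL_IN / 1e6  # kW -> GW

print(f"GPUs implied by sales: {gpus_sold / 1e6:.1f} million")
print(f"Power they would need: {power_needed_gw:.1f} GW")
print(f"Capacity being built:  {CAPACITY_BUILT_GW} GW")
```

Under these assumptions, the GPUs already sold would consume roughly all of the capacity actually under construction, which is the crux of Ed’s point: further sales have nowhere to plug in except a warehouse.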
- Echoes of Past Tech Bubbles
- Cal likens it to the 2008 financial crisis: “There’s more money...than there are things to actually spend it on...shenanigans follow and you get a very fragile system.” ([59:53])
- If private credit (a substantial source of data center financing) is at risk, so are the retirement and insurance funds exposed to it.
- Ed: “Even the people who are AI boosters should be thinking about this because this is an existential threat. …The maths doesn’t make sense.” ([64:32])
- AI Startups: A Coming Reckoning
- VC investment in AI startups is unsustainably high. Most are just wrappers for someone else’s model (“The actual IP doesn’t exist.” [68:18])
- Ed: “You cannot control the cost of a user with an LLM...your most excited customers are the most expensive, which is antithetical to how a business works.” ([71:17])
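Ed’s unit-economics point—that a flat subscription cannot cap the cost of a heavy LLM user—can be illustrated with a toy calculation. Every number below is a hypothetical assumption chosen for illustration; nothing here reflects any provider’s actual pricing, costs, or usage.

```python
# Toy model of flat-rate LLM subscription economics.
# FLAT_FEE and COST_PER_M_TOKENS are hypothetical illustrative numbers.

FLAT_FEE = 200.0           # assumed monthly subscription price
COST_PER_M_TOKENS = 10.0   # assumed provider cost per million tokens served

def monthly_margin(tokens_millions: float) -> float:
    """Provider profit (or loss) on one subscriber in a month."""
    return FLAT_FEE - tokens_millions * COST_PER_M_TOKENS

light_user = monthly_margin(5)     # 5M tokens  -> $150 profit
power_user = monthly_margin(120)   # 120M tokens -> $1,000 loss
```

The asymmetry is the whole problem: the most enthusiastic subscribers consume the most tokens, so under flat pricing they are also the least profitable—exactly the inversion of a normal business that Ed describes.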
Notable Quotes & Memorable Moments
- Ed on the Molt Book hype: "The moment I saw it... This is just LLMs doing what they think a social network looks like... This is fake." ([04:04])
- Ed on media coverage: "Anyone who covered this should be ashamed of themselves... the worst job possible." ([09:23])
- Cal on AI “agents” and OpenClaw: "OpenClaw is a Python library that makes it easy... to write your own agent. It turns out agents are hard to write..." ([12:34])
- Cal’s financial reality check: "Anthropic had to… release their revenues and it was $5 billion… $60 billion of investment and debt." ([36:42])
- Ed on the GPU glut: "Nvidia is just pre-selling years of GPUs... If there’s only 15.2 GW of actual capacity being built… Nvidia can’t sell more GPUs unless it wants to put them in a warehouse." ([54:57])
- Ed on the startup bubble: “Every single AI startup is a wrapper of a model owned by someone else … you cannot control the cost of a user…” ([71:00])
- Cal on “dread laundering”: “You will launder a sense of despair … about one thing related to AI to help amplify a less supported feeling of dread or despair about another.” ([43:38])
Timestamps for Important Segments
- OpenClaw & Molt Book Hype: [02:28]–[14:18]
- OpenAI Acquisition & Business Dynamics: [14:18]–[22:45]
- Distributed vs. Monolithic AI Visions: [21:10]–[23:00]
- Anthropic and DoD "Ethics": [27:34]–[35:55]
- Anthropic Revenue Reality: [35:55]–[37:39]
- Data Center Speculation & GPU Backlog: [48:13]–[54:57]
- Private Credit & AI Investment Bubble: [59:53]–[64:32]
- AI Startup Unsustainability: [68:18]–[72:22]
Tone and Final Verdict
Both Cal and Ed maintain a skeptical, irreverent, and occasionally exasperated tone, highlighting how breathless media coverage often inflates what are, ultimately, familiar cycles of oversold tech promises. They combine technical rigor with sharp wit, pushing back against panic and boosterism alike.
Final Judgment:
2026 has not been a good year for AI if measured by hype versus actual progress and stability. The field continues to ride a rollercoaster of exaggerated claims, shaky business models, and underlying infrastructure issues that could precipitate a dramatic correction (even if, as Ed notes, it's “not as bad as 2008—but people should be paying attention”).
For listeners catching up, the episode offers a clear, structured retrospective on the key events shaping the AI landscape in early 2026.