DealBook Summit: Anthropic Chief Executive Has A.I. Bubble ‘Concerns’
Podcast: DealBook Summit
Host: Andrew Ross Sorkin, The New York Times
Guest: Dario Amodei, CEO & Co-founder, Anthropic
Date: December 4, 2025
Episode Overview
In this episode, Andrew Ross Sorkin sits down with Dario Amodei, CEO and co-founder of Anthropic, at the 2025 DealBook Summit in New York City. The conversation explores the realities and risks of the explosive growth in artificial intelligence, questions over an “AI bubble,” the financial models behind leading AI companies, industry competition, national security, regulatory dynamics, and the societal impacts of AI—especially on jobs and the future of work. Amodei offers a candid, deeply informed perspective based on his central role in the field, including his previous work at Baidu, Google, and OpenAI.
Key Discussion Points & Insights
1. Reflections on AI’s Rapid Growth
- Surprises and Expectations Since 2014
- Amodei is unsurprised by AI’s economic and societal importance—"that it would be central to the economy, that it would be central to national security, that it would be central to scientific research" (03:01).
- He did not foresee becoming a leader in the space or the financialization of the AI sector.
- Scaling Laws & Technological Confidence
- Amodei reiterates his confidence in the fundamental scaling laws of AI: "You put more compute, you put more data into AI with small modifications...they get better and better at every task" (04:36).
2. The AI Bubble: Is It Real?
- Technological vs. Economic Sides
- Amodei: "On the technological side of it, I feel really solid...On the economic side, I have my concerns" (04:17).
- Current industry investment numbers (“$100 billion a year” in spend) reflect high-stakes bets on future value, but there’s a “real dilemma deriving from uncertainty in how quickly the economic value is going to grow” (07:07).
- He describes a "cone of uncertainty": planning massive infrastructure investments years before future revenue is known. “There is what I've been calling internally, this cone of uncertainty, where I don't know if a year from now it's going to be 20 billion or it's going to be 50... it's very uncertain. I try to plan in a conservative way...” (07:47)
- YOLO-ing Players & Risk Management
- Amodei expresses concern over some industry actors who are “YOLOing, who pull the risk dial too far” in their growth assumptions (07:07), without naming names.
3. AI Financial Structures & “Circular Deals”
- Vendor Financing and Chip Investments
- Amodei explains the logic behind chipmakers like Nvidia taking stakes in AI companies, which then use that capital to buy chips, and offers a rational defense of such deals under “reasonable” growth assumptions. “One player has capital and has an interest because they're selling...the chips. And the other player is pretty confident they'll have the revenue at the right time. But they don't have $50 billion at hand. So I don't think there's anything inappropriate about that in principle. Now if you start stacking these to huge amounts of money...then, yeah, you can overextend yourself…” (14:44)
- Depreciation Schedule Debates
- On how long chips retain competitive value, Amodei takes a conservative stance, acknowledging rapid hardware cycles: “The issue isn't the lifetime of the chips. Chips keep working for a long time. The issue is new chips come out that are faster and cheaper...we make conservative assumptions here, and we think we're going to be okay in basically almost all worlds” (15:53).
4. Competitive Landscape: Anthropic, Google, OpenAI
- Enterprise vs. Consumer Focus
- Anthropic places itself outside the high-profile “model wars” between Google and OpenAI, emphasizing its enterprise focus: “Both of these other two players are primarily focused on the consumer...we've optimized our models more and more for the needs of businesses” (17:46).
- Model Differentiation and Stickiness
- Amodei disputes the idea that AGI would make all models interchangeable: “Specialization exists. It exists alongside general intelligence…even our API business...companies have great difficulty switching from one model to another...” (20:23)
5. AGI: How Close Are We and How Do We Get There?
- Scaling Will Get Us There
- Amodei argues against sudden leaps: “Scaling is going to get us there...every few months we release a new model [that] gets better at coding and it gets better at science...” (21:20).
- Timeline Skepticism
- He dislikes hard AGI timelines, seeing progress as “just an exponential” like Moore’s Law with no special inflection point (21:35).
6. National Security & US-China Technology Policy
- Chips and Export Controls
- Amodei reiterates his strong stance against selling advanced chips to China, citing potential national security threats: “Eventually, the models are going to get to the point where they look like a country of geniuses in a data center...If it's plopped down in an authoritarian country, I feel like they can outsmart us in every way” (24:13).
- Surveillance Risks—Here and Abroad
- While more concerned about authoritarian regimes, Amodei warns of surveillance risks in democracies: “We should aggressively use [AI] in every possible way, except in the ways that would make us more like our authoritarian adversaries...We need to beat them, but we need to not do the things that would cause us to become them.” (25:29)
7. Regulation, Policy Fights, and Accusations of ‘Regulatory Capture’
- Accusations and Exemptions
- Responding to accusations (from figures like David Sacks) of “regulatory capture” and fear-mongering, Amodei points out that legislation he’s supported contains exemptions for startups and small AI players: “Almost all the AI regulation that we've supported has exemptions for small players...” (27:18)
- Why Regulation Is Different This Time
- Amodei distinguishes AI from past tech waves, warning the stakes are higher: “Those who are closest to AI don’t feel [that regulation can wait]...If you poll the actual researchers...they're excited...but they're also worried...” (28:25)
- He uses a car metaphor for the folly of banning regulation: “Saying that for 10 years we won't regulate that technology...It's like saying, I'm driving a car, I'm going to rip out the steering wheel because I don't need to steer for 10 years.” (30:44)
8. Jobs, Productivity, and the Future of Work
- Job Losses and Societal Adaptation
- Amodei acknowledges the risk to “half of all entry-level jobs,” but emphasizes adaptation rather than doom: “Warning about them is the first step towards solving them...if we don't warn about them, we'll just kind of blindly walk into the landmine...” (31:04)
- Three Levels of Response:
- Private Sector Adaptation: Encourage value creation via AI that augments rather than just replaces human work.
- Government Role: Retraining programs and possible fiscal policies to distribute gains from increased productivity. “I think the government is going to need to have some role here” (32:43).
- Societal Transformation: Society must consider new structures for a world where work is less central—citing Keynes’s prediction of shorter workweeks and greater fulfillment (33:46).
Notable Quotes & Memorable Moments
- On Model Scaling: “As you train these models in this very simple way, you know, with a few simple modifications, they get better and better at every task under the sun.” (04:36, Dario Amodei)
- On the ‘Cone of Uncertainty’: “We want to buy enough compute that we're confident, you know, even in the 10th percentile, you know, scenario...But we're trying to manage that risk well while also buying an amount of compute that allows us to be competitive with the other players.” (11:46, Dario Amodei)
- On Industry Risk-Taking: “I think there are some players who, you know, who are YOLOing, who pull the risk dial too far. And I'm very concerned.” (07:07, Dario Amodei)
- On US-China AI Rivalry: “We need to beat [authoritarians], but we need to not do the things that would cause us to become them. That is the one constraint we should observe.” (25:29, Dario Amodei)
- On the Need for Regulation: “If you poll the actual researchers who work on AI...they're excited about the potential, but they're also worried...We as a society...need to think ahead about those downsides.” (28:25, Dario Amodei)
- On Jobs and Social Structure: “I think society is flexible and society can...we all need to figure out how to operate in the post AGI age.” (34:20, Dario Amodei)
Timestamps for Key Segments
- Introduction & Amodei’s Background: 00:57–02:29
- AI’s Surprising Growth & Personal Journey: 02:29–03:41
- Are We in an AI Bubble?: 04:11–05:50
- Revenue “Cone of Uncertainty” & Infrastructure Gamble: 07:07–09:56
- Circular Deals and Chip Investments: 12:33–15:05
- Chip Depreciation & Business Strategy: 15:05–16:43
- Model Wars: Google, OpenAI, Anthropic: 16:53–19:44
- AGI Timelines & Scaling Law Perspective: 20:57–22:46
- AI, National Security, & China: 22:46–25:11
- Surveillance, Policy, & Political Leadership: 25:11–26:39
- Regulatory Debates & Capture Allegations: 26:39–30:44
- AI’s Impact on Jobs & Societal Structure: 30:44–34:57
Conclusion
This episode delivers a rare, nuanced perspective from one of AI’s central architects. Dario Amodei voices optimism for AI’s vast potential while issuing sobering warnings about financial bubbles, regulatory inaction, security threats, and existential impacts on jobs and society. His “cone of uncertainty” metaphor and candid assessment of industry risk-taking bring clarity to the high-stakes, forward-looking debates within the sector. If you want to understand not just what’s happening, but what’s at risk and how insiders see the road ahead, this conversation is essential.
