The a16z Show: “Ben Thompson: Anthropic, the Pentagon, and the Limits of Private Power” (March 5, 2026)
Episode Overview
This episode features Ben Thompson (of Stratechery) in conversation with John Coogan and Geordie Hayes, exploring the escalating conflict between private AI companies (specifically Anthropic) and the U.S. government. Prompted by the Department of War’s recent designation of Anthropic as a “supply chain risk” over its AI model's safeguards, the discussion dives into the power dynamics between the state, private enterprise, and the future of AI governance. Thompson articulates the profound implications of AI as a tool of power—comparing it to nuclear weapons—and examines the dangers of private companies unilaterally making far-reaching decisions that traditionally belong to the public sphere.
Key Discussion Points & Insights
1. Private Power vs. Government Authority
- Nuclear Analogy & The Role of Government
- Thompson echoes Dario Amodei's analogy of AI as the next nuclear weapons—what if a private company developed something so powerful, it would demand state-level intervention?
- Quote: “If we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons. What would the government's response be?” (02:09, Ben Thompson)
- The Realpolitik of Powerful Technologies
- Thompson is not making a normative argument (“this is good/bad”) but rather observing a structural reality: if AI is as powerful as claimed, “the people with guns are going to want a say,” whether that's the U.S. or a nation like China.
- Memorable Quote: “You may not be interested in politics, but politics has an interest in you.” (00:00, Ben Thompson & repeated throughout)
2. Anthropic, Alignment, and International Tensions
- Anthropic’s Position & Taiwan/China
- Thompson critiques the lack of nuanced public discussion about the trade-offs in restricting AI chips to China and the geopolitical consequences for Taiwan.
- He argues it may be safer to maintain China’s dependency on Taiwan chip manufacturing rather than completely sever those ties, citing that otherwise, China could be incentivized to attack Taiwan, destabilizing global supply chains.
- Quote: “Taiwan is 70 miles off the coast of China. It's not an ideal position… All this stuff—everything going forward has massive trade-offs.” (08:09, Ben Thompson)
- Trade-Offs and Absolutism
- Thompson highlights that every decision in tech policy and geopolitics has “massive trade-offs,” and unilateral restrictions can be “inviting very bad outcomes.”
3. Democracy, Private Decision-Making, and the Problem of Alignment
- Who Decides?
- The episode explores the paradox of alignment: aligning AI to “humanity” is often a stand-in for aligning it with a particular nation’s interests—here, the U.S.—so long as nation-states exist.
- Thompson is wary of allowing private executives to become de facto sovereigns: “If AI is what it is, I think that's going to be...intolerable—to those with power—to have a private executive making those decisions…” (19:58, Ben Thompson)
- Law, Power, and the Limits of Private Action
- The fundamental reality, per Thompson: “All these laws are subject to the agreement of those governed…And the final say is those who successfully inflict violence.” (13:25, Ben Thompson)
- Memorable moment: Thompson’s blunt reduction of why we pay taxes (“because someone with guns will come to my house and throw me in jail”) as a metaphor for state power over private innovation. (18:51–19:14, Ben Thompson)
- Democratic Legitimacy vs. Technocratic Control
- Thompson points out the danger in surrendering policy decisions to unelected, unaccountable executives. Even if Congress is slow or dysfunctional, “giving up on the democratic process” for private governance is “quite fraught.” (24:20, Ben Thompson)
4. Information Asymmetry and Policy Conflict
- The Anthropic-Pentagon Dispute
- There’s speculation on how information asymmetry led to recent events: the Pentagon may have advanced classified knowledge of an impending major conflict, while Anthropic saw the government’s deadline as arbitrary.
- Thompson cautions against over-parsing the incident but says it is emblematic of bigger structural forces at play. (11:27–13:25)
5. AI, Surveillance, and Legal Loopholes
- Surveillance Law & Tech Evolution
- Thompson is alarmed at how old laws, written for a world of friction, now fail in a world of computers at scale and AI that obliterates those limits.
- Quote: “So many things in our society assumed a certain level of friction in doing things that computers already obviated. And AI is going to just do that on steroids.” (02:09, Ben Thompson)
- On the NSA and domestic surveillance: “You just sort of thought about it as like an independent agent, like the CIA. But that's—[the NSA] made a lot of this story make more sense, right?” (26:19, Ben Thompson)
- Legislative Stalemate
- Thompson’s prescription: “We need new laws.” He urges Congress to address digital surveillance rather than leave it to private executive fiat or government overreach.
- Quote: “If you don't have ‘it's legal or not legal’ as your guiding standard, the only alternative is someone has to decide. And…the implication of that…means a private executive is deciding.” (19:58, Ben Thompson)
6. Historical Parallels and Corporate Dilemmas
- Intel’s Approach & The Limits of Public Sector Innovation
- Thompson references Bob Noyce’s philosophy at Intel: sell to the government, but don’t design exclusively for it—general-purpose commercial innovation outpaces bespoke government projects.
- In AI, the scale and cost involved—“hundreds of billions, approaching a trillion dollars a year in capex”—require a mass-market approach; the government cannot sustain this alone. (15:56–18:50, Ben Thompson)
- OpenAI & “Jailbreak” Agreements
- OpenAI’s approach is contrasted with Anthropic: it seems to allow lawful Pentagon uses, while still maintaining its ability to block digital surveillance—resulting in a “jailbreak competition” with government customers. (29:11, Ben Thompson)
- Anthropic is seen as enjoying stronger public and talent-base support (“local advantage in SF”) but a weaker position nationally.
7. Broader Reflections and the Future
- Silicon Valley and its Obligations
- Thompson notes Silicon Valley’s long-standing tension about working with the military, the relevance of Project Maven, and the uneasy position tech giants find themselves in regarding defense work.
- Quote: “There is this very naive view of the world that doesn’t understand why militaries are important and necessary. And I think Silicon Valley got itself in a lot of trouble by giving in to this naive mindset…” (32:21, Ben Thompson)
- The core question: Are tech companies just another arm of American state power, or do they have the right (even obligation) to resist?
- Democracy and Private Power
- Thompson warns against drifting toward “unelected, unaccountable individuals making weighty decisions” if public policy abdicates to private actors. (24:57, Ben Thompson)
8. Notable Quotes & Memorable Moments
- On Realpolitik:
- “At the end of the day, if you really want to distill down…If I don’t [pay taxes], someone with guns will come to my house and throw me in jail.” (18:51–19:14)
- On Law and Power:
- “All these rights, all these laws are subject to the agreement of those governed by them…And the final say is those who successfully inflict violence.” (13:25)
- On Private Governance:
- “That implication is quite fraught.” (24:20)
- On AI’s Uniqueness:
- “Is AI actually [comparable] to every other technology that's come along, or if it [has] the potential to be a source of power…it's going to be dealt with as such.” (33:53)
9. Final Reflections
- AI’s Future as a “Source of Power”
- Closing thoughts circle back: as AI becomes central to national—and even international—power, both industry and policy must reckon with the “limits of private power.”
Important Timestamps
- 00:00 – “You may not be interested in politics, but politics has an interest in you.”
- 02:09 – Nuclear analogy and central thesis on private vs. public power.
- 08:09 – Trade-offs in U.S./China chip policy and global risk calculus.
- 13:25 – Laws, violence, and the roots of government authority.
- 18:51 – Why we pay taxes: power, not just legality.
- 19:58 – The “intolerability” of private executives making sovereign decisions about AI.
- 24:20 – Warning about the democratic process and private governance.
- 32:21 – Silicon Valley’s ambivalence about supporting the military.
- 33:53 – Is AI a fundamentally new order of technology?
Flow, Tone, and Concluding Thoughts
The conversation mixes philosophical, historical, and practical policy analysis, with Thompson maintaining a sober, slightly exasperated tone. He is careful not to pass normative judgment but repeatedly stresses that “power will want a say,” and that neither idealistic nor purely market-based approaches adequately grapple with the scale of what is now at stake. By episode’s end, the hosts and Thompson agree that the current moment is unprecedented, forcing both governments and companies to confront foundational questions previously kicked down the road.
Summary Author’s Note:
The discussion is essential listening for technologists, policymakers, and anyone attentive to the evolving intersection of AI and governance, as it frames the next decade’s struggles in terms that are refreshingly clear-eyed—if unsettling.
