The Rest Is Politics: "The Future of Warfare: Anthropic vs OpenAI"
Date: March 5, 2026
Hosts: Alastair Campbell & Rory Stewart
Guest: Matt Clifford (AI policy expert, former UK AI advisor)
Theme: A discussion of AI's growing role in defense, the ethics and sovereignty issues between governments and private technology companies (Anthropic, OpenAI, Starlink), and the urgent need for transatlantic cooperation.
## Episode Overview
In this episode, Rory Stewart and guest Matt Clifford delve into the rapidly evolving intersection of artificial intelligence, defense, and geopolitics. They analyze Anthropic’s controversial contract with the US Department of Defense (DoD), explore precedents set by private tech companies like Starlink in military contexts, and reflect on the vulnerabilities and complexities of modern defense procurement. The conversation also assesses the challenges European states face when relying on foreign-controlled AI and tech infrastructure.
## Key Discussion Points & Insights
### 1. Ethical Boundaries: Private Companies versus Government Control
- The Anthropic-DoD deal sparks debate over whether private companies should draw ethical lines on government defense use of their technology.
- Scenario Analysis: A thought experiment (a Kamala Harris presidency, with Elon Musk imposing ethical red lines on Starlink) illustrates that public opinion on these questions often shifts with politics rather than principle.
- Quote:
- "I think the same people who are protesting now...would be saying, why should a private company be setting red lines over what a democratically elected government does with the technology?" (Guest Expert, [01:07])
### 2. Sovereignty and Dependency in Modern Military Tech
- As military technology grows more complex, states become dependent on the continual service—and sometimes the goodwill—of private contractors.
- Key Example:
- The collapse of the Afghan Air Force after US contractors left, as their planes became inoperable without ongoing software updates.
- Quote:
- "What really happened to the Afghan military was not the US decision to withdraw 5,000 troops. It was that the US withdrew about 20,000 contractors...as soon as they were removed, all these helicopters and planes...were completely useless because they basically needed a software engineer tinkering with them every time they landed." (Matt Clifford, [03:12])
- The notion of “ownership” changes: “If what you've purchased is a hand grenade, nobody can stop you using a hand grenade...But increasingly, what we're talking about is defense companies...they make their money through software engineers that maintain them.” (Matt Clifford, [02:44])
### 3. Ultimate State Power: Nationalization as Last Resort
- Governments, in extremis, have the option to nationalize critical technology (e.g., Starlink), but this is drastic and only likely in true wartime.
- Quote:
- “Short of nationalizing Starlink or Anthropic, which they could do...you are very vulnerable to companies no longer providing.” (Matt Clifford, [04:34])
### 4. Contradictions in AI Company Messaging
- Critique of Anthropic (and similar firms): it is inconsistent to spend years claiming you are building potentially civilization-ending technology, then be surprised when governments insist on stronger oversight.
- Quote:
- "There is a contradiction in spending 3, 4 years saying, by the way, we're building incredibly dangerous technology...then being upset when government says, well, in that case we really need to control it." (Guest Expert, [05:08])
### 5. Competitive Dynamics: Anthropic vs OpenAI
- The arms race between AI labs (Anthropic, OpenAI) is dangerous: if one holds back on defense applications, another may step in for profit, with little strategic consideration of safety.
- Public negotiations and policy posturing are occurring in real time, sometimes even on social media.
- Quotes:
- “As soon as Anthropic refuses to supply, Sam Altman turns up and seems to initially say, oh, well, I'm not too bothered about surveilling everyone and killing everyone with autonomous weapons...then a couple of days later it's like, oh, okay, maybe I shouldn't have said that and I'm going to backtrack.” (Matt Clifford, [05:39])
- “Someone pointed out it's the first live negotiation on Twitter between a private company and the Department of Defense.” (Guest Expert, [06:09])
### 6. Wider Political Context: U.S. Instability
- Deep concern over the current U.S. administration’s influence (President Trump, Secretary of War Pete Hegseth).
- Accusations of impulsiveness, lack of strategic direction, and the risk that critical AI advances are happening in a uniquely unstable political environment.
- Suggestion that China may handle long-term AI risks more systematically than the contemporary U.S.
- Quote:
- "If you were really thinking what is the ideal political framework in which to develop...the most dangerous world changing technology...My God, we're unlucky that it's happening in the middle of Trump's America." (Matt Clifford, [06:36])
### 7. Strategic Autonomy for the U.K. and Europe
- The episode closes with a call for Europe and allies to build domestic AI development capacity to avoid dependence on U.S. firms.
- The urgency of building compute power, acquiring advanced chips, and developing homegrown frontier AI capacity is stressed.
- Quotes:
- “We need to have bargaining chips, we need to be relevant, and at the moment, we don't have the hard assets in AI to have a seat at the table.” (Guest Expert, [07:33])
- “Let’s put the economies of Germany, France, the European Union, together with Canada, the UK...and let's solve the problems that Matt's raised. Let's work out how the hell we do build a 2 gigawatt center...get the advanced chips...get one of these frontier AI models here, because everything that you've told me is terrifying.” (Host, [08:12])
## Notable Quotes & Memorable Moments
| Timestamp | Speaker | Quote |
|-----------|---------------|-------|
| [01:07] | Guest Expert | "Why should a private company be setting red lines over what a democratically elected government does with the technology?" |
| [03:12] | Matt Clifford | "The Afghan Air Force...were completely useless because they needed a software engineer tinkering with them every time they landed." |
| [04:34] | Matt Clifford | "Short of nationalizing Starlink or Anthropic, which they could do...you are very vulnerable to companies no longer providing." |
| [05:08] | Guest Expert | "There is a contradiction in...building incredibly dangerous technology...then being upset when government says, well, in that case we really need to control it." |
| [06:09] | Guest Expert | "It's the first live negotiation on Twitter between a private company and the Department of Defense." |
| [06:36] | Matt Clifford | "My God, we're unlucky that it's happening in the middle of Trump's America." |
| [07:33] | Guest Expert | "We don't have the hard assets in AI to have a seat at the table." |
| [08:12] | Host | "Let's work out how...we get one of these frontier AI models here because everything that you've told me is terrifying." |
## Timeline of Key Segments
- [01:07] – Thought experiment: private sector vs government control over technology
- [02:44–04:34] – Dependency on contractors and the Afghan example
- [05:08] – Contradictions in AI companies’ ethical postures
- [05:39–06:09] – Competitive dynamics (Anthropic vs OpenAI) and public policy drama on Twitter
- [06:36] – Critique of current U.S. leadership and global AI risk management
- [07:33–08:12] – Urgent call for European and ally AI investment and strategic autonomy
## Tone and Style
The episode is characterized by frank, insightful debate, with an undercurrent of concern about the current trajectory of AI’s military integration and the fragile nature of Western technological sovereignty. The conversation remains accessible but serious, blending policy analysis with personal experience and historical analogy.
