The Times Tech Podcast
Episode: Anthropic vs Pentagon: How AI is changing war
Date: March 6, 2026
Hosts: Danny Fortson (San Francisco), Katie Prescott (London)
Guest: Sean Gourley (founder & former CEO, Primer; AI Defense Advisor)
Episode Overview
This episode delves into the explosive conflict between Anthropic—a leading AI company—and the US government, exploring how AI is transforming modern warfare. On the heels of American airstrikes on Iran reportedly aided by Anthropic’s technology (Claude), the Trump administration abruptly cut ties with the company, designating it a national security threat. The hosts break down the ethical, political, and technological ramifications, chat to defense AI expert Sean Gourley, and tease out the larger geopolitical implications—especially as the AI arms race with China intensifies.
1. Setting the Scene: The Anthropic-Pentagon Feud (02:12–07:24)
- Trump’s Dual Stance: Just as AI built by Anthropic allegedly played a role in the Iran airstrikes, President Trump announces a ban on all federal use of Anthropic’s tech, declaring on Truth Social:
“We don’t need it, we don’t want it and we will not do business with them. Again, exclamation point, just to underline the point.” (02:26, quoted by Danny)
- Nature of the Dispute:
- Anthropic’s contract with the DoD originally allowed use of its AI models for:
- Intelligence analysis, cyber ops, operational planning, simulations.
- But Anthropic insisted on two “red lines”:
- No domestic surveillance inside the US.
- No fully autonomous, human-out-of-the-loop weapons.
- The Fallout:
- Pentagon gives an ultimatum; Anthropic refuses and loses an alleged $200M government deal.
- Anthropic is labeled a security threat—unprecedented for a Silicon Valley company.
- Anthropic vows to sue; OpenAI swoops in to take the DoD contract, touting even stricter red lines—which confuses the debate.
Memorable Moment:
“It all went pear shaped...and amazingly, the Defense Department...designated Anthropic, an American company, a security threat and a risk to its supply chain.”
— Danny (05:21)
2. Key Players and Personalities (08:29–19:38)
- The crisis turns as much on individual leadership and rivalries as technology.
Emil Michael: The Pentagon’s “CTO” (09:11–13:18)
- Uber’s infamous business exec, known for aggressive dealmaking and controversial culture.
- Now, as Undersecretary of War, he’s tasked with injecting AI into US defense.
- Quote from Bloomberg:
"We can't let any one company stand between us and the warfighter because they don't make the rules. Congress makes the rules, the president signs them, we execute them, and we do so safely." (12:35, Emil Michael)
Sam Altman vs Dario Amodei (13:31–17:10)
- Altman (OpenAI): Initially backs Anthropic’s ethical stand, but takes Pentagon deal after Anthropic is dropped, drawing backlash.
- Explains flip-flop:
"One thing I did, I think I did wrong. We shouldn't have rushed this thing out on Friday. The issues are super complex and demand clear communication. We were genuinely trying to deescalate things..." (17:10, Altman)
- Anthropic was founded in a split from OpenAI to focus on AI safety; now it is caught on its own ethical knife edge.
Culture Wars and Tech Ethics
- Political polarization: Users pick AI tools based on perceived ethics and politics; OpenAI seen as “Trump’s partner.”
- Larger question: Should private tech CEOs or the government decide the limits of military AI?
3. The AI-Defense Dilemma: Interview with Sean Gourley (21:38–40:52)
Segment Start [21:38]
Gourley’s Insider Perspective (21:47–24:09)
- Built one of the first AI tools for US intelligence.
- Early AI deployments (e.g. Project Maven) revealed bureaucratic and cultural challenges—especially getting “social license” from internal teams to work with the military.
The Strategic Stakes (24:09–27:52)
- “Those who control artificial intelligence will control the geopolitical sphere through military dominance.” — Gourley (24:09)
- AI is the “third technological offset” in military history (first: nukes; second: precision/stealth).
- Rapid technological improvement:
“A human piloted F-35 against a machine piloted autonomous drone… The autonomous drone will beat the human again and again.” (26:24)
- If a nation secures even a six-month lead in fielding advanced autonomous weapons, it could dominate global conflict.
Moral Dilemmas for AI Companies (30:14–32:19)
- Gourley argues that once a company builds and deploys AI tech for the Pentagon, pulling back is either naive or irresponsible:
“If you’re so fortunate as a company to build a technology that has immense military advantages, you don’t just get to hit stop and say, I’m out.” (31:03)
- When AI firms restrict the government beyond legal limits, they risk subverting democracy rather than protecting it.
Employee “Social License” (32:19–35:51)
- Google’s Project Maven debacle: Employee revolts show how friction over defense work can cripple AI ambitions (“social license” lost).
- “In such a competitive war for talent…losing 10, 20% of your top researchers could put you back six months compared to your competitors…” — Gourley (34:44)
- Founding a company on “do no harm” makes later pivot to defense highly fraught.
Impossible Regulation, Global Arms Race (35:51–40:13)
- Gourley considers international AI regulation futile:
“It’s incredibly difficult to regulate a set of weights that can be downloaded onto a thumb drive.” (36:14)
- The “rules-based order” is crumbling in favor of “might is right”; arms races will define AI’s trajectory.
- The threat is real and immediate—e.g., Anduril’s unveiling of the world’s first fully autonomous fighter jet:
“If China were to get a six month lead on that…they would take advantage…because I think whoever owns that is going to own the skies…” (39:12)
Light Note – The Top Gun Problem (40:13–40:39)
- Joking about AI not replacing Tom Cruise in Top Gun:
Katie: “Nothing is going to look as good as Tom Cruise in Top Gun, guys. Come on.” (40:16)
4. Reflection & Key Takeaways (40:56–42:30)
The Human Burden
- Katie reflects: “With great power comes great responsibility.” (41:35)
- Danny: AI engineers and company leaders have to seriously confront the reality that their creations will inevitably be used in war—including for lethal purposes.
Final Provocation
- Katie: "Who decides the guardrails? Is it the military? The government? Silicon Valley bosses?" (42:48)
- The debate remains unresolved—and the episode ends inviting listener feedback on these foundational questions.
Notable Quotes
- President Trump (quoted):
“We don’t need it, we don’t want it and we will not do business with them. Again, exclamation point…” (02:26)
- Emil Michael (Pentagon):
“…We can’t let any one company stand between us and the warfighter because they don’t make the rules.” (12:35)
- Sean Gourley:
“Those who control artificial intelligence will control the geopolitical sphere through military dominance.” (24:09)
“If you get a six month lead on your opponent, you may have twice the capabilities…which may mean you have half or a quarter of the attrition in any conflict.” (37:50)
- Danny Fortson:
“To put a fine point on it…they’re going to be used to kill people.” (41:35)
Timestamps for Key Segments
- 02:12 – The Anthropic-Pentagon battle explained
- 05:21 – Anthropic labeled security threat; OpenAI steps in
- 09:11 – Emil Michael’s Uber history and Pentagon role
- 13:18 – Pentagon’s view on tech company red lines
- 15:11 – The OpenAI/Anthropic rivalry and fallout
- 21:38 – Sean Gourley interview begins
- 24:09 – Why AI is now central to military power
- 26:24 – War games: AI vs. human pilots
- 32:19 – The “social license” issue at AI firms
- 36:14 – The “impossibility” of regulating AI
- 38:40 – Autonomous fighter jets and the China threat
- 40:56 – Hosts’ closing reflections
Tone & Style
- Witty, open, at times irreverent, engaging deeply with tough moral and strategic questions.
- Hosts use humor to lighten heavy, high-stakes topics (“Would you watch an AI play volleyball greased up in jeans?” – 40:31).
- Expert insights provided without jargon, making the cutting edge of AI military technology accessible to a broad audience.
In Summary
This episode uses the dramatic Anthropic-Pentagon crisis to expose how the arrival of powerful, general-purpose AI tools is forcing governments, tech companies, and society to confront previously theoretical, but now urgent, questions of ethics, control, and international security. With the stakes raised by geopolitical tensions, gut decisions in Silicon Valley boardrooms may now carry consequences measured in lives—and in the fate of nations.
