"Anthropic vs. the Pentagon: Inside the Battle Over A.I. Warfare"
The Daily (The New York Times), March 9, 2026
Host: Natalie Kitroeff
Guest: Shira Frenkel (New York Times tech reporter)
Episode Overview
This episode explores the explosive standoff between AI giant Anthropic and the Pentagon amid the escalating U.S. military campaign against Iran. At the heart of the story is an unprecedented power struggle over who dictates the boundaries and ethical use of AI in warfare. Through the lens of negotiations between Anthropic and the U.S. Department of Defense, the conversation lays bare the stakes, the future of "robot wars," and how private tech companies, military leaders, and their employees are shaping—and being shaped by—the coming reality of AI-backed conflict.
Key Discussion Points & Insights
1. AI’s Role in U.S. Military Operations
- Practical Deployment in War: The U.S. is now employing AI from companies like Anthropic to analyze intelligence and satellite imagery and to identify military targets in Iran.
- “AI can analyze data for the military faster than a human being possibly could. It’s proving its worthiness every single day.” (Shira Frenkel, 02:17)
- Increasing Mutual Dependence: The Pentagon and Silicon Valley are now more intertwined than ever—AI companies need government contracts, and the military needs cutting-edge tech to stay competitive.
2. Origins of the Battle: From Cooperation to Crisis
- Invitation to Silicon Valley: In a spirit of optimism, the Pentagon invited leading AI players—Anthropic, OpenAI, Google, xAI—to contribute technology for military use. Anthropic quickly emerged as an indispensable partner and was even authorized to work on classified systems.
- “Anthropic emerges as kind of the best and the most seamlessly integrated into the Pentagon systems...it really quickly became absolutely fundamental to their work.” (Shira Frenkel, 05:21)
- Anthropic’s Reputation: Known for a focus on AI safety and ethical standards, Anthropic’s deep government integration surprised some given their origin story as a “socially responsible” alternative to OpenAI.
- “This is a company that was founded by people who left OpenAI because they wanted a safer AI company.” (06:09)
- Turning Point – The Maduro Operation: Tensions escalated when reports claimed Anthropic’s technology was used in the capture of Venezuela's Nicolás Maduro—raising fears about undisclosed or inappropriate uses of AI.
- “It was even surprising, confusing for people who work at Anthropic who did not know if their technology was used in the Maduro raid.” (Shira Frenkel, 08:18)
3. Core of the Crisis: Who Sets AI Red Lines?
- Anthropic’s Demands: Anthropic wanted contractually binding language prohibiting:
- Use of its models for mass surveillance of Americans
- Use in autonomous weapon systems
- Rationale: Concerns ranged from the models’ error rates and the risk of a catastrophic PR fallout to internal employee unrest.
- “When it comes to something like picking a target to hit with a missile, that kind of error rate could mean life or death.” (Shira Frenkel, 10:18)
- “...a PR nightmare on their hands where Americans are contending with this very real-life use case where...AI chose the wrong target and humans were killed.” (10:33)
- “People who work there are worried about the use of AI in war. They really risk alienating a lot of the people that they paid a lot of money to come work at that company.” (11:08)
- Pentagon’s Response: The Department of Defense bristled at any private sector meddling in national security decisions.
- “You are a private company. You do not get to make these calls. Whoever decides that AI is ready to control a weapon should be sitting here in the Pentagon, in the military.” (Shira Frenkel, 11:39)
- “We are going to implement all lawful uses of this technology.” (12:00)
4. The Showdown: “Sign or Be Blacklisted”
- Negotiations Peak: The Pentagon gives Anthropic a Friday deadline to accept full military access or face consequences.
- “Defense Secretary Pete Hegseth gave CEO Dario Amodei until the end of the week to sign a document ensuring the military would have full access...” (12:41)
- The Threats:
- Labeling Anthropic a “supply chain risk” (shutting it out of all U.S. government contracts)
- Invoking the Defense Production Act—potentially forcing Anthropic’s compliance
- “[Pentagon]: Either we force you to comply or inflict a ton of pain...by punishing anybody else that does business with them.” (14:00)
5. Industry Fallout & OpenAI’s Role
- Surprising Solidarity in Silicon Valley: Rival AI companies—including, despite past acrimony, OpenAI’s Sam Altman—publicly back Anthropic’s red lines.
- “He even stands up and he says...‘I back them. I back Anthropic.’” (14:38)
- Anthropic Holds Out—But Misses Deadline:
- Pentagon designates Anthropic a supply chain risk; President Trump denounces the company as “radical left woke.”
- “Anthropic is a supply chain risk. It's going to be booted, banned from the entire federal government...” (15:38)
- OpenAI Steps In: While Anthropic is blacklisted, OpenAI swoops in—with a very different negotiation strategy.
6. OpenAI’s Tactic: Safety by Code, Not Contract
- Sam Altman’s Approach: Instead of legal language, OpenAI embeds safety and guardrails directly into the AI code ("writing into the stacks").
- “What Sam Altman did was say...We’re going to make sure [safety measures] are there...in the stacks.” (17:50)
- Anthropic’s Objection: Safety written into code isn’t good enough—it can be undone or changed at any moment.
- “When you write something into the stack, it can be unwritten. You can write something else the next day.” (Shira Frenkel, 18:45)
- Result: Pentagon gets what it wanted—AI with flexible, revocable “safeguards,” not binding conditions.
7. Aftermath: Winners, Losers, and Chilling Effects
- Immediate Results:
- OpenAI wins a huge contract but faces employee backlash for failing to insist on firm red lines.
- Anthropic is shut out but emerges as a “hero” among tech workers for holding to its principles.
- “Anthropic’s Claude technology shoots to the top of the App Store...they have not just become a household name, but...synonymous with security, with safe AI. That's a huge PR win.” (21:10)
- Broader Impact:
- “I spoke to someone who works at Google who said, ‘that's, that's terrifying. If they can threaten to label Anthropic a supply chain risk...what’s to stop them from doing it to any tech company in Silicon Valley if they don’t get their way?’” (20:10)
- Silicon Valley–Pentagon trust built over years was “shattered in the last week.”
- Internal Shakeups:
- OpenAI’s Sam Altman forced to do damage control: “He’s had to meet with his own employees more than once to assure them that he's going to seek a safe contract...He’s actually sought new language now around the mass surveillance of Americans.” (22:52)
Notable Quotes & Moments
- On Anthropic’s Motivation:
“What they also are, however, is a company that really believes in working with the government...They are, by all accounts, deeply patriotic as well.” (Shira Frenkel, 06:17)
- On Government Ultimatums:
“President Trump called Anthropic a radical left woke company which will not dictate how the United States fights and wins wars.” (15:50)
- On the Silicon Valley Reaction to Pentagon Power:
“If they can threaten to label Anthropic a supply chain risk...what’s to stop them from doing it to any tech company in Silicon Valley if they don’t get their way?” (20:10, as relayed by a Google employee)
- On the Inevitability of AI Warfare:
“When you speak to some of these technologists, they describe what the world looks like in the future...a war in which there’s no human soldier on the battlefield...” (Shira Frenkel, 25:08)
Key Timestamps
| Timestamp (MM:SS) | Segment |
|-----------------------|-------------|
| 02:04 – 03:37 | How AI is used in U.S. military operations; Pentagon’s needs |
| 04:50 – 06:50 | How Anthropic became the Pentagon’s AI of choice |
| 07:42 – 11:29 | Crisis origins: Anthropic’s red lines and Pentagon’s anger |
| 12:22 – 15:38 | High-stakes negotiation & the “blacklist” threat |
| 16:03 – 19:06 | OpenAI’s backdoor deal and differing safety strategies |
| 20:10 – 22:52 | Fallout in Silicon Valley; employee and PR battles |
| 24:26 – 26:32 | What “AI safety” really means and the future inevitability of AI in warfare |
Conclusion
This episode peels back the curtain on a pivotal moment for AI, military power, and corporate responsibility in America. With U.S. bombs falling and the world racing toward ever-more automated conflict, the struggle between Anthropic and the Pentagon—complicated by OpenAI’s rival approach—raises crucial questions: Who will set the boundaries for lethal, automated decision-making? Can safety truly be guaranteed by self-regulation, or does it require enforceable, external oversight? As the host notes, the future of warfare is coming fast, and this battle has made that starkly clear.
Summary prepared for listeners who missed the episode and want a comprehensive, quote-rich account of its contents.
