Big Take – "Iran War Spotlights the Pentagon’s AI Strategy"
Date: March 12, 2026
Host: David Gura (Bloomberg News)
Guests: Katrina Manson (National Security & Tech Reporter), Mike Shepard (Senior Editor)
Episode Overview
In this episode, the Big Take explores the role of artificial intelligence (AI) in the rapidly escalating Iran war, with a focus on how the Pentagon is integrating AI into its military strategies and the complex ethical, political, and technical challenges that arise. The discussion is driven by the fallout between the Pentagon and AI company Anthropic, the growing arms race with China, and a critical look at recent AI-enabled incidents within the Department of Defense.
Key Discussion Points & Insights
1. War Developments and AI on the Battlefield
- Escalation: The Iran war intensifies, with US, Israeli, and Iranian forces exchanging advanced strikes (01:41).
- AI-enabled Warfare: The US launched over 1,000 strikes in 24 hours, double the opening pace of the 2003 Iraq war, a tempo made possible by AI that rapidly processes and executes targeting decisions (02:13).
"The decision to wage a campaign like this is intimately linked to how remotely the US can pursue this war."
— Katrina Manson, 02:51
2. Pentagon–Anthropic Breakup and Silicon Valley’s Ethical Red Lines
- Severed Ties: The Pentagon ended its contract with Anthropic, identifying the company as a supply chain risk (03:19).
- Anthropic’s Red Lines: Anthropic refused to allow its AI, Claude, to be used for autonomous lethal decisions or mass domestic surveillance (03:43).
"Anthropic, which is best known for its Claude AI tool, wanted assurances from the US Government that its technology would not make lethal decisions on its own or help to conduct mass surveillance on Americans."
— David Gura, 03:43
- OpenAI Steps In: OpenAI began collaborating with the Department of Defense after Anthropic’s exit, intensifying the debate over tech’s role in US military campaigns (03:53).
3. The Pentagon’s Push for AI Across All Branches
- Accelerated Adoption: Defense Secretary Pete Hegseth’s directive encourages widespread use of AI in the military, rejecting any provider-imposed use restrictions (05:27).
- Private Sector Partnerships: The military is increasingly reliant on commercial technology firms for cutting-edge AI (Project Maven, Palantir, Amazon, Microsoft), moving beyond traditional defense contractors (08:14).
"They need commercial companies and so when Project Maven happened, for the first time really, they were going outside the traditional primes… looking at companies like Google, Microsoft and Amazon."
— Katrina Manson, 08:14
- Worker Resistance: Google’s Project Maven involvement prompted employee backlash in 2018; now Anthropic’s stance signals ethical boundaries to other tech workforces (08:53).
4. Ethics and the Human Role in Targeting
- Speed Versus Oversight: AI dramatically accelerates mission planning and targeting, compressing timelines from days or hours to seconds (10:03).
- Ethics of Automation: While policies state humans remain 'in the loop,' oversight is shrinking, raising concerns about automation bias, collateral damage, and unchecked error margins (10:03).
"As you involve AI, that process speeds up the decision making… fears for things like automation bias… tend to naturally get worse over time. All of that has yet to be worked out."
— Katrina Manson, 10:49
5. Risks, Failures & Black Box Technology
- Systemic Vulnerabilities: AI’s unpredictability, especially in "life and death decisions," poses risks for both civilians and allied troops (14:54).
"AI is this fundamentally unpredictable black box technology. So it's brilliant at bringing a lot of data to bear. It just might be the wrong data and it might be organized in the wrong way."
— Katrina Manson, 14:54
- Project Maven and ‘Google Maps for War’: Integrating AI with battlefield information is akin to building a digital map, but problems with data quality, labeling, and algorithm robustness persist (15:38).
6. Real-World Testing Gone Wrong: The Drone Boat Incident
- Failure in Autonomy: During a June 2025 test for the Replicator program, a drone boat misinterpreted a command and nearly injured a captain, demonstrating the dangers of operational missteps (17:23).
"Inadvertently, a command was sent from the dock… The boat started accelerating… the captain is capsized. At that point, he's in the water, and then the drone boat turns and comes toward him. That's a very dangerous moment..."
— Katrina Manson, 17:23
- Lesson: Such incidents, even in controlled settings, highlight how nascent and risky battlefield AI remains (19:23).
7. Congressional Oversight—Or Lack Thereof
- Regulation Remains Distant: While lawmakers discuss AI in warfare, meaningful legislation or oversight is still far off (19:57).
"They do like to showcase their interest. But actually advancing a proposal to codify… how AI is deployed in warfare, we are a long way from that."
— Mike Shepard, 19:57
- Industry–Government Ties: Tech companies are increasingly aligned with political leadership, blurring regulatory independence (20:45).
8. The Irreversibility of the AI Arms Race
- The ‘Costless’ War Myth: As wars become increasingly automated, the public may underestimate the very real human costs that persist, or even grow, in AI-enabled conflict (21:21).
- Civic Engagement: Manson urges citizens to question whether truly “riskless” war is possible, and who is harmed if AI fails or makes war easier to pursue (21:21).
"People talk about a costless war… if in any way AI isn't saving civilians or even there are misfires that involve AI, then you really have to re-examine if it makes war more likely."
— Katrina Manson, 21:21
Notable Quotes & Memorable Moments
- On AI Speed in Targeting:
"They've hit 5,500 targets. That speed is exactly why they want AI... CENTCOM has been particularly proud this week to say that AI is helping them reduce operational decisions from what used to be days and hours to seconds."
— Katrina Manson, 10:03
- On AI's Unpredictability:
"You could be going in the wrong direction for quite some time before you notice it."
— Katrina Manson, 16:53
- On Public Engagement:
"If in any way AI isn't saving civilians or even there are misfires that involve AI, then you really have to re-examine if it makes war more likely."
— Katrina Manson, 21:21
Timestamps for Key Segments
- [01:41] – War Escalation and AI-enabled warfare overview
- [03:03] – Introduction of Katrina Manson, framing remote warfare and leadership stakes
- [03:19] – Pentagon pulls Anthropic, sparks ethics debate
- [05:27] – Overview of Pentagon’s AI push (Mike Shepard)
- [07:12] – History and aims of Project Maven
- [10:03] – Human role, ethics, and acceleration of targeting via AI
- [14:54] – AI's unpredictability and systemic risks
- [17:23] – Failed drone boat test and implications
- [19:57] – Congressional (non-)action on AI in warfare
- [21:21] – Big picture: irreversible shift and public responsibility
Summary
This episode examines the Pentagon’s internal and external struggle to responsibly integrate AI into its war machine at a critical historical moment. As the Iran conflict rages, the US is testing the limits of what AI can do, and what it should be allowed to do. Safety red lines are fraying, both inside companies like Anthropic and across government. Real-world failures, like the drone boat mishap, underscore that the technology is not mature and that the consequences could be catastrophic. Meanwhile, lawmakers lag far behind the technology’s pace, failing to craft meaningful regulation. Ultimately, the episode cautions that the public, not just the government or Silicon Valley, must engage with what it means to wage "AI war" and face hard questions about risk, oversight, and what kind of world this emerging technology is creating.
