Fresh Air (NPR): "America's First AI-Fueled War Is Unfolding. How'd We Get Here?"
Air Date: March 26, 2026
Host: Tonya Mosley
Guest: Katrina Manson (Bloomberg reporter, author of Project: A Marine Colonel, His Team, and the Dawn of AI Warfare)
Episode Overview
This episode centers on the real-time emergence of America’s first war significantly powered by artificial intelligence (AI), focusing on the U.S. and Israeli campaign against Iran. Tonya Mosley interviews Bloomberg reporter Katrina Manson about her reporting on Maven, the Pentagon’s AI-enabled warfare system, and the central figure behind it, Marine Colonel Drew Cukor. The episode unpacks the transformative impact of these AI systems on how wars are fought, the ethical and political debates they provoke, the risks and failures already seen in active combat, and what this means for the future of warfare and international stability.
Key Discussion Points & Insights
The Maven Smart System: How It Works
- Maven is likened to "Windows for war," acting as an operating system with a digital map interface, integrating 160+ intelligence feeds. [02:00]
- AI Capabilities:
- Computer vision analyzes imagery to identify objects, targets, and friendly forces.
- Large language models (LLMs) like Claude (from Anthropic) are used for data crunching, assisting in mission planning, pairing weapons to targets, and expediting decision processes — but do not have final authority for lethal strike decisions.
- Quote:
“It’s essentially a digital map...and then Claude is doing something different. That is an AI tool based on a large language model that can crunch data...everything short of sign-off, Claude can help with.”
— Katrina Manson [02:00]
Civilian Casualties & Data Risks
- Incident: On the war’s first day, a U.S. missile, potentially guided by outdated coordinates, hit a girls' elementary school in Iran, killing 165+ civilians; AI’s involvement is under investigation.
- Key Risk:
“Any system, particularly one that uses AI, will only ever be as good as the data that feeds it.”
— Katrina Manson [05:13]
- The tragedy highlights the catastrophic consequences when outdated or erroneous data underpins precision-strike systems, regardless of AI sophistication.
Anthropic, OpenAI, and Pentagon Contracts
- Controversy: The Pentagon “blacklisted” Anthropic after it refused to allow Claude to be used in fully autonomous weapons; within hours, OpenAI announced similar restrictions of its own.
- Significance: First time leading AI companies provided their models in classified cloud environments for military use.
- Political Overtones:
“It’s a very politicized divide when you have the President calling Anthropic left wing nutjobs…there is also a political flavor to this falling out.”
— Katrina Manson [07:45]
Classified Cloud Explained: Like civilian cloud storage but with much stricter safeguards to protect secret or top-secret military data from hacking or insider threats. [08:56]
AI Tendency Toward Escalation
- Research: King's College London put AI models like Claude, ChatGPT, and Gemini through simulated nuclear crises; 95% recommended tactical nuclear weapons as options.
- Expert View:
“AI can be escalatory…there is a tendency within AI also to buttress that opinion.”
— Katrina Manson [11:00]
- Systems can also “hallucinate,” raising the risk that military responses may be triggered by false information.
- The military claims to have “under the hood” safeguards (red-teaming prompts, checking for escalation bias), but the effectiveness of these safeguards remains unproven. [11:50]
AI in Recent and Ongoing Wars
Iran War (2026)
- Maven enabled strikes on 1,000 targets on the war’s first day and more than 9,000 to date — nearly double the scale of the 2003 “Shock and Awe” campaign in Iraq.
- The system’s speed and “efficiency” may paradoxically make protracted, high-casualty wars more likely. [18:20]
- “Is there ever a way to deliver palatable killing?” — Katrina Manson [19:30]
Ukraine War
- Initial U.S. deployment of AI (Maven) to support Ukrainian targeting faltered: models trained on Middle Eastern landscapes failed in snowy Ukraine; network delays were significant (up to eight seconds — “a lifetime in war”).
- Fixes included updating image libraries and network architecture. [26:53]
- U.S. passed “points of interest” (targeting data) to Ukrainian counterparts, straddling the line between support and direct involvement.
- Quote:
“It was almost a sort of Pinocchio-like relationship, the Americans potentially pulling the strings on Ukrainian decisions.”
— Katrina Manson [29:40]
- Trust developed: U.S. says, “Trust us, hit it,” leading to 18-minute strike cycles. [30:20]
Gaza
- Israel’s use of AI targeting systems “Gospel” and “Lavender” points to blurred ethical, legal guardrails.
- AI is often defended as neutral (“just a tool”), but Manson notes:
“The more you have an AI-infused killing machine, the more you can use it.”
[33:53]
Autonomous Weapons: Goalkeeper and Whiplash
- U.S. quietly developed “Goalkeeper” (autonomous attack drones) and “Whiplash” (autonomous weaponized jet skis) for potential conflict over Taiwan.
- The concern: If communications are jammed, these systems must operate independently — potentially making lethal decisions without human input. [35:19]
- UN and Treaty Efforts: Decade-long UN debates stalled; U.S. position is “let’s make it first and then regulate it.”
- Exclusion of major powers (U.S., China, Israel, Russia) from any treaty is likely. [37:14]
Intentions & Readiness for Conflict with China
- U.S. military officials say China aims to be capable of taking Taiwan by 2027; U.S. urgently working to field AI-enhanced, autonomous forces in preparation — but quietly admit, “We’re not ready.” [38:45]
- The logic of technology investment: Keeping pace with adversaries (especially China), even as readiness lags.
Major Personalities: Col. Drew Cukor & Palantir
Col. Drew Cukor
- Unheralded champion of AI in U.S. military, driven by frustration with battlefield intelligence failures during the Afghanistan and Iraq wars.
- Sought to modernize military tech, moving the DoD toward a software-company mindset.
- Perspective:
“He simply felt...better information. And in the modern world, better information has come to mean AI.”
— Katrina Manson [16:38]
Palantir’s Role
- Cukor was early advocate for Palantir’s analytics tools; helped push the company to integrate AI capabilities.
- Palantir’s approach — “good, but not as good as everyone makes out,” and seen as “very divisive.” Earned both ardent internal support and frustration for aggressive business tactics. [25:37]
Memorable Moments & Notable Quotes
- “Any system, particularly one that uses AI, will only ever be as good as the data that feeds it.” — Katrina Manson [05:13]
- “AI can be escalatory...There is a tendency within AI also to buttress that opinion.” — Katrina Manson [11:00]
- “It was almost a sort of Pinocchio-like relationship, the Americans potentially pulling the strings on Ukrainian decisions.” — Katrina Manson [29:40]
- “The more you have an AI-infused killing machine, the more you can use it.” — Katrina Manson [33:53]
- “We’re not ready.” — Pentagon officials (via Manson) [38:45]
Timestamps for Key Segments
- How Maven works; Claude’s role: 02:00
- Civilians killed by AI-enabled targeting error: 03:40–06:48
- Pentagon/Anthropic/OpenAI policy fights: 06:48–08:49
- Escalation risks of Large Language Models in war: 10:04–13:46
- AI in U.S. military strikes (Iraq, Syria, Iran): 13:51–15:01
- Col. Drew Cukor’s origins and motives: 15:16–18:20
- Discussion of Palantir’s influence and controversy: 22:59–26:22
- AI warfare failures in Ukraine; learning and adaptation: 26:53–28:47
- US-Ukraine cooperation: legal/ethical lines: 28:47–30:52
- Israel's Gaza AI targeting as case study: 32:46–34:55
- U.S. autonomous weapons programs Goalkeeper and Whiplash: 35:19–37:02
- International arms race and treaty limitations: 37:02–37:59
- US preparedness for potential conflict with China: 38:26–39:59
- Katrina Manson’s reflection on the “costless war” and AI’s consequences: 40:19–42:15
Tone & Closing Thoughts
- The conversation is thoughtful, probing, and sometimes somber, grappling with the unprecedented speed and scale of war made possible by AI (“a thousand targets in the first day”).
- Manson highlights both the “panacea” narrative among AI proponents and her own deep skepticism about costless, remote war:
“Does remote war make war more possible, more likely? Does it mean someone will press play on that war option without understanding the long, deep impacts?” [41:25]
- The episode closes with appreciation for Manson’s reporting and a recognition that the dawn of AI warfare is here, with profound and still poorly understood global implications.
For further insight:
Check out Katrina Manson’s book, Project: A Marine Colonel, His Team, and the Dawn of AI Warfare.
Listening recommendation:
Skip to 02:00 for the start of substantive discussion.
Key deep-dive on AI’s ethical challenges: 10:00–14:00.
US-China-Taiwan future readiness: 38:26–40:00.
