Podcast Summary: Sources & Methods – “Inside a Secret Pentagon Effort to Bring AI to the Battlefield”
Date: March 23, 2026
Host: Mary Louise Kelly
Guest: Katrina Manson, Bloomberg reporter and author of Project Maven: A Marine Colonel, His Team and the Dawn of AI Warfare
Overview
This episode delves into the secretive origins and evolving deployment of artificial intelligence (AI) in U.S. military operations, centered on the Pentagon’s Project Maven. Host Mary Louise Kelly interviews Katrina Manson, whose reporting and new book track the journey of Marine Corps Colonel Drew Cukor—the driving force behind Project Maven—and examine how bureaucratic inertia, ethical dilemmas, and rapid technological advancement have shaped military AI’s integration into both past and current warfare.
Key Discussion Points & Insights
1. Meet Drew Cukor: The Man Behind Project Maven
- Katrina Manson’s Relentless Pursuit:
- Manson chased Cukor for a year before finally meeting him post-service, now in finance. She notes, “[H]ere I was in front of this former intelligence officer thinking I was interviewing him. And of course, he was the one interviewing me.” (01:19)
- Cukor’s Frustration in Afghanistan:
- On the ground, inefficient, outdated, and unreliable tech plagued U.S. operations. Data wasn’t live, nor easily shared or analyzed, hindering mission effectiveness.
- “[H]e was using programs that didn’t work... None of that data was live... He couldn’t... keep track of where... IEDs were buried... to even try to find patterns that might save US operators.” (03:14–04:07)
2. Why Project Maven, and Why in 2017?
- The China Challenge & A.I. Race:
- U.S. military leaders feared falling behind adversaries, especially China. Bob Work, then Deputy Defense Secretary, was particularly focused on tech rivalry and moving beyond “forever wars.”
- The aim: Use AI for autonomy, take humans off the battlefield, and deliver “overwhelming U.S. power.” (04:50–06:04)
3. Early Tech Stumbles and Learning Curves
- Field-to-Learn, Misfires, and Breakthroughs:
- Initially, AI algorithms adapted from mundane tasks (e.g., wedding cake recognition) performed poorly on the battlefield—mistaking clouds for buses, rocks for buildings.
- “...algorithms... could recognize wedding cake, tears, bridal veils, a groom’s suit. And this technology was repurposed to... the battlefield... One time... a cloud was identified as a school bus.” (06:32–08:11)
- Slow operator uptake and frustration led to strategic redeployment of experienced analysts. Gradual improvements included AI detecting threats (and civilians) faster than humans could.
- “…AI detected a farmer walking across a field who the US was about to target. And a human simply hadn’t seen them. …They had been able to cool off the strike in time…” (08:20)
4. Bureaucratic Battles: Marines vs. the Pentagon
- An Insurgency Within:
- Cukor recruited Marine reservists—young, battle-hardened, and “cheap”—to drive change in a risk-averse, aging bureaucracy.
- “They very much acted as if they were an insurgency inside the Pentagon.” (09:33)
- Human connections and “comedic” naivety about AI enabled the team to sneak innovations past institutional inertia.
- “‘The pissed off Marines... were a very small team, underpowered... they really were not informed about what AI was, but they set about it nonetheless... just get it done.’” — Eric Schmidt, referenced by Manson (11:14)
5. Ethical Blind Spots and Emerging Dilemmas
- Little Early Debate on Life-and-Death Decisions:
- At inception, the Maven team “just wanted to move forward,” with little consideration of ethics, focusing on proving the tech worked at all.
- “They…didn’t even spend their time considering ethics. Other parts of the Pentagon did. They were all move, move, move.” (12:37–13:14)
- Outsider pushback came primarily from tech companies (notably the Google walkout), and only later did internal stakeholders consider testing and statistical guardrails.
- “I did come across some former Mavenites…who…in their job interview said they wanted to use AI to reduce the non American population. Another said they thought that with AI they'd be able to kill people all the time. These were of course light-hearted comments, but…I think that it gives you insight into the kind of attitude they brought…” (13:14–14:24)
- Current Controversies:
- Discussion of Anthropic’s legal battle with the Pentagon over “red lines” (autonomous weapons, mass domestic surveillance). Anthropic’s Claude LLM is already deployed in Pentagon operations.
- “This red line that he is drawing, that the Pentagon obviously rejects, is about fully autonomous weapon systems…this really is now the cutting edge of where the Pentagon wants to take this technology.” (14:47–16:04)
6. Reporting in the Shadows: Researching a Secret Project
- Opacity and Secrecy:
- Following the Google walkout (2018), Project Maven became all but impossible to FOIA; even staffers hid Maven experience from LinkedIn.
- “For years, Project Maven operated in a kind of blackout…people were asked not even to put on their LinkedIn that they had worked on Project Maven.” (17:21)
- Manson relied on interviews, documents, personal notes, even workplace cartoons (out of boredom and protest), to reconstruct events.
7. Project Maven Today: AI in the War on Iran
- AI’s Evolving Military Role:
- CENTCOM has publicly acknowledged AI use, reducing targeting timelines from days to seconds and aiding in target identification and course-of-action planning.
- “The commander of Central Command… has said he’s using AI… bringing down processing time… from days and hours, sometimes to seconds.” (20:04)
- AI supports “points of interest”—all the data leading up to a kill decision—but (officially) does not “pull the trigger.”
- Massive scale: Over 7,000 targets processed, double the “shock and awe” of the Iraq invasion.
- “In the first 24 hours alone, Centcom went through 1,000 targets. They're now beyond 7,000 targets…” (21:33)
- Data Quality and Atrocity Risks:
- Uncertainties remain about AI’s role in tragic strikes, like the one on an Iranian girls’ school; speed can mean spreading underlying data errors further and faster.
- “Even if AI is not to blame, if you are speeding up your ability to call on errant data, you are speeding up your rate of potential mistake making…” (21:33)
- “[W]hat has happened in this girls school, like former mistakes… really matters when it comes down to record keeping. … data and accountability, revealing what has gone wrong…is very key.” (22:12)
8. AI’s Limits and Ongoing Challenges
- Narrow, Faulty Tool—Not Yet Reliable:
- AI’s flaws: hallucination, bias, “algorithmic drift,” and escalation risks when using LLMs.
- “AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability that the US military is still discovering.” (23:13)
- Pentagon attempts to embed “guardrails” and prompt-testing to ensure LLMs don’t escalate.
- “When you prompt an LLM underneath the hood…it tries to say, are you going to escalate? Check that you don’t. …I think that needs to be continually tested.” (24:21–25:25)
Notable Quotes & Moments
"Here I was in front of this former intelligence officer thinking I was interviewing him. And of course, he was the one interviewing me."
— Katrina Manson (01:19)
"AI was just a bag of potato chips to other people, meaning that it simply wasn’t good enough."
— Katrina Manson on early skepticism (08:11)
"They very much acted as if they were an insurgency inside the Pentagon."
— Katrina Manson (09:33)
"They…didn’t even spend their time considering ethics. Other parts of the Pentagon did. They were all move, move, move."
— Katrina Manson (12:42)
"If you are speeding up your ability to call on errant data, you are speeding up your rate of potential mistake making."
— Katrina Manson (21:33)
"AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability that the US military is still discovering."
— Quoted by Mary Louise Kelly, written by Katrina Manson (23:13)
Timestamps for Key Segments
- 01:19 – First encounter with Colonel Drew Cukor
- 03:14–04:07 – Tech frustrations on the Afghanistan battlefield
- 04:50–06:04 – Origins of Project Maven and Pentagon’s motivations
- 06:32–08:11 – Early AI failures and breakthroughs
- 09:33–11:05 – Bureaucracy vs. Project Maven’s Marine “insurgency”
- 12:37–14:24 – Ethical blind spots and the Google walkout fallout
- 14:47–16:04 – Anthropic controversy and Pentagon’s future ambitions
- 17:21–19:40 – Reporting challenges and uncovering secrets
- 20:04–21:17 – AI’s current use in the Iran war
- 21:33–23:13 – Risks, tragic targeting errors, and questions of data
- 23:13–25:25 – AI’s technical limitations and ongoing research
Tone & Language
The discussion is candid, nuanced, and at times darkly humorous, reflecting the “insurgent” spirit inside the Pentagon, the high stakes of war, and the weighty ethical and technical challenges posed by AI. Both host and guest maintain an investigative, skeptical, and human-centered approach throughout.
For those who want to understand how military AI development moved from secrecy to the modern battlefield, and the complex mix of ambition, bureaucracy, technical hurdles, and ethical debate—this episode is essential listening.
