
A
Hello, you're listening to NPR's Book of the Day. I'm Tinbit Ermias. Technology has long shaped modern warfare, from the development of nuclear weapons to sophisticated drones. But what about artificial intelligence? What role does this nascent technology play in something as high-stakes and existential as war? That's the topic of Project Maven, a new book by Bloomberg correspondent Katrina Manson. In it, she explores how the Defense Department has been using AI to reach its aims and the roadblocks it's encountered along the way. Manson spoke about it with All Things Considered host Mary Louise Kelly.
B
This message comes from Midi Health, a virtual care platform for women in perimenopause and menopause. Chief Medical Officer Dr. Kathleen Jordan shares the wide range of symptoms they work to address for women in midlife.
C
There's dry eyes, dry hair, dry skin. There's dry mouth, trouble sleeping, panic and anxiety attacks. When we ask patients about common symptoms, on average they report six.
B
Midi Health, committed to helping women in midlife with perimenopause and menopause care, accessible via telehealth visits at joinmidi.com. This message comes from Progressive Insurance. Do you ever think about switching insurance companies to see if you could save some cash? Progressive makes it easy to see if you could save when you bundle your home and auto policies. Try it at progressive.com. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states. Support for this podcast and the following message come from Wayfair. It's Wayday at Wayfair, where you can score the best deals in home and upgrade your space within your budget. And the best part? Everything ships fast and free during Wayday. Get up to 80% off with fast and free shipping on everything. Head to Wayfair.com April 25th through the 27th to shop Wayday. That's W-A-Y-F-A-I-R.com. Wayfair. Every style, every home.
C
The problem with war has always been the humans. We humans are inefficient. We get tired. We get killed. That is the view of a Marine Corps colonel named Drew Cukor, who arrived at the conclusion that humans do better when machines help us, and that AI will completely change, maybe already is changing, the way that America fights wars. Well, his story is at the heart of a new book about the Pentagon's campaign to incorporate AI into combat, a campaign known as Project Maven. Project Maven is also the title of the book. The author is Katrina Manson, and she is in our New York bureau. Welcome.
D
Thanks so much.
C
So Project Maven actually got stood up in 2017. Why? What was happening then that this got greenlit?
D
By then the US is deep into its forever wars, which are meant to be winding down in Afghanistan and Iraq, but they're also fighting isis. And at this time, several people at the very senior most ranks of the intelligence and defense communities are also looking towards a potential future conflict with China and needing to lean into, in their view, modern tech, cutting edge tech. Seeing that the commercial world in the US was now relying on AI, increasingly bringing together what was then known as big data and finding out that the Pentagon really was behind, in their view, and they wanted to develop much more sophisticated weapons in the same way almost that the US had tried to get a jump start on the nuclear bomb. They wanted to get a jumpstart on AI. The aim of this was autonomy, to take humans off the battlefield and deliver overwhelming U.S. power.
C
So you just used the phrase that they wanted to lean into cutting edge tech. I'm trying to cast my mind back to 2017 and where AI was and it certainly would not count as cutting edge tech today. There must have been early disasters, early triumphs, as they're trying to figure this out. Because it occurs to me, if you're trying to figure out how do you get humans off the battlefield and test AI on the battlefield, the only way to do that is test AI on the battlefield.
D
They tried to do it in safe ways. So they weren't immediately running algorithms into operations, but they were running it over operations at forward-deployed centers. And they really were cutting edge. But these were algorithms that had been trained initially on things as human as wedding cakes. So initially the models could recognize wedding cake tiers, bridal veils, a groom's suit. And this technology was repurposed to start recognizing things on the battlefield. And these algorithms were not working in the early days. They would mistake trees for people, rocks for buildings. A cloud was identified as a school bus. And even Drew Cukor himself, who was this big evangelist for AI, said that AI was just a bag of potato chips to other people, meaning that it simply wasn't good enough. But he argued that it would get better, and he wanted to build the systems, the operating systems, the digital interface, and really the trust and almost muscle memory of operators to try to lean into new tech.
C
Were there consequences to some of those early errors? It sounds like huge errors, like mistaking, what did you say, a cloud for a bus?
D
I think the consequences there were fury and a lack of take up. So operators just stopped using it, and then they had to rethink. And they sent out people who were very skilled as drone analysts to try and encourage them, to say, look, AI could help. One of the first breakthroughs they had was AI did detect someone hiding quicker than a human did. On another occasion, the AI detected a farmer walking across a field whom the US was about to target. They had been able to call off the strike in time, but it had taken the human something like 40 seconds to notice there was a farmer there. The AI had spotted that very quickly, and sometimes was able to spot Marines in the fray of battle quickly enough to count out those Marines, say they were safe, and then call in a missile against the enemy targets. So they did start seeing results with some algorithms.
C
I want to bring us up to how the Defense Department is using AI today. You've talked about how it was used to share targeting information with Ukraine near the start of the war in 2022, and that it was used in 2024 strikes against Syria and Iraq and the Houthis in Yemen. What do we know about the current war in Iran?
D
I think it's very interesting that CENTCOM has been prepared to take time out during these operations to make public that they are using AI tools. The spokesperson of Central Command has also told me they're using a variety of AI tools to generate points of interest. Now, points of interest is sort of military speak for everything before a decision to target. So the line they're drawing there is that AI is not deciding what to shoot at, but they are using AI to develop targets, including location, elevation, description. And most recently, a senior defense official even explained that the Maven Smart System can develop courses of action and work through something called Target Workbench, all of which is about developing not only a target, but also the weapon you would pair with it and what order you might shoot it in.
C
This brings me to ask about a line in your book that caught my eye. You write, "AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability" that the US military is still discovering. Limits like what?
D
There's widespread knowledge within the Pentagon that AI can make mistakes. We all know that AI can hallucinate. It can be prone to bias. It also has this thing called algorithmic drift. Over time, algorithms tend to become less right. And in addition, research has shown, and some of the advisors to the Pentagon have highlighted this research to me, that chatbots can be escalatory. They can tend to agree with you.
C
You're reminding me of the 1980s Matthew Broderick movie War Games.
D
Right, right, exactly. And one official I interviewed did say, it's not, you know, we're not building the WOPR. But actually, if you are asking questions about, shall I take this move? Is this a sensible move? Are we in line with the laws of war? You have to be very careful about the way in which you ask that question. And I do report in the book that they have thought about this, or some quarters of the Pentagon have, and they're trying to add guardrails into the prompt. It tries to say, are you going to escalate? Check that you don't. And so the claim was made to me that you can actually rein in that capacity for error rather well. I think that needs to be continually tested. And the extent to which this administration is prepared to accelerate AI, and also consider the policy implications and just the technical realities of AI, is still something that's rolling out.
C
Katrina Manson is a Bloomberg reporter who covers tech and national security. Her book is Project Maven: A Marine Colonel, His Team and the Dawn of AI Warfare. Katrina Manson, thank you.
D
Thanks.
E
Support for NPR and the following message come from Washington Wise. Decisions made in Washington can affect your portfolio every day. Washington Wise from Charles Schwab is an original podcast that unpacks the stories making news in Washington. Listen at schwab.com/washingtonwise. This message comes from Takeoff
B
by IXL, the K through 5 core math curriculum that continuously differentiates learning. Everything teachers need to personalize instruction is on Takeoff's digital platform. Learn more at takeoffbyixl.com.
Date: April 9, 2026
Host: Tinbit Ermias
Guest: Katrina Manson (Bloomberg correspondent, author of Project Maven)
Guest Interviewer: Mary Louise Kelly (All Things Considered)
Duration: Approx. 8 minutes (exclusive of ads/intros/outros)
This episode explores Project Maven, the U.S. Department of Defense’s mission to integrate artificial intelligence (AI) into military operations. Tinbit Ermias introduces Bloomberg reporter and author Katrina Manson, who delves into the Pentagon's ambitions, the technical and ethical challenges faced, and the very real consequences for how future wars might be fought. Drawing on her book Project Maven: A Marine Colonel, His Team and the Dawn of AI Warfare, Manson discusses the journey from AI’s early failures on the battlefield to its evolving—yet limited and risky—role in modern combat.
[02:47 – 03:56]
Why was Project Maven created in 2017?
The U.S. was transitioning from the “forever wars” in Afghanistan and Iraq, but also facing new threats like ISIS and anticipating competition with China.
Key Reason:
Pentagon leaders recognized that commercial U.S. sectors were leveraging AI ("big data") at an accelerating pace, while military technology lagged behind. The goal became "autonomy"—reducing human presence on the battlefield to deliver overwhelming U.S. power, analogous to the early drive for nuclear dominance.
“They wanted to develop much more sophisticated weapons...almost the same way the US had tried to get a jumpstart on the nuclear bomb. They wanted to get a jumpstart on AI.”
(Katrina Manson, 03:33)
[04:25 – 06:00]
Initial AI models were surprisingly primitive:
The Pentagon adapted commercial algorithms originally trained to identify things like wedding cakes ("wedding cake tiers, bridal veils, a groom's suit") for battlefield imagery.
Frequent, dangerous errors:
Early AI confused clouds for school buses, trees for people, rocks for buildings.
“These algorithms were not working in the early days. They would mistake trees for people, rocks for buildings. A cloud was identified as a school bus.”
(Katrina Manson, 04:50)
Operator skepticism and backlash:
Many frontline personnel stopped using the unreliable AI systems. Even Marine Colonel Drew Cukor, an AI evangelist, conceded that to others the early AI was "just a bag of potato chips," meaning it simply wasn't yet good enough.
“Even Drew Cukor himself, who was this big evangelist for AI, said that AI was just a bag of potato chips...”
(Katrina Manson, 05:07)
[05:42 – 06:39]
Early Successes:
With persistent effort to build operator "muscle memory" and trust, breakthroughs emerged: the AI detected someone hiding faster than a human did, and flagged a farmer walking across a field the U.S. was about to target far more quickly than the human analyst, who took about 40 seconds to notice him.
“The AI had spotted that very quickly...”
(Katrina Manson, 06:13)
[06:39 – 08:00]
Current operational use:
AI has supported U.S. targeting in Ukraine (2022), and in strikes in Syria, Iraq, and against Houthi targets in Yemen.
CENTCOM’s Transparency:
Central Command (CENTCOM) has publicized its reliance on AI for generating “points of interest”—pre-targeting steps like identifying locations, elevation, and descriptions, but stops short of allowing AI to make the final decision to strike.
“The line they're drawing there is that AI is not deciding what to shoot at, but they are using AI to develop targets…”
(Katrina Manson, 07:26)
Advanced “Maven Smart System”:
The system now helps generate not only likely targets, but also recommends weapons pairings and strike sequences (“courses of action,” “Target Workbench”).
[08:00 – 09:45]
Persistent vulnerability and unpredictability:
AI remains "narrow, faulty," easily biased, prone to "algorithmic drift," and, at times, escalatory.
Concerning research:
Studies cited by Pentagon advisors show chatbots can become "escalatory," often agreeing with the user or amplifying aggressive intent.
Guardrails and prompts:
Some quarters of the Pentagon are consciously building guardrails into AI prompts to reduce escalation risk, e.g., adding checks such as "are you going to escalate? Check that you don't."
"You write, 'AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability' that the US military is still discovering. Limits like what?"
(Mary Louise Kelly, 08:00)
“Over time, algorithms tend to become less right...Chatbots can be escalatory. They can tend to agree with you.”
(Katrina Manson, 08:22)
Analogy to pop culture and persistent human oversight:
Referencing the movie War Games ("we're not building the WOPR"), Manson underscores both the Pentagon's awareness of escalation risks and the need to keep testing its guardrails.
“If you are asking questions about...are we in line with the laws of war? You have to be very careful about the way in which you ask that question.”
(Katrina Manson, 09:03)
Key Quotes:
On the Pentagon falling behind:
“They wanted to develop much more sophisticated weapons...in the same way almost that the US had tried to get a jumpstart on the nuclear bomb. They wanted to get a jumpstart on AI.”
(Katrina Manson, 03:33)
On early AI gaffes:
"A cloud was identified as a school bus.”
(Katrina Manson, 04:54)
On why operators gave up on early AI:
"The consequences there were fury and a lack of take up. So operators just stopped using it and then they had to rethink."
(Katrina Manson, 05:43)
On AI’s first practical battlefield successes:
"On another occasion, the AI detected a farmer walking across a field who the US was about to target...the human [analyst] took about 40 seconds to notice there was a farmer there. The AI had spotted that very quickly..."
(Katrina Manson, 06:13)
On current uses and ethical guardrails:
"The line they're drawing there is that AI is not deciding what to shoot at, but they are using AI to develop targets..."
(Katrina Manson, 07:26)
On fundamental limits and risks:
"You write, 'AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability' that the US military is still discovering. Limits like what?"
(Mary Louise Kelly, 08:00)
"Chatbots can be escalatory. They can tend to agree with you."
(Katrina Manson, 08:35)
Katrina Manson’s Project Maven sheds light on the U.S. military’s fraught but dogged journey to harness AI as a force multiplier—one beset by technical, ethical, and existential quandaries. While AI has moved from comic failure to operational asset, its role remains tightly curated, with persistent and novel risks. As Manson makes clear, the work of building trust, accountability, and safety is as unfinished as the technology itself.
(End of summary)