Podcast Summary: What Next: TBD – "Are We Ready for A.I. Warfare?"
Host: Lizzie O’Leary (Slate Podcasts)
Guest: Steve Feldstein (Carnegie Endowment for International Peace)
Date: March 13, 2026
Overview
This episode examines how artificial intelligence (AI) and rapidly evolving drone technology are transforming modern warfare, with a focus on U.S. involvement in the Iran conflict. Host Lizzie O’Leary interviews national security and technology expert Steve Feldstein to explore recent events, including a deadly U.S. missile strike, and how accountability mechanisms and the laws of war are struggling to keep pace with the realities of “AI wars.”
Key Discussion Points & Insights
1. The Iran School Strike: What Went Wrong?
- Context: A U.S. missile strike on an Iranian school killed at least 175 people, mainly children ([01:43]).
- Cause: Despite speculation about AI involvement, initial reports suggest outdated intelligence, not AI, mislabeled the target ([03:10]).
- Notable Quote:
“Most likely this was a result of human error, a result of outdated information supplied by the Defense Intelligence Agency that mislabeled the school … Based on what I’ve heard from that, it would indicate to me that this is not an AI issue.”
— Steve Feldstein ([03:14])
- AI’s Potential Role: Feldstein suggests up-to-date, AI-vetted satellite imagery might have prevented the error.
2. Is This Really an ‘AI War’?
- Assessment: While AI powers much military analysis and enables high-intensity operations, humans still handle most critical decisions ([04:21]).
- Notable Quote:
“To simply call it an automated or an AI war I think leaps ahead a little bit too far. … Humans still hold a lot of accountability and oversight.”
— Steve Feldstein ([04:55])
- Looking Forward: “Maybe better to call it a preview of the future of war,” says O’Leary ([05:15]).
3. Drone Technology: From Elite to Mass Market
- Evolution: Shift from exclusive, expensive drones (e.g., Reapers, Predators) to democratized, cheaper models—the TB2 (via Turkey) and even more basic, disposable drones ([08:14]–[10:49]).
- Notable Insight:
Ukraine alone manufactured 4.5–5 million basic quadcopter-type drones last year ([10:15]).
- Innovation Path: U.S. now deploys low-cost “one-way” attack drones, inspired by designs from Iran and Israel ([11:03]).
- Quote:
“It’s a really interesting kind of like reversal of how innovation has typically been thought of.”
— Steve Feldstein ([11:16])
4. Precision, Proliferation, & Civilian Harm
- Accuracy: Drones are more precise than older munitions, but the sheer volume of strikes raises the odds of collateral damage ([12:45]).
- Quote:
“Even if they are more precise, you still can’t get around this problem of civilian casualties … because you’re shooting so many more missiles.”
— Steve Feldstein ([12:45])
- Ethical Divide: U.S. follows the laws of armed conflict more strictly than Russia or Iran, but using adversary-derived tech has unforeseen consequences ([13:45]–[14:37]).
- Accountability: Recent Pentagon decisions (e.g., scrapping civilian casualty programs) raise questions about institutional commitment to minimizing harm ([14:37]).
5. The New Role of AI in Targeting and Analytics
- AI Integration: Programs like Palantir’s Maven (powered by Anthropic’s Claude) help sift through immense quantities of intelligence—from satellite imagery to social media ([15:30]–[17:10]).
- Quote:
“Claude has been able to … look through millions of bytes of data and try to derive patterns … and generate with a reasonable degree of confidence those suspected whereabouts.”
— Steve Feldstein ([16:24])
6. Tech Industry–Military Tensions: Anthropic v. Pentagon
- Legal Clash: Anthropic, maker of Claude, sued the Pentagon over its “supply chain risk” designation—even as its tech was being used in operations ([18:51]).
- Quote:
“Their vision is that they [the Pentagon] call the shots, the companies provide the product, and … the companies have little to no say.”
— Steve Feldstein ([19:29])
- Limits of Oversight: Defense Department’s legal frameworks for AI-enabled weapons remain vague and subject to change, fueling concerns about future autonomy ([21:06]).
7. The Threat—and Inevitable Arrival—of Fully Autonomous Weapons
- “Human in the Loop?” The Pentagon’s stated commitment to oversight remains, but Feldstein predicts fully autonomous weapons within five years ([22:07]).
- Quote:
“We're not there yet. That is a scary prospect and Rubicon to cross.”
— Steve Feldstein ([22:11])
8. Psychological and Societal Effects of Technologically Mediated War
- Screen Separation: Fighting through a screen may numb operators and the public to violence, yet remote drone pilots still accumulate significant psychological trauma ([23:46]).
- Quote:
“When you’re looking through a screen and fighting war remotely … it certainly changes your perception of what you're doing. … There seems to be a lot of damage and a lot of harm that over time many soldiers end up accruing from undertaking these tasks.”
— Steve Feldstein ([23:46])
- Gamification and Propaganda: AI-powered dashboards, gamified civilian tools, and propaganda videos blur entertainment and war, trivializing real destruction ([25:01]).
- Quote:
“It’s sort of undercutting the seriousness and the kind of moral valence of war and turning it into something that is entertainment … ultimately what we’re talking about is death and destruction.”
— Steve Feldstein ([25:54])
9. AI & Drones: Fueling Endless, Insulated Conflict
- Perils of Distance: Technology reduces U.S. casualties, making longer, less visible wars more likely ([26:47]).
- Quote:
“It makes it easier to fight just a longer war, one that just goes on and on.”
— Lizzie O’Leary ([26:47])
- Contrast: In ground wars like Ukraine, drone warfare inflicts massive casualties and “hellscape” conditions ([27:24]).
10. Should We Want Fully Autonomous Weapons?
- Philosophical Closing: Feldstein cautions strongly against delegating lethal decisions to machines ([29:04]).
- Quote:
“I think the more that we remove human decision making from something so inherently critical, I think the more that leads to lots of hazards down the line. … You ought to know what you’re doing and have a human say, I signed off on that.”
— Steve Feldstein ([29:04])
Notable Quotes & Memorable Moments
- On Outdated Intelligence and Missed Opportunities for AI:
“If anything, I would have guessed that AI could have helped in terms of vetting with updated satellite imagery … Grossly outdated information, that this was still something used to conduct military operations by the Iranians.”
— Steve Feldstein ([03:14])
- On Gamification of War:
“We are seeing this confluence of the artificial, the real, entertainment, gamification. … It’s sort of undercutting the seriousness and moral valence of war and turning it into entertainment.”
— Steve Feldstein ([25:54])
- On the Dangers of Outsourcing Lethal Decisions:
“To say, well, I sent it off to the machine and I don’t know if the data that it used for targeting is good or not … That, to me, is a nightmare scenario.”
— Steve Feldstein ([29:37])
Important Timestamps
| Topic | Timestamp |
|-------|-----------|
| Overview of school strike & AI speculation | 01:43–03:14 |
| Are we in an “AI war?” | 04:10–05:07 |
| Drones: tech evolution and global proliferation | 08:14–10:49 |
| U.S. adoption of low-cost, “one-way” drones | 11:03–12:04 |
| Precision, civilian casualties, ethical divides | 12:13–14:37 |
| Pentagon accountability and civilian harm programs | 14:37–15:26 |
| AI/Palantir/Claude in targeting | 15:30–17:10 |
| Anthropic vs. Pentagon legal clash, regulation loopholes | 18:51–22:07 |
| Is a fully autonomous weapon inevitable? | 22:07–23:02 |
| Psychological effects on drone operators | 23:29–24:59 |
| AI-powered civilian dashboards & gamification | 25:01–26:22 |
| AI/drones making longer wars easier | 26:40–28:06 |
| Should we want autonomous weapons? | 28:26–30:14 |
Final Takeaway
The episode underscores the uneasy and rapidly shifting ground at the intersection of warfare, technology, and accountability. AI and drones are enabling new forms of warfare—more data-driven, distant, and potentially endless—that challenge old ethical assumptions and human oversight. While AI offers potentially life-saving precision and intelligence, the risks of dehumanization, legal ambiguity, and technological “distance” from the battlefield grow every year. As Feldstein warns, the drive toward full autonomy in lethal force brings us to a “nightmare scenario” unless we retain strong human accountability.
