Front Burner: "Iran and AI on the Battlefield"
Date: March 6, 2026
Host: Jayme Poisson (B)
Guest: Heidy Khlaaf (C), Chief AI Scientist at the AI Now Institute; AI safety and autonomous weapons expert; formerly at OpenAI
Episode Overview
In this episode, Front Burner explores the rapid escalation of AI integration into modern warfare, with a focus on the recent US and Israeli military operations against Iran. Host Jayme Poisson and guest Heidy Khlaaf discuss how large language models (LLMs) like Anthropic’s Claude have become central to military decision-making, the confusion these systems create around accountability, and the broader implications for state power, ethics, and international law. The conversation ranges from specific incidents (like a disputed airstrike on a girls’ school in Iran) to the global race to deploy AI in military and surveillance contexts, highlighting the ethical and existential risks of this new era.
Key Discussion Points and Insights
1. How AI Is Used in the Iran Conflict
- AI as Decision Support
- Anthropic’s Claude acts as a decision support system, aggregating and analyzing data sources including satellite images, social media, and intercepted communications to recommend military targets and prioritize strikes.
“Claude is currently being used as...decision support systems...looking at satellite images, social media feeds, intercepted phone communications...then it uses them to make recommendations, including target recommendations, prioritizing them and even providing coordinates for those targets.” (C, 02:13)
- Claude works in tandem with Palantir’s Maven Smart System, combining vast classified and open-source datasets for analysis.
- Obscuring Accountability
- Use of AI muddies the chain of responsibility for military actions.
“AI is actually being used to evade accountability...it makes it difficult to distinguish if some attacks were in fact deliberate due to intelligence failures or due to the lack of AI accuracy...” (C, 03:53)
- Ambiguity exists over whether civilian casualties result from human error, outdated data, or AI misjudgments.
“It's very difficult for us to say if this was due to an AI mistake or if this was deliberate or not. And that's exactly why a lot of militaries are using AI. It obscures that.” (C, 03:53)
2. Evolution and Accuracy of Military AI
- Progression of AI in Warfare
- AI use in military contexts is not new, but LLMs/GPT-style models are a recent arrival; earlier military AI was task-specific, trained on controlled data, and more accurate.
- LLMs, originally civilian tech, are now minimally adapted for targeting but remain error-prone.
“Military purpose built AI...tends to be very task specific...LLMs...are very general purpose...LLMs have an incredibly low accuracy rate...you're looking at something like 25 to 50% accuracy, and yet they're still being deployed.” (C, 06:04)
- Comparison to Older Forms of Attack
- AI-driven warfare is likened to high-tech carpet bombing, lacking precision, and blurring legal norms.
“It's almost just a high tech version of carpet bombing. And...AI is being used to evade accountability.” (C, 10:42)
3. Real-World Examples: Gaza, Lavender, and Gospel
- Earlier Israeli systems (Lavender, Gospel) did not use LLMs but applied similar AI-targeting approaches; the adoption of GPT-4 and Google Gemini for targeting was confirmed after October 7, 2023.
- All target-identifying AIs share the goal of generating as many targets as possible, often at the cost of mass civilian impact.
"They're all looking to generate as many targets as possible...they all ingest similar types of information...both types of AI really have accuracy problems here." (C, 09:29)
4. Rubber Stamping and Automation Bias
- "Human in the loop" is eroded by automation bias—humans tend to trust AI outputs without verifying, creating de facto autonomous weapon operations
“Humans often trust the recommendations of algorithms without corroborating with other sources or checking...so it does in the end of the day...end up being rubber stamping.” (C, 12:01)
- The distinction between “decision support” and “autonomous weapons” is blurry and, in practice, largely a formality.
5. Nightmare Scenarios and AI Expansion
- AI deployment in nuclear command and control (e.g., Scale AI’s recent contracts) is highlighted as a peak risk.
“I think we are in the nightmare scenario when it comes to military use...giving these models access to nuclear weapons…we are already in such a terrible worst case scenario…” (C, 13:27)
- The normalization of LLMs in critical infrastructure is seen as reckless and dangerous.
6. Defining Autonomous Weapon Systems
- Autonomous weapon systems take many forms, from landmines to smart drones; frontier AI (LLMs) aims to automate the entire targeting and deployment process, removing humans entirely.
"There's many different types of autonomous weapon systems…so it's to just eliminate the human from that loop and just to automate that process." (C, 16:29-17:57)
7. Corporate-Government Tensions: Anthropic, OpenAI, and US DoD
- Anthropic’s Red Lines and Pentagon Demands
- Anthropic resisted removing guardrails on domestic surveillance and fully autonomous weapons.
"We are okay with all use cases...except for two...domestic mass surveillance...and fully autonomous weapons.” (D, 18:35)
- The military’s preference is clear: it wants flexibility and minimal restrictions, partly to preserve plausible deniability.
"They very clearly want to use these technologies as an alibi for whatever actions they want to carry out…AI really helps you evade accountability...” (C, 19:16)
- Surveillance and Data Dual-Use
- LLMs enable analysis of bulk civilian data, raising domestic surveillance risks.
"These models are what we call dual use. So they can be used for civilian purposes and...for military purposes." (C, 20:24)
- International Legal and Regulatory Voids
- Decisions often rest on private corporate choices and US DoD interpretations, rather than international law or transparent oversight.
- Even where legal compliance is claimed, international institutions such as the UN and the ICRC have yet to weigh in meaningfully.
“We shouldn't be at the whims of a private corporation['s] red lines on whether or not this dangerous technology should or shouldn't be deployed...” (C, 21:58)
- Supply Chain Concerns
- The US government labeled Anthropic a “supply chain risk,” leveraging its influence; Khlaaf argues the risk is inherent to all LLMs, given their training on open Internet data and their susceptibility to backdoors.
"I actually do believe that all LLMs are a supply chain threat to national security and defense...trained on the open Internet and publicly available information..." (C, 24:18) "You only need about 250 malicious documents to produce a backdoor into one of these models..." (C, 25:37)
- OpenAI’s Position and “Safety Theater”
- OpenAI’s response to military interest is described as “safety theater”: public claims of guardrails are not operationally feasible, and monitoring actual military use is not possible.
“The safety guardrails…are not operationally feasible...you can't monitor after that what that individual does...” (C, 27:09)
“I would call it safety theater, safety co-option…” (C, 28:36)
- OpenAI notably dropped its stated ban on military use in 2024.
“Like military use. That was eliminated in 2024 when it was initially part of their charter or their terms of service…” (C, 28:36)
8. Why AI Companies Court Military Contracts
- Military contracts provide both a critical revenue stream for expensive-to-train models and strategic entrenchment—embedding companies in essential state infrastructures.
“When you have the military, that's a really big money pot…embeds you within military and safety critical infrastructure, which means that you're then too big to fail.” (C, 29:47)
9. AI as a Tool of State Power
- General-purpose AI enables unprecedented surveillance, centralized data collection, and decision automation by the state, with little transparency or recourse.
“It is the perfect tool to concentrate power because you're collecting essentially all data possible on humans, on our behavior...” (C, 31:11)
- A dangerous “black box” logic now mediates critical decisions in migration, judicial systems, and war.
“Now it feels like no one can question that power. This is an unfortunate side effect of something like these very large general purpose models.” (C, 31:11)
Notable Quotes & Memorable Moments
- On Accountability:
“When you're using these types of systems, it makes it difficult to distinguish if some attacks were in fact deliberate due to intelligence failures or due to the lack of AI accuracy...AI is actually being used to evade accountability.” (C, 03:53)
- On AI Accuracy:
“LLMs have an incredibly low accuracy rate...you're looking at something like 25 to 50% accuracy, and yet they're still being deployed.” (C, 06:04)
- On Human Oversight:
“Humans often trust the recommendations of algorithms without corroborating with other sources...so it does in the end...end up being rubber stamping.” (C, 12:01)
- On Nuclear Risks:
“I think we are in the nightmare scenario when it comes to military use...we are already in such a terrible worst case scenario.” (C, 13:27)
- On Legal Responsibility:
“[AI] decisions shouldn’t be at the whims of a private corporation’s red lines on whether or not this dangerous technology should or shouldn’t be deployed...there is international law.” (C, 21:58)
- On LLMs as National Security Threats:
“I actually do believe that all LLMs are a supply chain threat to national security...this is about the nature of LLMs themselves...very different from purpose military built models.” (C, 24:18)
- On OpenAI’s Safety Claims:
“The safety guardrails…are not operationally feasible...it could just very well be taken to then select and engage a target without any further oversight.” (C, 27:09)
- On AI as State Power:
“It is the perfect tool to concentrate power because you're collecting...all data possible on humans, on our behavior...there’s no accountability because these models are black boxes.” (C, 31:11)
Key Timestamps
- 00:42 – Episode premise: AI on the battlefield, US/Israeli operations, civilian school strike.
- 02:13 – Explanation of Claude's role in targeting.
- 03:07 – How Palantir and Claude integrate for military analysis.
- 03:53 – AI and the crisis of military accountability.
- 08:04 – LLMs' accuracy compared to earlier military AIs.
- 09:29 – Gaza/Israel: evolution from earlier AI systems to LLMs.
- 10:42 – Declining precision and legality in modern, AI-driven warfare.
- 12:01 – Automation bias and the myth of “human in the loop.”
- 13:27 – Fears about nuclear command and loss of oversight.
- 16:21 – Defining and unpacking “autonomous weapon systems.”
- 18:35 – Anthropic’s boundary conditions vs. DoD preferences (quote from Pentagon official, D).
- 20:24 – The dual-use nature of LLMs and the surveillance implications.
- 24:18 – Anthropic labeled a supply chain risk and the inherent threat of LLMs.
- 27:09 – OpenAI’s deal with DoD, and the futility of proclaimed safety guardrails.
- 29:47 – The financial and strategic motivations for AI firms courting military clients.
- 31:11 – AI as the new tool of centralized, unaccountable state power.
Conclusion
This episode delivers a sobering look at how rapidly AI is transforming military operations: not with greater precision and safety, but with more confusion, less accountability, and the erosion of basic legal and ethical standards. From decision support to autonomous targeting and surveillance, LLMs are enabling states and militaries to centralize power and evade responsibility, with little oversight from either international law or the corporations developing the technology. The conversation between Poisson and Khlaaf paints a picture not of progress but of regression: toward a world where life-and-death decisions are made inside black-box algorithms, with no one ultimately answerable for the consequences.
