Podcast Summary: The Weekly Show with Jon Stewart
Episode: Silicon Valley Goes to War
Date: March 11, 2026
Host: Jon Stewart
Guests: Dr. Sarah Shoker (AI researcher, UC Berkeley, former OpenAI geopolitics lead) & Paul Scharre (EVP, Center for a New American Security; former Army Ranger and Pentagon policy analyst)
Episode Overview
Jon Stewart explores the collision of artificial intelligence, Silicon Valley, and modern warfare. Focusing on recent controversies—such as Anthropic’s refusal to allow its AI to be used in certain military contexts and OpenAI’s willingness to step in—Stewart and his guests dig into how AI is influencing the way wars are planned and fought, who is setting the rules for these new technologies, and what moral, legal, and practical dilemmas arise as AI becomes a tool of military power. The conversation is urgent, wide-ranging, and sometimes darkly humorous, aiming to pull back the curtain on the real stakes as technological and ethical boundaries are tested.
Key Discussion Points & Insights
1. How the Military Uses AI
- Military Adoption: The military uses AI much as other sectors do: as an optimizer for productivity, logistics, and personnel management, and, increasingly, on the battlefield. (04:21–10:09)
- Types of AI:
  - Handcrafted software: e.g., autopilots and missile guidance with bounded autonomy.
  - Machine learning: computer-vision tasks such as analyzing satellite imagery and drone footage.
  - Generative AI (LLMs): large language models (like Claude and GPT) now being integrated to process, summarize, and prioritize vast amounts of military and intelligence data. (07:56–09:17)
“It’s not something special or different—the military sees it as a productivity tool, an optimizer.”
— Paul Scharre (05:46)
2. Anthropic vs. OpenAI: The "Red Lines" Controversy
- Anthropic's Stance: Refused to allow its AI models to be used for autonomous weapon systems (AWS) or mass surveillance of Americans, citing insufficient reliability. (10:20–12:05)
- Definitions Matter: U.S. policy defines an AWS as a weapon system that, once activated, can select and engage targets without further human intervention (a human may be “in the loop,” but is not required). Anthropic echoed current U.S. policy, which requires “appropriate levels of human judgment.” (11:08–12:05)
- Contract Fallout & Public Backlash: While Anthropic drew a public line, OpenAI took the contract after Anthropic bowed out, leading to criticism from AI scientists and observers who questioned the legitimacy of these so-called red lines. (31:14–32:28)
- Reality Check: Both companies’ positions and contracts are similar; much of the dispute seems to be about PR and corporate relationships rather than substantive differences in ethical stances. (24:09–34:26)
"Both companies have essentially agreed to both red lines... [it's] not necessarily that different between Anthropic or OpenAI."
— Dr. Sarah Shoker (24:36)
3. Role of AI in Recent Military Actions
- Specific Use Case: In the recent Iran campaign, the Maven Smart System (MSS), incorporating Anthropic’s Claude model, was credited with generating 1,000 military targets in a single day, double the volume of the 2003 “shock and awe” campaign in Iraq. (14:33–18:26)
- Integration: Claude (by Anthropic) used within Maven (by Palantir) summarizes and prioritizes data for human analysts—improving efficiency but potentially offloading some human judgment. (16:14–17:21)
- Scale & Speed: AI has enabled unprecedented scale and speed in target selection and analysis.
“[It] boosts efficiency... but it does also seem to offload a little bit of human autonomy and decision-making as well.”
— Dr. Sarah Shoker (17:21)
4. Human In/On/Out of the Loop
- Current Policy: Humans are kept in the loop, particularly at the final decision point ("who said it was a good idea to blow this thing up?"). But as automation increases, risks arise that machines may end up making more decisions by default or that humans might blindly trust AI recommendations. (12:05–14:14)
- Reliability & Risk: Current AI models are not seen as sufficiently reliable for autonomous kill decisions.
5. Transparency, Accountability, & Contracts
- Opacity: The details of defense contracts (e.g., the $200M deal between Anthropic and the DoD) are highly secretive; even employees often don’t know the terms. The dollar amounts are small relative to overall government and company budgets, so military revenue makes up only a fraction of these companies’ funding. (27:22–28:37)
- Consumer Power: Most revenue for these companies still comes from individual users and enterprise customers, suggesting public pressure can be a significant lever for influencing corporate behavior. (29:10–30:03)
6. Who Should Set the Rules?
- Debate Over Rule-Setting: Should regulations come from the Pentagon, private AI companies, or elected representatives? (38:00–41:50)
- Lobbying & Influence: AI companies are major political donors, sometimes advocating for low regulation as a matter of “national security.”
- Congressional Oversight: There are tools Congress can use—legislation, hearings, classified briefings, controlling procurement—to oversee and guide military AI use, though tech literacy and political gridlock are limiting factors. (40:25–43:20)
“I think the model of who should be setting the rules—maybe it’s our democratically elected representatives.”
— Paul Scharre (41:50)
7. AI’s Psychological & Strategic Risks
- "AI Confidence": There’s a risk that analysts (or commanders) will develop undue confidence in AI-generated outputs, mistaking statistical predictions for objective truth, potentially making the military more trigger-happy or prone to escalation. (53:41–55:38)
- Human Error vs. Machine Error: Stewart questions whether AI might in some cases be less fallible than humans (“We bombed randomly before computers ever happened”), though the guests note that AI’s speed and scale could magnify mistakes. (78:17–80:18)
“Are we elevating humanity to a higher status than we’ve earned?”
— Jon Stewart (79:54)
8. AI, Biological Risks, & Escalation
- Dual-use Dangers: The same tools used for targeting could, in other contexts, be used to design new biological or chemical weapons, lowering barriers for rogue actors. (56:33–58:02)
- Escalatory Bias: Studies show LLMs tend to escalate conflict in military simulations more quickly than humans, possibly due to training data that over-represents war and crisis scenarios. (50:31–51:17)
“Models have a tendency to escalate more aggressively than humans would... That in itself is, of course, a cautionary tale.”
— Dr. Sarah Shoker (51:17)
9. International Governance & Regulation
- Global Difficulty: Attempts at global treaties to regulate lethal autonomous weapons have stalled. Achieving international consensus is tough, though reaffirming basic norms (like civilian protection) has been possible. (62:20–64:34)
- Hardware as a Choke Point: Since advanced AI models require specialized chips (mostly dependent on Taiwanese, Japanese, Dutch, and U.S. tech), controlling chip exports and usage is seen as the most promising current lever for setting guardrails on military (and WMD) AI use. (65:11–67:55)
“That actually [hardware] is a really narrow choke point to begin to then control the technology.”
— Paul Scharre (66:24)
Notable Quotes & Memorable Exchanges
“If public reporting is anything to go by... 1,000 targets in Iran has largely been credited to the MSS, the Maven Smart System.”
— Dr. Sarah Shoker (15:39)
“The question is, should there be any rules? And if so, who sets those rules?... The Pentagon’s answer is, we get to set the rules. We don’t want these companies dictating to us.”
— Paul Scharre (37:36)
On lobbying:
“AI companies are... donating significant sums to lobbying efforts and tying those donations to US-China tech competition... This conversation is in fact coming for Congress and they probably better be equipped at the very least.”
— Dr. Sarah Shoker (38:46)
On international agreements:
“It’s really going to be an all-of-society effort... A one-size-fits-all approach to safety is probably not going to work.”
— Dr. Sarah Shoker (71:45)
On pessimism/cynicism in AI governance:
“There isn’t really a way out of this that doesn’t involve talking a lot to other people, but there is something there to build on.”
— Dr. Sarah Shoker (73:40)
Key Segment Timestamps
- Host + Guest Intros: 01:08–04:51
- How Military Uses AI: 05:25–12:05
- Anthropic/OpenAI “Red Lines”: 10:20–14:33
- AI in Iran Targeting, Maven System: 14:33–18:49
- Transparency & Contracts, Scale Discussion: 24:09–34:26
- Who Sets the Rules? Congress/DoD: 38:00–44:49
- AI Confidence & Risk: 53:41–58:02
- Personality of AI in Warfare: 58:35–59:45
- Global Regulation, Hardware Choke Points: 62:20–68:07
- Quantum Computing & Future Risks: 68:41–70:48
- Summary/Wrap-up – Where Is the Tension: 70:48–74:40
Conclusion: The Big Picture
The heart of the conversation is not just the technical capabilities or current uses of AI in warfare, but the broader moral, legal, and democratic dilemmas: Who should decide what roles AI plays in war? Can public pressure, regulatory action, and international norms keep pace with technological transformation—or will commercial and military priorities win out? The guests encourage public engagement, transparency, and broad debate, noting that for-profit companies, the military, and democratic institutions must all be part of framing the path ahead.
“These decisions are too important to be left up to any one of these entities on their own... All of us, your listeners, have a role to play in weighing in on this debate.”
— Paul Scharre (74:40)
Final Thoughts by the Panel
- AI is changing warfare rapidly, but the biggest risks may lie not in “killer robots” today, but in reliance on tools even their creators barely understand.
- Corporate posturing often masks deeper, unresolved ethical gaps.
- Future regulatory frameworks may depend not just on law or technology, but on public engagement and persistent, sometimes messy, democratic debate.
For listeners and readers: This episode is an urgent, accessible, and occasionally chilling tour through the real world of military AI, offering nuance beyond the headlines and a candid call for civic engagement in shaping the future of war and technology.
