80,000 Hours Podcast:
AI Won’t End Mutually Assured Destruction (Probably)
Guests: Sam Winter-Levy & Nikita Lalwani
Hosts: Rob Wiblin & Luisa Rodriguez
Date: March 10, 2026
Episode Overview
This episode explores whether the advent of advanced AI, including AGI (artificial general intelligence), could disrupt or even end the era of nuclear deterrence and mutually assured destruction. The discussion dives into the mechanics of nuclear deterrence, the technicalities of maintaining a secure second-strike capability, and the ways AI might alter the strategic balance—or why, despite fears, it likely won’t fundamentally change the central logic of nuclear stability. The guests, Sam Winter-Levy (Carnegie Endowment for International Peace) and Nikita Lalwani (former White House Director for Technology and National Security), draw on their Foreign Affairs article to probe these dynamics and their policy implications.
Key Discussion Points & Insights
1. Basics of Nuclear Deterrence and Secure Second Strike
- Nuclear deterrence: Discourages adversaries from nuclear aggression due to the threat of a devastating second strike.
- “Deterrence refers to the practice of dissuading an adversary from taking certain undesirable actions up to and including a nuclear attack.” — Nikita, [03:22]
- Mutually assured destruction (MAD): Both sides can retaliate after an attack, making a first strike suicidal.
- Second strike capability: Survivability is central (via redundancy, triad forces, concealment).
- US maintains triad (ICBMs, nuclear subs, airborne bombers); other nuclear states use unique mixes (road mobile launchers, subs).
- “US has triad... UK only has nuclear submarines. Russia and China have road mobile launchers.” — Sam, [08:06]
2. Limits of Nuclear Deterrence
- Deterrence doesn’t eliminate all coercion or conflict but sets strict boundaries, especially over “vital interests” like territory or regime survival.
- Ultimately, power in the nuclear context is as much about resolve and nerves as about arsenal size.
- “Winner ... is determined less by the balance of power ... and more by the balance of nerves.” — Sam, [00:07]/[04:55]
- Even hugely outmatched nations (e.g., Russia vs. US) retain bargaining leverage if nuclear weapons are at stake.
3. How Might AI Undermine Nuclear Deterrence?
- There are three main ways:
- Splendid first strike: Ability to find and destroy all adversary nuclear forces.
- Crippling NC3 (Nuclear Command, Control & Comms): Paralyzing retaliatory capability at key moments.
- Missile defense breakthroughs: Making retaliation ineffectual by intercepting incoming nukes.
Deep Dives: Nuclear Arsenal Survivability in the AI Era
4. Submarine-Launched Ballistic Missiles (SLBMs) and Sub Survivability ([11:25] – [19:42])
- Finding subs is extremely hard.
- The ocean’s vastness, noise, and the ultra-quiet design of modern nuclear subs make detection a “brutal” technical problem.
- “Finding needles in a haystack, designed to be as difficult to find as possible...” — Sam, [13:37]
- Could AI “fix” this?
- Hypothetically, advanced AI might fuse multisensor data (acoustic, magnetic, satellite), deploy swarms of autonomous underwater drones, hack sub communications, and improve tracking.
- States could respond with countermeasures: decoys, jamming, moving to protected waters, or making their own subs harder to track.
- Bottom line: Even with AI, physical and operational limits, cost, and counter-countermeasures make comprehensive submarine detection “an extremely hard technical problem.”
- “Ballistic missile submarines are likely to remain reliable second strike nuclear forces over the next 20 years and beyond.” — Sam quoting Tom Stefanik, [18:47]
5. Road-Mobile Launchers ([22:39] – [28:18])
- What makes them survivable:
- Concealment, frequent relocation, hiding in tunnels/remote places, use of decoys.
- Can AI solve it?
- AI could accelerate image analysis from satellites, fuse intercepted signals, and dramatically shrink the search area.
- Response: Classic low-tech methods (chicken wire, cloud cover, decoys) can stymie even advanced AI. Adversaries can destroy satellites in wartime.
- “Defending states have a lot of options...including very low-tech measures.” — Sam, [27:33]
6. Missile Defense ([29:11] – [35:20])
- Challenge: Intercepting ICBMs is staggeringly hard—likened to “hitting a bullet with a bullet.”
- AI improvements: Accelerate sensor processing, distinguish warheads from decoys, perhaps create more agile interceptors.
- Limits:
- Massive cost; uncertainty; offense-defense competition still favors the attacker (easier to build more missiles/decoys than interceptors).
- “Missile defense remains a... domain that is really tilted against the defender.” — Sam, [34:40]
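The offense-defense arithmetic behind this point can be made concrete with a toy calculation (the numbers below are purely illustrative assumptions, not figures from the episode): even highly effective interceptors leave "leakers" in expectation, and the attacker can cheaply add warheads while the defender must multiply interceptors.

```python
# Illustrative sketch only: all numbers are hypothetical assumptions,
# not from the episode. Models each intercept attempt as independent.

def expected_leakers(warheads, intercept_prob, shots_per_warhead):
    """Expected number of warheads that survive the defense,
    assuming each intercept attempt independently succeeds
    with probability intercept_prob."""
    leak_prob = (1 - intercept_prob) ** shots_per_warhead
    return warheads * leak_prob

# 300 incoming warheads, 90%-effective interceptors, two shots per warhead:
print(expected_leakers(300, 0.9, 2))   # ~3 warheads still get through

# The attacker doubling its warheads doubles the expected leakers,
# while the defender must add interceptors against every new target:
print(expected_leakers(600, 0.9, 2))   # ~6 expected leakers
```

Even three "leakers" means three cities lost, which is why the guests describe the domain as tilted against the defender.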
7. Nuclear Command, Control, and Communications (NC3) ([35:48] – [43:24])
- Basic structure: Hundreds of redundant and hardened systems, both digital and analog, above and below ground.
- Cyber vulnerabilities:
- AI-enabled cyberoperations could sabotage rival networks—but heavy redundancy, compartmentalization, and extreme secrecy create many obstacles.
- High stakes: states may hesitate to test or use cyberweapons against NC3 for fear of triggering nuclear escalation.
- Memorable moment:
- “The UK has these ‘letters of last resort’ for sub commanders... Russians had the ‘dead hand’ system, to ensure retaliation even if command is destroyed.” — Nikita/Sam, [43:02]
Strategic Dynamics & Windows of Instability
8. Move-Countermove Dynamics, Arms Racing, and Deterrence Stability ([44:09] – [48:27])
- The history of nuclear arms competition is shaped by perpetual adaptation. AI would accelerate the pace of that adaptation but not change its underlying logic.
- “The onus is entirely on the attacker to be able to get close to 100% certainty.” — Sam, [44:09]
- Insecurity about survivability could spark dangerous arms races, making the world costlier and potentially less stable—even if MAD persists.
9. Damage Limitation vs. Mutual Assured Destruction
- Some scholars/officials think damage limitation (being able to destroy most of an adversary’s nukes) could still be meaningful (fewer cities lost).
- Others (dominant in academia) argue: as long as retaliation is possible, “winning” a nuclear war is an oxymoron.
- “Even if AI could help the US take out 90%... 10% is still absolutely devastating.” — Sam, [51:54]
- No algorithm determines what level of risk is “acceptable” for leaders to launch a first strike.
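The certainty problem above has a simple probabilistic shape (the numbers here are illustrative assumptions, not from the episode): a "splendid" first strike must destroy every target, so the overall success probability is the product of per-target kill probabilities, and it collapses quickly across hundreds of aim points.

```python
# Illustrative sketch only: hypothetical numbers, not from the episode.
# Assumes each target is destroyed independently with the same probability.

def p_total_disarm(targets, per_target_kill_prob):
    """Probability that *every* target is destroyed, assuming
    independent per-target outcomes."""
    return per_target_kill_prob ** targets

# Even 99% confidence per target collapses across 400 aim points:
print(p_total_disarm(400, 0.99))   # roughly 0.018 — under a 2% chance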
Fast Takeoff Scenarios and Strategic Surprise
10. What if AI Progress Becomes Sudden or Discontinuous? ([56:40] – [61:31])
- Danger is highest if AI breakthroughs come so quickly that adversaries can’t adapt (via redundancy, decoys, etc.).
- If AI enables sudden leaps in intelligence analysis, rivals may not realize a vulnerability window has opened.
- “The critical question...is the relative speed of two things: AI progress and adversary adaptation.” — Sam, [57:29]
- Institutional, legal, doctrinal, and political factors will still slow the integration and use of new technology—even with rapid advances.
- “Even if technology changes overnight, states don’t generally integrate advanced technology at the same speed.” — Sam, [59:46]
Policy Recommendations & No-Regrets Moves ([62:38] – [68:53])
11. What Should Governments Do?
- Dialogue: Enhance communication between AI and nuclear experts (who often lack each other’s expertise).
- “AI experts...are not expert on nuclear weapons...Nuclear community...not entirely on top of...AI breakthroughs.” — Sam, [63:17]
- Review vulnerabilities: Rigorous, ongoing assessments—especially of cyber risks in nuclear systems.
- Avoid fueling arms races: Avoid “wonder weapon” rhetoric and maintain arms control dialogues.
- Preserve redundancy and resilience in NC3 and arsenals.
- Prepare for fast-takeoff scenarios: Build state capacity and policy flexibility.
- Maintain communication & escalation control: More important than ever.
Notable Quotes & Memorable Moments
- On the spirit of deterrence stability:
“The balance is tilted against the state that might want to launch a splendid first strike.” — Sam, [44:09]
- On the decision calculus for a perfect first strike:
“Launching a splendid first strike here involves launching hundreds, potentially thousands of nuclear weapons ... based on a belief that you have 100% probability ... that's a huge gamble...” — Sam, [54:22]
- On arms racing risks:
“Even if second-strike capabilities survive ... that’s still a very kind of destabilizing, potentially scary world to live in.” — Sam, [47:31]
- On lessons from history:
“The US had technology dominance over Vietnam and the Taliban, but suffered unambiguous defeat. So … technological power ≠ political dominance.” — Sam, [54:22]
Timestamps for Central Sections
- Intro to nuclear balance and AI’s potential impact: [00:00] – [01:53]
- Deterrence and MAD 101: [03:22] – [08:06]
- Undermining the second strike – 3 pathways: [09:09] – [10:31]
- Submarines: [11:25] – [19:42]
- Mobile launchers: [22:39] – [28:18]
- Missile defense: [29:11] – [35:20]
- Command, Control & Communications: [35:48] – [43:24]
- Fast takeoff scenarios, adaptation, and surprises: [56:40] – [61:31]
- Policy recs (“no regrets” moves): [62:38] – [68:53]
- Book/movie recommendations: [69:01] – [70:43]
Final Recommendations & Reflection
The episode’s tone is thoughtful and sobering, emphasizing uncertainty, technical challenges, and the historical resilience of nuclear deterrence logic, even as AI adds complexity and speeds up the strategic “move-countermove” game. The key takeaway: while AI can complicate nuclear strategy and exacerbate arms racing, the fundamental challenge of guaranteeing a splendid first strike—and the willingness of leaders to gamble on it—are likely to keep MAD in place for the foreseeable future.
Book rec: A Swim in a Pond in the Rain, George Saunders
TV rec: The UP Series (documentary, UK)
