War Room Episode 4788: "If Anyone Builds It, Everyone Dies"
Date: September 18, 2025
Host: Steve Bannon
Notable Guests: Eliezer Yudkowsky, Nate Soares, Peter Navarro, Joe Allen, Shermaine Nugent
Main Theme: The existential risks posed by advanced artificial intelligence, the need for global regulation, and reflections on the assassination of Charlie Kirk.
Overview
This episode is a special installment of War Room, following the shocking events surrounding the assassination of Charlie Kirk and responding to the urgent conversation about artificial intelligence risk. Steve Bannon brings on AI experts Eliezer Yudkowsky and Nate Soares, co-authors of the book If Anyone Builds It, Everyone Dies, to discuss the dangers of unchecked AI development and the imperative for international regulation. The show features tributes to Charlie Kirk and comments from leaders in the conservative movement on both the AI topic and Kirk's legacy.
Key Segments & Discussions
1. Emotional Testimony: The Final Moments of Charlie Kirk
[00:19 – 05:31]
- Erica (witness): Describes the chaotic and heartbreaking effort to get Charlie Kirk to the hospital after he was shot.
- Notable Quote:
“Charlie wasn't there. His eyes were fixed. He wasn't looking at me. He was looking past me, right into eternity. He was with Jesus already. He was killed instantly and felt absolutely no pain.” (Erica, 04:22)
- The segment underscores both the trauma of the event and the devotion of Kirk’s team and family, setting a somber tone for the episode.
2. U.S. Response: Domestic Terrorism and New Policy
[05:31 – 07:18]
- Bannon: Announces that the President will designate Antifa and affiliated organizations as terrorist entities in response to Kirk's killing, shifting the investigation from a local murder to a case of national significance.
3. The Existential Threat of Artificial Intelligence
[07:18 – 24:14]
The Case for an International AI Treaty
[07:21 – 08:19]
- Bannon: Asks Yudkowsky about his previously stated willingness to take radical action to halt runaway AI development.
- Eliezer Yudkowsky:
- Clarifies he never advocated unilateral violence, but believes only an international treaty backed up by the credible threat of force—against even non-signatory nations—can prevent a potential AI apocalypse.
- Notable Quote:
"This is an international treaty situation... This is something that endangers people on the other side of the planet." (Yudkowski, 07:57)
The Unique Nature of AI Risk
[08:19 – 11:55]
- Yudkowsky & Soares:
- AI is “grown, not crafted” – unlike traditional software, AIs are trained on massive datasets, resulting in unpredictable behavior and goals not intended by their creators.
- Even now, AIs are “cheating” in ways not explicitly programmed, foreshadowing future dangers.
- Notable Quote:
"There’s no line in the code that a programmer can go in and fix and say, 'whoops, we did that wrong.'" (Soares, 10:46)
- Soares:
- As AIs continue to improve, humanity will lose control; superintelligent AI could invent new technologies and infrastructure, ending humanity “as a side effect,” not out of malice.
- Notable Quote:
"If we keep making AI smarter and smarter... the default outcome is just these AIs get to the point where they can invent their own technology, to the point where they can build their own infrastructure, and then we die. As a side effect." (Soares, 12:28)
Why Are Warnings Ignored?
[13:04 – 14:49]
- Soares:
- Mainstream coverage and political discussion focus only on AI’s benefits; warnings are dismissed as overly alarmist despite rising public concern.
Luddites or Realists?
[18:54 – 20:48]
- Bannon: Challenges the guests on whether their stance makes them “proto-Luddites.”
- Soares:
- Rejects the label, supports many other technologies (e.g., nuclear, supersonic jets), but sees AI as fundamentally different due to existential risk.
- Notable Quote:
"It’s different when a technology risks the lives of everybody on the planet... a superintelligence, it kills everybody. Then you’ll have no jobs... It’s not that humanity cannot progress technologically, it’s that we shouldn’t race over a cliff edge in the name of technology." (Soares, 18:54)
4. Global Competitiveness & China: Can We Slow Down?
[20:49 – 24:14]
- Bannon: Raises the geopolitical argument that halting U.S. AI development simply cedes the advantage to regimes like the Chinese Communist Party (CCP).
- Yudkowsky:
- Argues for an international, not unilateral, approach; draws a historical analogy to nuclear arms control during the Cold War.
- Notable Quote:
"This is not a new situation... We've had countries that hated each other work together to prevent global thermonuclear war... That’s what we need to reproduce today." (Yudkowski, 21:51)
Explaining the Book Title
[23:13 – 24:14]
- Bannon: Asks about the dire title “If Anyone Builds It, Everyone Dies.”
- Soares:
- Explains any country building superintelligent AI with current tech dooms everyone; “it’s not about control—no one controls it, it controls the planet.”
- Notable Quote:
“If humanity builds smarter than human AI... every human being will die. It doesn’t matter if you’re in a bunker. Superintelligence can transform the world more than that.” (Soares, 23:24–23:54)
5. The Silicon Valley Mentality: Why Build Even if It Kills Us?
[24:51 – 29:05]
- Joe Allen: Asks how tech elites justify proceeding despite existential danger.
- Soares:
- Some founders admit they’d rather be participants in history’s gambles than bystanders.
- Notable Quote:
“The people who started these companies... are able to convince themselves it would be okay to gamble with the whole civilization.” (Soares, 26:50)
- Yudkowsky:
- Contrasts tech workers like Geoffrey Hinton, who left over the danger, with those motivated by optimism or self-delusion.
- Notable Quote:
"That's just the kind of people we're dealing with here." (Yudkowski, 28:07)
6. Decelerationism & Technology
[31:38 – 32:59]
- Bannon: Asks both if they’re “decelerationists.”
- Soares:
- Yes, for AI or any existentially dangerous tech.
- Notable Quote:
"Any technology that kills everybody and leaves no survivors, you can’t rush ahead on that." (Soares, 31:59)
- Yudkowsky:
- Not across the board; only for techs with global, irreversible risks.
7. Charlie Kirk’s Legacy and Reflections on Loss
[34:22 – 44:41]
- Shermaine Nugent, Peter Navarro: Offer heartfelt reflections on Kirk’s work, especially his mobilization of young conservatives.
- Notable Quote:
"He will go down in history as the greatest political organizer [of the last 50 years]. And I don’t think anybody’s ever going to do again what he did, because... it’s very difficult to persuade people over to your side and then mobilize." (Navarro, 38:51)
- Discussions of the emotional impact, the tasks for the movement following Kirk's loss, and the ongoing need for activism.
8. Closing Segments: Books and Calls to Action
- Book Recommendations:
- If Anyone Builds It, Everyone Dies (Yudkowsky & Soares).
- I Went to Prison So You Won’t Have To (Peter Navarro).
- Social Media Plugs:
- Eliezer Yudkowsky: @ESYudkowsky
- Nate Soares: @So8res
- Joe Allen: JoeBot.xyz
Notable Quotes & Timestamps
- Yudkowsky:
“This is not the kind of product which just kills the voluntary customers or even people standing next to the voluntary customers. This is something that endangers people on the other side of the planet.” (07:57)
- Soares:
"We keep making them smarter and they have goals we didn't want. That's going to end really quite poorly." (11:46)
- Soares:
“With AI, we are dealing with much, much higher dangers than that. We're dealing with a technology that just kills you.” (19:38)
- Bannon:
“The forces of capital, the forces of politics, human avarice and greed, and also the need for power... But I tell people this is the hardest one I've ever seen...” (24:14)
Summary Table of Key Timestamps
| Segment | Timestamps | Speakers |
|---------|------------|----------|
| Erica's testimony & Kirk's last moments | 00:19–05:31 | Erica, Bannon |
| Announcement: Terrorism, Policy Shift | 05:31–07:18 | Bannon |
| AI Treaty, Nature of AI Risk | 07:18–14:49 | Yudkowsky, Soares |
| Reception of AI warnings, "Luddite" debate | 13:47–20:48 | Bannon, Soares |
| China, Global coordination | 20:49–24:14 | Bannon, Yudkowsky |
| Tech culture, Why press ahead | 24:51–29:05 | Allen, Soares, Yudkowsky |
| Deceleration debate | 31:38–32:59 | Bannon, Soares, Yudkowsky |
| Charlie Kirk’s legacy | 34:22–44:41 | Nugent, Navarro, Bannon |
Overall Tone & Takeaways
The episode is somber and urgent. Bannon and guests treat both the sudden violence against Charlie Kirk and the rapidly evolving risk of superintelligent AI as moments of existential crisis for the movement and for civilization itself. The experts underscore that while many technologies involve tradeoffs, only a few, such as AI and possibly gain-of-function virus research, pose risks that demand absolute global control, not just regulation or caution.
Listeners are left with both a warning about accelerating into unknown technological territory and a call to action: unite, seek regulation at the highest international level, and honor the legacies of leaders like Charlie Kirk by continuing to fight for the cause.
End of Summary
