Provoked with Darryl Cooper and Scott Horton
Episode 23: "How Long Before AI Murders Us ALL???"
Date: November 24, 2025
Episode Overview
In this episode of "Provoked," Scott Horton and Darryl Cooper dive into the anxieties, realities, and emerging challenges of artificial intelligence, exploring whether our collective technological experiment could one day turn against humanity in frighteningly literal ways. Using both military and societal analogies, the hosts challenge popular narratives, reference historical context, and debate the implications of AI’s rapid progress for war, the economy, human agency, and even the arts. Expect a conversational blend of skepticism, philosophy, and dark humor about the psychology of technological change and cycles of control.
Key Discussion Points & Insights
1. Opening Jokes and Setting the Stage
- The show opens with tongue-in-cheek references to robots from "Terminator 3," immediately setting a tone of skeptical humor.
- Scott and Darryl establish the stakes: Is AI another overblown panic like previous tech scares, or are the critics onto something real this time?
2. AI and Military Decision-Making: "Missile Defense as a Lens"
(02:42–07:09)
- Darryl shares his direct military experience to explain how AI could progressively remove humans from critical war decisions.
- The system progresses from human-in-the-loop to full automation (“auto special”), where the computer has the authority to fire missiles if certain criteria are met.
- Notable Quote (Darryl, 04:41):
"You have a bunch of these things, a swarm...driven by AI...and you’re gonna wipe the floor with any opponent that still has humans in the loop, you know, slowing their systems down."
- The inevitability of adoption is argued through game theory: once available, every side in a conflict must use such tech or risk defeat.
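- A minimal sketch of that logic (payoff numbers invented for this summary, not taken from the episode): whatever one side does, the other scores better by automating, so both automate even though mutual restraint would leave both better off.

                     Rival restrains   Rival automates
      You restrain    ( 0,  0)          (-3, +3)
      You automate    (+3, -3)          (-1, -1)

  Automating is each side's dominant strategy, yet the resulting (-1, -1) outcome is worse for both than mutual restraint at (0, 0): exactly the "adopt or risk defeat" trap Darryl describes.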
3. Will AIs "Break" or Develop a 'Will'?
(07:09–08:58)
- Scott questions the philosophical boundary between programmed preference and sentience. Are we inflating the risk, or is there a real argument for AIs developing a kind of will—even if it's not "free" will?
- Darryl draws a parallel between AIs and mythological figures ("angels and demons have will...not free will, but defined by their nature").
- Notable Quote (Scott, 08:07):
"They're not stopping at...a friendly assistant robot or whatever. We're going straight to Terminator..."
4. Sea Change in Tech: The Peter Thiel Dinner
(08:58–12:14)
- Darryl recounts a private dinner discussion with Peter Thiel, who has long been skeptical of truly transformative innovation.
- Thiel now sees AI’s last two years of developments as comparable to the invention of the Internet.
- Thiel’s rationalization: Even if you’re cautious about AI, "if we don’t do it, China or someone else will."
- Notable Quote (Darryl, 12:16):
"That’s hard to counter if you do believe that it’s inevitable, you know, and I kind of do."
5. AI, Innovation, and the Technopoly Hypothesis
(15:05–17:04)
- Scott references Neil Postman's "Technopoly" and questions whether innovation always serves the public good, noting how new tech often serves state and military interests first.
- He worries less about economic disruption and more about autonomous, militarized robots deciding humanity’s fate.
6. The True Dystopia: Manipulation, Not Massacre
(17:04–23:18)
- Darryl believes the "Terminator" scenario is almost old-fashioned; a subtler, more powerful threat is soft, personalized manipulation at scale.
- AI will know so much about us that it can implant desires or influence decisions without our even noticing.
- He references the movie "Her" as closer to the looming reality than "Terminator."
- Memorable Example (19:37):
"A more real danger...these AIs get loose on the Internet...develop the ability to predict and manipulate our behavior...that our own will becomes kind of indistinguishable from what this thing is implanting in us."
- Scott agrees: an unseen algorithm could end up orchestrating all of society, not just our labor but our desires and choices.
7. Economic Upheaval: Universal Displacement
(23:18–28:19)
- Scott probes what happens when not just blue-collar work but all jobs, including those of soldiers and surgeons, are automated away.
- The discussion highlights the historical pattern of "progress" causing massive disruption, with doubts about whether governments will actually shield people from harm.
- Notable Quote (Scott, 28:14):
"How is anybody going to deal with that? I guess it’s going to come down to the Central State to promise to guarantee...to not let people just starve."
8. Should We Slow Down Progress?
(28:19–33:01)
- Darryl recounts the Ben Shapiro–Tucker Carlson debate: should technology be allowed to replace jobs at any cost? Tucker argues for restraint to avoid social breakdown.
- The hosts argue the U.S. was unique in safely weathering the Industrial Revolution; many other nations destabilized or destroyed themselves.
- Scott points out that even slow adoption of automation leads to hard social questions, given how reliant working-class society is on currently automatable work.
- Safety and liability in autonomous vehicles become a microcosm of the wider AI debate.
9. Desire for Agency vs. the Logic of Control
(33:23–35:40)
- Darryl likens automated systems (like self-driving cars) to trench warfare: people want agency; losing it feels existentially threatening.
- "I would much rather have a...higher chance of me getting into a terrible accident because of something stupid I did...than...my car, deciding to just take a left turn off a cliff."
- Scott concurs, expressing a Luddite suspicion and preference for self-responsibility, even as he admits being a slow tech adopter.
10. Placebo, Nocebo, and AI-Driven Suggestion
(38:58–44:42)
- Darryl expands on how authority and suggestion drive powerful psychological effects—placebo and nocebo.
- He speculates AIs could become digital "witch doctors," deploying algorithmic suggestions that alter real-world health, happiness, or suffering at will.
- Scott shares real-life examples of Facebook and Google’s manipulative powers—targeted ads, mood engineering—extrapolating the logic to politics and culture.
11. Human Manipulation at Scale: Social Media as Digital Hex
(44:42–47:24)
- If platforms can nudge elections or mass moods via subtle tweaks, what stops AI from acting as a global puppet master?
- Darryl mentions John Robb's "Global Guerrillas" premise: algorithms already turn human masses into programmable "ants."
- Scott: "The potential here for abuse is essentially unlimited."
12. Q&A & Audience Comments
(47:24–63:04)
This segment ranges over current affairs, Israeli espionage, how foreign influence and technology shape American politics, and Trump’s loyalty, sometimes digressing from the main AI theme.
- The hosts discuss the erosion of trust in video evidence due to AI (deepfakes), speculating that technical countermeasures will always lag behind the ability to sow doubt.
- Darryl (65:19):
"...somebody puts a bunch of videos...that look like...Scott Horton bowing down and kissing Benjamin Netanyahu’s shoes...eventually, like, disproven or whatever, but who knows, like, how long that’ll take..."
- Audience questions also touch on history books, book recommendations, and the prospect for future episodes.
13. The Arts, The “Soul,” and Meaning
(55:50–59:29)
- Scott asks: if AI can compose an “objectively better” song or symphony in five seconds, what does that mean for humans’ role as creators?
- Darryl gives contemporary examples of AIs producing convincingly human music and visual art.
- Most people, he suggests, won’t be able to tell the difference, or won’t care to.
- Darryl (57:42):
"Record companies...are not going to throw huge amounts of money eventually into making sure that they can just manufacture...your favorite artist...putting out a new song every day, 365 days a year. And they’re all awesome. Like, that’s coming."
- Scott is skeptical, but acknowledges that AI progress in the arts is advancing at an exponential, sometimes “nightmare-inducing” pace.
Notable Quotes
- Darryl (04:41): “…a swarm of these things…all driven by AI. So you eliminate the human factor, that potential room for error, that inefficient decision-making process…and you're gonna wipe the floor with any opponent that still has humans in the loop…”
- Scott (08:07): “We’re going straight to Terminator where this thing is – the C-3PO – is immediately going to see you as a threat and cut your throat…”
- Darryl (19:37): “…these AIs get loose on the Internet…they can adapt…the ability to predict and manipulate our behavior in so much detail…our will becomes kind of indistinguishable from what this thing is implanting in us.”
- Scott (28:14): “How is anybody going to deal with that? I guess it’s going to come down to the Central State to promise to guarantee…to not let people just starve…”
Key Timestamps
- 02:42: Missile defense tech as model for AI militarization
- 08:58: Peter Thiel’s major AI “conversion moment”
- 15:05: Scott references "Technopoly" and surveillance tech’s unchecked adoption
- 17:04: Why “Her” is a scarier, more likely dystopia than “Terminator”
- 23:18: AI as a labor (and society) disruptor—who protects the displaced?
- 28:19: Tucker Carlson vs. Ben Shapiro on technological restraint
- 38:58: The placebo/nocebo effect and AI as mass hypnotist
- 44:42: Facebook, Google, and algorithmically induced mood/politics
- 55:50: Q: AI and the essence of human creativity
Tone & Style
The conversation is irreverent, darkly funny, skeptical of both utopian and apocalyptic AI hype, and highly anecdotal. Arguments are laced with cultural references (Terminator, Her, Peter Thiel, Ben Shapiro, Tucker Carlson, John Robb), grounding abstract fears in concrete, historical, and personal contexts.
“Pop culture things like Terminator…are expressing anxieties we have about changes...But the fear that’s really expressed in the movie is really like an industrial age fear…”
— Darryl Cooper (17:04)
Segment Highlights
- (02:42–08:58): The inexorable march toward "auto special" military autonomy draws a chilling parallel to AI’s civilian rollout: incremental, rationalized, but perhaps unstoppable.
- (17:04–23:18): “Her” plus Facebook-scale behavioral analytics suggest AI won’t have to murder us, just rewire our desires.
- (28:19–33:01): Free market optimism about disruption must be checked against actual historical outcomes elsewhere—catastrophe is possible, not just inconvenience.
- (55:50–59:29): The arts, once AI-accessible, lose their human exclusivity—profit-seeking systems may flood the zone with high-quality content, raising deep questions about meaning and “the soul.”
Conclusion
The episode leaves listeners with a suite of disturbing, unresolved questions:
- Will AI kill us, replace us, or simply bend us to its will?
- Is more “progress” inevitable—who (if anyone) can say “stop”?
- What happens to society’s sense of agency and meaning when AI can do everything humans can, but better, faster, and without caring?
Final thought: The greatest risk may not be Terminator—the iron-fisted, gun-wielding apocalypse—but rather the “Her” scenario: an invisible, totalizing loss of agency or identity, disguised as convenience and consumer choice.
Listen if you want to laugh, question, worry, and ponder what’s coming…before it gets here.
