Podcast Summary: Today in Focus
Episode: Will the backlash against AI turn violent?
Date: April 24, 2026
Host: Annie Kelly (The Guardian)
Guests: Nick Robins-Early (Journalist, extremism/disinformation/tech), Sean Fleming (Researcher, University of Nottingham)
Overview
This episode delves into a recent, unprecedented attack targeting Sam Altman—the CEO of OpenAI—and explores whether this incident indicates a broader, escalating backlash against artificial intelligence that could become violent. Host Annie Kelly and her expert guests discuss the perpetrator’s motivations, the radicalization of anti-AI communities, public distrust of AI, the potential for copycat attacks, and how the tech industry might respond.
Key Discussion Points & Insights
1. The Attack on Sam Altman
- Incident Description
- In early April 2026, Daniel Moreno Gamma, a 20-year-old from Texas, carried out an arson attack and an attempted break-in targeting Sam Altman’s home and OpenAI’s San Francisco headquarters. He left behind a manifesto detailing anti-AI sentiments and plans to target leading tech figures (01:00–02:39).
- Law Enforcement Response
- Moreno Gamma was charged with attempted double homicide and arson. Authorities suggest the attack was premeditated and may qualify as domestic terrorism if proven to be politically motivated.
"Federal authorities have been quite harsh on this...if it's proven that he intended to carry out this for political reasons, they would treat it as domestic terrorism." — Nick Robins-Early (07:00)
- Mental Health & Family's Statement
- The suspect’s family emphasized his mental health crisis, describing him as caring and non-violent when well (06:06–06:26).
2. Who Is Daniel Moreno Gamma?
- Background
- Former college student, worked in a restaurant, had no criminal record.
- Maintained an active digital presence under the name ‘Butlerian Jihadist’ (a reference to the anti-machine uprising in Dune); posted in anti-AI forums (04:00–04:44).
- Views on Violence
- In a podcast interview recorded before the attack, he publicly condemned violence but framed it as a justifiable last resort.
"No, that would be bad. It wouldn't do anything. It's just not worth it." — Daniel Moreno Gamma, on killing Sam Altman (05:38)
"I would normally only advocate for violence as the absolute final...you get what I'm saying?" — Daniel Moreno Gamma (05:45)
3. Rise of Anti-AI Extremism
- Emergence and Influence
- Many anti-AI activists stem from rationalist communities focused on eliminating cognitive biases.
- Influential figures, such as Eliezer Yudkowsky, promote ‘AI doomerism’—the belief that unchecked AI development could end humanity.
"I've been predicting that the backlash against technology would become increasingly violent for a few years now." — Sean Fleming (09:11)
- Example: Yudkowsky argued for extreme measures in case of dangerous AI, including military actions against rogue data centers and risking nuclear war (11:04–12:20).
- Distinction Within Movements
- Most anti-AI groups advocate regulation, not violence; violent rhetoric is isolated to a fringe minority (14:24–15:42).
"The vast majority of people who travel in these message boards are not contemplating assassinations of corporate executives, and they're banning people who even mention such things." — Sean Fleming (14:38)
4. Grievances Fueling Discontent
- Existential vs. Economic Fears
- Economic anxiety (job loss, community impact) may drive minor sabotage; true risk of violence lies with those radicalized around existential AI fears and catastrophic narratives (13:16).
"It's concern about existential risks on the part of extremely alienated people who travel in very strange echo chambers online... that's where the real threat of political violence comes from." — Sean Fleming (13:16)
- Stochastic Terrorism
- Experts warn about lone actors inspired by stochastic terrorism, targeting tech leaders for ideological reasons (15:18–15:42).
5. Trend of Anti-AI Sentiment and Violence
- Other Incidents
- Lockdowns at OpenAI following threats (November 2025); a shooting at a local politician’s home in Indiana accompanied by anti-data-center messaging (16:00).
- Public Distrust and Demographics
- AI is highly unpopular—only 26% approval in a U.S. NBC poll; especially disliked by Gen Z (17:22–18:30).
"AI ranks below Donald Trump, it ranks below the Republican Party ... It's not a technology that people view as having positive outcomes." — Nick Robins-Early (17:23)
6. Blame, Messaging, and Industry Response
- Industry Messaging
- AI firms initially hyped existential risk to gain attention/funding but now seek to shift toward more positive narratives in the face of public backlash (19:26–20:49).
"Broadcasting your technology as this world changing or world ending thing is valuable from a marketing perspective...But then after that...you have to switch marketing strategy." — Nick Robins-Early (20:49)
- New messaging includes promises of wealth redistribution and reduced work hours via AI.
- Security and PR Measures
- Expect more PR campaigns to humanize leaders, increased security, and legal pushback against activists.
"We're going to see a lot more public relations efforts from AI companies and a lot more attempts to humanize both their leaders and the companies themselves. And then, on the other hand, we're just going to see a lot more security go up." — Nick Robins-Early (25:03)
- Criminalization and Demonization of Activists
- Possible industry moves to conflate mainstream protesters with radicals, potentially targeting them for suppression (26:03–26:44).
7. Potential for Copycat Attacks
- Manifesto as Inspiration
- Concern that high-profile attacks could inspire further violence, referencing cycles in previous extremist movements.
"Whenever you look at radicalism or extremist attacks...there's always the risk of copycats...Certainly with regards to his purported manifesto, you don't really write a manifesto unless you want to inspire others." — Nick Robins-Early (27:59)
- Industry as Future Target
- Ongoing divisiveness, radicalization, and online echo chambers make AI and its leaders likely future targets (28:35–28:58).
Notable Quotes & Memorable Moments
- On Motivation for Political Violence:
"My big worry is that rogue insiders are going to be the ones who turn against the system and do a lot of damage." — Sean Fleming (13:55)
- On the Disconnect Between Messaging and Public Sentiment:
"It's not an appealing sales pitch for a lot of people, and I think that these companies are beginning to realize this." — Nick Robins-Early (19:49)
- On Industry Response to Activism:
"In California, OpenAI subpoenaed a number of groups...They subpoenaed these nonprofits saying, we want your communications...as part of an influence campaign, taking action against us." — Nick Robins-Early (26:44)
- On the Risk of Copycats:
"I don't think this is the last attempt on the life of an AI executive we'll see in this decade." — Sean Fleming (15:42)
Important Segment Timestamps
- 01:00–02:39: Detailed account of the attack on Sam Altman and OpenAI.
- 05:04–06:00: Excerpts from an interview with Daniel Moreno Gamma illustrating his online rhetoric on violence.
- 07:32–08:12: Legal ramifications and potential labeling as domestic terrorism.
- 09:01–10:58: Introduction to AI doomerism and its influence.
- 13:16–14:06: Discussion of economic vs. existential motivators for anti-AI violence.
- 15:18–15:42: Explanation of stochastic terrorism and lone-wolf threats.
- 17:22–18:12: Public opinion data showcasing AI’s unpopularity.
- 19:17–20:49: How industry messaging both seeded and worsened public fears.
- 25:03–25:54: Anticipated industry reactions (security, PR, lobbying).
- 27:59–28:58: Prediction of continued risk and potential for further violent acts.
Summary Conclusion
The attack on Sam Altman represents a new level of backlash against AI—not simply a protest, but an act of violence targeting the technology’s leadership. While the accused attacker suffered from mental health issues, his radicalization via online anti-AI discourse highlights the power and danger of extremist rhetoric. Though most anti-AI groups advocate peaceful regulation, there is a growing risk from fringe individuals or insiders inspired by catastrophic narratives.
Public distrust, especially among the young, is mounting. The AI industry’s earlier apocalyptic messaging has backfired, forcing a strategic rethink both in communications and security. As polarization and radicalization continue, experts caution that further acts of violence against AI executives or infrastructure are possible—if not likely—going forward. The challenge ahead lies in balancing legitimate debate over AI’s future with strategies to limit the risk of stochastic terrorism and to avoid unjustly demonizing peaceful protest and activism.