Podcast Summary: Tangle – "Suspension of the Rules"
Guest: Andy Mills (Prolific Podcaster, co-creator of The Last Invention)
Host: Isaac Saul
Co-hosts: Ari Weitzman, Camille Foster
Date: December 12, 2025
Episode Overview
This episode brings on Andy Mills, renowned podcast creator and co-creator of “The Last Invention,” a thought-provoking show about the development and consequences of artificial intelligence (AI) and the potential arrival of artificial general intelligence (AGI). Isaac Saul and his co-hosts explore Andy’s journey reporting on AI, examine the contrasting philosophies shaping the AI discourse, and wrestle with the technology’s profound societal and existential implications. The second half of the episode pivots to current affairs, covering a controversial Trump administration proposal to screen tourists’ social media and breaking news of a U.S. seizure of a Venezuelan oil tanker.
Main Themes & Purpose
- The State and Trajectory of AI/AGI: An in-depth conversation about how AI is evolving, the impact of AI on society, the plausibility and timelines for AGI, and the ethical and philosophical quandaries these technologies provoke.
- Societal Risk and Responsibility: Who gets to decide how AI progresses? Where should the brakes—or the gas—be applied?
- Contemporary Free Speech Debates: Examination of proposals to screen foreign visitors' social media, and what these moves mean for American values and global reputation.
- Existential Questions: Are we “building God”? Is the development of AGI an evolutionary inevitability, and what does that say about humanity’s fundamental purpose?
Key Discussion Points & Insights
1. Andy Mills’ Emotional Journey Reporting on AI
(05:49 – 08:36)
- Mills began as “skeptical and fearful” about AI. At first, even discussing the topic seriously felt fringe or “alien abduction-level.”
- He observed a rapid mainstreaming of the AI conversation:
“Even just since last spring, this has become a much more mainstream conversation. … People are paying attention and they’re asking those questions.” (Andy Mills, 08:08)
- Transitioned from “fear and skepticism” to feeling a sense of public engagement with AI issues as a positive outcome, regardless of final predictions.
2. Defining the Three Camps: Accelerationists, Doomers, and Scouts
(08:36 – 11:39)
- Accelerationists: Want to push AI forward as quickly as possible, driven by motives like beating global competitors (e.g., China), profit, or belief in progress.
- Doomers: Fear existential risks; advocate strong brakes and even pause on development.
- Scouts: Sympathetic to both sides, want to advance the technology carefully with oversight and serious questioning.
Andy says he now sees validity in all three camps:
“I think all three camps have a good case… all have evidence on their side, and now is the time to… slowly develop your own view.” (Andy Mills, 10:18)
- He notes that “AGI within decades” now seems plausible—a shift from his earlier skepticism.
3. Steelmanning the Accelerationist Perspective
(13:13 – 16:37)
- Andy provides strong arguments on behalf of accelerationists:
- The inevitability argument: “If it’s made irresponsibly, it could have catastrophic effects forever. … The only way to stop that outcome is to make sure that they’re the ones who build it.” (Andy Mills, 13:28)
- Historical progress: “Every transformative new technology meets fear … but delaying progress can lead to missed opportunities and suffering we might have avoided.”
- “Why would you not want more intelligence working” in your organization or society? If intelligence drives human progress, more intelligence could mean more progress.
4. Skepticism About Timelines and Definitions
(16:37 – 21:44)
- Ari and Isaac question the likelihood or timelines for superintelligence:
“It’s going to be very tough to build something to replicate a word [intelligence] that we can’t define already.” (Ari Weitzman, 17:16)
- Andy agrees that healthy skepticism is important, especially regarding ambitious claims from company leaders with business incentives.
5. AI, AGI, and ASI: Clarifying the Differences
(23:35 – 29:03)
- AI: Task-specific, already embedded in daily life, e.g., social feeds, chatbots, LLMs.
- AGI: An artificial “being” with general human-level intelligence, able to learn anything and perform across domains—akin to creating a new “intelligent species.”
- ASI (Artificial Superintelligence): The next evolutionary jump, self-improving beyond human understanding, which even many “believers” see as speculative but possible.
“AI, AGI, ASI: just think algorithm, species, God.” (Andy Mills, 24:34)
6. The Human Role and Contested Futures
(29:03 – 35:12)
- Camille questions if AGI’s advent is as likely as insiders say, highlighting ongoing human involvement and competition among models.
- Andy points out that even AI skeptics (e.g., Gary Marcus) believe AGI is possible, “maybe 20 years” away, and that skepticism exists about whether LLMs are sufficient for AGI.
7. Existential Angst and “Building God”
(42:39 – 49:06)
- Isaac voices an existential worry: “Are we just building God and this is our purpose and this is like what it’s all about?” (Isaac Saul, 43:05)
- Andy describes encountering “worthy successor theory”—that humans may be destined to create an improved, artificial being to carry on evolution—alongside “Great Filter” speculation about why intelligent life might perish upon reaching this technological threshold.
- The conversation also links to simulation theory and philosophical questions about AI rights (“Should we enslave AGI? Would it want to communicate with other superintelligences?”).
8. Modern Risks: Not Just Doomsday, But Dystopia
(33:21 – 39:41)
- The “doomer” perspective is not just about extinction, but also societal “dystopia,” e.g., pervasive distraction, loss of agency, or a sci-fi scenario where humans are subjugated by mediocre or poorly implemented machine intelligence.
- Isaac highlights the “5% risk” doomer argument:
“If there is a 5% chance that advancing this technology ends with humanity being destroyed, what the fuck are we doing?” (Isaac Saul, 38:55)
- Andy: “You wouldn’t feel satisfied with a bridge builder who says… it’s got a 5 to 10% chance it kills the entire population of your city... There are better bridge builders.” (39:41)
9. Call to Public Engagement & Debate
(40:51 – 42:39)
- Andy: All citizens, not just those in tech, need to join the debate: “It shouldn’t just be made by people in technology… let’s do that thing that free societies do.” (41:50)
- Taking the possibility seriously is itself “advocacy”—the core message of “The Last Invention.”
Notable Quotes & Moments
- On the state of the debate (existential and practical): "I'm a little bit uncomfortable just by the very nature of it. But I do think that there's enough evidence out there and that enough of these insiders are sincerely concerned about this that we should take it really seriously and we should form our own views and we shouldn't allow a knee jerk skepticism that I sometimes hear from people." —Andy Mills (21:44)
- On apocalyptic risk: "If there is a 5% chance that advancing this technology ends with humanity being destroyed, what the fuck are we doing?" —Isaac Saul (38:55)
- On skepticism of AI timelines: "I completely doubt this horizon of General Intelligence being 5, 10, 20 years. I definitely don't think that's going to happen, in my opinion." —Ari Weitzman (57:32)
- On “worthy successor theory”: “Some people… are becoming convinced that we are a stop on the way to something better, that… our purpose has now been revealed to us. We are supposed to create an artificial new being that will… be better than us.” —Andy Mills (45:52)
Important Timestamps & Segments
- Intro to the Guest and Topic: 02:13–04:06
- Andy’s reporting journey, public response to AI: 05:49–08:36
- AI camps—Accelerationists, Doomers, Scouts: 08:36–11:39
- Arguments for acceleration: 13:13–16:37
- Skepticism about AGI timelines: 16:37–21:44
- AI, AGI, ASI explained: 24:31–29:03
- Existential and philosophical discussion: 42:39–49:06
- Companion piece, public engagement: 40:51–42:39
Second Half: Contemporary Policy & Political News
Trump Administration Proposal: Requiring Tourists’ Social Media Histories
(59:16 – 86:21)
- Isaac discusses recent reporting that all foreign tourists to the U.S. might be required to disclose five years of social media history.
- Comparison with European speech laws—both co-hosts and Isaac express concern about the U.S. adopting measures they see as antithetical to free speech.
- Camille notes this type of surveillance “has the whiff of something actively considered, perhaps not feasible in practice.” (64:43)
- All agree such measures could damage U.S. tourism, economy, and reputation.
- Broad consensus: the proposal illustrates the global fragility of speech freedoms and the tendency of both left and right to surrender those ideals once in power.
Breaking News: U.S. Seizes Oil Tanker Off Venezuela
(93:02 – 99:36)
- The Trump administration announces the seizure, a possible escalation toward conflict with Venezuela.
- Discussion of the rarity and implications of such seizures; hosts express concern about U.S. motives and the risk of further conflict.
Memorable Moments
- Isaac’s “existential crisis” reflection on AGI as humanity’s inevitable purpose.
- “Worthy successor theory” and “Great Filter”—sci-fi turned reality in the minds of serious technologists.
- Ari’s allusions to canonical sci-fi (Asimov’s “The Last Question,” Octavia Butler’s “Earthseed”) and their resonance with the AGI debate.
- Isaac and Ari’s tales of personal injury—falling down stairs and dropping a shelf—as a comic relief in the closing grievances segment.
Episode Tone
Insightful, candid, and alternately sobering and humorous. The episode blends serious intellectual inquiry and public policy concern with the hosts’ trademark banter and self-deprecation, keeping the conversation lively, accessible, and occasionally surreal.
Conclusion
This episode of Tangle offers a panoramic, non-partisan examination of the future of AI, the responsibilities and fears animating the debate, and how these technological questions intersect with the day’s most pressing political issues—including immigration and civil liberties. Andy Mills’ reporting and the hosts’ probing exchanges challenge listeners to participate actively in the ongoing conversation—because, as Mills suggests, few questions are likely to shape our collective future more profoundly.
Highly Recommended Segments:
- Andy Mills’ breakdown of AI, AGI, and ASI (24:31–29:03)
- Existential rabbit hole: “Are we building God?” (42:39–49:06)
- Policy discussion: Trump’s tourist social media proposal (59:16–86:21)
Listen for: A rare synthesis of technical depth, societal urgency, and existential wonder—seasoned with real wit and a spirit of honest disagreement.
