QAA Podcast, Episode 347: "AGI Is a Conspiracy Theory"
Date: November 7, 2025
Hosts: Julian Feeld (A), Travis View (B), Jake Rockatansky (C)
Guest: Will Douglas Heaven (D) – Senior Editor for AI at MIT Technology Review
Episode Overview
This episode critically explores the concept of Artificial General Intelligence ("AGI") and asks: Is AGI a real technological goal—or the most consequential conspiracy theory of our time? The hosts, joined by guest Will Douglas Heaven, examine what AGI actually means, who is selling the idea, why it's gained such traction among tech elites and governments, the parallels with conspiracy theory belief, and the social, economic, and psychological dynamics driving the AGI hype.
Key Discussion Points
1. The Shift from AI Tools to AGI Hype
- Consumer AI tools vs. AGI promises: The episode opens with a humorous critique of how everyday AI tools are largely playful or convenient—turning friends into cartoons, generating images, managing small tasks—while the rhetoric from tech elites suggests we're on the edge of a species-defining revolution (01:00–03:33).
- Quote: “We're getting sort of, you know, cute little Studio Ghibli generators and erotic chatbots.” — Will Douglas Heaven [03:09]
- AGI as a technological singularity: Big tech leaders and governments increasingly act as if AGI—AI that can match or surpass human capabilities across the board—is inevitable and imminent, though expert definitions and timelines vary wildly.
- Quote: “AGI is definitely coming. And it's definitely going to be a big deal, a mystical event, a turning point in the development of humanity, after which nothing will ever be the same.” — Travis View summarizing tech industry rhetoric [01:36]
2. Defining AGI and Its Origins
- Difference between current AI and AGI: Will explains that current AI excels at narrow tasks, but AGI aspires to ‘general’ abilities—an AI as capable as a person across any task. This is not remotely what today’s chatbots and tools can do (04:30–05:56).
- Quote: "The difference...is the generality. ...What we're aiming for is an AI that you really could just ask to do anything you would ask a reasonably capable person..." — Will [04:31]
- AGI’s conceptual roots: AGI is a surprisingly recent (mid-2000s) term coined as an aspirational label by fringe AI researchers (notably Ben Goertzel and Shane Legg). Early AI pioneers wanted to match human cognition, but “AGI” as a movement was fringe until very recently (09:49–11:09).
3. The AGI Narrative as Conspiracy Theory
- Treating AGI as a belief system: The discussion suggests the AGI narrative functions like a conspiracy theory—with prophecies, hope for salvation or doom, and a powerful, flexible storyline that persists even as timelines slip (12:15–14:02).
- Quote: “Every age has its believers. People with an unshakable faith that something huge is about to happen, a before and after that they are privileged (or doomed) to live through.” — Will, as read by Julian [12:15]
- Quote: "It's all part of the same belief system, the flip between boom and doom." — Will [13:03]
- Pop culture reinforcement: Narratives of AI apocalypse or utopia are reinforced by decades of sci-fi, from HAL 9000 to Skynet; the benevolent AI is rare because it makes for dull drama, so the dramatic swings between doom and utopia keep public suspicion and awe alive (14:02–15:43).
- Quote: “It doesn't make good drama, does it? Like the idea of a beneficent, all powerful AI that just basically solves all our problems is—it's pretty boring.” — Will [15:32]
4. The Economic and Social Incentives for AGI Hype
- AGI as a business model: Big claims drive investment and company valuations. Companies build hype partly because claiming to pursue AGI is more lucrative than modest, incremental improvements (17:49–20:16).
- Quote: "What we're seeing now is utterly unprecedented... the valuations we're seeing—and it's nearly all riding on this promise that is really quite vague and hand wavy." — Will [17:51]
- Rise of a new grift: The "AI will change everything" messaging replaces previous bubbles like crypto/NFTs; often the same ‘rise and grind bros’ are pushing both (06:51).
5. The Moving Goalposts and Flexibility of AGI Prophecy
- Perpetual imminence: Like classic conspiracy prophecy, AGI is “always coming soon.” Each generation of AI advances gets reframed as “evidence” we’re even closer, while failures are shrugged off or timelines move (21:16–23:04).
- Quote: "If you're building a conspiracy theory, you need a few things... flexible enough to sustain belief even when things don't work out as planned... the promise of a better future... hope for salvation from the horrors of this world." — Will (as read by Travis) [22:34]
- Benchmarking issues: Standard tests for AI "advancement" (benchmarks) often fail to capture true abilities; they're more about passing narrow tests than demonstrating general intelligence. This allows hype to run ahead of technical reality (27:14–28:40).
6. The Psychological Triggers Behind Belief in AGI
- Anthropomorphizing chatbots: Because the latest AIs are language-based, users instinctively project humanlike intelligence onto them, fueling both hope and fear (23:04–27:14).
- Desire to believe in something imminent and transformative: The AGI narrative scratches a deeply human itch—for both salvation from a disappointing status quo and meaning in the face of uncertainty.
- Quote: "We want to believe, like Fox Mulder says." — Will [21:12]
7. The Limits and Dangers of the AGI Hype
- Potential for economic bubbles: Massive VC funding is pouring into AI (40% of deals under $100 million, per the State Science & Technology Institute), much of it riding on speculative AGI promises [41:02].
- Quote: "Hugging Face's Margarete Mitchell has ascribed artificial general intelligence as just 'vibes and snake oil.'" — Travis [41:02]
- Tech for its own sake: Most new AI tools weren’t created to solve a concrete problem, but because they were “possible and cool”; practical uses are often retrofitted after the fact (43:14–46:59).
- Environmental and societal costs: Scaling up for the promise of AGI involves huge financial, environmental, and social risks—often justified by the claim that future “world-changing” benefits will outweigh present costs (50:59).
Notable Quotes & Memorable Moments
AGI as an Impossible Promise
- "There's not going to be one day when it's like, 'We've made AGI, here it is.' ...The sense that this near-future technology is inevitable, I think that really needs to be pushed back on. Like, says who?" — Will [49:10]
The Psychological Need
- "It's exciting at least, you know, personally, it's exciting to think about living in an apocalypse where all of a sudden your credit cards don't matter anymore, your job doesn't matter anymore... It's fun to think about this system collapsing because all of our current problems are now solved. Oddly enough, that's what they're saying the AI is going to do." — Julian [16:16]
On Benchmarking and Anthropomorphism
- "If I sat a math exam and did really well ... it's a proxy for my broader intelligence. But with these models, if it passes that particular math test, all it tells you is that it passed that particular math test. You shouldn't then project more onto it." — Will [26:11]
Cynicism Towards Tech Elites
- "There's nothing less good enough for these guys who have achieved every kind of material [success]... they want to conquer the spiritual world through technology." — Julian [37:51]
Timestamps for Key Segments
- [00:47–03:09] — Consumer AI vs. AGI rhetoric, playful tech vs. grandiose promises
- [04:30–05:56] — Will explains practical AI vs. aspirational AGI
- [09:49–11:09] — Origins of the AGI term, Ben Goertzel and Shane Legg’s influence
- [12:15–14:02] — Parallels between AGI belief and conspiracy theory mechanics
- [17:49–20:16] — Tech startup/VC incentives: why the myth of AGI persists
- [21:16–23:04] — Perpetual AGI arrival: moving goalposts and narrative flexibility
- [23:04–27:14] — How natural language interfaces fuel anthropomorphic hype
- [27:14–28:40] — The benchmark fallacy in AI assessment
- [31:16–33:27] — Geoffrey Hinton’s doomer prophecy: “Superintelligence may simply replace us”
- [36:46–37:51] — AI’s real social impacts: more likely job loss than Skynet
- [46:59–49:10] — Did anyone want this? Tech as a solution looking for a problem, and how AGI is sold
- [50:56–52:25] — Environmental/resource costs; the allure of black-box “mystery” in AI
Tone and Language
- Wry, skeptical, conversational, but informed: The hosts and guest keep a mix of humor and critical reporting, undercutting both utopian and dystopian narratives.
- E.g., joking about “elder millennials” using aging Snapchat filters and how AGI is “like my generalized anxiety—bigger, not specific” [11:03].
- Candid about cultural and psychological drivers: The panel freely admits their own susceptibility to hype and explains why AGI promises resonate in anxious times.
Conclusion: Why AGI Is (Maybe) the Biggest Conspiracy Theory
The episode concludes that AGI is not a plot hatched in secret but functions like a conspiracy theory: it is sustained by hope and fear, a flexible narrative, and an appeal to mystery. Its believers are rich, powerful, and often earnest, but their speculative promises fuel bubbles, distract from real problems, and may ultimately deliver little more than improved tools dressed up in grandiose sales pitches.
Memorable outro:
"May the superintelligence bless you and keep you as pets." — Julian [53:15]
Where to Find More
- Guest writing: Will Douglas Heaven’s work at technologyreview.com. Stories on AI, biotech, and more.
- QAA Podcast: More episodes and premium content at qaapodcast.com.
For anyone curious “What’s the real deal with AGI?”, this episode delivers skepticism, humor, and history—helping listeners see the enormous gap between present AI, future dreams, and the human need to believe in technological destiny.
