The Last Invention | Episode: The AI Skeptics (Feb 20, 2026)
Episode Overview
This episode of The Last Invention dives into skeptical perspectives on the hotly debated AI boom. Host Andy interviews leading critics and nuanced thinkers, exploring counter-narratives to AI hype. The discussion is structured in three acts, each representing a different brand of skepticism: "AI is grift," "We're on the wrong path," and "AI as normal technology." Each guest brings a distinct lens, challenging dominant narratives about artificial general intelligence (AGI) and the transformative potential or threat posed by AI.
Key Discussion Points & Insights
I. Why Even Skeptics Matter
- The mainstream AI conversation is dominated by two extremes: "doomers" (who fear existential risk) and "accelerationists" (who see enormous promise).
- Genuine AI skeptics, especially those with deep expertise, are rare but their views help clarify assumptions, challenge hype, and keep the debate grounded.
- Three main skeptic "camps" are highlighted:
- AI is Grift – AI as industry hype and profit-driven “snake oil.”
- Wrong Path – AGI is possible, but current approaches (LLMs) won’t get us there.
- AI as Normal Technology – AI is powerful but will be absorbed slowly, like previous general-purpose innovations.
II. Act 1: AI Is Grift (Ed Zitron)
Ed Zitron’s Perspective
- Ed runs a tech PR firm and a critical newsletter (“Where’s Your Ed At?”); he positions himself as a “professional critic of big tech.”
- Core argument: “The main thing holding back AI is that it doesn’t fucking work.” (06:57)
- He sees current AI products as oversold and fundamentally underdelivering, emphasizing that real-world impact is far from the utopian or apocalyptic promises.
Discussion Highlights
- AI has become a vague, catch-all marketing term, detached from original meaning (08:12).
- Hype is driven by investors, media, and tech elites who profit from inflated expectations, not real progress.
- Skeptical of celebrated figures (e.g., Geoffrey Hinton) – questions whether public warnings are marketing or sincere, and critiques media’s focus on existential risk over immediate & tangible harms.
Quote Highlights
- “They need you to believe these are the early innings because if you start looking at this and thinking about this even a little and realize that the depth of hubris, everyone involved kind of looks stupid.” — Ed Zitron (27:12)
- “The people that are benefiting from this don’t give a rat fuck about your or my ability… They give a rat fuck about making… their investments worth more.” — Ed Zitron (09:18)
Timestamps/Notable Moments
- 06:57 – Ed’s blunt assessment that AI “doesn’t fucking work.”
- 11:18 – Pushback on Geoffrey Hinton’s transformation from skeptic to warning advocate.
- 14:01 – Doomerism as marketing: “There’s another word I use for doomerism: marketing.”
- 24:49 – On technological underperformance relative to the hype.
Tone
- Highly irreverent, unsparing, and combative. Ed is unapologetically cynical about motives behind AI hype, likening the whole boom to historical bubbles and the “rot economy.”
III. Act 2: AI Skepticism – The Wrong Path (Gary Marcus)
Gary Marcus’s Perspective
- Longtime AI researcher, known for predictions about the limits of neural networks dating back to the early 1990s (32:37).
- Marcus is NOT skeptical about AGI as a theoretical possibility, but deeply skeptical about the path pursued by the current AI mainstream: scaling up Large Language Models (LLMs).
Discussion Highlights
- LLMs are inherently flawed: “It’s inherently prone to all kinds of reasoning errors. It’s inherently hard to control. … It’s inherently prone to hallucinations…” (34:28)
- AI’s recent leaps (GPT-3, GPT-4) were real, but progress has since stagnated; GPT-5 “wasn’t as impressive,” matching his earlier warnings (39:12).
- Scaling laws don’t guarantee continuous, exponential progress; his “trillion-pound baby” analogy illustrates the danger of naive extrapolation (36:58).
- The real path to AGI will need new “symbolic” approaches and likely several additional breakthroughs.
- Marcus sees himself as a “scout,” not a “doomer” or “grifter”; he calls for investing in serious readiness, focusing on both immediate and long-term risks.
Quote Highlights
- “It’s a better ladder, but it’s not necessarily going to get us to the moon.” — Gary Marcus on LLM progress (35:51)
- “I think that we will get to AGI... but I don’t think we can do it with current techniques. We need new ideas.” — Gary Marcus (43:32)
- “LLM chatbots have been a dress rehearsal and it’s a really depressing train wreck of a dress rehearsal…” (45:40)
Timestamps/Notable Moments
- 33:07 – Marcus recounts his early fascination and work in AI.
- 34:28 – Core critique of LLMs’ limitations.
- 36:58 – The “trillion-pound baby” analogy and scaling myth.
- 43:23 – “It’s back to the age of research again, just with big computers.”
- 45:40 – On the disappointing “dress rehearsal” that LLMs have provided.
Tone
- Earnest, technical, cautiously optimistic pending the right breakthroughs; more concerned with honest assessment than hype.
IV. Act 3: AI as Normal Technology (Arvind Narayanan)
Arvind Narayanan’s Perspective
- Computer science professor at Princeton; co-author of “AI as Normal Technology.”
- Argues that AI is a significant general-purpose technology but will integrate into society gradually, not as instant revolution.
Discussion Highlights
- On AI’s significance relative to other inventions: “Somewhere in between iPhone and fire.” (49:01)
- Major technical revolutions (electricity, Internet, Industrial Rev.) took decades to diffuse.
- Human inertia, messy bureaucracies, and cultural preferences for human interaction are powerful brakes on change.
- Even true AGI would not mean an overnight societal transformation; many tasks could already be automated, yet the jobs persist for deeper human and social reasons (58:21).
- Strong emphasis on “agency”—societies have a choice in how fast and deeply AI transforms occupations and relationships.
- Policy and social adaptation are where real risks and opportunities reside, not in sci-fi singularity scenarios.
Quote Highlights
- “Change will happen at the speed of human behavioral change, at the speed at which organizations can adapt…” — Arvind Narayanan (52:59)
- “At best, this ChatGPT mayor will reflect the views of OpenAI developers … It won’t reflect the will of the people. It’s just missing the point of democracy.” (60:04)
- “Whatever your profession is probably needs to come up with a normative framework for what are ethically acceptable and unacceptable uses of AI…” (75:18)
Timestamps/Notable Moments
- 49:01 – AI as normal technology, a general-purpose tech but with no quick “hinge moment.”
- 52:11 – Historical pace of electricity, telephone, Internet, and what to expect for AI.
- 58:21 – Even super-powerful AGI would face resistance: “number of tasks that could be automated is shockingly high, but we choose not to.”
- 60:04 – The “AI mayor” example and the irreplaceability of human judgment in collective life.
- 67:01 – Real social harms (e.g., chatbot reinforcement of delusional thoughts, social media) come from collective failures, not AI agency.
- 73:43 – Amtrak analogy: transformative technologies still get bottlenecked by non-technical constraints.
- 75:18 – Final message: “Every single one of us has a role to play…” on defining ethical AI use in daily life.
Tone
- Calm, empirical, pragmatic—champions agency and collective responsibility rather than techno-determinism.
Memorable Quotes (by Section & Timestamp)
- Ed Zitron (Grift)
- “The main thing holding back AI is that it doesn’t fucking work.” (06:57)
- “There’s another word I use for doomerism: marketing.” (14:01)
- Gary Marcus (Wrong Path)
- “It’s a better ladder, but it’s not necessarily going to get us to the moon.” (35:51)
- “I think that we will get to AGI... but I don’t think we can do it with current techniques.” (43:32)
- “LLM chatbots have been a dress rehearsal and it’s a really depressing train wreck of a dress rehearsal…” (45:40)
- Arvind Narayanan (Normal Tech)
- “I would say somewhere in between iPhone and fire.” (49:01)
- “Change will happen at the speed of human behavioral change...” (52:59)
- “It won’t reflect the will of the people. It’s just missing the point of democracy.” (60:04)
- “Whatever your profession is probably needs to come up with a normative framework for what are ethically acceptable and unacceptable uses of AI…” (75:18)
Useful Timestamps Overview
- 06:57 – Ed Zitron’s core critique of AI hype
- 11:18–14:21 – On “doomerism” and hype as marketing
- 24:49 – Technological underperformance vs. hype
- 34:28 – Gary Marcus on LLM limitations
- 36:58 – Scaling myth and the “trillion-pound baby”
- 39:12 – GPT-5 as a failure to deliver exponential improvement
- 43:32 – Current path won't reach AGI, need breakthroughs
- 49:01 – Arvind Narayanan: AI as normal technology
- 52:11 – Historical analogies for adoption (electricity, phone)
- 58:21 – Why mass automation won't happen overnight
- 60:04 – Social and political meaning of automation attempts
- 67:01 – Empirical risks vs. sci-fi risks (mental health, suicide, chatbot harms)
- 75:18 – Everyone's role in AI ethics and deployment decisions
Final Takeaways
- AI Skepticism is Diverse: Not all critics say “it will fail”; the most thoughtful want better, safer, more honest AI rather than hype or panic.
- Immediate Harms vs. Existential Risks: Real harms often fly under the media radar but can be deadly serious (bias, mental illness, economic precarity), while doomsday talk distracts from actionable concern.
- Progress ≠ Sudden Overthrow: Even with transformative potential, societal and psychological speeds are much slower than technical ones.
- Urgency Without Panic: There’s a strong message that “not panic, but policy, preparation, and participation” will define AI’s real impact.
This episode brings skeptical but informed voices to the front, pushing listeners to ask hard questions not just about what can be built with AI, but how—and how soon—it will actually matter for real people.
