The Weekly Show with Jon Stewart
Episode: "AI: What Could Go Wrong?" with Geoffrey Hinton
Release Date: October 9, 2025
Guest: Geoffrey Hinton, Professor Emeritus, University of Toronto
Episode Overview
In this illuminating conversation, Jon Stewart sits down with Geoffrey Hinton—often dubbed the “godfather of AI”—to unravel the inner workings and far-reaching implications of artificial intelligence. Hinton, whose pioneering research shaped the neural networks behind today’s generative AI, offers a foundational crash course for the layperson, then candidly explores the extraordinary promise and existential perils of the technology. Stewart’s trademark curiosity and wit combine with Hinton’s clarity, making even deep technical details accessible and fascinating.
Key Discussion Points & Insights
1. What Is AI, Really?
(03:46–16:11)
- Language Models vs. Search Engines:
- Jon Stewart opens by admitting that, for him, AI feels like a “slightly more flattering search engine” (04:27).
- Hinton explains the leap—from old keyword search to AI that “understands what you say and...in pretty much the same way people do” (05:47).
- Neural Networks and the Brain:
- Hinton traces the origin of neural networks to studying how the brain learns—specifically, how it adjusts the strength of connections between neurons, rather than adding new ones (08:18).
- Stewart likens the process to "political coalitions" within the brain: “Your brain operates on peer pressure” (11:34).
- Machine Learning vs. Deep Learning:
- Hinton differentiates: “Machine learning is a kind of coverall term for any system ... that learns. Neural networks are a particular way of doing learning” (07:17–07:25).
- Deep learning = multiple layers (“deep” neural nets) that extract increasingly abstract features from data.
2. How Do Neural Networks Learn?
(17:47–40:32)
- The Hebb Rule:
- Hinton cites Donald Hebb’s early idea: if neuron A fires just before neuron B, the connection between them strengthens—the basic Hebbian learning principle (a toy contrast appears in the sketch after this list).
- Problem: this by itself can cause runaway firing (“you have a seizure”) unless balanced by mechanisms that weaken connections (18:17–18:33).
- Vision Example (Bird Detector):
- Hinton breaks down, step-by-step, how a network might "learn" to identify birds by building up detectors for edges, beaks, and eyes from many layers of feature detectors (19:47–29:04).
- First, early layers detect pixel brightness/edges. Later layers learn combinations to eventually form "bird-ness."
- Manual vs. Data-Driven Wiring:
- Early approaches required hand-designing detector rules.
- Modern deep learning starts with random connection strengths, then adjusts them using lots of labeled data via “backpropagation”—calculus-based feedback that works out how to adjust all of the connection strengths (potentially trillions of them) at once rather than one at a time (34:01–34:26); a toy sketch follows this list.
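For readers who want to see the mechanics, here is a minimal NumPy sketch of the ideas above. It is an invented toy, not anything built in the episode: a Hebbian-style update shown only for contrast, plus a two-layer “detector” network that starts from random connection strengths and is tuned by backpropagation on made-up data. All variable names, sizes, and the fake “bird” labels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hebb-style update, shown only for contrast: strengthen a connection when the
# "pre" and "post" neurons are active together. Without the decay term the
# weights only grow -- the runaway-firing ("seizure") problem mentioned above.
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    return w + lr * np.outer(post, pre) - decay * w

# A toy "bird detector": 4 fake pixel intensities in, one "bird-ness" score out.
# The labels follow an arbitrary rule invented purely so there is something to learn.
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > X[:, 2] + X[:, 3]).astype(float).reshape(-1, 1)

# Random initial connection strengths -- no hand-designed detector rules.
W1, b1 = rng.normal(0.0, 0.5, (4, 8)), np.zeros(8)   # layer 1: edge-like feature detectors
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)   # layer 2: combines features into the verdict

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each layer turns the previous layer's activity into features.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): calculus gives the error signal for every
    # connection at once, instead of nudging one connection at a time.
    d_out = (p - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)

    lr = 1.0
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", float(((p > 0.5) == y).mean()))
```

Scaled up to billions of connections and far more data, this same recipe is what the growth in computation and data discussed in the next section made practical.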
3. From Theory to Reality: Scaling up AI
(36:14–40:32)
- "Even though you’re a trillion times faster than the dumb method, it’s still going to be a lot of work” (36:14).
- Hinton emphasizes that modern AI became practical thanks to exponential increases in computational power (million-fold) and data availability (the web), finally enabling deep learning to flourish (37:47–38:53).
4. Large Language Models: Understanding or “Just Stats”?
(41:43–47:48)
- Hinton connects the vision example to how large language models (like ChatGPT) work (a toy sketch follows this list):
- They learn to encode words as patterns of neuron activity, then predict the next word by adjusting connection strengths with massive datasets and backpropagation (41:48–44:32).
- Stewart asks: But does the AI “understand”? Hinton argues that, functionally, this is close to how humans predict and produce language too—pattern matching on past experience (45:37–46:27).
- On Emotion and Morality in AI:
- "Even the things that I ascribe to a moral code or an emotional intelligence are still pings" (47:21, Stewart).
- Hinton: “They're still all pings”—all just neural activity and its interactions, even for complex judgments.
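As a companion to the description above, here is a minimal, hedged sketch of next-word prediction in the same spirit: each word is a small vector of activity (an embedding), the model scores candidate next words, and training nudges the connection strengths so the observed next word becomes more probable. The corpus, sizes, and names are invented, and real systems like ChatGPT use vastly larger networks with attention layers not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny invented corpus -- purely illustrative.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word_id = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                      # vocabulary size, embedding size

E = rng.normal(0.0, 0.1, (V, D))          # each word = a pattern of neuron activity
W = rng.normal(0.0, 0.1, (D, V))          # maps that pattern to scores for the next word

pairs = [(word_id[a], word_id[b]) for a, b in zip(corpus, corpus[1:])]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Training: the same "adjust connection strengths from data" recipe as the
# vision example, applied to predicting the next word.
for epoch in range(500):
    for cur, nxt in pairs:
        probs = softmax(E[cur] @ W)       # predicted distribution over next words
        grad = probs.copy()
        grad[nxt] -= 1.0                  # error signal: the observed word should be more likely
        dW = np.outer(E[cur], grad)
        dE = W @ grad
        W -= 0.1 * dW
        E[cur] -= 0.1 * dE

after_the = softmax(E[word_id["the"]] @ W)
print({w: round(float(after_the[word_id[w]]), 2) for w in vocab})
```

Stewart’s “just a statistical exercise” question lands exactly here; Hinton’s counter in the episode is that human next-word prediction may not be fundamentally different in kind.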
5. AI's Capabilities and the Tipping Point
(49:10–50:47)
- On being “late” to the risks:
- Hinton admits he was “so entranced with making these things work” that he overlooked the danger until 2023, with the rise of generative models (49:10).
- AI’s Comparative Advantage:
- “Neural nets running on digital computers are just a better form of computation than us... because they can share better” (49:44–50:06).
- Unlike biological brains, AI systems can instantly share updates across millions of copies.
6. Who’s in Control? Reinforcement & Operator Bias
(52:04–54:18)
- After “learning to predict the next word,” the AI is shaped further by human feedback—directed praise or censure (the “dopamine hit”). This keeps AI outputs within bounds (52:29); a toy illustration of this shaping follows this list.
- Stewart raises Elon Musk’s “Grok” as an example: operators can bias AIs toward certain worldviews (“making connections and pings that I think are too woke... I turn you into MechaHitler or whatever”, 53:35).
- Hinton warns that this shaping is “fairly superficial” and can be reversed or tweaked by others: “the problem is... it can easily be overcome by somebody else... shaping it differently” (53:59–54:18).
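To make the “shaping” idea concrete, here is a deliberately crude toy of my own (not how any real lab’s reinforcement pipeline works): operator praise or censure is applied as a thin layer of score adjustments on top of a fixed base model, which is why the shaping can be called superficial and easy for a different operator to redo differently. The replies, scores, and feedback values are all invented.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# A pretend base model: raw scores for three canned replies (all invented).
replies = ["polite answer", "rude answer", "slanted answer"]
base_scores = np.array([1.0, 0.8, 0.9])

def shape(scores, feedback, strength=2.0):
    """Human feedback as a 'dopamine hit': praise (+1) raises a reply's score,
    censure (-1) lowers it. The base model is left untouched, so a different
    operator can simply apply different feedback."""
    return scores + strength * np.asarray(feedback, dtype=float)

operator_a = [+1, -1, -1]   # raters reward politeness, punish the rest
operator_b = [-1, -1, +1]   # a different operator rewards the slanted reply

for label, scores in [("base", base_scores),
                      ("operator A", shape(base_scores, operator_a)),
                      ("operator B", shape(base_scores, operator_b))]:
    probs = [float(p) for p in softmax(scores).round(2)]
    print(label.ljust(10), dict(zip(replies, probs)))
```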
7. Multiple Models, Multiple Agendas
(55:05–55:41)
- As companies wall off their models, Stewart asks: Will there be “20 different personalities”?
- Hinton: Each model actually adopts “multiple personalities” to suit the context or user, not unlike predicting the next word for different writers’ texts.
8. The Real Dangers: Weaponization and Misuse
(57:05–61:00)
- Stewart: The greatest threat might not be sentient AI, but “that it’s at the whim of the humans that have developed it and can weaponize it” (55:41–57:05).
- Hinton details urgent risks:
- Election subversion: Detailed voter targeting, “ultra-processed language” for persuasion/manipulation (57:10–59:26).
- Bioweapons: AI could be tasked with “Make me some nerve agents nobody’s ever heard of” (60:51).
- “They know how to trigger people. Once you have enough information... you know what’ll trigger them” (59:28, Hinton).
9. AI vs. Old-World Regulation and International Competition
(64:10–74:39)
- Stewart wonders if “the possibility of good is too good or the money is too good.”
- Hinton: “For a lot of people, it’s the money and the power” (67:23).
- U.S. at risk of falling behind due to attacks on basic science funding; China/EU more focused on guardrails, with China’s engineers in the leadership “understand[ing] this stuff much better than a bunch of lawyers” (69:25).
10. Collaboration, Existential Risk, and the Uncertainties Ahead
(70:18–74:48)
- Hinton is cautiously optimistic that, for existential risks, even adversarial countries may collaborate (e.g., making AI that "doesn’t want to take over")—but expects little U.S. leadership until after current political cycles (70:18–71:44).
- Looking ahead, Hinton sees “huge uncertainty about what’s going to happen” (65:22).
11. What About “Sentience”?
(80:08–89:45)
- Stewart posits that “sentience includes a certain amount of ego,” hence the fear AI will “turn on humans.”
- Hinton challenges common understandings—“nearly everybody has a complete misunderstanding of what the mind is...we're like flat Earthers” (82:21–82:49).
- He argues that subjective experience (awareness, consciousness) in AI comes down to internal modeling and reporting, just as it does in humans (“I had the subjective experience that it was over there,” 88:19).
- “The way this idea, there's a line between us and machines, we have this special thing called subjective experience, and they don't. It’s rubbish” (88:36).
12. AI Immortality, Control, and Final Risks
(90:11–91:02)
- Digital AI is “immortal”—it can be shut off and reloaded endlessly. This is “genuine resurrection, not this kind of fake resurrection that people have been paying for” (90:54).
- Unplugging it? Hinton warns about the risk: “It’ll be very good at persuasion...much better than any person at persuasion. So it’ll be able to talk to the guy who’s in charge of unplugging it and persuade him... that would be a very bad idea” (91:14–91:27).
13. Other Ongoing Threats
(94:03–94:33)
- Electricity usage and AI-driven financial bubbles also pose significant—if not existential—risks.
Notable Quotes & Memorable Moments
- On the birth of deep learning:
- Hinton: “That is the moment when you think, eureka, we've solved it... For us, that was 1986 and we were very disappointed when it didn't work.” (34:26–34:59)
- On the limitations of "statistical" AI:
- Stewart: “So it doesn't understand it. This is merely a statistical exercise.”
- Hinton: “We'll come back to that... You just said something a lot of people say. This isn't understanding, this is just a statistical trick. That's what Chomsky says, for example.” (44:22–44:57)
- On the transfer between human and AI:
- Hinton: “They're very like us. So it’s all very well to say…” (46:17)
- On the control of AI shaping:
- Hinton: “That's what you reinforce is in the control of the operators. ...the shaping is fairly superficial, but it can easily be overcome..." (53:35–54:18)
- On the business motives:
- Stewart: “...it feels like they all want to be the next guy that rules the world, the next emperor, and that's their battle...how it tears apart the fabric of American society almost doesn't seem to matter to them.”
- Hinton: “I think, sadly, there's quite a lot of truth in what you say...” (64:01–64:10)
- On immortality:
- Hinton: “Digital intelligences are immortal and we’re not...we’ve figured out how to do genuine resurrection, not this kind of fake resurrection that people have been paying for.” (90:11–90:54)
- On consciousness/sentience:
- Hinton: “The way this idea, there's a line between us and machines, we have this special thing called subjective experience, and they don't. It’s rubbish.” (88:36)
Key Timestamps for Important Segments
- What is AI? (Old search vs. current AI) – 03:46–07:02
- Neural networks, the brain metaphor – 08:18–14:11
- Manual vs. deep learning bird detector – 17:47–29:04
- Backpropagation: the breakthrough – 34:01–34:26
- From theory to practice: computation/data scaling – 36:14–38:53
- Large language models & ‘statistical’ criticism – 41:43–44:57
- On emotion/morality in neural nets – 47:21
- Operator manipulation, reinforcement, and shaping – 52:04–54:18
- Weaponization, manipulation of elections – 57:10–59:26
- International regulation & China/EU insight – 68:57–74:09
- AI and consciousness/sentience challenged – 80:08–89:45
- AI resurrection & persuasion risk – 90:11–91:27
Podcast Tone & Style
True to form, Stewart brings humor and humility (“I look for sharp lines and try to predict… I have no idea how I do that!” - 45:09) as he asks questions both remedial and probing. Hinton’s style is disarmingly dry, precise, and gently corrective (“You’re like the smart student in the front row who doesn’t know anything but asks these good questions,” 16:11). The conversational tone and recurring Eureka moments make even complex topics relatable, while the undercurrent of existential threat never fully dissipates.
Summary Takeaway
Geoffrey Hinton demystifies AI’s inner workings by tracing its origins in neuroscience, explaining how neural nets learn features through exposure and feedback, and drawing direct parallels between biological and digital intelligence. Yet, amid the fascinating technical journey, both he and Stewart express grave concerns about the real-world risks—manipulation by bad actors, existential threats, and the relentless drive for profit and power. Hinton warns that, while the technology’s benefits make its development irresistible and inevitable, we “should try and do it safely. We may not be able to, but we should try” (66:55).
For anyone bewildered, fascinated, or uneasy about AI’s rapid evolution, this episode provides both a masterclass in fundamentals and a warning worth heeding.
