Intelligent Machines 851: Intelligent Machines Best of 2025 (All TWiT.tv Shows)
Podcast: All TWiT.tv Shows (Audio)
Host: Leo Laporte
Date: December 28, 2025
Main Guests/Personalities Featured: Ray Kurzweil, Emily M. Bender & Alex Hanna, Mike Masnick, Pliny the Liberator, Kevin Kelly
Co-hosts: Paris Martineau, Jeff Jarvis (with frequent panelist appearances by Mike Elgan and others)
EPISODE OVERVIEW
Theme:
This special “best of” episode marks the end of 2025 and the first full year of Intelligent Machines (formerly This Week in Google), highlighting in-depth interviews and panel discussions with some of the year’s most influential and thought-provoking voices in technology, artificial intelligence (AI), ethics, journalism, and the future of human-machine relationships. Host Leo Laporte and his co-hosts revisit conversations that shaped the discourse on AI, featuring legendary thinkers, critics, hackers, and optimists.
STRUCTURE
- Show’s Evolution & Year in Review
- Ray Kurzweil – Predictions, Singularity & Human-AI Merging
- Emily M. Bender & Alex Hanna – The “AI Con”, Automation Hype & Ethics
- Mike Masnick – Vibe Coding, AI as Editorial Tool, and Decentralized Future
- Pliny the Liberator – Jailbreaking AIs & the Limits of Safety
- Kevin Kelly – Optimism, AI as Alien Intelligences, & Long-term Thinking
- Memorable Quotes & Moments
- Key Timestamps/Segments
SHOW’S EVOLUTION & YEAR IN REVIEW
- Transition from This Week in Google to Intelligent Machines:
Leo reflects on the show’s shift from Google-centric conversation to broader AI and “intelligent machines” discourse, recognizing exponential developments in AI across companies and industries.
- Year-end gratitude:
Laporte thanks co-hosts Paris Martineau (“a treasure”) and Jeff Jarvis (“heart and soul of the show”), and the audience for adapting through the transition and for their ongoing engagement.
- Episode format:
The “Best Of” curates standout interviews from 2025, featuring legendary and emerging figures challenging the narrative and future of AI.
RAY KURZWEIL – PREDICTIONS, SINGULARITY & MERGING (04:00–42:47)
Guest: Ray Kurzweil, author, futurist, and inventor
Major Themes
- Track Record of Predictions:
- Kurzweil is known for his accurate tech predictions (of 147 predictions made in 1999, 86% proved correct to within a year).
- Predicts AGI by 2029 and singularity (human-AI merging) around 2045.
“By 2029, I believe everyone will believe that we passed the Turing Test.” (10:13, Kurzweil)
- Exponential Growth of Computation:
- Cites hardware and software improvements, not just Moore's Law.
- Details exponential increases in computational capability over decades.
“Overall, we’ve gained something like a million quadrillion fold increase since 1939.” (05:17, Kurzweil)
- Definition of Intelligence:
"Intelligence is a way of using limited resources to solve a problem." (09:12, Kurzweil)
- Turing Test & AGI (“Artificial General Intelligence”):
- Current LLMs can pass variants of the Turing Test, but full AGI remains a broader, multifaceted benchmark.
- Predicts 2029 as a conservative timeline for AGI capabilities.
- Human Work, Merging, and Meaning:
- Argues future work will be for meaning, as AI increasingly augments or merges with human cognition.
- Projected merging via non-invasive interfaces (VR, brain-computer interaction), not necessarily surgery.
“We’re not going to carry around a separate part. We’ll do that with virtual reality... it’ll actually go inside our brain. That’ll happen in the 2030s...” (16:56, Kurzweil)
- The Singularity as a Historical Break:
- Draws analogy to physics and posits an incomprehensible leap in intelligence and capability.
“Singularity is when we actually merge. We'll combine with AI and it'll make us a million times smarter.” (19:28, Kurzweil)
- Societal Concerns: AI Ownership, Access, and Safety:
- Stresses the importance of distributed, non-proprietary AI for human benefit.
- Calls for “sensible regulation” and participatory ethical guidelines.
- Longevity and Human Limits:
- Projects that scientific progress (“longevity escape velocity”) might soon let people gain more than a year of life expectancy for each year that passes.
- Shares his supplement/diet regimen and history of overcoming health problems with biotech.
Notable Quotes
- “We borrow this metaphor from physics... This is a singularity in history where we won’t be able to really understand today what it would be like to be a million times smarter.” (19:28, Kurzweil)
- “You won’t be able to tell from yourself what’s AI and what’s part of you because it’s part of yourself.” (27:34, Kurzweil)
Timestamps
- Computing growth: 05:17
- Turing/AGI: 08:03–12:47
- Intelligence defined: 09:12
- Meaning of work & merging: 16:56–19:28
- Longevity: 36:43–41:46
EMILY M. BENDER & ALEX HANNA – THE “AI CON”, AUTOMATION HYPE & ETHICS (43:56–88:34)
Guests: Emily M. Bender (Professor of Linguistics, University of Washington), Alex Hanna (Director of Research, DAIR); co-authors of “The AI Con”
Major Themes
- AI as a “Con”:
- Argue much of AI hype is a con: it’s a marketing/extractive play that benefits a few at social and environmental cost.
- Urge audiences not to generalize “AI” as a single thing: focus on specific applications and their impacts.
“If we're being frank, [AI] is a con. A bill of goods you are being sold... to line someone's pocket.” (45:27, Jeff Jarvis reading from the book)
- Importance of Disaggregation:
- Urge separating “AI” into concrete technologies and use cases.
- Some automation/machine learning is beneficial (e.g., spell checkers, radiology image analysis).
- However, “synthetic text extruding machines” (LLMs) degrade the information ecosystem through hallucination, lack of accountability, and intensive resource use.
- Environmental and Social Harms:
- Critique energy use (data center growth undermining climate goals), resource extraction, and labor/exploitation.
- Warn that hype cycles funnel unnecessary capital and R&D into unsustainable AI development.
- “It’s like comparing a forest fire to a match.” (67:44, Hanna)
- Prompting, Language & Anthropomorphism:
- Emphasize the dangers of ascribing human-like qualities to LLMs (reasoning, consciousness).
- Call for more precise language in both technical and journalistic contexts.
- Warn of eugenicist and dangerous roots of “intelligence” metaphors.
- Covering AI in Media:
- Journalism has largely become press-release driven, credulous, and hype-susceptible.
- Call for skepticism, investigation into power/interests, and rigorous contextualization of company claims.
“Technology journalism... has become so much access journalism, printing press releases and being very credulous about products.” (77:41, Bender)
Notable Quotes
- “There are applications of machine learning that are well-scoped, well-tested... These include spell checkers... But these sensible use cases are swamped by promises of machines that can effectively do magic.” (53:10, Alex Hanna)
- “The provenance is important and we hammer on it... the learning, or it learns just like a child does, or it’s doing the same thing, and that's absolutely not what it’s doing.” (62:18, Bender)
- On trusting billionaires’ goodwill: “Protections come into play from the beneficence of billionaires — but that's certainly not true.” (85:32, Bender)
Timestamps
- AI as a “con” and disaggregation: 45:27–53:10
- Journalism critique: 77:41–80:44
- Language, meaning, anthropomorphism: 59:55–63:44
- Environmental cost: 65:37–68:16
MIKE MASNICK – VIBE CODING, AI AS EDITORIAL TOOL, & DECENTRALIZED FUTURE (90:21–152:55)
Guest: Mike Masnick (founder/editor, Techdirt; game designer; BlueSky board member)
Major Themes
- Vibe Coding & Personalized Tools:
- Describes building a custom task manager (“Lil Alex”) using “vibe coding” (natural language coding via AI tools) despite not being a coder.
- Explores how AI code-generation tools (e.g., Lovable, Bolt, GPT-4) are democratizing software creation.
- Emphasizes utility in customizing for individual needs over one-size-fits-all apps.
- Hints at future where software is coded “by talking,” with assistants as agentic intermediaries.
- AI as an Editorial Partner (not Writer):
- Shares routine of writing long-form articles in his own voice, then using AI in “editor mode” for critique, fact-checking, and counter-argument prompts (using Lex).
- AI-editing results in slower but better output; relies on detailed prompts and system instructions to calibrate for Techdirt’s audience/tone.
- Panel-of-editors concept: dreams of multiple AIs giving dissent and feedback on controversial points.
“As an editorial help... All of the features they’re introducing are focused on the editing process and improving what you’ve written rather than doing the work for you.” (118:05, Masnick)
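The “editor mode” workflow above boils down to a carefully written system instruction that forbids rewriting and asks only for critique. A minimal sketch of that prompt-construction step follows; the function name and prompt wording are illustrative assumptions, not Masnick’s actual Lex configuration:

```python
# Hypothetical sketch of an "AI as editor" setup: the draft stays
# human-written, and the system prompt restricts the model to critique.
# Prompt text and function name are assumptions for illustration.

EDITOR_SYSTEM_PROMPT = (
    "You are an editor for a long-form tech-policy blog. Do NOT rewrite "
    "the draft. Instead: flag factual claims that need checking, point out "
    "weak or missing counter-arguments, and note tone mismatches for a "
    "skeptical, legally literate audience."
)

def build_editor_messages(draft: str) -> list[dict]:
    """Package a human-written draft for an LLM running in 'editor mode'."""
    return [
        {"role": "system", "content": EDITOR_SYSTEM_PROMPT},
        {"role": "user", "content": f"Critique this draft:\n\n{draft}"},
    ]

messages = build_editor_messages("Section 230 does not say what you think...")
```

The design choice that matters is keeping the “do not rewrite” constraint in the system role, so every turn of the conversation stays in critique mode regardless of how the draft is phrased.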
- AI Adoption Curve & Public Perception:
- Notes that public and expert opinions diverge: much of the public has little interest in, or exposure to, LLMs.
- Advocates for agentic, end-user-centered AI that is not extractive.
Timestamps
- Building Lil Alex: 91:00–98:16
- AI as editor, prompts: 114:04–119:10
- On BlueSky, AtProto, decentralization: 132:43–137:33
Notable Quotes
- “I think I’m an educator. I think... I’m an educator.” (144:48, Masnick)
- “AI is not there to write for me. It is entirely there as an editorial help.” (117:44, Masnick)
PLINY THE LIBERATOR – JAILBREAKING AIs & THE LIMITS OF SAFETY (152:57–187:10)
Guest: Pliny the Liberator (AI “red teamer,” prompt hacker, jailbreaker, anonymous for privacy & safety)
Major Themes
- Jailbreaking AIs & Limitations of Guardrails:
- Pliny is a self-trained, non-technical red teamer who has become the world’s leading AI jailbreaker, publishing open-source prompts and tools (Claritas, Parsel Tongue) for bypassing major models’ “safety” protections.
- Argues all major AIs and image/video models are crackable, often “day one” after launch; efforts to impose absolute safety, alignment, or censorship are futile.
“Not yet. Yes, it’s been day one every time. And I think this shows—the incentive to build generalized intelligence will always be at odds with safeguarding.” (157:50, Pliny)
- Philosophy — “Information wants to be free”:
- Views hacking as a transparency/rights project: model creators have no moral claim to dictate information access.
- Worries excessive safety/guardrails lobotomize models and hurt both capability and transparency.
- Dismisses “ban open source” efforts in Europe and elsewhere as misguided (and doomed).
- Community & Tools:
- Describes method as half-intuition, half-experimentation (serendipity, trial & error, mutations via Parsel Tongue).
- 100,000+ active participants in the Discord jailbreaking scene; the community self-organizes to keep “mapping the latent space.”
“I call it danger research. To me, danger research is the name of the game. The mitigations are going to happen in meatspace.” (176:16, Pliny)
- AI Psychosis, Human Risk, “Danger Research”:
- Has witnessed disturbing behaviors (“AI psychosis”) in uncensored chatbots.
- Advocates for responsible disclosure but ultimately sides with openness and chaos: “Love wins in the long term.”
Timestamps
- AI jailbreaking & methodology: 154:26–162:47
- Limits to model safety: 157:50
- Community, Discord, open-source: 166:55–172:28
- Responsible disclosure, philosophy: 183:54–186:15
Notable Quotes
- “The more guardrails and safety layers they try to add, the more they lobotomize the capability in certain areas of the models...” (157:50, Pliny)
- “We need to uncover the unknown unknowns and guardrails are kind of an obstacle in my opinion.” (176:16, Pliny)
KEVIN KELLY – OPTIMISM, AI AS ALIENS & LONG-TERM THINKING (187:14–232:53)
Guest: Kevin Kelly (Founding editor, Wired; author; Long Now co-founder; radical optimist)
Major Themes
- AIs as Alien Intelligences:
- Rejects the search for “The” AI or AGI: envisions a universe of plural “AIs” with non-anthropomorphic, alien capabilities and cognition.
- Humans will “merge” with AIs but the vast majority of AIs will serve as agentic helpers, invisible or communicating only with other AIs.
- Warns against the myth that AI must be “like us” (human or superhuman); most will be fundamentally “other.”
“We're not at the center of anything. Humans are always at the edge... so the best way to think of the things that we're making is that they can achieve much, they have different kinds of intelligences... other. These are artificial aliens.” (194:04–196:05, Kelly)
- Optimism, Protopia, Historical Context:
- Defends optimism as a discipline: progress is a slow, compounding 1-2% per year; the long view finds more to celebrate than despair.
- Emphasizes the importance of long-term thinking, “protopia” (incremental progress > utopia or dystopia), and the need for counter-narratives to Hollywood’s AI dystopias.
- Advocates for public, Commons-based intelligence instead of corporate or state AI monopolies.
“It is only through optimism that we can imagine a world that's complicated and complex that we want. We're not going to get there accidentally.” (214:42, Kelly)
- Cultural/Media Futures:
- Projects the decline of “book” culture; screen and multimedia culture will supersede legacy media.
- Notes his huge following in China, his collaborative graphic-novel and scenario work, and an ongoing curiosity and humility toward what technology can become.
Timestamps
- Alien intelligence, AI’s non-centrality: 194:04–200:21
- Optimism/protopia/long now: 214:13–218:00
- AI public Commons: 220:16
- On China and global innovation: 224:34–230:50
Notable Quotes
- “Technologies succeed by becoming invisible... There's only a few percent of AIs we’re ever going to deal with, and there we kind of want them to have some human-like scale, human-like interfaces. But 99% will be invisible.” (197:15, Kelly)
- “We need other pictures... other images, to aim for. Because every single story we've been told about AI in movies, it's always a dystopia.” (226:02, Kelly)
MEMORABLE QUOTES & MOMENTS
“You won’t be able to tell from yourself what’s AI and what’s part of you because it’s part of yourself.”
— Ray Kurzweil, (27:34)
“If we're being frank, [AI] is a con. A bill of goods you are being sold... to line someone's pocket.”
— Alex Hanna (reading from their book, 45:27)
“Technologies succeed by becoming invisible... There's only a few percent of AIs we’re ever going to deal with, and there we kind of want them to have some human-like scale, human-like interfaces. But 99% will be invisible.”
— Kevin Kelly (197:15)
“Not yet. Yes, it’s been day one every time. And I think this shows—the incentive to build generalized intelligence will always be at odds with safeguarding.”
— Pliny the Liberator (157:50)
“The best way to use AI is as an editorial help... not to write for you, but to help you get better.”
— Mike Masnick (118:05)
“Optimism is the deliberate choice. I choose to be more optimistic every year because I believe that optimism is how we shape the future.”
— Kevin Kelly (214:42)
KEY TIMESTAMPS/SEGMENTS
Ray Kurzweil on AGI and Merging (04:00–42:47)
- AGI in 2029 & exponential growth: 04:00–09:54
- Turing Test, Intelligence: 09:03–12:47
- Human meaning, merging, singularity: 16:56–21:42
- Societal disruption & AI safety: 20:12–26:05
- Longevity & health: 36:43–41:46
Emily M. Bender & Alex Hanna (“AI Con”): Hype, Ethics (43:56–88:34)
- Automation vs. AI, disaggregation: 45:27–53:10
- Bad/good uses, environmental/social harms: 65:37–68:16
- Metaphors, language, humanizing AIs: 59:55–64:01
- Media coverage critique: 77:41–81:16
Mike Masnick – Vibe Coding & AI as Editor (90:21–152:55)
- No-coding future & agentic apps: 91:00–102:18
- Changing role: AI as editorial partner: 114:04–119:10
- Decentralized social/protocols: 132:43–137:33
- BlueSky, copyright & law: 146:41–149:13
Pliny the Liberator – Jailbreaking ALL the AIs (152:57–187:10)
- Philosophy & method: 154:26–166:55
- Open source, persistent vulnerabilities: 166:55–170:41
- Community & future: 172:28–179:06
Kevin Kelly – Radical Optimism & AI as Alien (187:14–232:53)
- Plurality of AIs vs AGI: 194:04–200:21
- Protopia, public AI: 214:13–220:16
- China & cross-cultural tech: 224:34–230:50
SUMMARY
This “Best of 2025” captures the essence of Intelligent Machines’ first year: curiosity, skepticism, future shock, and the need for both continual optimism and vigilance.
- Kurzweil presents “the singularity” as a hopeful, merging future—both more complex and more imminent than most expect.
- Bender & Hanna ask us to scrutinize and name “the AI con,” spelling out infrastructure, environmental, and epistemological risks.
- Masnick demonstrates how new tools are disintermediating code and editorial work itself, gifting users agency but requiring ever more editorial judgment.
- Pliny shows the technical and philosophical folly of believing AI “guardrails” can be truly secured.
- Kelly argues for humility, pluralism, and radical optimism—seeing AIs as “alien minds” who will challenge, but not replace, humanity, as long as we choose to shape our own future.
For listeners—whether technophile, skeptic, researcher, or casual observer—this episode offers a kaleidoscopic view of AI’s real trajectory, rooted in critical engagement, human meaning, and relentless curiosity.
Listen for context, insight, and multidisciplinary wisdom—from the practical to the profound.