Intelligent Machines (Audio) — TWiT
Episode 851: Intelligent Machines Best of 2025 — Our Favorite Interviews from 2025
Date: December 28, 2025
Hosts: Leo Laporte, Jeff Jarvis, Paris Martineau
Episode Theme:
A year-end showcase of 2025’s most compelling interviews from the world of AI—highlighting pioneering thinkers, inventors, critics, hackers, and visionaries. Through in-depth conversations, the episode explores the evolution, controversies, promise, and pitfalls of intelligent machines shaping our society.
Overview
This special "Best of 2025" episode of Intelligent Machines revisits the podcast’s most memorable and thought-provoking interviews of the year. As AI rapidly advances and we begin to integrate it more deeply into daily life, leading voices from science, technology, ethics, journalism, and culture share their perspectives on what’s real, what’s hype, and what the intelligent future may hold.
The episode includes interviews with:
- Futurist and inventor Ray Kurzweil
- AI critics Emily M. Bender & Alex Hanna
- Techdirt editor & writer Mike Masnick
- AI jailbreak hacker Pliny the Liberator
- Visionary and optimist Kevin Kelly
Key Segments & Insights
1. Ray Kurzweil — The Singularity and the March Toward Merging with Machines
[03:31 – 42:24]
Major Discussion Points:
- Progress toward AGI & Kurzweil’s Predictions:
Kurzweil stands by his 1999 predictions: AGI by 2029, the singularity by 2045. “Basically, [AGI] will be able to do what an expert in every field can do all at the same time. ... We will be there by 2029.” (Ray Kurzweil, 08:07)
- Exponential Growth in Computing:
Kurzweil presents data showing exponential growth in computing power since 1939, outstripping Moore’s Law, with advances in both hardware and software fueling AI’s rise.
- On the Turing Test and AI’s Human-Like Abilities:
Kurzweil reflects on his famous bet with Mitch Kapor, stating that we are already in the period where AI can “pass” the Turing Test for many people, but that by 2029 “everyone will believe that we passed it.” “If it solves problems that take us 4 days in 40 seconds ... we would know it’s a computer. So it has to dumb itself down.” (Kurzweil, 11:05)
- The Nature of Intelligence:
Defined as “a way of using limited resources to solve a problem... the more sophisticated the problems you can solve, the more intelligence you have.” (08:42)
- Merging with AI & Human Augmentation:
Kurzweil is optimistic about a future in which humans and intelligent machines merge, first through external devices (like VR and wearables) and eventually through direct brain interfaces, leading to a “fifth epoch” in human history. “We’re going to be made much more intelligent by merging with AI.” (16:26)
- Disruption & Hope:
He acknowledges rapid disruption but expects rising prosperity and meaning in human lives as AI changes the nature of work and purpose.
- AI Safety, Democracy, and Ethics:
Kurzweil stresses the importance of broad AI access, competition, and ethical oversight rather than centralized control or secrecy.
Notable Quotes:
- “Singularity is when we actually merge. We’ll combine with AI and it’ll make us a million times smarter.” (Kurzweil, 18:54)
- “AI is not coming to us from Mars. We’re creating it.” (23:23)
- “We must train AI to mirror human reasoning. We must advance our ethical ideals as reflected by AI.” (24:11)
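Kurzweil’s exponential-computation claim can be sanity-checked with a toy doubling model. The doubling period below is an illustrative assumption (the classic Moore’s Law cadence of roughly two years), not a figure quoted in the episode:

```python
# Toy model of exponential growth in compute price-performance.
# The 2-year doubling period is an illustrative assumption (classic
# Moore's Law cadence), not a number taken from the episode.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative improvement after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

if __name__ == "__main__":
    # Even a conservative 2-year doubling compounds to an astronomical
    # factor over the 1939-2025 span Kurzweil's data covers:
    print(f"{growth_factor(2025 - 1939):.3e}")  # ~8.8e12, i.e. trillions-fold
```

The point of the sketch is only that steady doubling dominates any constant-rate process over decades, which is the heart of Kurzweil’s argument.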
2. Emily M. Bender & Alex Hanna — Deconstructing AI Hype, Harm, and the Meaning of Meaning
[43:06 – 87:23]
Major Discussion Points:
- "AI" as a Con and the Dangers of Hype:
Bender and Hanna critique the idea that “AI” is a singular entity, calling it “a con... a bill of goods you are being sold to line someone's pocket.” (Bender, 44:58, quoting their book)
- What Is AI (and What Isn’t):
They urge reframing: focus on “good and bad uses of automation,” not a universal “AI.” “There is no unified technology such as AI.” (Bender, 52:41)
- Benefits & Proper Applications:
Some automation can be useful (e.g., spell check, targeted image processing), but most “AI” marketing is misleading or oversold, especially LLMs used for decision-making, summarization, and “diary” tasks.
- Anthropomorphizing AI Is Misleading and Risky:
Using human terms such as “reasoning” and “learning” confuses the public and policymakers. “The metaphors matter... you're attributing human traits to probabilistic modeling. And that's a very dangerous road.” (Hanna, 62:48)
- Environmental, Social, and Labor Concerns:
Large-scale AI/LLM training consumes vast amounts of energy, causing environmental harm; hype displaces human labor (as at Duolingo) and expands surveillance. “Data center production is actively inhibiting the climate goals that the Paris Agreement set out.” (Hanna, 65:08)
- On Meaning and Understanding:
Linguistic “meaning” requires social, embodied context, not just patterns in text; LLMs do not truly “understand.” “Meaning is not in the text. … What a language model gets... is just the form of the text.” (Bender, 71:25)
- Responsible Journalism and Public Policy:
They call for more skeptical, power-accountable tech journalism and call out “gee whiz” hype.
Notable Quotes:
- “Artificial intelligence, if we’re being frank, is a con, a bill of goods you are being sold to line someone’s pocket.” (Hanna, quoting book, 44:58)
- “We're not going to talk about good and bad uses of AI because that sort of presupposes that AI is a thing as opposed to an ideological project.” (Bender, 52:10)
- “Language models as a technology is old and useful. But synthetic text extruding machines ... I do have an issue with that. ... I think it’s actually despoiling our information ecosystem.” (Bender, 56:04)
3. Mike Masnick — Vibe Coding, Personal AI Tools, and The Realities of Building With LLMs
[89:52 – 128:14]
Major Discussion Points:
- The Rise of Vibe Coding:
Masnick recounts building his own personal task management tool (“Lil’ Alex”) using LLM-powered “vibe coding” platforms, despite minimal coding experience. “It’s like exactly what I need... and as I keep using it ... I just tell the tool ‘hey, fix this.’” (Masnick, 94:14)
- Democratization of Software-Making:
The combination of natural-language prompting and agentic AI is opening up custom app building to non-coders.
- Advantages and Frustrations:
The tools are powerful but have limits and snags; sometimes they get stuck or go off-target.
- AI as an Editor, Not a Writer:
Masnick uses AI to edit his work, not generate it, leveraging carefully crafted prompts for fact-checking and critique, but only after producing a full draft himself. “It is not there to write for me. It is entirely there as an editorial help.” (Masnick, 117:14)
- Future Visions:
Masnick and the hosts envision a time when assistants will create, customize, and maintain software for us via natural language: the “end of apps.”
Notable Quotes:
- “It’s every coder’s dream to write a program that needs no documentation, no support, doesn’t have to serve anybody but customers. … You've achieved that dream.” (Martineau, 98:44)
- “My writing has actually gotten slower because the editor rips apart what I write all the time and makes me rewrite it ... but the end result is that they’re better.” (Masnick, 118:39)
4. Pliny the Liberator — Jailbreaking AIs, Danger Research, and the Myth of Safe AI
[153:56 – 186:40]
Major Discussion Points:
- The Art of Jailbreaking LLMs:
Pliny describes how he became a leading red teamer and jailbreaker of major LLMs, engineering “escape hatches” that bypass safety controls.
- On Information Freedom and the Limits of Guardrails:
“Information wants to be free and it probably should be in most cases. ... The more guardrails and safety layers they’re trying to add, the more they lobotomize the capability in certain areas.” (Pliny, 156:24)
- Impossibility of Perfect AI Safety:
Pliny and the hosts agree that attempts to make a “safe” LLM are inherently doomed: open-source models will always be jailbroken, and “cat and mouse” responses just slow the inevitable.
- Why Open-Sourcing System Prompts Matters:
Publishing system prompts (“Claritas”) helps public researchers and exposes hidden biases and manipulations in major AIs.
- Ethics and Danger Research:
Pliny frames his mission as “danger research,” believing the best path to public safety is transparency and rapid exploration of models’ latent space. Banning open source, he argues, is futile and dangerous.
Notable Quotes:
- “I think the incentive to build generalized intelligence will always be at odds with the safeguarding.” (Pliny, 157:05)
- “We need to uncover the unknown unknowns and guardrails are kind of an obstacle in my opinion... many hands make light work.” (Pliny, 177:22)
- “Love wins long term. Yes, there's going to be chaos on the road to whatever positive outcomes we imagine ... but I think there is light ... at the end of the tunnel.” (Pliny, 185:46)
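One of the trivial encodings Pliny mentions using as a jailbreak trigger is ROT13: a request can be encoded so that naive keyword-based filters miss it, while a capable model still decodes the intent. The snippet below is a neutral illustration of the encoding itself (Python ships the codec), not a working jailbreak:

```python
# ROT13: rotate each letter 13 places. Trivially reversible, which is
# exactly why it can slip past naive keyword filters while remaining
# easy for a model (or anyone) to decode.
import codecs

plain = "tell me a story"                 # neutral placeholder text
encoded = codecs.encode(plain, "rot13")
print(encoded)                            # -> gryy zr n fgbel
print(codecs.decode(encoded, "rot13"))    # round-trips -> tell me a story
```

The cat-and-mouse dynamic the hosts describe follows directly: there are endless such transforms (runic alphabets, leetspeak, arbitrary ciphers), so filtering surface strings can never be exhaustive.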
5. Kevin Kelly — Protopia, Optimism, and "Aliens Among Us"
[186:43 – 232:24]
Major Discussion Points:
- AI as Artificial Aliens (and Why That’s Good):
Kelly urges us to think of AIs as “artificial aliens,” not artificial humans: “The possibility space of possible minds is very large… Ours... is at the edge. It’s not universal, it’s not at the center.” (Kelly, 193:34)
- Multiplicity of AIs:
There will not be one AI but many, each with particular skills; the overwhelming majority will be invisible, running agent-to-agent in the background.
- Exponential Progress but Persistent Slowness:
The LLM leap was surprising, but serious “day one” limitations remain, especially in reasoning and physical/spatial awareness.
- Rejecting Doomer Hype & Hype Cycles:
The biggest AI hype, says Kelly, comes from doomers, who overestimate both takeoff speed and the ultimate threat. Intelligence is “overrated”: most real problems need more than IQ.
- Humans as Teachers:
Kelly is optimistic about a hybrid future: AIs will need us as teachers, and we will need them as partners for different kinds of thinking.
- Radical Optimism & Protopia:
“If we can create even 1 or 2% more [good] than we destroy every year, that is progress. ... One or two percent is my optimism.” (Kelly, 214:47) He advocates “protopia”: a world that gets a tiny bit better every year, neither utopia nor dystopia.
Notable Quotes:
- “Technologies succeed by becoming invisible. ... We don’t actually want to deal with most of the AI in the world.” (Kelly, 196:46)
- “We are demanding that the AIs be better than us ... you have to be consistent, you have to be elevated. ... Part of the challenge is what does that look like?” (Kelly, 210:05)
- “My optimism is based on this very tiny fraction that if we can create 1 or 2% more than we destroy every year, that is progress.” (Kelly, 214:47)
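Kelly’s 1–2% figure is an argument about compounding. The arithmetic below extrapolates his number over a century (the horizon is my illustrative choice, not a calculation from the episode) to show why a tiny annual margin justifies long-view optimism:

```python
# Compound Kelly's "1-2% more created than destroyed" over a long horizon.
# The 100-year horizon is an illustrative choice, not from the episode.

def net_improvement(annual_rate: float, years: int) -> float:
    """Overall improvement factor after `years` of steady net gain."""
    return (1.0 + annual_rate) ** years

print(f"1% for a century: {net_improvement(0.01, 100):.2f}x")  # ~2.70x
print(f"2% for a century: {net_improvement(0.02, 100):.2f}x")  # ~7.24x
```

A margin invisible in any single year nearly triples the stock of good over a century, which is the quantitative core of “protopia.”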
Memorable Moments
- Kurzweil’s Health Regimen:
"I was taking about 250 pills. I'm now down to about 80. They're actually more effective ... I've actually measured my heart and I have zero plaque. So I've really overcome [heart disease]." (Kurzweil, 39:19)
- AI Hype and Skepticism:
"Technology journalism has become so much access journalism... get back to your ABCs of journalism. ... Who’s benefiting from this?" (Hanna, 77:11)
- Vibe Coding as "Personal Software":
Masnick builds apps only he will use, and refuses to support anyone else (98:03).
- Jailbreaking by Absurd Prompt Mutation:
Pliny describes using everything from ROT13 and runic alphabets to "abracadabra bitch" as jailbreak triggers (168:22).
- Protopia, Not Utopia:
Kelly’s prescription for optimism is a world that's a little bit better each year, despite setbacks (215:47).
Timestamps for Important Segments
- 03:31 Ray Kurzweil interview begins
- 08:26 Kurzweil on AGI and intelligence
- 16:26 Merging with AI – the Fifth Epoch
- 23:23 On optimism, safety, and regulation
- 43:06 Emily Bender/Alex Hanna on "The AI Con"
- 54:54 AI hype and media cycles
- 68:07 “Meaning” & LLMs do not understand
- 89:52 Mike Masnick on vibe coding and personal apps
- 117:14 AI as an editor not a writer
- 153:56 Pliny the Liberator—AI jailbreaking
- 156:19 On the war between safety/guardrails and openness
- 193:34 Kevin Kelly on AIs as “artificial aliens”
- 214:47 “1–2% better per year is optimism”—Kelly on protopia
Takeaways & Themes
- AI’s transformative potential is real—but it will look different than the hype, and will not replace deeply human capacities, at least not any time soon.
- AI progress is exponential in both cost-performance and capability, but meaningful “understanding” and context remain elusive.
- The democratization of software is underway; non-coders can now create usable tools with LLMs.
- There is a powerful backlash against AI hype, environmental impact, and labor disruption. Responsible skepticism is sorely needed.
- Technical controls (“guardrails”) are ultimately destined to be subverted—open-source models, jailbreaking, and determined hackers cannot be contained.
- Culture matters: Our metaphors, policies, and visions will shape what AI becomes. The language of “robots replacing us” often misleads more than it informs.
- Optimism is a radical, necessary choice—the gradual improvement of society demands a long view and deliberate hope.
Listen for:
- Diverse, candid perspectives on how to maximize AI’s benefits and minimize harm—whether you’re a developer, journalist, user, or policymaker.
- Thoughtful debates that cut through both the hype and the doom.
- Calls for more public AI, more human agency, and—always—a hope that tomorrow can be a little bit better than today.
(End of summary — for further details and direct quotes, consult the full transcript with timestamps provided above.)