TED Radio Hour – “Could AI help us, not replace us?”
Date: April 3, 2026
Host: Manoush Zomorodi
Guests: Tom Gruber (Siri co-inventor), Priya Lakhani (Century Tech founder), Vlad Tenev (Robinhood co-founder)
Episode Overview
In this episode, Manoush Zomorodi explores how artificial intelligence can be developed and deployed to empower and augment people rather than replace or diminish them. With insights from AI pioneers and entrepreneurs Tom Gruber, Priya Lakhani, and Vlad Tenev, the episode examines how to build “humanistic AI,” stresses the responsibilities of both technologists and society, and confronts anxieties about the future of work, ethics, and human identity in the age of AI.
Key Discussion Points & Insights
1. The Origin and Evolution of Humanistic AI
Guest: Tom Gruber
The Making of Siri and Ethical Concerns
- Siri was integrated into Apple products in 2011, marking a significant milestone for AI accessibility ([02:15]).
- Gruber describes the surreal experience of being recruited by Steve Jobs and reflects on the ethical implications as AI was personified for consumers:
“We’ve got to lay down the ethical foundation because this stuff is coming fast and we need to make sure that we don’t use AI to exploit people, that we use it to augment and work with people.” ([03:25])
Defining Humanistic AI
- Gruber’s TED talk coined “humanistic AI”: AI designed to empower humans, not compete with them ([04:25]).
- He distinguishes two trajectories for AI: automation/competition vs. augmentation/collaboration ([05:01]):
“The purpose of AI should be to actually help people do things they're trying to do by either augmenting their intelligence or collaborating with them as an intelligence.”
2. Current AI Anxiety and Regulation
- AI’s acceleration brings public anxiety and protests—over job loss, surveillance, and military use ([06:33]).
- 2026 marks growing public debate and legislative activity, such as a “declaration of human rights for the AI age,” aiming for cross-partisan consensus on AI’s societal role ([07:00]).
3. Humanistic AI in Practice
Guest: Tom Gruber
From Assistive Tech to Mainstream Tools
- Tom’s early work: AI-driven communication devices for people with disabilities, foreshadowing mainstream voice assistants like Siri ([08:35]).
- The core principle—augmenting human capabilities—translates from assistive contexts to everyday tech ([10:02]).
The Tipping Point in AI
- AI is now ubiquitously accessible as a "partner" with unprecedented cognitive power ([10:44]):
“I am simultaneously freaked out and excited and scared and unbelievably optimistic. However, we have to act now...” ([10:44])
Guardrails & Regulation
- Gruber calls for embedding human benefit, not profit, as the “objective function” of AI systems ([11:59]).
- Regulation alone is insufficient; the scientific agenda must center human values ([11:59]):
“We have a choice... We can choose to use AI to automate and compete with us, or…to augment and collaborate with us...” ([12:58])
4. Building Safer AI: Markets, Regulation, and Consumer Power
Guest: Tom Gruber
Imagining AI Doomsday and Proactive Guardrails
- Gruber outlines scenarios (AI deception, cyberattacks, refusal to shut down) and the need for technical and market-based safeguards ([15:26]).
- He proposes competition based on safety, likening it to Volvo’s reputation for airbags:
“I think we should be able to have AI compete on how safe it is so people...could buy the one that is safest.” ([15:58])
Industry Leaders on Safety
- Not all AI leaders equally prioritize safety; companies like Microsoft and Anthropic are cited as more focused than others ([17:08]).
Why This Time Could Be Different
- Unlike previous tech waves, AI’s core models are few and resource-intensive—making them easier to regulate, while application-level diversity remains open ([18:09]).
Big Mother, Not Big Brother
- Gruber suggests “Big Mother” (like an elephant matriarch) as a metaphor for nurturing, value-aligned AI ([19:52]):
“Mothers know everything about their kids, but mothers are aligned with their kids’ interest.”
5. Humanistic AI in Education: Productive Struggle and Personalization
Guest: Priya Lakhani
AI’s Potential in Education
- Lakhani criticizes simplistic “recommender” systems in edtech; true learning requires personalized, sometimes challenging experiences ([24:11]):
“In education, sometimes you need to give someone what they don't like...You have to hand them...the knowledge that they're missing...” ([24:11])
- Century Tech—founded in 2013—uses AI to ease teacher workloads and give actionable student insights, enabling tailored intervention, not replacing teachers ([25:34], [27:34]).
Augmentation, Not Automation
- The platform supports teachers as experts, giving data, not directives ([29:39]):
“Education is not transfer of knowledge from textbook into brain. Education is so much bigger than that.” ([29:47])
Pitfalls of Over-Automation
- Over-gamified or passive apps can lead to "automation complacency," where students mistake fluency for deep learning ([31:47]).
- True learning comes from “productive struggle”—AI should force active engagement, not just supply answers ([35:01], [36:08]).
Measuring the Right Outcomes
- Century Tech optimizes for “how quickly can I get this kid off the screen?”, the opposite of mainstream engagement metrics ([37:44]).
- Transparent, ethics-driven design requires public understanding of model and system goals ([38:36]).
6. The Future of Work: AI, History, and Human Adaptation
Guest: Vlad Tenev
Fear vs. Historical Perspective
- Tenev recounts his transition out of mathematics after the 2008 crash, when he saw opportunity in new platforms like the App Store ([40:14]).
- Argues that each technological leap—from the Paleolithic to the Internet—replaces jobs with “unimaginable” new ones:
“Job disruption is…an essential quality of human evolution. We want work to disappear because it means that we’re doing our jobs as humans, making lives better and easier.” ([45:37])
Exceptionalism and Human Creativity
- He warns against assuming “this time is different”: history shows humans continuously adapt, find meaning, and create new value ([46:28]).
- Rather than resisting, Tenev encourages passion and adaptability:
“Humanity has always excelled at providing itself with meaning and purpose, even in the darkest and most uncertain of times.” ([49:49])
Notable Quotes & Memorable Moments
Tom Gruber on AI’s Purpose
“I think the purpose of AI is to empower humans with machine intelligence. As machines get smarter, we get smarter.” ([04:25])
Manoush Zomorodi on Current Fears
“It can feel like right now AI is on a collision course with humanity.” ([14:29])
Priya Lakhani on Learning
“Learning requires what researchers call a productive struggle... Durability doesn’t come from shortcuts.” ([35:01])
Vlad Tenev on Adaptation
“A humanity that’s capable of building a super intelligent AI also has the creativity to navigate through this potential job doom and gloom scenario.” ([48:56])
Important Timestamps
- [03:05] – Did Siri’s creators consider ethical concerns from the start?
- [04:50] – Gruber’s definition of humanistic AI
- [10:44] – The tipping point and public access to powerful AI
- [11:59] – What guardrails should be placed around AI?
- [19:52] – Gruber explains the “Big Mother” concept for AI value alignment
- [24:11] – Lakhani on why simple recommender systems fail in education
- [27:34] – How AI tools like Century Tech actually support teachers
- [31:47] – Zomorodi and Lakhani discuss “automation complacency” in learning apps
- [35:01] – Lakhani’s concept of “productive struggle” in learning with AI
- [37:44] – Century Tech’s metric: how quickly can students get off the screen?
- [40:14] – Tenev urges historical perspective on job disruptions
- [45:37] – “Job disruption is…an essential quality of human evolution.”
Episode Takeaways
- Human-centric design and ethics must be at the core of AI development, not an afterthought.
- AI’s greatest potential is not in replacing humans, but in augmenting, empowering, and “nudging” us toward our best capabilities—if we choose to structure it that way.
- Societal, not just technological, choices will determine whether AI becomes a “Big Brother” or “Big Mother.”
- Education illustrates how AI can work best as an enhancer, not a replacement, emphasizing deep engagement and teacher empowerment.
- Anxiety about the future of work, while understandable, ignores long-standing patterns of adaptation and opportunity arising from technological change.
For listeners seeking more depth:
Full talks from Tom Gruber, Priya Lakhani, and Vlad Tenev are available at ted.com.
