Future of Life Institute Podcast: “Preparing for an AI Economy”
Guest: Daniel Susskind
Host: Gus Docker
Date: June 27, 2025
Overview
In this episode, economist and writer Daniel Susskind joins Gus Docker of the Future of Life Institute to explore how artificial intelligence (AI) is reshaping the economy, the future of work, and education. Drawing on his books and current research, Susskind discusses the interplay between technological progress, employment, and societal values, and offers guidance on how individuals, especially the next generation, can thrive amid rapid and uncertain change.
Key Discussion Points & Insights
1. Daniel Susskind’s Background and Focus
[01:03]
- Susskind’s main interest: the impact of technology, and particularly AI, on work and society.
- Authored three books:
- The Future of the Professions (2015): Technology’s effect on white-collar work.
- A World Without Work (2020): Broader take on technology’s transformative effect on labor.
- Growth: A Reckoning (2024): Explores the dual nature of technological growth—fueling human flourishing and presenting significant challenges.
- Upcoming: What Should My Children Do? How to Flourish in the Age of AI—responding to the most common question Susskind receives from parents.
2. Economists vs. AI Researchers: Different Worldviews
[03:42]
- 10–15 years ago, economists underestimated what machines could do, clinging to a “routine vs. non-routine” task dichotomy. As Susskind notes:
"Economists were making this systematic mistake in thinking about the capabilities of technology." [04:49]
- AI researchers historically overlooked the nuances of technology’s impact on labor markets, especially the distinction between “substitution” (machines replacing workers) and “complementarity” (machines creating new types of demand for human labor).
[08:07]
"The indeterminate, uncertain aggregate effect that these technologies can have on work is not as straightforward as focusing on these sort of cinematic substitution effects." — Susskind
3. The Productivity Paradox and Technological Diffusion
[09:27]
- Despite rapid tech advances, productivity statistics often lag. Susskind points to:
"You see technology everywhere apart from in the productivity statistics." — A classic economist’s lament [09:27]
- Historical context: It took decades for Industrial Revolution inventions to impact productivity statistics, emphasizing the gap between innovation and implementation.
4. What Metrics Should We Watch?
[10:40]
- Productivity growth is critical for solving economic issues:
"There are very few problems in the British economy which would not be solved by more productivity growth." [10:55]
- Beyond productivity, we need to track changes in the quality of work—often affected earlier and more deeply by technological change than aggregate employment rates.
5. Steering Technological Progress: Is It Possible?
[13:48]
- Susskind critiques the “train on fixed rails” metaphor for tech progress, proposing instead a “sailing boat” metaphor:
“We also have a huge amount of discretion over the kind of direction of technological progress as well.” [13:48]
- Technology’s effect on work can be steered by policy incentives, such as tax codes that currently favor automation over human employment.
6. The Limits of Steering: Prediction Difficulties
[17:32]
- It’s hard to anticipate whether a technology will substitute or complement human labor. Example: ATMs increased bank teller employment by changing the nature of their work and growing the overall economy.
- Effects can change over time (e.g., GPS systems serving human drivers vs. autonomous vehicles).
- Underlying moral issues: Should we always steer tech to complement labor, or might a society with less work and alternative income-sharing mechanisms be desirable?
"It's not entirely obvious that we want to be steering technology away from substituting towards complementing." — Susskind [21:29]
7. Will Any Human Work Remain Once AI Outperforms Us?
[22:32], [24:49]
- Even if machines surpass humans in all economically relevant tasks, work may persist due to:
- Comparative Advantage: As in international trade, even “lesser” partners can specialize and find a role.
- Preferences & Motives: Humans may prefer goods/services created by humans (aesthetic, achievement, empathy motives).
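The comparative-advantage point can be made concrete with a stylized two-task arithmetic sketch (the numbers and task names are illustrative, not from the episode): even when an AI is absolutely better at every task, the human still has the lower opportunity cost somewhere, so specialization leaves them an economic role.

```python
# Illustrative numbers only — not from the episode.
# Output per hour for each worker at each task; the AI is
# absolutely more productive at both.
output = {
    "AI":    {"analysis": 100, "care_work": 10},
    "human": {"analysis": 5,   "care_work": 4},
}

def opportunity_cost(worker: str, task: str, other_task: str) -> float:
    """Units of `other_task` forgone per unit of `task` produced."""
    return output[worker][other_task] / output[worker][task]

# The AI gives up 0.1 units of care work per unit of analysis;
# the human gives up 0.8 — so analysis is the AI's comparative
# advantage.
ai_analysis_cost = opportunity_cost("AI", "analysis", "care_work")       # 0.1
human_analysis_cost = opportunity_cost("human", "analysis", "care_work") # 0.8

# Conversely, the human's opportunity cost of care work (1.25 units
# of analysis forgone) is far below the AI's (10 units), so care
# work is where the human retains a role despite being slower at
# everything in absolute terms.
assert (opportunity_cost("human", "care_work", "analysis")
        < opportunity_cost("AI", "care_work", "analysis"))
```

The same logic drives gains from trade between countries, which is why Susskind invokes the international-trade analogy.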
8. Preference Motives: Aesthetic, Achievement, and Empathy
[28:35]
- Aesthetic: Some objects/experiences valued for human origin (e.g., artwork, crafts, artisanal goods).
“We value the fact that that painting was done by a human being, not simply that it's extraordinarily beautiful.” [30:56]
- Achievement: We value witnessing human competition and accomplishment (sports, chess).
- Empathy: Certain roles (e.g., end-of-life care) require genuine human understanding and presence.
Limits to These Limits
- As AI becomes more capable, societal attachment to human production may weaken. [36:13]
- Jobs based on achievement are inherently limited; most people can’t be at the very top.
9. Moral Limits: Keeping Humans in the Loop
[39:45]
- Some roles may be protected for ethical reasons (e.g., judges, military command, legal decisions).
"There are lots of micro examples of tasks or activities that have this kind of moral flavor…we want to keep a human in the loop.” [40:04]
- Even these limits may erode under pressures for better outcomes or efficiency.
“As these technologies just become more and more capable and the outcomes become better and better…the attachment to kind of pure process based moral reasoning is going to…hold up?” [44:31]
10. Societal Choices and the Erosion of Moral Limits
[45:31]
- Commercial and geopolitical competition may lead us to adopt AI in areas previously reserved for humans, potentially “disempowering” ourselves.
"In the process of kind of accepting these arguments, we will disempower ourselves." — Gus Ducker [45:31]
- Susskind: Moral, cultural, and social arguments already play a greater role than technical limitations in constraining AI use.
11. The Unpredictability of the Future of Work
[47:42]
- Susskind on the folly of career prediction:
“It just would have been unimaginable…because the technologies that transformed our lives…it was almost unimaginable.” [48:28]
- Instead, the real challenge is teaching young people to be adaptable—preparing for uncertainty.
12. The New Educational Imperative: Teaching AI Usage
[51:11]
- Susskind advocates dedicating roughly a third of school and university curricula to learning how to use AI thoughtfully and effectively:
"I think we need to be spending something like a third of our time in school and university learning how to use AI effectively." [51:11]
- Not just prompting, but understanding history, technical limits, and ethical/moral implications.
- He illustrates the point with an analogy to mathematics education at Oxford.
13. Rethinking Education, Not Retreating from AI
[55:23]
- Susskind opposes reverting to pen-and-paper methods or Luddite approaches:
"The challenge for all educators…is not to go backwards, but to go forwards…How do I enable these kids to use these technologies to understand ideas, to solve problems…that would have been unimaginable before?" [55:23]
- The new standard for educators: make teaching deeper and more challenging by embracing technological advancement.
14. Personalized Learning and Opportunity
[57:36]
- With AI, individualized and affordable tutoring becomes possible, potentially benefiting all students, including those with learning difficulties:
"The promise of these technologies is that it can replicate the kind of interaction that you might have with a human tutor, but do so at a far lower cost." [57:36]
15. Parenting in an AI World: Avoiding the Wrong Lessons
[59:25]
- Susskind warns against conflating the legitimate concerns over social media and smartphones with AI more broadly:
"AI is not the same thing as social media. If we bundle technology into a kind of monolithic, indivisible lump of bad stuff, parents are going to let down their kids in preparing them to use these technologies." [59:25]
16. Societal Support for Vulnerable Groups
[61:11]
- The uncertainty generated by rapid change especially affects children and the elderly.
- Susskind is hopeful about AI’s potential to address the shortcomings of traditional education, offering tailored opportunities for new generations.
“I think we can, if we get it right, use [AI] to do really extraordinary things.” [61:11]
Notable Quotes & Memorable Moments
- On Policy and Technological Progress:
"We also have a huge amount of discretion over the kind of direction of technological progress as well." — Susskind [13:48]
- On Measuring the Impact of Technology:
"You see technology everywhere apart from in the productivity statistics." [09:27]
- On the Limits of Prediction:
"If we think about the future… the idea that we can predict jobs in a couple of decades time just seems…incredibly hubristic." [49:25]
- On Education for the AI Age:
"The most important thing is that we teach people how to use AI effectively." [51:11]
Timestamps for Key Segments
- [01:03] — Daniel Susskind introduces his research focus and books
- [03:42] — Differences between economists and AI researchers
- [10:55] — Why productivity growth is so important
- [13:48] — Policymaking metaphors: Sailboats and steering tech progress
- [17:32] — Predicting substitution vs. complementarity in tech
- [28:35] — Human preference motives: Aesthetic, achievement, empathy
- [39:45] — Moral boundaries in automation
- [47:42] — The unpredictability of future jobs and skills
- [51:11] — Susskind’s proposal for an “AI curriculum” in education
- [55:23] — Why teachers shouldn’t go Luddite but embrace AI
- [57:36] — AI’s role in individualized, accessible education
- [59:25] — The danger of conflating AI with social media in parental attitudes
Conclusion
Daniel Susskind makes the case for facing the future head-on: embracing the uncertainty brought by AI, resisting backward-looking educational methods, and proactively teaching the next generation how to thrive alongside intelligent machines. His vision is hopeful, pragmatic, and rooted in both economic theory and the moral values that shape our use of technology.
