Harvard Data Science Review Podcast
Episode: "AI Won’t Take Your Job (But It Might Change It)"
Date: September 26, 2025
Host(s): Liberty Vittert (Feature Editor, HDSR), Xiao-Li Meng (Editor-in-Chief, HDSR)
Guests:
- Ben Waber (Visiting Scientist and Lecturer, MIT)
- Raffaella Sadun (Professor of Business Administration, Harvard Business School)
Episode Overview
In this engaging episode, Liberty Vittert and Xiao-Li Meng explore how artificial intelligence is reshaping the workforce, examining its immediate and potential long-term impacts. With expert guests Ben Waber and Raffaella Sadun, the discussion moves beyond typical fears of job loss to consider how AI is augmenting roles, creating new opportunities, and presenting real managerial and societal challenges. Notably, all interview questions in the episode were generated by ChatGPT to highlight the ongoing integration of AI into knowledge work.
Key Discussion Points & Insights
1. AI’s Immediate Impact: Hype vs. Reality
[02:27-06:36]
- Ben Waber highlights how generative AI, like ChatGPT, has granted executives “permission to credibly cut large numbers of the workforce or give their existing workforce more work with not more compensation, and juice their profit margins by doing that,” but warns this can have long-term performance costs.
- “A big reason for that... companies are hurtling without fully understanding the technology.” [03:24, Waber]
- Raffaella Sadun points out that, although AI is widely discussed, its true organizational potential is still unknown.
- “Nobody has the playbook yet. If people tell you that they have the playbook, I don’t trust them.” [05:40, Sadun]
- Current uses are mostly “AI as copilot”—helping with information retrieval, summarization, visualization—rather than full job automation.
2. Will AI Displace Jobs or Boost Productivity? Variance, Not Averages
[06:36-13:25]
- Sadun: It’s too early for aggregate predictions; the “big story is heterogeneity,” with AI adoption and impact varying widely across sectors and firms.
- “If AI ends up creating new jobs...that will be a conscious choice that might come out of a discovery process, but it’s not written anywhere.” [08:54, Sadun]
- Waber: Highlights poor analogies in evaluating AI; the right comparison isn’t “AI vs. a naked human” but “AI tools vs. existing tools.”
- Uses the example of a generative AI calculator: even a small per-calculation error rate becomes unacceptable at scale (see the brief sketch after this list).
- “We don't say a knife has superhuman cutting ability... That's incorrect, it's just a tool.” [10:19, Waber]
- Both agree that valuable impacts will come from better interface design, not just task automation.
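To make the calculator point concrete, here is a minimal back-of-the-envelope sketch in Python (the error rates and call volumes are hypothetical, not figures from the episode): a per-calculation error rate that sounds negligible still implies a large absolute number of wrong answers at enterprise scale, whereas a conventional calculator is deterministic and errs essentially never.

```python
# Hypothetical illustration (not from the episode): expected wrong answers
# when a probabilistic tool replaces a deterministic one at scale.

def expected_errors(n_calls: int, error_rate: float) -> float:
    """Expected number of wrong answers if each call fails independently."""
    return n_calls * error_rate

for error_rate in (0.001, 0.01):        # 0.1% and 1% per-calculation error
    for n_calls in (1_000, 1_000_000):  # small-team vs. enterprise volume
        print(f"error rate {error_rate:.1%}, {n_calls:,} calculations -> "
              f"~{expected_errors(n_calls, error_rate):,.0f} expected errors")

# A conventional calculator's expected error count is 0, which is why
# "slightly worse than perfect" is the wrong baseline for comparison at scale.
```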
3. AI as an Expertise Amplifier & Value Creator
[13:25-15:33]
- Sadun: Excited by AI’s ability to “free tacit expertise”—helping less-experienced workers access knowledge previously trapped in experts’ heads.
- Gives manufacturing as an example: blue-collar workers can solve problems autonomously using AI interfaces.
- “If we can think in this frame and validate that they actually add value, this could potentially be very exciting.” [15:21, Sadun]
4. Fears of Mass Youth Unemployment: A Realistic View
[15:33-21:20]
- Waber: Sees risks not from AI replacing jobs, but from “deskilling” as young professionals lean on AI before building foundational skills.
- Describes MIT students who relied on AI code generation but didn’t know basic debugging: “In terms of what I actually think that in the near to medium term we’re doing is deskilling...” [17:27, Waber]
- Sadun: Suggests AI is most productive in experienced hands—junior workers may struggle without context and foundational skills, posing risks for career development pipelines.
- “How do we bring young people to the point at which they can make the best use of this technology?” [20:36, Sadun]
5. What Skills Will Be Future-Proof?
[21:20-23:52]
- Waber: Endorses “a basic grounding in statistics... [and] the ability to collaborate and build relationships effectively.”
- Sadun: Emphasizes developing contextual knowledge and decision-making experience inside organizations—shifting higher education toward apprenticeship and continual learning.
- “It’s going to be even more important to think about giving students time to get experience inside the real organizations and making real decisions.” [23:25, Sadun]
6. How AI Changes the Definition of Good Management
[23:52-25:52]
- Sadun: During the experimental, “J-curve” phase of AI adoption, managers should focus less on quantitative targets and more on providing direction and supporting experimentation. As adoption matures, measurement and metrics regain importance.
7. AI’s Role in Organizational Decision-Making & Structures
[25:52-30:08]
- Waber: Sees promise in AI for data labeling (e.g., training models on internal communication patterns), provided companies have “processes to integrate these capabilities intentionally.”
- Sadun: Sees a need for “skunk teams” and experimentation—optimal organizational structures for AI “don’t exist yet,” as new expertise and creative application are needed.
8. Risks of Productivity & Adoption Gaps Between Countries
[30:08-33:17]
- Sadun: Warns that countries (e.g., Italy, parts of Europe) already lagging in digital skills may fall further behind if organizations aren’t agile and experimental enough with AI.
- Cites the historical productivity paradox of computing: the technology’s payoff depends on “organizational and managerial practices, especially flexibly allocating and developing talent.”
9. Human-AI Collaboration: Getting the Model Right
[33:17-39:47]
- Waber: Objects to language of “collaboration” with AI as if it’s an agent: “It’s a tool. The question is, how does one use it effectively?”
- Warns of risks when AI is deployed for convenience without safeguards or human oversight—uses the example of LLMs writing HR handbooks, possibly omitting critical info like harassment policies.
- “The mental model I would suggest...is it’s like a cookbook written by random people on the Internet.” [36:38, Waber]
- Sadun: Productive “human-computer interaction” requires thoughtful technology design, clear hypotheses about value, and a continuous experimental mindset in organizations.
- “Companies...need to get much, much better at becoming experimenters. This means formulating hypotheses, monitoring, measuring, learning, and doing this continuously.” [39:20, Sadun]
10. Magic Wand: If You Could Change One Thing About AI...
[39:47-42:19]
- Waber: Wishes AI systems could not present themselves as people (saying “I think” or “I’m thinking”) and did not default to all-purpose text-box interfaces; design should reflect the specific task.
- Sadun: Concerned by AI’s ability to make people feel heard: it can ease loneliness, but it also poses risks when users need actual human support:
- “There is a limit where you really need a human... If only we could use this technology to help alleviate this loneliness, but also...get the human help when needed...” [41:20, Sadun]
Notable Quotes and Memorable Moments
Ben Waber:
- “We don’t say a knife has superhuman cutting ability. It’s just a tool.” [10:19]
- “I don’t think the technology itself is going to replace jobs, but ... people ... will be very unqualified. And we’re going to have some really big problems.” [18:24]
- “The mental model that I would suggest folks have ... is it’s like a cookbook written by random people on the Internet.” [36:38]
Raffaella Sadun:
- “If people tell you that they have the playbook, I don’t trust them. I think companies need to figure it out.” [05:40]
- “It’s up to the creativity and imagination and maybe even the courage of people to go out and find out new ways of doing business.” [09:03]
- “My suspicion is that that type of contextual knowledge will become very, very relevant to distinguish yourself from others.” [23:41]
Meta-commentary:
- “Organizations need to get much, much better at becoming experimenters... monitoring, measuring, learning, and doing this continuously.” [39:20, Sadun]
- “We're choosing not to, but we have full control over what we're building.” [41:00, Waber]
Useful Timestamps
- 01:49: Co-hosts reveal all questions were AI-generated.
- 02:27: Ben Waber addresses generative AI’s current impact.
- 05:40: Sadun: "Nobody has the playbook yet."
- 09:03: Sadun: Why AI outcomes hinge on human choices.
- 10:19: Waber challenges "superhuman AI" rhetoric.
- 13:25: Sadun introduces "freeing tacit expertise."
- 17:27: Waber warns about deskilling new entrants.
- 20:36: Sadun: How do we bridge the skills gap for young workers?
- 23:41: Sadun: Importance of contextual knowledge.
- 36:38: Waber’s “cookbook” analogy.
- 39:20: Sadun prescribes continuous experimentation.
- 41:00-41:20: Both guests share their one “magic wand” wish about AI.
Summary & Takeaway
This episode expertly dismantles simplistic narratives about AI either taking or saving jobs. The reality is nuanced: AI is a tool whose impact—positive or negative—depends on organizational choices, how it is implemented, the skills of its users, and the willingness to experiment and learn. The greatest risks may lie not in automation, but in the deskilling of young workers and increasing inequalities between organizations and nations. Leaders, educators, and policymakers must prioritize contextual learning, flexibility, and measurement. Perhaps most importantly, we control how AI evolves—what it augments, who it helps, and what unforeseen problems it might bring.
