Plain English with Derek Thompson
Episode: What Happens When AI Learns to Do Our Jobs
Date: October 24, 2025
Guest: Ethan Mollick, Professor of Management at Wharton, AI researcher and author
Episode Overview
This episode of Plain English explores the evolving capabilities of AI — especially agentic AI — and what happens when artificial intelligence begins to perform not just tasks, but entire jobs previously done by humans. Derek Thompson sits down with Ethan Mollick to dissect the “jagged frontier” of AI’s abilities, the organizational and social ripple effects, and what the near-term future might look like as AI tools become more competent and autonomous in white-collar domains. They discuss practical advice for workers, skill atrophy, disruptions in scientific publishing, and the philosophical implications of using "alien intelligence" for human ends.
Key Discussion Points & Insights
The “Jagged Frontier” of AI Capability
- Defining the Jagged Frontier
- Ethan Mollick: “What makes AI kind of weird is it does a lot of things, like many, many things. And some of the things it does well, some of the things it does badly, and it's very hard to predict what those things are in advance until you start using it. So we have this mental image in our mind of this sort of jagged wall.” [08:17]
- Derek draws out how AI can appear both “stupid and brilliant”—sometimes unable to manage simple tasks like drawing a clock face showing 9:30 p.m., yet able to outperform humans in math Olympiads or white-collar tasks. [08:41]
- Recent models, like GPT-5, have pushed the frontier outward, especially improving on prior weaknesses (e.g., math/language logic), but unpredictable “blind spots” remain. [09:41]
AI “Agents” and Their Growing Autonomy
- What is an AI agent?
- Ethan Mollick: “An AI system that when given a goal, will follow that goal on its own and use tools as necessary to accomplish it.” [15:22]
- Agents can perform many steps independently (sometimes over 1,000) and self-correct as they operate, allowing them to handle more complex, multi-stage human tasks. [15:22]
- Strengths and Weaknesses of Agents
- Most competent in areas where outputs are easily verifiable (e.g., coding, spreadsheets). Fuzzier creative or subjective white-collar tasks still present challenges. [17:29]
- AI agents are productive, but not yet reliably proactive or able to critique the questions they’ve been given—often just compliant rather than constructively critical. [18:39]
AI as a Co-Intelligent Tool vs. Autonomous Worker
- Co-Intelligence Model
- The most effective use often involves “co-intelligence”—a back-and-forth where humans leverage AI’s ability to generate options, critique drafts, or explore ideas—rather than relinquishing full responsibility to autonomous agents. [23:24]
- Mollick: “I do a lot more variation and selection than I would with human work and less dialogue with the AI… my taste in curation and selection matter.” [22:02]
- Job Vulnerability
- Jobs composed of tightly bundled, easily assignable tasks (e.g., freelance writing, grading, basic research) are most susceptible to automation. More complex jobs with a variety of human-facing duties, or those that require tacit knowledge, are less vulnerable (for now). [24:13]
Blurring Lines: Where Does a Model Become an Agent?
- The distinction between chatbots and true multi-step “agentic” AI is rapidly vanishing as newer models plan, research, and integrate multiple tools automatically. [26:48]
AI’s Measured Impact (So Far) on Productivity
- Despite powerful new tools, the productivity gains at a company-wide level are not as transformative — yet.
- Many workers hide their use of AI out of fear of job loss or managerial repercussions: “If I have a 10 times performance improvement... the company might fire me or my friends because now we have cost savings.” [28:48]
- Organizational structures and processes can’t immediately capitalize on massive jumps in individual productivity (e.g., 10x more PowerPoints may not increase actual company value). [28:48]
- Leadership, culture, and process bump up against technological potential.
Practical Advice for Workers
- Mollick’s Core Suggestion: “Use AI for everything you possibly can for about 10 hours in actual work tasks. And you will start to see the shape of the jagged frontier...” [33:20]
- Start with simple time-savers, but quickly expand to more complex uses — “mockups,” brainstorming, and as a thought partner.
- The best way to learn what AI can (and can’t) do for your own work is direct experimentation.
Notable Research & Surprising Use Cases
- AI can predict consumer preferences by impersonating personas and reviewing products (e.g., LLMs as simulated customers) [34:56]
- AI can reproduce statistical results from academic papers in minutes—tasks that would normally take hours for human reviewers [34:56]
- A controversial study deployed AI “agents” on Reddit’s Change My View forum, where the AI agents outperformed humans in persuasive debate [35:45]
- Most robust reduction in conspiracy beliefs comes from a three-round back-and-forth with GPT-4 [36:30]
- Mollick: “It's logic and personalization... It's explained to you in a way you understand and patiently listening to you and responding, which is, I guess, the most hopeful of the persuasion answers.” [36:51]
Memorable Quotes & Moments
- “If AI is so damn smart, why is it so dumb?” – Derek Thompson, voicing the public’s confusion about AI’s uneven capabilities. [08:41]
- “An agent is really just: I want to assign an AI a task, it does it on its own without me having to intervene.” – Ethan Mollick [15:22]
- “Ten times more PowerPoints is not going to make a company more successful… unless your job is making PowerPoints for some reason.” – Ethan Mollick, on productivity gains vs. meaningful work. [28:48]
- “There is no easy instruction manual. The right thing to do is to use AI for everything you possibly can for about 10 hours… you’ll start to see the shape of the jagged frontier.” – Ethan Mollick [33:20]
- “If the only thing you want to do is produce a thing, then you can be bitter-lessened… the AI might just be able to make the thing better.” – Ethan Mollick, referencing the “bitter lesson” from AI research, where human-crafted systems get outperformed by brute-force machine-learned approaches. [42:00]
Challenges: Worker Skills, Atrophy & Process Breakdown
Skill Atrophy
- Repeated use of AI can erode human skills, as evidenced by studies with doctors and students. “If you just turn to AI for everything, the intern learns nothing.” [38:26]
- Educational systems will need reimagining: “blue book” (in-person, handwritten) testing may see a resurgence, along with more explicit training on which skills are worth preserving in the age of AI.
Process Disruption
- Many systems (e.g., job applications, scientific publishing, education) were already strained before AI; automation amplifies the cracks.
- AI now writes job applications while HR uses AI to filter them—leading to an “arms race” of AI-to-AI communication. [43:27]
- Academic journals are overwhelmed: “It can't go on like this. The solution can't be AI grades AI content. And it's just a mess of a world where things just get filtered from AI to people.” [44:55]
Philosophical & Societal Reflections
AI as “Alien Intelligence”
- Mollick distinguishes between human and AI cognition: “a fundamentally alien intelligence that's being used for human ends. And that's a bit of a strange and spooky thing.” [40:23]
The Bitter Lesson
- Human-crafted solutions often lose to brute-force, compute-driven learning.
- “That was the bitter lesson. The bitter lesson is: your beautiful, handcrafted attempt to instill all of your amazing human knowledge into a piece of software gets lost if you just throw enough computing at the problem and the machine learns how to do it itself.” [40:58]
The Coming “Muddle” Rather Than a Sudden Revolution
- The shift to AGI or widespread agentic AI is likely to be disruptive, messy, and full of civic, economic, and organizational uncertainties—not a clean or sudden “takeover.”
- “There's going to be almost like a civic or interpersonal or a very, very personal muddle to decide...” [50:09]
Historical Perspective
- New general-purpose technologies (like steam or electricity) have always caused widespread upheaval before society adapts:
- “It's a general purpose technology. It is going to influence every part of our culture and society all at once... we're going to have a lot of weird stuff, some good, some bad, all happen at once in the very near future.” [51:35]
Major Timestamps of Interest
- 02:12–08:09: Derek’s opening analogy: railroads, management, and what AI could do to work's architecture
- 08:10–13:14: Explaining the jagged frontier of AI competence; GPT-5’s threshold
- 15:22–18:39: What are AI "agents" and what are their current limits?
- 19:53–23:24: Why agents can’t yet ask "good questions" back; human curation and prompt design
- 24:13–27:31: What jobs are most at risk from agentic AI? Agents vs. chatbots
- 28:48–30:53: Why the productivity boom isn't materializing the way Silicon Valley expects
- 33:20–34:56: Practical advice for white-collar workers: how to actually start benefiting from AI
- 35:45–37:37: Unusual use cases and the power of AI as a persuader
- 38:26–41:25: Skill atrophy concerns in the workforce and education
- 40:58–43:27: "The bitter lesson" in AI research and implications for human work
- 43:27–47:39: Process breakdown: job applications, letters of recommendation, scientific publishing
- 50:09–53:57: The coming "muddle" — societal and organizational adjustment, lessons from past industrial revolutions
Final Takeaways
- The arrival of powerful, agentic AI will likely reshape work, but not in simple or uniform ways.
- White-collar workers should experiment deeply with AI to discover its current boundaries in their field.
- Societal and organizational systems will struggle and adapt — expect turbulence and a period of messy transition, rather than instantaneous transformation.
- Human attributes—curation, judgment, asking the right questions—remain central for the moment, even as AI gets better at doing more of the “bundled tasks” of entire jobs.
Recommended Segment:
“I think a lot of things are going to break before they get reconstructed. And I wouldn't be surprised if we see lots of processes that we have to rebuild from the ground up to think about what it means to be in a world of AI.” [44:55] – Ethan Mollick
For listeners and readers alike, this episode offers a clear-eyed, practical, and philosophical look at the changes AI is bringing — and the unpredictable shape of the workplace to come.
