Decoder with Nilay Patel
Episode: How AI is Fueling an Existential Crisis in Education
Air Date: November 6, 2025
Episode Overview
This episode of Decoder delves into the rapidly evolving role of generative AI in education. Host Nilay Patel brings together voices from teachers, researchers, and professors to unpack how AI is disrupting pedagogy, assessment, and even the very mission of education. The conversation moves beyond concerns over student cheating and explores deeper dilemmas: Are AI tools undermining the skills and values schools should be imparting? Who truly benefits from these technologies? And amid fractured policies and stressed-out teachers, where does education go from here?
Key Discussion Points & Insights
1. The Existential Question: Why Are We Here? (00:40–02:49)
- Teachers feel AI is driving an existential crisis—not just about academic honesty, but the purpose of education itself.
- Evie May (Instructional Designer) asks: with AI creating, submitting, and grading assignments, "what are we even doing here in higher ed now?" (01:27)
2. The Myth of the "Digital Native" (02:49–06:35)
- Dr. Adam Dube: Debunks the term "digital natives"—merely growing up with technology doesn't mean kids can use it critically, especially for learning (03:09).
"We assume kids have skills that they [don’t actually have]."
- Reliance on devices like iPads led to oversimplified learning experiences; most student interactions with early ed-tech amounted to random interaction rather than meaningful learning (07:52).
3. AI as a New Computing Paradigm (04:21–06:35)
- Natural language interfaces fundamentally change how kids interact with technology.
- Key concern: If students use AI to answer everything, do they lose the ability to critically evaluate information?
- Ongoing research (e.g., children's "theory of artificial minds") seeks to understand how students conceptualize AI and its reasoning.
4. Lack of Understanding and Testing (06:35–07:38)
- Teachers worried that students are "guinea pigs" for unproven, unregulated AI products.
"We're treating children like guinea pigs on an untested and unproven and unregulated host of products." – Ann Lutz Fernandez (06:47)
- Fear that we’re repeating mistakes of past tech rollouts, prioritizing screens and devices before understanding their impact on real learning.
5. How Students Are Actually Using GenAI (09:30–12:29)
- High adoption rates, but most use is for supplemental tasks (idea generation, explanations). Only about 10% admit to using AI to cheat by generating full assignments (10:05; 11:39).
"Only 10% of students are actually reporting that they're using it to generate their whole assignment, which is what people are really worried about." (10:05)
6. Fragmented School Responses and Policies (14:08–16:51)
- School approaches described as a "wild mishmash"—some ban AI/phones outright, others aggressively integrate AI into education (14:08).
- Policy varies based on local leadership and community pressures (anti-screen sentiment, budget constraints, perceived benefits of automation).
- Contradictions abound: admin may want AI for efficiency while disallowing students from using the same tools.
7. The Teacher's Work Experience with AI (16:51–21:47)
- Mixed reactions from educators:
- Paul (Middle School Science): Finds AI can help implement better teaching strategies by reducing grunt work (17:02).
- Evie May: Finds GenAI “more trouble than it’s worth,” faster to create materials manually, raises plagiarism and ethical concerns (17:55).
- Anne Rubenstein (Historian): AI translation tools hallucinated/fabricated historical data, requiring costly human correction (18:34).
- For many, the promise of time-saving is illusory; mistakes and hallucinations can add more work.
"If they actually tracked how long it takes them to generate a lesson versus how long it takes to fix the lessons that generative AI produces, it might not actually be faster." (21:01)
8. Teacher Autonomy, Motivation, and Demotivation (21:47–23:04)
- Forcing educators to use AI erodes their sense of autonomy and motivation:
"Whenever we remove workers' autonomy... people get demotivated." (22:09)
9. AI as a Motivator for Students? (24:20–27:02)
- Conversational AIs motivate students through constant positive feedback—but at the risk of providing incorrect or fabricated answers.
- Contrasts the question of engagement with the question of accuracy and trust:
"What if... they said, 'I can't actually answer that question for you because I'm not sure if it's right.' ... Maybe it's not as engaging and motivating." (25:00)
10. The Calculator Analogy and Cognitive Offloading (26:31–30:03)
- Like calculators, AI risks eroding core skills if over-relied upon.
- Research (e.g., MIT study) shows students who use AI to write essays have poor recall of content because they bypass essential reflection and memory-building (27:02).
"...When you use systems that just generate answers on your behalf, you don't engage in those practices. ...They don't remember the work that they actually wrote." (27:02)
11. Critical Digital Literacy and Purpose-Driven Use (30:03–32:43)
- Some educators tackle AI head-on by teaching about its limitations, especially the tendency to "bullshit":
"Historians aren't allowed to bullshit. ... You cannot use [ChatGPT] in conducting historical research or writing about history, because it is the exact opposite of what historians are supposed to do." – Anne Rubenstein (30:13)
- Fostering understanding of when and why to use (or not use) AI is becoming an essential literacy.
12. Grades, Process, and Systemic Pressures (33:16–38:43)
- Educational systems reward finished product over process, incentivizing students to outsource to AI tools if it expedites completion.
"The grade matters a lot more to them than to me... So it makes sense that if there’s a tool that promises a product that will help them pass... of course they’ll think about using it." – Brian S. (33:16)
- AI amplifies an existing problem: true learning is about the process, but stressed, overworked students focus on tasks that safeguard graduation, aid, or jobs.
"...If the student didn’t do it, if there was no process, then what are we doing here? No real learning has happened. ...What we need is not more tools that produce product. What we need is fewer stressors." – Todd Harper (35:33; 38:43)
- Suggests the solution isn’t just better detection, but changing incentives and pressures within education.
Notable Quotes & Timestamps
- Evie May: "What are we even doing here in higher ed now?" (01:27)
- Ann Lutz Fernandez: "We're treating children like guinea pigs on an untested and unproven and unregulated host of products." (06:47)
- Dr. Adam Dube: "It's not that because you're young and you grew up around it, it's just about how much you've used it, how much exposure you've had to it." (03:09)
- Education Researcher: "Kids can just talk to AI speakers... But is that child actually benefiting from that experience?... Because it looks so easy, we get convinced that this is somehow useful." (07:52)
- Education Researcher: "Only 10% of students are actually reporting that they're using it to generate their whole assignment, which is what people are really worried about." (10:05)
- Paul (Teacher): "By partnering with an AI tool like ChatGPT, a lot of this becomes way more doable... support when it comes to building new materials with those strategies highlighted." (17:02)
- Anne Rubenstein: "It hallucinated, it made crap up... And that ended up costing just about twice as much as just hiring a human translator would have done." (18:34)
- Education Researcher: "Is it actually saving you time?... If they actually tracked how long it takes... it might not actually be faster." (21:01)
- Education Researcher: "Whenever we remove workers' autonomy... people get demotivated." (22:09)
- Nilay Patel: "So from the teacher's perspective, generative AI in schools is a workplace issue, a labor issue." (21:47)
- Education Researcher: "If everyone is just in the same level of expertise... Who is actually able to evaluate whether or not that work is any good?" (27:02)
- Anne Rubenstein (on "bullshit"): "Historians aren't allowed to bullshit. ... You cannot use [ChatGPT] in conducting historical research... it is the exact opposite of what historians are supposed to do." (30:13)
- Brian S.: "The grade matters a lot more to them than to me... So it makes sense that if there's a tool that promises a product that will help them pass... of course they'll think about using it." (33:16)
- Todd Harper: "...If the student didn't do it, if there was no process, then what are we doing here? No real learning has happened. ...What we need is not more tools that produce product. What we need is fewer stressors." (35:33; 38:43)
Memorable Moments
- Teachers sharing real classroom stories (translation hallucination disaster, classroom "bullshit" lesson).
- Candid acknowledgment of widespread student stress and the rationality of using AI to “check off the list.”
- Honest debate over whether AI is truly saving time for educators or just creating more busywork.
- Call to fundamentally revisit how schools measure learning and success—shifting from product to process.
Timestamps of Key Segments
- 01:27 — Existential worries about AI and education
- 03:09 — Digital natives debunked
- 07:52 — Early tech in classrooms: the iPad era’s failures
- 10:05 — How students actually use GenAI, with stats
- 14:08 — The patchwork of school AI policies
- 17:02 — Teacher optimism (Paul), counterpointed (Evie May)
- 18:34 — AI’s translation hallucinations in history work
- 21:01 — The myth of “AI saves time”
- 22:09 — AI and teacher demotivation
- 25:00 — Student engagement with chatbots: at what cost?
- 27:02 — AI, calculators, and memory loss
- 30:13 — Historical methods vs. AI "bullshit"
- 33:16 — Why students reach for AI: pressure and priorities
- 35:33/38:43 — The call to value process, not just product; relieving student pressures
Tone
- Deeply conversational and candid, often humorous, exploring real dilemmas from multiple angles while remaining direct about the shortcomings and stakes of AI in education.
Summary
This episode paints a nuanced, often sobering portrait of AI's disruptive incursion into education. Teachers and researchers warn that labeling kids as "digital natives" is misleading, automation isn't always a time-saver, and current educational incentives all but encourage taking shortcuts. Generative AI's promise of instant answers and content runs up against the messy reality of classroom teaching, lasting learning, and what students actually value. The host and guests leave listeners with the sense that unless education systems focus on reducing stressors, valuing process, and building real critical literacy—not just chasing shiny new tools—AI's impact risks hollowing out both how we teach and what we learn.
