Podcast Summary: Click Here – Introducing The Homework Machine
Date: December 9, 2025
Host: Recorded Future News
Guest Podcast: The Homework Machine, MIT Teaching Systems Lab
Main Voices: Dena Temple-Raston, Jesse Dukes, Justin Reich, Devon O’Neill, Ray Salazar, Miriam Reichenberg, Jessica Petit Frere, Joe O’Hara, Student Voices
Purpose: Exploring the impact of generative AI (especially ChatGPT) on classroom learning, academic integrity, and teacher-student trust.
Overview:
This episode introduces The Homework Machine, a podcast investigating what happens when artificial intelligence, like ChatGPT, “wanders” into K-12 classrooms. The episode explores teachers’ and students’ real experiences with AI—from cheating temptations to attempts at “AI-proof” assignments—and highlights the broader educational challenges when major technological change leaps ahead of school policies.
“What does all this actually mean for learning and for the fragile trust that schools run on?” – Dena Temple-Raston (00:37)
Key Discussion Points and Insights
The Arrival of AI in Schools: A Culture Shock
- Devon O’Neill’s Teacher Perspective:
- Returned to teaching after two years (missed the pandemic/AI boom).
- Noticed immediate changes: better grammar in essays, but off-topic writing, and students using Bing/ChatGPT for assignments.
- “I would have these really well written paragraphs... but not at all on topic. Grammar was off. Even the most brilliant 14 year old still talks like a 14 year old.” – Devon O’Neill (02:35)
- Spotted “rookie mistakes,” such as students leaving traces like “Here are your results,” signifying pasted AI responses (03:25).
- Rapid, Uninvited Adoption:
- Unlike previous tech in education, generative AI wasn’t formally introduced; students found and used it themselves, with schools scrambling to respond (05:54).
Understanding Generative AI & Large Language Models (LLMs)
- Explained Through AI’s Own Words:
- LLMs described poetically as “canyons of language,” echoing and reshaping what’s put in (08:02–09:30).
- LLMs work by predicting the next word based on massive text datasets, not by understanding or remembering (09:47–12:25).
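The next-word idea above can be illustrated with a toy bigram counter. This is only a sketch of the prediction framing the episode describes; real LLMs use neural networks trained on vastly more text, and the tiny corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "massive text datasets" the episode mentions
corpus = "the cat sat on the mat and the cat ran to the door".split()

# Count which word follows each word (a bigram model -- the simplest
# version of "predict the next word from what came before")
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" most often here
```

The model never "understands" the sentence; it only reproduces statistical patterns from its training text, which is exactly the point made in the episode.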
- Key Concepts:
- “Temperature” Setting: Controls how creative vs. formulaic AI’s output is (13:57–15:53).
- Bias & Ethical Blocks: LLMs inherit internet biases (gender, race, etc.). Post-hoc safeguards such as RLHF and ethical filters try to prevent offensive or harmful content, but can be arbitrary or inconsistent (16:22–18:56).
- Hallucinations: AI sometimes confidently generates plausible but false information (the fictitious “Beaver Rush” at MIT) (20:19).
“You can watch two thoughts go through their head: The first one is, there is no Beaver Rush. And the second one is, I wasn’t invited to Beaver Rush.” – Justin Reich (21:20)
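The “temperature” setting mentioned above can be sketched numerically: a model assigns a score to every candidate next word, and temperature rescales those scores before sampling, so low values pick the safest word almost every time while high values flatten the odds. A minimal illustration with invented toy scores (not a real model):

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def sample_next_word(scores, temperature):
    """Sample a next word from raw model scores (logits), rescaled by
    temperature: low T -> formulaic output, high T -> creative output."""
    scaled = {w: s / temperature for w, s in scores.items()}
    # Softmax: convert the rescaled scores into probabilities
    m = max(scaled.values())
    exps = {w: math.exp(s - m) for w, s in scaled.items()}
    total = sum(exps.values())
    words = list(exps)
    weights = [exps[w] / total for w in words]
    return random.choices(words, weights=weights)[0]

# Toy scores for continuing "The cat sat on the ..."
scores = {"mat": 5.0, "roof": 3.0, "keyboard": 1.0}

# Near-zero temperature: almost always the top-scoring word
low = [sample_next_word(scores, 0.1) for _ in range(100)]
# High temperature: flatter distribution, more surprising picks
high = [sample_next_word(scores, 5.0) for _ in range(100)]
```

At temperature 0.1 virtually every draw is “mat”; at 5.0 the less likely words show up regularly, which is the “creative vs. formulaic” dial the episode describes.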
- Jagged Frontier:
- AI tools are inconsistent: sometimes powerful, sometimes wrong. This unpredictability is new for teachers used to “reliable” tools like calculators or encyclopedias (23:12–26:07).
Teachers’ Approaches to AI and Cheating
- Cheating Redefined:
- Generative AI makes shortcut-taking faster, easier, and harder to detect, but teachers are reluctant to call it outright “cheating” (27:20–29:07).
- Teachers face a dilemma: how to enforce fairness without perpetuating past discipline disparities, especially affecting marginalized students (29:21).
Three Broad Approaches Emerged:
- Monitor and Communicate:
- Building a culture of honesty through direct conversations, letting students redo assignments if caught using AI.
- “I told them all the time… you can use ChatGPT... if you have no idea where to start. However, you will not turn in what you just saw.” – Jessica Petit Frere (32:44)
- Emphasis on guidance over punishment; becoming “pettier than the students” to stay ahead of cheating (31:54).
- Detection and Enforcement:
- Using tech “traps” (e.g., inserting hidden text) and AI detectors to catch AI-assisted work.
- Parent and administrator involvement, with a mix of strictness (“zeroes”) and empathy (allowing redo for a first offense).
- “We’ve put little…traps in our rubrics to try to catch kids… The easiest one is one-point white font on the rubrics.” – Joe O’Hara (35:08)
- Awareness of school-community distrust and the need to manage disciplinary fairness (34:16–37:17).
- AI-Proofing / Rethinking Assignments:
- Designing assignments that require personal input, multi-stage work, or creativity, making it harder for AI to “do the work.”
- “If he’s done his job well, the students don’t want to cheat. But also, if they do use ChatGPT, it’ll only help with small parts…” – Summarizing Ray Salazar (38:49–39:56)
- “Work worth doing” is the best deterrent to cheating (39:56).
Students’ Voices: Temptation, Justification, and Complication
- Why Use AI?
- Some students use AI for “busy work” or when they feel assignments lack learning value; others use it under stress, fear of bad grades, or simply as a shortcut (40:44–41:18).
- Amelia’s Story: Used AI when she felt stuck and unfairly judged by a new teacher. Caught via Google Doc revision history; felt frustrated, thinking “not even AI can save me” (41:51–42:22).
- “She will now think that I use AI for the rest of the semester.” – Amelia (42:38)
- Gray Areas / Partial Use:
- Miriam uses AI to summarize boring assignments so she can save her energy for meaningful work, but feels conflicted (44:05–45:28).
- Teacher Reflection: It's risky for students to judge the value of assignments themselves; often “busy work” is more useful than it seems (46:09–46:52).
- AI Tools with Blurred Lines:
- Caitlin used Grammarly for grammar help but was flagged by AI detectors; she feels learning is about using your own brain (47:04–48:23).
- “Strategic” Cheating:
- Leandro used ChatGPT, edited the answers, and ran them through AI detectors to avoid being caught; he admitted it was “a lot of work to not do the work” (49:48–50:13).
“You were doing work to not do work.” – Justin Reich (50:14)
- Students often cheat when they hit an emotional or motivational wall (51:18), and sometimes just for the thrill of “boundary pushing” (50:57–51:18).
Teachers’ Struggles, Institutional Vacuums, and Slow Change
- Many teachers lack policy guidance or training on handling AI; most are “building the plane as they fly it.”
- “As of 2024, only about one quarter of teachers said they had gotten any guidance or training about how to manage the challenges raised by AI.” – Jesse Dukes (52:59)
- The system currently relies on individual teacher philosophy and resourcefulness.
Notable Quotes & Moments
- On ChatGPT’s Sudden Entry:
- “Generative AI wasn’t invited into the schools. Not for the most part. It crashed the party.” – Jesse Dukes (05:54)
- On Hallucinations:
- “Maybe we should have Beaver Rush, but it’s not true.” – Justin Reich (22:11)
- On Policy and Support:
- “I don’t think public schools has given us any guidance that I’m aware of.” – Joe O’Hara (30:06)
- “Teachers and their students are trying to navigate a new and shifting landscape... on their own.” – Jesse Dukes (52:59)
- On Student Cheating:
- “That’s a lot of work to not do the work.” – Jessica Petit Frere (50:08)
- On Teachers’ Resilience:
- “Good design is holding ideas in tension together at the same time. And I think this is going to be one of them that teachers have to face for a while.” – Justin Reich (52:23)
Timestamps for Important Segments
- Episode Introduction and Theme: 00:14–01:30
- Devon O’Neill’s Story (Teacher culture shock): 01:48–04:15
- Defining Generative AI and LLMs: 07:08–13:39
- Bias, Ethical Blocks, Hallucinations in AI: 16:22–22:30
- The “Jagged Frontier” of AI’s Reliability: 23:12–26:07
- Teacher Dilemma: Cheating Detection and Policy Gaps: 27:20–30:28
- Practical Approaches (Monitor/Detect/AI Proof): 30:28–39:41
- Student Perspectives: Cheating and Gray Areas: 40:44–48:41
- Case Studies: Full-on and partial cheating: 49:05–51:18
- Institutional Response and Lack of Training: 52:59–53:40
Conclusion
The episode pulls back the curtain on real, everyday dilemmas in classrooms as AI upends traditional notions of learning, cheating, and authority. Teachers and students alike are improvising responses, with little clear policy and much personal negotiation. As schools “build the plane while flying it,” the need for deeper guidance, robust pedagogy, and honest conversations has never been more urgent.
“Technology is fast, but schools are slow.” – Jesse Dukes (52:59)
“Nearly three years after the arrival of the homework machine, educators say their schools are still figuring it out.” – Jesse Dukes (53:36)
For more: Listen to the full series of The Homework Machine to explore additional case studies, including a school "all in" on AI and a teacher who wrote her own responsible AI policy.
