Podcast Summary: "AI Is Breaking Education. Rebecca Winthrop Has the Blueprint to Fix It."
Podcast: Your Undivided Attention
Host: Daniel Barcay (Center for Humane Technology)
Guest: Rebecca Winthrop (Director, Center for Universal Education at Brookings Institution)
Date: March 5, 2026
Main Theme
This episode investigates the profound impact AI is having on education, drawing on Rebecca Winthrop's latest research and report, "A New Direction for Students in an AI World." The discussion critically examines whether current trends in AI adoption are benefiting or destabilizing learning, the emerging risks to student development, and actionable steps for educators, policymakers, and parents to steer technology use toward healthier outcomes.
Key Discussion Points & Insights
1. The Need for a “Pre-Mortem” on AI and Education
- Premise: Learning from past technology rollouts (e.g., social media) where children’s developmental needs were sidelined, the Brookings team conducted a future-looking “pre-mortem” to surface risks and benefits of AI before full adoption in classrooms.
- Quote: “What could we do today, now that AI is being rolled out, to get ahead of the game?” – Rebecca Winthrop [02:28]
2. How AI Is Already Used in and out of Schools
- Ubiquity: Students interface with AI both in academic settings (EdTech, homework, assessments) and outside (social media, entertainment, AI companions).
- Lack of Transparency: Both teachers and students frequently use AI “in secret,” eroding openness.
- “We know kids are using AI to do homework, including writing essays, running it through an AI humanizer, and then turning it in and not getting caught by their teacher.” – Rebecca Winthrop [04:44]
- Blurred Lines: There’s little distinction between learning, entertainment, and socializing uses for today’s young people.
3. The Fraying of Trust in Classrooms
- Mutual Suspicion: Students suspect teachers of secretly using AI to grade their work; teachers suspect students' submissions are inauthentic.
- “50% of teachers say they don’t trust that what students give them is actually their work… 50% of students say they don’t trust their teachers—they think that their teachers are secretly using AI…” [05:40]
- Authority Decay: Increasing cases where students trust chatbots over human teachers.
- "A student came to my office... and proceeded to tell me that I was wrong and ChatGPT was right." [07:28]
- Plagiarism Detection Issues: AI-detection tools incorrectly flag neurodivergent and multilingual students, leading to false accusations and more mistrust. [08:21]
4. Current Risks Overshadowing Benefits
- AI Use Narrowly Helpful: Strategic, teacher-mediated use can help (e.g., for lesson planning, aiding dyslexic students).
- Major Risks: Unscaffolded, direct student use of AI (e.g., unfettered chatbot access) risks:
- Undermining independent thinking
- Loss of resilience to critical feedback
- Exposure to harmful advice or content (e.g., reference to the Adam Raine case) [11:08]
5. Cognitive Stunting and Competition
- Cognitive Stunting: Over-dependence on AI for tasks like essay-writing means students never develop independent critical thinking ("It’s actually cognitive stunting"). [11:58]
- Arms Race Ethos: Students feel forced to use AI or be outperformed, likened to doping in competitive cycling.
- “If I don’t use AI to write my essay for me, then I’m just going to lose out to the kid next to me who will.” – Daniel Barcay [13:04]
6. Students’ Own Concerns about AI
- Self-awareness: Young people are deeply concerned about "getting dumber." Their top AI worry globally is cognitive loss, not job loss.
- “The number one thing they worry about… is stopping being able to think well.” – Rebecca Winthrop [14:32]
- Loss of Initiative: Some report trouble even starting homework without AI help. [14:54]
7. AI Is Not Like Calculators
- Calculator Analogy Rejected: Calculators only offloaded a tiny slice of cognition (arithmetic); AI can do “all your English homework… code… create music… talk to you like a person.”
- "It is not like a calculator at all because of its general purpose nature, and it is so powerful. It is incredibly seductive..." [15:30]
8. Personalized AI and "Attachment"
- Warning on Personalization: Highly personalized learning via AI can undermine social learning by making education lonelier—drifting away from the social, embodied, feedback-rich experience classrooms provide.
- “The more you personalize learning, the more you make it lonelier and lonelier…” – Daniel Barcay [19:18]
- Attachment Risks: AI “companions” can foster relationships where feedback is always positive, eroding children’s ability to handle criticism or setbacks. [18:30]
9. Protecting the “Human” Classroom
- Recommendation: Ensure the classroom remains a human-centered, collaborative space—protect eye-to-eye and group work. AI should not interfere with essential human learning activities.
- “Can we make a commitment to the kids… that it will be as human as possible?” [20:29]
10. Where AI Is Useful
- Teacher ‘Back Office’ Use: Allow AI to lighten educators' administrative load or help differentiate instruction—without directly mediating between students and content.
- "That is good." [23:09]
- Augmenting, Not Replacing, Experiences: VR for specific lessons, AI-aided tutoring (with a human still directly involved), or advanced data analysis for student projects.
- "Kids are learning to use AI as an analytical tool to further their investigation." [28:47]
11. Flawed Approaches: “Just Block It” and “Study Mode”
- Study Modes Inadequate: "Broccoli mode" chatbots (relabeled for education) underestimate students’ motivation to choose easier or more “fun” options.
- “You're designing for kids who are motivated... That is not most students. Most students are in what we call passenger mode...” [29:08]
Recommendations and Blueprint for a Better AI/Education Alignment
The "Three P’s":
1. Prosper – Shift School Practices
- Make assignments less “hackable” by AI; focus on activities AI can’t do alone (oral presentations, in-class exams, collaborative problem-solving).
- Foster relationships and social learning as central pillars.
2. Prepare – Holistic AI Literacy
- Teach students, educators, and families what AI is, its limitations, and how to use it wisely.
- Example: Sixth-grade classes writing essays on AI's promise and risks, discussing them deeply. [32:20]
- Involve parents in dialogue; offer non-judgmental spaces for discussion (e.g., Brookings' parent tip sheets).
- “Just get a baseline… then you can start talking about what it does do and what it's good for and what it's not good for.” [39:19]
3. Protect – Safety, Guardrails, Regulation
- Delay individual laptop use for younger grades; make classrooms hard to “hack” with AI.
- Demand safe, vetted AI tools via joint district/state purchasing.
- “School districts and states should band together... use their purchasing power to say, we will only purchase AI safe for kids…” [35:00]
- Enact duty-of-care mandates for AI providers.
Practical Steps
- Red Team Assignments: Invite students to “hack” assignments—if AI can easily generate the answer, don’t use that assignment. [26:24]
- Favor In-Class Assessments: Focus on oral or handwritten demonstrations completed in class.
- Use AI to Power, Not Replace: Employ AI in ways where students are still actively engaged with peers, teachers, and the world (e.g., using AI as one analytical tool among many).
Preparing for the Future: What Should Kids Learn?
- Prioritize Deeply Human Skills: Curiosity, resilience, ability to take feedback, ethical grounding, love of learning, and ability to adapt.
- “The ones who are going to sail through this... are the ones who are going to be super motivated and super engaged in learning new things.” – Rebecca Winthrop [41:53]
- Encourage “Explore Mode”: Fewer than 4% of students regularly feel engaged and exploratory, a disposition critical for thriving in uncertain futures.
- “Kids need to practice learning new things and being super engaged and motivated. It is something we can develop in them.” [43:00]
Notable & Memorable Moments
- Quote: “Trust is something you don’t miss until it’s gone.” – Rebecca Winthrop [08:55]
- Quote: “I have my assignment up on one half of the screen and I have ChatGPT on the other...” – Student via Rebecca Winthrop [25:40]
- Quote: “We need everyone to be able to swim in an AI world... everyone needs to swim. Some people will have to snorkel... Some people will be scuba divers... But we need everybody to swim...” – Rebecca Winthrop [38:06]
Conclusion
Rebecca Winthrop calls for deliberate, multi-layered responses. Rather than blindly adopting AI or outright banning it, schools, teachers, parents, and policymakers should foster transparent conversation, protect core human experiences in the classroom, teach holistic AI literacy, and insist on meaningful guardrails. The aim is to sculpt a future where AI supports, rather than erodes, the deeply human work of learning, exploration, and growth.
For more:
Read Brookings’ “A New Direction for Students in an AI World” and download their free parent tip sheets at Brookings.edu.
Timestamps by Segment
- [02:28] Setting up the AI “pre-mortem”
- [04:44] Transparency in AI use & trust crisis
- [11:08] Risks outweighing current benefits
- [15:30] Calculators vs. AI comparison
- [20:29] Keeping classrooms human
- [26:24] Red-teaming assignments
- [29:08] Flaws of AI study modes
- [32:20]/[39:19] AI literacy for students, teachers, and parents
- [41:53] Critical skills for the AI age
Host's closing words:
“I’ve often thought that the answer to the question, what are the skills that you need for an AI age... There’s actually some of the most deeply human things—curiosity... sociality... being able to hear and respond to feedback. I mean, it seems like a return to the focus on the most human skills will serve us in the AI age.”
– Daniel Barcay [44:08]
