
The system is broken. ChatGPT cheating is just a symptom.
Podcast Sponsor/Announcer
Support for Decoder comes from Adobe. Life is unpredictable, and that means you need your projects to adapt to whatever gets thrown at you. That means mastering the ability to pivot and collaborate with others to reach your goals. Adobe gets that, which is why they made a tool that's just as flexible as you are: PDF Spaces in Acrobat. Your PDF files are no longer static. Instead, they're living documents that flex with you and your project's needs. Learn more at adobe.com. Do that with Acrobat. Support for the show comes from Hostinger. Ever had an idea for a business or side hustle but never actually launched it? With Hostinger, you can turn that idea into something real in minutes instead of weeks. Hostinger is an all-in-one platform that brings everything into one place: your domain, website, email marketing, AI tools, and AI agents. You can create websites, online stores, and custom apps with simple prompts, then use AI agents to automate tedious tasks and grow your business. Go to hostinger.com/decoder to bring your idea online for under $3 a month. Use promo code DECODER for an extra 20% off. Support for today's show comes from CNN. Do you want to live forever? Influential journalist Kara Swisher is taking a hard look at the longevity industry to separate the influencer hype from evidence-backed science. In her new CNN Original series, Kara's talking to Silicon Valley power players and trying out the latest in anti-aging technology to see what works and what's a waste. Kara Swisher Wants to Live Forever is a new series now streaming with a CNN subscription. Go to CNN.com/subscribe to get started and save 40% for a limited time. Terms apply.
Nilay Patel
Hey everybody, it's Nilay. Decoder's off today while the team and I are cooking up a lot of really great stuff for the upcoming weeks. We'll be back with an all-new interview on Monday. In the meantime, we really wanted to highlight this episode we first aired back in the fall, because it's about a huge subject: AI in schools. The school year is starting to wrap up now around the country, and we're no closer to figuring out how to thread the needle on generative AI in education than we were back in September. Lots of people are worried about students using ChatGPT to cheat on assignments, and that is a problem. But really, the issues go a lot deeper, to the very philosophy of education itself. Dr. Adam Dubé, an expert in educational technology from McGill University, joined me on the show to talk through how generative AI fits into education right now and where it might be heading in the future. We also talked to a whole lot of actual teachers. You'll hear their voices throughout this episode. And we kept hearing one thing over and over again: what are we even doing here with AI? What's the point of this? It's a big question with not a lot of answers. Here it is: AI in education. Enjoy. Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of the Verge, and Decoder is my show about big ideas and other problems. We've talked a lot about generative AI on the show lately, which is a very big idea that is causing quite a few problems. And one thing we keep hearing about over and over again is that generative AI is causing a lot of problems in schools. There are a lot of people out there, including many of the listeners of this show who email us, who are worried about the obvious: students using ChatGPT to cheat on assignments. But when our team went and poked at the story, they found that the issues in education with AI go a lot deeper, to the very philosophy of education itself. We sat down and talked to a lot of teachers.
You'll hear a lot of their voices throughout this episode. And we kept hearing a common theme. What are we even doing here? What's the point?
Evie May
Hi, I'm Evie May. I'm an instructional designer at a small college in Michigan. When I attended the Online Learning Consortium's Innovate conference in 2024, one of the presenters discussed using various gen AI tools to give feedback on student papers. So if this technology becomes more ubiquitous, we'll have courses created by AI, graded by AI, with submissions from students absolutely generated by AI. So it begs the question: what are we even doing here in higher ed now?
Nilay Patel
Every teacher is having a different experience with AI in the classroom and with their students. But the common thread is that so many of those experiences feel bad. A few teachers who talked to us find that tools like ChatGPT are helping their workflow, but a lot of others are facing those deep existential questions, like you just heard from Evie. Luckily, there are experts in education and educational technology who research what's going on in a more detailed way. So I sat down with Dr. Adam Dubé from McGill University to talk about how generative AI is fitting into education right now and where all of this might be going in the future.
Nilay Patel (interviewer)
Dr. Adam Dubé, you are associate professor of Learning Sciences at McGill and co-lead of the McGill Collaborative for AI in Society. Welcome to Decoder.
Dr. Adam Dubé
Thanks for having me here, Nilay.
Nilay Patel (interviewer)
There's a slide you have about the lessons we have learned and not learned from the Internet and mobile. And it says digital natives do not exist.
Dr. Adam Dubé
Yes.
Nilay Patel (interviewer)
Can you briefly explain what you mean by that?
Dr. Adam Dubé
The term digital natives was coined back in the early 2000s by Marc Prensky, and it was this idea that perhaps the kids that are born into technology understand it better than what he called digital immigrants. There have been a lot of problems with this type of language and the framing of how he talked about it. And then there were 20 years of research to see if this is actually true: are kids that grow up with technology better at using it than people that adopt it later on in life? And the research for 20 years has shown this isn't actually the case. It's not that you're young and you grew up around it; it's just about how much you've used it, how much exposure you've had to it. And this really matters when we talk about AI, education, and technology, because even though a kid grows up using YouTube or a phone for playing Roblox, that doesn't mean they know how to use technology to actually learn. But we assume that they do. Teachers assume that kids know how to use technology in the classroom. They assume they know how to use it for learning purposes because they use it for YouTube. And so we aren't digital natives. We just have previous experience using technology for specific things. And it's caused a lot of problems when it comes to education, because we assume kids have skills that they don't.
Nilay Patel (interviewer)
We've done a lot of coverage over the past five years here at the Verge about what smartphones and tablets have done in education, in particular as it relates to really core computing concepts. One of my favorite stories we've ever done is called File Not Found. It's about kids in STEM who don't know how file systems on Windows work, so the STEM professors in college have to spend a day explaining what files and folders in Windows are before the students can go use the radio telescope or whatever tool they need to use next. And that always felt to me like we take for granted that the frameworks of the past will be intuitively understood by the kids of the future. But those frameworks change. Would you put AI into that kind of category, that this is a framework change for how we use computers and the frameworks of the past might just be abandoned?
Dr. Adam Dubé
I think the way that we interact with computers could be changing by having us engage with them through natural language interfaces. Unfortunately, the logic that underlies that system is still really important for being able to interpret the answers that it's giving us. And so yes, people growing up using a computer that's primarily a text-based or a voice-based system are not going to think of it the same way as someone who grew up with a file system and engaging with individual applications instead of everything being launched through, say, ChatGPT. That's going to be a problem, not for them using and interacting with the system and asking it to do things, but for how they actually interpret the way those systems give them answers and how they evaluate it. Can they actually make sense of the responses and then make critical judgments about them? This is an area of research that my PhD candidate Nandini and I are working on, where we're looking at children's theory of artificial minds. We're trying to understand how children think computers think, specifically how they think AI reasons, if we can say that it reasons, and then what impact that's going to have on how they learn from AI that's put in their schools and in their homes, like smart speakers that are already everywhere. We're just starting these types of studies, but these devices are now being deployed actively into schools, where we don't have a great understanding of this yet.
Nilay Patel
That idea that we don't really understand AI yet, that a lot of people don't know how it works, and that we have no long term data about its effects in the classroom because it's so new. Well, that's a really big point of contention that we heard from a lot of teachers.
Ann Lutz Fernandez
I'm recently retired high school English teacher Ann Lutz Fernandez. During my last year of teaching, I began to see more students using generative AI to replace their own reading, thinking, and writing, even creative and personal writing. We're treating children like guinea pigs with an untested, unproven, and unregulated host of products. It feels to me like we haven't learned some key lessons, a lot of them very recent. One of those, during the pandemic, was the costs of unhuman teaching and learning. And I worry that, as we did with cell phones and overreliance on one-to-one devices, we're going to wake up a decade or more from now and realize we jumped on a tech bandwagon that keeps kids tethered to screens, harms them, and harms learning.
Nilay Patel
That shift to personal one to one devices was really huge. And it means that there are a lot of screens in K through 12 education now. And it feels like there are some lessons we should learn about how those prior technologies were introduced.
Dr. Adam Dubé
I was a math cognition researcher who looked at how children understand simple things like learning how to add. And then I got into studying technology, because Apple came out and was pitching the iPad as the future of education, and a bunch of math apps were launching that were saying, okay, this is how your child's going to learn math the best. We were actually testing this early on. Back in 2011, 2012, we were giving kids a bunch of different iPads with learning apps. And you would think that the kids knew how to use them, but 90% of the interactions they had with that learning app were actually complete mistakes and errors. They were just randomly tapping around the screen. So there was a lot of guessing, but the apps actually had no negative consequences for getting stuff wrong. We described this as the app being too dumb to let you make a mistake. It didn't matter if you interacted with it in a random way; it always just progressed. So this idea that kids get how to work with technology is actually a byproduct of an oversimplified design of a lot of the apps that are used for learning. And now we've got new systems coming in where it seems like, well, kids can just talk to AI speakers, and they can just talk back to them. An example of this is that video that Sal Khan put out with Khanmigo, his AI tutor. It's like, well, look, I can just sit my kid down in front of this, I can say, teach him how to solve this math problem, and it just does the thing. Okay, but is that child actually benefiting from that experience? Are they interpreting the lesson correctly? Is that going to help them understand it? Is that a good way to teach whatsoever? But because it looks so easy, we get convinced that this is somehow useful. And I think that's going to be a problem right now with the generative AI tools that are coming out and the way they're being pitched.
Nilay Patel
So kids aren't predisposed to using iPads or AI any better or more competently than adults are. In fact, they might be worse at it if they don't have the experience they need to be able to tell how AI is really working. But even still, research from Pew in January found that about a quarter of teens were already using ChatGPT in their schoolwork.
Nilay Patel (interviewer)
That number edges up to a third for high school juniors and seniors. In May, the College Board published research saying 84% of high school students were using some kind of generative AI tool in some way for schoolwork. Does that match up with the numbers you're seeing in your research?
Dr. Adam Dubé
I look to research by Victor Lee. He looked at 4,000 high school students, and what matters is not just whether they're using it, but what they're using it for. The biggest use is that high school students are using it to explain concepts to themselves: that's 80% of them, and that's gone up from a year previously. They're using it to generate ideas for assignments; that's about 70% of the use cases. And then they're using it to summarize texts instead of reading them, and 40% are using it to edit portions of text. At the very bottom, and I think this is something that people are probably going to disagree with, only 10% of students are actually reporting that they're using it to generate their whole assignment, which is what people are really worried about when they think about AI cheating in schools. But that number is pretty consistent and hasn't really changed over the years. The percentage of students who report actually cheating has stayed around 10%; it's just how they cheat that changes over time: paying someone, copying stuff off the Internet, and now generative AI. So the numbers that you cite reflect what we see. But then we asked, how are they using it? They're using it in different ways for different purposes. And then we can debate whether or not these different uses are even good whatsoever. Obviously cheating's bad, but is it good at summarizing text? It's not. But that's where we can get to more nuanced questions about these different uses: is this actually going to be beneficial for students' learning?
Nilay Patel (interviewer)
Can I just ask about the cheating number specifically for one second? Are you just finding that no matter the technology, 10% of students are dumb enough to self report themselves as cheaters?
Is that what that demonstrates?
Dr. Adam Dubé
I like the framing. There are dedicated researchers and institutes that look at academic misconduct, and they actually do a lot to get students to be honest in their reporting within this research. That's where that number comes from. I have seen students openly admit to using generative AI in an open symposium in front of their professors, so some people are perhaps not that bright when they did this. But with that 10%, what really matters is that we're trying to see the real prevalence of this in the student population. People have dedicated themselves to studying that and trying to find a way to have students be honest about it, and that way we really know what the problem is.
Nilay Patel (interviewer)
I mean if you told me that 10% of teenagers were self destructively stupid, I would just believe it. No matter what the data showed. I was in that 10% for sure.
Nilay Patel
On that note, we have to take a quick break. We'll be right back.
Podcast Sponsor/Announcer
Support for the show comes from Zapier. Let's face it, talking about AI has become more than a trend. It's practically a daily discussion. But simply talking about AI trends doesn't help you become more efficient at work. For that, you need the right tools. You need Zapier. Zapier is how you break the hype cycle and put AI to work across your company for real. With Zapier's AI orchestration platform, you can bring the power of AI to any workflow so you can do more of what matters. It lets you plug leading AI models like ChatGPT and Claude into the tools your team already uses, so AI shows up exactly where it's most useful. And it's built for everyone, not just technical teams. In fact, their data shows teams have already automated more than 300 million AI tasks using Zapier. Join the millions of businesses transforming how they work with Zapier and AI. Get started for free by visiting zapier.com/decoder. That's Z-A-P-I-E-R.com/decoder.
Podcast Sponsor/Announcer
Support for Decoder comes from Adobe. For every big idea, your Documents folder tells a story. Let's say you've just finished pulling together a brief, so you hit export on final_version.pdf. But then you open the file and immediately notice a typo. Several versions later, you're exporting final_v4_actual_final_draft.pdf. Adobe Acrobat can save you the digital clutter with PDF Spaces. It takes your documents and turns them into a living project that you can engage with, get insights from, and collaborate with others on. You can gather all your files into one workspace, have a whole conversation with your AI assistant about it, and ask questions to get deep insights about your project. You can even invite people to your PDF space and let them add files, comments, notes, and more. You can doodle in the margins or even turn your project into your own personal podcast episode. Acrobat lets you generate an audio overview of your project in just one click. Learn more at adobe.com. Do that with Acrobat. Support for the show comes from Hostinger. Every business has its impact, and with AI changing the landscape, the barrier to entry has never been lower. Whether you're starting a side hustle or building the next big thing, Hostinger lets you go live in minutes, not weeks. Hostinger is an all-in-one platform that brings everything into one place: your domain, website, email marketing, AI tools, and AI agents. You can create websites, online stores, and even custom apps without coding or design skills, then use AI agents to automate tedious tasks and help grow your business. Turn your one day into day one. Go to hostinger.com/decoder to bring your idea online for under $3 a month, plus get an extra 20% off with promo code DECODER. That's less than the price of a cup of coffee per month. That's hostinger.com/decoder, promo code DECODER, for an extra 20% off.
Nilay Patel
Welcome back. I'm talking with Dr. Adam Dubé about what his research says about generative AI in schools. Before the break, we were talking about how all of this is just new technology and, as a result, it's kind of a mess. Students are using it to cheat, although maybe not as many as we're worried about. Teachers are feeling pretty confused about how to respond, and there's just not a lot of clarity from anyone in response.
Nilay Patel (interviewer)
That kind of usage is leading to pretty whiplash policies across schools at every level. There's the we're going to ban it entirely kind of movement. The schools in my kids district, they've just fully banned smartphones from schools. That's here in New York, that's statewide. There's we have to put AI everywhere to get these kids ready. There's Sal Khan saying just let my robots teach your kids. This is a pretty wild mishmash of policies and approaches. What is the general shape of it that you've seen?
Dr. Adam Dubé
It is very fractured, and it depends on who the leader of that school system is, on their view of technology, and then on the broader community around that school. Do the parents in that community have a negative attitude toward technology? Right now there's a big anti-screen movement happening. We see cell phone bans and concerns about social media from parents, and this is increasing. So you have the larger community influencing the way that school leaders think about technology. But then you've got some school leaders who are saying, okay, we're resource-constrained, our budgets are being cut, and they're seeing technology as potentially a way to save money. So they're turning to generative AI as a way to maybe make up for not having enough educators in their classrooms. Or maybe they truly believe that it's a transformational tool. There is no one consistent system. It varies almost from school district to school district. I've spoken with school leaders across our provinces, because we run education at a provincial level; there's no federal oversight. All the principals complain that there's no overarching guidance and that everyone is having to figure it out by themselves. That leaves it up to factors at the local level to determine whether AI is seen as a potential positive or negative, and whether it's a positive or negative for teachers, the admin, or students. There are differences there, too. A lot of teachers think students shouldn't be using it, but that it's okay for them to use it. Or the admin thinks, this is going to help save time for teachers marking students' assignments, so we can save some money there, but we don't want our students using it, though we're going to use it to analyze student data. So there's even a mishmash and a disagreement within schools about the role of generative AI.
Right now these systems are being sold to educators to generate lesson plans, to evaluate student work, to do learning analytics. And if you're having this deployed in your school, before you teach a class you're being told, okay, well, we're going to cut back how much teacher preparation time there is, but we bought MagicSchool for you, and it's going to generate a lesson plan for you, so don't worry, you're going to have plenty of time. My advice is to actually keep track of how much easier it is, or isn't, to generate your lesson plans and do your work with these tools.
Nilay Patel
A few of the teachers we spoke with really were excited by the idea that generative AI could be a time saving tool and actually help them out when it comes to managing a busy workload with too few resources.
Paul (middle school science teacher)
I'm Paul, and I teach middle school science in Raleigh, North Carolina. The thing that has me most excited about generative AI technology is the way that it unlocks teachers' ability to do better teaching in ways that many of us really want to. We're constantly being told about new research that shows there are better ways to teach, but many of these strategies and techniques require a lot of time and effort for us to learn more about them and to build content with them. By partnering with an AI tool like ChatGPT, a lot of this becomes way more doable. And so I find that I'm able to integrate better strategies into my teaching, because I know that I have support when it comes to building new materials with those strategies highlighted.
Nilay Patel
That's all pretty interesting, but Paul's position is part of a distinct minority, at least amongst the teachers who spoke with us. Here's Evie May again.
Evie May
Despite many attempts to incorporate it into my workflow, I've found that gen AI is more trouble than it's worth. And that's beyond the simple fact of the technology's unethical, plagiaristic roots and environmental destruction. Purely on a utilitarian level, I can do better work much faster when it comes to designing course materials. At most I would use ChatGPT to clean up auto-generated YouTube captions, but YouTube's already improved this on their own end, so it's kind of a moot point.
Nilay Patel
And then sometimes, as some teachers told us, generative AI can make things actively worse than they were before.
Anne Rubenstein
My name is Anne Rubenstein. I'm a historian and a professor of history at York University. One of the things that I do as a scholar is help prepare collections of documents from the past on specific topics, which are then published as part of a digital history project that goes out to university libraries. Because I am a historian of Mexico, the documents that I'm preparing for them are Mexican, and they're in Spanish. The publisher decided that, along with providing the original documents, we should provide translations into English, since that's the language that the majority of people using these teaching tools are going to be comfortable with. Great, I said, I've got a friend who's a translator. We'll get them to translate these documents. No problem. And they said, oh no, no, we've bought new software that will translate for us, and we don't need to go to the expense and trouble of hiring a human translator, because this translation software is going to be great. I was skeptical, but I said, sure, let's try it. And so we tried the software, and here's what it did. It hallucinated. It made crap up. It inserted entire sentences, and in a couple of cases entire paragraphs, into the document that did not exist in the original. If you don't understand why that is a very, very big problem in a collection of translated primary source documents for history students, I invite you to come take some history classes, and then you'll understand why that's an enormous problem. Luckily, the publisher also understood this was an enormous problem. So what they decided to do was hire a translator whose job it was to go through these machine-translated documents and restore accuracy and clarity to them. And that ended up costing just about twice as much as hiring a human translator would have.
Nilay Patel
In a strange way, it might help when the hallucinations are incredibly obvious because then you can tell that the tool isn't working for you. But sometimes it can be a lot harder to spot if a tool is actually saving you time or improving your work when you first start using it. And then generative AI produces polished content and answers to questions so quickly that it feels like it's giving you something meaningful.
Dr. Adam Dubé
Is it actually saving you time? I speak to teachers and they say, well, I use generative AI and it helps me generate my lessons, it helps me write emails. I tell them to actually monitor and try to keep track of it: is this actually speeding things up? There's a lot of research that shows that, for example, coders actually end up being slower when they use these systems, because they have to fix all the mistakes. And I think we might see a similar pattern with educators. They're using these tools to be more efficient, they think. But if they actually tracked how long it takes them to generate a lesson versus how long it takes to fix the lessons that generative AI produces, it might not actually be faster. I see some educators who are enthusiastic about the time savings it can give them, but I'm not sure it's actually saving anybody time.
Nilay Patel
So from the teacher's perspective, generative AI in schools is a workplace issue, a labor issue. And there's a lot of research out there, both some older research and also some new research recently published in the Harvard Business Review, about how workers feel when they're forced to use specific tools or behave in certain ways that devalue their own expertise or autonomy. How's that working out for teachers?
Dr. Adam Dubé
There's some research that looks at school climates and at teachers who get demotivated by the use of generative AI in education, and at what causes that demotivation. For them, it was being forced to use these systems. When there was a top-down rule that you had to use generative AI, maybe for lesson planning, or for writing emails, or for doing student feedback, that was demotivating for educators. They don't like being told which tools to use, because it feels like it's removing their autonomy. And whenever we remove workers' autonomy or their agency, basically their control over their own work environment, people get demotivated. So it's not surprising that workers feel demotivated when generative AI is being forced into their workplace, because they have less of a say in what they get to do. As human beings, we want to be creative, we want to produce, we want to feel like we have control over our lives and our work. And then something comes in and takes that control away. We're not going to like it.
Nilay Patel
We have to pause here for a short break. We'll be back in just a minute.
Podcast Sponsor/Announcer
Support for today's show comes from CNN. Do you want to live forever? Influential journalist Kara Swisher is taking a hard look at the longevity industry to separate the influencer hype from evidence-backed science. In her new CNN Original series, Kara's talking to Silicon Valley power players and trying out the latest in anti-aging technology to see what works and what's a waste. Kara Swisher Wants to Live Forever is a new series now streaming with a CNN subscription. Go to CNN.com/subscribe to get started and save 40% for a limited time. Terms apply. Support for the show comes from Outshift, Cisco's incubation engine. Today's AI agents operate in silos, which can limit their true potential. When it comes to AI advancement, companies out there have been focused on building bigger and smarter models. But scaling up is just one approach to reach superintelligence. Together, Cisco says, we need to do more. We need to scale out. To do this, they're going back to the blueprint from 70,000 years ago. Humans didn't just get smarter individually. Rather, the cognitive revolution transformed society because we began sharing knowledge, goals, and innovation. And Cisco says that AI agents are now at that exact same inflection point. They can connect, but they can't think together. That's why Outshift by Cisco is building the Internet of Cognition. Its goal is to transform AI from isolated systems into orchestrated superintelligence by creating an open, interoperable infrastructure. Cisco says Outshift is enabling agents and humans to share intent, context, and reasoning. The cognitive evolution for agents is here. Explore the Internet of Cognition at outshift.com. That's outshift.com. Support for the show comes from AWS. How much of your workday is actually work, and how much is just hunting for information? The answer you need is buried in a Slack thread. The data is in Salesforce, or in an email from two weeks ago. By the time you've pulled it all together, half your morning is gone.
That's the problem Amazon Q was built to solve. Q is an intelligent workplace assistant that connects to all of your systems, your documents, your dashboards, Salesforce, Jira, Slack, email, and gives you complete answers in seconds. Not links to dig through: actual answers with full context. And here's where it gets interesting. Q doesn't just find answers, it turns them into action. Create a deck, update a ticket, send a message right there in the conversation, without switching tools. It's AI that actually works the way you do. Learn more at aws.com/q.
Nilay Patel
Welcome back. Before the break, we were discussing how generative AI is affecting teachers. For school administrators and people in the classroom, generative AI is a workplace issue. But the really big question, the one everyone is concerned about, is what these tools are doing for students. As we've discussed, there are studies that say generative AI in the workplace can actually demotivate adult professionals in a lot of ways. But anyone who's watched a kid really engage with ChatGPT's voice mode or Google Gemini can see that a conversation, no matter how one-sided, can really keep a kid's curiosity going when they fall down that rabbit hole, and maybe even teach them something. So could there be some upsides to generative AI chatbots when it comes to learning?
Dr. Adam Dubé
It does seem that using these tools can increase affect and motivation, when they're designed to do that. Why is that happening? Is it the way these things talk to us? They congratulate us on our questions. They always provide an answer. It's a very positive experience. So maybe that sort of paradigm is causing the increased use of these systems, which creates a loop of query and response, and query and response. And that makes a lot of sense from a design standpoint. If you're trying to make a system where you want people to use it more, because you're trying to make the case with the usage levels that you need the data centers and everything else, it's like, well, let's positively reinforce questions and answers and make it a very rewarding experience to ask us things. But at the same time, it's always positively providing you these answers, and sometimes it's wrong. Well, what if they changed it a little bit and said, okay, I can't actually answer that question for you because I'm not sure if it's right? What if it was truthful? Well, then people might stop using it as frequently, and then maybe it's not as engaging and motivating. And so the question is, do we think, as an educational tool, it's good to somehow generate curiosity if the thing is lying to you about the information? Would that be okay if it was a human being? It's like, well, here's our teacher, Jerry. He's in the class, he always lies to the students when they ask him questions, but he gets them going. That seems like a really weird position to take when we translate this over.
Nilay Patel
There are some core skills that you should definitely have to learn. I actually think math is one of them. But, you know, I do have friends who are like, screw it, I have a calculator. I can literally Google the answer to unit conversion, and I will never think about it again. ChatGPT is that on a massive scale, right? You can just hand over some amount of skills to this robot, about thinking about a lot of things, and maybe you'll just not be motivated to learn those skills because you know there's a backstop. Whether or not the backstop hallucinates, you'll know there's a backstop, and you'll never be motivated to learn those skills. Have you seen that dynamic play out?
Dr. Adam Dubé
The classic example, as you said, is the calculator. And some people say that it didn't matter that we put calculators in classrooms. I actually had a colleague, Joanne Lefebvre, who looked at this throughout the '90s and 2000s in Canada, and we actually saw a decrease in math scores directly correlated with increasing calculator use. Because when you're using a tool to do thinking for you, you're not practicing and actually encoding the information well enough to recall it later on. And so it's not surprising that when people use generative AI to do work for them, they're not able to do it independently. Being able to store information in your memory requires effortful practice. It requires effortful memorization, it requires reflection. It requires thinking about, okay, what am I trying to understand? And connecting that to my other understanding. That's what builds a strong knowledge network. And when you use systems that just generate answers on your behalf, you don't engage in those practices. It just gives you the response, and you passively consume it, and maybe you don't reflect on it. So it's not surprising that with the use of generative AI, one of the big effects we see is that people are able to produce work that maybe looks more polished, but they don't remember the work that they actually wrote. That MIT study is the example that a lot of people have heard about, where they had people writing essays with or without ChatGPT or Google, and students had very poor memories of the essays that they wrote using ChatGPT. Well, that's because they actually weren't reflecting on their writing. They weren't engaged in the work that it takes to form substantial memories so you can remember it later on. Now, should we care? That's the thing. Who cares, if you can produce the product? If you can produce the work in the end, it's like it doesn't matter.
Well, if we think of a future down the line where you're using these tools to produce a piece of work, who is actually able to evaluate whether or not that work is any good? If everyone is at the same level of expertise, of just, I've used all these tools to produce the work, you actually don't have the internal knowledge, you don't have the internal skill set to say, is this good writing? Is this a strong idea or not? You don't know anything about the literature, you don't know anything about the area that you're actually studying, because you just used these things as the reference. Now, people always say, I don't need to count, I can just use a calculator. Well, I bet that person doesn't have to do mathematics in their job every single day, right? They're not regularly having to do unit conversion on the fly, so it doesn't really matter, because they don't use it. But in your profession, in your daily life, access to information really does matter. If you have to grab something, you don't want to have to turn to an external tool to make a judgment in a moment. I shouldn't have to be talking to you in this conversation and have ChatGPT open over here to ask it things so I can have a conversation with you. That wouldn't be an actual productive conversation. It would impede how well I interact in my daily life and do my job. And so it does matter when we don't actually know things and have a knowledge base on which we work.
Nilay Patel
So then teachers are left with the challenge of getting students not to let ChatGPT do their thinking for them, at least not in class. Sometimes addressing it head on is the way to go. Here's Anne Rubenstein again.
Anne Rubenstein
What I've worked out to do with the first-year undergraduates especially is not so much to tell them that they can or can't use this stuff to help them in their process of becoming history students, but to think with them about the ways in which this particular kind of software is likely to lead to bad results for historians in particular. What I tell them is, as historians, we have a social responsibility to get our facts exactly right, not only to say exactly when the specific event happened, but, if we're quoting, to back up our assertions, which we frequently do. We have to say who was speaking and also how we know what words they said, and also to quote their words precisely, in the precise order, not to add any, not to leave any out, and to say where and when this quotation was made and to put it in its context. Once we've sort of gone through that with the beginning history students, then I talk to them about ChatGPT and similar software, and I say, okay, how does it work? And I pretend to be slightly more naive than I actually am, and I say, okay, explain to me how this works. And usually in any group of, say, 10 or more undergraduates, one of them is going to have a very clear understanding of how this software works. So I get them to explain it to me and, incidentally, to explain it to themselves and each other. And we talk about how the software can't, by its nature, actually know a thing. What it can tell you is what order words are likely to be in. And if we're very lucky, what'll happen in the classroom as we're figuring this out together, going over it together, is that someone will say, oh, so they're bullshitting. And I say, yes, that's bullshit. And then we talk about what bullshit is and why we want to avoid bullshitting, and why bullshitting is sometimes useful and important in life. But historians aren't allowed to. We absolutely cannot. We are the only people in the world who are never, ever allowed to bullshit.
And so then the lesson is, you can use ChatGPT and similar software for all kinds of things, but you cannot use it in conducting historical research or writing about history, because it is the exact opposite of what historians are supposed to do.
Nilay Patel
What Anne just described, being able to talk through the reasons why you would or wouldn't want to use a specific tool, is really important, because ChatGPT is just that: a tool. And students who are under a lot of different pressures might reach for any tool they have at hand. Aside from that 10 percent Adam told us about who are happy to just cheat, maybe it's not hard to see what pressures students are reacting to. In a way, they're just behaving rationally inside of the system that they're a part of. And that itself is a kind of problem. Are the educational systems we've set up actually designed to prioritize learning as an incentive for the students?
Brian S.
I'm Brian S., and I teach technical communications at a Midwestern Research One university. I have a lot of engineering majors in my classes, and my job is to teach them how to speak about engineering things to non-engineers. The big effect large language models have had on my job and my students is that it's really forced me to recognize how differently I see what's valuable in the classes from at least some of my students. In my classes I teach a lot of how to write things. Some of it is format, but more of it is about tailoring a message to an audience, translating concepts from expert to non-expert. But as with most writing classes, the grades are based on the finished product: the user manual, or the proposal, or the report. I use those to evaluate how well the students have internalized the tools I've been showing them how to use. LLMs promise that they can create those documents without having to learn all those intermediate steps. And when they're being used by a person who already has those skills, they can take some of the grunt work out of it. My students don't have those skills yet, and if they lean on LLMs now, they'll never develop those skills. But for a pretty good-sized portion of my student body, that's not a problem. Because, one, they have limited time and they have physics exams and so on, and two, because they get graded on the finished product. For me, the finished product doesn't really matter. I've read enough proposals for free student parking on campus for multiple lifetimes. But for them it matters, because it can affect their academic standing, potentially their financial aid, and they believe that it can affect their ability to get a job or an internship. The grade matters a lot more to them than to me, in other words. So it makes sense that if there's a tool that promises a product that will help them pass so that they can concentrate on the stuff that they feel is more important to their career, of course they'll think about using it.
That's the real tension at play here. How do we convince students that the value in the class is learning how to do things when the thing we measure is the end product? Especially when there's a tool that can take some of the pain out of producing that product.
Nilay Patel
Learning isn't just about homework. Homework is useful practice. But our systems reward the end product over the process. They reward the completion of the homework over the actual learning. So how would we need to change that so that students don't want or need to outsource all of their thinking to AI? Here's one final thought.
Todd Harper
My name is Todd Harper, and I am a professor of game design at the University of Baltimore. A thing I've observed in 15-ish years of teaching at the college level is how much refocus there has been on the product when it comes to course assignments: papers, presentations, whatever. Students are aiming for the grade because the grade is the thing that hooks into important metrics. It's the thing that hooks into whether they graduate or not. Sometimes it influences financial aid, et cetera, et cetera. And students are under tremendous pressure that affects how they approach their college education. My university largely has students who are focused on getting out of here and getting a job. A lot of them work full time or are full-time caregivers or have some kind of equivalent everyday pressure on them. They're taking multiple courses at a time. They're trying to make it all work out. And if a tool comes along and says, oh, you got a paper due, just plug the question into me and I'll give you a plausible-sounding result, then the student can be like, great, that's one thing I can check off the list so I don't lose my mind trying to be alive in 2025. I get why that would have some appeal. But pedagogically, educationally, we don't assign homework, papers, presentations, projects, we don't give those to students because we want something in return. The thing that they give to us is not the point. The point is that when they're looking up sources, or drawing the art, or creating the thing that they turn in to us, they are exercising the skills and the learning that we want them to develop in our classroom, that they have come to our classroom to develop. And yes, we have to evaluate them, and we can't be there for the process, so they do have to turn something in to us, right? Like, the product is how we evaluate the process, through the result. But the process is the important bit.
And if what's turned in to me, you know, look, air quotes, looks right, looks plausible, which LLMs could be good at. In fact, it's probably the only thing they're, air quotes, good at: making things that look plausible. But if the student didn't do it, if there was no process, then what are we doing here? No real learning has happened. All that's happened is that somebody ticked off a box on a to-do list. And I think it hurts students when that happens. What we need is not more tools that produce product. What we need is fewer stressors: financial, cultural, social, whatever. What we need is less pressure on students so that they can actually do the things that they need to do to get an education.
Nilay Patel
I'd like to thank the many teachers who spoke with us for this episode, especially the ones who were willing to be recorded. I'd also like to thank Dr. Adam Dubé for taking the time to join me, and thank you for listening. I hope you enjoyed it. If you'd like to let us know what you thought about this episode, and I'm sure a lot of you do, you can email us at decoder@theverge.com. We really do read all the emails. You can also hit me up directly on Threads or Bluesky, or you can leave a comment on our YouTube channel, where you can watch full episodes. We're @decoderpod there, and we also have a TikTok and Instagram. They're @decoderpod too. They're a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get podcasts. Decoder is a production of The Verge and part of the Vox Media Podcast Network. The show is produced by Kate Cox and Nick Statt. It's edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. We'll see you next time.
Release Date: May 7, 2026
Guests: Dr. Adam Dubé (McGill University), various educators
This episode, originally aired in fall 2025, is a wide-ranging exploration of the growing presence of generative AI—tools like ChatGPT—in education and the existential questions its adoption is spurring in schools and universities. Nilay Patel (host and The Verge editor-in-chief) is joined by Dr. Adam Dubé (associate professor, McGill University) and a diverse group of teachers to examine not just cheating, but how AI is challenging the fundamental philosophy of education, changing classrooms, and creating uncertainties for all involved.
Beyond Cheating: While media coverage often focuses on "cheating," teachers and experts agree that the true threat of AI in the classroom goes much deeper, undermining the core processes of learning and creating fundamental uncertainty about the purpose and future of education itself.
A Systemic Shift: AI is not simply a tool students use. It's woven into grading, creating coursework, and interacting with learners—raising the specter of AI-graded, AI-written, and AI-composed education.
Limited Understanding and Data:
Teacher Experimentation: The educators featured describe confusion, anxiety, and frustration—often feeling like "guinea pigs" for unproven and unregulated tools.
Lessons from Tablets & 1:1 Devices:
Dr. Dubé describes how the introduction of iPads and apps led to surface-level interactions and ineffective learning due to "oversimplified design" that masked a lack of understanding.
Translation and Hallucinations:
Automated tools can introduce serious errors in critical tasks—like translating historical documents, where hallucinated sentences go unnoticed until after publication.
Prevalence and Purposes:
Research suggests that while AI use is widespread (up to 84% usage among high schoolers, per the College Board), most students don't use it solely to cheat; they also turn to it for a range of other purposes.
Surface-Level Learning & Memory Deficits:
Using AI for writing is linked to poor retention of information, as highlighted by an MIT study.
Fragmented Policies:
Districts and universities vary dramatically in their approaches—ranging from outright bans (e.g. NY state) to full AI integration. This is shaped less by evidence and more by local leadership, parent attitudes, political currents, and budgetary concerns.
Labor and Autonomy Issues for Teachers:
Teachers resent being compelled to use top-down AI tools, feeling it devalues their expertise and creativity—leading to demotivation and alienation.
AI’s Design and Student Motivation:
AI chatbots are designed to be responsive and encouraging, which can increase motivation, but sometimes at the expense of accuracy or critical reflection.
Teaching Critical AI Literacy:
Some professors are making AI critique and its limits a conscious element of instruction.
The episode delivers a textured, sometimes sobering look at the collision between generative AI and educational values. It surfaces the uncomfortable reality that while AI can sometimes offer efficiencies or engagement, its presence exposes and amplifies cracks in institutional, pedagogical, and incentive structures. Most poignantly, it asks whether our systems are truly facilitating learning—or merely measuring outputs, regardless of understanding.
Ultimately, as guest Todd Harper sums up:
“All that’s happened is that somebody ticked off a box on a to-do list. And I think it hurts students when that happens.” [42:02]
The solution, as the episode repeatedly suggests, lies less in policing AI and more in fundamentally rethinking educational incentives and structures—centering process, understanding, and autonomy over mere products or technological quick fixes.
Recommended for:
Educators, administrators, policymakers, technologists, and anyone concerned with the evolving meaning of education in the era of AI.