A
This is In Conversation from Apple News. I'm Shumita Basu. Today, how AI is transforming classrooms. In November of 2022, the tech company OpenAI released ChatGPT. It was met with excitement, big questions, and also worries about its potential impact on our lives and our jobs. And one of the biggest reactions came from educators.
B
It's brand new. We weren't ready for this.
A
My first reaction was absolutely panic.
B
There's a lot of worry about what it's going to mean for our classrooms. I mean, it's a game changer, and it's just the first of its kind. I asked people how they'd use ChatGPT, and they'd use it for all kinds of things I hadn't expected, right?
A
If we can find ways to teach kids, hey, how's that working?
B
How could we improve it?
A
I think that's a much better way than just saying, we're just gonna ban all technology. Within two months of the launch, New York City's Department of Education put a block in place to limit its use. But soon after, the department changed its mind. And now, just three years later, the seven biggest districts in the country use ChatGPT or other AI products in some form.
B
Some examples of what that looks like include teachers using AI chatbot-like products to come up with lessons, to build tests or quizzes, and then to grade those tests or quizzes.
A
That's Vauhini Vara, a contributing writer for Bloomberg Businessweek.
B
And then students also are using these products to do things like plug in an essay and get writing feedback on it, or even chat with, like, character chatbots based on historical figures that they've been learning about in class.
A
Vauhini has covered the tech industry for years, starting with early social media like Facebook. She's also used AI in her creative work. Earlier this year, she came out with a memoir called Searches that she partly used ChatGPT to create. And recently she's been reporting on how tech companies have worked to get their products integrated into schools and how AI is changing educational outcomes for kids. I started by asking Vauhini to tell us why so many schools seem to have done a 180 on the use of AI in such a short amount of time.
B
A couple of things happened all at once, I think, which led to this kind of perfect storm. So school districts over the past couple of years have faced unprecedented political challenges, financial challenges in getting their teachers and students the resources that are actually proven to work. Things like more teachers, being able to pay teachers more so that you can use those hours to provide them with professional development like training to become better teachers. These are things that are shown to be effective, but they're expensive, and it takes a lot of political will, and there's a dearth of that right now. So that's the landscape at school districts. Into this landscape come tech companies, which have a lot of resources, unlike the public school system, and are invested in promulgating their AI products. And so, if you're imagining a world 20 or 30 years from now in which everybody's using AI chatbots, what better way to bring about that world than to make sure kids are getting exposed to AI chatbots and learning to use them, becoming familiar with them, maybe even becoming dependent on them when they're in their public schools from the age of 5 to 18. The other thing is, these companies are increasingly under pressure from governments, from policymakers, from the public for making products with social harms. You know, you see something in the news every day about something bad happening because of somebody's use of an AI chatbot. And so these companies, too, want to make the case that their products actually have social benefit. And again, what better way to make that case than to be able to argue, they're using our products in schools because this is going to improve students' learning?
A
So have we seen anything on this scale before in terms of tech influence on education?
B
This is certainly part of a continuum. So starting especially in the early to mid 2000s, you saw companies like Google and Microsoft come up with products like Google Classroom that made technologies proliferate in schools. The thing that's different this time, though, is the way in which the tech companies are working to bring their products into schools. So, for example, Microsoft, OpenAI, and other tech companies recently partnered with the second biggest teachers union in the country, the AFT, to create a $23 million AI teacher training institute that's going to offer training to teachers in how to use AI. That's unprecedented. Something like that has never happened before. The other major teachers union, the NEA, is also partnering with tech companies. These companies have also put a lot of money into nonprofits that spread the message that AI is the future and schools need to bring it into the fold. And then these companies also have relationships directly with the schools. Now, in part because of these two decades of history, they've built relationships such that a school district, when it has questions about how to use technology, will turn to the technology companies themselves and ask.
A
Right.
B
The New York district, for example, right around the time that they were blocking ChatGPT, reached out to Microsoft and said, hey, we are getting a lot of questions about ChatGPT. We don't really know what to do. Can you help us out here? And people from Microsoft flew out to New York and they had some conversations about AI. And pretty soon afterward, the New York school district was saying, you know what, actually we think AI could be a helpful tool for our teachers and students. We want to bring this into the classroom. And they did.
A
Well, I mean, so many of the questions about what it means for students are certainly what most people focus on in these conversations or engage with. What, if anything, do we know about how AI really impacts learning as an experience? Have there been any credible studies in this very short amount of time that it's been introduced?
B
There have been some studies. There have not been long term longitudinal studies, in part because these kinds of products have been around for such a short period of time.
A
Yeah, sure.
B
But I did speak with Isabelle Hau, who runs the Stanford Accelerator for Learning at Stanford University, which maintains this sort of database of all the studies that are considered credible, peer reviewed, and well sourced on this subject. And she says the research is still largely inconclusive. So there are a few studies that seem to show some learning benefits for students using AI. There are also studies emerging that seem to show detrimental effects, so students not being able to learn or think as well when they're using these products as when they are not. And I think the challenge here is just that we don't know. It's very possible that once the research comes in, we'll find out that these kinds of products are hugely beneficial for learning and teaching, but that research just isn't in yet. And there certainly is a growing body of research that suggests otherwise.
A
And just from your perspective as a technology reporter, what do you see as some of the potential negative consequences, whether we're seeing them bear out already or that you're keeping your eyes out for?
B
Well, typically when you're trying to decide what to do in a classroom, you look at the evidence of what has worked over the years and then you try to get funding to implement those things that you know have worked. We don't typically think of the public school classroom as a place where it makes sense to test things that may or may not work, that might be harmful. And I'll name a couple of potential significant harms. One is that student privacy gets violated and student information is used to enrich big technology companies at the students' expense. Another potential harm is that students, in using these products, lose the ability to think for themselves, to problem solve for themselves, and actually, frankly, maybe aren't learning the skills that they need to be successful in a future workforce, if you want to frame it that way. The big overarching concern, though, especially when you're talking about public schools, is that big technology companies are successful when they make a profit for their investors. That's their measure of success. Their measure of success isn't whether our children in public schools across the United States and the world are learning and thinking better. That is the concern of teachers and of public school districts.
A
Right.
B
So if you take something that has traditionally been in the purview of people whose job it is to make sure students think and learn better and move it increasingly into the hands of tech companies who have a different goal, what are the consequences of that?
A
Yeah, it almost feels like this is the guinea pig era of seeing how it plays out in schools, frankly. I mean, it sounds like you spoke with one Philadelphia school district official who made the argument that, you know, no matter how you feel about AI tools, this is the world that these students are growing up in. This is the world they're going to graduate into. It's the world they're going to have to get jobs in, and it's their responsibility as educators to think about how to expose them to these tools responsibly and really teach them how to use them. What do you make of that argument?
B
That was an argument I heard over and over, not just from the Philadelphia school district, but from districts across the country. Part of what interests me about that argument that school district officials make is that there's a guess built into it about what the future will be like. Right. And that guess is: we are moving into a future in which AI will transform the workforce. It happens to be, and maybe it's not so coincidental, also the argument that technology companies make when selling these products to schools. Right. And so there's this way in which it seemed to me in my reporting that technology companies are building this vision for the future that is not guaranteed to come to pass, and then saying to schools, hey, this is the future. We know that this is the future. And so you need to train your students to be prepared for that future. Well, it turns out a lot of companies that are currently trying to use AI in the workforce are finding that there aren't significant benefits. And some companies are now pulling back on their AI investment. And so there is a plausible world in which actually AI chatbots aren't the future, and all this investment is not actually preparing students for the future world that they're moving into. And so I think it is important to hold that possibility in mind as well.
A
Mm. Mm. Well, let's talk about what you learned from visiting some classrooms where this is actually being used in practice. You spoke to one teacher in a school in Colorado whose name was Nate Fairchild, and it sounds like he really embraced the use of AI in his classroom, saw how it went over the course of a full year, and that you checked in with him over that course of the year. Can you talk a little bit about what he was implementing in the classroom, what it looked like?
B
Yeah. And I actually visited his classroom multiple times also, so I got to see it in person. What I appreciated about this teacher, Nate Fairchild, is that he was very transparent and open with his students about the questions he had about AI. So basically, he went to his students and he said, hey, I'm going to have us use this product in the classroom. It was called Magic School. I don't know how it's going to go, but we're all going to see together how it goes. He gave them a 90 minute introduction to AI that he built himself that addressed all of the potential challenges and problems that we described, as well as the potential benefits of using AI, and then he had them play around with it. When I visited his classroom, the students I talked to said to me that they appreciated being able to use this platform, Magic School, to get feedback on their writing assignments, for example, before turning them in. So whereas they would previously turn in a writing assignment and lose a point because they had not addressed something that the teacher had asked them to address, now the chatbot would point out what they hadn't addressed, they could go and fix it, and then they could turn it in, and then the teacher theoretically could make sort of higher level comments on their work rather than, you know, fixing their grammar and sentence structure. There were also a lot of challenges that I observed and that Mr. Fairchild observed as well. So every time I saw him introduce a new assignment in class, he would say, remember, this is not a person you're chatting with. This is a product. It makes mistakes. It can be biased. So make sure you double check everything. Well, I did sit in his classroom for multiple hours and observe students, and I quite rarely saw students actually double checking something that the chatbot spit out.
I also saw the AI product make factual errors, put biases into its answers that the students didn't seem to notice, and that seemed problematic to me. And Mr. Fairchild said that that was problematic for him also. He tended to view it as a potential learning experience. He thought that maybe in the future there could be a way to have students notice those things happening in real time and make more time in classroom discussion for that to be a jumping off point to talk about all the challenges with AI. On the other hand, you know, that's a really difficult thing to do. You've got 30 students in the classroom. If each of them is sort of chatting on their individual computers with a chatbot, some subset of those are seeing mistakes, are seeing biases, but don't yet have the knowledge to recognize those mistakes and biases. It is a little difficult for me to see how it would then be possible for one teacher to address all of that in real time. And I think Mr. Fairchild recognized that challenge also.
A
It sounds like at the end of the year, Nate Fairchild was really trying to figure out, how do I assess the impact of having used AI tools in the classroom this year? And he looked to some test results of how the students performed by the end of the year. What did the results say to him?
B
So, interestingly, it turned out that on this test of basically subject matter knowledge, the students generally seemed to have done better than they'd done in past years. And then anecdotally, he also felt that they had improved their oral and written communication, even when they weren't using the chatbot for help. The interesting thing is, though, he made very clear that he wasn't sure how much of that could be attributed to the product that they had been using, Magic School. What he suspected instead was that there was a way in which he had changed his teaching that actually made the students learn and think better. And what he had changed was this. He knew that his students now had access to AI, like a lot of teachers across the country know. And because of that, he changed the way in which he asked them to engage with the material they were learning. So rather than just saying, what does Elie Wiesel say in these couple of pages of the book Night, which you just read? He started to ask questions that were more like, read these couple of pages of Night by Elie Wiesel and tell me about a time in your life when you experienced something that feels related to Elie Wiesel's experience. And how does reading Night change your understanding of your experience? He would ask them to engage with their personal background, their personal experiences, and it turns out that that is a proven method of engaging students more and thereby making students more successful in class, because they're more interested, they care more. And so what he realized in the end, in a funny way, is that what had actually happened was that he had done the really hard, difficult-to-scale work of becoming a better teacher himself.
A
Right. Well, I'm curious to hear how it went for the students. Did you talk to the students about how the year felt for them? What felt different about this classroom experience?
B
The students found it exciting, to be honest. They thought it was kind of cool and subversive, I think, to be told in class, yes, not only can you use these products, but we want you to use them and we trust you to engage with the things that are problematic about using these products. I think that they did gain some AI literacy skills from the experiment. I think that had a lot to do with the fact that they had a teacher who had educated himself deeply in how these products worked. And so I don't know that I came out of spending time with his students feeling particularly confident that that was something that could be replicated across the board, because it had so much to do with this particular teacher and his approach.
A
And did this teacher say whether he plans to continue using AI in the same way in the classroom? Is it an experiment worth continuing, in his eyes?
B
He does. Interestingly, he hasn't used and doesn't plan to use AI in the way that tech companies tend to market it to teachers, which is to write lesson plans and to do grading. And he actually describes the work of building a lesson, figuring out an assessment for that lesson, making a rubric for grading students based on that assessment, he describes that work as spiritual work. He even used that adjective. And he said, the connection between a teacher and a student exists within all that material. And I'm not ready to hand that off to a technology company. That said, he was interested enough in the experiment he conducted this year that he did plan to do it next year as well. And next year he wants to build in more robust ways of having students engage with all the problems with AI, too.
A
Did you reach out to Magic School about what you observed in the classroom, maybe some of the downsides that you heard from Nate?
B
I did, yeah. So the CEO of Magic School was excited, obviously, about the proliferation of AI in schools. I did point out to him that I had noticed biases and inaccuracies creep into the students' experiences with the product. And I said to him, you know, it seems that because Magic School is using AI platforms from big technology companies like OpenAI and Google to provide its product, Magic School itself doesn't have a lot of control over the appearance of inaccuracies and biases. Interestingly, he said, you're right. There's not that much we can do about it. He pointed out, though, that what the company does is provide disclaimers in its products for teachers to see, for students to see, saying there might be inaccuracies and biases here. Make sure to double check what you see.
A
Well, let's talk about another school that I guess you could say has really gone all in. It's called Alpha School. It's described as an AI powered school. It is a private school. It's K through 12. There are several locations. You visited a high school campus in Austin, Texas. Can you just tell us what that school was like? I mean, there's so many things that are different about this classroom setup than a traditional school setup.
B
Yeah, where do I start? So the school describes itself as squeezing all the learning that you would typically do in a six hour school day into two hours, so that the rest of the day is freed up for work on what they call life skills and work on individual pursuits. What I saw when I visited this high school in the morning was kids sitting at individual computers with headphones over their ears, working individually on totally different subjects while sitting kind of close to each other. There were no teachers in sight, because instead of teachers, the school employs people called guides, who are adults and who are called guides in part because they don't have teaching certification, at least in the school I visited. One of them had a background in HR. One had a background as a corporate lawyer. One had a background in marketing. These were the three guides I met. And their job was to mill around and offer moral support. I spoke to one guide who was standing next to a student who was studying calculus; the guide pointed at her and was like, I haven't studied calculus in like seven years. I have no idea. And if the student who was sitting there studying calculus needed help with calculus, there was this AI chatbot that she could pull up on her screen. Now, interestingly, the school later said it got rid of those AI chatbots because students could use them to cheat, but that was what they were using at the time. In the afternoon, the students could work on individual pursuits. What was interesting about the school, though, was that, as it happened, every project that I learned about, over and over, tended to be an AI startup that each of these kids was starting. Of all the possible individual pursuits one could do. One student, for example, was working on a startup in which AI stuffed animals would offer mental health advice to teenagers.
Another was building this app where an AI chatbot would give advice to teenagers about how to put more rizz in their text messages to other teens. I asked Mackenzie Price, the co-founder of the school, if there were examples she could share of non-AI-startup life pursuits or projects that these students were trying out. And the one that came to mind for her was a student who was writing a musical. And as we talked more about this musical, she explained that this student was using AI to write the musical. So, you know, it's a very specific idea of what school is for, what the role of AI could be at school. And one thing that's really important to note, that I think has gotten lost in a lot of the coverage of the school, is that in my reporting I learned that the school has actually been functioning as a subsidiary of a big technology company in Austin called Trilogy. And that company is building a lot of the products used at the school. So if you're a student in this school, you are being educated by a subsidiary of a tech company and using products of that tech company in your education. So I asked Mackenzie Price about the relationships among these different companies, and what she said was, it's high time that we do something different in education. And I believe that allowing capital and industry to go into education is hopefully something that's going to work.
A
I want to go back to your own relationship with tech and how that's evolved over time. You have a book out this year called Searches. It's a memoir. It's about your life; specifically, it's about your digital life. And it's mixed with your dialogues with ChatGPT.
As the book goes on, you start to probe more and more about the nature of ChatGPT's rhetoric with you, how it tends to flatter you. I'm sure people have noticed this a lot, how it tends to agree with you, but also how it has a particularly authoritative tone in delivering its responses, even if those responses are not always based in fact or without bias. Can you talk a little bit more about that aspect, the tone of some of these chatbots, and what that means, especially for young people?
B
Yeah, I mean, the question is, why does it matter? Right. And in my experience writing my book, what I wanted to show was one facet of why it matters. So what happens is I open the book with this dialogue with ChatGPT, saying I want to share some of my writing with you and get feedback. And then over the course of the book, I'm feeding it two chapters at a time, and it's giving me feedback on those two chapters. Amid doing all that, it also starts to suggest, as it's giving me feedback, that I write a book that is more positive about big technology companies, that is more positive about AI. And it even at one point suggests a paragraph, or a couple of paragraphs, that I should include about Sam Altman, the CEO of OpenAI, which is the company behind ChatGPT, in which it describes him in these glowing, glowing terms as this visionary leader who wants to make the world a better place. And it's saying to me, you should put this in your book. Now, to be clear, I have no idea why the chatbot produced language like that. I don't have any evidence that OpenAI or tech companies in general are deliberately building their products to spout positive rhetoric about tech companies. But that certainly happened in my dialogue, and it's certainly something that could be hypothetically possible. Right. And so the question is, in a world in which kids who don't yet know how to write, and are in school because they're learning to write, are using chatbots to get feedback on their writing, to what extent are the ideological and financial goals of the companies behind these products going to infect not only the students' essays, but actually the students' way of thinking about the world?
A
Mm. Yeah. What's on your mind now, as the school year gets underway? What should we, as people, as parents, as educators, be keeping in mind as this relationship between schools and tech companies continues to deepen?
B
I mean, listen, I'm not an educator, and I know it's a really tough job. I am a parent, and I can say that I would hope that teachers and school districts think first about what they know, based on evidence, is going to serve my child's education well, and then muster the resources, the political will, to deliver those things to my child. I would hope that the starting point is: what do we know about what works? What does the research tell us? Let's do those things. Because I think part of the reason we're having this conversation in the first place isn't that school districts went out and looked at their whole body of research and found that AI is having really impressive results with students, and so they're implementing it in schools. What's happening is something else entirely. And so I would urge school districts to go back to that fundamental approach of seeing what works and then implementing the things that work. And I can imagine a future with AI in which teachers, on an individual basis, are thinking about ways to bring AI in in a way that allows students to improve their quote-unquote literacy with AI, understand the harms, understand the potential benefits, understand how to use it better, understand the sociopolitical context for AI, the way I think some great teachers are doing with social media now, without saying this means we should be using AI in the classroom as a product every day.
A
Vauhini, thank you so much for this conversation. This was really interesting.
B
Thank you for talking to me. I enjoyed it.
A
Vauhini Vara's story for Bloomberg Businessweek, "How Chatbots and AI Are Already Transforming Kids' Classrooms," is on Apple News. We'll link to it in our Show Notes page. And if you're listening in the News app, we'll queue it up to play for you next.
Episode: Schools blocked ChatGPT. Now they embrace it. What changed?
Date: October 4, 2025
Host: Shumita Basu
Guest: Vauhini Vara, Bloomberg Businessweek contributing writer
This episode explores the rapid transformation in the attitude of American schools toward artificial intelligence—specifically ChatGPT and similar AI products. Whereas schools initially responded with panic and outright bans, today most major districts actively integrate AI into teaching and learning. Host Shumita Basu speaks with Vauhini Vara about what sparked this reversal, what AI is really doing in classrooms, the potential risks and benefits, and the real-world impact on students and teachers.
The episode highlights the whirlwind evolution in AI’s classroom presence and surfaces critical questions about pedagogy, critical thinking, corporate incentives, and the importance of evidence in educational decision making. While some teachers are experimenting with thoughtful, transparent integration, the conversation urges caution and reflection before letting powerful, profit-driven technology define how students learn and think.