
Scott Detrow
Debates on artificial intelligence usually go one of two ways: it's either hyped as a savior or derided as a reaper lurking just around the corner. But until very recently, nearly everyone agreed that the technology is evolving fast, and that the billions and billions of dollars invested in it are a pretty good bet. Earlier this month, Meta CEO Mark Zuckerberg announced that he thought superintelligence was within sight.
Mark Zuckerberg
I think an even more meaningful impact in our lives is going to come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.
Scott Detrow
Anthropic CEO Dario Amodei had a starker prediction: that AI could eliminate up to 50% of new white-collar jobs and could push unemployment to 10 to 20%. Here he was in an interview with CNN.
Dario Amodei
I think we do need to be raising the alarm.
Dario Amodei
I think we do need to be concerned about it. I think policymakers do need to worry about it.
Scott Detrow
Many policymakers are worried about it. Former Transportation Secretary Pete Buttigieg, seen by many as a potential presidential contender next time around, told NPR he is concerned about the next few years.
Pete Buttigieg
It'll be a bit like what I lived through as a kid in the industrial Midwest when trade and automation sucked away a lot of the auto jobs in the 90s, but 10 times, maybe 100 times more disruptive because it's happening on a more widespread basis and it's happening more quickly.
Scott Detrow
So it really seemed like the whole world was preparing for the dawn of a new era. But then things shifted. Earlier this month, OpenAI launched the most recent version of its flagship product, ChatGPT, which many users found disappointing. And then weeks later, Sam Altman, the CEO of the same company, warned of a looming AI bubble. To top it all off, MIT put out a recent study saying that 95% of AI pilots at companies are falling flat. Only 5% are succeeding at rapid revenue acceleration. All of this has made investors question whether AI is still the safe bet it used to be. Consider this: for years we have been told the explosive growth of AI could radically change life, for the better or for the worse. So has that growth stalled? And is this as good as AI is going to get? From NPR, I'm Scott Detrow.
NPR Sponsor Announcer
This message comes from Carvana. Selling your car shouldn't take all day. With Carvana, it doesn't. Get a great offer in no time, then choose to drop off or pick up and get paid on the spot. Sell your car today on Carvana.com. Pickup fees may apply.
Robin Hilton
Hey, it's Robin Hilton from NPR Music with some big news for everyone who loves the Tiny Desk. We're giving away a trip to D.C. to see a Tiny Desk concert in person, hotel and flights included. Learn more and enter for free at npr.org/tinydeskgiveaway. No purchase or donation required for entry. Must be 18 years or older to enter. Links to the entry page and official rules can be found at npr.org/tinydeskgiveaway.
NPR Sponsor Announcer
This message comes from Ritual. Upgrade your supplement routine with Ritual's commitment to trust and traceability. During their sitewide subscription sale, get 40% off your first month of essentials at ritual.com/podcast.
Scott Detrow
It's Consider This from NPR. I asked ChatGPT to write an introduction for the segment. My prompt: a 30-second introduction for a radio news segment, on the topic of how, after years of promise and sky-high expectations, there are suddenly doubts about whether the technology will hit a ceiling. This is part of what I got: For years it was hailed as the future, a game changer destined to reshape industries, redefine daily life, and break boundaries we hadn't even imagined. But now the once limitless promise of this breakthrough technology is facing new scrutiny. Experts are asking, have we hit a ceiling? Okay, now back to the humans. We're going to put that question to an expert of our own. Cal Newport is a contributing writer for The New Yorker and a computer science professor at Georgetown University, and he joins me now. Welcome.
Cal Newport
Thanks for having me.
Scott Detrow
Let's just start with ChatGPT and the latest version. Was it really that disappointing?
Cal Newport
It's a great piece of technology, but it was not a transformative piece of technology. And that's what we had been promised ever since GPT-4 came out: that the next major model was going to be the next major leap. And GPT-5 just wasn't that.
Scott Detrow
One of the things you pointed out in your recent article is that there have been voices saying it's not a given that we'll always see exponential leaps. And they were really drowned out in recent years. The prevailing thinking was: of course it's always going to be leaps and bounds until we have superhuman intelligence.
Cal Newport
And the reason why they were drowned out is that we did have those leaps at first. So there was an actual curve. It came out in a paper in 2020 that showed how fast these models would get better as we made them larger. And GPT-3 and GPT-4 fell right on those curves, so there was a lot of confidence in the AI industry that, yeah, if we keep getting bigger, we're going to keep moving up this very steep curve. But sometime after GPT-4, the progress fell off that curve and got a lot flatter.
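The 2020 curve Newport is referring to is the neural scaling law from Kaplan et al.'s "Scaling Laws for Neural Language Models." As a rough illustration (the power-law form and the fitted constants below come from that paper, not from the episode), predicted test loss falls smoothly, but ever more slowly, as parameter count grows:

```python
def scaling_law_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Kaplan et al. (2020) power law: predicted test loss L(N) = (N_c / N) ** alpha,
    where N is the model's parameter count. n_c and alpha are the paper's rough
    fitted values; treat them as illustrative assumptions."""
    return (n_c / n_params) ** alpha

# Bigger models sit lower on the curve: loss decreases monotonically with N,
# but each 100x jump in size buys a smaller absolute improvement.
losses = [scaling_law_loss(n) for n in (1e9, 1e11, 1e13)]
assert losses[0] > losses[1] > losses[2]
```

The "falling off the curve" Newport describes is the empirical observation that, after GPT-4, larger models stopped tracking this predicted line, not that the formula itself changed.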
Scott Detrow
ChatGPT is the leader. It is the most high-profile of all of these models out there. So obviously this is a big data point. But what are you looking at to get a sense of whether this is just one blip? What is the bigger picture here?
Cal Newport
This is an issue across all large language models. Essentially, the idea that simply making the model bigger and training it longer is going to make it much smarter has stopped working across the board. We first started noticing this around late 2023, early 2024. All of the major large language models right now have shifted to another way of getting better. They're focusing on what I call post-training improvements, which are more focused and more incremental. And all major models from all major AI companies are focused on this more incremental approach to improvement right now.
Scott Detrow
I want to talk about that in a moment. First, I want to get your thoughts on this other big headline from recent days, this MIT report. The headline that was all over the place was 95% of generative AI pilots at companies are failing. 95%. Do you find that number surprising?
Cal Newport
I don't find that number surprising at all. What we were hoping was going to happen with AI in the workplace was the agentic revolution, which was this idea that maybe language models would get good enough that we could give them control of software, and then they could start doing lots of stuff for us in the business context. But the models aren't good enough for that. They hallucinate, they're not super reliable, they make mistakes or behave oddly. And so for these tools we've been building on top of language models, as soon as we leave very narrow applications where language models are very good, these more general tools are just not very reliable yet.
Scott Detrow
You're talking about hopes, and a lot of these companies have hopes and a lot of investors have hopes. But there have been a lot of people who've been really freaked out about all of this, whether it means job security, whether it means some of the more high-flown, sci-fi-type views of what happens down the line with AI. Do you think a slowdown is necessarily good news for people who are worried? Or do you think this continues to be the focus in so many industries, and it will continue to take more and more?
Cal Newport
I think it's good news for those who are worried about, let's say, the next five years. This idea, like Dario Amodei floated, that we could have up to 20% unemployment, that we could have up to 50% of all new white-collar jobs being automated in the near future: that technology is not there, and we do not have a route for it to get there in the near future. The farther future is a different question. But those scenarios of doom we've been hearing over the last six months or so, right now they're seeming unrealistic.
Scott Detrow
You mentioned post training before. You had a great metaphor for it involving cars. Can you walk us through that?
Cal Newport
Well, there are two ways of improving a language model. The first way is making it bigger and training it longer. This is what's called pre-training. This is what gives you the basic capabilities of your model. Then you have this other way of improving them, which we can think of as post-training, which is a way of souping up or improving the capabilities they already have. So if pre-training gives you, like, a car, post-training soups up the car. And what has happened is we've turned our attention in the industry away from pre-training and toward post-training. So less trying to build a much better car, and more trying to get more performance out of the car we already have.
Scott Detrow
How much is this leading to broad scale rethinking of what comes next? Or is it just kind of tweaking the current approach to how these models get better and better?
Cal Newport
I think it's almost a crisis moment for AI companies, because the capital expenditure required to build these massive models is astonishing. And in order to make a huge amount of money from these technologies, you need hugely lucrative applications. How are we going to make enough revenue to justify the hundreds of billions of dollars of capital expenditure required to train these models?
Scott Detrow
A lot of trade-offs have gone into this. There's an economic effect already. People have lost jobs already. We're talking about the enormous energy suck and the environmental consequences of just the raw power that goes into all of AI. And I'm wondering: does that make you rethink whether all of the downsides are worth it, if the upside isn't as revolutionary as has been promised?
Cal Newport
I think it's a critical question, because when the thought was that pushing ahead as fast as possible was going to give us artificial general intelligence, people were willing to make whatever sacrifice or cause whatever damage, because the goal was so transformative that it seemed worth it. If we knew then what we know now, that maybe this massive investment, the environmental damage, the impact on communities, the impact on the economy, is going to lead in the near future to something like a better version of Google, something that is good at producing computer code and at the more narrow types of applications we have, I don't know that we would have had the stomach for tolerating that level of disruption. So there are going to be some interesting questions about what we've already done, but also some questions about what we're willing to accept in the near future if we're no longer sure that we're heading somewhere super transformative.
Scott Detrow
What does this mean in the immediate term for people who have already started to use AI in their everyday lives, at work, at home? Does that continue? Did we hit kind of a bubble there? What comes next on the small consumer scale, do you think?
Cal Newport
I think we're going to get a lot more effort on product-market fit. So instead of just having this focus on making the models bigger and bigger, where maybe you just access them through a chat interface, now we're going to have a lot more attention on building bespoke tools on top of these foundation models for specific use cases. So I actually think the footprint in regular users' lives is going to get more useful, because you might get a tool that's more custom-fit for your particular job. There are still plenty of things to be worried about. Language models as we have them today can do all sorts of things that are a pain: generating slop for the Internet, making it much easier to produce persuasive misinformation. The fraud possibilities are explosive. All of these things are negatives. But as just an average user, you'll probably just get some better tools in the near future. That's not necessarily so bad.
Scott Detrow
That is Cal Newport, author and professor of computer science at Georgetown University. Thanks for coming in.
Cal Newport
Thank you.
Scott Detrow
This episode was produced by Elena Burnett and edited by John Ketchum, Eric McDaniel and Sarah Roberts. Our executive producer is Sammy Yenigun. It's Consider This from NPR. I'm Scott Detrow.
NPR Sponsor Announcer
This message comes from NPR sponsor Shopify. No idea where to sell? Shopify puts you in control of every sales channel. It is the commerce platform revolutionizing millions of businesses worldwide. Whether you're a garage entrepreneur or IPO-ready, Shopify is the only tool you need to start, run and grow your business without the struggle. Once you've reached your audience, Shopify has the Internet's best-converting checkout to help you turn them from browsers to buyers. Go to shopify.com/npr to take your business to the next level today.
This message comes from DSW. Where'd you get those shoes? Easy. They're from DSW. Because DSW has the exact right shoes for whatever you're into right now. You know, like the sneakers that make office hours feel like happy hour, the boots that turn grocery aisles into runways, and all the styles that show off the many sides of you, from daydreamer to multitasker and everything in between. Because you do it all in really great shoes. Find a shoe for every you at your DSW store or dsw.com.
Podcast: Consider This from NPR
Episode Date: August 24, 2025
Host: Scott Detrow
Guest: Cal Newport, New Yorker contributor & Professor of Computer Science, Georgetown University
This episode explores the recent shift in attitudes toward artificial intelligence (AI), moving away from years of hype and limitless expectations to a period marked by skepticism, stalled growth, and questions about AI’s real-world impact and future potential. The conversation, centered on Cal Newport’s insights, examines why so many promises around AI—especially generative models like ChatGPT—are not being met and what this slowdown means for industries, jobs, and society at large.
This episode delivers a thoughtful reality check on the state of AI: while transformative visions are stalled and exponential leaps are no longer guaranteed, AI’s development will likely continue in more cautious, measured, and consumer-focused ways. Cal Newport underscores that the sector faces a “crisis moment” and that society must weigh the true costs and benefits going forward.
For anyone tracking the AI revolution—or wondering why those chatbots aren’t yet living up to the hype—this episode offers clarity, skepticism, and practical insight.