
AI Reality Check: Is AI Stealing Entry-Level Jobs?

Cal Newport takes a critical look at recent AI news. Video from today's episode: youtube.com/calnewportmedia

OPENING: Is AI stealing entry-level jobs? [1:29]
MAIN STORY: Torsten Slok essay [3:06]
CONCLUSION: AI is not stealing entry-level jobs now [11:32]

Links:
Buy Cal's latest book, "Slow Productivity," at www.calnewport.com/slow
https://www.wsj.com/lifestyle/careers/ai-entry-level-jobs-graduates-b224d624
https://www.apolloacademy.com/busting-the-ai-youth-unemployment-myth/
https://www.theatlantic.com/economy/2026/04/job-market-artificial-intelligence/686659/

Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter.
Last summer, the Wall Street Journal published an article with an alarming headline: "AI Is Wrecking an Already Fragile Job Market for College Graduates." It goes on to say that companies have long leaned on entry-level workers to do grunt work that doubles as on-the-job training; now, ChatGPT and other bots can do many of these chores. This idea, that young people are having a particularly hard time finding jobs and that this is due in part to AI, soon took off and became conventional wisdom. Variations of this claim have been cited ever since. And look, this belief is starting to have a real impact. Just last week, Axios ran an article titled "AI Is Making College Students Change Majors," citing a survey in which 16% of currently enrolled college students said they had changed their studies due to concerns about AI; that number jumps to 25% among students studying technology. Which is all to say, if you've been reading AI coverage recently, you've probably encountered these types of claims many times. But are they true? Today we're going to look for some measured answers. I'm Cal Newport, and this is the AI Reality Check. All right, so the question we're looking at today is: is AI reducing the market for entry-level jobs? Now, I'm not an economist, but fortunately for our purposes, multiple economists have weighed in recently on this claim about AI stealing entry-level jobs. And here's the thing: they're not that impressed. I want to start with Torsten Slok, the chief economist at Apollo Global Management. Last week he published a newsletter titled "Busting the AI Youth Unemployment Myth." In this article he presents two charts he put together, both drawing on data from the Bureau of Labor Statistics. I'll put the first chart up on the screen. It shows the unemployment rate from the mid-1990s until today.
It has two lines: one for all people 16 years and older, and another for just 20-to-24-year-olds. He's asking: are young people having a particularly hard time with unemployment right now? And what you see is, well, no: the overall unemployment rate and the unemployment rate for young people seem to be moving with roughly the same trends. Here's how Slok summarizes this chart: "The data does not show any sign that unemployment among younger workers is structurally higher because of AI." Now, a common critique you might hear is that it depends what type of young people we're talking about. It's really young people with college degrees who should be seeing their jobs stolen by AI, because AI automation is aimed more at white-collar jobs than at non-white-collar jobs. With this critique in mind, we come to the second chart Slok looks at, which I'll bring up on the screen: the unemployment rate among US college graduates between the ages of 22 and 27, broken out by gender. What you see when you zoom out is that the unemployment rate at the far right, our current period, really is not that much different on average from other periods. So there's not necessarily a major difference between what we've been seeing recently and what we've seen in economic upturns and downturns in times past. Here's how Slok summarizes it: "The unemployment rate has increased for men, but it has recently converged towards the unemployment rate for women. For women, since ChatGPT was released, the unemployment rate has been moving lower, but then more recently it has increased slightly again." So this is kind of weird and messy data, but it's not at all what you'd expect to see.
If AI were beginning to rapidly automate entry-level college-graduate jobs, held by both men and women, you would probably see a very rapid rise in unemployment among young college graduates. That's not what you're seeing. Slok gave this chart a simple title: "No signs of AI having a particular impact on the unemployment rate among US college graduates age 22 to 27." That's some compelling data suggesting AI is not stealing these jobs, but it's not a slam-dunk case by itself. Why? Because Slok is not comparing college graduates to non-college graduates. If you look closer at these claims, the proponents of AI displacement theory often talk not about the overall unemployment rate for young people with college degrees, which, as we just saw, moves around noisily but isn't unusual compared to past periods. They would say that what matters is, relatively speaking, what is going on with the unemployment rate for recent college graduates versus workers of the same age who don't have a college degree. Because the idea is that AI automation is going to hit college graduates harder than non-college graduates right now. And what proponents of AI displacement argue is: look, it's not that the unemployment rate for college graduates is unusual; it's that it's higher and rising faster than it is for non-college graduates, and that's new. Traditionally, looking back over many decades, non-college graduates have had higher unemployment rates. Now we've seen an inversion, where college graduates' unemployment is actually outpacing the unemployment rate of their peers without a college degree. For the proponents of the idea that AI is stealing entry-level jobs, that is one of their big pieces of data. Now, it turns out this claim is testable too, and economists have looked at it.
Now, I learned about some of these studies through an article that came out last week, written by Rogé Karma in The Atlantic, called "Young People Are Falling Behind, But Not Because of AI." It's a good article because Karma talks to an economist named Nathan Goldschlag, who has been studying this trend and, in a series of papers, has found some pretty informative results. For example, in a recent paper co-authored with Adam Ozimek, Goldschlag found an alternative explanation for the differential unemployment rates between those with a college degree and those without. I'm going to read from the Atlantic article summarizing this data: "The economists Adam Ozimek and Nathan Goldschlag recently took a deeper look at the data and found that a significant number of younger workers without college degrees had simply given up looking for a job, artificially improving the unemployment rate for young workers without a degree and thereby giving the appearance that college graduates were doing uniquely poorly." Karma, in his Atlantic article, calls this a "statistical mirage." So this "oh look, things got better for people without college degrees but worse for those with" turned out to be a statistical mirage. It was actually people without college degrees leaving the active job market, which takes them out of the standard statistics used here. All right, so what happens if you instead use overall employment metrics, which do not remove people who have stopped looking for a job but cover the entire population? What if we focus on young people and just ask: what percentage of these people have jobs? Guess what Ozimek and Goldschlag found when they looked at that data: those without college degrees are actually doing worse.
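The "mirage" here is just an artifact of how the headline unemployment rate is defined: people who stop searching leave the labor force and drop out of the calculation entirely. A minimal sketch with hypothetical numbers (not BLS data) shows how discouraged workers can improve the measured unemployment rate even though nobody found a job:

```python
def unemployment_rate(employed, unemployed):
    # Standard unemployment rate: unemployed / labor force.
    # Only people actively looking for work count as "unemployed";
    # those who give up the search leave the labor force entirely.
    return unemployed / (employed + unemployed)

def employment_population_ratio(employed, population):
    # Counts everyone in the cohort, including those who stopped looking.
    return employed / population

# Hypothetical cohort of 1,000 young workers without degrees:
population = 1000
employed = 850
unemployed = 100  # 50 others are already out of the labor force

before = unemployment_rate(employed, unemployed)

# Suppose 40 of the unemployed give up looking. Nobody gained a job,
# but the measured unemployment rate "improves":
after = unemployment_rate(employed, unemployed - 40)

print(round(before, 3))  # 0.105
print(round(after, 3))   # 0.066
# The employment-population ratio is unchanged, exposing the mirage:
print(employment_population_ratio(employed, population))  # 0.85
```

This is the same logic behind looking at overall employment shares instead of the unemployment rate: the broader measure doesn't move when people merely stop searching.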
So it's actually the opposite of what the displacement proponents were saying, which was: look, things are getting proportionately worse for people with college degrees because AI can take their jobs. No, the job market has been getting worse for people without college degrees; there was just a statistical mirage, because they were dropping out of the market altogether, that made it seem like that wasn't happening. Goldschlag has a succinct summary, quoted in the Atlantic article: "This makes me doubt that this is an AI story." I love the low-key delivery there. But we're not done yet, because Goldschlag wrote another recent paper, this time co-authored with the economist Sarah Eckhart, in which they analyzed hiring trends in different economic sectors. They used five different measures of exposure to AI automation, five different ways people have assessed how exposed a particular job is to AI coming in and taking it over, and they looked at the impact of a sector being more AI-automatable on employment trends in that sector. So again, if AI is stealing entry-level jobs, what we should see is a rise in unemployment and a decrease in hiring in the jobs most exposed to AI. What did they find? I'm going to quote them from the Atlantic article: "No matter how we cut the data, we didn't see any meaningful AI impacts on the labor market." So there was no signal that AI exposure made a particular type of job any more likely to see less hiring. In fact, another economist quoted in the Atlantic piece, Ernie Tedeschi, showed the opposite: in the period since 2023, unemployment has increased for the professionals who are least exposed to potential AI automation. So there's a very messy job market out there. That's the conclusion: a very messy job market. Coming out of the pandemic, of course it was mixed up.
The pandemic was a generational displacement and disruption; lots of things got shaken up. We've talked about this on the show before. In the white-collar world there was a lot of overhiring during the pandemic, especially at tech-related firms. Interest rates were near zero, so you could borrow money very easily, and people were hiring up a storm because there was so much interest in technology-based solutions, especially cloud-based solutions. There was a lot of hiring, and now there are a lot of corrections, because firms don't need that many people: interest in those services is down and interest rates are back up. Interest rates going back up is alone enough to cause a lot of cuts, because it's as if someone made your operating costs that much more expensive, and you have to offset that. It's been a messy market. It's affected white-collar workers and non-white-collar workers alike. All of this data shows it's messy. But no matter how we slice it, when we look for a specific signal showing that AI is beginning to slow down entry-level hiring, no matter how you come at it, we do not see that signal. In fact, we often see the opposite signs showing up. All right, let me step back here. In this type of analysis, I don't mean to be dismissive or 100% skeptical or reactionary. I'm not claiming that AI is not one day going to cause big disruptions in the job market. It very well could. And it's possible that these disruptions will in fact start with entry-level jobs, with AI-exposed entry-level hiring slowing down first. That may very well be what we see in the future, but it is not happening right now. And the reason this matters is that there have been a lot of articles and discussions and interviews where this idea is referenced as if it were a fact. Now, I know I've made this point before, but I'm going to make it again.
I think what's going on in a lot of this discussion and coverage of AI is that commentators will latch onto a theory or observation or claim not because they think it's correct, not because they've checked it the way they might with another story, but because they believe it is directionally true. So: maybe AI is not, right this moment, actually taking entry-level jobs, but it's directionally true, because people need to be worried about AI's impact. And so I will publish an article, or do an interview, or make a claim online about something that might not be happening right now, because my ultimate goal is not to get to the truth; my ultimate goal is to influence how people are thinking about this. I know better: they need to be more worried about this than they are. So it's directionally true that we should worry about AI's impact on jobs, and I will throw out and promote any claim that moves in that direction and helps to instill that belief. I think there's a lot of that going on, and this tension between directionally true and factually true is something I've seen in a lot of the major stories of the last decade or so: the shift towards "my job as a commentator is to shape how people understand and act," not necessarily to get to the truth of what's actually happening. But this is a problem, because when you lean into what's directionally true, what feels true, what matches the vibe you're feeling, instead of trying to figure out what is actually true, two things happen. One, you erode public trust. The more AI commentators lean into what's directionally true, the more people pick up over time: hey, a year ago you said this was going to happen, and it didn't. Last month you were so confident this was the case, and it turned out that, no, actually Jack Dorsey was just AI washing. You do that enough times, and people stop listening.
And then, when there are things that really need to be reported, because these are massive companies with huge implications for all sorts of sectors of our society and economy, people are no longer listening to you. So trust matters, and when you trade accuracy for directional trueness, you begin to erode it. The second thing that matters is that it lets the frontier AI companies get away with a lot. The more we lean into the most bombastic coverage possible, because it matches our vibe that this is a big deal, the more it allows the frontier AI companies to keep raising money, probably way more than they need, without being held to normal scrutiny. Because if you believe this is the most disruptive technology of the last two centuries, then I don't care about your EBITDA, I don't care about your debt, I don't care about your revenue: I want to be involved in the company that's going to replace all the jobs. That's a big problem, because if these companies aren't actually able to become the fastest-growing companies in the history of companies, it's going to have a hugely negative impact on the American stock market if and when that bubble bursts. It also lets them get away with doing stuff: the sloppy products they're jamming down our throats, the problems and stresses they're causing in all sorts of different sectors. It gives them an aura, like they're wearing a cloak of massive disruptive inevitability: hey, what can we do? Everything is changing; we're just trying to hold on; if it's not us, it'd be someone else. As opposed to treating them like a normal company: what is this nonsense product you just released? Why do I have to use this? What's your claim? Convince me this is useful. Why are you taking up all this energy and water? Why are you building these things? We don't hold their feet to the fire.
As long as we're focused on the vibe of the disruption, we're addicted to the idea of this either dystopian or utopian change, as opposed to seeing these as real companies doing real things that need accountability. I'll get off my soapbox now, but I wanted this minor correction of one thread among the many woven into our coverage of AI right now to stand in for a bigger point: it's not about what's directionally true, it's about what is actually true. It is not our job as AI commentators to influence how people think about something; it's to inform them and to trust them to think the right way once they know what's really going on. We have to hold these people's feet to the fire. We have to get past our own anxieties and get to the on-the-ground truth. All right, sermon over. That's enough preaching for today. So remember, until next time: care about AI, but not everything you read about it.
Episode Title: AI Reality Check: Is AI Stealing Entry-Level Jobs?
Date: April 9, 2026
In this episode, Cal Newport scrutinizes the widespread belief that AI is significantly reducing entry-level job opportunities, particularly for recent college graduates. Sparked by alarming headlines and industry surveys, Newport investigates whether there's concrete evidence supporting the notion that AI is actively "stealing" jobs. Drawing upon recent economic research and journalistic analysis, he challenges the prevailing narrative, urging listeners to discern between what is "directionally true" and what is factually accurate.
Torsten Slok’s analysis:
“The data does not show any sign that unemployment among younger workers is structurally higher because of AI.” (03:55)
College graduates vs. non-graduates:
“No signs of AI having a particular impact on the unemployment rate among US college graduates age 22 to 27.” (07:23)
“A significant number of younger workers without college degrees had simply given up looking for a job, artificially improving the unemployment rate for young workers without a degree and thereby giving the appearance that college graduates were doing uniquely poorly.” – Summary from The Atlantic article (13:01)
“This ... turned out to be a statistical mirage.” (13:11, quoting Rogé Karma)
Occupational data:
“No matter how we cut the data, we didn't see any meaningful AI impacts on the labor market.” (16:12, quoting Goldschlag/Eckhart from The Atlantic)
Counterintuitive trends:
“It's been a messy market. It's affected white-collar workers, it's affected non-white-collar workers... But no matter how we slice it, when we look for a specific signal showing that AI is beginning to slow down entry-level hiring, no matter how you come at it, we do not see that signal.” (19:36)
Newport warns against accepting claims that “feel directionally true” but lack present evidence.
Perils of this approach:
[22:03]
Quote:
“This shift towards my job as a commentator is to shape how people understand and act, not necessarily to try to get to the truth of what's actually happening. But I think this is a problem because... when you lean into what's directionally true, what feels true, what matches the vibe that you're feeling versus actually trying to figure out what is actually true... you erode public trust.” (22:57)
Market impact:
“If you believe this is the most disruptive technology in the last two centuries, I don't care about your ebitda, I don't care about your debt, I don't care about your revenue. I want to be involved in the company that's going to replace all the jobs. That's a big problem.” (24:40)
“It's not about what's directionally true, it's about what is actually true. It is not our jobs as AI commentators to influence how people think about something. It's to inform them and to trust them to think the right way once they know what's really going on.” (27:11)
“Two things happen. One, you erode public trust... people stop listening. And then when there's things that really need to be reported... people are no longer listening to you.” (23:18)
“Care about AI, but not everything you read about it.” (28:09)
Cal Newport thoroughly deconstructs the commonly held belief that AI is drastically reducing entry-level job opportunities. Through data and expert analysis, he demonstrates that present labor market fluctuations are not uniquely or primarily the product of AI, and that misleading commentary threatens both public trust and industry accountability. His closing admonition is clear: focus on facts, not fears, and maintain a critical eye toward speculative or hype-driven reporting on AI and the job market.