A
Another look at the AI jobs apocalypse. In the last few months, several companies have announced big layoffs and cited artificial intelligence as a main driver: Coinbase; Block, the company previously known as Square; Salesforce. This comes at a time when executives at many of the top AI firms, like OpenAI, Anthropic, and Microsoft, have predicted mass disemployment and even permanently elevated levels of unemployment as AI learns to do every single last task in this economy. On this show, we talked to Atlantic staff writer Josh Tyrangiel about his cover story in the magazine laying out exactly how this so-called AI jobs apocalypse might unfold. If you put all of this together, the corporate statements, the predictions, the media analysis, it really does feel like this might be one of those once-in-a-millennium moments where everything is going to change. But today I want to consider the possibility, maybe even the probability, that all these people are wrong. First, when you look at the companies announcing these layoffs, almost all of them have something in common: they've all lost at least one third of their equity value in the last five years. In the stock market, it is very normal for companies with weak stocks to announce layoffs when they're mired in a slump.
Many of these companies, I think, are surely using artificial intelligence as an attractive excuse to push attention away from what other people might otherwise recognize as poor corporate performance. Second, about those AI executives who have predicted mass layoffs on account of their own technology: a larger survey of 6,000 CEOs, CFOs, and senior finance managers about artificial intelligence, published earlier this year, in 2026, found that 70% of these executives said AI would either add jobs or have no impact on their company's hiring. There's even some evidence that software engineering jobs, perhaps the jobs most implicated by the current AI models, are growing faster than the rest of the labor force. So for all the talk about the AI jobs apocalypse, there is simply no strong evidence from either economic history or current economic data that such a thing is happening. Alex Imas is an economist at the University of Chicago who's written some fantastic essays on why the AI jobs apocalypse is so unlikely. Today we talk about his work, his skepticism of the apocalypse narrative, but we also talk about a subject that's been core to Alex's work for many years: human desire. The economy is a register of human desires. GDP, gross domestic product, is the sum of what people spend, what they demand, what they desire. And in a world where artificial intelligence automates some tasks, it might not destroy work so much as move dollars toward new desires in new sectors of the economy. I'm Derek Thompson. This is Plain English. Alex Imas, welcome to the show.
B
Happy to be here. Thanks. Thanks for having me.
A
Alex, it feels like you're suddenly everywhere these days. You're like the most in-demand economist on the issue of AI and jobs, so I'm very, very grateful that you agreed to stop by before you were suddenly everywhere. At UChicago, what were you studying?
B
Well, I grew up studying behavioral economics, so I was always fascinated by human psychology and how that relates to what people actually want in the economy, how the economy adjusts to those desires. And basically, most of my research up until the past few years, when I've been focusing more on AI, has been on empirically documenting how psychology enters economic models.
A
I love the idea that a background in studying the economics of desire is fruitful for understanding the future of AI. I think some people who aren't familiar with your work might not necessarily understand that intersection, but by the end of this hour, they absolutely will. So there's a widespread feeling, I think, in the media among AI builders and technologists that artificial intelligence could wipe out tens of millions of jobs and lead to a lasting, elevated level of unemployment, which is something that no technology has done before. I mean, we are living now in a world with more technology than existed in any previous decade, and the unemployment rate is still under 5%. You and I are going to spend most of this interview talking about why we think the prediction of an AI jobs apocalypse is implausible. But can we begin by making the strongest possible case that this time, in fact, is different? Like, what do you think is the smartest version of the argument that an AI jobs apocalypse really could happen?
B
So this argument that technology can wipe away jobs is actually very old. Let's go back to 1820 for a second here. Ricardo, who's one of the classic economists that everybody who even took undergrad economics knows about, has a really nice chapter called On Machinery. Ricardo was living through the Industrial Revolution, and he started out, like all people in the capitalist class, thinking that, look, obviously technology is a good thing. It will increase productivity, it will decrease the price of consumer goods, everybody's going to be better off. And in this one chapter, he changed his mind. He said, look, actually, if you have people working and you have technology replacing these people, what's going to happen to these people? This is labor, basically. Well, now these people are going to be out of work. What's going to happen to the economy? What's going to happen to circulating capital, as he calls it? And so he highlighted this idea that the Industrial Revolution could actually be kind of bad for most people. And then, you know, the world churned and continued, and his prediction did not come true. But in 1989, Paul Samuelson wrote a paper called Ricardo Was Right, arguing that technology can in fact produce a jobs apocalypse. So fast-forward to late 2025, 2026, and the best version of the argument, I think, was made by Philip Trammell on his Substack. Basically, his argument is that technology makes things very cheap to produce. And because it makes things cheap, and because AI is this hyperintelligent system, it could create lots of new varieties, lots of different types of goods that we really can't even imagine. So, like, you know, video entertainment that's fully immersive.
Concerts that have no human performers, necessarily, but a fully immersive experience with virtual reality and all of these sorts of things. Video games that we've never heard of, delicious food that hits all of the flavors that we want to eat, flavors we maybe hadn't even considered that we wanted in the first place. Anyway, it creates all of this variety, and all of this variety gets people to spend all of their money on what's created by technology, such that the part that's produced by human labor gets less and less and less money, and the labor share just goes to zero.
A
Yeah, it's kind of like, all right, what do I spend money on in any given day? I spend money on food, I spend money on entertainment, I spend money on transportation. Okay, what if we imagine artificial intelligence being able to make all of my food, making all of my entertainment, being in charge of all the transportation because all the cars are self-driving? It's essentially like, if AI could do every single task in the economy, then wouldn't consumer spending end up flowing entirely to these AI firms? Right. So there's the idea that artificial intelligence is going to create something that we've never seen before in economic history, which is a permanent technological replacement of labor. You have a very interesting essay where you offered a counterpoint to this idea that anything that can be automated will always end up automated. And that story begins with Starbucks. What happened at Starbucks, and why does it matter?
B
So Starbucks during the early 2000s thought, look, we want to improve the customer experience. We want faster throughput time. Somebody comes in, they want to order a cup of coffee, let's get them that cup of coffee and get them out of the store as soon as possible so the next person can get their cup of coffee. Let's make all of this standardized so all the coffee tastes the same. And so they implemented all of this automation within the stores. And a few years later, the CEO of Starbucks decided to reverse the whole thing. He said, actually, we went way too far with the automation. We've got to get more baristas, we've got to bring back people writing the names of the customers coming in. And we have to go back to the original vision of Starbucks, which, if you remember how Starbucks started, was a personalized coffee chain where people behind the counter knew your name. It was handcrafted lattes, handcrafted coffee, and things like that. Let's go back to that. So it was a reversal of the automation story: you could actually automate every single objective part of the experience of getting that coffee. They did that, and then they went back.
A
And so the story here is like, there's some people saying that as AI is able to do more and more, it's simply going to replace human activity entirely in those domains. But that story sits alongside another, more complicated reality, a reality that I think clicks very much into your research on human desires, human needs. People sometimes desire a certain kind of friction. They desire a certain kind of human in the loop, if you will, in many different industries. Such that just because a certain task or job can be automated does not guarantee that it will be. Is that the job, so to speak, that your Starbucks story is performing here?
B
Exactly. So I call this a relational element to a task, a relational element to a service, a good, or something like that. The definition is really simple. Let's say a person produces basically the exact same output as the technology. The same cup of coffee, the same sort of performance, whatever; the actual output is exactly the same. And you ask the person, look, would you be willing to pay more for the human version or the strictly technological version of it? The relational part is, hey, people actually want the person in the loop. Their desire is built on the idea that they want that human element. And my argument is pretty simple: if that desire is baked into people's preferences, it will be catered to, because this is very simple economics.
A
So in a world where artificial intelligence was getting better at writing code and getting better at automating other tasks, where it was essentially producing commodities, what are the kinds of jobs in this relational sector you're describing that you think would become more common? Are these things like therapists, yoga instructors, personal chefs? Help me fill out this category of relational economics that you think might grow in an economy that's more inflected by AI.
B
Yeah, so the archetype, when I say relational or make this description, people think performer or entertainer or something like that. Right. But I think the relational element is actually present in many, many different jobs that we currently classify as something completely different. So teachers have a relational element. The element of education that is consumed and is effective doesn't seem to be the fact that, look, if I expose you to the information, this will get into your head and allow you to understand and learn the lesson. We hopefully learned this during COVID, right? We gave access to information to everybody, and people did not learn. There's something about the human in the loop that's really important there. So education is one of them. Healthcare is another, and this is data that I'm actually collecting, actually mapping these relational components. And we'll get into this later in the podcast, hopefully: there are different tasks in healthcare, there are different tasks in teaching, there are different tasks that a financial advisor has. Right. But within those tasks that together form the job, there's a relational component. That relational component means that the job, because it's all of these tasks together in a bundle, requires the human to be in the loop if it is to deliver this sort of value that I'm talking about.
A
What I want to do next is walk through what economists like you consider to be the main flaws in the AI jobs apocalypse narrative. And the first main flaw that I want to discuss with you is this concept that you already alluded to when you brought up Ricardo, called the lump of labor fallacy. What is the lump of labor fallacy, and how does it apply to the current debate over AI and jobs?
B
So the lump of labor fallacy basically means that what we see now is all there is. The types of jobs that we have are static. These are the only types of jobs that exist. So think about Ricardo in 1820. If you told Ricardo that almost every single job that he knows about will be automated by 2026, what do you think his prediction would be for the prime age employment rate? What do you think it would be?
A
Right. 5 or 10%. Right. Which means 90% of 40 year olds are out of a job, essentially.
B
Right. And what is it today? It's actually pretty close to the all-time peak. I think 2000 was the previous peak, and we're at the second peak. Right. So if you're talking historically about the effect of automation on labor, if anything, the prime-age employment rate has increased as of 2026. So you then have to ask, what's happened since then? And what's basically happened is that new jobs have been created. That's the lump of labor fallacy in a nutshell: the types of jobs that we see today are not the sorts of jobs that we will see tomorrow. Most of the jobs that we have today didn't even exist in 1940 or 1930 or something like that.
A
And why are new jobs created? Is it about the idea that as one task gets cheaper, like as agriculture becomes automated, it frees up humans and frees up dollars to chase other desires. And then those dollars and those humans sort of create things like the modern pet care industry and healthcare and education and private tutoring and all sorts of industries that could not possibly exist in the 1820 world where 80% of the economy has to be employed in agriculture. Like how, as an economist, can you help me understand like why lump of labor is a fallacy?
B
That's exactly right. So what does technological change really do? It makes things cheaper to produce, right? For the most part. So agriculture employed a huge share of the population. And when agriculture was automated, kind of ironically, it became a smaller part not only of the employment share, because many, many people were no longer employed in agriculture, but also of GDP. It became less economically important. And that part is really key. It's not because we're eating less than we did in 1820; we're eating way more than we did in 1820. The share of GDP of agriculture is much, much smaller because food is just cheaper to produce. So if you look at a graph of share of GDP, agriculture just shrinks. And it's not because we're eating less, it's because it's cheaper. And that's it. So what happens to human beings? They get richer. Right. Why do they get richer? They're spending less of their money meeting their basic desires. You can only eat so much food. I know there's very fancy food, there are crazy Michelin-star restaurants, but for the most part, people are spending their money satiating their desires for food, and afterwards they have a bunch of money left over. So what are they spending that money on? That is the economics of structural change, which is the topic that we're talking about today: what else are they spending money on? Because the dollar has to go somewhere. Well, the supply side of the economy caters to the desires that become available once the basic desires are met. So the question is, what kinds of jobs will be produced when more basic desires are met through automation?
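The arithmetic behind this structural-change story can be sketched in a few lines. This is a toy illustration with made-up numbers, not data from the episode: when demand for a good is inelastic, a large price drop shrinks that good's share of total spending, and the freed-up dollars become available for new sectors.

```python
# Toy model of structural change (all numbers hypothetical).
# Food demand is inelastic: when price falls, quantity rises less
# than proportionally, so total spending on food falls.

def spending(price, elasticity, base_price=1.0, base_qty=100.0):
    """Spending under constant-elasticity demand: Q = base_qty * (p/base_price)**(-elasticity)."""
    qty = base_qty * (price / base_price) ** (-elasticity)
    return price * qty

income = 200.0                      # total budget, arbitrary units
food_before = spending(1.00, 0.3)   # elasticity 0.3: quite inelastic
food_after = spending(0.25, 0.3)    # automation cuts the price 4x

share_before = food_before / income
share_after = food_after / income
freed = food_before - food_after    # dollars released to chase new desires

print(f"food share of spending: {share_before:.0%} -> {share_after:.0%}")
print(f"dollars freed for other sectors: {freed:.1f}")
```

The same mechanism drives the agriculture chart described above: the price falls faster than the quantity eaten rises, so the sector's share of GDP shrinks even as consumption grows.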
A
So, Alex, the first thing we talked about is the lump of labor fallacy. There's another pillar of this argument that the AI jobs apocalypse is very unlikely, and that is this term that's bandied about in artificial intelligence all the time, which is called Jevons paradox. What is Jevons paradox?
B
So a lot of people think about Jevons paradox as: when something decreases in price, people want more of it. That's just supply and demand; there's no paradox there. Jevons paradox is something different. The term was first used to describe what happened when steam engines became very, very efficient. They required less coal. So if steam engines required less coal, what do you think would happen to the demand for coal? You would think people would want less coal, because in order to run a steam engine, you don't need as much coal. Well, what ended up happening is that coal demand exploded. Why? Because it became much more economically viable to use steam engines, and everything else that you would use steam for. Before, it took too much coal to justify installing a steam engine in this particular town; now we can install one, and now we need more coal. So this is the paradox: something becoming more efficient to run creates more demand for the input used to run it.
A
I see. So this would be the idea that as software or intelligence becomes cheaper, rather than decimate software employment, what you might see is actually an ironic increase in software employment, because as the thing becomes more efficient, demand is increased for it. Is that right?
B
Yeah, that's exactly right. So this is the idea that, let's say you had 10 programmers writing code in your company, and now you have one programmer able to do the job of 10. If the demand for the software were exactly the same as before, then you would fire nine people. Right. But if one programmer doing the job of 10 makes the code cheaper on the market, basically makes the product that the company is selling cheaper on the market, then many, many more people might be buying the product. And if demand can absorb the increase in potential supply, then you might be hiring more software engineers rather than fewer.
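There's a quick back-of-the-envelope way to see when the Jevons effect wins. This is a hedged sketch with hypothetical numbers, under two standard assumptions not spelled out in the episode: a competitive market, so price falls in proportion to unit cost, and constant-elasticity demand. Then headcount rises with productivity exactly when the demand elasticity exceeds 1.

```python
# Toy model: does a 10x productivity gain in programming raise or cut headcount?
# Assumptions (hypothetical): competitive market, so price falls in proportion
# to unit cost; constant-elasticity demand Q = Q0 * (p/p0)**(-eps).

def headcount(productivity_gain, elasticity, base_workers=10):
    """Workers needed after productivity rises by `productivity_gain`.

    Price falls by the same factor, demand responds with `elasticity`,
    and each remaining worker produces `productivity_gain` times as much.
    """
    demand_multiplier = productivity_gain ** elasticity  # Q/Q0 when price falls
    return base_workers * demand_multiplier / productivity_gain

# Inelastic demand (eps = 0.5): most of the team becomes redundant.
print(headcount(10, 0.5))   # ~3.2 workers remain

# Elastic demand (eps = 1.5): cheaper software means MORE programmers.
print(headcount(10, 1.5))   # ~31.6 workers
```

At elasticity exactly 1, the two forces cancel and headcount stays at 10, which is why the empirical elasticity of demand for software is the crux of the debate.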
A
And the reason I started with lump of labor and Jevons paradox is that lump of labor explains economic history up until 2026. It explains why more and more technology has coincided with still-high levels of total employment: as one sector becomes more efficient, whether it's agriculture or manufacturing, we find other needs, other desires for humans, and labor flows toward those new needs and desires. I think Jevons paradox is really interesting for the year 2026, because there are lots of indications that as the software industry becomes more productive with these new AI tools, many companies seem to be increasing their software hiring. Have you seen that as well?
B
Yeah, I've seen exactly that. It depends on what dataset you look at, but of the datasets that I've seen that track software engineering hiring, it actually seems to be recovering. There was a big downturn; there was overhiring during COVID and things like that, so there's what's called a COVID overhang, and there was a correction there. But what we've seen in the data is that it's not going down since these agentic coding tools have been introduced. If anything, it's starting to recover and go up, which I think is what people on the Internet call a narrative violation.
A
So we've got the lump of labor fallacy and we've got Jevons paradox. The next concept that I think is worth discussing here is O-ring jobs. What is an O-ring job, and what is its connection to this debate about the AI jobs apocalypse?
B
So a job is a bunch of things that people do, right? You're not just the podcast host talking to me; you also have to send me emails, you also have to talk to your production team. There's a bunch of different tasks that you have to do, right? And that's typical of almost every job. A radiologist, for example, doesn't just look at the scan, and that's the entire job. They have to talk to the patient, they have to talk to the rest of the care team, they have to do a bunch of coordination. So every single job is a bunch of tasks. And this task-based model in economics has been used since, you know, David Autor has been talking about it and writing about it since the early 2000s. And the way that it works is that every task is independent: you can slot one out and it doesn't really affect the rest of the job. So you automate a task, that's fine; you have nine other tasks out of 10. The O-ring model of jobs basically says that actually these tasks are really, really interrelated. So if you do one poorly, the rest of the job is kaput. Think of cooking a meal, right? You're cooking a meal, you're putting in these ingredients, and everything is perfect. This is a top, top, top meal. And then you over-salt it. Can't eat it, right? It's done. You screw up one thing and it's done. So why is it called the O-ring? Many of your audience were probably not born in the late '80s, but this is a riff on the Challenger disaster. The Challenger disaster was this rocket launch, a big preparation, NASA was rushing for it. And it was a really, really tragic day, because the rocket went up and then it exploded. What ended up happening was that the entire operation was going swimmingly, but there was this one little part called the O-ring, which was keeping the fuel from getting into the fire. And it was a very cold day.
It was so cold that this O-ring cracked, the fuel got into the fire, and the whole thing blew up. So it's this idea that the tasks within a job are very interdependent with one another, so you can't really think of any task in isolation. There's a whole bunch of complementarities there.
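The formalism behind this discussion is Michael Kremer's 1993 O-ring production function, in which output is proportional to the product of every task's quality rather than their average, so one botched task collapses the whole thing. A minimal sketch, with a task list and numbers of my own invention:

```python
from math import prod

def oring_output(task_qualities, scale=100.0):
    """Kremer-style O-ring production: output is proportional to the
    PRODUCT of task qualities (each in [0, 1]), not their average.
    A single bad task drags down everything."""
    return scale * prod(task_qualities)

# Hypothetical radiologist job: read the scan, talk to the patient,
# coordinate with the care team, write up the report.
all_good = [0.95, 0.95, 0.95, 0.95]
one_botched = [0.95, 0.95, 0.95, 0.10]   # one task done badly

print(oring_output(all_good))     # ~81.5
print(oring_output(one_botched))  # ~8.6
```

Under an averaging model, botching one of four tasks would cost about a quarter of the value; under the multiplicative O-ring model it destroys almost all of it, which is why a human overseeing how the tasks integrate stays valuable.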
A
And the reason that this is relevant for artificial intelligence, stop me if I'm getting this wrong, is that artificial intelligence is good at doing tasks. It can write a memo, it can analyze a video, it can really quickly transcribe a piece of audio. It's very good at these discrete tasks. But jobs, as you said, are bundles of tasks. And if most jobs are O-ring-style jobs, that is to say, if one little failure ruins an entire project, then just because artificial intelligence can automate 4 out of the 5 main tasks of a job doesn't mean it takes away the job. In fact, the human needs to be in the loop, because you can't have an O-ring-style failure like the one that exploded the Challenger. You need to make sure that there's a person watching over how those tasks integrate. So do I understand it right that the concept of O-ring jobs is this idea that even as we see AI get better at doing discrete tasks, that won't necessarily translate into millions of lost jobs, because most jobs have this proverbial O-ring that keeps them from being entirely sloughed off to a computer?
B
So now we have to go back to Jevons paradox. In principle, yes. For the individual worker, if all of these tasks are interrelated, that means that even if the human is left doing only one task, they may in fact be paid more. Why? Well, now they have all of this time to focus on that one task, and the quality goes up. So wages can actually increase in this model. By the way, Avi Goldfarb and Joshua Gans have a really nice paper about this. So why may this still lead to unemployment? Well, if one person becomes much more productive and nine people are becoming redundant, you might still get those nine people fired. So the key thing for the O-ring model to keep labor constant, or actually increase it, is that demand can absorb the increased productivity. And this is why the statistic called the elasticity of consumer demand is so important here. What does it measure? It says, look, if the price changes, how much more do people buy? If the price decreases, do people buy more of this good? Some goods are very inelastic: insulin is a very inelastic good. You either need insulin or you don't; you don't need more insulin. But other goods, as we've seen with software programming and a bunch of other goods, end up being very elastic. These elastic goods respond to decreases in prices, and so the company might be hiring more people even though they're becoming more productive.
A
Can you give an example here? Because I feel like I 80% understand where you're going, but I want to understand the full 100%. So what's an example of how elasticity of consumer demand would help us understand the degree to which AI is going to lead to unemployment or lead to increases in jobs?
B
So let's say you take legal software. I'm a company that makes legal software, and right now it takes a lot of programmers to generate the software, and the software is prohibitively expensive for everyone except the top legal houses. So I have demand, and I'm employing the software engineers. But all of a sudden, one software engineer can do the job of 10 software engineers. So my costs go down within the company. And if it's a competitive industry, let's imagine it's a competitive industry, now I'm going to be selling the software for less. So now the software is cheaper. All of a sudden, not just the big legal houses can afford it; me and you can start affording it. If me and you can start affording it, then we're going to buy it, and lots of other people are going to buy it. So the company actually has this boon of demand. And if this boon of demand is high enough, well, I'm not going to be firing nine people, I'm going to be hiring 10 people in order to meet that demand. So this is the idea that supply creates demand, in the sense that if the price goes down enough, demand will respond so much that hiring will actually increase.
A
It's funny, that reminds me of an old framework that I had for AI and jobs that I thought of as horses versus spreadsheets. If you were an economist in, let's say, the 1960s, and you knew that someone was working on digitized spreadsheets, like Excel, essentially, then you might say, oh my God, I know what happened the last time we made something really, really efficient and cheap. It was the internal combustion engine, the tractor, and horses. We had millions and millions of horses on farms, and then we invented the tractor and it entirely wiped out the horse. So now that we have this digital, efficient spreadsheet, it's going to entirely wipe out all of the spreadsheet jobs, because 10 people can do the work of 100 accountants, and practically nobody is going to work with spreadsheets in the future. Right? That's what a sort of naive economist might have said in the 1960s. But of course, people aren't horses, and spreadsheets aren't the tractor. And what ended up happening is not that Excel reduced the number of people working with spreadsheets. In fact, Excel was cheap enough and efficient enough and useful enough, and sometimes annoying enough, but whatever, for basically everyone in the white-collar economy to be working with spreadsheets. The number of spreadsheet workers today is probably 100x what it was in the 1960s. So rather than the proverbial population of horses going down by 99%, it was more like this technology made the population of horses go up by a factor of 100. And so is this a way in which the supply of a new technology can essentially amplify demand, such that while it might seem initially like it's substituting for jobs, it's actually just making a certain kind of work much more efficient and therefore much more in demand?
B
Yeah, so that's exactly right. So let's think about the spreadsheet. This was happening in the '70s and '80s, and at the time, Gunnar Myrdal, a Nobel Prize-winning economist, a very famous economist, along with a bunch of other economists, actually wrote to the White House. They started a commission saying that computers and IT were going to lead to a jobs apocalypse and the government needed to act. If we do not act, digitized spreadsheets and everything else are going to wipe out the labor force. Right. So this is the idea that, look, there's a lump of labor, this lump of labor is static, and the only jobs that we can think about being done are the ones in front of our eyes. This is an old idea. And I want to be very careful here. There's so much uncertainty going on; you have to have wide confidence bands on everything. A lot of people will respond to this argument by saying, look, AI is not the spreadsheet, it is not the computer, this time really is different. So I want to say to those people that my beliefs have a wide confidence band; I have a lot of uncertainty. But at the same time, if you look at history, especially starting with 1800, there's been so much automation. Every single thing that is around us right now, look around you, is produced by a machine. In 1800, it was not; it was produced by a human being. So if somebody from the 1800s were teleported to this room, looked around the house, and realized that everything was made by a machine, they would expect to leave the house and find everybody on the street unemployed. And it's important to keep that in mind.
A
I want to be clear that I feel like we are telling two distinct stories here. I think they're both true, but I think they're distinct. One is the story about the spreadsheet, and one is the story about, let's call it, pet care. The story about the spreadsheet is that technologies like Excel made spreadsheet tech so efficient that the number of people working with that technology actually increased. And that's one possibility with artificial intelligence: that generative AI just might end up being like the new Microsoft Office, as limiting as that might seem. What I mean is, it's something on the computer of tens of millions, hundreds of millions of people, and they're using it to become more productive. That's one vision. But there's another vision, which is that, as we saw in the agricultural industry, technology does in fact destroy, so to speak, some jobs. It does fully automate many jobs, certainly the jobs of horses. But we find new work for the people who are displaced by those jobs, because you have growing GDP. GDP is also GDI, gross domestic income. And that dollar, once it leaves the agricultural industry, is going to go look for some other human need, some other human desire, and eventually it's going to find its way into pet care. Because once you have enough food, you have enough clothes, and you have a roof over your head, well, now you can afford to spend $500 billion on pet care as a country. But I just want to be clear: those are two distinct stories we can tell about why the AI jobs apocalypse is unlikely, right? Like, the elasticity of consumer demand and the lump of labor fallacy are providing different explanations for how economies change, correct?
B
That's exactly right. One way to think about this is short run versus longer run. In the short run, the spreadsheet analogy predicts that things will be slower than you would anticipate as far as jobs being destroyed, this creative destruction element. So Luis Garicano has this incredible paper called Weak Bundles and Strong Bundles, and it essentially argues that some jobs are strong bundles, in the sense that, look, if I'm a computer programmer or I'm a doctor or something like that, it is really, really hard to take me out of the loop. Other jobs, let's say a truck driver or something like that, it is much easier to take the person out of the loop in that sort of job.
A
Just to slow down and explain: why would a radiologist be hard to take out of the loop, but a truck driver would be easy?
B
Well, because the radiologist, in order to do their full job, which is to explain things to the patient, manage the care team, and everything like that, requires the automated part of the job. In order to do the entire job, they need to be able to read the radiology report. So you can't just have the radiologist talking to the patients; about what? What are they talking about? They need to be talking about the actual reading of the chart. So that means these tasks are very interrelated; you need one and the other in order to do the job. But on the other hand, take a truck driver. A truck driver is not just one job. It's not just driving across America and delivering the products. You also have to handle safety around the truck, you have to actually get to the destination, you have to talk to the representative on the other side. Each of those tasks can be automated independently because they don't really depend on one another. You can install a security system on the truck and drive it automatically across the country. You can have an automated process to take in the goods on the other side. These are tasks that are now part of the same bundle, but you can easily split them.
A
And this gets back to your idea, I think, of relational work. You're talking about the radiologist as being inside of a thick network of relations. They have to talk to other doctors about managing care. They have to talk to the patient. They have to talk to hospital admin, maybe about the electronic health record. And so the more relational work, you're saying, might be harder to fully automate than jobs that exist within their own sort of pocket and aren't as relational to other parts of that industry. Another way that you look at this. Please, jump right in.
B
Yeah, so there's a couple of ways that you're using relational here. Relational could mean that these tasks are interrelated, and relational could mean that the human is actually valued as part of the package. Right? So I think the doctor is both. There's a lot of interconnectivity between the tasks, and the human part is valued as kind of the value proposition of the job itself.
A
Another way that you have looked at this in your work is you've asked people to consider what becomes scarce in a world of abundant artificial intelligence. Talk to me more about this concept of scarcity and how you think it might help us imagine the sorts of jobs that might be more plentiful in the future.
B
So this is based on some work that I did with Kristóf Madarász. And it's this idea that people have just an innate desire. Again, we started the show talking about desires and these desires being catered to in an economy. And this innate desire is for exclusivity and exclusion. So what we did in this paper is we basically made a mathematical model where desire is shaped both by kind of the hedonics that I get, you know, how good a food tastes, how good this shoe looks, and by another part of desire, which is: look, I have something that other people don't have. This is the scarcity part. And we ran a bunch of studies to actually test this idea. One of the studies is really simple to describe. People came into the room, and we created a product. This is a T-shirt, right? A simple T-shirt. But we told people this was a unique T-shirt, just for the experiment; you couldn't get this anywhere else. In one condition, we said: look, everybody in the room can say how much they're willing to pay for this T-shirt. If your willingness to pay is above the price, which you don't know, you get the T-shirt and you pay the price. If your willingness to pay is below, you don't get the T-shirt and you don't pay anything. And we just measure their willingness to pay. You could trace demand curves from this, right? In the other condition, we did something really simple. We said: look, we're going to roll a die, and some of you just can't buy it. I'm sorry. And what happened to the willingness to pay of the people who could still buy? It almost doubled. And we didn't do anything to the product. It's the same shirt, right? It's the same product. The hedonic part is exactly the same. We just made the shirt scarce.
And so how does this relate to the human part of the job and the kind of scarcity in the economy that we've been talking about? A follow-up study that I did with my grad student, Grey Lynn Mandel, looked at AI-generated versus human-generated products. You have the exact same product, and we told people: here's a product, it was generated by the machine; here's a product, it was created by a person. In one condition, this person would only make one of these; in another condition, we said, look, this person is actually going to make a hundred of these things. Like limited edition versus not, and the same thing for the AI. So what do you see? When there's only one product available, the human-made product gets a huge premium, and the AI product, even though there's only one, is priced much, much lower. If you increase the edition size, people are willing to pay the exact same amount for the AI product; they don't really care. But for the human product, the value decreases a lot. And what this tells you is that human-generated products can be made scarce just intrinsically, by the fact that they're made by human beings.
A
If you're right about scarcity, and you're right that, maybe somewhat ironically, a world with a lot of artificial intelligence, and particularly a lot of AI-made stuff, is going to increase the value of things made by humans for humans, what does that imply about the kind of jobs that will grow in the future, and the kind of education that might be necessary in order to do them?
B
Yeah, so if I'm right, which is a big question mark, I think many of the jobs that would exist will have similar titles. So you would still have teacher, or doctor, or maybe financial planner, something like that. But the day-to-day job will be very different, because basically what it would look like is that people's everyday jobs will be mostly completely automated, except for the part that requires the humans. So, you know, you would have a one-on-one teacher where the teacher is using a lot of AI technology to tailor the lesson plan to support the student, but they would still be there to provide that empathetic, human structure around the lesson plan. But you might also get a lot of different types of jobs that we have no idea about. Even on the relational side, if you look at the breakdown of jobs from 1940 to now, you could classify them as not relational, weakly relational, strongly relational, and a bunch of the strongly relational jobs didn't exist in 1940. Right? So what that tells me is that, if this human desire is in fact a basis, we're going to see a lot of relational jobs that we can't really fathom yet because they haven't been invented. And what does this mean for education? That part's much more complicated. I'm an educator, I'm a university professor, and I think about this all the time. What am I going to be teaching my students in 20 years, in 10 years, in five years?
A
And?
B
The answer is, I really don't know. Right? So I think as an educator, the only thing you could be right now is nimble, because every single year since about 2022 I've had to almost completely redesign my class. At first I had to redesign it so there were fewer take-home assignments, just simple stuff. But now I have to redesign it in the sense of: am I teaching students skills that they could actually use in the job market? And so far, yes; back to our spreadsheet argument, jobs aren't really changing that much yet. But I imagine that at some point they will. And in that case, I think the goal of the educator is just to be nimble and be willing to ask: is my class working for creating human capital?
A
Something I hear in that answer is that there are going to be aspects of human connection that might be more valued in an economy that is both richer and where human-made products and human-rich services are in higher demand. And one irony, or one tension point, that I'm not exactly sure how to resolve is this. I think you're right that a lot of jobs are going to go toward the equivalent of, well, you mentioned Michelin restaurants a few times. I don't know if you're a big foodie, but if you do go to a Michelin restaurant, you know how unbelievably labor intensive it is. There's always someone whose job is to look at the meniscus of the water in your glass to make sure that it's always at a certain level. You're paying for a high-touch experience. And I wonder if a richer economy is going to have more jobs that are the equivalent of that: people who specialize in a kind of high-touch human connection. That's one thought that I had. Another thought is that this is all happening at a time when human connection in many ways is in decline. People are spending less time around each other; they're spending more time indoors. And I'm not sure myself how to work out this tension: that value in the economy of the 2030s might flow toward connection, which is something that certain trends in human interaction are flowing against. That's a little bit of a complexifier for me that I might need to think a bit about.
B
So this is something I've been interested in a lot. Basically you're talking about people spending more time by themselves, and there's like a loneliness epidemic almost, right?
A
Certainly an aloneness epidemic.
B
Yeah, right. Yes, an aloneness epidemic. And this is something I've been thinking a lot about, because if you look at the amount of money spent on the relational sector, that's not going down at all.
A
Exactly.
B
So what this tells me is that our everyday non-economic relations are decreasing, but the economy is still exploiting our base desire for socialization and just making us pay for it. Which is kind of a dark side to this relational-sector story: people will be paying for relational goods, the economy will be humming along, there will be labor, but they might still be super lonely. And that part, I don't have a lot to say about. For example, there's a Psychological Science study that was just released in 2026, basically looking at the effect of chatbot use, using it as a partner, or as a sounding board, as a friend. What does it mean for loneliness? And this is not an RCT, so all the caveats apply, but it looked like loneliness increased. And this is a really depressing point, because people are essentially lonely, they desire some sort of connection, so they're trying to use this thing as a substitute. But the brain in our heads is a stone age brain. It wants to connect to other people. And so at some point, you could trick yourself for a couple days, but then you realize you're talking to a chatbot, and that makes you even more lonely than before. So I think that part is going to be really, really important when you're thinking about what it means when people use chatbots and AI as a substitute for human connection. I think people will, but I think they'll realize that it's not really going to work for them.
A
No, I think this is a really interesting tension that we landed on: that there will be enormous possible value in new relational businesses, in industries that exploit or serve the deep human desire for connection, but that will happen alongside a number of secular trends that are ironically pushing against connection, pushing toward aloneness, and pushing toward chatbots replacing human interaction. I think it's a weird future to point to, but also weirdly plausible. By way of conclusion, I want to make sure that I understand what you see as some of the major arguments against the AI jobs apocalypse. The four main things that we walked through were lump of labor, Jevons paradox, O-rings, and human privilege. Very briefly, what I mean by that. Number one, lump of labor: historically, technology has impacted the economy, but rather than take human employment down to zero, it simply opened up new horizons of human engagement. Number two, Jevons paradox: this irony that sometimes as you make an industry more efficient, demand actually goes up, rather than the number of jobs in that industry declining because demand is stable. Number three, O-rings: this irony that even as AI can take on certain tasks, many jobs are like the O-ring on the Challenger, where you need AI to be good at 100% of the tasks, otherwise the job falls apart, which means that many jobs might be safer than they otherwise appear. And finally, your point about human privilege is that a richer economy might disproportionately demand more human work, human art, human instruction. And so in that case, we should also be somewhat optimistic about the possibility that people are just going to find stuff to do, because there will be demand for humans to do stuff. Anything big that we missed here that's central to your skepticism of why the AI jobs apocalypse is unlikely?
B
I mean, at the end of the day, I think we covered everything. But I guess maybe this will make me a conservative in this sort of space. I think, yes, I truly believe AI is different. I think we are going to hit AGI, and we are going to probably hit
A
ASI. And that is artificial general intelligence and artificial superintelligence. So essentially, the ability of artificial intelligence to achieve a kind of network-level intelligence that surpasses humanity is what you're saying is possible.
B
Yeah, I'm a believer in that. So in that sense, I'm not really a conservative. But the sense in which I am a conservative is this: a lot of stuff has happened in the past 200,000 years that has made our lives so fundamentally different, and yet the employment rate is at an all-time high. And that should not be discounted. We should be thinking about why, and I think the why is basically everything we've talked about today. And I think that's going to continue.
A
Alex Imas, thank you very much.
B
Thank you, thank you.
In this episode, Derek Thompson explores the widely discussed fear that artificial intelligence will lead to an unprecedented “jobs apocalypse.” Joined by University of Chicago economist Alex Imas, the conversation dives deep into economic history, psychological research, and current labor market data to challenge the prevailing narrative. Together, they lay out a thoughtful and nuanced case for why AI is unlikely to cause mass, permanent unemployment—and may even create new opportunities for human work and connection.
On media narratives and layoffs:
“Many of these companies, I think, are surely using artificial intelligence as an attractive excuse to push attention away from what other people might otherwise recognize as poor corporate performance.” —Derek (01:33)
On the spread of human-centric jobs:
“People’s everyday jobs will be mostly completely automated, except for the part that requires the humans.” —Imas (43:31)
On teaching & resilience:
“As an educator, the only thing you could be right now is nimble… Am I teaching students skills that they could actually use in the job market?” —Imas (45:07)
On loneliness in a more AI-rich world:
“The brain in our heads is a stone age brain. It wants to connect to other people. And so at some point, you could trick yourself for a couple days, but then you realize you're talking to a chatbot, and that makes you even more lonely than before.” —Imas (48:51)
| Timestamp | Topic / Segment |
|-----------|-----------------|
| 00:53 | Companies citing AI for layoffs and skepticism about their motives |
| 06:08 | History: Ricardo, Samuelson, and past fears of automation |
| 09:41 | Starbucks automation reversal—human desire for “friction” and personal contact |
| 14:50 | Lump of labor fallacy explained |
| 18:57 | Jevons Paradox and increasing demand for cheaper, more efficient services |
| 24:54 | O-ring jobs—the importance of interrelated task bundles |
| 29:33 | Elasticity of demand: how automation can drive more jobs, not fewer |
| 30:53 | Horses vs. Spreadsheets metaphor—different types of technological disruption |
| 40:05 | Scarcity, exclusivity, and the premium for human-made goods |
| 47:32 | The rise of paid relational jobs amid increasing “aloneness” |
| 49:34 | Conclusion: Reviewing the anti-apocalypse arguments; lump of labor, Jevons, O-rings, scarcity |
| Argument | Explanation |
|----------|-------------|
| Lump of Labor | The set of jobs is not fixed; automation creates new needs, desires, and jobs over time |
| Jevons Paradox | Cheaper/more efficient production increases demand and can create more employment |
| O-Ring Jobs | Many jobs require humans to integrate interrelated tasks—AI can’t (yet) replace entire bundles |
| Human Privilege | Scarcity and human uniqueness (relational work, hand-made goods) will keep many jobs in demand |
While AI will transform how work is organized and what types of jobs are most common, history, economics, and human psychology all indicate that mass unemployment is not an inevitable or even likely result. Understanding how human desires, economic elasticity, and the value placed on “relational” work shape a dynamic labor market is key to seeing beyond the hype and doom—and to imagining a future of work that continues to require humans at its core.
Host: Derek Thompson
Guest: Alex Imas, University of Chicago economist
Podcast: Plain English with Derek Thompson, The Ringer
“A lot of stuff has happened in the past 200,000 years that have made our lives so fundamentally different, and yet the employment rate is at an all-time high. And that should not be discounted.” —Alex Imas (52:16)