Transcript
A (0:00)
There have been some pretty dark articles published recently about all the ways in which AI is about to destroy the worldwide economy. Now these include tales of mass unemployment and collapsing industries and white collar workers trying to retrain for skilled crafts jobs like woodworking and plumbing. One of these pieces, a World War Z style dispatch from the year 2028, which was put out by a small financial services firm named Citrini Research, spread so widely and scared so many people that it was blamed for a temporary dip in the S&P 500. All that's missing from these tales are the garbage can fires. So how seriously should we take these economics doomsday articles? Well, if you've been following AI news recently, this is probably a question that you've been asking. And today I want to try to find some measured answers. I'm Cal Newport, and this is the AI Reality Check. All right, here's the thing. Coverage of AI topics moves in waves. You'll have a certain sort of take or idea that will become popular and everyone is writing and talking about it, and then sort of seemingly all at once, all the attention will move on to a new topic as if the other one didn't exist. Like back in 2023, for example, I spent a lot of time trying to explain to people that a static feed forward large language model could not be considered conscious. I had fierce debates about this, and then at some point the whole conversation just moved on with no resolution. Late last year, to give another example, all the discussion was around super intelligence, and I found myself having to argue about how you cannot infer intention in an anthropomorphized manner from the autoregressively produced outputs of a chatbot. But then we've moved on from that recently as well. The topic du jour in AI coverage is this idea that we might not be ready for mass economic displacement that AI is now poised to wreak. 
Now, I want to go over quickly a few examples, among many, of the articles that have recently been making this point. The first article was published online in February, and it's part of the March print issue of the Atlantic, and it was titled "America Isn't Ready for What AI Will Do to Jobs." All right, so if you read this piece, it opens on a somewhat long history of the Bureau of Labor Statistics, which is actually quite interesting, the history of the BLS. And so you're thinking, okay, maybe this is going to be a sort of thought provoking exploration of job cycles and technological disruption. But nope, it gets a little darker. Let me read from the piece here: "But like all statistical bodies, the BLS has its limits. It's excellent at revealing what has happened and only moderately useful at telling us what's about to. The data can't foresee recessions or pandemics or the arrival of a technology that might do to the workforce what an asteroid did to the dinosaurs. I'm referring, of course, to artificial intelligence." Yikes. Remember, the asteroid that killed the dinosaurs killed off most of life on Earth. So we've kind of raised the stakes pretty high for what's about to happen with AI. All right, so the article goes on. The author says tasks that once required skill, judgment and years of training are now being executed relentlessly and indifferently by software that learns as it goes. I don't know what it means for a language model to be relentless or indifferent, but I guess they are. Quick fact check: the language models driving most of the tools we're talking about here don't learn as they go. They're static, trained in fixed batches. I guess you could make a case that a terminal agent, like Claude Code, could be doing updates to a markdown file that it uses as part of its prompting, but I don't think that's a great understanding of how this AI works. The article treats it more like a human brain.
All right, anyways, let's keep going here: "But anyone subcontracting tasks to AI is clever enough to imagine what might come next. A day when augmentation crosses into automation and cognitive obsolescence compels them to seek work at a food truck, pet spa, or massage table. At least until the humanoid robots arrive." Man, the word might does a lot of work in this essay. He said before, AI might be like the asteroid that destroyed 99% of life on Earth. And here he said, AI might make us all have to work at pet spas until the robots come. But there's evidence for this. So what's the main argument for why we should be concerned? Let me read from the article again: "In May 2025, Dario Amodei, the CEO of the AI company Anthropic, said that AI could drive unemployment up to 10 to 20% in the next one to five years and, quote, wipe out half of all entry level white collar jobs, end quote. Jim Farley, the CEO of Ford, estimated that it would eliminate literally half of all white collar workers in a decade. Sam Altman, the CEO of OpenAI, revealed that, quote, my little group chat with my tech CEO friends, end quote, has a bet about the inevitable date when a billion dollar company is staffed by just one person." I'll step out of the quote here. The Atlantic piece then goes on to mention layoffs that recently happened at many companies, including Meta, Amazon, UnitedHealth, et cetera. All right, back to the quote: "Taken together, these statements are extraordinary. The owners of capital warning workers that the ice beneath them is about to crack while continuing to stomp on it." All right, we've got to hold on for a second here. I want to break apart the evidence here, because we've actually got two claims: either all life on Earth is going to be wiped out like the dinosaurs, or knowledge workers are going to have to become massage therapists. It's worth taking a closer look at exactly what this evidence is stating.
I want to start with the layoff piece, because we covered this in last week's episode of the AI Reality Check, and I've covered it on my newsletter at calnewport.com as well. For the most part, these layoffs have nothing to do with AI automating jobs or increasing efficiency to the point that you don't need more workers. Now, I haven't covered every one of the companies mentioned in this article, but I did cover the first two companies mentioned, Amazon and Meta. I've talked on background to multiple people within both of those companies, and they're both very clear: recent layoffs have nothing to do with AI making those workers unnecessary. They have everything to do with over-hiring during the pandemic that's now being corrected for. Where were the bulk of the layoffs at Meta recently? In Reality Labs, which Zuckerberg had put a massive amount of money into over the last five years to try to build the metaverse, where we're all going to put on virtual reality helmets and float around space stations and play cards. Remember that? Yeah, it was a bad idea. So they're firing a lot of those people. They want to put that money elsewhere. So right off the bat, okay, this is vibe reporting 101. You have a scenario that's scary, and then you take a fact that directionally seems aligned with that scenario, but in reality is not, and you list it next to the scenario to try to ground the hypothetical in something that's happening now, which vastly increases its power to cause anxiety or fear. All right, but what about the other piece of this argument? The idea that AI CEOs are making dire predictions? If the owners of capital are warning us, then for sure we have to listen. But wait a second. We could flip this on its head. Of course the CEOs of AI companies are making dire predictions about how powerful their tools are going to be, because they are like the Wizard in The Wizard of Oz saying, don't look behind the curtain, don't look behind the curtain.
They're terrified that people are going to spend more time asking about their financials. Asking about the fact that, for the major AI companies to keep up with their debt and not face implosion over the next one to two years, they need to be the fastest growing companies in the history of companies. We're talking about hundreds and hundreds of billions of dollars of revenue that needs to be generated at some point in the next year or two. And it's unclear how they're going to do this beyond putting ads on ChatGPT and selling Claude Code subscriptions, which they're currently losing money on. So yes, of course they would rather be talking about dire predictions of some future, because guess what? That makes their technology the most important technology in the world and justifies investors continuing to put money into their companies. I'm not saying that's definitely what's happening, but I don't have to stretch to find an alternative explanation for why Dario Amodei or Sam Altman love to spout these sorts of big predictions. It completely serves their purpose. I do want to say, look, he's a good writer, and the rest of the article, after this opening, is a good article. It's well researched. He talks to a lot of people. You learn a lot about labor statistics, you hear from a lot of experts. But I just want to point out that the core of the article's beginning has this combination of vibe reporting and appeal to biased authority that, as we're going to see, is sort of a theme in these economic doomsday articles. All right, let me talk about another one. Our second example here, from last week I think, in the New York Times, was an op ed that had a happy, feel-good title: Mass Hysteria. Thousands of Jobs Lost. Just How Bad Is It Going to Get? Oh, geez. Now, you don't choose the title if you write an op ed, so let's put that aside. Let's look at the piece to see what it actually argues.
The piece opens with the story of a college graduate having a hard time finding a job. Let me read this here: "Just a few years ago, an entry level role with a bank or an asset management firm might have been Mr. Griefenberger's for the asking. But the white collar job market has cooled sharply. While the unemployment rate remains relatively low, 4.3%, office jobs are suddenly a lot harder to come by for recent college graduates and experienced professionals alike." Now, this is an important, real story. Unemployment's pretty good, but there is a cooling, especially in entry level hiring for knowledge work jobs, that has been persistent for multiple years now and isn't yet improving. All right, so why is this happening? Well, you can ask economists, and there are three reasons they'll give you, in descending order of importance. By far the number one reason explaining this trend is that white collar industries hired aggressively in 2020 to 2022, when pandemic era digital growth was super strong and Great Resignation fears led companies to overcompensate and offer very attractive packages. It was like, get people in the door, because we're worried about losing our workforce. Now that the pandemic period is over, the economy is trying to correct for this. And we have a lot of employers not firing people, but going into what's called a no hire, no fire phase, where they say, okay, we need to slow down here. We have too many people. Most of us don't want to do mass layoffs, because those people might be useful in the future, but let's do no hire, no fire. Which is how you get to this unusual situation where unemployment is actually pretty good, but you also have low new job growth. The secondary cause mentioned by economists is higher interest rates. They started going up in 2022, to try to offset the inflation caused by Covid era stimulus investments.
That slows down business expansion. Right? That's Economics 101. The third cause is global uncertainty: especially in the American context, the tariffs, what's happening in the educational world, and now global wars. It's an uncertain time. So a lot of businesses are saying, let's just wait and see. We're not sounding the alarm bells yet, we don't have to cut back the way we would in a strong recession, but let's be careful about hiring right now. All right, so let's return to that Times op ed. You'd assume it says something like: this is what explains the trend, it is what it is, hopefully it will get better. Let's read what they actually say instead: "Many companies went on hiring sprees during the pandemic, and the slowdown is perhaps just the inevitable adjustment." All right, so far so good. Are we going to leave it there? Nope. Here's what comes next: "But it is happening against the backdrop of the generative AI revolution and fears that vast numbers of knowledge workers will soon be evicted from their cubicles, replaced by machines." This is kind of a remarkable statement, because it's vibe reporting, but it's vibe reporting that's transparently acknowledging that it's vibe reporting. Right? They're saying, look, there are good explanations for this, but this other thing is happening now that makes us afraid, so let's just pretend they're connected. Even though we have other explanations, it's directionally aligned with this other fear we have, so why don't we just put them together? What is the main evidence cited in this op ed for these fears? I'll quote here: "That the people selling the artificial intelligence are among those sounding the most ominous warnings about its potential fallout is notable. Some of them are prone to bombastic claims. But it's hard to see how spooking the public serves their interest."
"It might be wise to take their predictions at face value and assume that AI is indeed going to devour a lot of white collar jobs." Again, this is the appeal to biased authority. It is not hard to see why the CEOs of the companies selling this technology like stories that make it the most powerful, important technology of the last 200 years. Of course they want that story out there, because without that story, again, the question becomes: how are you going to generate $300 billion in revenue in the next two years? They don't want that question. So they've been spouting these things for the last five years. I don't know where this idea comes from that we need to take at face value what the owners of the technologies say about what their technology is going to do. I don't think we should take them at face value at all. We should be highly suspicious of them. All right, so anyways, again, this article goes on and it looks at a lot of things. It's not a bad article, but again, we have this sort of vibe reporting: mention stuff that's happening that's directionally aligned with the fear, then mention the fear, and then justify the fear by saying, look, the CEOs of these companies are the ones sounding the alarm. Why would they sound the alarm if it wasn't real? All right, let me get to the third article, which is the one that spooked the stock market. This will be the final example I point out before I get to some stronger responses. This article was called The 2028 Global Intelligence Crisis: A Thought Exercise in Financial History from the Future. It was published on Substack by a small financial services firm called Citrini Research. Now look, right off the bat, if you read this Substack piece, the authors are clear that this is a thought experiment and not a prediction. And you'll hear that the authors have been interviewed a lot in the aftermath of this article going viral and spooking people.
And they're really leaning into this: it was just a thought experiment, I was writing fan fiction, why are people taking this so seriously? But if you read that same introduction, they then go on to say, hopefully reading this leaves you more prepared for potential left tail risk as AI makes the economy increasingly weird. So clearly they're saying this is a possibility. We're not saying it will definitely happen, but it's on the table and we need to be worried about it. So I don't think they get off the hook by saying, hey, we said this is not a prediction. You did say, pay attention to this so you're prepared for what might come. I'm not a linguist, but that kind of sounds like the definition of a prediction. All right, so what does this article actually say? Well, it is written in the style of World War Z. That is, it's written like a dispatch: a financial report like these firms write, but from the year 2028, reflecting on the dire current circumstances and how the economy got there. So it's told in this sort of fake future retelling style, which is a very powerful style. Let me read a quote from early in this fake dispatch from the future: "Two years, that's all it took to get from contained and sector specific to an economy that no longer resembles the one any of us grew up in. This quarter's macro memo is our attempt to reconstruct the sequence. A postmortem on the pre-crisis economy." And then it goes on to lay out the scenario, which starts right now. And it's like, well, there are layoffs happening, but we were happy about productivity booms, and the stock market goes up, until about the fall of 2026. And then, as automation continues, these cyclically reinforcing negative feedback loops emerge. The economy crashes the next year, in November 2027. And again, we're back to garbage can fires and knowledge workers having to eat their dogs. This was a very effective article.
It spread really far for two reasons. One, that World War Z style of storytelling, where you tell a story as if it already happened and you're looking back on it, is very emotionally engaging, and it presses fear buttons much more than straightforward analysis or prognostication. And two, there's a vibe reporting trick here that we've seen in the other two examples. They peg their fake scenario to something real that's happening right now: it began with layoffs in the tech sector in 2026, and layoffs are happening right now. Now, of course, as I've covered in this episode and the last episode, ad nauseam, the layoffs in the tech industry started a few years ago, in response to over-hiring during the pandemic. But whatever. When you peg a story that ends somewhere fantastical and terrible to something that's happening right now, your mind puts it on a reality trajectory, and that makes it much more believable. So the piece went viral. People said it had to do with a collapse in the S&P 500. Not a collapse, a minor dip in the S&P 500. Other commentators have said there are a lot of factors behind that temporary dip, but it got a lot of news, especially in the financial world. All right, so how seriously should we take these? I've talked about some of the bad reporting techniques in these articles, but that doesn't mean, a priori, that they're also wrong. So how seriously should we take these scenarios of economic doom? I've got to say, they're very anxiety provoking. I don't like dystopian fiction. Right? Like, I read World War Z and I really didn't like it. I don't like watching zombie movies. Dystopian tales and movies, especially collapse-of-society ones, press a lot of buttons for me. So here I am, someone who knows a lot about AI and is a critic of hype, and even for me, these were distressing. So I can only imagine how much distress these types of articles are causing for the millions of people reading them in major publications.
So how seriously should we take them? Let me tell you what made me feel better, and hopefully it'll make you feel a little better as well. In the wake of the Citrini article, because that spread through the financial world and might have had an actual impact on the stock market, professional economists and global macro strategy analysts came out of the woodwork and said, hey, enough. These are ghost stories. We have no reason to believe they're true. And these are people whose goal is not engagement or impacting the conversation; it's to make money based on accurate understandings of what's likely to happen in the economy. Hearing from these economists, I have to say, made me feel a little bit better. I'm going to give you some quotes, and hopefully they'll make you feel a little better as well. The New York Times, to their credit, published an article called Bleak Research Report Stokes AI Debate on Wall Street. It was written by a financial reporter, and they actually quoted some serious economists who were not that impressed by the Citrini article. Let me read you two quotes. Here's one: "The argument leans heavily on narrative and emotion rather than hard evidence," Jim Reid, a strategist at Deutsche Bank, said of the report. That doesn't mean it will ultimately be wrong, but, he added, "the vibes to substance ratio is undeniably high." All right, here's another: on Tuesday, Christopher Waller, a governor on the Fed board, noted that he had not read the Citrini report, quote, deeply, end quote, but pushed back on the broader idea that AI will lead to a rapid rise in unemployment as technology displaces white collar workers. "I don't think that is going to happen," Mr. Waller said, adding that he is not a doom and gloomer like that report was. I think my favorite response, however, came from Citadel Securities.
A global macro strategy analyst for Citadel Securities named Frank Flight put out a report in the aftermath of the Citrini article with a sort of sarcastic title: The 2026 Global Intelligence Crisis. So the Citrini report was The 2028 Global Intelligence Crisis, as in, hey, everything has gone wrong by 2028. He called his The 2026 Global Intelligence Crisis, where the intelligence crisis he's referring to is people believing these types of stories right now. He opens with a sort of faux description of our current situation, and that faux opening sticks in the dagger with the following: "Despite the macroeconomic community struggling to forecast two month forward payroll growth with any reliable accuracy, the forward path of labor destruction can apparently be inferred with significant certainty from a hypothetical scenario posted on Substack." He's making fun of people in the community who were taking that Substack post with any seriousness. He then proceeds to explain, in a semi-accessible way, the types of things that global macro financial analysts look at, especially when it comes to technological disruption, and why they don't see signs of some major calamity coming and aren't particularly worried about some sort of collapse of the economy. I'm going to read a few of these quotes just to give you a sense of the type of things covered in this report. Number one: "We would posit that if AI represents imminent displacement risk, the real time population data would show an inflection upwards in the daily use of AI for work. The data seems unexpectedly stable and presents little evidence of any imminent displacement." Right. So again, there's lots of discussion about this, but they're looking at data out of the St. Louis Fed, and they say there's no rapid uptake in AI use in the way that the news media would have you believe.
Second quote: "The current debate around artificial intelligence conflates the recursive potential of the technology with expectations of recursive economic deployment. Technological diffusion has historically followed an S curve. Early adoption is slow and expensive. Growth accelerates as costs fall and complementary infrastructure develops. Eventually saturation sets in and the marginal adopter is less productive or less profitable, which causes growth to decelerate." I'm seeing this argument from a lot of professional analysts of technological disruption. They say, man, we always make the exact same mistake. You have a slow start, then you get a period of speed-up, and we assume that speed-up will go on forever. We keep extrapolating out that curve, and if you keep extrapolating out that curve, you get collapse or singularity or whatever the thing is you want to say is going to happen. But this is never what happens with S curves. Growth goes up, and then other sorts of factors constrain it. Things go slower than you think, and there's time to adjust. Why, they ask, should we believe this time would be different? All right, let me read another quote here: "Displacing white collar work would require orders of magnitude more compute intensity than the current level of utilization. If automation expands rapidly, demand for compute definitionally rises, pushing up its marginal cost. If the marginal cost of compute rises above the marginal cost of human labor for certain tasks, substitution will not occur, creating a natural economic boundary." We don't have nearly enough compute for these scenarios. And as they're saying, as you try to build out compute for more and more uses, it's going to drive up the cost, because we're going to have a mismatch between demand and actual supply. As the cost goes up, it drives back down the demand. We are already actually seeing this in the one sector where, after five years of work, we're finally seeing real tools emerge.
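To make the S-curve point concrete, here's a tiny numerical sketch of my own (the growth rate, midpoint, and time scale are made up for illustration; none of these figures come from the Citadel report). It compares a standard logistic adoption curve to what you get if you naively extrapolate its early exponential phase forever:

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=6.0):
    """Classic S-curve: adoption saturates at `ceiling` (here, 100%)."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def naive_extrapolation(t, rate=1.0, midpoint=6.0):
    """Extending the early exponential phase of the same curve forever."""
    return math.exp(rate * (t - midpoint))

# Early on the two are nearly identical; later they diverge wildly.
for t in range(0, 13, 2):
    print(f"year {t:2d}: S-curve {logistic(t):7.1%}   "
          f"naive exponential {naive_extrapolation(t):10.1%}")
```

At year 0 the two curves are indistinguishable (both around 0.2% adoption), which is exactly why extrapolating from the speed-up phase feels plausible. But by year 12 the logistic curve has flattened near 100%, while the naive exponential claims adoption of over 40,000%, a number with no physical meaning. The "singularity" lives in the extrapolation, not the diffusion.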
This is the best case scenario for AI. We're finally seeing tools that are really catching the interest of a sector, and that's in computer programming. All of the evidence I can find right now seems to imply that these companies are selling the compute for these programming agents at a significant loss, because they're fighting for market share. When these companies, which again carry huge debt, actually have to try to make a profit off of this, and prices get adjusted to the reality of how much expense the AI companies are incurring, you're probably going to see a real moderation in how much we use AI for programming. Is it really worth $2,000 a month for an individual? $5,000 a month? It's going to be interesting. And that's just this one first use case. So I think that's worth watching as well. They also say, quote, "Moreover, there's little evidence of AI disruption in labor market data as of today. In fact, the forward looking components of our labor market tracking have improved recently." So there's a huge mismatch between what the financial analysts are seeing and what the op ed writers are hypothesizing. The evidence of the financial analysts is their decades of experience trying to understand the labor market and technological disruption. The evidence of the article and op ed writers: Amazon laid off people, and Dario Amodei says his technology is the most powerful thing ever. All right, let me read the conclusion from this Citadel Securities piece: "For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, experience near-total labor substitution, no fiscal response, negligible investment absorption and unconstrained scaling of compute."
"It is also worth recalling that over the past century, successive waves of technological change have not produced runaway exponential growth, nor have they rendered labor obsolete. Instead, they have been just sufficient to keep long term trend growth in advanced economies near 2%. Today's secular forces of aging population, climate change and deglobalization exert downward pressure on potential growth and productivity. Perhaps AI is just enough to offset these headwinds." So they're saying, and I think this is actually pretty optimistic, that the reality of major disruptive technological changes historically is that they've been just enough to offset all sorts of negative trends and keep at least some growth happening in the economy. Here's what they're predicting from AI: we have lots of negative growth forces we're going to encounter in the next couple of decades that will pull down growth, and hopefully we'll get enough out of AI to stave those off and still get at least some economic growth. That is a very different vision. AI as the latest technological innovation to stave off de-growth is a completely different argument than: no, no, this is the one technology in history where the S curve doesn't happen, and it's going to grow exponentially and crash the economy. They end on a positive note there. All right, let's step back. First of all, I want to say the economists make me feel better. It doesn't necessarily mean they're right, of course, and maybe all these factors will come together to destroy the economy. But I do like the fact that the economists aren't that worried about it. And I think we see this reflected in the stock market. If serious investors really believed that the economy was going to crash in the fall of 2027, and that we're going to have massive decline starting in October of 2026, the market would be reacting dramatically.
The COVID dip from 2020 would look like a minor correction compared to what we'd see. But the actual reactions are small. Investors are pessimistic on the frontier AI companies, but because they think those companies are spending too much money. So they don't buy the AI tech CEO stories that their technology is going to automate all work, which would make them the most valuable companies in the history of companies. The stock market doesn't buy it. We see more moderate bets against specific sectors where investors think there's going to be practical disruption, like the SaaS sector, and even those are modest. And we're actually seeing much bigger reactions from things like the cost of oil going up to $100 a barrel; that caused way bigger impacts on the stock market than these scenarios of the last two months about the economy collapsing. So to me, that makes me feel better. But it doesn't mean there's not going to be an impact, and the economists could be wrong. So let's put that on the table now. Let's say, okay, maybe the economy is not going to collapse. I don't have to learn how to light a garbage can fire or become a pet masseuse. But maybe it's going to be a hard run. There's going to be economic disruption, more so than with almost any other technology in the past. Let's say that's the case, and it could be true, though I hope not. Even then, AI doomsday reporting isn't helping. What I'm seeing is that these AI doomsday articles, where writers try to one-up each other with how prescient they are about how bad things are going to get, prevent us from responding in effective ways.
If we instead treat AI like a normal technology, and respond with our normal tools when we see it doing things we would normally call a problem that needs correcting, I think we can make much better progress in containing, shaping and directing the AI revolution than by falling back on these massive dystopian World War Z tales. The fallback on doomsday writing is letting the AI companies off the hook. Look at what I covered last week. Jack Dorsey negligently goes off and makes these huge acquisitions of crypto and blockchain companies, sort of in an impulsive fashion, throughout the pandemic. They don't go well. So he then impulsively fires half of his workforce, because he can't do anything in measured increments. Everything he does is drastic, right? But he comes out and says, this is just the first sign of the AI economic apocalypse; I, for one, am learning how to make trash can fires, because I'm going to not only be a pet masseuse, but maybe have to eat the dogs, because there'll be no money left in the world. Because he leaned into the doomsday narrative, what was the coverage of the Block layoffs? Reporters would rather treat them as evidence of the economic doomsday narrative. That's what they focused on. In fact, in one of the articles I talked about, the Block layoffs are cited as evidence of what's coming. The right way to treat that was: yeah, sure, and I'm sure you have a perpetual motion machine and you can fly. Back to the point: what happened to those crypto investments? Why did you have to lay off that many people? Who did you lay off? Wait a second, most of these jobs have nothing to do with AI-automatable roles. We would hold his feet to the fire: you're being negligent and impulsive. But instead we're like, oh yeah, thank you, Cassandra, for helping us understand what's coming. The same thing has happened with these AI CEOs.
They find that the more dramatic and fearful a thing they say, the more the attention turns away from what's actually happening. Journalists used to severely distrust billionaire tech CEOs, but not when it comes to this issue. We look to them as if they are guiding us to understand what's happening with this technology. These CEOs, as I've been covering, have been saying crazy stuff for the last four years, and they keep changing what it is. En masse, they were all talking about superintelligence and the machines getting out of control, like an alien mind. Then they all shifted at some point to something else, and now they've shifted to: the economy is going to crash. They just say stuff. And it's entirely in their favor, because again, if your technology automates all jobs, well, where am I going to put my money? The only place left to put my money is in the three companies that are going to run all the jobs. So I think doomsday reporting prevents us from actually responding. It prevents us from saying, when Dario Amodei claims 50% of white collar jobs are going to be gone: uh huh, uh huh, you need to make $300 billion somehow in the next four years to get anywhere near profitability. How are you doing that? That's the question we could be asking. So I don't think we need to ignore AI or its impact on jobs. We need to cover it like a normal technology, so we can deploy the normal responses we would to disruption or changes, including when we see AI used as cover for malfeasance or impulsiveness or whatever else is going on. And so I hope we move past this. By the time this comes out, we'll probably have moved on to, you know, something else. I don't know what.
AI and birds are going to spy on us, whatever it is. And I hope so, because I think this AI doomsday reporting not only is stressing people like me out, but is preventing us from actually responding to the real impacts of this technology in a way that could really matter. All right, enough of my sermon. Hopefully some of this makes you feel a little bit better this week. We'll be back, probably next week. I'm doing this on Thursdays, maybe not every Thursday, but if there's something to talk about, I'll be back next Thursday. Remember, take AI seriously, but not everything that's written about it. See you next time.
