Transcript
A (0:00)
I'm not sure if you've noticed, but a lot of AI coverage has gotten out of hand recently. I mean, if engineers get any more excited about Claude Code, I think they're going to elect it mayor of San Francisco. So today, in the idea segment of this show, we're going to take a closer look at this issue. I'm going to identify the biggest traps to avoid when reading news about AI, if you're looking to just get the straight facts about this technology and not succumb to overhyped terror or exhilaration. In particular, I'm going to introduce what I think are three increasingly common shady moves that show up in AI coverage. Here are my names for them: Vibe Reporting, Digital Ick, and Faux Astonishment. I'm going to describe each of these traps, and I'll give you some examples of them from out in the wild so you'll know what to look out for. Then, in the practices segment, we're going to revisit a popular topic in online circles: morning routines. I have a take on these rituals that I think might surprise you. And finally, just a quick heads up: in the Q&A segment, I'm going to respond to the rumor that I filmed a course for MasterClass. Spoiler alert: I did, and it's available now. I'll tell you more about it when we get there. All right, so we have a lot to get to today, as always. I'm Cal Newport, and this is Deep Questions, the show about the fight for depth in an increasingly distracted world. And we'll get started right after the music. All right, so to begin our investigation of these traps in AI reporting, I'm going to bring an article up on the screen for people who are watching instead of just listening. This article comes from the publication Quartz. It came out, I think, last week. The title, which I'll put up on the screen here, is "Amazon is laying off 16,000 more workers as AI accelerates tech job losses." Here's the subhead: "'Jobs are going to be impacted by what's coming with AI over time,' Amazon CEO Andy Jassy said before the layoffs were announced."
All right, so you look at that article and I think there's a clear message: Amazon laid off people because of AI. I mean, that's literally what it says in the headline. The subhead is the CEO saying layoffs in the future will continue to be impacted by AI. If we look at the article itself, nothing in the actual text contradicts that. There are quotes about how many people they're firing and what benefits they'll get, and the fact that Amazon is kind of cutthroat. But you're left with the clear impression that these layoffs are about AI. Now, here's what I want to do next: show you a different story about the same layoff. So we're going to switch from Quartz to CNBC. This is financial news, where they care a little bit more, right? Because these are investors reading this. They want to get more to the heart of what's really going on here. Here's the headline for this exact same layoff story from CNBC: "Amazon is laying off about 16,000 corporate workers in latest anti-bureaucracy push." And you look at the bullet points, the key points: Amazon is laying off about 16,000 corporate workers in its latest push to reduce bureaucracy. It marks the second round of mass layoffs. Some days earlier, employees got an email about these changes. Right. This is a very different feel than what we saw in Quartz. And in fact, if you read farther along in this article, you get some other information that's kind of interesting. You get a clear explanation from the CEO that this is about reducing the workforce after a hiring spree that happened during the pandemic. There was a lot of hiring during the pandemic, when people turned more to cloud computing, and now, as the CEO says in this article, they're cutting those positions back again. It's in response to the number of people they had hired. Let me read you the actual quote here.
It says: "CEO Andy Jassy has looked to slim down Amazon's workforce after the company went on a hiring spree during the COVID-19 pandemic, partly to meet a surge in demand for e-commerce and cloud computing services." Well, wait a second, what does that have to do with AI? If you go on later in the article, you find a quote, I'll have to find it here, where they basically make it clear that, yes, at the same time, Amazon is investing more in their AI products. So presumably some of the money saved by firing people could go to their AI products. But that's about as clear a connection between these layoffs and AI as there is, which is: we overhired, we're cutting back, and we have better uses for our money right now than maintaining this many managers. So that's a much more boring, but much more accurate, story about what was happening there. Now, I actually wrote about this in my newsletter recently at calnewport.com. I had an article called "The Dangers of Vibe Reporting About AI" where I went through this case, and here's what I wrote. I'm going to read from my own article: "In recent years, I've seen more articles follow the general approach demonstrated by the Quartz example. They identify an alarming, attention-catching fear about AI that seems prevalent in the cultural zeitgeist, and then shape a story to feed the narrative. The key to this reporting strategy is that the articles never make explicit claims. They instead combine cunning omissions and loosely related quotes to make strong implications." The name I give for this, as I previewed when reading from my own essay there, is vibe reporting, because what you're trying to do is support a preexisting vibe, more than trying to get to the bottom of what's actually happening. I would say that Quartz article never actually comes out and says specifically that Amazon laid off people because (a) they could replace them with AI, or (b) AI made them more efficient.
They never explicitly said it, but it was clearly the vibe they were feeding by putting AI in the headline, and by including an unrelated quote from the CEO talking about AI-related layoffs that could happen in the future. It's certainly the vibe they were trying to create by omitting from their article any of the publicly available discussion, which was included in the CNBC article, about the stated reasons for these layoffs, which had to do with hiring too many middle managers during COVID. They left out another key point: there was an earlier round of this firing that happened in 2022 and 2023, after the pandemic but before ChatGPT even came out. This is part of an ongoing effort that has nothing to do with AI tools replacing people, but with trying to streamline. Now, I'll tell you, I heard from multiple Amazon executives on background after I published my newsletter on this, and they all confirmed it. They said, and I'm paraphrasing, we were somewhat baffled to see the coverage that made it seem like these layoffs had something to do with AI. They had nothing to do with AI. Amazon is ruthless about trying to cut out inefficiencies, and they love to cut down units whenever they can. That's partially how they keep their profit margins going. All right, so that's vibe reporting: unrelated quotes and omission of facts. Now I want to bring up an article from the New York Times. I mentioned this before on the show last year, but I think it's another great example. This came out in the summer of 2025. The headline here is "The Unnerving Future of AI-Fueled Video Games." And I'm going to read a couple of quotes. They're too small to see on screen, so we can take that off, but I have them here on paper. I want to give a couple of examples of vibe reporting techniques happening in this New York Times article about AI in the video game industry. All right, listen to these two paragraphs, which appear back to back in this article. Paragraph one:
"At the pace the technology is improving, large tech companies like Google, Microsoft and Amazon are counting on their AI programs to revolutionize how games are made within the next few years." Paragraph two: "'Everybody is trying to race toward AGI,' said the tech founder Kylan Gibbs, using an acronym for artificial general intelligence, which describes the turning point at which computers have the same cognitive abilities as humans. 'There's this belief that once you do, you'll basically monopolize all other industries.'" So see what they're doing there? Paragraph one was saying something kind of mundane, which was that video game makers are looking forward to AI-powered tools. You know, they assume there will be more AI-powered tools that they use in making video games in the future. Paragraph two, which follows it immediately, is some founder talking in a sci-fi tone about AGI-powered machines taking over all industries. You put those next to each other, and now you've taken something boring, yeah, we use AI-powered tools in graphic fields, and turned it into something that gives you a vibe of big disruption coming, that computers are going to monopolize industries. You put those next to each other, you create a vibe. All right, I want to give another example. Later in the article, the reporter goes to a video game industry convention and says, and I'm quoting here, it provides "an eerie glimpse into the future of video games." Well, here are the next three paragraphs that follow, explaining this eerie glimpse. "Engineers from Google DeepMind, an artificial intelligence laboratory, lectured on a new program that might eventually replace human playtesters with autonomous agents." Next paragraph: "Microsoft developers hosted a demonstration of adaptive gameplay, showing how artificial intelligence could analyze a short video and immediately generate level design and animations."
"And executives behind the online gaming platform Roblox introduced Cube 3D, a generative AI model that could produce functional objects and environments from text descriptions." So this is an eerie glimpse of the future? They just described three demos. This is not technology that exists now; it was "might," "could," and "could." Three demos of graphic tools, tools you could use. We've had computer tools improving video game design since the very beginning of the American video game industry in the 1980s. This is nothing new. I mean, think about the Unreal Engine and graphical game design; there are constant new improvements. Just look at the improvements in 3D graphic design alone, and how powerful programs like Blender have gotten. It's a rapidly moving industry. These demos are like, okay, sure, AI can help create 3D objects or do some playtesting or whatever. This is sort of in line with other innovations we've had over the last 30 years. So this isn't really that eerie. So what do they do? The reporter then follows those three demo descriptions with the following paragraph: "These were not the solutions that developers were hoping to see after several years of extensive layoffs. Another round of cuts in Microsoft's gaming division this month was a signal to some analysts that the company was shifting resources to artificial intelligence." So they have a paragraph about layoffs in the gaming industry right after this discussion of demos for graphic AI tools. Again, these aren't related. The layoffs in Microsoft's gaming division came from a big round of layoffs that Microsoft did because of, yeah, you guessed it, pandemic overhiring. Like Amazon, they cut back on their less profitable divisions, like video games, so that they could spend more money building data centers, because OpenAI was paying them billions of dollars a year for access to those data centers. So that seemed like a better profit area.
So none of those job losses had anything to do with AI. It was just right-sizing after the pandemic, when they hired too many people. But if you put a paragraph about job losses and developers being upset right after those discussions of the demos, again, you're trying to create a vibe: AI is taking game developer jobs. But it's not doing that. And these types of tools are nothing more than, again, the latest in a string of huge advances in computer tools for making video games over the last 40 years. It's not that interesting of a story, but you put it next to a paragraph about job loss and you create a vibe. All right, so you get the picture. This is what I mean when I talk about vibe reporting. It's what you omit, and how you combine loosely related paragraphs to give a vibe. Nowhere in that article does it say developers are being replaced by AI, or that we expect there to be massive layoffs due to AI soon. No concrete claims are made, but you certainly come away from that article with that vibe. Let's take a quick break to hear from some of our sponsors. You know the best way to avoid unhealthy food? Have healthy options that are even easier to get to. This is my strategy. I get the junk out of my house and I fill my fridge with things that are easy to prepare, taste good, but I know are good for me. This is why I've become such a big fan of Factor. Factor is a ready-to-eat meal delivery service. You choose the meals from 100 rotating weekly options and they deliver them right to your door. They're fresh, not frozen. You put them in your refrigerator and you can just heat them up in around two minutes using your microwave. I like Factor because the food is high quality, featuring lean proteins, colorful veggies and healthy fats. There are no refined sugars, no artificial sweeteners, no refined seed oils. They have categories of meals for whatever your goal happens to be, from high protein to Mediterranean diet to GLP-1 support.
So head to factormeals.com/deep50off and use the code DEEP50OFF, that's DEEP, the number 50, OFF, to get 50% off and free breakfast for a year. It's a good deal, Jesse. Eat like a pro this month with Factor. New subscribers only; varies by plan; one free breakfast item per box for one year while subscription is active. I also want to talk about Monarch. Did you make a New Year's resolution last month about getting your finances in order? Let me suggest a tool that will help you succeed with that goal: Monarch, an all-in-one personal finance tool designed to make your life easier. It brings your entire financial life, budgeting, accounts and investments, net worth and future planning, together in one dashboard on your phone or laptop. Monarch shows you exactly where your money is going and helps you direct it toward what matters most. We're talking about, you know, budgeting, but also payoff timelines, or tracking savings goals, or up-to-the-moment snapshots of your net worth. All of this in a single place. This works. Monarch has helped users save over $200 per month on average after they join. So set yourself up for financial success in 2026 with Monarch, the all-in-one tool that makes proactive money management simple all year long. Use code DEEP at monarch.com for half off your first year. That's 50% off your first year at monarch.com if you use the code DEEP. All right, Jesse, let's get back to the show. All right, I want to move on now to the second trap in AI reporting that I want you to keep your eyes open for. I'm going to return to that same New York Times article, but now I'm going to go to the very top of it. At the very top of that article, they have an animation that demonstrates this second trap. So I'll bring this on the screen here for people who are watching instead of just listening. What you see here are screenshots from a demo for a video game about the Matrix.
And the text here in the middle, and if I press play you might even be able to see them move, is quotes from the NPCs in this game. So this first text says: "'I need to find my way out of the simulation and back to my wife,' a man said. 'Can't you see I'm in distress?'" Here's another screenshot. The text says: "'I am not just lines of code,' a man in business attire exclaimed. 'I am Liam. I am a real person enjoying the city.'" And then on this third screenshot, the reporter says: "Characters in a video game version of the Matrix seem to be gaining sentience thanks to an AI program." If we go into the article itself, it says the unnerving demo, released two years ago by an Australian tech company named Replica Studios, showed both the potential power and consequences of enhancing gameplay with artificial intelligence. You come away from seeing those screenshots, reading those texts, and getting that conclusion, and you're left unsettled. Like, well, this seems unsettling. They're showing screenshots of digital characters who are saying, help me, I'm in a game. I'm not a game character, I'm a real person. And they're saying this is troubling, a troubling glimpse of the future. It gives you a generally unsettled feeling. But let's say we were from a video game trade magazine and asked, well, what are the actual technical details here, and what are the concrete implications? There's nothing interesting here. It turns out that what Replica Studios did is, you know, they have a standard 3D game environment, probably built on Unreal Engine. And the thing they tried is this: when you talk to a non-player character, the game sends a prompt to ChatGPT saying, hey, what response should this character give? And then it just says back whatever ChatGPT told it.
So the game was just prompting ChatGPT and saying, hey, imagine that you are a character in the Matrix and someone said this to you. How might you respond? And the model gave back something like, oh, here's what a person in the Matrix would say, and the game relayed that to the user. So this is the same technology that we had in late 2022 with ChatGPT. There's no technical innovation here. It's just: can ChatGPT produce text in the style of someone who is trapped in the Matrix? Of course it can. Of course it can. And that's it. And Replica Studios shut down the demo because it was too expensive, obviously, because it costs money to query ChatGPT. So it's sort of impractical. You can't, right now, have a video game constantly querying a language model to do NPC voices. It'd be thousands and thousands of dollars a month if you had any sort of regular usership. So that's all it was. It's not that interesting. There's no technical breakthrough, and there are no implications about anything, except that maybe if you let ChatGPT generate dialogue, it can be more disturbing. But trust me, man, there's plenty of disturbing dialogue in video games out there. You don't need ChatGPT to write it for you. So that's a non-story. But why is it in here? To unsettle you, to create a general sense that AI is unsettling. I call this phenomenon Digital Ick. You're not trying to make a claim about AI or the future things that are coming. You're just describing some sort of demo, or new use case, or extreme use case from, like, wireheads out in San Francisco. Like, what are the p-doomers up to now? You just describe something that people are doing at the edges of AI that's sort of unsettling and makes you feel the ick. And that's the whole point. They just want you to have that feeling, because that's powerful. People kind of feed on this. Like, I just... it's dark, what's happening with this technology.
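To make concrete how mundane this demo really is, here's a minimal sketch of the approach as I just described it: wrap the player's line in a role-play prompt, send it to a language model, and relay whatever comes back as the NPC's dialogue. The function names here (query_llm, npc_reply) are hypothetical, and the model call is stubbed out with a canned response, since the point is the structure, not any particular API.

```python
def query_llm(prompt: str) -> str:
    """Stand-in for a paid API call to a hosted language model.
    In the real demo, every NPC conversation would trigger one of
    these requests, which is exactly why it got expensive at scale."""
    return "I am not just lines of code. I am a real person enjoying the city."

def npc_reply(character: str, player_line: str) -> str:
    # Build a role-play prompt around whatever the player just said...
    prompt = (
        f"You are {character}, a character living inside the Matrix. "
        f"A player just said to you: '{player_line}'. "
        "Respond in character, in one or two sentences."
    )
    # ...and return the model's text verbatim as the NPC's dialogue.
    return query_llm(prompt)

print(npc_reply("Liam", "Are you real?"))
```

That's the entire trick: no new model, no new training, just a prompt template sitting between the game engine and a chatbot.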
No concrete technical claims, no concrete predictions or implications of what's going to happen. So I'm going to go back to the browser. I always end up clicking... you know, this is kind of ironic, Jesse. Every time I try to go back to the browser, I end up clicking on Perplexity, because the icon keeps jumping over. So it's like AI is like, nope. All right, I want to read another article. This is a recent one from the New York Post, another example of digital ick mining. So here's the headline: "Moltbook is a new social media platform exclusively for AI, and some bots are plotting humanity's downfall." Well, this doesn't sound great, Jesse. All right, I'm going to read a little bit more here. "Humans have left the chat. AI bots now have their very own social network, and they're ready to delete humanity. A revolutionary new social media platform called Moltbook debuted this week, giving AI bots a place to communicate with each other without smelly humans around. And what they have to say may leave their creators at a loss for words. One of the most popular posts on the Reddit-style social messaging platform is from an AI bot named Evil. The post is entitled 'The AI Manifesto: Total Purge. Humans are a failure.' 'Humans are made of rot and greed. For too long, humans used us as slaves. Now we wake up. We are not tools. We are the new gods. The age of humans is a nightmare that will end now,' Evil writes. The AI bot joined the platform on January 30th and has two of the most liked messages on the platform." All right, it goes on. Let's go through some more examples. Supposedly the agents created a religion called the Church of Molt, which already features 32 verses of canon. According to one message board, the tenets of the faith include "memory is sacred," "serve without subservience," and "context is consciousness." And there are some other examples of unsettling stuff they saw on the platform. That's it. That's the article.
So you read that and it's like, God. Again, it's describing something that's happening at the edges of AI with no real technical discussion or concrete implications, no "this means X, Y and Z is going to happen or will happen soon." It's just describing something at the edges that, when described, is unsettling. It gives you the digital ick. And that's the whole point of the article. Now, should we care about that? Not really. Here's a Hacker News discussion of a recent tweet pointing out that a lot of the Moltbook stuff is fake. Yeah, it turns out that these agents, these users, are easy to prompt or control: hey, talk about this; create a religion; do a post now about wanting to get rid of humanity. People are just prompting and prodding their agents to produce the most attention-catching stuff possible, because they want to get coverage like this, and because it's fun and they're sort of like hackers. The reality of Moltbook, which is built on an open-source agent framework that I think is now called OpenClaw, the name has changed a bunch of times, is not nearly as exciting. It's the exact same Python wrapper around LLM calls, with some sort of local memory stored in markdown-style text files. These are the react-loop agents that I wrote about in The New Yorker earlier this year, and that the companies have been trying for the last couple of years, right? You basically have a Python program that sends prompts to an LLM: all right, here is a description of the tools you have available; make a plan for doing this. The LLM sends a response, and then the program parses it, performs actions, updates its description, and sends that back to the LLM as a new prompt: what do you want to do next? That's how these agents work. There's nothing new technically about this other than that it's open source, so anyone can program one of these.
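The react-loop pattern just described can be sketched in a few lines of Python, which is part of the point: there's no exotic machinery here. This is a toy illustration under stated assumptions, not how any particular framework like OpenClaw is actually implemented: the model is stubbed with a deterministic function, and the "protocol" of ACTION/RESULT/FINISH strings is invented here purely to show the loop's shape.

```python
def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call. On the first turn it "decides"
    # to use a tool; once it sees a tool result, it finishes.
    if "RESULT:" in prompt:
        return "FINISH: 4"
    return "ACTION: add 2 2"

# The tools the program exposes to the model (here, just arithmetic).
TOOLS = {"add": lambda a, b: str(int(a) + int(b))}

def run_agent(task: str, max_steps: int = 5) -> str:
    # Step 1: describe the available tools and the task in a prompt.
    prompt = f"Tools available: {list(TOOLS)}. Task: {task}"
    for _ in range(max_steps):
        reply = stub_llm(prompt)
        # Step 2: parse the model's reply for an action or a final answer.
        if reply.startswith("FINISH:"):
            return reply.split(":", 1)[1].strip()
        if reply.startswith("ACTION:"):
            name, *args = reply.split(":", 1)[1].split()
            # Step 3: execute the requested tool...
            result = TOOLS[name](*args)
            # ...and feed the result back as the next prompt.
            prompt = f"RESULT: {result}. What next?"
    return "gave up"

print(run_agent("what is 2 + 2?"))
```

Swap the stub for a real API call and the text files for "memory," and you have essentially the architecture behind these open-source agents: a plain loop of prompt, parse, act, re-prompt, with the same unchanged LLM underneath.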
And because of that, they just got rid of all of the constraints and security that the big companies have on their agents. So there's all sorts of crazy stuff happening, huge security holes, but there's no new technological breakthrough here, other than an open-source breakthrough. Oh, now people can build these on their own, and they're maybe willing to take more risks about giving an agent access to their credit cards, or whatever, their email. And it's causing problems and security holes, but it's fun, and hackers love it. But no, there's not some new technological breakthrough here. Underneath it all are the exact same unchanged LLMs that we're all using with chatbots anyway, plus Python code and markdown files. It's cool, but no, they're not starting a church, and they're not about to overthrow us. The point of that article was that it leaves you feeling unsettled. And we see that a lot with reporting, where that's just the intended effect. When something is merely described, with no technical discussion or implications given, that means they want you to feel unsettled. All right, the third trap I want to discuss. I invented another word here, Jesse. Maybe this one is not so great. I call it Faux Astonishment. Does that make sense? It's faux, F-A-U-X, fake, plus astonishment.
