iHeart Promo
This is an iHeart podcast.
Public Investing Ad
Support for the show comes from Public, the investing platform for those who take it seriously. On Public you can build a multi-asset portfolio of stocks, bonds, options, crypto and now generated assets, which allow you to turn any idea into an investable index with AI. It all starts with your prompt. From renewable energy companies with high free cash flow to semiconductor suppliers growing revenue over 20% year over year, you can literally type any prompt and put the AI to work. It screens thousands of stocks, builds a one-of-a-kind index and lets you backtest it against the S&P 500. Then you can invest in a few clicks. Generated assets are like ETFs with infinite possibilities, completely customizable and based on your thesis, not someone else's. Go to public.com/podcast and earn an uncapped 1% bonus when you transfer your portfolio. That's public.com/podcast. Paid for by Public Investing. Brokerage services by Open to the Public Investing Inc., member FINRA and SIPC. Advisory services by Public Advisors LLC, SEC registered advisor. Generated Assets is an interactive analysis tool. Output is for informational purposes only and is not an investment recommendation or advice. Complete disclosures available at public.com/disclosures.
Lenovo Pro Ad
Pro drivers live for race day, but for small business owners, every day is race day. That's why going pro with Lenovo Pro matters: one-on-one advice, IT solutions and customized hardware powered by Intel Core Ultra processors keep your business on the right track. Business goes pro with Lenovo Pro. Sign up for free at lenovo.com/pro. Lenovo. Lenovo.
Lowe's Ad
At Lowe's, get up to 35% off select major appliances, plus members get free delivery, install and more when you spend $2,500 on select major appliances. Lowe's. We help you save. Valid through 2/25 while supplies last; selection varies by location. Excludes Massachusetts, Maryland, Wisconsin, New Jersey, Florida. Loyalty program subject to terms and conditions; visit lowes.com/terms for details. Subject to change. Visit your nearby Lowe's on Colorado Street in Kennewick.
PennyMac Ad
Hey, this is US Olympic gold medalist Tara Davis-Woodhall, and I'm US Paralympic gold medalist Hunter Woodhall. As athletes, our lives are about having a clear path and a team that you can absolutely trust. So when it came to getting the best mortgage, we chose PennyMac. PennyMac is proud to be the official mortgage provider of Team USA. Learn more at pennymac.com. PennyMac Loan Services, LLC, equal housing lender, NMLS ID 35953, licensed by the Department of Financial Protection and Innovation under the Residential Mortgage Lending Act. Conditions and restrictions may apply.
Ed Zitron
You're witness to a great becoming. It's Better Offline, and I'm Ed Zitron. Better Offline. Today we're joined by computer science professor and tech writer Cal Newport for hater season. Cal, thank you for joining me.
Cal Newport
Always happy to do some hating, I suppose.
Ed Zitron
Well, you had an excellent YouTube video that I'll be linking in the show notes about the mistakes in AI reporting, though the hater in me says I don't think these are mistakes. But you really touched on something. One of the best videos I've seen on AI reporting, or AI in general, was what you did, where it was basically this thing of, like, the digital ick, that these stories are meant to make you feel uncomfortable. And of course, this faux astonishment thing.
Cal Newport
You know what?
Ed Zitron
Just run the tape. Tell me a little bit about the bits that you found, because I watched it going, yeah, yeah, yeah, like an angry person.
Cal Newport
Yeah, I could imagine you when I was recording that video. I was like, I bet Ed is going wild right now.
Ed Zitron
I was having a great time.
Cal Newport
Exactly. Well, let me just give the context, right? So I'm in an interesting situation for observing this because, you know, I am a computer scientist, so I'm not afraid of the technologies. I'm happy to talk about transformers and feed-forward networks and diffusion models; that's not that scary to me. But I also write about technology. My main journalistic home is the New Yorker, where I do a lot of AI journalism, so I'm up on that as well. And so I'm often noticing things in journalism. When I see AI coverage that is faulty, it really catches my attention because I have a foot in both of these worlds. So there's a lot of good AI reporting out there. There's also a lot of trash. And I really wanted to help people figure out, how do you sort it? If you're reading something, should I pull the ripcord on this article? Like, this is not helping me. How do you know if it's good or not? So I was like, okay, here's what I'll do: I'll come up with the three most common traps I see in AI reporting that make me want to throw my iPad at the wall. And I came up with three, and I'll just give you the three names. I made up all these names. I don't think they're great; my producer, though, he loves them. But here we go. Vibe reporting, that's number one. That is where you omit certain facts and put loosely related quotes next to each other in a way that creates a general vibe that you want to be true, but it's not quite true. So you don't actually make a concrete claim that's not true, but you imply that claim by what you omit, or what you put in your story, or what quotes you put next to each other. So you'll put a quote, for example, about layoffs at the gaming division at Microsoft next to an unrelated quote about concerns about AI and its impact on jobs. Now you have the vibe: oh man, all these people just got laid off because of AI, AI is taking jobs, where in reality the layoffs had nothing to do with AI.
But you put these things next to each other, you give that sense. Then I had Mining Digital Ick. To me, that's any AI story where you take an example from the edges of AI, like something that wireheads in San Francisco are up to, and you just tell a story that's unsettling without talking about any of the technical details, like, well, what's different here? Was there a technical innovation we need to know about? And without discussing any concrete implications: oh, this means this is going to change in the future, or it's going to have an impact on this sector. You're just telling a story to unsettle. And I think a lot of the coverage of Moltbook and OpenClaw fell into that. And then finally there's Faux Astonishment, which is more of a YouTube phenomenon than a print journalism phenomenon. But that's where every single thing that happens in AI is insane, amazing, terrifying; everything is going to change. And so you constantly create this atmosphere of something seismic just happened, so that the consumer of the information ends up in a bit of a panic. Like, I can't put my finger on exactly what's terrible, but, like, everything terrible is happening. Those three traps to me should be automatic ripcords from whatever you're reading or watching.
Ed Zitron
The only disagreement I'll have is that you'd say this isn't the majority of AI journalism, where I'd actually argue it is, especially that first one. The vibe reporting is very common, because the big one right now I'm seeing is this AI-is-replacing-software thing. If you are a reporter, and Cal is not making this statement, I am: if you are a reporter and you bring up Anthropic's Claude Cowork as a competitor to anything, you are wrong. I was about to call you a name, but I'm being nice today for some reason. All right, you're a dipshit, because you are a dipshit. If you look at Claude Cowork, which is a thing for fucking around on your desktop, and you say this is going to compete with Salesforce, you just don't know what you're talking about. You are wrong. But then one abstraction higher is this idea that Claude Code is going to destroy SaaS, software as a service. The idea being that people are just going to start building their own CRMs, they're going to start building their own per-seat software things, but they're going to build it internally. I don't know if you've seen this, Cal. It just reflects a complete lack of understanding of how software works. You don't pay Microsoft for Microsoft 365 or Teams just because you can't build your own Word or what have you. I don't think you could. But nevertheless, it's also because they maintain it, because they make sure it doesn't break, or if it breaks, they fix it. They make sure that it stays up all the time, make sure it's accessible, that it has secure login.
Cal Newport
Well, look, there's a few things going on here. One of these things I reported on last month; I did a big New Yorker piece on agents, right? So this is relevant to Claude Code and how people are thinking about the near future. Basically, here's what seems to have happened. Claude Code and these other command line interface agents can do really cool things. I'm using the word cool here in a very careful way.
Ed Zitron
That's fine.
Cal Newport
Yeah, like cool.
Ed Zitron
I would like you to, like, completely give people the actual explanation here.
Cal Newport
Yeah. So cool meaning like Oculus, right? You put on the Oculus visor for the first time and everyone had the same reaction: this is really cool. Now, that's separate from this being a trillion-dollar business; let's put that aside. But this is really cool. I'm seeing 3D in a world where it tracks my head. Claude Code and other command line interface coding tools became like that for programmers. It was really fun to watch it doing multi-step execution of the construction of demos or this or that. And the reason why it could do those cool demos, why it was so well suited for that world, is that that's a world that exists only in text. So Claude Code works on a command line interface. It's all text based, and it works with a file system. You can write files, edit files, send files to compilers. It all exists in a small number of commands on a command line interface. It's all in a world of text, because that's where computer programs are built. That's a perfect case for LLMs, which love dealing with text, and they love dealing with structured text like computer code. There was an extrapolation that then happened, and really this caught on in January of '25, which is where you first began to get this sentiment of: oh, it's doing such cool things over in the world of command line interfaces and code, surely these agents can soon do similar cool things across all the different things we do on a computer. And that is what laid this foundation. People were so impressed, programmers were so impressed, by the coolness of what was happening with Claude Code and the other command line interface tools that they extrapolated that vibe over to other computer usages. The point of that article I reported was: oh, it turns out everything else we do on a computer is much harder. It's not six text commands on a command line interface creating structured code that you can compile and test to see if it compiled or not. It's much messier. The interfaces are visual.
We don't realize what complexity goes into the things we click and select, doing something as simple as even trying to book a hotel in a new city or something like this. And if you use a language model as the underlying logic and decision engine for making actual actions in the real world, well, the language models make things up and get things wrong, or a little bit wrong, 20% of the...
Ed Zitron
Time. And a little bit wrong in computer science means breaking everything.
Cal Newport
Well, it means a lot. Like, it's okay in code, because they say, oh, that didn't compile, let's try again. But when you're booking a hotel room, as I sort of detailed in that article, it means you ended up in the wrong city two years from now and the room cost $6,000, right? And so it just didn't work. 2025 was supposed to be the year of the agents, and it just didn't work, and they don't really know how to fix it. But we're still vibing. So this is vibe reporting: yeah, but Claude Code is really exciting, so why can't we do that with everything else on the computer? It's actually a much harder problem than they were letting on.
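[Editor's note: the compounding-error point above can be made concrete with back-of-the-envelope arithmetic. The 80% per-step figure below is illustrative, taken from the "a little bit wrong 20% of the time" number in the conversation, not a measured benchmark.]

```python
# Back-of-the-envelope sketch: if an agent gets each step of a task right
# with probability p, and a task needs n dependent steps, the whole task
# succeeds with probability roughly p**n (assuming independent errors,
# which is a simplifying assumption).

def chain_success(p: float, n: int) -> float:
    """Probability that all n dependent steps succeed."""
    return p ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at p=0.8 -> {chain_success(0.8, n):.1%} task success")
# One step is 80%, but ten steps is ~10.7% and twenty is ~1.2%:
# "a little bit wrong" per step collapses into "usually wrong" per task.
```

This is why an error rate that is tolerable inside a compile-and-retry loop becomes fatal when every step commits a real-world action like a booking.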
Ed Zitron
Well, the thing is, especially in 2026, I've seen a lot more Claude Code stuff, and there's been a very big consent-manufacturing operation going on right now. Wall Street Journal, the Atlantic, CNBC. Deirdre Bosa, and this is a statement from me, not Cal: Deirdre Bosa from CNBC should be fucking ashamed of herself, going on CNBC every day, just going, code's gonna destroy all software. She's on Twitter because she was able to vibe code some sort of Monday clone, which is just a project management tool. It's like, I made some software that worked, therefore this is everything now. But that's kind of what you've been talking about with this vibe reporting, where it's like, I did a small thing, so now all things will be done in this manner. Whether it's possible? God, no. God, no. But she, like many reporters, is able to find a lot of people who are invested in AI who will absolutely go on TV and say, yep, it's completely true, that's gonna happen, 100%. It's just so strange, because it makes me feel paranoid and kind of conspiratorial when you look at the majority of news about AI, and it is this vibe reporting. It's these vast extrapolations from 16,000 job losses at Amazon. They mentioned AI. This plus this equals AI is replacing people.
Cal Newport
It's just so.
Ed Zitron
It makes me feel, like, uncomfortable with the world.
Cal Newport
Yeah, well, can we. Can we sit for a moment on that Amazon example? Because I think it's a great one.
Ed Zitron
Please, please.
Cal Newport
It frustrates me. That one frustrates me.
Ed Zitron
Tell me.
Cal Newport
Go ahead. All right, so Amazon lays off 16,000 people, all right? It's covered in two different ways. The vibe reporting way: in my newsletter and my podcast I looked at an example from Quartz, and it was covered in a way clearly intended to imply Amazon laid off 16,000 people because of AI, that they're being replaced by AI. The subhead of the article was the CEO of Amazon talking about how AI is going to increasingly disrupt the job force. And then in the article itself, no alternative explanations are given for these layoffs. They kind of just give the details of, like, here's how many people are laid off and here's where, and then they put a couple quotes in there about AI being very disruptive and being able to automate jobs. It turns out those layoffs had nothing to do with AI. And you could find other reporting, reporting that was for the financial market, so it was trying to focus on what the hell is actually happening. And the deeper reporting was like, yeah, they laid off a bunch of managers because, like a lot of tech companies, they overhired during the pandemic, because cloud computing became much in demand during the pandemic. So a lot of tech companies overhired during the pandemic, and they're all shedding those jobs again. And Amazon's pretty ruthless about this, right? They're always looking for excuses to fire people, and they said, we have too much bureaucratic bloat, there's too many managers, we're going to fire a bunch of these managers we hired so that we can be more lean. Again, that has nothing to do with AI. In fact, this is the second or third round of these firings that they planned to do, the first round occurring before ChatGPT was even released. This has nothing to do with jobs being replaced by AI. But then you can vibe report it, because, well, technically speaking, Amazon also is investing money in AI products.
So technically speaking, money saved by firing these managers could be re-spent on AI. So you can say with, like, a semi-straight face that they fired people because of AI, but clearly the impression that you're giving to the reader is that they were replaced by AI, and it had nothing to do with it. And I heard, by the way, so I wrote about that, I put it in that video, and I've heard on background from multiple Amazon executives since. They were like, we were completely baffled by this coverage that was implying that people were being fired here because of AI. This is just what we do at Amazon. We're ruthless. If you're not earning your keep, we fire you. They were completely baffled by that coverage and were like, thanks for pointing it out.
Ed Zitron
I got to be honest, Cal, if I got an email like that from an Amazon executive, I'd tell them to go fuck themselves. And I mean this nicely, because Andy Jassy, last year, June 17, 2025, put out a whole thing about how today, in virtually every corner of the company, we're using generative AI to make customers' lives better. I believe Amazon benefits from that obfuscation, and I think they deliberately fuel it. Now, there may be executives who disagree with this.
Cal Newport
Well, these were lower level managers, right, so not executives. I should say humans who work there, who were like, oh no, no, no, they're not firing people because of AI, they're just being brutal.
Ed Zitron
Right, right. They're the people who tell the truth.
Cal Newport
Yeah, exactly, exactly. But no, you are right. There has been a lot of vibe reporting. I've been covering this for the last two years: the entire sequence of job reductions that were post-pandemic corrections. The entire tech industry overhired during the pandemic, the entire tech industry has cut jobs in the last few years because they hired too many people, and now they have to correct back to where they were pre-pandemic. Consistently, across the board, these cuts were vibe reported as due to AI. Consistently, you see exactly that story. I'm a computer scientist; we see this in coverage of computer science majors as well. Same idea. Computer science majors are historically directly tied to the tech industry. If it's cutting in a down cycle, majors go down; when the tech industry is booming, we get a lot more majors, because they're good jobs. It's just economics; it's not surprising. And so computer science majors went down in the last year or two as the tech companies started cutting. That was reported in the Atlantic as kids are not majoring in comp sci because AI is going to do all these jobs. We've had these cycles; every five years we have this cycle. This is nothing new. Anyways.
Ed Zitron
No, no. And the Atlantic has just done a piss-poor job with this stuff. I'm hating, and I'm hating on them because they had an awful Claude Code thing. I'm gonna bring it up and read the title. I'm not gonna say the reporter's name, because I think that's mean. But let's look at this: Move over, ChatGPT. You're about to hear a lot more about Claude Code. No way you're gonna hear about that, because the fucking Atlantic is writing about it. And it's just: over the holidays, Alex Lieberman had an idea. What if he could create Spotify Wrapped for his text messages without writing a single line of code? Lieberman, co-founder of the media outlet Morning Brew, created iMessage Wrapped. I just want to start here and say that guy doesn't do any fucking work, and I know people who work there. Like, I just want to start by saying, if that's the best you've got. And that probably involved throwing a giraffe and the entire zoo into a vat of acid to make the GPUs move. You're meant to read this, and the deliberate effect is you're meant to be scared and excited. It is kind of a mishmash between the vibe reporting and, I guess, the first one. In fact, this might be a triple score, because this is meant to fill you with discomfort, but also make you freak out, but also be excited. A rare triple score. Stories like these piss me off, because I'm fine with people going, this could do this. I don't mind, it happens. But when it's just like, feed me the slop right into my mouth, asshole, make me feel all the marketing things all at once. First of all, I'm not impressed by the idea of doing iMessage Wrapped. That's not a thing a human being with, like, friends and hobbies does. I mean, I've been rewatching the first season of True Detective. I've got many more shows I could watch.
Never once would I think, what if I could get a Wrapped of all my messages? What a fucking psychotic thing to do. But then you read this, and it's like: You spent your holidays with your family, wrote one tech policy expert. That's nice. I spent my holidays with Claude Code.
Cal Newport
Well, it's fun. I think it's fun to create these sort of demo apps if you're engineering minded. So in that sense, the most cynical analysis here is that Claude Code is like model trains for engineering-minded people. It's a fun hobby. Look at this, I made a thing. Like, I was talking to a friend of mine yesterday, a computer scientist, and he was like, oh, I built this thing where I can email an appointment to something that gets parsed by an LLM and then goes to my calendar. That's just fun for him, in the same way that someone else might be like, hey, I built a cabinet and it's really nice. Look, I got the wood to go together and I'm proud. I could have just bought a cabinet or whatever.
Ed Zitron
Except you built something with your hands, and a cabinet can have things put in it. It just feels like Lego, almost. It's toy software that has some functionality. Because the big thing I'm waiting for with the vibe coding stuff is an actual product. You know what I mean?
Cal Newport
Well, you know, that doesn't go well. I mean, look, this is the story of Moltbook, right? Because...
Ed Zitron
Oh yeah, tell me more about Moltbook. We were just talking about this on David's show.
Cal Newport
Oh my God. There's a whole can of worms to open there. I'm just going to open, like, the top of the nearest can, which is, before even getting to what Moltbook is and is not: it was in the news all the time, it was vibe coded, and it was immediately just full of terrible security holes because it was vibe coded. And it turned out that you could get the API key. So the key you use when you access the paid service, when you use an LLM, you have an API key so they know who to charge. You could just steal the API key of everyone who was using it, because the guy just vibe coded it, and so no one was actually there looking at the code. But what I think is going on, okay, so here's what I would like to see. I would like to see more reporting that would be: how are people using X? To me that's very interesting, right? How are people using X? And the problem is those answers right now, and this is confounding, I think, to people who are very excited by the potential and coolness of these things in isolation: the answer to how are people using X is often not nearly as exciting as you would guess. And I think the reality with, I mean, I don't quite have my arms around Claude Code. I do know there's a lot of people who are building kind of like internal tools or personal tools with it, which I think is cool. Most people aren't interested in that, but some people are, and I think it's fun. Computer programming is fun, right? And some of those tools are useful, but that's not a major industry on its own. I'm designing a tool for my small company that makes it easier for us to do whatever? That's cool, but that's not a trillion-dollar industry. I can't get my arms around yet exactly how professional computer programmers are using it. They're all talking about it.
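[Editor's note: Moltbook's actual code isn't reproduced here, but the class of bug described above is well known: a vibe-coded app that exposes its upstream API key to its own users instead of keeping it server-side. A minimal illustration of the pattern, with all names invented.]

```python
import os

# BAD: the secret ends up somewhere clients can read it (shipped JS,
# or an endpoint that echoes server config back to the browser).
LEAKY_CONFIG = {"model": "some-model", "api_key": "sk-SECRET-EXAMPLE"}

def leaky_endpoint() -> dict:
    return LEAKY_CONFIG  # the key goes over the wire to every user

# BETTER: the key stays server-side. Clients call your proxy, which
# attaches the key upstream and never includes it in responses.
def proxy_endpoint(user_prompt: str) -> dict:
    secret = os.environ.get("LLM_API_KEY", "")  # read server-side only
    # ...forward user_prompt to the paid API with `secret` attached...
    return {"model": "some-model", "prompt": user_prompt}  # no key in response
```

A human reviewer catches the first version in seconds; an unreviewed vibe-coded app ships it.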
Ed Zitron
Really?
Cal Newport
Yeah. But I can't get my arms around it yet. What's your sense of it?
Ed Zitron
I'm going to be honest, I was literally going to ask you: have you heard of people using it? Because if you go on the Everything app, if you put on a full hazmat suit, you go on X and you go and look, the way that people talk about this is like they have connected into the Matrix, that they are now a thousand-x engineer. But when you go and look at what they do, no one actually says. There's always these vibe tales, the mythology, where it's like, I had a problem, vaguely, that would have taken me X number of hours, but when I used Claude Code, it solved it immediately and caught two bugs that I didn't know about. It's very Marine Todd style. Then everybody clapped. And you are a computer science professor and you don't know either, which makes me think it's not as big a deal as people are saying. My other bit of evidence is that at the end of last year, Anthropic's Claude Code revenue was $1.1 billion annualized, so about 90 to 100 million a month. For a revolution, that feels low somehow.
CarMax Ad
So do you want to start shopping for your next car but don't know where to begin? Start at CarMax, where you can shop your way from start to finish. Whether you're shopping for something practical, adventurous or luxurious, CarMax puts you in the driver's seat. And if you want to shop cars that fit your budget, CarMax has your back all the way. Simply grab your phone and get pre-qualified from your couch, the dog park, or even on a coffee break. It's quick, easy and has zero impact on your credit score. Want to explore your options? CarMax has plenty of options. In fact, with over 45,000 cars to choose from, CarMax has rides for almost every budget, including more than 25,000 cars priced under $25,000. From browsing online to checking out cars on the lot, you can shop your way at CarMax. Want to get started on the search for your next car? Go to carmax.com for details and get pre-qualified today. Want to drive? CarMax.
Sponsor and Promo Voices (including Cindy Crawford and others)
Hi, I'm Cindy Crawford, and I'm the founder of Meaningful Beauty. Well, I don't know about you, but I never liked being told, oh wow, you look so good for your age. Why even bother saying that? Why don't you just say you look great at any age, every age? That's what Meaningful Beauty is all about. We create products that make you feel confident in your skin at the age you are now. Meaningful Beauty. Beautiful skin at every age. Learn more at meaningfulbeauty.com.
Cal Newport
Yeah, I mean, what I know as a computer scientist is that you can't use it to write performance-oriented code, you can't write safe code, you can't write code that has to juggle a complex set of scenarios. You just need good programmers' eyes on it, building this code.
Ed Zitron
Why is that? Is there a way of explaining to a non-coder why that is?
Cal Newport
Code, there's like a poetic element to it, you know. Writing good computer code is difficult. You're often dealing with, what is my problem here, and I want an elegant way of satisfying this problem. You're often drawing from pretty nuanced algorithms and data structures to try to figure out, how am I going to organize information and efficiently access it? When performance comes into play, there's a lot of really subtle decisions to make about how am I going to store or use things in such a way that we don't get bogged down when we're trying to execute things. I don't code as much anymore because I'm a theoretician, but I did my whole life, since I was 7, and it's an art form. When you're using Claude Code, you're not really supposed to look at the code, and I think that takes a lot of use cases probably off the table. The use case is, you're supposed to have these different instances of Claude Code running, and this one's going to write code, and then this one's going to write tests for that code, and then Claude Code is going to run the tests and then try to fix the code if it doesn't pass the tests. But your eyes aren't on the code at all, and obviously for a lot of programming that's an issue. But then the other thing I've heard about the computer programming industry is that it's very stratified. There's a smaller number of really good, serious programmers who produce 90% of the really important, valuable code on which everything runs. I don't think they would touch Claude Code with a 10-foot pole; they're good at what they do. And then you have this huge stratum of people writing JavaScript and hacking together Python, and it's not very good code. I guess you could replace some of that; it's functional enough. But yeah, I can't get my arms around it yet. But I do get, you know, I have a lot of sources.
So I do hear from people who are talking about how cool Claude Code is, and I do think it's cool. I hear from a lot of professional programmers who are like, yeah, we can't use this, we're trying to write serious programs. We have to sell this software; this is not solving a problem we have. So, you know, I don't know. But the problem is, that's what the reporting should be. Hey, here we are at this company, looking over the shoulder of people. What are they doing? Let's talk to the engineering teams at this tech company on background: exactly what is going on here? And I think what a lot of reporting on AI has become is hype laundering. So you look at the discussion about the technology happening among more engineering-minded people. You convince yourself as a reporter, I can't understand the engineering, but I will trust the people who do. And then you launder what you're sensing from that hype into your articles, not realizing that nerds like me, we get hyped up about stuff, and we get super excited about stuff, and we go crazy. You can't just launder our hype into "this is what's happening." It's like reporting on a war where you have no one embedded, no one actually on the ground where the battles are happening. You're just responding to the press conferences that the generals are holding back in the Pentagon. That's not a way to report on what's actually going on.
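[Editor's note: the multi-instance workflow described a few turns back (one instance writes code, another writes tests, and a loop runs the tests and feeds failures back) can be sketched like this. `write_code` and `write_tests` are hypothetical stand-ins for model calls; note that no human eyes are on `source` at any point, which is exactly the objection raised above.]

```python
# Sketch of the write/test/fix loop. `write_code` and `write_tests`
# stand in for calls to separate agent instances.

def run_tests(source: str, tests: str) -> tuple[bool, str]:
    """Exec the code then the tests in one scratch namespace; report pass/fail."""
    ns: dict = {}
    try:
        exec(source, ns)
        exec(tests, ns)
        return True, ""
    except Exception as e:  # a failing assert or a crash
        return False, repr(e)

def write_test_fix_loop(write_code, write_tests, task: str, rounds: int = 3):
    tests = write_tests(task)   # "tester" instance
    source = write_code(task)   # "coder" instance
    for _ in range(rounds):
        ok, err = run_tests(source, tests)
        if ok:
            return source       # shipped without a human ever reading it
        source = write_code(f"{task}\nTests failed with {err}; fix the code.")
    return None
```

The loop only ever checks "do the machine-written tests pass," so anything the tests don't cover (performance, safety, security) sails straight through.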
Ed Zitron
And I think the other thing as well is, if you don't do this hype laundering — I don't know how these people think. If you're a reporter listening to this and you have a thought about this, send it to me anonymously. Is it 76 on Signal? But my thought as well is, there's probably a problem with rationalization, because if you look at this and you say, okay, well, it's kind of cool, it's fun in whatever indeterminate way, but it doesn't seem like serious software, like actual real-deal software, is being made with it. But then the CEO of Google says 30% of code is written by AI, which is bullshit, and I've heard from so many software engineers. Well, it couldn't be that everyone's just wrong, right? It couldn't be the case that everyone is making the most egregious capital expenditure fuck-up of all time. This will be historic, I believe. Worse than railways, digital beanie babies, but done at the scale of laser tag arenas. Now it can't possibly be that, because everyone else is saying this is exciting and good. And at that point they choose, instead of being worried, instead of being a bit anxious about this, they say, well, Amazon Web Services spent a lot of money, so this spends a lot of money too, so this is actually good. It's actually good. And indeed, these people seem emphatic and excited. And I, as a non-coder, can build a fudged CRM that probably would not withstand even the laziest hacker. I can do this. And thus they extrapolate further from there. And what sucks is, the people that I believe actually will be hurt by this are retail investors — regular people buying stocks in these companies, or selling software stocks because they believe that Claude Code will replace them. And ultimately I think it's just going to be a bloodbath for people's 401(k)s that could have been avoided, except it would require reporters to do something uncomfortable. And I don't think they want to do that, ever.
Cal Newport
I think there's two things going on. This is what I've decided about reporting. First of all, it's asymmetric risk. A lot of reporters are like, look, there's not a major risk if I'm excited about this and it doesn't pan out, because we could be like, yeah, surprisingly this didn't pan out, and it was some factors we couldn't see. But they do feel like there's a huge amount of risk in saying this is not a big deal, and then it is. So it's definitely an asymmetric risk. We saw a lot of this during Covid as well, right? There would be less harm if you were too alarmist about something, but there could be a lot of harm, they felt, reputationally, if you're like, this is not a big deal, and it was. So there's definitely an asymmetric risk profile. There's also a meaning-defining profile. It's just really exciting to think everything is gonna be disrupted and change unrecognizably. It sort of gives a focus and meaning to an otherwise somewhat chaotic and disrupted world that we're in right now. And so there's that aspect too: people wanna believe there is something massive about to happen, because in some sense it wipes away all the bad stuff that's happening. Who cares? None of this is relevant, because this much bigger thing is coming. So I think that gets wrapped into it as well. I think the economic reporters are more on this, because their whole job is to try to... I mean, they're not on it. I think after your work they're on it more. But, like, I don't know if I agree.
Ed Zitron
There's an article in Bloomberg I'm trying to shove through archive.is, because they were so mean and unfair to my friend Steve Burke at Gamers Nexus that I won't pay them. But it's more shit about the software narrative, the fact that software is being disrupted by Claude Code. You get the same pallid reporting from Bloomberg, and even the Financial Times about Anthropic — and the FT is generally pretty good — where it's like, yeah, they're just going to make $30 billion next year. It's just fanciful. The skepticism, the cynicism, doesn't exist. And I get it, I agree on this.
Cal Newport
Well, there was bubble reporting, though. Last fall there was a period where everyone did the bubble reporting. After GPT-5 you had a two-month period where every major publication did bubble reporting. But yeah, I guess that did kind of die off.
Ed Zitron
It died off because they hear one nice thing from Jensen Huang and they're like, well, I'm sold.
Cal Newport
That jacket, that jacket's looking pretty sharp. I mean, someone who wears that jacket, how could they be wrong?
Ed Zitron
Would a man that has a shiny jacket, like the lead singer of Korn wears at concerts, would he lie? What really bothers me, as far as the economic reporting goes, though, is the Oracle stuff. Oracle needs to be paid $300 billion over five years by OpenAI, a company that, if we're to believe reporting, which I do not, made $13 billion in revenue last year and lost — they claim — $9 billion. I think it's probably higher. How are they meant to pay $30 to $60 billion a year? And everyone's like, well, they'll work it out. How's Oracle meant to build those data centers? Those data centers are going to cost $189 billion. They've raised $100 billion. Okay, how are they meant to do that? And everyone's just like, ah, they'll work it out. I wish I could do this with the fucking bank. I need a $500 million house, I'll pay for it at some point, all right? It's fine. And the news would run articles about my genius housing purchases. It's just one of those things. And you said, well, the asymmetric risk — it does exist for some of these reporters, because I've been saving their bylines for years, because I actually think there needs to be some sort of reckoning with this. Because if you look back, there are major financial outlets that did the same thing, that were literally propping up Sam Bankman-Fried two weeks before FTX collapsed, who then went on immediately to cover AI.
Cal Newport
Yeah, fucking.
Ed Zitron
And now they're peddling bullshit for Anthropic. And sorry, I'm kind of hating — well, I guess it's hater season, so I can. It just frustrates me, because regular people are being scared. They're being scared by the kind of faux-astonishment reporting — which I actually love — of like, oh well, OpenClaw has proven that AGI is here, or they built their own social network. So we should be...
Cal Newport
Software people are getting scared. Insane.
Ed Zitron
Software is dead.
Cal Newport
Yeah, yeah, like Singularity is here.
Ed Zitron
And I assume you saw that. For the listeners, by the way, OpenClaw had this fucking Claude bot, whatever it is. They had their own social network where the quote-unquote LLMs would post. But it turns out that most of those posts were just made by their owners.
Cal Newport
Yeah. And this was made by... I mean, this is a good case study, right? This one bothered me because, like, writer friends I know who are not technology related were texting this to me. They were worried, right? They were texting me these articles, like, this seems bad. This seems like something really bad. They were really getting the digital ick really strongly off of these articles. And I would start reading these articles and I'd say, well, there's no discussion of what is the technical breakthrough here and what are the concrete implications. Because it turns out there were zero technical breakthroughs. There is no new AI technology connected to OpenClaw — which used to be called Molt, whatever, Open Molt or OpenClaud or whatever, but OpenClaw is, I think, where they ended up, right? Which is an open source library or framework for building AI agents powered by LLMs. There's no new AI technology involved in this at all. The agents you build are just accessing off-the-shelf LLMs that we're all using for chatbots anyway. You can aim it at whatever commercial chatbot you want. There's no new framework for how the agents work. It's the same sort of ReAct loop that we've been trying for the last two or three years, where you have a program — it's like a bit of Python code — that asks an LLM, hey, here's what I want to do, here's the tools you have available, come up with a plan. And then it sends back a plan, and then the Python code takes the first step out of that text and says, okay, here's the tools available, what should I do to execute this step? And then the LLM will give you some steps, and then the Python code runs those steps, and then you just go back and forth, right? That's this basic ReAct, which is...
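The back-and-forth loop Cal is describing can be sketched in a few lines of Python. Everything here is illustrative: `call_llm` is a stub standing in for any real chat-completion API, and the two "tools" are invented for the example — the point is the shape of the loop, not the details.

```python
# A toy version of the plan-then-act (ReAct-style) loop: ask the model
# for a plan, then execute each named step with a local "tool".

def call_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion API call.
    if "come up with a plan" in prompt:
        return "1. search\n2. summarize"
    return "done"

# Two made-up tools; a real agent might have web search, a calendar, etc.
TOOLS = {
    "search": lambda goal: f"results for {goal}",
    "summarize": lambda goal: f"summary of {goal}",
}

def run_agent(goal: str, max_steps: int = 5) -> list:
    # Step 1: ask the "LLM" for a plan, given the goal and the tool names.
    plan = call_llm(f"Goal: {goal}\nTools: {list(TOOLS)}\ncome up with a plan")
    steps = [line.split(". ", 1)[1] for line in plan.splitlines()]
    trace = []
    for step in steps[:max_steps]:
        action = step.strip()
        if action in TOOLS:              # run whichever tool the plan named
            trace.append(TOOLS[action](goal))
    return trace

print(run_agent("book a flight"))
# → ['results for book a flight', 'summary of book a flight']
```

A real agent would feed each tool's output back into the next LLM call; this sketch skips that to stay short, which is exactly the kind of simplification that makes these loops look more capable in demos than in practice.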
Ed Zitron
Exactly how Manus worked as well.
Cal Newport
Everything. There's nothing new about this.
Ed Zitron
Manus was this AI agent that Facebook — well, Meta — might be acquiring. Literally, when you use it, it just writes Python code for every step.
Cal Newport
Well, yeah, so there's nothing new here technologically. The only thing that was new about OpenClaw is it was open source, so it made it easy for anyone to write bad agents. And the other thing that made it interesting to wireheads in San Francisco is that the commercial products — where people are trying to build these at companies — have common sense security. Well, probably if it's just a Python program blindly doing what an LLM tells it to do, it probably shouldn't have access to your credit cards or to your hard drive or whatever. With OpenClaw, you can give it access to anything you want on your computer. And so you could build really cool demos that are also incredibly insecure and unsafe. And so it was fun for hobbyists, but there was no new technology. Nothing was new. It was just a way for other people, like hobbyists, to build their own agents that were less safe than the more carefully built ones. Here's a story people don't know: in the immediate aftermath of ChatGPT — so we're in early 2023, right, ChatGPT has come out — OpenAI goes on a road tour of major publications, to try to say, hey, here's what's going on, you need to write about this. In early 2023 they're like, the next thing we're going to offer is something like these agents. They called them plugins. And you can install plugins that basically can do actions on behalf of your LLM queries. You could have, like, a book-an-airline-flight plugin and say, hey, ChatGPT, book me a flight. A language model can't do anything but produce text, but then the plugin could take the text and go and book you a flight. And they're like, this is the future, obviously. And that project disappeared, because, oh, it's incredibly unsafe and unreliable to have code that can interact with the real world that's following commands from an unreliable source, namely an LLM. That went away not because the technology was hard, but because it's not safe. So there's nothing new.
In my whole article on agents — they tried this all of 2025 and were failing to get this type of agent to work for anything outside of basically producing computer code. So OpenClaw is nothing new. It was just a way for other people to build these things. The only interesting stories were security hole stories. But it was reported, man. I was listening to the All-In podcast and they said, this is the future of AI. Like, this is it. Everyone is going to have an OpenClaw agent as, like, a personal assistant, and this is the biggest thing in AI. And I don't know who it was — one of those guys was like, yeah, we replaced our podcast producer with this agent who can email guests on our behalf and book it on our calendar. And it's costing about a thousand dollars a day right now to run it, but, like, I don't know why we're going to need employees in the future. Hell yeah, brother.
Ed Zitron
Yeah.
Cal Newport
So anyway, it's a non-story. If you said, what's the technical implications? None. If you say, what are the concrete implications? The best story they can have is, like, maybe... I don't know what it is. All these companies have been trying to build these things for years. So I don't know. But it was reported like, just, something icky is happening. And that Moltbook was an application that was built for these agents to communicate on, like a Reddit-style social network. They vibe coded that framework, and it was full of security holes, as we mentioned before. And there was a small number of users that created a huge amount of agents, and they were just kind of prompting and prodding their agents to produce, like, let's talk about creating our own church, or killing humanity. Like, guys, you have Python code asking LLMs to write text in the style of, like, the Matrix, and you're posting it on a fake social network. The real story here should be, where are they? Don't these people have things to do? Don't they have jobs?
Ed Zitron
Yeah, yeah, yeah, yeah. It really.
Cal Newport
My model train, I added a tunnel. That one really got there.
Ed Zitron
No, I actually will push back on that. Model train listeners, I think, make up 20% of the listenership of this podcast. Of course, also, model trains are cool — signaling to my fellow autists out there. But no, model trains are a physical thing which you build. You build a little train, a little city. I think they're delightful. Respect to those. With this, it's like — and you know what? If they were just saying, hey, I've been fucking around with some software and I did something cool, I'd respect the shit out of them. I'd be like, yeah, enjoy yourself. It's going away, this is all subsidized rates, but have fun with that. Don't know why you need a Mac Studio, but good luck. No, they are like, this is the future. But I'm gonna be honest, Cal. My real question is, what does OpenClaw actually do? Because when I went and read what it actually does — I read so many posts — they said it books calendars and does this, and I could find no proof that anyone successfully did that.
Cal Newport
And it doesn't. It's just a series of library calls that you can use to build your own program that, in theory, would do that. So OpenClaw itself is just a series of interfaces and hooks that makes it easy to write a program that talks to other services, basically, and makes calls to LLMs. So it's just like a wrapper in which you can write your own code. And some people are trying — yeah, like, you can have it talk to your calendar, you can have it talk to your email. In theory.
Ed Zitron
But does it work?
Cal Newport
Well, it's just asking an LLM. You can simulate it — I mean, I did this for my New Yorker piece. I was like, look, I can just simulate being an agent. Just ask any LLM: here's what I'm trying to do, give me the steps for doing this, or whatever. And like anything else you ask of an LLM, it will give you answers that sound very reasonable in general and then have a lot of issues in the details. Which is why agents based on LLMs have struggled. Because it's fine if you as a human are talking to an LLM, because you don't realize how much filtering and tweaking you do. You're like, well, I can kind of ignore that, and this is good, or let me ask you to redo it. It's really an issue if you write a program that just says, I'll ask the LLM for a plan and then just do whatever it says. Because it doesn't know that doesn't really make sense, or that this sounds generally reasonable but has issues in the details. That doesn't work well when you execute.
Ed Zitron
I mean, in a practical sense — they said on the All-In podcast — three dumb bitches saying... exactly, that's a meme reference, I don't usually use that word — those people sitting around going, blah, blah, blah, oh, we've made it and replaced our podcast producer with an LLM that can make appointments and send emails. In practice, is that true? I severely doubt that. Also, you're spending a thousand dollars a day, so you're paying $365,000 a year for this, right?
Cal Newport
Yeah.
Ed Zitron
Are you really? Or are you just... is it completely autonomous? It probably isn't. And that's the thing. I get why the All-In guys do it, because they probably have investments and they're boosters. TBPN, same fucking deal. It really rules that the two largest, like, tech shows in the Valley are just state media, but for Silicon Valley. What gets me is when you get, like, The Atlantic, CNBC, Business Insider, and places like that doing stuff like this. And it bothers me because they don't even need to hate on it. They could just say, yeah, I could do this, it's pretty cool, right?
Cal Newport
Yeah.
Ed Zitron
But I guess that doesn't get...
Cal Newport
The clicks. It's not an interesting story. The real story is not always that interesting. It's like, look, hobbyists are building these tools that kind of do cool things but make mistakes, and it's a little expensive. The most interesting story out of OpenClaw — the only thing I think is going to be impactful out of it — is because it was so expensive, right, that thousand-dollar-a-day thing, what it's forcing these hobbyists to do is turn to cheaper alternatives for models. And that I do think is significant, this idea that there's open source models out there that are significantly cheaper than trying to use, like, OpenAI or Claude. More and more people are running their own local models, because it turns out, for most specific uses you might use an LLM for, you don't need a super fine-tuned trillion-parameter beast of a model running in some data center somewhere. It's like, you know what, I'm parsing my email to try to extract suggested times. I'm fine with a 20 billion parameter model that can easily fit in a single GPU on a thing we share in our office, or something like that. So to me that's the most interesting story out of OpenClaw: the bad news it could be for the big companies as people get more comfortable with, we don't need these Formula One car versions of language models for the stuff we're actually doing. We're fine with the Ford Focus, right? And that's the transition that, to me, is what we should be writing about. That's an interesting idea to me: you have these huge high-valuation companies, but you also have all these open source models — the weights are just out there in the public domain — that can do 98% of what people care about, and you're beginning to get low-cost competitive services where people can just spin those up in cheaper data centers. That's interesting to me. That's an economic story. AI creating its own church is not as relevant to me.
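That kind of small, local extraction job looks the same as a commercial API call, just pointed at your own box. The endpoint URL and model name below are made up for illustration; tools like llama.cpp and Ollama expose an OpenAI-compatible chat endpoint of roughly this shape.

```python
# Sketch: an OpenAI-style chat request aimed at a locally hosted
# open-weight model instead of a commercial API. URL and model name
# are assumptions, not a specific product's values.
import json

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical

def build_request(email_text: str) -> dict:
    return {
        "model": "local-20b",  # a small open-weight model, not a frontier one
        "messages": [
            {"role": "system",
             "content": "Extract any suggested meeting times from the email."},
            {"role": "user", "content": email_text},
        ],
        "temperature": 0.0,  # extraction task: no creativity needed
    }

payload = build_request("Can we meet Tuesday at 3pm or Wednesday at 10am?")
print(json.dumps(payload, indent=2))
# To actually send it: urllib.request, or the `openai` client pointed
# at LOCAL_ENDPOINT via its base_url setting.
```

The request body is identical either way; the only thing that changes is which machine answers it, which is why switching from a frontier API to a local 20B model is mostly a configuration change, not a rewrite.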
CarMax Ad
So do you want to start shopping for your next car but don't know where to begin? Start at CarMax, where you can shop your way from start to finish. Whether you're shopping for something practical, adventurous or luxurious, CarMax puts you in the driver's seat. And if you want to shop cars that fit your budget, CarMax has your back all the way. Simply grab your phone and get pre qualified from your couch, the dog park or even on a coffee break. It's quick, easy and has zero impact on your credit score. Want to explore your options? CarMax has plenty of options. In fact, with over 45,000 cars to choose from, CarMax has rides for almost every budget, including more than 25,000 cars priced under $25,000. From browsing online to checking out cars on the lot, you can shop your way at CarMax. Want to get started on the search for your next car? Start at carmax.com for details, sales and get pre qualified today. Want to drive CarMax?
Lenovo Pro Ad
There'S no championship league for small business owners, but if there was, you'd be at the top of the standings. Because going pro with Lenovo Pro means you've got the winning formation. One on one Advice, IT solutions and customized hardware powered by Intel Core Ultra processors help you stay ahead of the competition. Business goes Pro with Lenovo Pro Sign up for free@lenovo.com Pro.
Sponsor and Promo Voices (including Cindy Crawford and others)
Now I'd like to introduce you to Meaningful Beauty, the famed skincare brand created by iconic supermodel Cindy Crawford. It's her secret to absolutely gorgeous skin. Meaningful Beauty makes powerful and effective skin care simple and it's loved by millions of women. It's formulated for all ages and all skin tones and types and it's designed to work as a complete skin care system, leaving your skin feeling soft, smooth and nourished. I recommend starting with Cindy's full regimen which contains all five of her best selling products including the amazing Youth Activating Melon Serum. This next-generation serum has the power of melon leaf stem cell technology. It's melon leaf stem cells encapsulated for freshness and released onto the skin to support a visible reduction in the appearance of wrinkles. With thousands of glowing five star reviews, why not give it a try? Subscribe today and you can get the amazing Meaningful Beauty system for just $49.95. That includes our introductory five piece system, free gifts, free shipping and a 60 day money back guarantee. All of that available at meaningfulbeauty.com.
Ed Zitron
I'm already working on a story for this Friday, in fact, on my premium newsletter, about the fact that actually the margins on serving GPU compute are dog shit. Like, the best of the best — a colocation place like Applied Digital — is at like 27% gross margins, and that's at 100% utilization. Anything below 70% utilization, they're burning cash. I hear of this data center out in North Dakota losing a million dollars a day. That's the thing — even with these lower-cost models. On-device could be interesting.
Cal Newport
That's what's going to happen I think. I think on device is what's going to happen.
Ed Zitron
I just remembered something. This is a classic bullshit story that I see every so often. So have you read any of the stories that are like yeah, Claude can now work for hours uninterrupted. Have you read about these?
Cal Newport
You heard this? You seen this work for hours? Oh yeah, yeah. We're talking about the multi step agent execution. It keeps coming back.
Ed Zitron
Yeah, yeah. On CNBC: AI on the verge of the eight-hour job shift without burnout or breaks. Is the 24-hour AI workday next? I'm gonna just censor myself, what I was gonna say there. Because it's just like, yeah, it can work for hours. Is the output good? Does it...
Cal Newport
It's a problem. Yeah, is it just a loop? Does it just keep... recall it, recall it, recall it? Like, I can write a Python program that calls an LLM for 24 hours.
Ed Zitron
If you need me to burn something for hours on end, just give me some gasoline. I got you, baby. I could sort this right out. Be much cheaper. But it's like, yeah: by September 2025, Claude Sonnet 4.5 was reported to run autonomously for up to 30 hours. Reported. I mean, that's more vibe reporting. You're meant to read that and go, wow, this thing is performing tasks that are useful, that execute code, that do something, and that those 30 hours are equivalent to 30 human working hours — versus, they sat there and pissed their pants for like 29.5 of those, at least.
Cal Newport
Well, and also those are, like, specialized tasks, typically, that have clear milestones and testing. So it's like, it can do something, the loop can try and try until it passes the test, and then you know that's done, and then you can move on to the next step, and then you can keep calling the LLM and executing until it passes the next test, and you can move on. And in theory they're avoiding error cascades, because it's testable and they can keep retrying, or something like that. I mean, METR had this problem with their graph of how long of tasks AI can now do. And it was this sort of super arbitrary decision of, like, oh, here's a task that takes a human five hours, now AI can do that. Whereas it's really more about how many things in a row can you do without the errors cascading out of control, or something like that. It's very different.
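That try-until-the-test-passes loop is simple to sketch. Here `generate_attempt` is a deterministic stub standing in for the LLM call; the point is that a checkable test lets the loop throw away bad attempts instead of letting errors cascade into later steps.

```python
# Sketch of the "retry until the milestone's test passes" loop that
# lets coding agents run for hours: bad candidates are discarded, so
# errors don't compound across steps.

def generate_attempt(task: str, attempt_no: int) -> int:
    # Stub: a real agent would ask an LLM for a code candidate here.
    # Deterministic fake "guesses" keep the sketch self-contained.
    return attempt_no - 1

def solve_milestone(task: str, passes_test, max_tries: int = 100):
    for attempt_no in range(1, max_tries + 1):
        candidate = generate_attempt(task, attempt_no)
        if passes_test(candidate):
            return candidate, attempt_no   # milestone done, move on
    raise RuntimeError("no attempt passed the test")

answer, tries = solve_milestone("make it return 7", lambda c: c == 7)
print(answer, tries)  # → 7 8
```

Notice that nothing here requires the attempts to be good, only that the test is checkable — which is exactly why "ran for 30 hours" says little about how much of that time produced useful output.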
Ed Zitron
Yeah, you're being way nicer than you should be, in my opinion. Because they don't even explain. They don't even say, like, yeah, and we managed to make it do it, and this is how. They just go, works for eight hours. CNBC just fucking — just woo, just tearing their shirts off and screaming.
Cal Newport
But it goes back to the two things we need in this reporting. You need the details of the relevant technical innovation and a discussion of concrete implications, near-future implications of that breakthrough. That's what I'm always looking for. So if you want to cover a story like that — well, what does that mean? What is the technical breakthrough, right? What does this mean, technically, that you can do 30 hours of work, or eight hours? What is the work? What was changed? What did they figure out? What was happening before? What technical breakthrough made this possible now? And then, what are the concrete implications? What specific things will this now allow us to do? Tell us what jobs are going away. Tell us what tools we're going to see. But we avoid that, right? Because then you're putting your chips down, and then those things don't come along. And so I'm always looking for that: what's the technical innovation, and what are the concrete implications? If you don't have that, you're mining emotions.
Ed Zitron
You're mining emotions, or just helping boost stock. That's what really bothers me, because I hate that they're scaring people. I really, really hate that they're doing marketing. Like — I run a PR firm, I've dealt with early stage startups for like 15 years. There isn't a single early stage startup that would get a percentage point of this bullshit. You email a reporter about, like, a Series A startup, they're like, all right, all right, motherfucker. How much revenue? Are they profitable yet? Why not? Why not? Explain to me right now. Anthropic's like, ah, we're going to burn $100 billion on training, I guess. What do you think? And they're like, yeah, I love it. I actually think that's the future.
Cal Newport
That's great.
Ed Zitron
I'm not — and to be clear, I think that this scrutiny should be there from the beginning to the end. I think that everyone should face this scrutiny. I'm not saying that I should get an easier time. I'm saying, actually, everyone should. Anthropic should face the most brutal scrutiny. And I guess that they don't because they want access. It's just very dull and annoying, and it's just helping already-rich guys like Wario Amodei, who should not have more money. Listen to him speak. He needs to face some stress. I think some stress would help him grow. But Cal, as we wrap up, I did actually have a technical thing, an idea that's been percolating. I think that this term, training, with models, is being misused — used in a way that is kind of vibe-reporting style — which is, they use training as a word that suggests that it will stop, that they will stop training these models. But correct me if I'm wrong: training is everything from building a new model to updating a model's current parameters, correct?
Cal Newport
Yeah. There's pre-training and post-training — that's the right way to think about it. So the pre-training is unsupervised. That's where you take all the text that's ever been written, and you will take a real piece of text written by a human, and you'll cut it off at an arbitrary point, and you'll tell the language model: you guess the word that came next. There's a real word that came next. This is real writing. Guess the word that came next. And then it guesses, and then you adjust the weights so it gets closer to the right answer. That's pre-training.
Ed Zitron
And when you adjust the weights, what are you doing?
Cal Newport
So you're running a training algorithm called backpropagation. This is a Geoff Hinton innovation, where you're going through and adjusting the weights all the way through the layers in such a way that the answer it gives for this particular test gets a little bit closer to the right answer, because you know the right answer.
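That weight adjustment can be shown with a toy one-layer "model" that holds just one weight per vocabulary word — a drastic simplification of a real network, but the update rule (nudge each weight by predicted probability minus the one-hot right answer) is the same cross-entropy gradient that next-word pre-training uses.

```python
# Toy illustration of the pre-training update: the model "guesses" the
# next word, and the gradient step nudges the weights so the correct
# word gets a bit more probability. The three-word vocabulary and the
# single layer are assumptions made to keep the example tiny.
import math

VOCAB = ["times", "bacon", "worst"]
weights = [0.0, 0.0, 0.0]  # one logit per word: a toy one-layer "model"

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_step(weights, correct_idx, lr=0.5):
    # Cross-entropy gradient: predicted probability minus one-hot target.
    probs = softmax(weights)
    return [w - lr * (p - (1.0 if i == correct_idx else 0.0))
            for i, (w, p) in enumerate(zip(weights, probs))]

p_before = softmax(weights)[VOCAB.index("times")]
for _ in range(20):                     # twenty small nudges
    weights = train_step(weights, correct_idx=VOCAB.index("times"))
p_after = softmax(weights)[VOCAB.index("times")]
print(round(p_before, 3), round(p_after, 3))  # probability of "times" rises
```

In a real model the same gradient is pushed backward through billions of weights across many layers — that's the backpropagation part — but each individual weight moves by exactly this kind of small correction.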
Ed Zitron
And that's pre training.
Cal Newport
That's pre-training, unsupervised. So you take Hamlet, or you take Dickens — it was the best of times, it was the worst of — and then you give that to the thing: what word should come next? And say it says bacon. We're going to adjust these weights now in a way that your answer gets a little bit closer towards times. Right? Okay, that's pre-training. Then you get post-training, where you've already trained — so you have this network, all the weights have been set through this massive multi-month, billion-dollar pre-training — and now you want to go through and tweak it, to avoid certain types of behaviors or to influence it towards certain types of behavior. So post-training is almost always based off of inputs, like prompts, and correct answers: this is the right way to respond to this question, right? So you have pairs of questions and answers. You give the prompt to the LLM, it spits out some answer, and now you're using — it's called reinforcement learning, it's a general technology — you're using techniques from reinforcement learning to sort of zap it, like you would zap a dog when you're training it. If it's a bad answer, you zap it, so you move those weights away from that answer. And if it's a good answer, you give it a treat. Okay, this is post-training. And so that's where you've moved past the word-guessing game, which is where all of the sort of general smarts in these models comes from, and now you're doing the sort of zapping and treat training around very specific things, right? So you'll go through and, like, ask it questions where the answers might be about building bombs or whatever. And every time it spits out an answer about a bomb, you give it a really bad negative shock, and you're like, definitely turn off those circuits. We don't want you to spit out answers about bombs.
Like, that's where all the guardrails come from. Or if you wanted it to get better at doing a particular math exam, you can give it lots of questions from that math exam, and then you have the right answer, and you can kind of zap it to move it towards what the answers look like on this math exam. So post-training is more focused. You have particular types of behavior you're trying to instill in this already-pre-trained massive network. That's mainly where the focus turned after GPT-4. So GPT-4 was like the extent of pre-training making it smarter. After GPT-4, trying to make those models even bigger and pre-train them longer didn't lead to much performance increase. So everything we got between 4 and the lead-up to 5 was post-training. And that's when they began focusing on metrics. Because if I have a particular metric, I can post-train a model to do well on that test. And so everything became about metrics and post-training: now we can do this thing better, or look at this thing we do better. And so that's kind of the game that's played now. We do lots of post-training, and that requires much more specific data, because you need right answers — pairs of prompts and correct answers — so there's only certain things we can do this with. But that's the game we've been playing since, like, 2024.
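The zap-and-treat idea can be sketched the same way as the pre-training toy: score a sampled answer with a reward, then nudge the weights so punished answers get less likely. This is a bare REINFORCE-style update on a made-up two-answer model, not anyone's actual post-training pipeline.

```python
# Toy sketch of post-training's "zap or treat": sample an answer,
# score it (+1 treat, -1 zap), and push the weights toward rewarded
# answers and away from punished ones. Vastly simplified.
import math

ANSWERS = ["helpful answer", "bomb instructions"]
weights = [0.0, 0.0]

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reward(answer: str) -> float:
    return -1.0 if "bomb" in answer else 1.0   # the "zap" vs the "treat"

def rl_step(weights, answer_idx, lr=0.5):
    # Gradient of the log-probability of the sampled answer, scaled by
    # its reward: negative reward pushes probability down (the zap).
    probs = softmax(weights)
    r = reward(ANSWERS[answer_idx])
    return [w + lr * r * ((1.0 if i == answer_idx else 0.0) - p)
            for i, (w, p) in enumerate(zip(weights, probs))]

for _ in range(10):            # repeatedly punish the bad answer...
    weights = rl_step(weights, answer_idx=1)
p_bad = softmax(weights)[1]    # ...and its probability collapses
print(round(p_bad, 3))
```

Note the asymmetry with pre-training: here you need a reward signal (a human rating, a test, a rule like the string match above) for each behavior you care about, which is why post-training demands much more specific data than the word-guessing game does.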
Ed Zitron
Do they do that with models that exist now? So this is basically updates, right?
Cal Newport
Yeah, they do it on a semi-regular basis, but usually there'll be a name change. Like GPT-5.2 is different than 5.1, which is different than 5.
Ed Zitron
But don't they update the current models?
Cal Newport
They don't re-pre-train them. That's too expensive.
Ed Zitron
But they. No, no, I'm not saying that. I'm saying do they post train the current models to make them better at stuff, to tweak things?
Cal Newport
Yeah.
Ed Zitron
So that's... this is a very long way to get to a point I'm kind of making, which is that one of the problems with vibe reporting on this is how training is framed. Inference is framed as opex that is permanent, that you cannot avoid, inference being the creation of the outputs. Training is framed as this R&D mysticism that's just out there. And they never say this, but you hear "training" and you think: oh, you train and then you perform, so training would end. But from what I understand, training is as common and necessary an expense as inference at this point.
Cal Newport
Yeah, it's the only way you improve or update things, right? If you had Microsoft 365 software, you're constantly sending updates and patches as you add new features or whatever. In the AI model world, it's more rounds of post-training. It's the only way to make any sort of improvement or to fix bugs. You're like, oh, here's something it's saying that we're really upset about. Okay, let's go in and do some zapping. Get out the zapper, give it a bunch of examples of it saying that bad thing, and zap it: don't do that. And now it's very unlikely to do that. So yeah, they're constantly doing that; otherwise you're at stasis.

Which is different from the way most people actually think of it. They think the model is somehow learning online. It's absolutely not. It's just absolutely not true that as you talk to it, it's learning and getting better. People say, but wait a second, it remembered something I talked about earlier, it must be getting smarter. The model doesn't know about you; that's just your local software. You don't realize this, but it's including huge chunks of stuff you've talked to it about before in the prompt that's going to the model. It's like: here's a bunch of stuff Ed has submitted to you in the past, okay, now here's his current question. The model hasn't changed. The model doesn't have memory. It doesn't adjust its memory in real time; it's all static. There is no dynamic memory involved in these models. They're fixed until you go out and post-train, and then you bring the new weights back into the data center, and that's the thing you inference against now.
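The "memory" Newport describes is the chat client, not the model: earlier turns get pasted back into every request. A minimal sketch of what the software around a frozen model does (format and names here are hypothetical; real clients also truncate, summarize, and add system prompts):

```python
def build_prompt(history, new_question):
    """Concatenate past turns plus the new question into one prompt.
    The model's weights never change; it just receives more context."""
    past = "\n".join(f"{who}: {text}" for who, text in history)
    return (past + "\n" if past else "") + f"User: {new_question}\nAssistant:"

history = [
    ("User", "My name is Ed."),
    ("Assistant", "Nice to meet you, Ed."),
]
prompt = build_prompt(history, "What's my name?")
print(prompt)
# The frozen model can "remember" Ed only because the earlier
# exchange is literally included in the prompt text it receives.
```

So what looks like learning between messages is just the client replaying the conversation into a static network.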
Ed Zitron
But that's the thing. The reason I bring it up as the kind of closing vibe story is because training is very clearly getting more expensive. They say they're profitable on inference, which, again, doesn't really make sense to me; I don't think they are. But even if they were, if training never stops, then who cares about profitable inference? It just means you will get more expensive forever. Inference is just... yeah, inference is pushing, training is poo. I'm not putting that one in an article.
Cal Newport
Yeah, okay. It's an interesting question, right? Because wasn't it Altman who was making those comments: well, if we just didn't have to pay for the training, this would be profitable.
Ed Zitron
If I didn't have all these expenses, I'd be so profitable.
Cal Newport
The one distinction that's maybe relevant there, not to be an apologist, but the one distinction that is relevant is pre-training versus post-training. Pre-training is insanely expensive, right? Because you're training something on all the words in the world and it takes months. You're running a data center with, nowadays, up to six digits' worth of GPUs running full time for months just to get that pre-training done, and you have to pay for all of that. That whole time you're not getting money, you're just paying for training.

Post-training is also expensive, though not as expensive as pre-training, because each post-training session uses a way, way, way smaller data set. It's like: all right, we generated 10,000 examples of people responding to questions in a racist way, and we'll use those 10,000 examples to reinforcement-learn it away from answering those types of questions in a racist way. That's not that big a data set compared to "we're going to train this on every word written that we have access to." So post-training is not as expensive as pre-training, unless you're...
Ed Zitron
Doing it all the time.
Cal Newport
Yeah, that's true.
Ed Zitron
That's the thing. If you're doing it all the time... it takes months of pre-training, but if you're constantly doing something like post-training for months, yeah, I mean, isn't post-training functionally the same thing?
Cal Newport
It's the same thing. I think this is a fair point. When you have a particular example that you're post-training on, one prompt with an answer, it's kind of like you're doing inference in reverse, right? You're back-propagating from one side of the network to the beginning, as opposed to going from the beginning to the end. Now, it's more expensive than that, because when you're just doing inference, the fundamental operation is basic multiplication; you're just multiplying numbers in a big table. Backpropagation, which you use to train it, is a bunch of derivatives, because you constantly need to calculate the derivative of these weights. I mean, it doesn't really matter, but you want the derivative because you want the gradient descent to move towards better and away from worse. And derivatives, my understanding is, are more expensive operations per weight that you're trying to change, because you're not just multiplying a number, you're having to calculate derivatives, and it becomes a little bit more complicated. So it's like inference in reverse, but also a little bit more expensive. If you have 10,000 sample question-responses you're going to use to post-train, it's kind of like 10,000 users sent prompts, and they're particularly expensive prompts, and you had to pay for them instead of the users paying for them. So yeah, it's good to think of it as inference in reverse.
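A one-weight toy version of "inference in reverse": the forward pass is just a multiply, while the backward pass runs a forward pass and then computes a derivative on top of it. All numbers are made up; a real network repeats this over billions of weights via the chain rule:

```python
def forward(w, x):
    # Inference: multiply input by weight (the "big table" multiply).
    return w * x

def backward(w, x, target):
    # Training: derivative of squared error (y - target)^2 w.r.t. w.
    y = forward(w, x)            # needs a forward pass first...
    return 2 * (y - target) * x  # ...plus the extra derivative math

w, x, target = 0.5, 2.0, 3.0
for _ in range(50):
    w -= 0.05 * backward(w, x, target)  # gradient descent step
print(forward(w, x))  # output converges toward the target, 3.0
```

Each training step costs at least as much as an inference step, plus the gradient computation on top, which is why a batch of training examples is like a batch of particularly expensive prompts the company pays for itself.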
Ed Zitron
It's also an ongoing cost. Everyone is leaning on this idea that this stuff will magically become profitable. I don't know if you've seen the cash flow diagrams of Anthropic and OpenAI, but there's mysterious math going on where, come 2028, 2029, they just become profitable.
Cal Newport
Wait, I was going to ask you about this. Last month there was this announcement that OpenAI had some massive increase in revenue.
Ed Zitron
Yeah, well, this is actually a great vibe-reporting thing, because that's annualized revenue. They said they hit 20 billion in annualized revenue, which would mean 1.67 billion in a month. Now, important details: we don't know how they're defining a month. We don't know if they mean 28 days, 29 days, 30 days. We don't know if they mean a calendar month, the month of November or December, or if they just mean any 30 days. We don't know if they're doing insane math, which happens very rarely, but this company feels like one that might do it. My gut instinct is they may be taking, here's a seven-day period, and turning it into a month. We don't know how they're doing this. And they also coupled this with saying that as compute grows, revenue grows. I don't know if you've seen this.
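The window question can be made concrete with back-of-envelope arithmetic. Only the $20 billion figure comes from the conversation; the windows below are hypothetical ways such a headline could be produced:

```python
annualized_claim = 20e9  # "20 billion in annualized revenue"

# If "a month" means one-twelfth of a year:
per_month = annualized_claim / 12           # ~1.67 billion

# Actual revenue needed inside the window to support the same claim:
per_30_days = annualized_claim / 365 * 30   # ~1.64 billion
per_week = annualized_claim / 52            # ~385 million: one good
                                            # week can be extrapolated
                                            # into a $20B "annualized"
                                            # headline

print(round(per_month / 1e9, 2), round(per_week / 1e6))
```

The shorter the measurement window, the less actual revenue the headline requires, which is why the undefined "month" matters.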
Cal Newport
No. It's like they have a formula. Yeah. Okay.
Ed Zitron
Yeah, it's an insane formula that does not map to any economics. It's the kind of thing that, if we had a functioning business and tech press, would just be scrutinized to the bone, would just be ripped apart: what the fuck does that mean? Because if this were true, if you simply add more gigawatts and you add more revenue...
Cal Newport
Yeah.
Ed Zitron
Then you would simply print more money. Like it would be a money printer.
Cal Newport
Versus, like: my movie theater did well, therefore if I build 100,000 movie theaters, we're going to make 100,000 times the amount of money. And it's like, well, wait, that theater was in Manhattan and it was really well run. Anyway. Yeah, well, okay, I assumed as much trying to understand that story. Okay, yeah, the annualized revenue stuff: I think I even used that term talking to someone, and I credited you, like, Zitron would say, look out for annualized revenue.
Ed Zitron
The funny thing with that as well is that it's more vibe reporting, because ARR is very standard in SaaS, in companies that sell on a per-seat basis. So a Salesforce would do it; even they are doing annualized now with their AI revenue. But there it would be: a software company sells 100 seats to a company, charges 15 bucks a seat per month, and bills that annually. And the actual cost of each user is fairly measurable, because they're doing CPU-based stuff. It gets expensive at scale because there's a lot of people, but it doesn't get multipli... multiplicatively, I can't say the word. It doesn't get much more expensive as you grow. With AI, it's actually the opposite: because of the way large language models work, your most excited customers are your most expensive.
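A toy comparison of the two cost models being contrasted here. All numbers are invented for illustration (seat price, serving costs, token counts); the point is only the shape: per-seat SaaS cost is roughly flat per user, while LLM cost tracks usage:

```python
seats, price_per_seat = 100, 15.0             # $15/seat/month
annual_revenue = seats * price_per_seat * 12  # billed annually

# Traditional SaaS: serving cost per seat is roughly flat.
saas_cost = seats * 2.0 * 12                  # $2/seat/month to serve

# LLM product: cost tracks tokens, so one enthusiastic power user
# can dominate the bill while paying the same per-seat price.
monthly_tokens = [20_000] * 99 + [5_000_000]  # 99 light users, 1 heavy
cost_per_token = 0.0001
llm_cost = sum(t * cost_per_token for t in monthly_tokens) * 12

print(annual_revenue, saas_cost, llm_cost)
```

With these made-up numbers the flat-cost business clears its serving bill easily, while the usage-priced workload blows past it, driven almost entirely by the single heaviest user.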
Cal Newport
Yeah, because they blow past whatever their cost is. Whatever their monthly revenue is, they blow past it.
Ed Zitron
So you can't even do per-seat revenue. It's just... all of these things, when I say them out loud, I feel like this should be more obvious. It's why I loved your video, because it was like: thank you, someone else.
Cal Newport
Well, here's the question, here's the dangerous question. I actually put out a video last week that was like, dangerous question: we're now on year three or four, counting from the start of 2023 or whatever, of people saying, oh my God, this is so cool, these massive disruptions that are going to change everything are imminent. And year after year we've said that, and I'm not yet seeing the massive disruptions. Not the stories of what might be disrupted, or the stories of what's different, but: how many years do we have to go without industries crumbling, or major new economic players that didn't exist before, or complete restructuring of huge companies around this technology? How many years do we have to go?

So I did a video where I found a Reddit thread where someone just asked this question, from earlier in the month. They were like: outside of the vibe-coding stuff, what are the big tools that have come out of this technology that are changing things? And I read through this whole thread, and it was interesting. People don't have much. These are really small case studies: "I used it to help gather and clean up data that I got from whatever." I was like, this is such a nerdy, specific use case. I went through that thread in a video, and this has kind of been my question. It is very cool technology, but how do we know where this is going to fall?

To me, the scale would go like this: blockchain software, then Oculus VR, then maybe the Internet, then electricity. So we have a scale of disruption. Blockchain software is something where the premise made no sense, it was never going to get off the ground, and there was going to be no impact on the world. And because my training is in CS and distributed systems theory, I was there in 2020 saying: guys, let me just tell you, this is nonsense. No, Web3 is not about to take off.
None of this makes sense. And that was true; that did nothing. Then you have Oculus VR. It really is cool, right? You put on these things and it's like, that is awesome, I love this technology. But it's having a hard time having any real major impact, because people aren't sure what to do with it.
Ed Zitron
Also, most people don't necessarily have a great experience initially, because it's extremely dependent on where you are, who you are, the size of your skull, that kind of thing.
Cal Newport
Yeah, for a limited group of people it's really cool, but it fails to break out. Then the Internet is really disruptive. It changed a lot of things. It's not so much that whole industries disappeared, but it changed the way a lot of industries actually function. And then electricity, you could say, just completely changed what the day-to-day existence of business was.
Ed Zitron
Like how existence unfolded.
Cal Newport
Exactly. Yeah. And so the big question everyone should be asking is: where is generative AI going to fall on this? And I would say right now, and this is what gets me yelled at, I'm not saying this is necessarily the prediction going forward, but right now I don't think it's gotten much farther past the Oculus part of that scale, where it's really cool. There are very cool things. ChatGPT is very cool. It's very cool that it can have that comprehension; no one thought it could do that. But we haven't yet figured out what...
Ed Zitron
Comprehension?
Cal Newport
We take it for granted, but for CS people, the ability to say, hey, give me text in the style of a poem that does whatever and includes a character from Star Wars, and it can give you text that does that: that comprehension, for a computer scientist, was like, oh, we didn't really know how to consistently do that on a technical level. That's very cool. But, and this is the surprising thing of this field, we're not really past the Oculus stage yet. Vibe coding is really cool. The comprehension is cool. Sora is weird, but it's cool that you can do that. But none of these have markets yet, right? There are not big markets in any of these yet.

And how far will it go, from Oculus to the Internet? To me, that is the number one question, the number two question, the number three question of all reporting on this. And almost no one's talking about that. It's just hype laundering: we'll take this hype, we'll extrapolate it, we'll react to that extrapolation. That's kind of what reporting is right now in AI. Whereas to me, this is the hugest question. If this ends up Oculus, retail investors are going to get screwed. If it ends up Internet, all right, that's a really interesting, significant story. If it ends up electricity, obviously that really matters, but I don't know anyone who actually thinks it's going to be that disruptive, not the current technology. That's the story to me. Not "let's extrapolate, hey, these things are creating a church," or "let's hype-launder, extrapolate that and react to our extrapolation." That's not really reporting so much as speculative fiction writing, I guess; I don't quite know what to call it. But this is the real question: where exactly are we now, and what are the possibilities of where this is going to go, positive and negative? We have enough talk on this.
Ed Zitron
With that I fully agree. Cal, it's been such a pleasure having you. Thank you for joining me.
Cal Newport
Always happy to talk shop. Always happy to hate with you, I guess. Hater season. I like it.
Ed Zitron
Hater season's the best. We will be back this week with either a monologue or an interview. I haven't decided, because I've got a wonderful Corey Quinn interview I just did, so I'm considering putting that in with the monologue. You'll find out on Friday. Anyway, this has been Better Offline. I'm Ed Zitron. Subscribe to the premium, download a T-shirt, whatever you desire. Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Mattosowski. You can check out more of his music and audio projects at mattosowski.com, M-A-T-T-O-S-O-W-S-K-I dot com.
Ed Zitron
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/betteroffline to check out our Reddit. Thank you so much for listening.
Sponsor and Promo Voices (including Cindy Crawford and others)
Better Offline is a production of Cool Zone Media.
Sponsor and Promo Voices (including Cindy Crawford and others)
For more from Cool Zone Media, visit our website coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Podcast: Better Offline (Cool Zone Media & iHeartPodcasts)
Air Date: February 11, 2026
Host: Ed Zitron
Guest: Cal Newport (Computer Science Professor, tech writer for The New Yorker)
This episode of Better Offline dives deep into how artificial intelligence (AI) is covered in the media—with a focus on the hype, misinformation, and manipulation that define much of today’s tech journalism. Host Ed Zitron is joined by Cal Newport, a computer science professor and respected tech commentator, to dissect the patterns and pitfalls in AI reporting. The pair “embrace hater season,” critiquing the reporting styles, economic narratives, and industry exaggerations that often distort public understanding of AI’s actual capabilities and implications.
[04:33 - 06:46]
Cal Newport identifies the three most common (and destructive) mistakes in current AI journalism:
Vibe Reporting:
Reporting designed to create a “vibe” or emotional reaction, often by juxtaposing unrelated stats or anecdotes to create misleading inferences. For example, layoffs at tech firms are frequently paired with AI commentary so that readers are left believing workers were replaced by AI—even when this isn’t true.
“You omit certain facts and put loosely related quotes next to each other in a way that creates a general vibe that you want to be true, but it’s not quite true.”
— Cal Newport [04:57]
Mining Digital Ick:
Focusing on fringe or unsettling examples from the AI world to provoke discomfort without offering technical detail or concrete implications.
“You just tell a story that’s unsettling without talking about any of the technical details... You’re just telling a story to unsettle.”
— Cal Newport [05:40]
Faux-Astonishment:
A more “YouTube phenomenon,” characterized by reporting every mundane AI development as earth-shattering and terrifying.
“Every single thing that happens in AI is insane, amazing, terrifying. Everything is going to change.”
— Cal Newport [06:23]
Newport argues these styles should be "automatic ripcords" for any reader.
[06:46 - 12:54]
“If you look at Claude Cowork... and you say this is going to compete with Salesforce, you just don’t know what you’re talking about.”
— Ed Zitron [07:14]
[12:55 - 17:07]
“You can vibe report it because, technically speaking, Amazon also is investing money in AI products. So... you can say with a semi-straight face they fired people because of AI, but clearly, the impression you’re giving... had nothing to do with it.”
— Cal Newport [14:13]
[19:14 - 22:57]
“You just need good programmers’ eyes on it... I don’t think [serious programmers] would touch Claude Code with a ten-foot pole.”
— Cal Newport [26:39]
[29:54 - 33:18]
“You can't just launder our hype into 'this is what's happening.' It's like reporting on a war where you have no one embedded... you're just responding to the press conferences the generals are holding.”
— Cal Newport [29:19]
[34:10 - 36:48]
[36:49 - 44:52]
[55:43 - 66:13]
“In the AI model world, yeah, it’s more rounds of post-training. It’s the only way to make any sort of improvement or fixing bugs.”
— Cal Newport [61:24]
[69:28 - 74:29]
“How many years do we have to go without industries crumbling... or complete restructuring of huge companies around this technology? ...It is very cool technology, but how do we know where this is going to fall?”
— Cal Newport [69:58]
[74:29 - End]
Newport prescribes two simple tests for all AI journalism:
“...if you don’t have that, you’re mining emotions.”
— Cal Newport [54:57]
“You can’t just launder our hype into ‘this is what’s happening.’” [29:19]
“All of these things, when I say them out loud, I feel like this should be more obvious.” [69:16]
“Right now I don’t think it’s got much farther past the Oculus part of that scale—where it’s really cool... but we haven’t yet figured out what.” [72:18]
“There isn’t a single early stage startup that would get a percentage point of this bullshit... Anthropic’s like, ‘Ah, we’re going to burn 100 billion on training, I guess. What do you think?’ And they’re like, ‘Yeah, I love it. Future!’” [54:57]
This episode is an essential, no-nonsense guide to how current media narratives about AI often mislead the public. Cal Newport and Ed Zitron break down the mechanisms of hype and misinformation—arguing for more technical rigor, more honest reporting, and less reliance on emotional manipulation. Anyone trying to understand what’s really happening with AI in industry, media, or even their portfolio will find concrete tools for analysis and a dose of much-needed skepticism.