Ed Zitron (14:12)
So an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf, who actually work with LLMs on a daily basis, don't see the need to repeatedly harass everybody or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I've found someone to actually building things with LLMs, the less likely they are to emphatically argue that I'm missing out on something by not doing so myself. No, no, no. The AI booster is symbolically aligned with generative AI. They're fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat and crap they can find, their Sundays living and dying by the successes of the team. Except even fans of the Dallas Cowboys have a tighter grasp on reality. But not Micah Parsons. Anyway, Kevin Roose and Casey Newton are two of the most notable boosters, and as I'll get to later in this series, neither of them has a consistent or comprehensive knowledge of AI. Despite being at the New York Times, though, Casey Newton is a contractor. He is a contractor just for a podcast, which I can't insult due to my own contractual relationships. Nevertheless, they will insist that everybody is using AI for everything, which is the title of an article they put out, a statement that even a booster should realize is incorrect based on the actual abilities of the models. But that's because it isn't about what's happening. It's not about what's actually happening. It's about allegiance. AI symbolizes something to the AI booster: a way that they're better than other people. 
That makes them superior because they're, unlike cynics and skeptics, able to see the incredible potential in the future of AI, but also how great it is today, though they never seem to be able to explain why it is, other than "it replaced search for me" and "I use it to draw connections between articles I write," which is something I do for free, without AI, with my fucking brain. Boosterism is a kind of religion, interested in finding symbolic proof that things are getting better in some indeterminate way, and that anyone who chooses to believe otherwise is ignorant, or stupid, or... I actually don't know what it is that they're meant to be missing. Let me give you an example: Thomas Ptacek. He wrote a piece called My AI Skeptic Friends Are All Nuts, and it was catnip for boosters. A software engineer using technical terms like "interact with Git" and "MCP," vague charts, and of course an extremely vague statement that says hallucinations aren't a problem. And I quote now: "I'm sure there are still environments where hallucination matters, but hallucination is the first thing developers bring up when somebody suggests using LLMs, despite it being more or less a solved problem." Is it? Anyway, my favorite part, though. Let me quote this: "A lot of LLM skepticism probably isn't really about LLMs. It's projection. People say LLMs can't code when what they really mean is LLMs can't write Rust," which, by the way, is a coding language. "Fair enough, but people select languages in part based on how well LLMs work with them, so Rust people should get on that." What? Nobody projects more than an AI booster. They thrive on the sense that they're oppressed and villainized after years of seemingly every goddamn outlet on earth claiming they're right, regardless of whether there's any proof. 
They sneer and jeer and cry constantly at the people not showing adequate amounts of awe when an AI lab says "we did something in private, we can't share it with you, but it's so cool," and constantly act as if they're victims as they spread outright misinformation, either through getting things wrong or never really caring enough to check if they're right. Also, none of the booster arguments actually survive a thorough response, as Nick Suresh proved with his hilarious and brutal takedown of Ptacek. Suresh is a great guy. He's been on the show before, I've linked to his piece in the show notes, and I'm going to bring him back on. He's written for my newsletter as well. Absolute legend. Now, there are, I believe, some people who truly do love using LLMs, yet they are not the ones defending them. But Ptacek's piece drips with condescension, to the point that I feel like he's trying to convince himself how good LLMs are, because boosters are eternal victims. He wrote them a piece that they could send around to skeptics saying "here, see?" without being able to explain why it was such a brutal takedown, mostly because they can't express why other than "well, this guy gets it." One cannot be a big smart genius that understands the glory and power of AI while also acting like a scared little puppy every time somebody tells them it sucks. You know what? This is a great place to start. This is a great place to get into how to deal with AI boosters, because AI boosters love being victims, and you should not play into it. When you speak to an AI booster, you may get the instinct to shake them vigorously, or respond to their post by saying to do something with your something, or that they're stupid. I understand the temptation, but you want to keep a level head here. Keep your head on a swivel. They thrive on this victimization. I'm sorry. 
If you're an AI booster and this makes you feel bad, please reflect on your work and how many times you've referred to somebody who didn't understand AI in a manner that suggested they were ignorant, or tried to gaslight them by saying AI was powerful while providing no actionable proof or any way of pointing at it being powerful. You cannot and should not allow these people to act as if they're being victimized or othered. Throughout this series, I'm going to use a very specific term: the booster quip. This refers to the things that they say and how often you hear them. These are lines that you'll hear them say again and again and again and again. They're common arguments, common cliches that demand a response. And let's start with our first booster quip: "You're just being a hater for attention. Contrarians just do it for clicks and headlines." First and foremost, there are boosters at pretty much every major think tank, government agency and media outlet out there. It's extremely lucrative being an AI booster. You're showered with panel invites and access to executives, and you're able to get headlines by saying how scared you are of the computer. And it's really easy to do. Being a booster is easy. And I must be clear: when I say booster, it doesn't always have to mean overt. It could just mean the things you choose not to do. It could mean the things you choose not to criticize them for, or the things they say that you just write down. You can be a booster, by the way, if you just choose not to criticize them. But really, we're talking about the worst of them, the real assholes. Being a critic requires you to constantly have to explain yourself in a way that boosters never have to. Now, if a booster says this to you, if they say you're just being a hater for attention, you're just doing this for clicks. 
Ask them to explain, first of all, what they mean by clicks or attention and how they think you are monetizing it, and how this differs in its success from, say, anybody who interviews and quotes Sam Altman, or Dario Amodei or whomever from Anthropic, on Hard Fork. Ask them what the difference is. And ask them why they believe your intentions as a critic are somehow malevolent, as opposed to those literally reporting what the rich and powerful want them to. There's no answer here, because this is not a coherent point of view. Boosters are more successful, get more perks, and are in general treated better than any critic at pretty much every major outlet. Fundamentally, these people exist in the land of the vague, and they don't like it when you force them to get specific. They will drag you toward what's just on the horizon, but never quite define what the thing that dazzles you so much will be, or when it will arrive. Really, their argument comes down to one thought: you must get on board now, because at some point it will be so good that you'll feel stupid for not believing something that kind of sucks would one day be really good. And if this line sounds familiar, it's because you've heard it a million times before, most notably with cryptocurrency, NFTs, the metaverse, Clubhouse, tons of movements. Also, they will try to make you define what would impress you, which is not your job, in the same way that finding a use case for them isn't your job. In fact, you're the customer, you're the consumer, you are the person AI needs to prove itself to, not the other way around. But let's go to another booster quip, when they go: "you just don't get it." Here's a great place to start. Say, "that's a really weird thing to say." It is peculiar to suggest that somebody who doesn't get how to use a product is weird, and that we, as the consumer, as the customer, must justify our own purchases. No, no, no, no. If I don't get it, it's the booster's job to tell me why. 
Make them justify their attitude. Just like with any product, we buy software to serve a need. This is meant to be artificial intelligence. Why is it so fucking stupid that I have to work out why it's useful? The answer is, of course, that it has no intellect. It is not intelligent. And large language models are being pushed up a mountain by a cadre of people who are either easily impressed or invested, either emotionally or financially, in their success, due to the company they keep or their intentions for the world. And if a booster suggests you just don't get it, ask them to explain the following: What am I missing? What specifically is it that is so life-changing about this product, based on your own experience, not on any anecdotes from other people? Because they will say, "well, I heard of a guy who wrote 10 billion lines of code, and then the baby looked at me and I cried." They don't have real things themselves. So cut off the exits, board up the doors, and then also ask them what use cases are truly transformative about AI. Don't let them say "well, I heard in an industry..." Actually make them prove themselves. Their use cases will likely be that AI has replaced search for them, that they use it for brainstorming or journaling, proofreading an article, or looking through a big pile of their notes or some other corpus of information and summarizing it or pulling out insights. Who gives a shit? Sorry, not to be too acerbic, but really, who fucking cares? That shit's so boring. Hundreds of billions of dollars of wasted investment, and this is what we've got three years in. Fucking Humpty Dumpty could never have it this good. Anyway, our next booster quip is one of my faves: "AI is powerful and getting exponentially more powerful." Now, if a booster ever refers to AI being powerful and getting more powerful, ask them the following: what does powerful mean? In the event that they mention benchmarks, ask them how those benchmarks apply to real-world scenarios. 
If they bring up SWE-bench, the standard benchmark for coding, ask them if they can code. And if they cannot, ask them for another example. I mean, they will tell you that they've spoken with coders. I've talked with a lot of coders. I have a great episode with Colton Voege coming up, another software engineer talking about LLMs. It's so funny when you actually lay this stuff out, how weak their arguments are. But in the event they mention reasoning, ask them to define it. Once they've defined reasoning, ask them to explain in plain English what reasoning allows you to do on a use-case level, not just how it works. They will likely bring up the gold medal performance that OpenAI's model got on the Math Olympiad. Ask them why OpenAI hasn't released that model. Then ask them what the actual practical use case is that this success has opened up. They will say it's an innovation, you've got to be patient. And then pepper spray them. Nope, don't. Don't pepper spray anyone. Anyway, you should also then ask them what use cases have arrived as a result of models becoming more powerful. If they say vague things like "oh, in coding" and "oh, in medicine," ask them to get specific, and then ask them what new products have arrived as a result. If they say coding, they will likely add that LLMs are replacing coders. Ask them where that has happened, and ask them to show you proof, links, and it will not be sufficient to say that a CEO mentioned that they did something with AI and efficiency. Get numbers. They just won't. They won't do this. They will turn into a pillar of salt, and we've got two more fucking parts of this. I mean, you're gonna have a ball with this. Look, the core of the AI booster's argument is that they need to make you feel bad. They like to gaslight, and you should, you need to, refuse to let them. You need to push back heavily. If there's ever a point where you feel like they are trying to make you feel stupid, ask them why they're doing so. 
And to be clear, anyone with a compelling argument doesn't have to make you feel bad to convince you. The iPhone didn't need a spurious hype cycle of people saying "you must look at this, it is so important" when it didn't really work. It worked immediately. Now, I said in my newsletter that it didn't need a fucking marketing campaign. Yes, there were marketing dollars behind the iPhone. Eric Newcomer, come at me with better work, mate. Respond to the rest of this. But we all kind of got it with the iPhone. In fact, the moment Steve Jobs announced it, piece of shit that he was, he said, here's a phone, here's an iPod, here's email, you can do all of these on one device. And everyone went, oh yeah, that is good. It's really obvious that that was good. It was impressive because it was impressive. And boosters will suggest you are intentional in not liking AI, that you're a hater or a cynic or a Luddite. They'll suggest that you're ignorant for not being amazed by ChatGPT. Let me tell you something: you don't have to be impressed by anything by default. And any product, especially software, designed to make you feel stupid for not getting it is poorly designed. ChatGPT is the ultimate form of Silicon Valley sociopathy: you must do the work to find the use cases, and thank them for giving you the chance to do so. AI is not even good, reliable software. It resembles the death of the art of technology: inconsistent and unreliable by definition, inefficient by design, financially ruinous, and it adds to the cognitive load of the user by requiring them to be ever vigilant of the shit-ass outputs that can come out of it. So here's a really easy way to deal with this: if a booster ever suggests you are stupid or ignorant, ask them why it's necessary to demean you to get their point across. Even if you are unable to argue on a technical level, make them explain why the software itself can't convince you. And be vigilant. 
Boosters seldom live in reality and will do everything they can to pull you off course. And I should add that there is a fair criticism here: I do insult people. I do demean them. I call them babies and I do funny voices. And I do that because I don't respect them. I am here to tell you how I feel, and I will convince you through the large amount of work I do and the research I have. If you disagree with me, you disagree with me. It's fine. And yeah, these people do sound kind of silly. I'll get to a Casey Newton thing in a couple of episodes. I think that, really, it's impossible to call him otherwise.