Transcript
Commercial Announcer (0:00)
This is an iHeart podcast. Guaranteed Human. Did you know you can opt out of winter with VRBO? Save up to $1,500 by booking a month-long stay with thousands of sunny homes. Why subject yourself to the cold? Just filter your search by monthly stays and save up to $1,500. Book now at vrbo.com.
Commercial Announcer (0:20)
When the holidays start to feel a bit repetitive, reach for a Sprite Winter Spice Cranberry and put your twist on tradition. It's a refreshing way to shake things up this sippin' season, and only for a limited time. Obey your thirst.
Sophie Cunningham (0:34)
This is Sophie Cunningham from Show Me Something. Do you know the symptoms of moderate to severe obstructive sleep apnea, or OSA, in adults with obesity? They may be happening to you without you knowing. If anyone has ever said you snore loudly, or if you spend your days fighting off excessive tiredness, irritability and concentration issues, it may be due to OSA. OSA is a serious condition where your airway partially or completely collapses during sleep, which may cause breathing interruptions and oxygen deprivation. Learn more at dontsleeponOSA.com. This information is provided by Lilly, a medicine company.
US Ski and Snowboard Insider Announcer (1:13)
The world's best ski and snowboard athletes are chasing medals. Now you can follow their every move. Join Insider, the official US Ski and Snowboard fan loyalty program, and get premium viewing at World Cup ski events, exclusive athlete meetups, discounts from brands you love, and a custom welcome gift mailed direct to your doorstep. This winter, show your support as they race for the podium. Head to Insider at usskiandsnowboard.org and join today.
Ed Zitron (1:45)
Cool Zone Media. Hello and welcome to Better Offline. I am, of course, your host, Ed Zitron.
Ed Zitron (2:03)
And after a few three part episodes, I had an idea. What if I did a four parter? In all seriousness, I know that this is a little bit long, but the topic we're about to explore demands quite a bit of depth, and it isn't something I could really do justice to in a one parter or two parter, or I guess even three parter. But let's get into it. Over the last few months we've felt the vibes shift downward in an aggressive way, with both Mark Zuckerberg and Clammy Sam Altman saying that we're in a bubble. In the latter case, said warnings of a bubble are always couched in rank hypocrisy, as it's always implied that whoever it is and the companies they represent aren't part of that bubble, but rather it's other people and other companies making unfortunate decisions. The thing is, there's really no escape for either of these guys. Not for Zuck, and definitely not for Sam Altman. And over the next four episodes, I'm going to make a comprehensive case for the fact that we're in a bubble and condense everything I've been talking about into one series. And I know I've been all over the place, and I get a lot of people saying, oh, well, where did you talk about this and where'd you talk about that? And that's kind of fair when you put out as much as I do. But I'm going to break this down in four episodes. I'm going to give you a comprehensive argument for a bubble, and against generative AI in general. But in this episode, I think it's good to start from the beginning and work our way forward, to track the thread from the origins of ChatGPT to the billions burned building data centers all over the world, and the weak business justifications for burning nearly a trillion dollars to keep this hollow industry alive.
Now, in 2022, a company called OpenAI surprised the world with a website called ChatGPT that could generate text that sort of sounded like a person, using a technology called large language models, which can also be used to generate images, video and computer code, or at least would eventually. Large language models require entire clusters of servers connected with high speed networking, each containing things called GPUs, graphics processing units. These are different to the GPUs in your Xbox or laptop or gaming PC. They cost much, much more, and they're good at the two processes behind any LLM: inference, the creation of an output, and training, feeding masses of training data to the models, or feeding them information about what a good output might look like, so they can later identify a thing or replicate it. These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem. Because they're probabilistic, meaning that they're just guessing at whatever the right output might be, these models can't actually be relied upon to do exactly the same thing every single time. So if you generated a picture of a person that you wanted to, for example, use in a storybook, every time you create a new page using the same prompt to describe the protagonist, the person would look different, and that difference could be minor, something that a reader could shrug off, or it could make the character look like a completely different person. Now, none of this, by the way, is me validating or saying that any of this stuff is good. I'm just describing it. Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess at the answer, not because it knew the answer, but rather because it was guessing at the right word to add in a sentence based on previous training data.
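To make the "probabilistic" point concrete, here's a minimal sketch of weighted next-token sampling, the basic mechanism behind why the same prompt can produce different outputs. The distribution below is entirely made up for illustration; a real model computes probabilities over tens of thousands of tokens.

```python
import random

# Toy next-token distribution for an imagined prompt like
# "the protagonist's hair is". The numbers are invented for
# illustration, not taken from any real model.
NEXT_TOKEN_PROBS = {
    "brown": 0.45,
    "blonde": 0.30,
    "red": 0.15,
    "green": 0.10,  # unlikely, but never impossible
}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Run the "same prompt" ten times. The most likely token usually
# wins, but nothing guarantees it, which is why a storybook
# character can drift between generations.
rng = random.Random()
samples = [sample_next_token(NEXT_TOKEN_PROBS, rng) for _ in range(10)]
print(samples)
```

The key design point is that the output is drawn from a distribution rather than looked up: even a very confident model only makes the wrong token less likely, never impossible.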
As a result, these models would frequently make mistakes, something which we later referred to as hallucinations. And that's not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that the legality of using materials straight from books and the web without the owner's permission was and remains legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses. These problems were overshadowed by something flashy and new, and something that investors and the tech media believed would eventually automate the jobs that have proven most resistant to automation: knowledge work and the creative economy. The newness and hype and these expectations sent the market into a frenzy, with every hyperscaler immediately creating the most aggressive market for one supplier I've ever seen. Nvidia has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the American stock market and trading at over $170 as of writing this sentence, only a few years after being worth $19.52 a share. Now, there's a stock split that happened there, but it works out that way. Now, while I've talked about some of the propelling factors behind the AI wave, automation and novelty, that's not really the complete picture. A huge reason why everybody decided to do AI was because the software industry's growth was slowing and SaaS (software as a service) company valuations were stalling or dropping, resulting in the terrifying prospect of companies having to under promise and over deliver and be efficient. You know, gross things like running sustainable businesses, things that normal companies, those whose valuations aren't contingent on ever increasing, ever constant growth, don't have to worry about because they're normal companies.
Suddenly there was a new promise of new technology, large language models that were getting exponentially more powerful. Which was mostly a lie, but hard to disprove, because powerful can mean basically anything, and the definition of powerful depended entirely on whoever you asked at any given time and what that person's motivations were. The media also immediately started tripping over its own feet, mistakenly claiming OpenAI's GPT-4 model tricked a TaskRabbit into solving a CAPTCHA. It didn't. This never happened. Or saying, and I quote, people who don't know how to code already used bots to produce full-fledged games. And if you were wondering what the New York Times was referring to when they said full-fledged there, it meant Pong and a cobbled together rolling demo of SkyRoads, a game from 1993, likely because a bunch of that training data was fed into the models. Now, the media and investors helped peddle the narrative that AI was always getting better, could basically do anything, and that any problems you saw today would inevitably be solved in a few short months, or years, or at some point, I guess. Not really sure when that point is, but damn do they think it's coming. And LLMs were touted as a kind of digital panacea, and the companies building them offered traditional software companies the chance to plug these models into their software using an API, thus allowing them to ride the same generative AI wave that every other company was riding. The model companies similarly started going after individual and business customers, offering software and subscriptions that promised the world, though this mostly boiled down to chatbots that could generate stuff, and then doubled down with the promise of agents, a marketing term that's meant to make you think autonomous digital worker, but really means broken digital chatbot of some sort, or just broken digital product. It really depends how you're feeling that day.
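For a sense of what "plug these models into their software using an API" means in practice, here's a sketch of the JSON payload a SaaS product typically sends to a chat-completions-style endpoint. The URL and model name below are hypothetical stand-ins, not any specific vendor's real API.

```python
import json

# Hypothetical endpoint: real vendors expose something shaped like this,
# but the URL and model name here are invented for illustration.
API_URL = "https://api.example-llm-vendor.com/v1/chat/completions"

def build_chat_request(user_message, model="example-model-1"):
    """Build the JSON body for a chat-completions-style request."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a helpful assistant inside a SaaS app."},
            {"role": "user", "content": user_message},
        ],
        # temperature > 0 means sampled, i.e. probabilistic, output
        "temperature": 0.7,
    }
    return json.dumps(payload)

body = build_chat_request("Summarize this support ticket for me.")
print(body)
```

The point is how thin this integration layer is: "adding AI" to an existing product is often little more than assembling this payload, POSTing it, and displaying whatever text comes back.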
Throughout this era, investors and the media spoke with a sense of inevitability that they never really backed up with data. It was an era based on confidently asserted vibes. Everything was always getting better and more powerful, even though there was never much proof that this was truly disruptive technology, other than in its ability to disrupt apps you were using, with AI making them worse. For example, suggesting questions on every Facebook post that you could ask Meta AI, but which Meta AI couldn't answer. And I mean on memes, on just random posts. It's really not useful in any way, shape or form. AI became omnipresent, and it eventually grew to mean everything and nothing. OpenAI would see its every move fawned over like a gifted child's, its CEO Sam Altman called the Oppenheimer of our age, even if it wasn't really obvious why everybody was impressed. GPT-4 felt like something a bit different, but was it actually meaningful? The thing is, artificial intelligence is built and sold on not just faith, but a series of myths that AI boosters expect us to believe with the same certainty that we treat things like gravity or the boiling point of water. Can large language models actually replace coders? Not really, no. And I'll get into why later in this series.
