B (38:26)
Yes. Then I want to return to one of my thoughts about AI. Reuters wrote on August 29: Meta has appropriated the names and likenesses of celebrities including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez to create dozens of flirty social media chatbots without their permission, Reuters has found. While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift parody bots. Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image, writing beneath the picture, Pretty cute, huh? All of the virtual celebrities have been shared on Meta's Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots' behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meetups. Some of the AI-generated celebrity content was particularly risque. Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread. Meta spokesman Andy Stone told Reuters that Meta's AI tools should not have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta's production of images of female celebrities wearing lingerie on failures of the company's enforcement of its own policies, which prohibit such content. Anyway, the article goes on at much greater length, but everyone gets the idea.

Over the course of the past year, I've invested some time studying the operation of large language model generative conversational AI, and I've been using these models continuously while watching and marveling at their output, which to me remains astonishing. That Reuters piece brings me back to a feeling I've expressed here before, which is that the nature of the way AI generates its output means that it is inherently uncontrollable, which explains why the AI industry is having so much difficulty controlling it. The information that is acquired, stored, and modeled within a large language model is stored almost holographically, with no single fact residing in any one place, so it's not possible to pluck one out from the whole.

In struggling to find a useful analogy, the classic photographic hologram came to mind. What I recall about a hologram is that it's not possible to readily edit its image contents, because every part of the image is stored everywhere else. Each small region of a hologram contains information about the entire scene, though with proportionally less detail. So if, for example, we were to cut a hologram in half, each half would still depict the entire scene, albeit with lower resolution and with a reduced field of view, like looking through only part of a window. This is very much the way large language models store their information.
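If it helps to make that graceful-degradation idea concrete, here's a toy sketch. It spreads a small "fact" vector across a thousand components of a noisy distributed code, throws away portions of the code, and recovers the fact from whatever survives. This is only an analogy, built from a random projection and least squares, and not how a transformer actually stores anything:

```python
# Toy "hologram" sketch: a small fact vector is spread across many
# components of a noisy distributed code. Deleting a chunk of the code
# degrades the recovered fact gradually rather than erasing it, because
# no single component holds the fact by itself.
import numpy as np

rng = np.random.default_rng(0)

fact = rng.normal(size=16)                  # the "fact" we want stored
A = rng.normal(size=(1024, 16))             # spread it across 1024 components
code = A @ fact + rng.normal(size=1024)     # lossy distributed representation

def recovery_error(keep_fraction: float) -> float:
    """Keep a random subset of components (cut the hologram) and
    estimate the original fact from only what survives."""
    keep = rng.random(1024) < keep_fraction
    estimate, *_ = np.linalg.lstsq(A[keep], code[keep], rcond=None)
    return float(np.linalg.norm(estimate - fact) / np.linalg.norm(fact))

for fraction in (1.0, 0.5, 0.25, 0.1):
    print(f"kept {fraction:4.0%} of the code -> relative error {recovery_error(fraction):.3f}")
```

The point is simply that every component carries a little bit of everything, which is exactly why surgically removing one piece of information from such a representation is so hard.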
The other inherent problem with what we want when we say that we want to control an AI is that the boundaries between what we would consider acceptable and unacceptable are beyond blurry and fuzzy. We may be able to make a go/no-go determination, but how do we describe it? U.S. Supreme Court Justice Potter Stewart was unable to define what was and was not pornographic, and was finally reduced to saying, I may not be able to define it, but I know it when I see it. So on the one hand, it's unclear how we even describe to an AI what it is and is not allowed to produce. And even if we could, it's not at all clear to me how we would edit a hologram, which I think is a very good analogy for the way information is stored inside a large language model.

Having taken some time to look at the way these models are trained, I just think, Leo, that this is an enormously hard problem. I've talked before about maybe having another AI look at the output of the main AI before that output is made public, but even that seems so difficult to me. It's very much like telling the AI, okay, don't say anything that's wrong. Well, it's been trained on a whole bunch of wrong stuff, so it doesn't know what's right or wrong. It doesn't really know anything; it's just producing content based on the way it's been trained. So I agree with you: what Reuters uncovered is, frankly, not surprising, but it is very disturbing.
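To make that second-model idea a little more concrete, here is a minimal sketch of what such a gate might look like. The generate and review functions are hypothetical stand-ins, not any particular vendor's API, and a real reviewer model would of course be fallible in exactly the ways just described:

```python
# Minimal sketch of gating one model's output behind a second reviewer.
# generate() and review() are hypothetical placeholders for whatever model
# endpoints would actually be called; the point is the pipeline shape.

def generate(prompt: str) -> str:
    # Stand-in for the primary conversational model.
    return f"(draft response to: {prompt})"

def review(draft: str) -> bool:
    # Stand-in for a second model that classifies the draft against policy.
    # Here it's just a crude keyword check so the sketch runs; a real
    # reviewer would return a policy verdict, and could itself be wrong.
    disallowed = ("intimate image", "shirtless photo of a minor")
    return not any(term in draft.lower() for term in disallowed)

def answer(prompt: str) -> str:
    draft = generate(prompt)
    if review(draft):
        return draft
    # Block (or regenerate) rather than publish a draft that fails review.
    return "Sorry, I can't produce that."

print(answer("Tell me about photographic holograms"))
```

Even with that structure, the reviewer is just another model trained on the same kind of data, so it inherits the same fuzziness about where the line sits.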
And speaking of AI, last Thursday the Vivaldi browser folks took an interesting stand on the issue of AI permeating the web browsing space. Their post was titled Vivaldi Takes a Stand: Keep Browsing Human, and it was followed by a teaser intro which read: Browsing should push you to explore, chase ideas and make your own decisions. It should light up your brain. Vivaldi is taking a stand. We choose humans over hype, and we will not turn the joy of exploring into inactive spectatorship.

Whoa. No AI for you. So here's what they wrote.

Just like society, the web moves forward when people think, compare, and discover for themselves. Vivaldi believes the act of browsing is an active one. It is about seeking, questioning, and making up your own mind. Across the industry, artificial assistants are being embedded directly into browsers and pitched as a quicker path to answers. Google is bringing Gemini into Chrome to summarize pages and, in future, work across tabs and navigate sites on a user's behalf. Microsoft is promoting Edge as an AI browser, including new modes that scan what's on screen and anticipate user actions. These moves are reshaping the address bar into an assistant prompt, turning the joy of exploring into inactive spectatorship.

This shift has major consequences for the web as we know it. Independent research shows users are less likely to click through to original sources when an AI summary is present, which means fewer visits for publishers, creators, and communities that keep the web vibrant. A recent study by Pew Research found users clicked traditional results roughly half as often when AI summaries appeared. Publishers warn of dramatic traffic losses when AI overviews sit above links.

I'll just interrupt to say that, as far as we know, that's all true, and we've been exploring the various consequences of it for the past several weeks. Vivaldi continues.

The stakes are high. New AI-native browsers and agent platforms are arriving, while regulators debate remedies that could reshape how people reach information online. The next phase of the browser wars is not about tab speed. It's about who intermediates knowledge, who benefits from attention, who controls the pathway to information, and who gets to monetize you. Today, as other browsers race to build AI that controls how you experience the web, we are making a clear promise. We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship. Without exploration, the web becomes far less interesting, our curiosity loses oxygen, and the diversity of the web dies.

The field of machine learning in general remains an exciting one and may lead to features that are actually useful. But right now there is enough misinformation going around to risk adding more to the pile. We will not use an LLM to add a chatbot, a summarization solution, or a suggestion engine to fill up forms for you until more rigorous ways to do these things are available. Vivaldi is the haven for people who still want to explore. We will continue building a browser for curious minds, power users, researchers, and anyone who values autonomy. If AI contributes to that goal without stealing intellectual property, compromising privacy or the open Web, we will use it. If it turns people into passive consumers, we will not. We will stay true to our identity, giving users control and enabling people to use the browser in combination with whatever tools they wish to use. Our focus is on building a powerful personal and private browser for you to explore the web on your own terms. We will not turn exploration into passive consumption. We're fighting for a better web.

Okay, so I guess there will be a web browser for anyone who hates AI. I certainly am not an AI hater. I think it's a marvelous and amazing emergent phenomenon, and I make great use of it as a quick reference source while I'm coding. I actually feel a bit guilty now asking it dumb things that I could easily go look up for myself, and would have had to a couple of years ago. But if OpenAI wants to lose money allowing me to ask it why the sky is blue, I'll happily pay them 20 bucks a month for the privilege. Today I'm still using Google, and I check out its AI Overview to see whether that's all I need, while never forgetting that it could be wrong.

You know, the other day ChatGPT produced a snippet of Windows code for me, and it just made up a Windows message that never existed. I immediately knew it was wrong, but the way it was wrong was interesting, and it made sense to me, since there's nothing in there that actually understands what it's spewing out. It's just language, and that's what makes what it's able to do so miraculous. So my feeling is it is certainly way more useful than not, and that's why I tend to think that Vivaldi's anti-AI stance is probably a mistake.