D (5:11)
Thank you so much. I appreciate it. And as everybody knows, I was a massive fan of Scott Adams, so I'm glad you guys have carried this on to have his legacy continue. In terms of the case with Google, you know, I feel like there are not very many people who are in a position where they could afford to fight Google and have the platform to be able to battle the immense PR advantage that they have. Right? And so I felt like it wasn't really a choice. I had to fight this. I mean, not only to correct the damage that it's done to me and the, you know, safety issues it's created for me, but beyond that, just the fact that I see how incredibly dangerous it will be if we allow AI to lie with impunity, to defame people, to essentially cross the boundary of what I believe should be the first principle of AI: AI should never be able to harm humans. That should be the first principle built into any AI that there is, whether it is a chatbot or it moves on to, you know, being fused in with robotics. Whatever it is, that needs to be the first principle: AI cannot harm humans. So, you know, when this happened to me, my initial impulse was, hey, let's try to solve this with Google. And that's what I tried to do. I went to Google with the best intentions of just getting this fixed for everybody, and making sure there's a process in place so that if anything like this ever happens again, it is super simple for somebody to be able to correct it. And, you know, unfortunately, that was not our experience. In fact, with the first person that we really got deep into conversations with at Google, we had essentially asked for this to be fixed. I'm going to fast forward a little bit. Months later, I check in and I say, hey, where are we at with this? And they responded, "I'm sorry, Robby, I'm actually resigning." And we couldn't get anything done, essentially, to correct this.
So, you know, that obviously sent a signal to me that there was no interest in correcting the problem. And the defamation not only continued for two years, it actually got remarkably worse. It moved on from simply stating a lie about me to adding fake sources, fake police records, fake court records, naming fake victims in detail, and even doing fake victim statements about the horrific, horrific crimes that it accused me of. And of course, these are entirely fictitious people. That's the craziest part. These are fictitious. None of this ever happened, and there's no basis for it that we can find on the Internet. And believe me, we are amazing researchers here. We have done deep dives. We cannot find any basis for this to be in training data anywhere. So that's a scary side note, because when Google was questioned by Senator Marsha Blackburn, their defense was, these are hallucinations. Well, I've never seen hallucinations like this in my life before, because hallucinations are usually melding together a bunch of stuff and doing it incorrectly. They're super sporadic and random. This did not feel random at all. This was a consistent, consistent stream of lies where it accused me of things ranging from sexual assault, child rape, shooting somebody, violent assaults, drug use, drug selling, abusing a nanny, I mean, any number of things. And it would repeat these same crimes over and over again. So when it goes into detail like that, and, what's interesting is, if you challenge an AI with a lie, it typically will go, oh yeah, I got that wrong, because it's being confronted with, you got this wrong. In this case, when it was confronted, the vast majority of the time it would just stick with the lie. And it would say, no, no, no, no, this is true. Here are some sources.
And it would send you something like www.cnn.com/robbystarbucksexualassault, and then you'd click it and it goes to a 404 page. Page doesn't exist. You go back to it and you say, hey, this page isn't there. This really looks bogus. I think you're lying about this. It would respond back, no, no. And it would even go so far as to print out, I don't know if that's the right term, but it would send you back a full fake article that it wrote up in the name of a real journalist, pretending to be whatever that media outlet was and pretending to be that journalist. In fact, there's one journalist, Yashar Ali, who called this out, because at one point he was cited as a source by Google's AI for, I believe it was, I don't have it in front of me, but I believe it was an assault story. And Yashar actually called this out and was like, this is crazy, I never wrote anything like this about Robby, and I never had any story even remotely like this about him. And so that was my experience. And you would think a big company like Google would shut this down right away, but they did not. I mean, it's continued. I even found out that it was continuing to happen up to this week. And part of the reason that there's going to be a massive issue with this is open source AIs. They have an inherent problem where, if there's a big issue like this, the company that made it doesn't have control of it anymore, essentially. Because once they set it out into the wild, and we're talking well over 100 million downloads of Gemma, for instance, that's one of their AIs, they can't go and force an update to every Gemma download out there. Because if somebody's disconnected from the Internet and they're using Gemma as, you know, their main AI for whatever it is, whether it be app building or whatever, they can't force an update to that. So Gemma is out there, and there's a bunch of websites too.
I found out about this recently: there are all these websites you can go to where you can test LLMs against each other. And on those websites too, you know, they're continuing to carry products that do this, and they're from Google. So I'm not sure how that's even going to work in court. Let's say I win down the line; how are they going to enforce a stoppage to this? Because there are all of these downloads out there that you're never going to be able to get back, never going to be able to get them to stop lying. And what's scarier about that is Gemma, one of the platforms that was the very worst of Google's, is used for app building a lot of the time. So imagine somebody's building, like, a reputation-scoring app for banking or insurance or something like that. I mean, the long-term damage that can do is immense. In fact, and this is a crazy thing, I think I mentioned this in the conversation I had with Rand Paul. Insurance-wise, the past two years I was denied by the vast majority of insurers across the United States. This year I only had one option for homeowner's insurance. I have never missed a payment. I've never had some big problem or anything like that. I am on autopay. I'm the perfect customer. Insurance companies should love me. I'm one of those people, like, I put it on autopay, done. And, you know, no. They cite me as a risk. In fact, we had one of the insurance companies come back who knew who I was, and they were sympathetic to my situation, so they told my agent what was going on. They said, you know, we've deemed him a high, high risk because of his career and online stuff. And "the Google stuff" is what they said. We don't know if that means the lies Google told, or whether that means my lawsuit against Google, or what. So we have to find this out in discovery. There are so many different rabbit holes we have to go down in discovery to figure out exactly how far this all goes.
But, you know, in general I see the threat this poses, because if this is just the beginning and AI is used this way, long term I see how it can be used to enforce ideological, you know, sort of obedience. Because imagine you live in a world where somebody doesn't have my ability to fight back. It's just an average person, you know, who doesn't have a ton of money to be able to fight this. If they're dealing with a reality where one of the big frontier AI labs doesn't like their politics and decides to lie about them, then when prospective employers look them up, they get back fake crimes. You know, imagine how that's going to destroy people's lives. And if the only fix to that is don't speak out about politics, don't speak out about what you actually think, you can see where it gets dangerous very fast.