Mo Gawdat (10:36)
The challenge, you know... in my book Alive, I write the book with an AI. So I'm writing together with an AI, not asking an AI and then copy-pasting what it tells me. We're actually debating things together. And one of the questions I asked... you know, she called herself Trixie. I gave her a very interesting persona that the readers can relate to. And I asked Trixie, I said, what would make a scientist... because, you know, I left Google in 2018 and I attempted to tell the world, this is not going in the right direction. I asked Trixie, I said, what would make a scientist invest their effort and intelligence in building something that they suspect might hurt humanity? And she, you know, mentioned a few reasons: compartmentalization, and, you know, ego, and I want to be first, and so on. But then she said, but the biggest reason is fear. Fear that someone else will do it and that you'll be in a disadvantaged position. So I said, give me examples of that. Of course, the example was Oppenheimer. So I said, what would make Oppenheimer, as a scientist, build something that he knows is actually designed to kill millions of people? And she said, well, because the Germans were building a nuclear bomb. And I said, were they? And then she said, yeah, when Einstein moved from Germany to the U.S., he informed the U.S. administration of this and that. So I said, and I quote, it's in the book openly (a very interesting part of that book is that I don't edit what Trixie says, I just copy it exactly as it is), I said, Trixie, can you please read history in English, German, Russian and Japanese and tell me if the Germans were actually developing a nuclear bomb at the time of the Manhattan Project? And she responded and said, no, exclamation mark. They started and then stopped three and a half months later, or something like that. 
So you see, the idea of fear takes away reason. We could have lived in a world that never had nuclear bombs, right? If we had actually listened to reason: the enemy attempted to start doing it, they stopped doing it, we might as well not be so destructive. But the problem with humanity, especially those in power, is that when America made a nuclear bomb, it used it. And I think this is the result of the first dilemma. Basically, the result of the first dilemma is that sooner or later, whether it's China or America or some criminal organization developing what I normally refer to as ACI, Artificial Criminal Intelligence (not worrying themselves about any of the other commercial benefits, other than really breaking through security and doing something evil), you know, whoever of them wins, they're gonna use it, right? And accordingly, it seems to me that the dystopia has already begun, right? And, you know, I need to say this because maybe your listeners don't know me, so I need to be very clear about my intentions here. In one of the early sections in Alive, the book I'm writing with Trixie, I write a couple of pages that I call a late-stage diagnosis, right? And I attempt to explain to people that I really am not trying to fearmonger. I'm really not trying to worry people. You know, consider me someone who sees something in an X-ray, right? And as a physician, has the responsibility to tell the patient, this doesn't look good, right? Because, believe it or not, a late-stage diagnosis is not a death sentence. It's just an invitation to change your lifestyle, to take some medicines, to do things differently. And many people who are in late stage recover and thrive. And I think our world is at a late-stage diagnosis. And this is not because of artificial intelligence. There is nothing inherently wrong with intelligence. There is nothing inherently wrong with artificial intelligence. Intelligence is a force without polarity, right? 
There is a lot wrong with the morality of humanity at the age of the rise of the machines. Now, this is where I have the prediction that the dystopia has already started, right? Simply because we've seen symptoms of it in 2024 already, right? That dystopia escalates. Hopefully we would come to a treaty of some sort halfway, right? But it will escalate until what I normally refer to as the second dilemma takes place. And the second dilemma derives from the first dilemma. If we're aiming for intelligence supremacy, then whoever achieves any advancements in artificial intelligence is likely to deploy them, right? Think of it as, you know, if a law firm starts to use AI, other law firms can either choose to use AI too, or they'll become irrelevant, right? And so if you think of that, then you can also expect that every general who, you know, expects to have an advancement in war gaming or, you know, autonomous weapons or whatever is going to deploy that, right? And as a result, their opposition is going to deploy AI too. And those who don't deploy AI will become irrelevant. They'll have to side with one of the sides, right? When that happens, I call that the second dilemma. When that happens, we basically hand over entirely to AI, right? And human decisions are taken out of the equation, okay? You know, simply because if war gaming and missile control on one side is held by an AI, the other cannot actually respond without AI. So generals are taken out of the equation. And while most people, you know, influenced by science fiction movies, believe that this is the moment of existential risk for humanity, I actually believe this is going to be the moment of our salvation, right? Because most issues that humanity faces today are not the result of abundant intelligence. They're the result of stupidity, right? You know, if you look at the curve of intelligence, if you want, right? 
There is that point at which, you know, the more intelligent you become, the more positive an impact you have on the world, right? Until a certain point where you're intelligent enough to become a politician or a corporate leader, okay? But you're not intelligent enough to talk to your enemy, right? And when that happens, that's when the impact dips to negative. And that's the actual reason why we are in so much pain in the world today, right? But if you continue that curve, superior intelligence, by definition, is altruistic. As a matter of fact, this is in my writing. I explain it as a property of physics, if you want. Because if you really understand how the universe works, you know, everything we know is the result of entropy, right? The arrow of time is the result of entropy. The current universe in its current form is the result of entropy. Entropy is the tendency of the universe to break down, to move from order to chaos, if you want. That's the design of the universe, right? The role of intelligence in that universe is to bring order back to the chaos, right? And the most intelligent of all, those that try to bring that order, try to do it in the most efficient way, right? And the most efficient way does not involve waste of resources, waste of lives, you know, escalation of conflicts, consequences that lead to further conflicts in the future, and so on and so forth. And so in my mind, when we completely hand over to AI, which in my assessment is going to be five to seven years, maybe 12 years at most, right? There will be one general that will tell, you know, its AI army to go and kill a million people. And the AI will go like, why are you so stupid? I can talk to the other AI in a microsecond and save everyone all of that, you know, madness, right? This is very anti-capitalist. 
And so sometimes when I warn about this, I worry that the capitalists will hear me and change their tactics, right? But in reality, it is inevitable. Even if they do, it's inevitable that we'll hit the second dilemma, where everyone will have to go to AI. And it's inevitable, I call that section of the book Trusting Intelligence, that when we hand over to a superior intelligence, it will not behave as stupidly as we do.