Eliezer Yudkowsky (65:07)
So first of all, Geoffrey Hinton, the guy who won the Nobel Prize in Physics for being among the people most directly pinpointable as having kicked off the entire revolution of getting backprop to work on multilayer neural networks, or as it's now known, deep learning, the point where AI started working at all. Geoffrey Hinton is, I think, on record as recently saying, after he quit his job at Google and could then speak freely, something like: intuitively it seems to him like a 50% catastrophe probability, but based on other people seeming less concerned, he adjusted down to 25%. I could be misquoting here; I'm trying to do this from memory. Many people would consider this to not be a lack of concern, somebody saying, well, it looks to me like a coin flip whether or not we destroy the world. This is not what you want to hear from your Nobel laureate scientist who helped invent the field and left Google to be able to speak freely about it, so he no longer has a financial stake in making it bigger or smaller one way or the other. Many people would call this already a high degree of scientific alarm.

Yoshua Bengio is one of the co-founders of deep learning; he co-won the Turing Award, computer science's highest prize, with Geoffrey Hinton for inventing deep learning. Yoshua Bengio is also, I think, on the concern list, though I don't off the top of my head have a direct quote from him about probabilities. It is true that I am more concerned than they are. I would, and I realize that this may sound somewhat hubristic, attribute this to them being relative newcomers to my field, who may not have gotten acquainted with the full list of reasons why it is hard to align AI. That said, coin flip odds of destroying the world is still not what you want to be hearing from your relatively more senior scientists who are relatively newer to the field. Relatively newer to my field, that is.
They are vastly my seniors in artificial intelligence itself, of course; I am speaking tongue in cheek whenever I accuse people of being young whippersnappers. Geoffrey Hinton could say that with a straight face. That was just a bit of light self-mockery about how I'm not Geoffrey Hinton. But that said, if you are relatively newer to this, you might think, well, maybe we've just got to use reinforcement learning to make the AIs love us the way a child loves a parent, or love us the way a parent loves a child, and not quite have at your fingertips the top six reasons why that is hard, the principled obstacles to it, and what will go wrong there. So that is what prevents the famous inventors of the field, who only started speaking out about their concerns relatively recently after leaving their companies and are now financially independent of stakes in their opinions, that's what makes them be like 50-50 the world gets destroyed, instead of my own thing where I'm like, yeah, it's predictable that the world gets destroyed if you keep doing this.

But if you ask what's responsible for Sam Altman at OpenAI not being alarmed, you know, possibly having less than 50% odds, who knows what that guy's really thinking? Well, you can trace out his long trail over time, from him initially saying things like AI will end the world, but in the meanwhile there will be great companies, to him sort of saying less and less alarm-sounding things in front of Congress. Like where Congress asks him, well, you talk about the world ending, by that do you mean mass unemployment? And Sam Altman hesitates for two seconds and replies, yes. That was the lovely congressional hearing moment that happened, I think, about a year back now. So what's going on with the AI companies? I'm not a telepath; I can't read their minds.
I would point out that it is immensely well precedented in the history of science and engineering for companies that are making short-term profits to do really sad amounts of damage, vastly disproportionate to the profit that they are making, and to be in apparently sincere denial about the negative effects of what they are doing. Two cases that come to mind are leaded gasoline and cigarettes. I don't know if you would be familiar off the top of your head with the case of leaded gasoline; probably even the kids today have heard about cigarettes.

The cigarette companies did way more damage to human life, in cancer and other health effects, than they made in profits. They did make a few billion dollars in profit selling cigarettes, but nothing remotely comparable to the cost in human life. This was an immensely negative-sum game; they were doing enormously more damage than the profits that they were making. And any particular advertising professional who got up in the morning and figured out how to market cigarettes to teenagers, any of the scientists that they paid to write stories about how you couldn't really tell whether or not cigarettes were causing lung cancer, would have made a tiny, tiny fraction of the total profit of the cigarette companies. Their CEO would not have made that much larger a fraction of the total profit of the cigarette company. So they went off and participated in this thing that, you know, caused lung cancer in I don't know how many millions of people. And for what? For this very small profit. How could a human being bring themselves to do that? Through a very simple alchemy: first you convince yourself that what you're doing is not causing the harm, which is just a very easy thing for human beings to do, all the time, all throughout the entire recorded history of humanity.
And then, once you've convinced yourself that you're not doing that much harm, well, what's the harm in taking money to not do any harm? Leaded gasoline caused brain damage to tens, maybe hundreds of millions of developing brains in the United States and elsewhere. It caused brain damage to children. For what? The gas companies making leaded gasoline could have made unleaded gasoline. It's not that they would have gone out of business if they'd somehow gotten together and decided to stop making leaded gasoline, if they hadn't opposed the regulations that were trying to ban leaded gasoline before it turned into a big deal. Back in the 1930s, there was an attempt to have regulations against leaded gasoline. Lead was known to be poisonous in large quantities; why let people spray it all over the place, even in smaller quantities? But the gas companies got together and managed to prevent that legislation from passing. They poisoned an entire generation, and for what? For gas that burned about 10% more efficiently, I think, was what leaded gasoline basically got you; for it being more convenient to add lead to the gas instead of adding ethanol to make it burn more smoothly inside of car engines. Trivial, trivial, trivial compared to the damage.

This is not a conspiracy theory; this is standard medical history I'm talking about here. I've seen estimates of 5 points off the tested IQs, and you can look at the chart of which states banned leaded gasoline when, and watch the drops in the crime rate, because it disposes you to be more violent, not just stupid, that tiny little bit, and that hit child after child after child. Why, why, why would anyone cause that amount of damage? Because you got your CEO salary at a company that then didn't need to go to the inconvenience of adding ethanol to gasoline instead. Because first you convince yourself it's safe.
First you convince yourself you're doing no harm, which is just an easy thing for human brains to convince themselves of. And then why not oppose the legislation against leaded gasoline? It's not doing any harm, right? Ronald Fisher, one of the inventors of modern scientific statistics, testified against it being knowable that cigarettes cause lung cancer, because, you see, no proper controlled experiment had been done on cigarettes causing lung cancer. And so how could you possibly know from your observational studies showing 20 times the chance of cancer if you were a smoker? How could you possibly know from mere correlational studies? And Fisher himself was a heavy smoker; he actually drank his own Kool-Aid. The inventor of leaded gasoline, I think, had to go away to a sanitarium at one point because of how much he managed to poison himself with lead. He drank his own Kool-Aid. They really managed to convince themselves that they were doing no harm, and so they could do arbitrarily vast amounts of harm in exchange for these comparatively tiny, tiny profits.

And to say this is not a substitute for actually tracking the object-level arguments about whether or not AI will kill you and for what reason. You cannot figure out what will happen, as a matter of computer science, if you build a superintelligence and switch it on, by pointing out who has what tainted motives, who has what incentives to say what. But having tried, in my book, in mine and Nate Soares's book, to make the case for why, on an object level, this is what happens if you build a superintelligence and switch it on, we can ask why the people being paid literally hundreds of millions of dollars by Meta to be AI researchers, why people like Sam Altman, who, I mean, didn't quite get paid billions of dollars, he was supposed to be CEO of a nonprofit, but the guy is stealing billions of dollars in equity from the public that was supposed to own it.
How does he manage to convince himself that what he's doing is okay? Well, maybe he's not even convinced. You know, we do have him on the record as saying, a few years earlier, something like, AI will end the world, but in the meantime there'll be great companies. Maybe he's just like, yeah, sure, the world's gonna end, but I get to be important, I get to be there, and sure, who but I could be trusted with this power? That, that's...