Ben Horowitz (14:43)
Yeah, so it's funny. Every panel I've been on, or any time I've been at a conference with European leaders, whether they're in the press or industry or the regulatory bodies, they say the same thing: well, Europe may not be the leader in innovation, but we're the leader in regulation. And I'm like, you realize you're saying the same thing. So Europe went down this path known as the precautionary principle of regulation, which means you don't just regulate things that are known to be harmful; you try to anticipate anything that might go wrong with the technology. And this is, I think, a very dangerous principle, because if you think about it, we would never have released the automobile. We'd never have released any technology.

I think it started in the nuclear era. One could argue that we had the answer to the climate issues in 1973, and if we had just built out nuclear power instead of burning oil and coal, we would be in much better shape. If you look at the safety record of nuclear, it's much better than oil, where people blow up on oil rigs all the time. I think more people are killed every year in the oil business than have been killed in the history of nuclear power. So these regulatory things have impact.

In the case of AI, there are several categories that people are talking about regulating. There's the speech category, and Europe is very big on this: can the AI say hateful things, can it express political views that we disagree with, that kind of thing? Very similar to social media and that category of things, and do we need to stop the AI from doing that? Then there's another category, which is: can it give you instructions to make a bomb or a bioweapon, that kind of thing? Then there's another regulatory category, the one I think most people use to get their way on the other things, which is: well, what if the AI becomes sentient and turns into the Terminator? We've got to stop that now. Or the related one, which is a little more technologically believable, but not exactly, which is takeoff. Have you heard of this thing, takeoff? Takeoff is the idea that the AI learns how to improve itself, and then it improves itself so fast that it just goes crazy, becomes a super brain, and decides to kill all the people to get itself more electricity and so on. Kind of like the Matrix. And then there's another one, around copyright, which is important but probably not on everybody's mind as much. Okay, so let me see if I can deal with these.

If you look at the technology, the way to think about it is there's the foundation, the models themselves. And it's important, by the way, that everybody who works on this stuff calls them models and not AI intelligence and so forth. There's a reason for that: what it is, is a mathematical model that can predict things. It's a giant version of the mathematical models that you all study to do basic things. So if you want to calculate, say, when Galileo dropped a cannonball off the Tower of Pisa: you drop it off the first floor and the second floor, and then you can write a math equation to figure out what happens when you drop it off the twelfth floor, how fast it falls.
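A minimal sketch of that two-variable falling-body model, assuming standard gravity, no air resistance, and a made-up 3.5 m per floor:

```python
# Free-fall "model" with two knobs: drop height and gravity.
# Ignores air resistance, which a cannonball mostly lets us do.

import math

G = 9.81  # gravitational acceleration, m/s^2

def fall(height_m: float) -> tuple[float, float]:
    """Return (time to hit the ground in s, impact speed in m/s)."""
    t = math.sqrt(2 * height_m / G)  # h = 1/2 * g * t^2  =>  t = sqrt(2h/g)
    v = G * t                        # v = g * t
    return t, v

# Measure the first and second floors, then *predict* the twelfth
# (3.5 m per floor is a made-up number for illustration):
for floor in (1, 2, 12):
    t, v = fall(3.5 * floor)
    print(f"floor {floor:2d}: hits in {t:.2f} s at {v:.1f} m/s")
```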
So that's a model with maybe a couple of variables. Now think: what if you had a model with 200 billion variables? That's an AI model, and then you can predict things like, okay, what word should I write next if I'm writing an essay on this? You can predict that, and that's what's going on. So it's math. Inside, the model is just doing a lot of matrix multiplication, linear algebra, that kind of thing.

So you can regulate the model, or you can regulate the applications on top of the model. When we're talking about publishing how to make a bioweapon or a bomb, that kind of thing, that's already illegal, and the AI shouldn't get a pass on it because it's AI. If you build an application like ChatGPT that publishes instructions for making a bomb, you ought to go to jail. That should not be allowed, and it's not allowed; I think that falls under regular law.

Then the question is, okay, do you need to regulate the model itself? And the challenge with regulating the model is that the regulations are all of the form: you can do math, but not too much math. If you do too much math, we're going to throw you in jail, but if you do just this much math, it's okay. And how much math is too much math?

And look, the problem in that thinking is that when you talk about sentient AI or takeoff, you're talking about thought experiments that nobody knows how to build. I think there are very good arguments, and we do know how to reason about these systems, that takeoff is not going to happen and that we have no idea how to make takeoff happen. So it's kind of like the laws of physics: I can do a thought experiment that says if you travel faster than the speed of light, you can go backwards in time. So do we now need to regulate time travel, and outlaw whole branches of physics, in order to stop people from traveling back in time and changing the present and screwing everything up for us? That's probably too aggressive. And we're really getting into that territory when we talk about sentient AI. We don't even know what makes people sentient; we literally don't. You know who knows the most about consciousness? Anesthesiologists, because they know how to turn it off. But that's about the extent of what we know about consciousness. So we definitely don't know how to build it, and we definitely haven't built it to date. There's no AI that's conscious or has free will or any of these things.

And so you get into regulating those kinds of ideas. And I'm not saying that AI can't be used to improve AI; it absolutely can. Computers have been improving computers since we started building them. But that's different from takeoff, because takeoff requires a verification step that nobody knows how to do. So you get into very, very theoretical cases, and then you write a law that prevents you from competing with China at all, and that gets very dangerous. So I just say we have to be really, really smart about how we think about regulation and how that goes.
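To ground the "it's just matrix multiplication" point above, here is a minimal next-word-prediction sketch: one weight matrix turns a context vector into a score per word, and the highest score wins. Everything here (the vocabulary, the weights, the vector sizes) is made up for illustration; real models chain billions of such weights.

```python
# A toy version of "the model is just matrix multiplication":
# a linear layer maps a context vector to one score per word.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]

d_model = 8                                  # size of the context vector
W = rng.normal(size=(d_model, len(vocab)))   # "parameters": one weight column per word
context = rng.normal(size=(d_model,))        # stand-in for an encoded prompt

logits = context @ W                         # the matrix multiplication
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax: scores -> probabilities

print(dict(zip(vocab, probs.round(3))))
print("predicted next word:", vocab[int(np.argmax(probs))])
```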
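And a rough sketch of what "how much math is too much math" means in practice: compute-based rules count total training operations, and a common rule of thumb from the scaling-law literature estimates training compute as roughly 6 × parameters × training tokens. The model size and token count below are illustrative, not any real model's figures.

```python
# Back-of-the-envelope training compute, using the rule of thumb
# FLOPs ~= 6 * parameters * training tokens.

params = 200e9   # a 200-billion-parameter model, as in the example above
tokens = 10e12   # 10 trillion training tokens (assumed)

flops = 6 * params * tokens
print(f"estimated training compute: {flops:.1e} FLOPs")
# -> 1.2e+25, around the thresholds (on the order of 1e25 to 1e26 FLOPs)
#    that recent regulatory proposals have used to draw the line.
```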
Copyright is another one. Should you be allowed to have an AI listen to all the music and then reproduce Michael Jackson? No, definitely not; that's got to be illegal, because it's a clear violation of copyright. But then can you let it read a bunch of copyrighted material and build a statistical model that makes the AI better, without being able to reproduce the material? Well, it gets very tricky if you don't allow that, because, by the way, that's what people do, right? You read a lot of stuff, and then you write something, and it's affected by all the stuff you read. And competitively, China is absolutely able to do that. The amount of data you train on dramatically improves the quality of the model, so you're going to have worse models if you don't allow it. So that's a trickier one. But this is where you have to be very careful with regulation, so that you don't kill competitiveness while not actually gaining any safety. That's a big debate right now, and it's something we're working on a lot.
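On the claim that more training data dramatically improves the model: one published fit of that relationship is the scaling law from Hoffmann et al. (2022, the "Chinchilla" paper), which models loss as a function of parameter count N and training tokens D. The sketch below uses that paper's fitted constants; lower loss means a better model, and the N and D values are illustrative.

```python
# Chinchilla-style loss fit: L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the published fit from Hoffmann et al. (2022).

def loss(N: float, D: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / N**alpha + B / D**beta

N = 200e9  # parameters, as in the example above (assumed)
for D in (1e12, 5e12, 20e12):  # 1T, 5T, 20T training tokens
    print(f"{D:.0e} tokens -> predicted loss {loss(N, D):.3f}")
# Predicted loss falls as D grows: more training data, better model.
```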