Transcript
A (0:05)
Very happy to be speaking with Alex Stamos on the next episode of Frankly Fukuyama. If you like this series, please like and subscribe. So Alex was the head of security at Facebook back in 2016 when all the Russian hacking happened. He's been with us here at Stanford at our Cyber Policy Center for a number of years. He teaches a really popular course called Hack Lab, where he teaches white hat hackers how to get into computers, among other things. Yes, yes. But really what I wanted to talk about today is the whole question of AI and how that's going to affect computer security. It seems to me that if you listen to people like Geoffrey Hinton, he says there are existential threats out there that may really affect the future of humanity as a whole. Elon Musk too; there are a lot of people who are into this. And there's also another view that says the threats are shorter term: it's basically not computers doing bad things to people, but people doing bad things to other people using computers. Since you are really one of the foremost experts on computer security, how do you think about the impact that AI is going to have on our well-being?
B (1:19)
Yeah, I'm much more of a short-term, worried-about-humans person. I'm not an existential-threat guy. "AI doomer" is the shorthand; I'm not a doomer, I think. You know, over the last year or so, people have become a lot more skeptical of the idea of AGI, artificial general intelligence. A number of experts have pushed their timelines for AGI past 2030 or even further out, and a number of people have made the argument that LLMs, the current set of technologies we're pursuing, will never get us to AGI; that we're going to need completely different types of models to get there; that LLMs don't have a true understanding of the world the way human beings do, and never will have an understanding of the world that rivals human beings. And that's a good thing if you're worried about AI taking over the world. Having a real 3D model of how things interact would be necessary to have the Terminator future.
A (2:27)
The LLM knows that an apple fell on Newton's head, but it doesn't actually understand gravity.
B (2:33)
That's right. The example I often use is my goldendoodle that our family has. She can catch a Frisbee. She doesn't bring it back, because she's mostly poodle, so she's not a full golden retriever. But she can catch a Frisbee. She can't write poetry, right? LLMs can write poetry, but it's pretty much impossible to train an LLM, if you gave it control of a robot, to go get your Frisbee. Language is one of the hardest things human beings do. It's one of the last things we evolved, one of the highest functions of our brains. With LLMs we jumped straight to one of the hardest things humans do, and then we've kind of backed into having them do these simple things. And so you have all these experiments where folks give control of robots to LLMs, and they're really bad at things like navigating simple spaces.

What folks are realizing is that LLMs just don't understand the world. They understand the world through the written word. It's like a novelist's view of the world; they don't have an actual physical understanding. So people are talking about physical models and physics-based models and vision models and other things. A dog and other animals have a much more innate sense of space and of the 3D world, something much simpler, a kind of base-brain view that doesn't require that high-level function. And there's been a bunch of research into simpler models that actually beat the more complicated models at things like solving Sudoku, practical problems, though they can't write poetry. That's a whole line of research that's really effective and that I think perhaps gets us to AGI. And perhaps what you end up doing is like the human brain: you end up with a bunch of different models put together, if you want to build an AI system that, like a human, has the capability to interact with the 3D world and write poetry and have instincts and all these things.

But the other thing about AI is it doesn't want anything, right? Like, an LLM is just... it's a bunch of tensors. It's a box of numbers. The most complicated AI model just sits there until you do something. Now, a human being can prompt it to do something terrible, can give it a system prompt, can give it a goal to go do something, and then can take the output and plug the output into a system that has the ability to do things. And so, yes, we'll talk, I'm sure, about the bad things you could do with AI. But on its own, AI has no wants, it has no instincts. It has not evolved, like we have, to have basic mammalian desires or intentionality. It doesn't want to reproduce, it doesn't want to kill or eat or make more AIs. And so that's the other thing I think is missing here: if AI was going to do something bad, it's because somebody made it want to do it. Even if we ended up with a Terminator situation in the end, it's because somebody decided to create an AI and then to give it the desire to do something terrible.
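[Editor's note: a minimal sketch of the architecture Stamos is describing, with every name hypothetical rather than any real vendor's API. The point it illustrates is that the model is a pure function that does nothing until called; the goal, the loop, and the connection to anything that can act are all supplied by a human.]

```python
# Sketch of the "box of numbers plus human scaffolding" idea.
# `call_model` is a hypothetical stand-in for any LLM API call;
# the stub just formats text, since the architecture is the point.

def call_model(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for an LLM: stateless, no goals, no wants.
    It produces nothing at all until some outside caller invokes it."""
    return f"PLAN: handle '{user_prompt}' according to goal '{system_prompt}'"

def execute(action_text: str) -> None:
    """The only part with real-world effect, and it is human-built too."""
    print("acting on:", action_text)

def run_agent(goal: str, task: str) -> None:
    # 1. A human supplies the goal (system prompt) and the task.
    output = call_model(system_prompt=goal, user_prompt=task)
    # 2. Human-written code chooses to treat that text as an action
    #    and wires it into a system that can actually do things.
    execute(output)

if __name__ == "__main__":
    run_agent(goal="book a flight", task="SFO to JFK next Tuesday")
```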
