Transcript
A (0:00)
Hey, everyone. I'm super excited to be sitting down with AI legend Peter Norvig. Peter is a former research director at Google, an AI fellow at Stanford, and the author of the most important text on AI of the past 30 years, Artificial Intelligence: A Modern Approach. What's so special about Peter is that he's sat at the forefront of AI research and teaching for over three decades. So he hasn't just pushed the technology forward, but educated an entire generation of AI leaders. I want to ask him how close we are to unlocking the true potential of AI, what he's most worried about, and what we need to do to build the future we want. Let's find out. Peter, you're the author of, you know, the preeminent, if I can call it that, textbook in the AI space, Artificial Intelligence: A Modern Approach, which has recently turned 30. And so this is an area that you've been thinking about since at least 1995, I'm sure a lot longer. As you think about where the technology was then and where it is today: one of the things I'm hearing a lot these days is hype that, you know, we're only one to two years out from artificial intelligence reaching its final form, being AGI, achieving its full potential. Do you believe that? How far has this technology come since you wrote the first edition of this book, and how close are we to the modern version actually achieving what is, you know, the complete promise of this technology?
B (1:31)
Yeah, so we have seen amazing progress in the last couple of years. I do think it's ironic that, you know, 30 years ago we titled this book A Modern Approach and we kept the same title. I don't know how it can be modern both 30 years ago and today. And it does seem like textbooks are obsolete now, because they come out on a cycle of several years, and AI is advancing on a cycle of several weeks. So it is exciting what's been happening the last couple of years. I think it was unanticipated by most. Certainly unanticipated by me. Just this idea that scaling up in data and processing power, with a few very clever ideas for algorithms, made such a difference. So I think that's really different.

In terms of AGI, I don't really like the term. There's no clear definition of it. Everybody has a different idea of what it is, and depending on that definition, what it takes to achieve it will vary by five, six, seven orders of magnitude. And I guess I feel like there's not going to be a moment when we say, AGI is here. I don't believe in this hard-takeoff idea. I think it'll get better and we'll just get used to it. And I think past technologies have been like that, right? If we had gone all of a sudden from the days when, if you wanted to learn something, you had to drive to the library, to the days where you have a machine in your pocket that gives you access to all the world's information, if that had happened in one day, people would have said, this is an incredible singularity and transformation. But it happened gradually and we just got used to it. So I think it'll be the same with AI. It'll get better and better. There won't be one point when we say this is the transition. It'll just do more and more.

Now, Blaise Agüera y Arcas and I wrote this article a year or so ago in which we said AGI is already here. What we meant by that was not that the machines we have now are perfect. They're certainly flawed in many, many ways. But if you take the G seriously, we made a transition in, say, 2022 or so, from writing programs that were specific, a program to play Go, a program to recognize images and so on, to programs that are general. So ChatGPT and the like can do lots of things that their inventors never realized. And we liken that to the invention of the computer, going back, say, to the ENIAC, or von Neumann's MANIAC a couple of years later, which were 100% general. Now, they're terrible computers by today's standards. They're big and clunky, with hardly any memory and slow processing speed. But if you have a conditional statement, a branching statement, and a sequential statement, and you can read and write memory, then you're 100% general. You're as general as a Turing machine. You can't get more general than that. And so in that sense, we now have programs that are general. We write them, and they can do things we didn't think of before. So that's general. They're imperfect, and they'll get better, and we'll play with that technology. But I don't see having AGI as the focus as being that helpful right now. I'd rather focus on: how can we make them better? How can we make them more reliable? How can we make them safer? What else can they do?
