Transcript
Toby Ord (0:00)
Every time you want to halve the amount of this error that's remaining, you have to put in a million times as much compute. That's pretty extreme, right? So they have halved it, and they did put in a million times as much compute. But if you want to halve it again, you need a million times more compute. And then if you want to halve it another time, probably it's game over. It does hold over many different orders of magnitude, but the actual thing that's holding is what I would have thought of as a pretty bad scaling relationship.

In the case of that famous data point with the preview version of o3: in order to solve this task, which I think costs less than $5 to get someone to solve on Mechanical Turk, and which my 10-year-old child can solve in a couple of minutes, it wrote an amount of text equal to the entire Encyclopedia Britannica.

It reminds me of this Andy Warhol quote about, you know, what makes America great is that the President drinks a Coke, Liz Taylor drinks a Coke, the bum on the corner of the street drinks a Coke, and you too could have a Coke. Everyone's got access to it. I think that era is over. OpenAI, you know, introduced a higher tier that costs 10 times as much money, and this is what you're going to keep seeing. The more that we do inference scaling, it's certainly going to create inequality in terms of access to these things.

There is some snake oil, there is some fad-type behavior, and there is some possibility that this is nonetheless a really transformative moment in human history.
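To put rough numbers on that scaling claim, here is a minimal sketch assuming the remaining error falls as a power law in compute. The exponent is derived purely from the "a million times as much compute per halving" figure in the quote, not from anything else in the episode:

```python
import math

# Assumed power law: error ~ compute^(-alpha). Illustrating the claim that
# halving the remaining error takes ~1,000,000x more compute:
#   1/2 = (10**6)**(-alpha)  =>  alpha = log(2) / log(10**6)
alpha = math.log(2) / math.log(1e6)  # ~0.0502

def compute_multiplier(halvings: int) -> float:
    """Compute factor needed to halve the remaining error `halvings` times."""
    return (2 ** halvings) ** (1 / alpha)  # equivalently 1e6 ** halvings

for h in range(1, 4):
    print(f"{h} halving(s) of error -> {compute_multiplier(h):.0e}x compute")
# 1 -> 1e+06x, 2 -> 1e+12x, 3 -> 1e+18x
```

On that assumption, each further halving multiplies the compute bill by another factor of a million, which is the sense in which the relationship "holds over many orders of magnitude" and yet a third halving is "probably game over."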
Rob Wiblin (1:20)
Today I'm again speaking with Toby Ord. Toby is a senior researcher at Oxford University, and his work focuses on the biggest-picture questions facing humanity. He's probably most well known to listeners as the author of The Precipice: Existential Risk and the Future of Humanity, which made quite a big splash back in 2020. Welcome back to the show, Toby.
Toby Ord (1:36)
It's great to be here.
Rob Wiblin (1:38)
So today I want to take a bunch of kind of technical developments that have been going on in AI over the last couple of years and try to explain them in a way that almost everyone can understand, and then also explain what implications they have for our lives, for what sort of things we should expect from AI in coming years, and what implications they have for AI governance and policy in particular.

But first I wanted to talk a bit about this blog post that you wrote, or this presentation you gave last year, called The Precipice Revisited. So The Precipice was this book that you wrote in 2018 and 2019, and it came out in 2020. It, I guess, explored the science behind all of the different major threats to humanity's future: pandemics, asteroids, AI of course, nuclear war, that sort of stuff. And of course, there's been lots of developments since then. And I think last year you wanted to look back and say: over the five years since you wrote it, what have been the major changes in the picture? Is humanity in a better situation? Is it in a worse situation? What have been the major changes, I guess in particular on AI, where so much has been going on?
