Transcript
Daniel Kokotajlo (0:04)
OpenAI, Anthropic, and to some extent Google DeepMind are explicitly trying to build superintelligence to transform the world. And many of the leaders of these companies, many of the researchers at these companies, and then hundreds of academics and others in AI have all signed a statement saying this could kill everyone. So we've got these important facts that people need to understand: these people are building superintelligence. What does that even look like? And how could that possibly result in killing us all? We've written this scenario depicting what that might look like. It's actually my best guess as to what the future will look like.
Tristan Harris (0:44)
Hey everyone, this is Tristan Harris, and...
Daniel Barcay (0:47)
This is Daniel Barcay. Welcome to Your Undivided Attention.
Tristan Harris (0:50)
So, a couple months ago, AI researcher and futurist Daniel Kokotajlo and a team of experts at the AI Futures Project released a document online called AI 2027. It's a work of speculative futurism forecasting two possible outcomes of the current AI arms race that we're in.
Daniel Barcay (1:08)
And the point was to lay out this picture of what might realistically happen if the different pressures that drive the AI race all went really quickly, and to show how those pressures interrelate. So how economic competition, geopolitical intrigue, the acceleration of AI research, and the inadequacy of AI safety research all come together to produce a radically different future that we aren't prepared to handle and aren't even prepared to think about.
Tristan Harris (1:33)
So in this work, there are two different scenarios, and one's a little more hopeful than the other, but they're both pretty dark. I mean, one ends with a newly empowered superintelligent AI that surpasses human intelligence in all domains and ultimately causes the end of human life on Earth.
Daniel Barcay (1:49)
So, Tristan, what was it like for you to read this document?
Tristan Harris (1:53)
Well, I feel like the answer to that question has to start with a deep breath. I mean, it's easy to just go past that last thing we just read, right? "Ultimately causing the end of human life on Earth." And I wish I could say that this is total embellishment, exaggeration, you know, just Chicken Little alarmism. But being in San Francisco, talking to people in the AI community and people who have been in this field for a long time, they do think about this in a very serious way.

I think this report really does a brilliant job of outlining the competitive pressures and the steps that push us toward those kinds of scenarios. But I think the challenge for most people is that when they hear "the end of human life on Earth," they're like, what is the AI going to do? It's just a box sitting there computing things. If it's going to do something dangerous, don't we just pull the plug on the box? And I think that's what's so hard about this problem: the ways in which something that is so much smarter than you could end life on Earth are just beyond your imagination.

Imagine chimpanzees birthed a new species called Homo sapiens. And they're like, okay, well, this is going to be a smarter version of us. But what's the worst thing it's going to do? It's going to steal all the bananas. You can't imagine computation, semiconductors, drones, airplanes, nuclear weapons from the perspective of a chimpanzee. Your mind literally can't imagine past someone taking all the bananas.

So I think there's a way in which this whole domain is fraught with a difficulty of imagination, and also with the difficulty of not dissociating, delegitimizing, nervously laughing off, or otherwise bypassing a situation that we have to contend with. Because I think the premise of what Daniel did here is not just to scare everybody. It's to say: if the current path is heading in this direction, how do we see that clearly enough that we can choose a different path?
