Transcript
A (0:06)
Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. I am here with Eliezer Yudkowsky and Nate Soares. Eliezer, Nate, it's great to see you guys again.
B (0:43)
Been a while.
C (0:43)
Good to see you, Sam.
A (0:44)
Been a long time. So, Eliezer, you were among the first people to make me concerned about AI, which is going to be the topic of today's conversation. I think many people who are concerned about AI can say that. First, I should say you guys are releasing a book, which I'm sure will be available the moment this drops: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. I mean, the book's message is fully condensed in that title. We're going to explore just how uncompromising a thesis that is, how worried you are, and how worried you think we all should be. But before we jump into the issue, maybe tell the audience how each of you got into this topic. How is it that you came to be so concerned about the prospect of developing superhuman AI?
B (1:37)
Well, in my case, I guess I was sort of raised in a house with enough science books and enough science fiction books that thoughts like these were always in the background. Vernor Vinge is the one where there was a key click moment of observation. Vinge pointed out that at the point where our models of the future predict building anything smarter than us, our crystal ball explodes. Past that point, said Vinge, it is very hard to project what happens, because there are things running around that are smarter than you. In some sense you could see that as a sort of central thesis for me. Not in the sense that I have believed it the entire time, but in the sense that some parts of it I believe and some parts of it I react against and say, no, maybe we can say the following thing under the following circumstances. Initially, I was young, and I made some metaphysical errors of the sort that young people do. I thought that if you built something very smart, it would automatically be nice. Because, hey, over the course of human history, we had gotten a bit smarter, we'd gotten a bit more powerful, we'd gotten a bit nicer. I thought these things were intrinsically tied together and correlated in a very solid and reliable way. I grew up, I read more books, and I realized that was mistaken. And 2001 is when the first tiny fringe of concern touched my mind. It was clearly a very important issue, even if I thought there was just a little tiny remote chance that maybe something would go wrong. So I studied harder, I looked into it more, I asked, how would I solve this problem? Okay, what would go wrong with that solution? And around 2003 is the point at which I realized this was actually a big deal.
