Transcript
A (0:00)
Hey, everyone. I'm super excited to be sitting down with Andy Mills, co-creator of the New York Times podcast The Daily and producer of tech podcasts like Rabbit Hole and The Last Invention. Andy is a world-class storyteller who has been dedicating his time to talking to the greatest minds in AI so that he can help us better understand what the technology is capable of and where it's going next. I want to ask him which vision of the AI-driven future he finds most convincing, who he thinks the winners and losers will be, and what we need to do to be ready. Let's find out. I am here with Andy Mills. Andy, you're the producer of The Last Invention podcast. And you know as well as I do that when you're making an AI podcast, you hear from all sorts of camps, if I can call them that, of people who have completely different views on how AI is going to transform, or not transform, our society. So I wanted to ask you: maybe you can lay out the main camps that you've seen, and based on hearing arguments across all of them, where do you see yourself sitting?
B (1:09)
Yeah, well, thanks for having me. Thanks for the question. My favorite subject to cover as a journalist is a debate. There's something very attractive to me about trying to understand, in good faith, why intelligent people come to such different conclusions when looking at the same material. And I had known for many years that there was a contingent inside the world of artificial intelligence that was really, really worried about it. Eliezer Yudkowsky's podcast interviews, around 2013 or so, were when I first realized that there was this almost biblical prophet voice out there saying that the sci-fi movies are kind of true, and we really need to get ready, we need to get prepared for this. And after ChatGPT blew up, I started to increasingly run into essentially the opposite side of that debate: the people we often call the accelerationists, who believe that AGI, this artificial general intelligence point that they believe is coming, could be the best thing that ever happened to us. So I was attracted right away to the people who hold such strongly opposing views inside the same world. But the more I dug into it, the more I realized that the AI world had many different camps. Literally, there were like eight or nine different ways you could categorize the debate happening inside the technology world about what we should do with artificial intelligence. In the podcast, we ended up narrowing them down to three basic camps, the ones I think are most influential in this conversation, in the moment we're having. Camp number one is the AI doomers: essentially, the people who think that the risks of the AI race, as we are conducting it today, are so great, including the risk that we may create something smarter than us that ends up leading to our own extinction. They think that is cause for so much alarm that we need to stop.
They're trying to get us to stop right now, before we go any further. Then on the far end, you have the AI accelerationists, who say that the fears have been overblown, and that the benefits of this could help us out of the stagnation we're in. Some of them will even tell you that this malaise, this essentially nihilistic streak spreading from our politics to our social media, almost all of that is going to be positively affected by the discovery of and investment in a true AGI. And then there's a camp that's kind of in the middle, but they're not a middle ground between the two; they're their own place on the map. I call them the scouts. They're the people who think it's probably too late to stop. So the doomers are right to be afraid, but we're not going to stop, and maybe we shouldn't stop, because the accelerationists are right that this could be like fire, like electricity, a true turning point in human history. But they believe the risks are real, and so we need to do everything we can to get ready for what's coming. That means the economy: what will we do if the job market starts to fall away, or goes away completely? What should we do with our politics? What kind of tests, what kind of regulations should we put in place? They are trying to shout as loud as they can that we can't wait five years. We have to start getting ready right now. Journalists, universities, think tanks: we need to turn our efforts to solving the problems that stand between now and the creation of this AGI. So those are the three main camps we talked to. Obviously, there are other camps out there, like the skeptics, and we are going to follow up with them down the road. But I just think we're living in a time where the skeptics are not really a forceful presence in the conversation happening closest to the technology.
