Transcript
Sponsor/Ad Voice (0:00)
This episode is supported by Wealthfront. It's time. Your hard-earned money works harder for you with Wealthfront's cash account. Earn 3.75% APY on your cash from program banks, with free instant withdrawals to eligible accounts. Get a $50 bonus when you open your first cash account and deposit $500 at wealthfront.com/Ezra. Bonus terms and conditions apply. Cash account offered by Wealthfront Brokerage LLC, member of FINRA/SIPC, not a bank. The annual percentage yield on deposits as of September 26, 2025 is representative, subject to change, and requires no minimum. Funds are swept to program banks, where they earn the variable APY.
Ezra Klein (1:01)
Shortly after ChatGPT was released, it felt like all anyone could talk about, at least if you were in AI circles, was the risk of rogue AI. You began to hear AI researchers discussing their p(doom), the probability they gave to AI destroying or fundamentally displacing humanity. In May of 2023, a group of the world's top AI figures, including Sam Altman, Bill Gates, and Geoffrey Hinton, signed on to a public statement that said mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

And then nothing really happened. Many of the signatories of that letter raced ahead, releasing new models and new capabilities. Your share price, your valuation, became a whole lot more important in Silicon Valley than your p(doom).

But not for everyone. Eliezer Yudkowsky was one of the earliest voices warning loudly about the existential risk posed by AI. He was making this argument back in the 2000s, many years before ChatGPT hit the scene. He has been in this community of AI researchers, influencing many of the people who build these systems, in some cases inspiring them to get into this work in the first place, yet unable to convince them to stop building the technology he thinks will destroy humanity. He just released a new book, cowritten with Nate Soares, called If Anyone Builds It, Everyone Dies. Now he's trying to make this argument to the public, a last-ditch effort to, at least in his view, rouse us to save ourselves before it is too late.

I come into this conversation taking AI risk seriously. If we are going to invent superintelligence, it is probably going to have some implications for us. But I am also skeptical of the scenarios I often see by which these takeovers are said to happen. So I wanted to hear what the godfather of these arguments would have to say.

As always, my email is ezrakleinshow@nytimes.com.

Eliezer Yudkowsky, welcome to the show.
Eliezer Yudkowsky (3:20)
Thanks for having me.
Ezra Klein (3:21)
