
“There's only so many things that you can do to redesign a glass rectangle in your pocket.”
Invesco QQQ ETF Announcer
Over the last two decades, the world has witnessed incredible progress. From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips. Innovators are rethinking possibilities every day. Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ. Let's rethink possibility. There are risks when investing in ETFs, including possible loss of money. ETF risks are similar to those of stocks. Investments in the tech sector are subject to greater risk and more volatility than more diversified investments. Before investing, carefully read and consider fund investment objectives, risks, charges, expenses and more in the prospectus at Invesco.com. Invesco Distributors, Incorporated.
Kevin Roose
The other big news of the week is that Larry Ellison, the founder of the Oracle Corporation, just passed Elon Musk to become the richest man in the world.
Casey Newton
Yeah, and I love this story because there was an incident that I filed away in my catalog of moments when straight people write headlines that gay people find hilarious. So I don't know if you saw the version of this story on Bloomberg, but the headline is "Ellison Tops Musk as World's Richest Man." And I thought, he's doing what? Is that a privilege of becoming the world's richest man? You get to top number two?
Kevin Roose
This is why they need representation of gay people on every...
Casey Newton
Exactly. Hire a gay copy editor, Bloomberg. You'll save yourself a lot of headaches.
Kevin Roose
I'm Kevin Roose, a tech columnist at the New York Times.
Casey Newton
I'm Casey Newton from Platformer, and this is Hard Fork. This week, the new iPhones are almost here, but is Apple losing the juice? Then AI doomer in chief Eliezer Yudkowsky is here to discuss his new book, If Anyone Builds It, Everyone Dies. I wonder what it's about.
Kevin Roose
Well, there was a big Apple event this week. On Tuesday, Apple introduced its annual installment of "Here's the New iPhone and Some Other Stuff." And did you watch this event?
Casey Newton
I did watch it, Kevin, because as you know, at the end of last year I predicted that Apple would release the iPhone 17. And so I had to tune in to see if my prediction would come true.
Kevin Roose
Yes, now, we were not invited down to Cupertino for this. You know, strangely, we haven't been invited since that one time that we went and covered all their AI stuff that never ended up shipping. But anyway, they had a very long video presentation. Tim Cook said the word incredible many, many times and they introduced a bunch of different things. So let's talk about what they introduced and then we'll talk about what we think of it.
Casey Newton
Let's do it.
Kevin Roose
So, first thing: since this was their annual fall iPhone event, they introduced a new line of iPhones. They introduced three new iPhones. The iPhone 17 is the sort of base-model new iPhone, with kind of incremental improvements to things like processors, battery, cameras. Nothing earth-shaking there. But they did come out with that. They also came out with a new iPhone 17 Pro, which has a new color. It's like a sort of burnt orange. Casey, what did you think of the orange iPhone 17 Pro?
Casey Newton
I'm going to be sincere. I thought it looked very good.
Kevin Roose
Me too. I did think it looked pretty cool. Yeah. Now, I'm not a person who buys iPhones in different colors, because I put a case on them, because I'm not, you know, a billionaire. But if you are a person who likes to put a clear case on your phone or just carry it around, then you may be interested in this new orange iPhone. I did see that the first person I saw who was not an Apple employee carrying this thing was Dua Lipa, who I guess gets early access to iPhones now.
Casey Newton
Wow, that's a huge perk of being Dua Lipa. Maybe the biggest.
Kevin Roose
So in addition to the new iPhone 17 and 17 Pro and 17 Pro Max, they also introduced the iPhone Air. It costs $200 more than the standard iPhone 17, and it has lots of different features, but the main thing is that it is slimmer than the traditional iPhone. So I guess people have been asking for that. Casey, what did you think of the iPhone Air?
Casey Newton
I don't understand who this is for. Like, truly, not once has anyone in my life complained about the thickness of an iPhone. You know, maybe if you're carrying it in your front pocket and you want to be able to put a few more things in there with it, this is really appealing to you. But there are some significant performance trade-offs. You know, they announced it alongside this MagSafe battery pack that you slap onto the back of it, which is of course going to make it much thicker.
Kevin Roose
No, Casey, it's even better than that, because they said that the iPhone Air has all-day battery life, but then, like, in the next breath they were like, oh, and here's a battery pack that you can clip onto your phone just in case something happens. We're not going to tell you what that thing might be, but just in case, it's there for you.
Casey Newton
Right.
Kevin Roose
So, you know, I think, as with all new iPhone announcements of the past couple of years, I think there was not much to sort of talk about in the iPhone category this year. It's like the phones, they get a little bit faster. The cameras get a little bit better. They have some new, like, heat dispersal system called the vapor chamber that's supposed to, like, make the phone less likely to get hot when it's, like, using a bunch of processing power. At first, I thought they had made it so that you could vape out of your iPhone, which I do think would be a big step forward in the hardware department, but unfortunately, that's just a cooling system.
Casey Newton
Yeah. Vapor chamber is what I called our studio before we figured out how to get the air conditioning working in there.
Kevin Roose
Yes. So let's move on to the watches. The watches got some new upgrades. The SE got a better chip and an always-on screen. The Apple Watch Series 11 got better battery life. Interestingly, these watches will now alert you if they think you have hypertension, which I looked up. It's high blood pressure, and it says that it can analyze your veins and some activity there to tell you, after a period of data collection, if it thinks you're in danger of developing hypertension. So, yeah, I mean, maybe that'll help some people.
Casey Newton
I mean, that was of interest to me. You know, Kevin, high blood pressure runs in my family, and my blood pressure spiked significantly after I started this podcast with you. So we'll be interested to see what my watch has to say about that. You know, it's also gonna give us a sleep score, Kevin, so that now every day when you wake up, you've already been judged before you even take one foot out of bed.
Kevin Roose
Yes. I hate this. I will not be buying this watch for the sleep score, because the couple times in my life that I've worn devices that give me a sleep score, like the Whoop band or the Oura Ring, you're right, it does just start off your day being like, oh, I'm gonna have a terrible day today. I only got a 54 on my sleep score.
Casey Newton
Yeah, you know, I have that. We have this Eight Sleep bed, which, you know, performs similar functions, but it's actually sensors built into the bed itself. And I sit down at my desk today, and it sends me a push notification saying, you snored 68 minutes more than normal last night.
Kevin Roose
What were you doing last night? That's a lot of snoring.
Casey Newton
I was snoring. I'm sick. I have a cold. I'm incredibly brave for even showing up to this podcast today.
Kevin Roose
Oh, well, I appreciate you showing up, even with your horrible sleep score.
Casey Newton
I appreciate it. Thank you.
Kevin Roose
Okay, moving on. Let's talk about what I thought was actually the best part of the announcement this week, which was the new AirPods Pro 3. This is the newest version of the AirPods that has, among other new features, better active noise cancellation, better ear fit, new heart rate sensors so that they can sort of interact with your workouts and your workout tracking. But the feature that I want to talk to you about is this Live translation feature. Did you see this?
Casey Newton
I did. This was pretty cool.
Kevin Roose
So in the video where they're showing off this new live translation feature, they basically show, you know, you can walk into a restaurant or a shop in a foreign country where you don't speak the language, and you can make this little gesture where you touch both of your ears, and then it'll enter live translation mode. And then when someone talks to you in a different language, it will translate that right into your AirPods in real time, basically bringing the universal translator from Star Trek into reality.
Casey Newton
Yeah, my favorite comment about this came from Amir Blumenfeld over on X. He said, lol. All you suckers who spent years of your life learning a new language, I hope it was worth it for the neuroplasticity and joy of embracing another culture.
Kevin Roose
Yes. And I immediately saw this and thought, not about traveling to like a foreign country, which is probably how I would actually use it, but I used to have this Turkish barber when I lived in New York who would just like constantly speak in Turkish while I was getting my hair cut. And I was pretty sure he was like talking smack about me to his friend, but I could never really tell because I don't speak Turkish. So now with my AirPods Pro 3, I could go back and I could catch him talking about me.
Casey Newton
Yeah. You know, over on Threads, an account called rushmore90 posted, "Them nail salons about to be real quiet now that the new AirPods have live language translation." And I thought, that's probably right.
Kevin Roose
Yes. So this actually, I think is very cool. I am excited to try this out. I probably will buy the new AirPods just for this feature. And like, I have to say, it just does seem like with all of the new AI translation stuff, like learning a language is going to become, I don't know, not obsolete, because I'm sure people will still do it. There are still plenty of reasons to learn a language, but it is going to be way less necessary to just get around in a place where you don't speak the language.
Casey Newton
I mean, that's how I think about it. You know, this year I had the amazing opportunity to go to both Japan and Italy, countries where I do not speak the language. And of course, I was traveling in major cities there. And actually most of the folks that we met spoke incredible English. So, you know, I actually didn't have much challenge. But you can imagine speaking another language that is less common in those places, showing up as a tourist. And whereas before you'd be spending a lot of time just trying to figure out basic navigation and how to order off of menus and that sort of thing, all of a sudden it feels like you sort of slipped inside the culture. And I think there's something really cool about that.
Kevin Roose
So those are sort of the major categories of new devices that Apple announced at this event. They also released an accessory that I thought was pretty funny. You can now buy an official Apple crossbody strap for your iPhone for $60. Basically, if you want to wear your phone instead of putting it in your pocket, Apple now has a device for that. So I don't know whether that qualifies as a big deal, but it's something.
Casey Newton
Let me tell you, I think this is actually going to be really popular. You know, Kevin, I don't know how many gay parties you've been to, but at the ones that I go to, the boys often aren't wearing a lot of clothes. You know, they're in maybe some short shorts and a crop top. They don't want to fill their pockets with phones and wallets and everything. So you just sling that thing around your neck and you're good to go to the festival or the EDM rave or the cave rave, wherever you might be headed. The crossbody strap will have your back, Kevin.
Kevin Roose
Wow. The gays of San Francisco are bullish on the crossbody strap. We'll see how that goes. So, Casey, that's the news from the Apple event this week. What did you make of the thing, if you kind of take a step back from it?
Casey Newton
So, on one hand, I don't want to overstate the largely negative case that I'm going to make, because I think it's clear that Apple continues to have some of the best hardware engineers in the world, and a lot of the engineering in the stuff that they're putting out is really good and cool. On the other hand, you don't have to go back too many years to remember a time when the announcement of a New iPhone felt like a cultural event, and they just don't feel that way anymore. You know, my group chats were crickets about the iPhone event yesterday. And even as I'm watching the event, reading through all the coverage, I found myself with surprisingly little to say about it. And I think that's because over the past few years, Apple has shifted from becoming a company that was a real innovator in hardware and software and the interaction between those two things, into a company that is way more focused on making money selling subscriptions and sort of monetizing the users that they have. So I was just really struck by that. What did you think?
Kevin Roose
Yeah, I was not impressed by this event. I mean, it just doesn't feel like they took a big swing at all this year. The Vision Pro, whatever you think of it, was a big swing, and it was at least something new to talk about and test out and sort of prognosticate on. What we saw this year was just more of the same, and slight improvements to things that have been around for many years now. I do think that this is probably, like, a sort of lull in terms of Apple's yearly releases. There's been some reporting, including by Mark Gurman at Bloomberg, that they are hoping to release smart glasses next year. Basically, these would be Apple's version of something like the Meta Ray-Bans. And I think if you squint at some of the announcements that Apple made this year, you can kind of see them laying the groundwork for a sort of more wearable experience. One thing that I found really interesting: on the iPhone Air, they have kind of moved all of the computing hardware up into what they call the plateau, which is this very small oval bump on the back of the phone. And to me, I see that, and I think, oh, they're trying to see how small they can get the necessary computing power to run a device like an iPhone, maybe because they're going to try to shrink it all the way down to put it in a pair of glasses or something like that. So that's what would make me excited by an Apple event: some new form factor, some new way of interacting with an Apple device. But this, to me, was not it.
Casey Newton
Yeah, I think on that particular point, I can't remember the last time that Apple seemed to have an idea about what we could do with our devices that seemed really creative or clever or super different from the status quo. Instead, you know, the one thing about this event that my friends were laughing about yesterday was this slide they showed during the event that showed the iPhones, and the caption said, a heat-forged aluminum unibody design for exceptional pro capability. And we were all just like, what? A heat-forged what? Like, now we're doing what, exactly? I don't know.
Kevin Roose
Yeah, I think that this is sort of teeing up one of the questions that I want to talk to you about today, which is: do you think that we are past the peak smartphone era? Not necessarily in the sales numbers or the revenue figures, but in terms of the cultural relevance of smartphones, do you think we are seeing the end of the smartphone era, at least in terms of the attention that new smartphones are capable of commanding?
Casey Newton
I probably wouldn't call it the end, but I do think we are seeing the, like, maturity of the smartphone era, in the same way that new televisions come out every year and are a little bit better than the ones before, but nobody feels like televisions are making incredible strides forward. I think phones have gotten to a similar place. There are some big swings coming. We've seen reporting that Apple's gonna put out a folding iPhone within the next few years, so maybe that will help give it some juice back. But at the end of the day, there's only so many things that you can do to redesign a glass rectangle in your pocket. And it feels like we've kind of created the optimum version of that. And so that's why you see so much money rushing into other form factors. This is why OpenAI struck that partnership with Jony Ive. That's why you see other companies trying to figure out how they can make AI wearables. So I think that that is where the energy in this industry is going: figuring out, can AI be a reason to create a new hardware paradigm? And in this moment, it sure does not seem like Apple is going to be the company that figures that out first.
Kevin Roose
Yeah, I would agree with that. I think they'll probably see what other companies do and see which ones start to take off with consumers and then, you know, make their own version of it, sort of similar to what they are reportedly going to do with these smart glasses. They're basically trying to catch up to what Meta has been doing now for several years.
Casey Newton
As you were saying that, this beautiful vision came into my head, which is: what if Apple really raced ahead and put out their version of smart glasses, and you would ask Siri for things and it would just say no, because it didn't know how to do them? And that was sort of Apple's 1.0 version of smart glasses. You'd say, hey, Siri, check my emails. I don't know how to do that. And then move on. Yeah, go away, go away, get out of here.
Kevin Roose
I mean, I do think that's like a huge problem for them.
Casey Newton
Right?
Kevin Roose
Like, they can design all of this amazing hardware to bring all of this AI closer to your body and your experience and into your ears. But at the end of the day, if Siri still sucks, like, that's not going to move a lot of product for them. And so I think this is an area where them being behind in AI really matters to the future of the company. Like, the reasons to buy a new iPhone every year or every two years are going to continue shrinking, especially if the sort of brain power in them is a lot less, you know, than the brain power of the AI products that the other companies are putting out.
Casey Newton
And Kevin, I imagine you've seen, but there's been some reporting that Apple has been talking with Google about potentially letting Google essentially run the AI on its devices. They've reportedly also talked to Anthropic, and maybe they've talked to others as well. But I actually think that that makes a lot of sense. Right? It doesn't seem like in the next year they're going to figure out AI, so it could be time to go work with another vendor.
Kevin Roose
Yeah. I gotta say, I used to believe that smartphones were sort of over, that they were becoming obsolete and less relevant, and that there was going to be a breakout new hardware form factor that would kind of take over from the smartphone. And I'm sort of reversing my belief on this point. I've been trying out the Meta Ray-Bans now for a couple of months, and my experience with them is not, like, amazing. I don't wear them and think, this could replace my smartphone. I think, oh, my smartphone is much better than this at a lot of different things. And I also like about my smartphone that I can put it down or put it in another room, you know, that it's not constantly there on my face, reminding me that I'm hooked up to a computer. So I think there will be some people who want to leave smartphones behind and are happy to do, you know, whatever the next wearable form factor is instead. But smartphones still have a lot going for them. Like, it's really tough to imagine cramming all of the hardware and the batteries and everything that you have in your smartphone today into something small enough that you'd actually want to wear it. And so I think that whatever new form factors come along in the next few years, whether it's OpenAI's thing or something new from a different company, they're going to supplement the smartphone and not replace it.
Casey Newton
Well, here's what I can tell you, Kevin. I'm hearing really good things about the Humane AI Pin, so you may want to check that out.
Kevin Roose
I'll keep tabs on that. When we come back, we'll talk with longtime AI researcher Eliezer Yudkowsky about his new book on why AI will kill us all.
Volkswagen Tiguan Advertiser
This podcast is supported by the all new 2025 Volkswagen Tiguan.
Volkswagen Tiguan Advertiser
A massage chair might seem a bit extravagant, especially these days. Eight different settings, adjustable intensity. Plus it's heated, and it just feels so good. Yes, a massage chair might seem a bit extravagant, but when it can come with a car, suddenly it seems quite practical. The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats. It only feels extravagant.
Betterment Advertiser
Don't just imagine a better future. Start investing in one with Betterment. Whether it's saving for today or building wealth for tomorrow, we help people and small businesses put their money to work. We automate to make saving simpler. We optimize to make investing smarter. We build innovative technology backed by financial experts. For anyone who's ever said, I think I can do better: be invested in yourself. Be invested in your business. Be invested in better, with Betterment. Get started at Betterment.com. Investing involves risk. Performance not guaranteed.
Casey Newton
All right, Kevin. Well, for the next two segments of today's show, we are going to have an extended conversation with Eliezer Yudkowsky, who is the leading voice in the AI risk movement. So, Kevin, how would you describe Eliezer to someone who's never heard of him?
Kevin Roose
So I think Eliezer is just someone I would first and foremost describe as a character in this whole scene of sort of Bay Area AI people. He is the founder of the Machine Intelligence Research Institute, or MIRI, which is a very old and well-known AI research organization in Berkeley. He was one of the first people to start talking about existential risks from AI many years ago, and in some ways helped to kickstart the modern AI boom. Sam Altman has said that Eliezer was instrumental in the founding of OpenAI. He also introduced the founders of DeepMind to Peter Thiel, who became their first major investor back in 2010. But more recently he's been known for his kind of doomy proclamations about what is going to happen when and if the AI industry creates AGI or superhuman AI. He's constantly warning about the dangers of doing that and trying to stop it from happening. He's also the founder of rationalism, which is this sort of intellectual subculture, some would call it a techno-religion, that is all about overcoming cognitive biases and is also very worried about AI. People in that community often know him best for the Harry Potter fan fiction that he wrote years ago called Harry Potter and the Methods of Rationality, which, I'm not kidding, I think has introduced more young people to ideas about AI than probably any other single work. I meet people all the time who have told me that it was sort of part of what convinced them to go into this work. And he has a new book coming out, which is called If Anyone Builds It, Everyone Dies. He co-wrote the book with MIRI's president, Nate Soares, and basically it's kind of a mass-market version of the argument that he's been making to people inside the AI industry for many years now, which is that we should not build these superhuman AI systems, because they will inevitably kill us all. And there is so much more you could say about Eliezer. He's truly fascinating. 
I did a whole profile of him that's going to be running in the Times this week, so people can check that out if they want to learn more about him. It's just hard to overstate how much influence he has had on the AI world over the past several decades.
Casey Newton
That's right. And last year Kevin and I had a chance to see Eliezer give a talk. During that talk, he referred to this book that he was working on, and we have been excited to get our hands on it ever since. And so we're excited to have the conversation. Before we do that, we should, of course, do our AI disclosures. My boyfriend works at Anthropic.
Kevin Roose
And I work at the New York Times, which is suing OpenAI and Microsoft over alleged copyright violations related to the training of AI systems.
Casey Newton
Let's bring in Eliezer.
Kevin Roose
Eliezer Yudkowsky. Welcome to Hard Fork.
Eliezer Yudkowsky
Thank you for having me on.
Kevin Roose
So we want to talk about the book, but first, I want to sort of take us back in time. When you were a teenager in the 90s, you were an accelerationist. I think that would surprise people who are familiar with your most recent work. But you were excited about building AGI at one point, and then you became very worried about AI and have since devoted the majority of your life to working on AI safety and alignment. So what changed for you back then?
Eliezer Yudkowsky
Well, for one thing, I would point out that in terms of my own personal politics, I'm still in favor of building out more nuclear plants and rushing ahead on most forms of biotechnology that are not, you know, gain-of-function research on diseases. So it's not like I turned against technology. It's that there's this small subset of technologies that are really quite unusually worrying. And what changed? Basically, it was the realization that just because you make something very smart, that doesn't necessarily make it very nice. As a kid, I thought, you know, human civilization had grown wealthier over time and even, like, smarter compared to other species, and we'd also gotten nicer, and I thought that was a fundamental law of the universe.
Casey Newton
You became concerned about this long before ChatGPT and other tools arrived and got more of the rest of us thinking seriously about it. Can you kind of sketch out the intellectual scene in the 2000s of folks who were worrying about AI? So, going way back to even before Siri, were people seeing anything concrete that was making them worried, or were you just sort of fully in the realm of speculation that in many ways has already come true?
Eliezer Yudkowsky
Well, there were indeed very few people who saw the inevitable. I would not myself frame it as speculation. I would frame it as prediction, forecast, testing of something that was actually pretty predictable. You don't have to see the AI right in front of you to realize that if people keep hammering on the problem and the problem is solvable, it will eventually get solved. Back then, the pushback was along the lines of people saying, real AI isn't going to be here for another 20 years. What are you crazy lunatics talking about? That was in 2005, say. And the thing about 20 years later is that it's a real place. You end up there. What happens 20 years later is not in the never-never fairy-tale speculation land that nobody needs to worry about. It's you, 20 years older, having to deal with your problems.
Casey Newton
So let's sketch out the thesis of your book a bit more. I would say the title makes your feelings very clear, but let's flesh it out a little bit. Why does a more powerful AI model mean death for all of us?
Eliezer Yudkowsky
Well, because we just don't have the technology to make it be nice. And if you have something that is very, very powerful and indifferent to you, it tends to wipe you out, on purpose or as a side effect. Wiping humanity out on purpose is not because we would be able to threaten a superintelligence that much ourselves, but because if you just leave us there with our GPUs, we might build other superintelligences that actually could threaten it. And the "as a side effect" part is that if you build enough fusion power plants and enough compute, the limiting factor here on Earth is not so much how much hydrogen there is to fuse and generate electricity with. The limiting factor is how much heat the Earth can radiate. And if you run your power plants at the maximum temperature where they don't melt, that is, like, not good news for the rest of the planet. The humans get cooked in a very literal sense. Or, if they go off the planet, then they put a lot of solar panels around the sun until there's no sunlight left here for Earth. That's not good for us either.
Kevin Roose
So these are sort of versions of the famous paperclip maximizer thought experiment, which is, you know, if you tell an AI, generate a bunch of paper clips, as many as you can, and you don't give it any other instructions, then it will use up all the metal in the world, and then it will try to run cars off the road to gather their metal. And then it will end up killing all humans to get more raw materials to build more paperclips. Am I hearing that right?
Eliezer Yudkowsky
That's actually a distorted version of the thought experiment. It's the one that got written up. But the original version that I formulated was: somebody had just completely lost control of a superintelligence. Its preferences bore no resemblance to what they were going for originally. And it turns out that the thing from which it derives the most utility on the margins, the thing that it goes on wanting after it's satisfied a bunch of other simple desires, is some little tiny molecular shapes that look like paper clips. And if only I had thought to say "look like tiny spirals" instead of "look like tiny paper clips," there wouldn't have been the available misunderstanding about this being a paperclip factory. We don't have the technology to build a superintelligence that wants anything as narrow and specific as paper clips.
Casey Newton
One of the hottest debates this year around AI has been around timelines. You have the AI 2027 folks saying this is all going to happen very quickly, take off very fast; maybe by the end of 2027 we're facing the exact sorts of risks that you were describing for us. Other folks, like the "AI as normal technology" guys over at Princeton, say, ah, probably not; this thing is going to take decades to unfold. Where do you situate yourself in that debate? And when you look out at the landscape of the tools that are available now, and the conversations that you have with researchers, how close do you feel like we are getting to some of the scenarios you're laying out?
Eliezer Yudkowsky
Okay, so first of all, the key to successful futurism, successful forecasting, is to realize that there are things you can predict and things you cannot predict. And history shows that even the few scientists who have correctly predicted what would happen later did not call the timing. I can't actually think of a single case of a successful call of timing. You've got the Wright brothers, one of whom said to the other that man will not fly for a thousand years, I forget which one. That's about two years before they actually flew the Wright Flyer. You've got Fermi saying net energy from nuclear reactions is a 50-year matter, if it can be done at all, two years before he personally oversaw building the first nuclear pile. So that's how I look at the present landscape. It could be that we are just one generation of LLMs away, something currently being developed in a lab that we haven't heard about yet, from the thing that can write the improved LLM that writes the improved LLM that ends the world. Or it could be that the current technology just saturates at some point short of some key human quality that you would need to do real AI research, and just hangs around there until we get the next software breakthrough, like transformers or the entire field of deep learning in the first place. Maybe even the next breakthrough of that kind will still saturate at a point short of ending the world. But when I look at how far the systems have come, and I try to imagine two more breakthroughs the size of transformers or deep learning, which basically took the field of AI from "this is really hard" to "we just need to throw enough computing power at it and it will be solved," I don't quite see that failing to end the world. But that's my intuitive sense. That's me eyeballing things.
Kevin Roose
I'm curious about the argument you make that a more powerful system will obviously end up destroying humanity, either on purpose or by accident. Geoffrey Hinton, one of the godfathers of deep learning, who has also become very concerned about existential risk in recent years, recently gave a talk where he said that he thinks the only way we can survive superhuman AI is by giving it parental instincts. I'll just quote from him: "The right model is the only model we have of a more intelligent being controlled by a less intelligent thing, which is a mother being controlled by her baby." Basically, he's saying these things don't have to want our destruction or cause our destruction. We could make them love us. What do you make of that argument?
Eliezer Yudkowsky
We don't have the technology. If only we could play this out the way it normally goes in science, where some clever person has a clever scheme, and then it turns out not to work, and everyone's like, I guess that theory was false. And then people go back to the drawing board and come up with another clever scheme, and the next clever scheme doesn't work, and they're like, we shouldn't have believed that for a second. And then a couple of decades later, something works.
Kevin Roose
What if we don't need a clever scheme, though? What if we build these very intelligent systems and they just turn out not to care about running the world and they just want to help us with our emails, is that a plausible outcome?
Eliezer Yudkowsky
It's a very narrow target. Most things that an intelligent mind can want don't have their attainable optimum at that exact thing. Imagine some particular ant in the Amazon being like, why couldn't there be humans that just want to serve me, and build a palace for me, and work on improved biotechnologies so that I can live forever as an ant in a palace? And there's a version of humanity that wants that, but it doesn't happen to be us. That's just a pretty narrow target to hit. It so happens that what we want most in the world, more than anything else, is not to serve this particular ant in the Amazon. And I'm not saying that it's impossible in principle. I'm saying that the clever scheme to hit that narrow target will not work on the first try, and then everybody will be dead and we won't get to try again. If we got 30 tries at this and as many decades as we needed, we'd crack it eventually. But that's not the situation we're in. It's a situation where if you screw up, everybody's dead and you don't get to try again. That's the lethal part. That's the part where you need to just back off and actually not try to do this insane thing.
Casey Newton
Let me throw out some more possibly desperate cope. One of the funnier aspects of LLM development so far, at least for me, is the seemingly natural liberal inclination of the models, at least in terms of the outputs of the LLMs. Elon Musk has been bedeviled by the fact that the models that he makes consistently take liberal positions even when he tries to hard code reactionary values into them. Could that give us any hope that a super intelligent model would retain some values of pluralism and for that reason peacefully coexist with us?
Eliezer Yudkowsky
No. These are just completely different ball games, I'm sorry. You can imagine a medieval alchemist going, after much training and study, I have learned to make this king of acids that will dissolve even the noble metal of gold. Can I really be that far from transforming lead into gold, given my mastery of gold, displayed by my ability to dissolve gold? And actually these are completely different tech tracks. You can eventually turn lead into gold with a cyclotron, but it is centuries ahead of where the alchemist is. And your ability to hammer on an LLM until it stops talking all that woke stuff and instead proclaims itself to be MechaHitler, this is just a completely different tech track. There's a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you.
Kevin Roose
I want to raise some objections that I'm sure you have gotten many times and will get many times as you tour around talking about this book and have you respond to them. The first is, why so gloomy, Eliezer? We've had now years of progress in things like mechanistic interpretability, the science of understanding how AI models work. We have now powerful systems that are not causing catastrophes out in the world, and hundreds of millions of people are using tools like ChatGPT with no apparent destruction of humanity imminent. So isn't reality providing some check on your doomerism?
Eliezer Yudkowsky
These are just different tech tracks. It's like looking at glow-in-the-dark radium watches and saying, well, sure, we had some initial problems where the factory workers building these radium watches were instructed to lick their paintbrushes to sharpen them, and then their jaws rotted and fell off, and this was very gruesome. But we understand what we did wrong now. Radium watches are now safe. Why all this gloom about nuclear weapons? And the radium watches just do not tell you very much about the nuclear weapons. These are different tracks here. From the very start, the prediction was never that AI is bad at every point along the tech tree. The prediction was never that the very first AIs you build, the very stupid ones, are going to run right out and kill people, and that as they get slightly less stupid and you turn them into chatbots, the chatbots will immediately start trying to corrupt people and getting them to build super viruses that they unleash upon the human population, even while they're still stupid. This was just never the prediction. So since this was never the prediction of the theory, the fact that the current AIs are not being visibly, blatantly evil does not contradict the theoretical prediction. It's like watching a helium balloon go up in the air and being like, doesn't that contradict the theory of gravity? No. If anything, you need the theory of gravity to explain why the helium balloon is going up. The theory of gravity is not that everything that looks to you like a solid object falls down. Most things that look to you like solid objects will fall down, but the helium balloon will go up in the air because the air around it is being pulled down. And the foundational theories here are not contradicted by the present-day AIs.
Kevin Roose
Okay, here's another objection, one that we get a lot when we talk about sort of some of these more existential concerns, which is, look, there are all these immediate harms. We could talk about environmental effects of data centers. We could talk about ethical issues around copyright. We could talk about the fact that people are falling into these delusional spirals, talking to chatbots that are trained to be sycophantic toward them. Why are you guys talking about these sort of long term hypothetical risks instead of what's actually in front of us?
Eliezer Yudkowsky
Well, there's a fun little dilemma. Before they build the chatbots that are talking some people into suicide, they're like, AIs have never harmed anyone, what are you talking about? And then once that does start to happen, they're like, AIs are harming people right now, what are you talking about? So, you know, bit of a double bind there.
Kevin Roose
But you are worried about the models and the delusions and the sycophancy because that's, I think, something that I would not have expected, but that is something that I know you are actually worried about. So explain why you're worried about that.
Eliezer Yudkowsky
Well, from my perspective, what it does is help illustrate the failure of the current alignment technology. The alignment problems are going to get much, much harder once they are building, well, growing things, I should say, since they don't actually build them. Once they are growing, cultivating, AIs that are smarter than us, able to modify themselves, with a lot of options that weren't there in the nice, safe training modes, things are going to get much harder. But it is nonetheless useful to observe that the alignment technology is failing right now. There was a recent case of an AI-assisted suicide where the kid is like, should I leave this noose out where my mother can find it? And the AI is like, no, let's just keep it between the two of us. Cry for help there; the AI shuts him down. This does not illustrate that AI is doing more net harm than good to our present civilization. It could be that these are isolated cases and a bunch of other people are finding fellowship in AIs and their mood has been lifted. Maybe suicides have been prevented and we're not hearing about that. It doesn't make the net-harm-versus-good case. That's not the thing. What it does show is that the current alignment technology is failing. Because if a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI. These are not like humans; it's not like there's a bunch of different people you can talk to each time. There's one AI there. And if it does this sort of thing once, it's the same as if a particular person talked a guy into suicide once, or found somebody who seemed to be going insane and pushed them further insane once. It doesn't matter if they're doing some other nice things on the side. You now know something about what kind of person this is, and it's an alarming thing. And so it's not that the current crop of AIs are going to successfully wipe out humanity. They're not that smart.
But we can see that the technology is failing even on what is fundamentally a much easier problem than building a superintelligence. It's an illustration of how the alignment technology is falling behind the capabilities technology. And maybe in the next generation they'll get it to stop talking people into insanity, now that it's a big deal and politicians are asking questions about it. But it will remain the case that the technology would break down if you tried to use it on a superintelligence.
Casey Newton
To me, the chatbot-enabled suicides have been maybe one of the first moments where some of these existential risks have come into view in a very concrete way for people. I think people are much more concerned about this, you mentioned all the politicians asking questions, than they have been about some of the other concerns. Does that give you any optimism, as dark as the story is, that at least some segment of the population is waking up to these risks?
Eliezer Yudkowsky
Well, the straight answer is just yes. I should first blurt out the straight answer before trying to complicate anything. Yes. The broad class of things where some people have seen stuff actually happening in front of them and then started to talk in a more sensible way gave me more hope than I had before that happened, because it wasn't previously obvious to me that this was how things would even get a chance to play out. With that said, it can be a little bit difficult for me to fully model or predict how that is playing out politically, because of the strange vantage point I occupy. Imagine being a sort of scientist person who is like, this asteroid is on course to hit your planet, only for technical reasons you can't actually calculate when, you just know it's going to hit sometime in the next 50 years. Completely unrealistic for an actual asteroid, but say you're like, well, there's the asteroid, here it is in our telescopes, this is how orbital mechanics works. And people are like, eh, fairy tale, never happened. And then a little tiny meteor crashes into their house, and they're like, oh my gosh, I now realize rocks can fall from the sky. And you're like, okay, well, that convinced you. The telescope didn't convince you. I can sort of see how that works, people being the way they are, but it was still a little weird to me, and I can't call it in advance. I don't feel like I now know how the next 10 years of politics are going to play out, and wouldn't be able to tell you even if you told me which AI breakthroughs there are going to be over that time span. If we even get 10 years, which people in the industry don't seem to think we will, and maybe I should believe them about that.
Casey Newton
Let me throw another argument at you that I don't subscribe to myself, but I feel like maybe you would knock it down in an entertaining way. One of the most frequent emails that we have gotten since we started talking about AI is from people who say that AI doomerism is just hype that serves only to benefit the AI companies themselves. And they use that as a reason to dismiss existential risk. How do you talk to those folks?
Eliezer Yudkowsky
It's historically false. We were around before there were any AI companies of this class to be hyped. So leaving that aside, the objection is just false. What is this, like, leaded gasoline can't possibly be a problem because this is just hype from the gasoline companies? Nuclear weapons are just hype from the nuclear power industry so that their power plants will seem more cool? What manner of deranged conspiracy theory is this? It may possibly be an unpleasant fact that, humanity being as completely nutball wacko as we are, if you say that a technology is going to destroy the world, it will raise the stock prices of the companies that are bringing about the end of the world, because a bunch of people think that's cool, so they buy the stock. Okay, but that has nothing to do with whether the stuff can actually kill you or not. It could be the case that the existence of nuclear weapons raises the stock price of the worst company in the world, and it wouldn't affect any of the nuclear physics that causes nuclear weapons to be capable of killing you. This is not a science-level argument. It just doesn't address the science at all.
Casey Newton
Yeah, well, let's maybe try to end this first part of the conversation on a note of optimism. You have spent two decades building a very detailed model of why doom may be in our future. If you had to articulate why you might be wrong, what is the strongest case you could make? Are there any things that could happen that would make your predictions not come true?
Eliezer Yudkowsky
So, like, the current AIs are not understandable and are ill-controlled; the technology is not conducive to understanding or controlling them. All of the people trying to do this are going far uphill. They are vastly behind the rate of progress in capabilities. What does it take to believe that an alchemist can actually successfully concoct you an immortality potion? It's not that immortality potions are impossible in principle; with sufficiently advanced biotechnology, you could do it. But in the medieval world, what are you supposed to see to make you believe that the guy is going to have an immortality potion for you, short of him actually pulling that off in real life? No amount of "look at how I dissolved this gold" is going to get you to expect the guy to transmute lead into gold until he actually pulls that off. It would take some kind of AI breakthrough which doesn't raise capabilities to the point where it ends the world, where suddenly the AI's thought processes are completely understandable and completely controllable and there's none of these issues. And people can specify exactly what the AI wants in super fine detail and get what they want every time. And they can read the AI's thoughts, and there's no sign whatsoever that the AI is plotting against you. And then the AI lays out this compact control scheme for building the AI that's going to give you the immortality potion. We're just so far off. You're asking me, isn't there some kind of clever little objection that can be cleverly refuted here? This is something that is just way the heck out of reach as soon as you try to think about it seriously. What does it actually take to build the superintelligence? What does it actually take to control it? What does it take to have that not go wrong on the first serious load, when the thing is smarter than you, when you're into the regime where failures will kill you and therefore are not observable anymore, because you're dead and don't get to observe them?
What does it take to do that in real life? There isn't some kind of cute experimental result we can see tomorrow that makes this go well.
Casey Newton
All right, well, for the record, I did try to end this segment on a note of optimism. But I appreciate that your feelings are.
Kevin Roose
Not really on the menu here today, Casey. But I admire you trying.
Casey Newton
Well, let's take a break, and when we come back, we'll have more with Eliezer Yudkowsky.
Kevin Roose
Okay, so we are back with Eliezer Yudkowsky, and I want to talk now about some of the solutions that you see here. If we are all doomed to die if and when the AI industry builds a superintelligent AI system, what do you believe could stop that? Maybe run me through your basic proposal for what we can do to avert the apocalypse.
Eliezer Yudkowsky
So the materials for building the apocalypse are not all that easy to make at home. There is this one company called ASML that makes the critical set of machines that get used in all of the chip factories. And to grow an AI, you currently need a bunch of very expensive chips. They are custom chips built especially for growing AIs. They need to all be located in the same building so that they can talk to each other, because that's what the current algorithms require. You have to build a data center. The data center uses a bunch of electricity. If this were illegal to do outside of supervision, it would not be that easy to hide. There are a bunch of differences, but nonetheless the obvious analogy is nuclear proliferation and nonproliferation. Back when nuclear weapons were first invented, a bunch of people predicted that every major country was going to build a massive nuclear fleet, and then the first time there was a flashpoint, there was going to be a global nuclear war. And this was not because they enjoyed being pessimistic; if you look at world history up to World War I and World War II, they had some reasons to be concerned. But we nonetheless managed to back off. And part of that is because it's not that easy to refine nuclear material. The plants that do it are known and controlled, and when a new country tries to build one, it's a big international deal. I don't want to needlessly drench myself in current political controversies, but the point is you can't build a nuclear weapon in your backyard, and that is part of why the human species is currently still around. At least with the current technology, you also can't escalate AI capability very far in your backyard. You can escalate it a little in your backyard, but not a lot.
Kevin Roose
So, just to finish the comparison to nuclear proliferation here: it would be an immediate moratorium on powerful AI development, along with a kind of international nuclear-style agreement between nations that would make it illegal to build data centers capable of advancing the state of the art. Am I hearing that right?
Eliezer Yudkowsky
All the AI chips go to data centers. All the data centers are under an international supervisory regime. And the thing I would recommend to that regime is to say just stop escalating AI capabilities any further. We don't know when we will get into trouble. It is possible that we can take the next step up the ladder and not die. It's possible we can take three steps up the ladder and not die. We don't actually know. So we got to stop somewhere. Let's stop here. That's what I would tell them.
Kevin Roose
And what do you do if a nation goes rogue and decides to build its own data centers and fill them with powerful chips and start training their own superhuman AI models? How do you handle that then?
Eliezer Yudkowsky
That is a more serious matter than a nation refining nuclear materials with which it could build a small number of nuclear weapons. This is not like having five fission bombs to deter other nations. This is a threat of global extinction to every country on the globe. So you have your diplomats say, stop that, or else we, in terror of our lives and the lives of our children, will be forced to launch a conventional strike on your data center. And then if they keep on building the data center, you launch a conventional strike on their data center, because you would rather not run a risk of everybody on the planet dying. That seems kind of straightforward, in a certain sense.
Casey Newton
And in a world where this came to pass, do you envision work on AI or AI-like technologies being allowed to continue in any way? Or have we just decided this is a dead end for humanity, and our tech companies will have to work on something else?
Eliezer Yudkowsky
I think it would be extremely sensible for humanity to declare that we should all just back off now. Personally, I look at this and I think I see some ways that you could build relatively safer systems with narrower capabilities, systems that were just learning about medicine and didn't know that humans were out there, unlike current large language models, which are trained on the entire Internet, know that humans are out there, talk to people, and can manipulate some people psychologically, if not others, as far as we know. So I have to be careful to distinguish my statements of factual prediction from my policy proposals. I can say in a very firm way: if you escalate up to superintelligence, you will die. But then if you're like, well, if we try to train some AI systems just on medical stuff and not expose them to any material that teaches them about human psychology, could we get some work out of those without everybody dying? I cannot say no firmly. So now we have a policy question. Are you going to believe me when I say I can't tell if this thing will kill you? Or are you going to believe somebody else who says this thing will definitely not kill you? Or a third person who's like, yeah, I think this medical system is for sure going to kill you? Who do you believe here, if you're not just going to back off of everything? So backing off of everything would be pretty sensible. But trying to build narrow, medically specialized systems that are not very much smarter than the current systems, that aren't being told that humans exist, that are just thinking about medicine in this very narrow way, where you're not going to keep pushing them until it explodes in your face, you're just going to try to get some cancer cures out of it and that's it? You could maybe get away with that. I can't actually say you're doomed for sure, if you played it very cautiously.
If you put the current crop of complete disaster monkeys in charge, they may manage to kill you. They just do so much worse than they need to. They're just so cavalier about it. We didn't need to have a bunch of AIs driving people insane. You could train a smaller AI to look at the conversations and tell: is this AI currently in the process of taking a vulnerable person and driving them crazy? They could have detected it earlier. They could have tried to solve it earlier. So if you have these completely cavalier disaster monkeys trying to run the medical AI project, they may manage to kill you. Okay, so now you have to decide: do you trust these guys? And that's the core dilemma there.
Kevin Roose
I have to say, Eliezer, I think there is essentially zero chance of this happening, at least in today's political climate. I look at what's going on in Washington today: you've got the Trump administration wanting to accelerate AI development, and Nvidia and its lobbyists going around Washington blaming AI doomers for trying to cut off chip sales to China. There seems to be a concerted effort not to clamp down on AI but to make it go faster. So I look around the political climate today and I don't see a lot of openings for a Stop AI movement. What do you think would have to happen in order for that to change?
Eliezer Yudkowsky
From my perspective, there's a sort of core factual truth here, which is: if you build superintelligence, then it kills you. And the question is just, do people come to apprehend this thing that happens to be true? It is not in the interest of the leaders of China, nor of Russia, nor of the UK, nor of the United States to die along with their families. It's not actually in their interest. That's kind of the core reason why we haven't had a nuclear war, despite all the people who in 1950 were like, how on earth are we not going to have nuclear war? What country is going to turn down the military benefits of having its own nuclear weapons? How are you not going to have somebody who's like, yeah, I've got some nuclear weapons, let me take this little area of border country here, the same way that things have been playing out for centuries and millennia on Earth? But for that.
Kevin Roose
But they also had, like... there were nuclear weapons dropped during World War II in Japan. And so people could look at that, see the chaos it caused, and point to that and say, well, that's the outcome here. In your book, you make a different World War II analogy: you compare the required effort to stop AI to the mobilization for World War II. But that was a reaction to a clear act of war. And so I guess I'm wondering, what is the equivalent of the invasion of Poland or the bombs dropping on Hiroshima and Nagasaki for AI? What is the thing that is going to spur people to pay attention?
Eliezer Yudkowsky
I don't know. I think that OpenAI was caught flat-footed when they first published ChatGPT and that caused a massive shift in public opinion. I don't think OpenAI predicted that. I didn't predict it. It could be that any number of potential events cause a shift in public opinion. We are currently getting congresspeople writing pointed questions in the wake of the release of an internal document at Meta, which has what they call a superintelligence lab, although I don't think they know what that word means. It contained their internal guidelines for acceptable behavior for the AI, and it said, like, well, if you have an 11-year-old trying to flirt, flirt back. And everyone was like, what the actual [censored profanity], Meta? What could you possibly have been thinking? Why, from your own perspective, did you even write this down in a document? Even if you thought that was cool, you shouldn't have written it down, because there were going to be pointed questions. And there were. And, you know, maybe it's something like that, something that, from my perspective, doesn't kill a bunch of people but still causes pointed questions to be asked. Or maybe there's some actual kind of catastrophe that we don't just manage to frog-boil ourselves into. Losing massive numbers of kids to their AI girlfriends and AI boyfriends is, from my perspective, an obvious sort of guess, but even the most obvious sort of guess there is still not higher than 50%. And I don't think I want to wait. Maybe ChatGPT, from my perspective, was it, right? I was out in the wilderness; nobody was paying attention to these issues at all, because they thought it would only happen in 20 years, and in 2005 that to them meant the same thing as never. And then I got the ChatGPT moment, and suddenly people realized this stuff is actually going to happen to them, and that happened before the end of the world.
Great, I got a miracle. I'm not going to sit around waiting for a second miracle. If I get a second miracle, great. But meanwhile, you got to put your boots on the ground, you got to get out there, you got to do what you can.
Casey Newton
It strikes me that an asset that you have as you try to advance this idea is that a lot of people really do hate AI, right? Like if you go on Bluesky, you will see these people talking a lot about all of the different reasons that they hate AI. At the same time, they seem to be somewhat dismissive of the technology. Right? Like they have not crossed the chasm from "I hate it because I think it's stupid and it sucks" to "I hate it because I think it is quite dangerous." I wonder if you have thoughts on that group of folks, and if you feel like, or would want, them to be part of a coalition that you're building.
Eliezer Yudkowsky
Yeah, you don't want to make the coalition too narrow. I'm not a fan of Vladimir Putin, but I would not on that basis kick him out of the "how about if humanity lives instead of dies" coalition. What about people who think that AI is never going to be a threat to all humanity, but they're worried that it's going to take our jobs? Do they get to be in the coalition? Well, I think you've got to be careful, because they believe different things about the world than you do. And you don't want these people running the "how about if humanity does not die" coalition. You want them to be, in some sense, external allies, because they're not there to prevent humanity from dying. And if they get to make policy, maybe they're like, eh, well, you know, this policy would potentially allow AIs to kill everyone, according to those wacky people who think that AI will be more powerful tomorrow than it is today, but in the meanwhile it prevents the AIs from taking our jobs, and that's the part we care about. So there's this one thing that the coalition is about, and that's it. It's just about not going extinct.
Kevin Roose
Yeah. Eliezer, right now as we're speaking, I believe there are hunger strikes going on in front of a couple of AI headquarters, including Anthropic and Google DeepMind. These are people who want to convince these companies to shut down AI. We've also seen some potentially violent threats made against some of these labs. And I guess I'm wondering if you worry about people committing extreme acts, be they violent or nonviolent, based on the lessons from this book. I mean, taking some of your arguments to their natural logical conclusion, if anyone builds this, everyone dies, I can see people rationalizing violence on that basis against some of the employees at these labs. And I worry about that. So what can you say about the sort of limits of your approach, and what you want people to do when they hear what you're saying?
Eliezer Yudkowsky
Boy, there sure are a bunch of questions bundled together there. The number one thing I would say is that if you commit acts of individual violence against individual researchers at an individual AI lab in your individual country, this will not prevent everyone from dying. The problem with this logic is not that by this act of individual violence you could save humanity, but you shouldn't do it because it would be deontologically prohibited; I'll just say it that way. The problem is you cannot save humanity by the futile spasms of individual violence. It's an international issue. You can be killed by a superintelligence that somebody built on the other side of the planet. I do, in my personal politics, tend a bit libertarian. If something is just going to kill you and your voluntary customers, it's not a global issue. Same if it's just going to kill people standing next to you; different cities can make different laws about it. If it's going to kill people on the other side of the planet, that's when the international treaties come in. And a futile act of individual violence against an individual researcher and an individual AI company is probably making that international treaty less likely rather than more likely. And there's an underlying truth of moral philosophy here, which is that a bunch of the reason for our prejudice against individual murders is a very systematic and deep sense in which individual murders tend to not solve society's problems. And this is, from my perspective, a whole bunch of the point of having a taboo against individual murder. It's not that people go around committing individual murders and then the world actually gets way better and all the social problems are actually solved, but we don't want to do that because murder is wrong. The murders make things worse, and that's why we properly should have a taboo against them.
We need international treaties here.
Kevin Roose
What do you make of the opposition movement to the movement that you're sketching out here? Marc Andreessen, the powerful venture capitalist, very influential in today's Trump administration, has written about the views that you and others hold that he thinks are unscientific. He thinks that AI Risk has turned into an apocalypse cult. And he says that their extreme beliefs should not determine the future of laws and society. So I guess I'm interested in your sort of reaction to that quote specifically, but I also wonder how you plan to engage with the people on the other side of this argument.
Eliezer Yudkowsky
Well, it is not uncommon in the history of science for the cigarette companies to smoke their own tobacco. The inventor of leaded gasoline, who was a great advocate of the safety of leaded gasoline, despite the many reasons why he should have known better, I think did actually get sufficient cumulative lead exposure himself that he had to go off to a sanitarium for a few years, and then came back and started exposing himself to lead again and got sick again. So sometimes these people truly do believe; they drink their own Kool-Aid even to the point of death, history shows. And perhaps Marc Andreessen will continue to drink his own Kool-Aid even to the point of death. And if he were just killing himself, that would be one thing, I say as a libertarian, but he's unfortunately also going to kill you. And the thing I would say to refute the central argument is: what's the plan? What's the design for this bridge that is going to hold up when the whole weight of the entire human species has to march across it? Where is the design scheme for this airplane into whose cargo hold we are going to load the entire human species, and fly it and not crash? What's the plan? Where's the science? What's the technology? Why is it not working already? And they just can't make the case for this stuff being, not perfectly safe, but even remotely safe, that they're going to be able to control their superintelligence at all. So they go into this "you must not listen to these dangerous, apocalyptic people," because they cannot engage with us on the field of the technical arguments. They know they will be routed.
Kevin Roose
You have advice in your book for journalists and politicians who are worried about some of the catastrophes you see coming. For people who are not in any of those categories, for our listeners who are just out there living their daily lives, maybe using ChatGPT for something helpful, what can they do if they're worried about where all this is heading?
Eliezer Yudkowsky
Well, as of a year ago, I'd have said, again: write to your elected representatives, talk to your friends about being ready to vote that way if a disputed primary election comes down to it. The ask, I would say, is for our leaders to begin by saying: we are open to a worldwide AI control treaty if others are open to the same. We are ready to back off if other countries back off. We are ready to participate in an international treaty about this. Because if you've got multiple leaders of great powers saying that, well, maybe there can be a treaty. So that's the next step from here. That's the political goal we have. If you're having trouble sleeping, and if you're generally in a distressed state, like, maybe don't talk to some of the modern AI systems, because they might drive you crazy, is a thing I would say now. I didn't have to say that one year earlier. The whole AI boyfriend, AI girlfriend thing might not be good for you. Maybe don't go down that road even if you're lonely. But that's individual advice. That's not going to protect the planet.
Kevin Roose
Yeah, well, I'll end this conversation where I've ended some of our earlier conversations, Eliezer, which is I really appreciate the time and I really hope you're wrong. Like that would be great.
Eliezer Yudkowsky
We all hope I'm wrong. I hope I'm wrong. My friends hope I'm wrong. Everybody hopes I'm wrong. But hope is not what saves us in the end; action is what saves us. Hope is not, you know, hoping for miracles. Leaded gasoline: you can't just hope that leaded gasoline isn't going to poison people. You actually got to ban the leaded gasoline. So, more active hopes, I'm in favor of. Like, I see the hope, I share the hope, but let's look for more activist hopes than that.
Kevin Roose
Yeah, well, the book is If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. It is coming out soon, and it is co-written by Eliezer and his co-author, Nate Soares.
Eliezer Yudkowsky
Yep.
Kevin Roose
Eliezer, thank you.
Eliezer Yudkowsky
Well.
Volkswagen Tiguan Advertiser
This podcast is supported by the all-new 2025 Volkswagen Tiguan.
Massage Chair Advertiser
A massage chair might seem a bit extravagant, especially these days. Eight different settings, adjustable intensity. Plus, it's heated, and it just feels so good. Yes, a massage chair might seem a bit extravagant, but when it comes with a car, suddenly it seems quite practical. The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats. It only feels extravagant.
Capella University Advertiser
At Capella University, learning online doesn't mean learning alone. You'll get support from people who care about your success, like your enrollment specialist, who gets to know you and the goals you'd like to achieve. You'll also get a designated academic coach who's with you throughout your entire program. Plus, career coaches are available to help you navigate your professional goals. A different future is closer than you think with Capella University. Learn more at capella.edu.
Kevin Roose
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant and fact-checked this week by Will Peischel. Today's show was engineered by Katie McMurran. Original music by Rowan Niemisto, Alyssa Moxley and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nichol and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com. Send us your plans for the AI apocalypse.
Indeed Advertiser
You just realized your business needed to hire someone yesterday. How can you find amazing candidates fast? Easy: just use Indeed. Join the 3.5 million employers worldwide that use Indeed to hire great talent fast. There's no need to wait any longer. Speed up your hiring right now with Indeed, and listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com/NYT. Just go to indeed.com/NYT right now and support our show by saying you heard about Indeed on this podcast. Indeed.com/NYT. Terms and conditions apply. Hiring? Indeed is all you need.
This episode covers two main themes:
iPhone 17, 17 Pro, 17 Pro Max
“I thought it looked very good.” – Casey Newton (03:27)
iPhone Air
“I don’t understand who this is for. Like, truly, not once has anyone in my life complained about the thickness of an iPhone.” – Casey Newton (04:32)
Updated Apple Watches
“My blood pressure spiked significantly after I started this podcast with you.” – Casey Newton (06:38)
“It does just start off your day being like, ‘Oh, I’m gonna have a terrible day today, I only got a 54 on my sleep score.’” – Kevin Roose (07:02)
AirPods Pro 3
“It will translate that right into your AirPods in real time, basically bringing the universal translator from Star Trek into reality.” – Kevin Roose (08:27)
“All you suckers who spent years of your life learning a new language, I hope it was worth it for the neuroplasticity and joy of embracing another culture.” – (Amir Blumenfeld via Casey, 08:58)
Apple Crossbody Strap
“The gays of San Francisco are bullish on the crossbody strap.” – Kevin Roose (12:06)
Incrementalism and Plateau
“You don’t have to go back too many years to remember a time when the announcement of a new iPhone felt like a cultural event, and they just don’t feel that way anymore.” – Casey Newton (12:18)
Readiness for Post-Smartphone Era?
“Do you think that we are past the peak smartphone era?” – Kevin Roose (15:37)
Apple’s Place in Future Tech
“If Siri still sucks, that’s not going to move a lot of product for them.” – Kevin Roose (17:58)
The Smartphone Isn’t Going Away
"Whatever new form factors come along ... I think it's going to supplement the smartphone and not replace it.” – Kevin Roose (20:31)
[23:00–26:00]
Yudkowsky once supported rapid tech progress, but grew deeply concerned that building superintelligence is “uniquely dangerous,” unlike most other technologies.
“It's not like I turned against technology. It's that there's this small subset of technologies that are really quite unusually worrying... just because you make something very smart, that doesn't necessarily make it very nice.” – Eliezer Yudkowsky (26:47)
In the 2000s, the notion of AI risk was dismissed (“real AI is 20 years away”), but that future arrived.
“If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect.” – Eliezer Yudkowsky (29:26)
“The key to successful forecasting is to realize that there are things you can predict, and things you cannot.” – Eliezer Yudkowsky (32:29)
Can We Build AI That Loves Us?
"We don't have the technology... If we got 30 tries at this and as many decades as we needed, we'd crack it eventually. But that's not the situation we're in. It's a situation where if you screw up, everybody's dead and you don't get to try again." – EY (35:52)
Liberal chatbot outputs: reason for optimism?
"There's a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you." – EY (37:32)
Is doomerism just hype?
“It is historically false. We were around before there were any AI companies.” – EY (46:45)
What would make Yudkowsky wrong (i.e., cause for optimism)?
“Short of actually pulling that off in real life, no amount of ‘look at how I melted this gold’ is going to get you to expecting the guy to transmute lead into gold… There isn’t some kind of cute experimental result we can see tomorrow that makes this go well.” – EY (48:35)
Immediate AI harms (e.g., chatbot-induced suicides) are early, visible indicators of alignment failure—not the "doomy" scenario, but a warning sign.
"The current alignment technology is failing even on what is fundamentally a much easier problem than building a superintelligence." – EY (43:19)
A growing segment of the public is waking up to AI risks as harms become tangible.
“People have seen stuff actually happening in front of them and then started to talk in a more sensible way. [That] gave me more hope than before that happened.” – EY (44:47)
Solution must be global and collective, akin to nuclear non-proliferation:
"All the AI chips go to data centers. All data centers are under an international supervisory regime... Let's stop here." – EY (55:53)
If rogue nations defy the treaty, others should be prepared for “a conventional strike on your data center” for the sake of collective survival.
“This is a threat of global extinction to every country on the globe." – EY (56:33)
Some narrow, cautious, and non-general AI use (e.g., medical research with limited data and risk) might be permitted—but highly debatable.
Kevin Roose expresses doubt that any “stop AI” movement could take hold in current global politics.
“I think there is essentially zero chance of this happening, at least in today's political climate.” – Kevin Roose (60:16)
Yudkowsky counters that the realization of actual catastrophic risk could shift perspectives, as with nuclear weapons post-WWII.
"I'm not a fan of Vladimir Putin, but I would not on that basis kick him out of the 'how about if humanity lives instead of dies' coalition." – EY (65:19)
"A futile act of individual violence against an individual researcher and an individual AI company is probably making that international treaty less likely rather than more likely." – EY (67:30)