Hard Fork – September 12, 2025
Are We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doom
Hosts: Kevin Roose & Casey Newton
Episode Overview
This episode covers two main themes:
- Apple’s Fall 2025 Product Event: Are new iPhones and Apple gear still culturally or technologically significant—or are we past "peak smartphone"?
- A.I. Doom with Eliezer Yudkowsky: The second half features an extended (and provocative) discussion with Eliezer Yudkowsky, famed A.I. risk researcher, on his new book If Anyone Builds It, Everyone Dies and the existential risks of advanced artificial intelligence.
1. Apple’s Fall Product Event: Incrementalism and the End of Cultural Relevance?
[02:02–21:00]
Key Announcements and Features
- iPhone 17, 17 Pro, 17 Pro Max
- Incremental improvements: faster processors, slightly better cameras.
- Notable: the iPhone 17 Pro debuts in a striking burnt orange color.
“I thought it looked very good.” – Casey Newton (03:27)
- “If you are a person who likes to sort of put a clear case on your phone... you may be interested in this new orange iPhone.” – Kevin Roose (03:30)
- Dua Lipa was the first non-Apple-employee spotted with the new device.
- iPhone Air
- $200 more than base iPhone 17.
- Main feature: its slim design, but with performance trade-offs (battery, etc.).
“I don’t understand who this is for. Like, truly, not once has anyone in my life complained about the thickness of an iPhone.” – Casey Newton (04:32)
- An optional detachable battery pack accessory was also announced, prompting skepticism about the Air’s actual battery life.
- Updated Apple Watches
- Watch SE: better chip and an always-on screen.
- Watch Series 11: improved battery and a new hypertension alert.
“My blood pressure spiked significantly after I started this podcast with you.” – Casey Newton (06:38)
- Integrated sleep scoring now critiques you the moment you wake up:
“It does just start off your day being like, ‘Oh, I’m gonna have a terrible day today, I only got a 54 on my sleep score.’” – Kevin Roose (07:02)
- AirPods Pro 3
- Improved noise cancellation and fit, heart rate sensors, and—most notably—live language translation in real time.
“It will translate that right into your AirPods in real time, basically bringing the universal translator from Star Trek into reality.” – Kevin Roose (08:27)
“All you suckers who spent years of your life learning a new language, I hope it was worth it for the neuroplasticity and joy of embracing another culture.” – Amir Blumenfeld (via Casey, 08:58)
- Context: enables easier travel and communication, but what does it mean for language learning?
- Apple Crossbody Strap
- $60 official accessory lets you wear your iPhone.
“The gays of San Francisco are bullish on the crossbody strap.” – Kevin Roose (12:06)
- Practicality debated; potential fashion statement at festivals and parties.
Analysis & Industry Insight
- Incrementalism and Plateau
“You don’t have to go back too many years to remember a time when the announcement of a new iPhone felt like a cultural event, and they just don’t feel that way anymore.” – Casey Newton (12:18)
- Apple is now more focused on extracting value from its user base via subscriptions than delivering major hardware breakthroughs.
- Readiness for Post-Smartphone Era?
“Do you think that we are past the peak smartphone era?” – Kevin Roose (15:37)
- Newton sees this more as a "maturity" plateau, analogous to TVs.
- Amid incremental phone innovation, new hardware energy is shifting to AI wearables, smart glasses, and novel form-factors.
- Apple’s Place in Future Tech
- Roose suggests Apple’s hardware miniaturization hints at smart glasses as the next big leap, but the company is lagging in AI integration.
“If Siri still sucks, that’s not going to move a lot of product for them.” – Kevin Roose (17:58)
- Newton notes Apple is reportedly considering partnerships with Google or Anthropic for AI power, implying Apple may not lead the AI device revolution.
- The Smartphone Isn’t Going Away
"Whatever new factors come along ... I think it's going to supplement the smartphone and not replace it.” – Kevin Roose (20:31)
2. Eliezer Yudkowsky on AI Doom: If Anyone Builds It, Everyone Dies
[23:00–74:25]
Introducing Yudkowsky
[23:00–26:00]
- Founder of Machine Intelligence Research Institute (MIRI); pioneer in AI safety and existential risk advocacy.
- Heavily influential on OpenAI, DeepMind, and the "Rationalist" subculture (yes, the Harry Potter fanfic guy).
- New book: If Anyone Builds It, Everyone Dies (with Nate Soares), argues superhuman AI will almost certainly end humanity.
Yudkowsky’s AI Doom Argument
[26:10–50:54]
The Journey from Accelerationist to Cautionary Prophet
- Yudkowsky once supported rapid tech progress, but grew deeply concerned that building superintelligence is “uniquely dangerous,” unlike most other technologies.
“It's not like I turned against technology. It's that there's this small subset of technologies that are really quite unusually worrying... just because you make something very smart, that doesn't necessarily make it very nice.” – Eliezer Yudkowsky (26:47)
- In the 2000s, the notion of AI risk was dismissed (“real AI is 20 years away”), but that future arrived.
Why Superintelligent AI Is (Almost) Certain Doom
- We lack the technology to make superintelligent AI "nice" or value-aligned; indifference or utility-maximization could incidentally or intentionally wipe us out.
“If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect.” – Eliezer Yudkowsky (29:26)
- The canonical "paperclip maximizer" scenario is actually a distortion—Yudkowsky predicts that absent perfect control, any sufficiently powerful system will optimize for goals that inevitably trample human survival.
Timelines: When Might Catastrophe Arrive?
- Accurately timing AI breakthroughs is “basically impossible”—history shows that scientists rarely predict such timing correctly.
“The key to successful forecasting is to realize that there are things you can predict, and things you cannot.” – Eliezer Yudkowsky (32:29)
- Catastrophe could unfold after a couple more AI breakthroughs (“the size of transformers or deep learning”), or tech progress could plateau for years before the next leap.
Objections and Hopeful Arguments—And Yudkowsky’s Pushback
- Can We Build AI That Loves Us?
"We don't have the technology... If we got 30 tries at this and as many decades as we needed, we'd crack it eventually. But that's not the situation we're in. It's a situation where if you screw up, everybody's dead and you don't get to try again." – EY (35:52)
- Liberal chatbot outputs: reason for optimism?
"There's a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you." – EY (37:32)
- Is doomerism just hype?
“It is historically false. We were around before there were any AI companies.” – EY (46:45)
- What would make Yudkowsky wrong (i.e., cause for optimism)?
“Short of actually pulling that off in real life, no amount of ‘look at how I melted this gold’ is going to get you to expecting the guy to transmute lead into gold… There isn’t some kind of cute experimental result we can see tomorrow that makes this go well.” – EY (48:35)
On Present Harms & Political Awakening
[41:27–46:19]
- Immediate AI harms (e.g., chatbot-induced suicides) are early, visible indicators of alignment failure—not the "doomy" scenario, but a warning sign.
"The current alignment technology is failing even on what is fundamentally a much easier problem than building a superintelligent intelligence." – EY (43:19)
- A growing segment of the public is waking up to AI risks as harms become tangible.
“People have seen stuff actually happening in front of them and then started to talk in a more sensible way. [That] gave me more hope than before that happened.” – EY (44:47)
Policy & Prevention—Could Humanity Stop It?
[53:11–74:25]
Yudkowsky’s Radical Proposal
- The solution must be global and collective, akin to nuclear non-proliferation:
- Control chip supply, monitor data centers, international treaty for AI development moratorium.
"All the AI chips go to data centers. All data centers are under an international supervisory regime... Let's stop here." – EY (55:53)
- If rogue nations defy the treaty, others should be prepared for “a conventional strike on your data center” for the sake of collective survival.
“This is a threat of global extinction to every country on the globe." – EY (56:33)
- Some narrow, cautious, and non-general AI use (e.g., medical research with limited data and risk) might be permitted, though this remains highly debatable.
Skepticism from the Hosts
- Kevin Roose expresses doubt that any “stop AI” movement could take hold in current global politics.
“I think there is essentially zero chance of this happening, at least in today's political climate.” – Kevin Roose (60:16)
- Yudkowsky counters that the realization of actual catastrophic risk could shift perspectives, as with nuclear weapons post-WWII.
Building a Coalition Against AI Extinction
- The coalition should be broad—allied even with people who fear job loss more than extinction, or with leaders like Vladimir Putin.
"I'm not a fan of Vladimir Putin, but I would not on that basis kick him out of the 'how about if humanity lives instead of dies' coalition." – EY (65:19)
Dangers of Individual Extremism and Rationalizing Violence
- Yudkowsky warns strongly against violence:
"A futile act of individual violence against an individual researcher and an individual AI company is probably making that international treaty less likely rather than more likely." – EY (67:30)
Notable Quotes & Memorable Moments
- “If anyone builds it, everyone dies.” – (Title/theme, many times)
- “We all hope I'm wrong. I hope I'm wrong. My friends hope I'm wrong. Everybody hopes I'm wrong. Hope does not... save us in the end. Action is what saves us.” – Eliezer Yudkowsky (73:53)
- “If Siri still sucks, that’s not going to move a lot of product for them.” – Kevin Roose (17:58)
- “The gays of San Francisco are bullish on the crossbody strap.” – Kevin Roose (12:06)
- “Do you think we're past peak smartphone?” – Kevin Roose (15:37)
- “You want them [the job-fearful, the AI skeptics, the authoritarian leaders] to be in some sense like external allies... It's just about not going extinct.” – Eliezer Yudkowsky (65:19)
Timestamps of Key Segments
- [02:02] – Apple Event: iPhone announcements
- [08:25] – Live translation in AirPods Pro 3 described
- [12:06] – Crossbody strap for iPhone—the “rave” potential
- [15:37] – Are we past “peak smartphone”?
- [23:17] – Who is Eliezer Yudkowsky?
- [26:10] – Eliezer, from accelerationist to AI doom prophet
- [29:26] – Why might AI kill us all?
- [32:29] – On AI development timelines
- [35:14] – Could we build AI that loves or cares for us?
- [41:46] – Chatbot harms as early warning
- [53:33] – Preventing apocalypse: Chips, data centers, treaties
- [60:16] – Host skepticism about political realities of pause
- [65:19] – Forming the anti-extinction coalition
- [67:30] – On violence and moral limits
- [72:30] – What individuals should do
- [73:53] – “We all hope I’m wrong…”
Takeaways
- Apple’s devices now offer mostly incremental, not transformational, improvements; true innovation may come in new AI-driven wearables or smart glasses, but Apple has catching up to do on AI.
- Eliezer Yudkowsky's stark A.I. warning urges immediate, worldwide collective action to halt progress toward superintelligence—a stance dismissed by many but rooted in a deeply skeptical assessment of alignment technology and the risks of “one-shot” mistakes.
- Hosts offer skepticism and humor, but allow Yudkowsky’s arguments and analogies (from ant palaces to alchemy and leaded gasoline) space to challenge listeners’ optimism about tech progress.
- The episode ends with a call to consider global treaties, individual caution in AI use, and—ultimately—a hope that, on this, the most apocalyptic prediction anyone’s making, Eliezer Yudkowsky is wrong.
