
Hosted by the Future of Life Institute

Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency. Anthony argues for purpose-built AI tools under meaningful human control, with liability, access limits, external guardrails, and international cooperation.

LINKS:
A Better Path for AI
What You Can Do

CHAPTERS:
(00:00) Episode Preview
(01:03) Attention, attachment, automation
(13:58) Superintelligence power race
(26:39) Escaping replacement dynamics
(40:15) Pro-human tool AI
(53:30) Guardrails and verification
(01:03:24) Defining pro-human AI
(01:10:37) Agents and accountability
(01:17:28) International AI cooperation
(01:25:28) Rethinking AI alignment
(01:32:43) Optimism and action

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help. We also discuss private oversight, state versus federal rules, and the risk of concentrating power in companies or government.

LINKS:
Radical Optionality website
Charlie Bullock

CHAPTERS:
(00:00) Episode Preview
(01:04) The pacing problem
(06:18) Defining radical optionality
(11:03) Assumptions under uncertainty
(16:00) Industry convenience concerns
(20:41) Political will realities
(26:48) Private governance limits
(30:28) Government misuse risks
(36:29) Balancing institutional power
(42:25) Transparency and reporting
(49:35) Evaluations, security, talent
(58:26) State law preemption
(01:04:20) Historical nuclear analogies

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Peter Wildeford is Head of Policy at the AI Policy Network and a top AI forecaster. He joins the podcast to discuss how to forecast AI progress and what current trends imply for the economy and national security. Peter argues AI is neither a bubble nor a normal technology, and we examine benchmark trends, adoption lags, unemployment and productivity effects, and the rise of cyber capabilities. We also cover robotics, export controls, prediction markets, and when AI may surpass human forecasters.

LINKS:
Peter Wildeford Blog

CHAPTERS:
(00:00) Episode Preview
(01:12) AI bubble debate
(06:25) Normal technology question
(15:31) Mythos security implications
(30:47) Robotics and labor
(40:27) Social economic response
(48:57) Forecasting methodology
(59:49) AGI policy timelines
(01:11:13) Forecasting with AI

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Carina Prunkl is a researcher at Inria. She joins the podcast to discuss how to assess the capabilities and risks of general-purpose AI. We examine why systems can solve hard coding and math problems yet still fail at simple tasks, why pre-deployment tests often miss real-world behavior, and how faster capability gains can increase misuse risks. The conversation also covers de-skilling, red teaming, layered safeguards, and warning signs that AIs might undermine oversight.

LINKS:
Carina Prunkl personal website

CHAPTERS:
(00:00) Episode Preview
(01:04) Introducing the report
(02:10) Jagged frontier capabilities
(05:29) Formal reasoning progress
(12:36) Risks and evaluation science
(19:00) Funding evaluation capacity
(24:03) Autonomy and de-skilling
(31:32) Authenticity and AI companions
(41:00) Defense in depth methods
(48:34) Loss of control risks
(53:16) Where to read report

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Li-Lian Ang is a team member at Blue Dot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine Blue Dot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress.

LINKS:
Li-Lian Ang personal site
Blue Dot Impact organization site

CHAPTERS:
(00:00) Episode Preview
(00:48) Blue dot beginnings
(03:04) Evolving AI risk concerns
(06:20) AI agents in cyber
(15:52) Gradual disempowerment and jobs
(23:26) Aligning AI with humans
(29:08) Power concentration and misuse
(34:52) Influencing frontier AI labs
(43:05) Uncertain timelines and strategy
(50:18) Writing, AI, and action

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute. She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech claims that superintelligence will cure cancer, explaining why biology’s complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery, clinical trials, and cutting unnecessary medical bureaucracy.

You can read the full essay at: curecancer.ai

CHAPTERS:
(00:00) Episode Preview
(01:10) Introduction and essay motivation
(06:30) Intelligence vs data bottlenecks
(19:03) Cancer's complexity and heterogeneity
(29:05) Measurement, health, and homeostasis
(41:41) AI in drug development
(50:13) Regulation, FDA, and innovation
(01:02:58) Practical paths toward cures

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Tech executives have promised that AI will cure cancer. The reality is more complicated, and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.

You can read the full essay at: curecancer.ai

CHAPTERS:
(00:00) Essay Preview
(00:54) How AI Can, and Can't, Cure Cancer
(17:05) Reckoning with Past Failures
(35:23) Misguiding Myths and Errors
(59:15) AI Solutions Derive from First Principles or Data
(01:31:31) Systemic Bottlenecks & Misalignments
(02:08:46) Conclusion
(02:14:35) The Roadmap Forward

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.

LINKS:
AI Psychological Harms Research Coalition
Zak Stein official website

CHAPTERS:
(00:00) Episode Preview
(00:56) Education to existential risk
(03:03) Lessons from social media
(08:41) Attachment systems and AI
(18:42) AI companions and attachment
(27:23) Anthropomorphism and user disempowerment
(36:06) Cognitive atrophy and tools
(45:54) Children, toys, and attachment
(57:38) AI psychosis and selfhood
(01:10:31) Cognitive security and parenting
(01:26:15) Education, collapse, and speciation
(01:36:40) Preserving humanity and values

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

Andrea Miotti is the founder and CEO of the nonprofit Control AI. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.

LINKS:
Control AI
Control AI global action page
ControlAI's lawmaker contact tools
Open roles at ControlAI
ControlAI's theory of change

CHAPTERS:
(00:00) Episode Preview
(00:52) Extinction risk and lobbying
(08:59) Progress toward superintelligence
(16:26) Building political awareness
(24:27) Global regulation strategy
(33:06) Race dynamics and public
(42:36) Vision and key safeguards
(51:18) Recursive self-improvement controls
(58:13) Power concentration and action

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP