Podcast Summary: Using AI to Enhance Societal Decision Making
Podcast: 80,000 Hours Podcast
Episode Theme: Exploring how advanced AI can support better societal decision making and whether individuals should consider careers in this emerging field.
Podcast Hosts: Rob Wiblin, Luisa Rodriguez, and the 80,000 Hours team
Article Author and Narrator: Zoshane Qureshi
Date: March 6, 2026
Transcript Reference Period: Article written September 2025, narrated November 2025
Episode Overview
This narrated episode centers on Zoshane Qureshi’s article for 80,000 Hours, examining both the challenges and opportunities of using artificial intelligence (AI) to improve societal decision making, especially as humanity approaches or enters the era of artificial general intelligence (AGI). The episode advocates for accelerating the development of AI tools that help individuals and institutions make better, wiser decisions, and discusses risks, objections, and pathways for impactful work in this area.
Key Discussion Points & Insights
Why Improving Decision Making with AI Matters (03:20)
- Historical Human Failures: Society's struggles with large-scale problems, such as ineffective responses to climate change and pandemics, highlight how imperfect human decision making can be.
- The AGI Acceleration: AGI could “compress a century of progress into a decade” (03:53), intensifying the need for high-quality decisions in a shorter timespan.
- Rising Stakes: With AI reshaping economies, geopolitics, and potentially producing new weapons and technologies much faster than humans can evaluate them, robust decision making becomes critical.
How AI Can Help Make Better Decisions (07:05)
Two Types of Beneficial AI Tools:
- Epistemic Tools (07:18)
- Help determine what is true and what is likely to happen.
- Examples:
- AI Fact Checkers: Could improve reliability and impartiality of information evaluation.
- AI Forecasting Systems: Help predict world events and policy impacts.
- Moral Progress Tools: Assist reasoning through complex ethical issues.
- Coordination Tools (10:01)
- Facilitate cooperation and more effective group outcomes, even with competing interests.
- Examples:
- AI Negotiation Tools: Simulate high-volume negotiations to suggest mutually beneficial solutions.
- AI Verification Systems: Impartially monitor compliance and help overcome trust barriers.
- Structured Transparency Tools: Allow precise threat detection without widespread surveillance.
- General Guideline: “Empowering people to better understand the world and coordinate with each other is usually good for humanity, at least under the assumption that people are usually well intentioned.” (13:21)
The Opportunity for Differential Technology Development (14:00)
- There’s an under-explored opportunity to “accelerate the development and adoption of AI decision making tools” (14:25), influencing which technologies emerge first.
- Quick deployment might mean critical safety tools are available before high-risk AI systems, making even small speed-ups potentially pivotal.
- Analogy: The pandemic’s mRNA vaccines illustrate this lag: the underlying technology existed, but deployment was delayed (15:02).
Major Objections and Risks
Objection 1: These Tools Will Be Built Anyway (17:33)
- Commercial Incentives: AI companies and markets are incentivized to build useful tools already.
- Counterargument: There are impactful gaps (e.g., tools that lack strong commercial incentives) where focused work can make a difference, and accelerating progress by even a few months could matter.
- Notable quote: “Simply waiting for good decision making tools to get rolled out could mean getting them once AGI has already arrived, and by then it might be too late to use them to avoid a catastrophe.” (19:46)
Objection 2: Helping the Wrong Capabilities Arrive Faster (21:10)
- Concern: Advancing these AI tools might incidentally accelerate dangerous AI development, reducing vital preparation time.
- Response:
- Focusing on lower-risk applications (fact-checking vs. high-level planning) reduces danger.
- “Some speed up in the arrival of dangerous AI capabilities could still be worth it” if it brings critical safety tools sooner. (23:52)
- Intervention Options: Both speeding up safety tools and slowing down risky capabilities (through regulation, pausing development, etc.) can and should happen in parallel.
Objection 3: Double-Edged Sword—Misuse Risks (25:30)
- Risk: Tools that support decision making and coordination can be misused by malicious actors for harmful ends or power grabs.
- Mitigation:
- “We’d guess that actors with genuinely malicious intentions are just not that common.” (26:41)
- Making such tools widely accessible and integrating them into key institutions can help neutralize dangerous power imbalances.
- Project-specific misuse risk assessment is crucial.
Should You Work on AI Decision Making Tools? (29:58)
- Complex, Ambiguous Landscape: Caution is warranted; not all projects will be impactful, and some can be harmful.
- Best Fit:
- Entrepreneurial, thoughtful, and resilient individuals who can navigate ambiguity and prioritize wisely.
- “If you’re especially good at navigating ambiguity, have an entrepreneurial mindset, and have strong judgment about what projects to prioritize, this could be a great fit.” (31:06)
- Scale Needed: A few hundred new, particularly capable people could have an outsized impact in this young field.
- Alternative Views: Some researchers are even more optimistic about involvement opportunities.
How To Get Involved: Career Paths & Recommendations (33:02)
- Direct Contribution:
- Work at organizations or research groups directly building AI tools for societal decision making (see job board).
- Found your own project if none exists.
- Skills needed span technical (engineering, data science), product, operations, and stakeholder engagement.
- Supporting Roles:
- Develop benchmarks and evaluations.
- Build tech infrastructure, manage datasets, or create collaborative directories.
- Work on integration, education, and adoption within institutions.
- Preparation for Future Opportunities:
- Founding non-harmful tech companies to gain entrepreneurial skills.
- Building expertise in domains like forecasting or diplomacy.
- Working within key institutions that can later benefit from AI tools.
- Advice and Resources:
- 80,000 Hours’ advisors can provide career guidance (see website).
- “If you’re interested in learning more, visit this article on our website 80,000hours.org; search for ‘using AI to Enhance Societal Decision Making.’” (35:47)
Notable Quotes & Moments
- AGI Stakes:
- “The arrival of AGI could compress a century of progress into a decade, forcing humanity to make decisions with higher stakes than we've ever seen before and with less time to get them right.” (00:42)
- On Misuse vs. Benefit:
- “Empowering humans to understand the world and coordinate better seems to usually be a good thing for humanity.” (27:59)
- On Building Now vs. Waiting:
- “Even a small speed up could make a big difference here… waiting for good decision making tools to get rolled out could mean getting them once AGI has already arrived.” (19:09)
Further Resources & Final Thoughts
- The article draws on work like Forethought’s “AI Tools for Existential Security” and Allan Dafoe’s “Open Problems in Cooperative AI.”
- For curated reading, open positions, and expert consultation, visit 80000hours.org.
- Emphasis on staying up to date: “We recommend keeping up to date with the evolving landscape of AGI challenges and being ready to pivot if other needs become more pressing.” (36:49)
End of summary
