Transcript
A (0:00)
Hi listeners, Zershaaneh here. Before we begin, I've actually got some great news for people who enjoy listening to narrated articles like this one. We've got literally hundreds more of them on our brand new 80,000 Hours Narrations feed, where you can find advice like how not to lose your job to AI, or how to make a difference in any career, as well as overviews of pressing problems like factory farming and the moral status of digital minds. And even better than that, lots of them are 20 to 30 minutes long, which I think is pretty perfect if what you're looking for is to absorb some big ideas fast. If you're wondering where you can find this wonderful new feed, just search for "80,000 Hours Narrations" on your podcasting app, and remember to subscribe. Okay, that's it. I hope you enjoy the article.

Using AI to Enhance Societal Decision Making: an article for the 80,000 Hours website, written by Zershaaneh Qureshi, published in September 2025 and read by the author in November 2025.

Summary: The arrival of AGI could compress a century of progress into a decade, forcing humanity to make decisions with higher stakes than we've ever seen before, and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll out these decision making tools quickly enough, humanity could be far better equipped to navigate the critical period ahead. We'd be excited to see more people trying to speed up the development and adoption of these tools. We think that for the right person, this path could be very impactful. That said, this is not a mature area. There's significant uncertainty about what work will actually be most useful, and getting involved has potential downside risks.
So our guess is that, at this stage, it would be great if something like a few hundred particularly thoughtful and entrepreneurial people worked on using AI to improve societal decision making. And if the field proves promising, they could pave the way for more people to get involved later.

Why Advancing AI Decision Making Tools Might Matter a Lot

Humans often make big mistakes. Our institutions ignored climate scientists for decades. We responded ineffectively to early COVID-19 warnings, and have rushed into countless wars that all parties later regretted. It's striking how far our actual decisions can fall short of what, in hindsight, looks obviously necessary. But why does this keep happening? Sometimes we misunderstand the facts or fail to predict challenges ahead of us. Other times, we know there's a problem, but we just don't take sufficient action, or we can't coordinate on a response. We're now rapidly developing advanced AI systems that could transform every aspect of society, and this means good decision making could become even more critical. Soon we could be dealing with a whole new population of extremely capable agents, potentially with different goals and interests to humans. There could be a totally reshaped labor market where it's AI systems, rather than humans, that drive most economic progress. AIs could even be developing new advanced technologies, including weapons, much faster than we can study their risks. And there could be societal and geopolitical tensions over who controls or receives the benefits of advanced AI, possibly escalating into serious conflict. On top of all this, advanced AI systems could also produce ideas and economic outputs much faster than humans, potentially compressing a century's worth of progress into a decade. As Will MacAskill and Fin Moorhouse argue in their article Preparing for the Intelligence Explosion, this would mean decisions that once played out over years might need to be made in a matter of months.
So the chance of missteps is high. And as we've argued elsewhere, the stakes could be existential. If we want to navigate this period well, we'll need to think more clearly, act more wisely, and coordinate more effectively than ever before. And that's a tall order. Next, I'll explain why we think AI tools could help us make much better decisions, and why we might have an opportunity to speed up the rollout of these tools.

Let's start with the first claim: AI tools could help us make much better decisions.

The development of advanced AI could both make decision making more challenging and raise the stakes of humanity's future decisions. But AI is not a monolith. And perhaps counterintuitively, we think certain AI tools could actually be part of the solution. AI systems are capable of things humans simply aren't. They can absorb far more information, process it at vastly higher speeds, and improve their performance by practicing the same task millions of times. They've already beaten the best humans at strategy games like Go, and they're now also performing impressively on complex reasoning and problem solving tasks. And if you've ever used deep research tools from AI companies like OpenAI and Google DeepMind, you've seen how current models can process huge amounts of information into coherent conclusions much faster than even the greatest human minds. Given this, we think we're within reach of having AI tools that can seriously improve human decision making, and some may even be buildable with today's technology. Two kinds of AI tools seem especially promising. The first category is epistemic tools, which help us understand what's true and what's likely to happen. For example, AI fact checkers may be more reliable and impartial evaluators of information than humans are. After all, society currently struggles to converge on matters of fact. Consider how often political disagreements come down to a dispute over facts, or how easily misinformation spreads online.
We'll need to get a lot better at this to navigate the disruption from future advanced AI systems. Also in this category are AI forecasting systems, which could help institutions make better predictions about world events and model the effects of different policies. And more speculatively, AI tools for moral progress could help us reason through complex ethical questions and potentially come to more agreement as a society. The second category is coordination tools, which help groups work together and make better collective decisions, even if they have competing interests. For example, AI negotiation tools could find mutually beneficial agreements that might otherwise be missed, perhaps by rapidly simulating thousands of hours of negotiation and testing out a vast number of agreements before making a proposal. AI enabled verification systems could reliably and impartially monitor compliance with agreements, overcoming the trust barriers that often prevent groups from cooperating. And structured transparency tools could enable tightly controlled information sharing, allowing parties to detect specific threats from each other, like whether someone is building dangerous weapons, without the broader privacy costs of ordinary surveillance. There's also ongoing research in the field of cooperative AI, exploring more ways to use AI for improved coordination. For more on that, you can check out Open Problems in Cooperative AI by Allan Dafoe and other authors. We think these applications in epistemics and in coordination target some of the most common failures of human decision making. We often get led astray by false information, incorrectly predict how things will unfold, or fail to prevent outcomes no one wanted just because we can't cooperate. Another virtue of these applications is that they seem to be more useful for enabling good outcomes than bad ones.
Overall, as a general rule of thumb, it seems empowering people to better understand the world and coordinate with each other is usually good for humanity, at least under the assumption that people are usually well intentioned. Of course, this assumption doesn't always hold, and we do think there's some risk of people deliberately using these AI tools to cause harm, a possibility we address later on.

Let's move on to our second claim now: we might be able to differentially speed up the rollout of AI decision making tools.

Right now, only a handful of projects are building the kinds of AI tools we described earlier. This is a drop in the ocean compared to the billions that are being invested into developing broadly capable AI agents. Plus, there's often a lag between society having the ability to build a product and it actually being built and successfully rolled out. Consider COVID-19 vaccines: although the underlying mRNA technology for these vaccines was proven in the mid-2000s, they didn't actually arrive until late 2020, almost a year into the pandemic. This points to an opportunity: we might be able to accelerate the development and adoption of AI decision making tools, which would mean getting their benefits faster. And even a small speed up could be consequential. For example, getting sophisticated verification tools just a few months earlier could mean critical safety commitments get nailed down before we develop dangerous AI systems, instead of arriving too late to make a difference. What we're pointing to here is one form of differential technology development: basically, influencing the order in which different technologies emerge in order to make the world safer. In this case, the idea is to speed up the development of certain safety promoting AI capabilities so they're available before we have to contend with other, riskier AI capabilities.
And because we've seen so few projects in this direction so far, there's still lots of low hanging fruit to pick. We describe some work we think could be useful later on.

What Are the Arguments Against Working to Advance AI Decision Making Tools?

Having said all this, there are some objections we think people should really consider when deciding whether to work in this area. Here are three of them.

Objection 1: Won't these technologies be developed by default anyway?

Huge AI companies are racing to develop models that excel at all kinds of complex reasoning, and they're making rapid progress. Meanwhile, there are growing market incentives to build AI products for specific commercially valuable tasks, which might include some of the applications we talked about earlier. So AI decision making tools might get developed anyway by people trying to make money, which means this might not be a good use of time for people who want to do good with their careers. It seems like you could just wait for others to develop these tools and do something else with your time. This argument does seem right to some extent. As a general rule, focusing on something that's already commercially incentivized will probably reduce the counterfactual impact of your work. But we think there are ways you could still make a meaningful difference here, especially if you focus on gaps in the market when deciding what projects to pursue. So first, your work could still help society achieve the benefits of these tools sooner than they would otherwise have arrived. You might speed things up directly, for example, by successfully building a specific tool before anyone else gets there. And if your work does get overtaken by another project, it could still have compounding effects that speed up the arrival of future tools.
For instance, if it attracts more investment or builds relevant knowledge, your project could enable others to achieve a certain milestone faster, which could in turn bring forward the next milestone, and so on. And as we've said, even a small speed up could make a big difference here. Although we think frontier models will eventually excel at tasks in epistemics and coordination, simply waiting for good decision making tools to get rolled out could mean getting them once AGI has already arrived, and by then it might be too late to use them to avoid a catastrophe. Second, you might be able to focus on products that are less incentivized by the market. For example, while advanced AI forecasting tools might get built by default for profitable uses like financial trading, there's much less commercial pressure to develop AI systems that are good at predicting other things, or to create sophisticated tools for reasoning about ethics.

Objection 2: Wouldn't this make dangerous AI capabilities arrive faster, when we should be slowing things down?

By accelerating progress on these tools, you might also increase knowledge, hype, and investment in AI R&D. More broadly, this could bring about AGI sooner, giving us less time to prepare for its risks. Your work could also enhance certain dangerous capabilities. For example, we think AI systems that excel at planning pose risks of disempowering humans, and developing systems that are great at forecasting might dangerously boost AI planning capabilities. We've explored these concerns elsewhere on our website, and there's a lot to say on the subject, but in this context, it's worth bearing a few things in mind. Firstly, although projects in this area might contribute to AI hype to some degree, these effects will probably be insignificant compared to the billions of dollars already being invested into building AGI. By contrast, you could have an outsized impact on humanity's ability to make wise decisions.
Secondly, you might be able to, and should probably try to, target lower risk applications that don't directly feed the development of dangerous capabilities. For example, AI fact checking tools seem much safer to build than tools leveraging strategic planning or persuasion. And thirdly, if these tools seriously improve our ability to navigate the world's biggest challenges, some speed up in the arrival of dangerous AI capabilities could still be worth it. Overall, there's also a role to play for interventions that slow down progress on dangerous technologies, whether that's through regulations that allow companies to take their time on safety without bearing the costs of unilateral slowdowns, or perhaps even campaigning to pause frontier AI development altogether. But speeding up progress on safety promoting technologies can happen at the same time, and it might also be easier. While slowing down requires agreement from officials or companies, you can just decide to develop a new tool without a consensus, and you'll likely face less pushback, since your strategy won't mean forgoing or delaying the benefits of future AI, or threatening any powerful company's bottom line.

Objection 3: People might use these tools in dangerous ways

Like many technologies, AI tools for epistemics and coordination could be used to cause harm. After all, getting better at understanding the world and coordinating with others typically makes you better at achieving your goals. And since people sometimes have goals that are harmful to others, these tools will sometimes help people do bad things more effectively. For example, groups with access to tools that enhance their negotiation or forecasting abilities could use them to illegitimately gain strategic advantages over those who don't have such tools. In extreme cases, this could even enable a dangerous power grab. But we'd guess that actors with genuinely malicious intentions are just not that common.
Broadly speaking, it seems most harmful decisions don't happen because people really want to cause harm, but because we misunderstand a situation, don't realize the consequences our actions could have, or fail to find a solution that's less costly for everyone involved. These are all defects that AI decision making tools would help us overcome. And as we said earlier, a general common sense rule of thumb here is that empowering humans to understand the world and coordinate better seems to usually be a good thing for humanity. So our overall guess is that AI decision making tools will help us prevent bad outcomes more often than they'll enable them. This is one of the key reasons we're broadly enthusiastic about these tools. But this is a generalization, and it won't hold true for every AI decision making tool you could create. So if you're deciding whether to build or promote a new tool, you should factor in its specific misuse risks and whether it might actually favor harmful uses over beneficial ones. These are difficult questions, so you should get help when trying to answer them. The most extreme risks here, like the chance of enabling a power grab, also highlight the importance of getting AI decision making tools into enough hands. By default, the most powerful actors will have access to better technologies than everyone else. But if we can make decision making tools widely accessible and equip key institutions to use them, we could prevent any single group from gaining dangerous advantages over others. We may need dedicated effort to make this happen, and if you decide to work on this, we encourage you to put in that effort.

So should you work on this? Bottom line: it's complicated, but if you're a good fit, working on this could have a lot of upside. We'd encourage anyone who's interested to investigate whether it might be a good fit for them; for example, you could apply to speak with one of our advisors.
It does seem that some work in this space will end up having very little impact, and some could even have negative effects. You're more likely to avoid the pitfalls if you can prioritize AI decision making projects that are under incentivized by the market, less likely to drive the development of other dangerous AI capabilities, and more robust to misuse. But deciding what projects to pursue on this basis is much easier said than done. And because there aren't many concrete job opportunities here, working in this area may also require a more entrepreneurial approach than you'd need for tackling many other pressing problems. So overall, we don't think we can recommend this work as widely as we recommend working in more mature areas, where the paths to impact are better tested and more clearly mapped out. Still, we think efforts to advance AI decision making tools could be very impactful for the right person. If you're especially good at navigating ambiguity, have an entrepreneurial mindset, and have strong judgment about what projects to prioritize, this could be a great fit. At this stage, we'd be excited to see perhaps a few hundred more people working in this area. If you're interested in being one of those people, we recommend building a network in AI safety and finding people who can help you think through specific project ideas first. It's also worth noting that some researchers, like the authors of AI Tools for Existential Security (which is linked in our article and in the description of this recording), feel more optimistic than we do about having many more people working in this area, so it's possible we're underrating it. In any case, we also recommend keeping up to date with the evolving landscape of AGI challenges and being ready to pivot if other needs become more pressing.

How to Work in This Area

Here are the top recommendations we've seen for people who want to speed up the development and adoption of AI decision making tools.
The most direct thing you can do is work somewhere that's building the tools themselves. You can find some relevant organizations and research projects that are hiring on the job board featured in this article, but since the field is currently small, you might consider founding your own project instead. Either way, there's lots to do here: not just the core engineering work, but also making demos, getting stakeholders on board, designing user interfaces that are appealing to decision makers, doing market research to tailor products to user needs, and ensuring projects operate efficiently. This means you don't need to be a technical expert to join or found projects of this kind. They also need great operations staff, product managers, and more. There are also other ways you can support these efforts without getting directly involved in building the tools. For example, you could measure and steer these beneficial capabilities by designing benchmarks or evaluations for the AI capabilities that would most help decision making. You could also work on supporting tech and infrastructure. That could mean developing complementary technologies that help remove barriers to adoption, for example by addressing users' privacy or security concerns. It could also mean curating and managing datasets that can be used to train specialized AI decision making tools, for example, data about past mistakes in forecasting or negotiation, or high quality research notes from fields where specialized decision making tools could be very helpful. Or it could involve creating infrastructure, like online databases or directories, to help people share resources and collaborate on projects. Additionally, you could help with implementation by integrating these tools into existing decision making processes at key institutions, including educating stakeholders on how to use them.
If you're not currently able to work on any of this, or you just don't feel it's the best option right now, you can still position yourself to help in future by: founding any non harmful company, especially a technology company, so that you can learn and practice the skills of founding projects; developing expertise in fields where these AI tools could be especially impactful, like forecasting or diplomacy; or joining key institutions, like government agencies or international bodies, that might benefit a lot from AI decision making tools, and staying current with the technologies while you're there so you can help to integrate them later on.

Want one-on-one advice on pursuing this path? If you think this path might be a great option for you, our team might be able to advise you on your next steps. We can help you compare options, make connections, and possibly help you find jobs or funding opportunities.

Learn more: If you're interested in learning more, visit this article on our website, 80000hours.org, and search for "Using AI to Enhance Societal Decision Making." At the bottom of the page, you'll find a list of further reading, including resources we recommend. If you're looking for specific project ideas, you can also find open positions, funding opportunities, and fellowships on our job board, which is featured towards the end of the article.

Thank you for listening. This profile draws extensively from Forethought's article AI Tools for Existential Security. Many thanks to Arden Koehler, Lieske Weintraub, Niel Bowerman, Max Dalton, and Rose Hadshar for input. Please share this article with others who might find it helpful or interesting. Thank you. Using AI to Enhance Societal Decision Making, written by Zershaaneh Qureshi, read by the author in November 2025, and edited by Dominic Armstrong.
