
Effective Altruism: An Introduction is a collection of ten top episodes of The 80,000 Hours Podcast specifically selected to help listeners quickly get up to speed on the school of thought known as effective altruism. Here the host of the show, Rob Wiblin, introduces the series.
Hi listeners, I'm Rob Wiblin, Head of Research at 80,000 Hours. I'm the regular host of the 80,000 Hours Podcast, and my producer Kieran and I thought it would be useful to collect ten episodes of our show on this separate feed as a way of bringing you up to speed on what you need to know about how people in the effective altruism community think and some of the conclusions that they've reached. We're confident that if you get through all ten of these, you'll be in a great position to engage more with these ideas and figure out whether this is a group that you'd like to be a part of.

This feed currently has one companion feed, called Effective Altruism: Ten Global Problems, which you can also search for and subscribe to in your podcasting app. While this set of ten covers how to think like an effective altruist and set global priorities that will allow you to have the most impact, that feed covers ten problems that the effective altruism community is working to solve, including global poverty, pandemics, threats from new technologies, the suffering of animals in factory farms, and several more. It's reasonable to listen to that one at the same time as this series, or afterwards, depending on whether you want high-level ideas or more concrete examples of what we're actually up to. If you've listened to the intro on Effective Altruism: Ten Global Problems already, then you can skip to the first episode now, as you'll have heard the rest of this introduction before.

If you're totally new to effective altruism, here is a preview of what it is all about. Let's say that you're planning to buy a new laptop. How would you go about choosing that laptop? You're probably not going to pick randomly, and you're probably not just going to go around and choose the prettiest one either. I would guess that you would put in a bit of research, and that is really just common sense.
You would likely cross-reference a couple of different sources, trying to find a laptop that's endorsed by, say, someone that you respect. Or maybe you'd go to a review website like the Wirecutter to find out what professional reviewers consider to be the best deal. You might not even be married to the idea of getting a laptop at all: if the underlying thing that you wanted was to get your work done, maybe you should get a cheaper desktop instead and just use your phone when you're on the road. Either way, at the end of this whole process, you would hope to have found a way to get the outcome that you really wanted without spending too much time figuring it all out.

But when it comes to doing good, most people don't instinctively apply the same kind of rigorous and practical mindset that they naturally do in so many other parts of their lives. We're more likely to volunteer our time at a place that happens to be easy to get to, give money to whichever charity happens to knock on our door, or focus on a problem just because it grabbed our attention when we were young and impressionable. To people in the effective altruism community, that kind of thing seems like a pretty significant mistake.

If you're someone who cares about the world and making it a better place, you might well spend a lot of hours over the course of your life trying to do that, or even decide the whole direction of your entire career with the goal of trying to make the world better. So shouldn't you spend at least a laptop's worth of time and effort in finding out the best way to do that?

There's actually more reason to think about whether your actions are really improving the world than there is to think about which laptop is best to buy. And that's for a couple of different reasons. First off, truly bad laptop manufacturers tend to get driven out of the market by competition and regulation, because people can tell whether laptops work or not.
So probably any laptop you chose would at least be decent. But there's no similar process that prevents people from adopting misguided ways of improving the world. So there is no real floor to how bad an opportunity to help other people can be, except perhaps being something that does very obvious and significant harm. And many people who are trying to do good don't realize that what they're doing isn't in fact working, mostly because it's just so hard to measure your real impact.

You might think that if you do some research and choose a better charity to give to, it might achieve 50% more, or perhaps twice as much, as the one that you were choosing earlier, the same way that if you choose a great laptop, it'll be better than a bad laptop, but not radically different. But after spending a lot of time investigating and comparing a lot of different ideas about how we could improve the world, we actually think that some approaches are 1,000% better, or possibly even 10,000% better than others. Unlike with a laptop, there is effectively no ceiling on how good an opportunity to improve the world can be, as well as no floor on how useless or counterproductive it might be. So being thoughtful about what you're choosing to do makes a much bigger difference.

Unlike buying a laptop, figuring out the career that will allow you to do the most good is an intensely personal decision, one that depends on many things about you. If anything, that makes it even more important to think really carefully about all your options, because you can't just take a generic recommendation off the shelf.

Alright, let's say that you donate money every year to a cancer research charity. The effective altruist style of thinking would lead you to ask questions like: first off, is there a different cancer research project that is more likely to succeed, or perhaps more limited by its access to funding?
That might be hard to figure out, but we can try to check by, say, looking at the outcomes of the charity and what it actually spends its money on. Secondly, on top of narrow questions like that, effective altruism encourages us to zoom out and ask: what are we fundamentally trying to achieve here? Are we giving to the cancer research charity because we want to extend people's lives? If so, there might be a different project that you could fund that's more likely to extend people's lives for even longer, and if that's what you really care about, why not fund that other project instead? Third, are there diseases other than this particular cancer that place an equally large or larger burden on people's health, which aren't already saturated by funders pursuing all the good opportunities, and which might be more easily curable with the right research? Unless you chose especially well or got lucky, the answer to that might well be yes. And fourth, zooming out a bunch further, could you actually extend lives or reduce suffering more by focusing on something besides health? What focus actually helps you to reduce suffering the most with your limited resources?

All of these questions, at different levels of analysis and specificity, are hard to answer, especially as an individual. But at its heart, the effective altruism community is a bunch of people all trying their best to answer questions like these, and ultimately to address the question: how can we do the most good? Collectively, we've made substantial progress finding especially promising opportunities for people who want to help more people or animals, and to help them in a bigger way. And if everyone who wanted to do good switched into these kinds of opportunities, and many others that we'll find in future, we could probably achieve many times as much as we are right now.

Before we go on, let's dispel a few common misconceptions about effective altruism.
You might have read that EA is just about fighting poverty using the results of randomized controlled trials, or something like that. But that's just one answer that some people have suggested to the question of how we can do the most good. Others, like me and my colleagues at 80,000 Hours, think that doing the most good requires figuring out ways to make the very long-term future go well, such as reducing global catastrophic risks from engineered pandemics or preventing a nuclear war. A 2019 survey of people involved in the effective altruism community found that 22% thought that global poverty and health should be a global priority, 16% thought the same of climate change, and 11% said so of risks from advanced artificial intelligence. So a wide range of views on which causes are most pressing are represented in the group.

You might also have the impression that effective altruism is mostly about donating money, and we did use donating to charity as an example above. But again, that's just one answer that some people have reached to the question: how can we do the most good? At 80,000 Hours, we focus on ways that you can use your career to do the most good in general, and for that, donating is just one option among many that you might choose between. That same survey of people involved in the effective altruism community found that 38% planned to have an impact through donations, with the rest planning to have an impact directly through their work in research, government and business, among many others.

You might think that effective altruism is too apolitical, and perhaps that it ignores bigger-picture changes that we could make to society. But that is simply not true, either in theory or in practice. Again, we're trying to answer the question: how can we do the most good? And that will naturally often involve talking about politics and large-scale changes to society.
Many folks, including me and most of my friends, choose to engage in politics and think a lot about policy questions, while others decide to focus their efforts on other areas. A key part of the effective altruist mindset is something called cause neutrality, which means being intellectually open to the possibility that any focus or any approach might improve the world the most. If trying to improve the world in some systematic way is the course of action that will do the most good, then that's what we ought to do, at least if it's also a good fit for our personal situations.

Let's talk a bit about confidence and humility. We try to use evidence and reason to guide our views here, but we are well aware that this is not an exact science. We are always just trying to make our best guesses better. And a key part of effective altruism is that we accept that we might be wrong about almost anything. At 80,000 Hours, we in particular focus on shaping the world in a way that will be good for future generations. But maybe we should focus exclusively on people who are alive today. Or perhaps we should focus on the plight of animals suffering in factory farms. And we currently think that safely guiding the development of artificial intelligence could represent a great opportunity to make the world a better place. But maybe the skeptics about that are right and we are just wasting our time.

People in the effective altruism community aspire to avoid dogmatism and to enjoy actively debating things. If you think that someone's really misguided about something and you can convincingly back up your view, you can expect a lot of people in the community to gladly change their mind along with you. Or at least that is what we strive for. Of course, we are all human and can get attached to our ideas, but we genuinely just want what is best for the world.
So if we're wrong about anything, even if it's the things that we've been dedicating our lives to, we should want to know about it. All right, hopefully that gives you enough context to dive into these episodes with enthusiasm. And if you want to learn more about any of the topics discussed, be sure to check out the show notes for each episode; there's always a full transcript and lots of links to learn more about what we've talked about. Thanks for listening. Kieran and I hope that you enjoy the ride ahead.
Episode: Effective altruism in a nutshell
Host: Rob Wiblin
Date: April 12, 2021
This introductory episode, hosted by Rob Wiblin, Head of Research at 80,000 Hours, gives listeners a "nutshell" guide to effective altruism (EA). The purpose is to familiarize newcomers with the EA mindset: using reason and evidence to find out how to help others as much as possible, and actually acting on those insights. Wiblin draws analogies to everyday decision-making, challenges common approaches to doing good, and dispels frequent misconceptions about the movement.
Decision-Making Analogy:
Choosing how to do good should be approached with at least as much care as buying a laptop—most people already strive to get value when making personal purchases, but don’t apply this thinking to altruistic efforts.
Quote:
"There's actually more reason to think about whether your actions are really improving the world than there is to think about which laptop is best to buy."
— Rob Wiblin (03:43)
Market Correction in Charity vs. Commerce:
Laptops that don’t work are quickly weeded out; not so with charitable interventions, which makes careful analysis even more essential.
Magnitude of Difference:
Some approaches to doing good may be "1,000% or possibly even 10,000% better than others" (05:04), highlighting just how important prioritization is.
Iterative Zoom-Out in Analysis:
Starting from a specific choice, such as a cancer research charity, EA thinking asks progressively broader questions: Is a different project in the same area more likely to succeed or more funding-constrained? What outcome are we fundamentally trying to achieve? Do other diseases impose a larger, more neglected burden? Could something besides health reduce suffering even more?
Core EA Question:
"How can we do the most good?"
— Rob Wiblin (08:11)
Community and Progress:
The EA community pools its analysis to identify uniquely impactful opportunities—for humans, for animals, and for the long-term future.
EA ≠ Just Global Poverty or Randomized Controlled Trials:
EA encompasses work on existential risks, animal welfare, and far-future considerations—not just current global health and poverty.
“A 2019 survey…found that 22% thought that global poverty and health should be a global priority, 16% thought the same of climate change, and 11% said so of risks from advanced artificial intelligence.”
— Rob Wiblin (11:03)
EA ≠ Just About Donations:
Many EAs focus their impact through their careers, policymaking, research, or advocacy, not just charitable giving.
“38% planned to have an impact through donations, with the rest planning to have an impact directly through their work in research, government and business, among many others.”
— Rob Wiblin (12:32)
EA is Not Apolitical:
Cause neutrality means being open to the best ways to do good, which often includes activism and policy engagement.
Cause Neutrality:
EAs are encouraged to remain open-minded about what cause or method is actually best, guided by evidence and personal fit.
"If trying to improve the world in some systematic way is the course of action that will do the most good, then that's what we ought to do, at least if it's also a good fit for our personal situations."
— Rob Wiblin (14:12)
Confidence & Humility:
The community strives to avoid dogma, welcomes debate, and encourages changing minds in light of new evidence.
"People in the effective altruism community aspire to avoid dogmatism and to enjoy actively debating things. If you think that someone's really misguided about something and you can convincingly back up your view, you can expect a lot of people in the community to gladly change their mind along with you."
— Rob Wiblin (15:29)
Personal Reflection:
If EAs are fundamentally wrong about their cause or prioritization, they want to know and to course-correct.
"If we're wrong about anything, even if it's the things that we've been dedicating our lives to, we should want to know about it."
— Rob Wiblin (16:09)
On the Need for Rigorous Altruism:
"Shouldn't you spend at least a laptop's worth of time and effort in finding out the best way to [do good]?"
— Rob Wiblin (02:28)
On Impact Differences:
"Some approaches are 1,000% better, or possibly even 10,000% better than others."
— Rob Wiblin (05:04)
On the Role of Evidence & Reason:
"We try to use evidence and reason to guide our views here, but we are well aware that this is not an exact science."
— Rob Wiblin (14:50)
Listeners are encouraged to continue exploring both high-level principles and concrete cause areas throughout the rest of the selected episodes and to actively engage with the ideas—an approach that mirrors EA’s own openness and self-scrutiny. Show notes and further resources are recommended for deeper dives.