Let’s say you’re planning to buy a new laptop — well, how do you choose that laptop?
You’re probably not going to pick randomly. And you’re probably not going to choose the prettiest one either.
I’m guessing that you’ll put a bit of research into it. And that’s just common sense.
You’ll likely cross-reference a couple of different sources and try to find a laptop that’s endorsed by a few people you respect. Or maybe you’ll check a review site like Wirecutter to see what the reviewers consider the ‘best deal.’
You also might not even be married to the idea of getting a laptop at all: if what you really need is a way to do your work, maybe you should get a desktop and just use your phone when on the move.
At the end of the process, you’d hope to get the outcome you really wanted, without having spent too much time figuring it out.
But when it comes to doing good, most people don’t instinctively apply the same rigorous and practical mindset they do in other parts of their life. We’re more likely to volunteer our time at a place that’s easy to get to, give money to whichever charity knocks on our door, or focus on an issue just because it grabbed our attention when we were young.
To people in the effective altruism community, that seems like a pretty significant mistake.
If you’re someone who cares, you might spend many hours over the course of your life trying to make the world a better place, or even choose the direction of your whole career with that goal in mind. So shouldn’t you spend at least a laptop’s worth of time and effort finding out the best way to do it?
There’s actually much more reason to think about whether your actions are really improving the world than about which laptop to buy, for a few reasons:
First, truly bad laptop manufacturers are driven out of the market by competition and regulation, because people can tell whether a laptop works. So almost any laptop you buy will at least be decent.
But there’s no similar process weeding out misguided ways of improving the world, so there’s no real floor to how bad an opportunity to help can be, except perhaps for things that do very obvious and significant harm.
Many people who are trying to do good don’t realise that what they’re doing isn’t working, mostly because impact is so hard to measure. You might think that if you do some research and choose a better charity to give to, it might achieve 50% more, or perhaps twice as much, in the same way that a great laptop is better than a bad one, but not radically so.
But after spending a lot of time investigating and comparing lots of ideas about how we can improve the world, we actually think some approaches are 1,000% better, or 10,000% better than others.
Unlike with a laptop, there is effectively no ceiling on how good an opportunity to improve the world can be, just as there is no floor on how useless or counterproductive it can be. So where you donate makes a much bigger difference than which laptop you buy.
And also unlike buying a laptop, figuring out the career that will allow you to do the most good is an intensely personal decision, which depends on many things about you specifically. That makes it even more important to think carefully about your options, because you can’t just take a generic recommendation off the shelf.
All right, let’s say you donate money every year to a cancer research charity.
The effective altruist style of thinking would ask questions like:
- First off, is there a different cancer research project that is more likely to succeed, or more limited by its access to funding? That might be hard to figure out, but we can try to check by looking at the outcomes of the charity and what it actually spends its money on.
- But secondly, on top of narrow questions like that, effective altruism encourages us to zoom out and ask what we’re really trying to achieve. Are you giving to the cancer research charity because you want to extend lives? If so, there might be a different project that’s more likely to extend people’s lives for longer. And if that’s what you really care about, why not fund that instead?
- Third, are there diseases other than this particular cancer that place an equally large or larger burden on health, which aren’t already saturated by funders pursuing all the good opportunities, and which might be more easily curable with the right research? (Unless you first chose especially well, the answer to that one is probably yes.)
- And fourth, zooming out further, could you actually extend lives or reduce suffering more by focusing on something besides health? What focus actually helps you reduce suffering the most with your limited resources?
These questions are hard to answer, especially as an individual.
But at its heart, the effective altruism community is a bunch of people all trying their best to answer questions like these, and ultimately the question, “How can we do the most good?” Collectively we’ve made substantial progress, finding especially promising opportunities for people who want to help more people or animals, and to help them in a bigger way.
And if everyone who wanted to do good could switch into these kinds of opportunities, we could probably achieve many times as much as we do now.
Before we go on, let’s dispel a few common misconceptions about effective altruism.
You might have read that EA is just about fighting poverty using the results of randomised controlled trials, or something like that, but that’s just one answer that some people have suggested to the question, “How can we do the most good?”
Others, like me and my colleagues at 80,000 Hours, think doing the most good requires figuring out ways to make the very long-term future go well, such as reducing global catastrophic risks from engineered pandemics or nuclear war.
A 2019 survey of people involved in effective altruism found that 22% thought global poverty should be a top priority, 16% thought the same of climate change, and 11% said so of risks from advanced artificial intelligence. So a wide range of views on which causes are most pressing is represented in the group.
You might also have the impression that effective altruism is mostly about donating money, and we used donating to charity as an example above. But again, that’s just one answer that some people have reached to the question, “How can I do the most good?”
At 80,000 Hours, we focus on ways that you can use your career to do the most good, and for that, donating is just one option among many.
That same survey of people involved in the effective altruism community found that 38% planned to have an impact through donations, with the rest planning to have an impact directly through their work, in research, government, business, and many other paths.
You might think that effective altruism is too apolitical and ignores bigger-picture changes we could make to society. But that’s simply not true, either in theory or practice.
We’re trying to answer the question “How can we do the most good?”, and that will naturally often involve talking about politics and large-scale changes to society.
Many folks, including me and most of my friends, choose to engage in politics and think a lot about policy questions, while others decide to focus their efforts on other areas.
A key part of the effective altruist mindset is ‘cause neutrality,’ which means being intellectually open to the possibility that any focus or approach might improve the world the most. If trying to improve the world in some systemic way is the course of action that will do the most good, then that’s what we ought to do, at least if it’s also a good fit for our personal situations.
We try to use evidence and reason to guide our views here, but we’re well aware that it’s not an exact science.
We’re always just trying to make our best guesses better — and a key part of effective altruism is that we accept that we might be wrong about almost anything.
We focus on shaping the world in a way that will be good for future generations. But maybe we should be focusing exclusively on people alive today. Or perhaps we should focus on the plight of animals suffering in factory farms.
And we currently think that safely guiding the development of artificial intelligence could represent a great opportunity to make the world a better place. But maybe the sceptics are right, and we’re just wasting our time.
People in the EA community aspire to avoid dogmatism and enjoy actively debating things. If you think someone’s really misguided about something — and can convincingly back up your view — you can expect a lot of people in this community to gladly change their mind. (Or at least, that’s what we strive for — of course we’re human and can get attached to our ideas.)
But we genuinely just want to do what’s best for the world, so if we’re wrong about anything — even if it’s the thing we’ve been dedicating our lives to — we should want to know.
Finally, though effective altruism is largely about asking questions, we also need to do something with the answers we come up with. That’s why many people in the effective altruism community focus on implementing best-guess solutions, while others continue the research project.
Both threads are important and can be a valuable focus for donations, side projects, or — of course — your career.
Learn more about effective altruism
Listen to our 10-part podcast series
This article was taken from the introduction to our podcast series on effective altruism. The 10 episodes cover many of the biggest ideas and projects going on right now in the community.
Get involved in the community
Once you’ve learned a bit more, consider meeting other people who are interested in using careful reasoning to do good as effectively as possible. You can make connections, gain and share insights, and see where you might be able to contribute.
See our community page