Introduction and summary
We could have all the influence in the world, but if we focus on the wrong opportunity, we’re not going to have much impact.
How can we make sure we work on good opportunities in our careers? At 80,000 Hours, we think it’s really useful for most people to put serious effort into picking a cause to support in their career. By cause, we mean a set of opportunities for making a difference such that the people working on them tend to share common knowledge, skills and core values.
But how can we compare causes in terms of potential for impact? In this post, we present our answer, which we think differs considerably from how people normally choose a cause: by focusing mainly on personal passion. Your degree of passion is important, but it’s just one factor among several others, which we’ll describe in this post.
Our answer takes the form of a framework you can apply to compare causes in terms of the impact you can have with your human or financial capital. Note that what follows is just our current best answer – it’s likely to change, and involves many judgement calls that some people may not agree with.
In a later post, we’ll apply this framework to a selection of causes we think are particularly promising.
In summary, we think you should look for the best overall combination of the following three factors, the names of which we took from GiveWell Labs:
- Important: If we make more progress on this cause, the world will be made a better place. By ‘world made a better place’ we mean that lots of people will be made better off in important ways. Causes can also be important indirectly, because progress on them lets us make progress on other important causes or provides valuable information about which causes are best.
- Tractable: There are definite interventions to make progress within this cause, with strong evidence behind them. For instance, there are definite opportunities for progress, backed by widely accepted theory, randomised controlled trials or a track record of success.
- Uncrowded: If we add more resources to the cause, we can expect more promising interventions to be carried out. Uncrowded causes are often undervalued or neglected by society. There may be a shortage of important actors within the cause.
We think you can assess causes by:
- Assessing these factors and their subfactors by asking experts and gathering other relevant data (e.g. data about how many people are affected by a problem, how many people are working on the cause).
- Drawing on cost-effectiveness and benefit-cost analyses prepared by the Copenhagen Consensus, JPAL and other academic research.
- Using the results of GiveWell Labs, which aims to assess causes from the perspective of a donor (with the caveat that the best areas to lead your career within are likely to be different from the best areas to donate to).
In the rest of this post:

- Further explanation of the factors
- How can you apply this framework to compare two causes?
- Why this framework?
- How did we come up with this framework?
Further explanation of the factors
Some causes are more important than others. Importance isn’t all that matters. A cause can be extremely important, but impossible to do anything about. For instance, it would be extremely good if we could develop a perpetual motion machine to supply free energy forever, but that project isn’t worth working on, because we’re highly confident that it’s impossible. With other causes, a little bit of time and money can go a long way towards solving them.
People seem relatively good at judging which causes are important (climate change, malaria, political instability), but much worse at assessing which causes are most worth acting on.
Some causes aren’t worth acting on because they’re intractable. It would be great if we could make progress, but we’ve got no idea how to do it. Making a perpetual motion machine falls into that category.
Other causes aren’t worth acting on because other people have already taken the best opportunities to make progress, so adding more resources doesn’t cause much more to be done. Pushing a paramedic doing CPR out of the way to perform CPR yourself achieves nothing.
So, we look for the causes with the best overall combination of being important, tractable and uncrowded.
A cause is important if more progress within it will make the world a better place.
Ultimately, we tend to think of making the world a better place as allowing more people to live better lives in the long-term. The precise definition depends on some moral questions that our members disagree about. Many of our members extend ‘people’ to include non-human animals. Some think there can be intrinsic value in the environment. ‘Better lives’ can be taken to mean ‘lives full of pleasure’ or something more complex like ‘lives full of potential for flourishing.’ These differences often don’t matter, and when we think they do, we’ll flag the judgement so you can make up your own mind.
In practice, it’s difficult to compare vastly different causes in terms of number of lives improved, so you can break importance down into several sub-factors (which can all be further divided):
- The immediate impact – how many people are made better off, and by how much, over the next couple of decades as the result of progress on this problem? This is the normal focus of impact evaluation.
- The long-run impact – to what extent does progress generally lead to a more robust society that’s likely to survive and prosper in the long-term?
- Indirect impact – to what extent does progress in this cause lead to progress on other important causes?
- Value of information – to what extent does working on this cause help you and others learn about which causes are the highest priority?
We normally value long-run impact more highly than short-run effects, because we think that whether society prospers into the future is more important than just what happens today.
On the other hand, the short-run impact is much easier to evaluate, may be a good proxy for long-run impact, and is commonly regarded as high priority, so in practice we recommend considering both.
We also recommend placing relatively high weight on the value of information. There’s a huge amount we don’t know about which causes are high priority and learning about this is very valuable (as we’ll explain in an upcoming post).
To evaluate importance, look for:

- If more progress is made, will there be large immediate benefits for lots of people?
- If more progress is made, will it help us make progress on other important causes?
- If more progress is made, will it contribute to building a generally robust and prosperous society?
- Will working on this cause teach us a lot about which causes are best?

A cause is tractable if there are definite, evidence-backed interventions that we expect to make more progress in the cause.

To evaluate tractability, look for:

- Do well-defined interventions exist that you could take?
- Is there academic trial data showing that interventions within this cause work?
- Is there strong theory accepted by experts for why these interventions should work?
- Is there a track record of success in this area, or in analogous areas?

A cause is uncrowded if we can expect that adding more resources to the cause will result in more effective interventions being carried out.

We also call crowdedness ‘room for more resources.’ A cause can be important and tractable, but ineffective because the best interventions are being carried out anyway. Consider malaria nets as an example. At first, there’s an acute need for more nets and the cause is uncrowded. As more and more money is spent on nets, the areas with the highest incidence of malaria become covered. Further nets go to lower-risk populations, and are less effective (the cause is becoming crowded). Eventually, almost everyone has a net, so further investment isn’t useful (highly crowded).

To evaluate crowdedness, look for:

- Does society severely undervalue this cause for invalid reasons? Do lots of the rules of thumb apply?
- Are there definite opportunities for progress that are not being taken, which you could take?

Note that GiveWell has often found that a cause can look uncrowded from the outside based on some simple rules of thumb, but turn out to be much less promising upon deeper investigation. So it’s important to probe deeper, and ask: are there good, definite opportunities within this cause that are not being taken, or are highly short of resources? This is normally most easily done by asking experts or the relevant organisations. It can also be useful to assess how many resources are going into the cause, relative to the size of the opportunity or problem.

Overall benefit-to-cost ratio

Some groups calculate how effective it is overall to invest more resources in specific causes (at least in the short-run). Development, health and welfare economists often prepare estimates of the benefit-to-cost ratio of different policies and interventions. The Copenhagen Consensus asks leading economists to draw on this work to make benefit-cost analyses of various promising solutions to important global problems.

To evaluate the overall benefit-to-cost ratio, look for:

- Are there existing benefit-cost analyses which suggest that interventions within this cause yield a large return on additional resources invested?

We recommend consulting these sources, where they exist, and using them as one piece of evidence in your overall judgement.

You may also find it useful to prepare your own back-of-the-envelope expected value calculation, though make sure you’re aware of the likelihood of omitting important factors or combining them incorrectly (model error).
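As an illustration, a back-of-the-envelope benefit-to-cost estimate can be only a few lines long. Every number below is invented purely for the sake of the example; a real estimate would draw on sources like those mentioned above:

```python
# Back-of-the-envelope benefit-to-cost estimate for a hypothetical
# bed-net campaign. All figures are invented for illustration only.

cost_per_net = 5.0                   # dollars per net (assumed)
nets_per_death_averted = 600         # assumed effectiveness (assumed)
value_per_death_averted = 50_000.0   # assumed dollar value of the benefit

cost_per_death_averted = cost_per_net * nets_per_death_averted
benefit_cost_ratio = value_per_death_averted / cost_per_death_averted

print(f"Cost per death averted: ${cost_per_death_averted:,.0f}")
print(f"Benefit-to-cost ratio: {benefit_cost_ratio:.1f}")
```

Even a sketch this crude makes the key inputs explicit, so you can see exactly which assumptions drive the result – and where model error could creep in.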
How can you apply this framework to compare two causes?
Here’s a process that seems sensible, though we need to do more work applying this framework to real examples to be confident in it.
- Do an initial side-by-side comparison using the checklist above. Does one cause seem to clearly win?
- If not, ask what your biggest uncertainties are, and try to resolve them by speaking to experts, gathering relevant data (e.g. data on how many people are affected by the problem, how many people are working on the cause, and to what extent the cause is prioritised by society) or consulting relevant cause prioritisation resources, like GiveWell, the Copenhagen Consensus, 80,000 Hours, JPAL and others.
- You could also attempt a more detailed calculation of the expected value of additional work. Look for existing calculations within the academic literature and from the Copenhagen Consensus.
Note that GiveWell and the Copenhagen Consensus focus on which causes result in the most social impact if additional money is invested in the cause. If you’re looking to lead a career in the cause, then what matters most is which causes yield the most social impact if more of your human capital is invested in them. We think the two (funding and human capital constraints) are closely related, since many causes just need more resources in general. But we also think the best funding opportunities will generally differ from the best opportunities for more human capital. Moreover, because everyone provides a different type of human capital, the list of best causes for one person could differ from the list of best causes for someone else. This isn’t the case with money, because a dollar in my hand can buy the same things as a dollar in your hand.
Unfortunately, much less work has been done on finding the most pressing talent gaps. This is something we’ll be working on over the coming years, as well as encouraging other groups to investigate.
Why this framework?
Why not just an expected value estimate?
Ultimately, most of our members want to pick the causes that allow them to benefit the most people, in the biggest ways, in the long run. However, we don’t think the best way to do that is to start by making a detailed estimate of the expected value of the best projects within each cause. That’s because there’s very little relevant quantitative data available for comparing the value of adding more resources to different causes. Going through a detailed quantitative analysis would therefore be highly time-consuming and potentially pointless. Moreover, pushing to make a quantified model too early can easily cause you to overlook important considerations that are not easy to quantify, or to spend too much time evaluating intractable considerations. (For more reasons why an overly quantified approach can fail, see this post on GiveWell.) There’s also reason to think it’s better to do head-to-head comparisons than stand-alone estimates of impact. Detailed quantified estimates are also difficult to communicate, which makes it harder for others to check and build upon our work.
We think it’s best to start by doing a qualitative analysis based on the factors above. Then, move on to making quantitative comparisons of each factor. There are often very large differences in importance or cost-effectiveness, and quantification can help you notice these.
After that, if you’re doing a more in-depth analysis, we would recommend that you consider making a quantitative model of the effectiveness of the causes. Use this model to inform your overall judgement, but be highly wary of important factors you’ve missed and other calculation errors (model error).
Why didn’t we include personal fit?
We did include personal fit, in a sense. You should rate each of the factors relative to your own situation. For instance, you should analyse importance relative to your own best moral judgements.
Generally, however, our aim is to make the issue of which cause is most effective relatively independent of who you are. The question of which cause you should work on, however, is different. That requires taking account of what different causes need and what you can contribute. For instance, if you have a lot of money, then the best cause for you to support may be one with great need for more funding. If you have strong scientific skills, the best cause to support may well be a field of scientific research. If you have strong prior knowledge of a cause, that could also be worth taking into account. There are roles for many different types of people within each cause.
One way you could proceed is to make a list of which causes you think are most effective in general, and then ask: given my resources, which of these can I make the largest contribution to? Or you could score each on personal fit, and then consider those that come out top overall.
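As a toy sketch of that second approach, you could assign each cause rough 1–5 scores. Everything here – the causes, the scores, the equal weighting of factors, and the use of fit as a multiplier – is an invented simplification, not our considered view on how to weight them:

```python
# Toy comparison of two hypothetical causes: score each on the three
# factors, then adjust by your personal fit. All scores are invented.

causes = {
    "Cause A": {"important": 4, "tractable": 3, "uncrowded": 5},
    "Cause B": {"important": 5, "tractable": 4, "uncrowded": 2},
}
personal_fit = {"Cause A": 3, "Cause B": 4}  # your own fit, also 1-5

def overall_score(name: str) -> int:
    base = sum(causes[name].values())   # crude equal weighting of factors
    return base * personal_fit[name]    # fit applied as a multiplier

ranked = sorted(causes, key=overall_score, reverse=True)
print(ranked)
```

How heavily to weight each factor is a judgement call; the point of writing the scores down is that it forces you to make that call explicit.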
A word of warning: we think that people usually put too much weight on ‘personal fit’ with a cause. Importance, tractability and crowdedness vary enormously, and we think this often dwarfs the variation in fit, especially if you’re at the start of your career.
We also don’t endorse the overly simplistic, but common, advice to work on the cause you’re most passionate about. Passion and motivation are important, but they still need to be traded off against general importance, tractability and crowdedness. Moreover, we think that people have the capacity to become passionate about many causes: so long as you’re doing meaningful, valuable work, we think you can become passionate about making a big difference in an effective way.
How did we come up with this framework?
We came up with this framework by thinking about what factors matter in the context of talking to around 100 people about the impact of their careers, and examining the cause and intervention selection process of relevant groups (especially Giving What We Can, the CEA strategy team, GiveWell Labs and the Copenhagen Consensus).
We’re relatively confident that it’s useful, but because we haven’t worked on the issues for very long, we anticipate it will change significantly as we learn more. We’d also like to flag that the choice of factors and which sources of evidence to use are in large part a judgement call on our part.
We think it’s useful because:
- It covers the most important factors we currently know, divided in a way that we’ve found relatively easy to work with. In particular, it allows you to isolate the most important and tractable parts of the assessment, so that you can focus your research time.
- It’s very similar to, and in part inspired by, the framework used by GiveWell.
- Assessment based on all these factors provides many partially independent sources of evidence in favour of a cause, which we think is often a better way to make an argument than drawing up a single strong argument, as explained in this post by Jonah Sinick. More generally, this is called a “model combination” or “ensemble learning” approach to inference.
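To make the model combination idea concrete, here is a minimal sketch: rather than trusting a single estimate of a cause’s cost-effectiveness, you average several rough, partially independent ones. The model names and values below are entirely hypothetical:

```python
# Equal-weight model combination: average several rough estimates of a
# cause's cost-effectiveness instead of relying on any single model.
# The models and their values are hypothetical.

estimates = {
    "RCT extrapolation": 20.0,   # e.g. a ratio extrapolated from trial data
    "expert survey": 8.0,        # what practitioners in the field guess
    "theory-based model": 12.0,  # prediction from an explicit causal model
}

combined = sum(estimates.values()) / len(estimates)  # simple ensemble average
print(f"Combined estimate: {combined:.1f}")
```

The equal weighting is the simplest possible ensemble; in practice you might weight models by how much you trust each line of evidence.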
Thank you to Nick Beckstead, Owen Cotton-Barratt, Jonah Sinick, Carl Shulman and Seb Farquhar for comments.