Some causes are better than others
We tend to imagine that organised attempts to make the world a better place are almost always successful, at least to some extent. However, this is simply not the case.
GiveWell surveyed the literature on the effects of social interventions, concluding:
We think that charities can easily fail to have impact, even when they’re doing exactly what they say they are. In fact, our review of academic research has led us to believe that many of the problems charities aim to address are extremely difficult problems that foundations, governments and experts have struggled with for decades. Many well-funded, well-executed, logical programs simply haven’t had the desired results.
David Anderson, assistant director of the Coalition for Evidence-Based Policy, estimates:
(1) The vast majority of social programs and services have not yet been rigorously evaluated, and (2) of those that have been rigorously evaluated, most (perhaps 75% or more), including those backed by expert opinion and less-rigorous studies, turn out to produce small or no effects, and, in some cases negative effects.
Even within areas where interventions do work, the differences in effectiveness are often significant. The Abdul Latif Jameel Poverty Action Lab (J-PAL) is a network of over 100 academics who carry out rigorous impact evaluations of interventions within international development. Within a program area, they often find that the best interventions are more than ten times as effective as others with the same aim, even when excluding entirely ineffective programs.
To take one example, they studied interventions aiming to increase the attendance of teachers in the developing world. They found that half of the interventions studied had no effect whatsoever. Even once those were excluded, the best three were over ten times more effective than the worst intervention.1
Moreover, these differences are hard to predict ahead of time. Most social interventions that end up being evaluated were originally supported by experts and governments, were executed on a wide scale, and were widely thought to work. Yet when tested rigorously, many turned out not to.
What does this mean?
Looking back on several decades of impact evaluations, we can see that good intentions and passion alone aren’t enough. Rather, we need a strategic approach that makes use of data where it is available, and seeks to gather data where it is not. It’s not that we should focus only on already-proven interventions; rather, we should focus on implementing the best interventions the evidence supports, or on finding promising new interventions and testing them out.
Our solution is our framework for assessing causes, which helps you to evaluate which areas to focus on. We also think it’s important to stay flexible about which causes to support, because new information is always coming to light about the most effective interventions.
Focusing on the right cause could boost your impact more than ten times, enabling you to achieve more in a few years than you might normally be able to achieve in a lifetime.2
Notes and references
- http://www.povertyactionlab.org/policy-lessons/education/teacher-attendance-incentives. Accessed: 2014-11-06. (Archived by WebCite® at http://www.webcitation.org/6TsweFekf)
- Whenever people try to quantify the effects of different interventions, they find large differences in effectiveness. Besides J-PAL’s findings, two other major sources are: (i) the Disease Control Priorities Project, which collects cost-effectiveness estimates of different health interventions and finds differences of more than 100 times between the least and most effective ways of improving health; and (ii) the Copenhagen Consensus, which collects cost-benefit analyses of interventions within international development, typically finding that some interventions have negative cost-benefit ratios, some are about neutral, and the ratios among the remainder vary from 1 to 50.
One major problem with these studies is that it’s difficult to fully quantify the impacts of different actions. This gives us reason to think that the true differences in effectiveness between causes, when predicted in advance, may be either larger or smaller than these figures suggest. Overall, the question of how much causes typically differ in effectiveness, all things considered, has not been thoroughly studied, so the answer remains uncertain; nevertheless, we believe there are differences of at least ten times in effectiveness between causes that are plausibly good. This means it is very important to carefully select the cause you’re working on, as doing so could boost your impact by more than ten times. However, in many situations it will be equally or more important to evaluate other factors, such as personal fit.