I am actually quite skeptical of most of the stories that people tell about why an intervention worked in one place and why it didn’t work in another place. Because I think a lot of those stories are constructed after the fact, and they’re just stories that I don’t think are very credible. But that said, I don’t want to say that we can learn nothing. I would just say that it’s very, very hard to learn things. But, what’s the alternative?
If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?
Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development – including 15,024 estimates from 635 papers across 20 types of intervention – to help answer this question.
Her finding: not confident at all.
The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if all existing studies of an education program find that it improves test scores by 0.5 standard deviations, the next result is about as likely to be negative or greater than 1 standard deviation as it is to fall between 0 and 1 standard deviations.
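To make that "almost 100%" figure concrete, here is a minimal sketch. It assumes, purely for illustration, that new results scatter normally around the prior average of 0.5 standard deviations with a spread of 0.75; this is a toy assumption chosen to match the headline numbers, not Vivalt's actual model or data.

```python
# Toy illustration: if the typical (median) deviation of a new result from the
# prior average is ~100% of that average, then a new result is about as likely
# to land outside the 0-1 SD band as inside it.
import numpy as np

rng = np.random.default_rng(0)
prior_average = 0.5   # prior studies find ~0.5 SD improvement in test scores
spread = 0.75         # assumed scatter of new results (illustrative only)

new_results = rng.normal(loc=prior_average, scale=spread, size=100_000)

# "Typical" deviation: median absolute difference from the prior average,
# expressed as a share of that average.
typical_deviation = np.median(np.abs(new_results - prior_average)) / prior_average
inside_band = np.mean((new_results >= 0) & (new_results <= 1))

print(f"typical deviation is roughly {typical_deviation:.0%} of the prior average")
print(f"share of new results between 0 and 1 SD is roughly {inside_band:.0%}")
```

Under these toy assumptions the typical deviation comes out near 100% and only about half of new results fall between 0 and 1 standard deviations, which is the pattern described above.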
She also observed that results from smaller studies conducted by NGOs – often pilot studies – tended to look promising. But when governments implemented scaled-up versions of those programs, performance dropped considerably.
For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting.
Is ‘evidence-based development’ writing a cheque its methodology can’t cash?
Should we invest more in collecting evidence to try to get reliable results?
Or, as some critics say, is interest in impact evaluation distracting us from more important issues, like national economic reforms that can’t be tested in randomised controlled trials?
We discuss these questions as well as Eva’s other research, including Y Combinator’s basic income study, on which she is a principal investigator.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
- What is the YC basic income study looking at, and what motivates it?
- How do we get people to accept clean meat?
- How much can we generalize from impact evaluations?
- How much can we generalize from studies in development economics?
- Should we be running more or fewer studies?
- Do most social programs work or not?
- The academic incentives around data aggregation
- How much can impact evaluations inform policy decisions?
- How often do people change their minds?
- Do policy makers update too much or too little in the real world?
- How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
- How often should we believe positive results?
- What’s the state of development economics?
- Eva’s thoughts on our article on social interventions
- How much can we really learn from being empirical?
- How much should we really value RCTs?
- Is an Economics PhD overrated or underrated?
The 80,000 Hours podcast is produced by Keiran Harris.