#41 – If the US put fewer people in prison, would crime go up? Not at all, according to Open Philanthropy’s renowned researcher David Roodman.

With 698 inmates per 100,000 citizens, the U.S. is the world’s leader in incarcerating people. But what effect does this actually have on crime?

According to David Roodman, Senior Advisor to Open Philanthropy, the marginal effect is zero.

This stunning rebuke to the American criminal justice system comes from the man Holden Karnofsky called “the gold standard for in-depth quantitative research”. His other investigations include the risk of geomagnetic storms, whether deworming improves health and test scores, and the development impacts of microfinance – all of which we also cover in this episode.

In his comprehensive review of the evidence, David splits the effects of incarceration on crime into three categories: before, during, and after prison.

Does having tougher sentences deter people from committing crime? After reviewing studies on gun laws and ‘three strikes’ in California, David concluded that the effect of deterrence is zero.

Does imprisoning more people reduce crime by incapacitating potential offenders? Here he says yes, noting that crimes like motor vehicle theft have gone up in a way that seems pretty clearly connected with recent Californian criminal justice reforms (though the effect on violent crime is far lower).

Finally, do the after-effects of prison make you more or less likely to commit future crimes?

This one is more complicated.

His literature review suggested that more time in prison makes people substantially more likely to commit future crimes once released. But concerned that he was biased towards a comfortable position against incarceration, David ran a cost-benefit analysis using both his favoured reading of the evidence and the devil's advocate view: that there is deterrence and that the after-effects are beneficial.

For the devil's advocate position David used the highest assessment of the harm caused by crime, which suggests a year of prison prevents about $92,000 in crime. But weighed against a lost year of liberty, valued at $50,000, plus the cost of operating prisons, the costs and benefits came out essentially the same.

So even using the least favourable cost-benefit valuation of the least favourable reading of the evidence, incarceration just breaks even.
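For concreteness, here is a minimal sketch of that devil's-advocate arithmetic in Python. The $92,000 and $50,000 figures come from the discussion above; the annual operating cost per prisoner is a hypothetical placeholder chosen to show how the sums roughly cancel, not a number from David's report.

```python
# Devil's-advocate comparison for one additional prison-year.
# The operating cost below is an illustrative assumption, not David's figure.

crime_prevented = 92_000   # highest estimate of crime harm averted per prison-year (from above)
lost_liberty = 50_000      # value placed on a lost year of liberty (from above)
operating_cost = 42_000    # assumed annual cost of keeping someone in prison (hypothetical)

benefit = crime_prevented
cost = lost_liberty + operating_cost

print(f"Benefit of a marginal prison-year: ${benefit:,}")
print(f"Cost of a marginal prison-year:    ${cost:,}")
print(f"Net value:                         ${benefit - cost:,}")  # roughly zero: it just breaks even
```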

The case for incarceration weakens further when you consider the significant amount of crime that occurs within prisons, which tends to be de-emphasised because of a lack of data and, perhaps, a lack of compassion for inmates.

In today’s episode we discuss how to conduct such impactful research, and how to proceed having reached strong conclusions.

We also cover:

  • How do you become a world-class researcher? What character traits are important?
  • Are academics aware when they're following perverse incentives?
  • What’s involved in data replication? How often do papers replicate?
  • The politics of large orgs vs. small orgs
  • How do you decide what questions to research?
  • How concerned should a researcher be with their own biases?
  • Geomagnetic storms as a potential cause area
  • How much does David rely on interviews with experts?
  • The effects of deworming on child health and test scores
  • Is research getting more reliable? Should we have ‘data vigilantes’?
  • What are David’s critiques of effective altruism?
  • What are the pros and cons of starting your career in the think tank world? Do people generally have a high impact?
  • How do we improve coordination across groups, given our evolutionary history?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Highlights

I have never taken a class in economics or statistics, but replicating existing work is actually a wonderful way to learn this stuff. It's kind of like a scaffolding, and so when I did it for the first time, when I was at the Center for Global Development, which is a think tank in Washington that focuses on what … we used to call Third World development, I was working under Bill Easterly, who's now a pretty famous critic of foreign aid, among other things.

And he had me replicate what was then a very influential study on the impact of foreign aid on economic growth in the countries that receive it. And he used pretty elementary methods, I now understand, but they were totally new to me. But to have the paper on my left and the textbook on my right and the computer in the middle was a wonderful way to step-by-step learn the methods, and it’s been that way for me, throughout.

The majority of the studies that I looked at that are set in the modern American context, putting extra weight on the ones that I could actually replicate and came out believing, say that the aftereffects are harmful. So yes, you get a short-term benefit when you put more people in prison, reducing crime. But in the long run that seems to backfire, actually increasing crime when people get out.

So as a very rough estimate, I would say we're at a margin in the United States today where … by the way, we have huge numbers of people in prison; per population, we're the highest in the world, except possibly North Korea. We're at a margin where incarceration is not affecting crime. The marginal effect is zero.

Probably a lot of your listeners are familiar with at least the rough idea of the central limit theorem in statistics. This is a really key result. It says that, for example, if you were to conduct the same presidential poll at the same moment in time, maybe a thousand times, you would get a slightly different answer on each run of the poll, but your answers would cluster around the true value, and they would do so in a pattern that follows a bell curve. That's also called the normal curve.

And that's true regardless of the actual underlying distribution of views in the world. In almost every case we can imagine, you get a bell curve when you repeatedly sample. And that's a really powerful result, because it means you can start to construct confidence intervals while remaining ignorant of the underlying distributions of the things you're studying.
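As a quick illustration of that idea (not anything from David's own work), the sketch below simulates re-running the same 1,000-person poll many times on a decidedly non-normal population of yes/no answers; the poll averages still pile up into a bell curve around the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

true_support = 0.52     # underlying share of the population supporting the candidate
poll_size = 1_000       # respondents per poll
n_polls = 1_000         # how many times we "re-run" the same poll

# Each poll: sample 1,000 yes/no answers and record the observed support share.
poll_results = rng.binomial(n=poll_size, p=true_support, size=n_polls) / poll_size

print(f"Mean of poll estimates: {poll_results.mean():.3f}")   # clusters around 0.52
print(f"Std. dev. of estimates: {poll_results.std():.4f}")    # close to sqrt(p*(1-p)/n), about 0.016

# A crude text histogram: the counts trace out a bell shape around the true value.
counts, edges = np.histogram(poll_results, bins=15)
for count, left in zip(counts, edges[:-1]):
    print(f"{left:.3f} {'#' * (count // 5)}")
```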

We can do something similar when we're looking at extreme events. We don't know what the true statistical distribution of geomagnetic storms is; some people have argued it's kind of a power law, or something else. But it turns out that when you look at the tail of a distribution, the way it gradually comes down towards zero and flattens out, most tails look the same. That is to say, they fall within a single family of distributions, called the generalized Pareto family. They vary in whether they actually hit zero or not, and in how fast they decay towards zero, but they look broadly alike regardless of what the rest of the distribution looks like.

So what you can do is take a data set like all geomagnetic disturbances since 1957, then look at the, say, 300 biggest ones, the right tail of the distribution, and ask which member of the generalized Pareto family fits that data best. Then, once you've got a curve that you know for theoretical reasons is a good choice, you can extrapolate it farther to the right and ask, "What does a million-year storm look like?"
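Here is a rough sketch of that peaks-over-threshold procedure using scipy's generalized Pareto distribution. The "storm" data below is synthetic stand-in data, and the 60-year record length and 300-event cutoff are only loosely modelled on the description above, so treat this as an illustration of the method rather than a reproduction of David's analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for a geomagnetic disturbance index: synthetic heavy-tailed data,
# not the real 1957-onward record David analysed.
disturbances = rng.pareto(a=3.0, size=20_000) * 100

# Peaks-over-threshold: keep only the ~300 largest events and model their
# excesses over the threshold with the generalized Pareto family.
threshold = np.sort(disturbances)[-300]
excesses = disturbances[disturbances > threshold] - threshold

shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

# Extrapolate to a rare event: with ~300 exceedances over an assumed ~60 years
# of data, a "million-year storm" corresponds to a tiny tail probability.
years_of_data = 60
exceedances_per_year = len(excesses) / years_of_data
p = 1 / (1_000_000 * exceedances_per_year)          # tail probability among exceedances
million_year_level = threshold + stats.genpareto.ppf(1 - p, shape, loc=0, scale=scale)

print(f"Fitted GPD shape: {shape:.3f}, scale: {scale:.1f}")
print(f"Extrapolated 'million-year' storm level: {million_year_level:.0f}")
```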

I think the definition of weak evidence is that your priors matter, all right? If the evidence were compelling, it almost wouldn’t matter what we thought before we came into the experiment, and we’re not in that situation.

But suppose we draw some bell curves representing our general understanding of the impact of deworming. For the worm study in Western Kenya, that bell curve would be to the right of zero, with a little bit of the tail to the left of zero, which would mean it's probably got a positive impact. Then we could combine that with the other studies, which are producing bell curves centered around zero. Then we might bring in our own prior, based on what we know about the benefits of childhood interventions generally, from other research that's not about deworming, and fuse all of that together. We might get an overall estimate, represented by a bell curve with some spread to represent our uncertainty, which, who knows, might have 20% or 30% of its weight to the left of zero, depending on how you do it. There's a zillion ways to do it.

And so we would say, our best central estimate is that this is doing good, but we are not hyper-confident of it. And that’s an uncomfortable position to be in, but if we’re true expectation maximizers, if we’re being rational about this, then we should still favor the intervention.
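One simple way to make "fusing the bell curves" concrete is inverse-variance weighting of normal estimates, the same machinery as a basic fixed-effect meta-analysis or a Bayesian update with a normal prior. The effect sizes below are invented purely for illustration; they are not the actual deworming estimates.

```python
import math

# Hypothetical effect estimates (mean, standard error), in arbitrary effect-size units.
# These are illustrative numbers, not the real deworming evidence.
estimates = [
    (0.20, 0.12),   # a "Western Kenya"-style study: clearly right of zero
    (0.00, 0.10),   # another study: centred on zero
    (-0.05, 0.12),  # another study: centred near zero
    (0.05, 0.20),   # a broad prior from childhood interventions generally
]

# Precision-weighted (inverse-variance) combination: the standard way to fuse
# independent normal estimates into one overall bell curve.
precisions = [1 / se**2 for _, se in estimates]
combined_mean = sum(m * w for (m, _), w in zip(estimates, precisions)) / sum(precisions)
combined_se = math.sqrt(1 / sum(precisions))

# Probability the combined bell curve puts to the left of zero.
z = combined_mean / combined_se
prob_below_zero = 0.5 * math.erfc(z / math.sqrt(2))

print(f"Combined estimate: {combined_mean:.3f} +/- {combined_se:.3f}")
print(f"Weight to the left of zero: {prob_below_zero:.0%}")
```

With these made-up inputs, the combined curve ends up with roughly a quarter of its weight below zero, the kind of uncomfortable-but-still-positive position described above.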

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.