Quantification – Part 2 – The Dangers

(Part 1 is here)

Somalia is in crisis. For decades it has been racked by civil war, famine, and political violence. Members of 80,000 Hours who want to help the people of Somalia will most likely explore various ways they can help and assess them quantitatively. Is it obvious that quantitative methods provide the correct tools to deal with a crisis like this? Or can quantification instead limit the kinds of interventions we consider, blinding us to significant long-term solutions?

Rationality does not tell us what to value. But once we have decided what our ultimate goals are, it can tell us which paths to those goals will be most effective. The problem is that the goal we have in mind is often rather fuzzy and intuitive. The methods of analysis we use to decide on a course of action do not always work as expected, and this can produce unwanted results.

We can break goals down by how easily they are converted into quantitative terms:

  • Inherently quantitative goals: These are goals explicitly of the form ‘maximise numerical quantity X’. Examples might be ‘get more money’ or ‘get as many people as possible to join my organisation’.
  • Optionally quantifiable goals: These are goals that can be measured only approximately, with difficulty, or with some loss of completeness. For instance, ‘improve health’ can be replaced by the numerical goal ‘increase years of healthy life, or more precisely QALYs’, but this will sometimes fail to match how we intuitively regard health.
  • Non-quantifiable goals: These are goals that, in practice or in principle, cannot be reduced to numerical terms. For instance, ‘produce the greatest work of art’ might be assumed to be unmeasurable in principle. ‘Promote human rights’, by contrast, might be a large collection of ideas, each measurable in principle but collectively incommensurable (how many censored books are worth one use of torture?).

Goals of the first kind are well understood, and the method for pursuing them is simple: do research, find out which actions increase the quantity you are trying to increase, and do those things. But this method does not obviously give the best advice for the other two classes of goals. The question, then, is when we should use quantitative methods and when we should rely on an intuitive or qualitative approach.

Non-quantifiable goals are certainly not dealt with well by scientific or rational study. Detailed, evidence-based tutorials exist for narrow technical skills like accurate drawing, but there is little on producing great art in general. This seems unlikely to change in the short term: without a clear way of judging success, most analytical methods struggle. (1)

The most interesting category is the optionally quantifiable goals. Here we have some way to measure approximately how well we are doing, but we understand that the measure is not exact. There may be situations that we measure as better but that are in fact further from what we want. Several biases can make a quantitative approach less effective than anticipated.

The question for someone whose goal is to help Somalia is then how to approach the task to the greatest benefit. We might seek to increase QALYs, to increase some objective measure of stability, or to increase Somali GDP. Whichever measure we choose, there are dangers we might run into.

The Dangers of Quantification

Goodhart’s/Campbell’s law. Using any measure as the basis of policy often destroys that measure’s ability to record the thing we are interested in. Goodhart’s original observation was that the correlations that justified monetarism no longer held once the market changed its behaviour in response to a monetarist government. In general, we can expect very large interventions to change the world in ways that measures devised beforehand will not completely capture. When intervening in Somalia we might include promoting one party over another in our measure of success; the support itself could then corrupt the favoured party and produce negative consequences.
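As a toy illustration of one flavour of this effect (the setup and numbers here are invented, not Goodhart’s): suppose each candidate policy has a true value and a noisy proxy score that correlates with it. Select the policy with the best proxy score, and the proxy will systematically overstate the true value of whatever you chose:

```python
import random

random.seed(0)

# Each candidate policy has a true value; the proxy is that value plus noise,
# so before any selection pressure the two are well correlated.
candidates = []
for _ in range(1000):
    true_value = random.gauss(0, 1)
    proxy = true_value + random.gauss(0, 1)
    candidates.append((proxy, true_value))

# Base our "policy" on the measure: pick the candidate with the best proxy.
best_proxy, true_of_best = max(candidates)

print(f"proxy score of the chosen policy: {best_proxy:.2f}")
print(f"its true value:                   {true_of_best:.2f}")
# The chosen policy's proxy score is reliably higher than its true value:
# selecting hard on the measure is exactly what breaks the measure.
```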

The McNamara fallacy. By measuring one aspect of a problem, we tend to forget the importance of every other aspect. According to legend, Soviet planners once rewarded factories based on the number of nails produced; in response, factories made millions of tiny nails of no use to anyone. The planners then rewarded the weight of metal used instead, resulting in a handful of giant nails. The story is funny but the point is serious: if we measure our good in Somalia by QALYs (years of healthy life), then we are likely to conclude that the government bankrupting itself to pay for food to relieve a famine would be a good thing. That is not obviously how we would actually want things to pan out.
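A back-of-the-envelope version of the nail story, with made-up numbers (none of these figures come from the anecdote itself):

```python
# Invented numbers: the factory has 1000 kg of metal, and its limited labour
# means it can make at most 100 nails of whatever size it chooses.
METAL_KG = 1000.0

def count_reward(nail_kg: float) -> int:
    """Planner rewards the NUMBER of nails: tiny nails win."""
    return int(METAL_KG / nail_kg)

def weight_reward(nail_kg: float, effort_budget: int = 100) -> float:
    """Planner rewards the WEIGHT of nails: with limited labour, giant nails win."""
    nails = min(effort_budget, int(METAL_KG / nail_kg))
    return nails * nail_kg

print(count_reward(0.001))   # 1,000,000 pin-sized, useless nails
print(weight_reward(10.0))   # 1000 kg of metal in 100 giant nails
print(weight_reward(0.01))   # only 1 kg scored for 100 genuinely useful nails
# Neither reward mentions usefulness, so neither optimum produces useful nails.
```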

The “I need more data” fallacy. It is very hard to get enough information to reach a definitive conclusion, and quantitative methods make this lack very obvious. It is much easier to be convinced by a qualitative argument than to be satisfied that a statistical analysis is actually complete. For this reason, someone who analyses the world quantitatively can acquire a bias towards improving their error bars rather than acting. If you think two cures will each save about a thousand QALYs per year, to within 10 percent, you might want to spend a month finding out which one outperforms the other. Half the time you will gain around 100 QALYs per year this way; but every time, you will lose the roughly 83 QALYs you could have saved during that first month.
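Spelling out that arithmetic, using only the rough figures already quoted above:

```python
# Rough figures from the paragraph above: two cures, each saving ~1000
# QALYs/year, known to within about 10 percent.
annual_qalys = 1000        # QALYs saved per year by either cure
gap_if_wrong = 100         # ~10% plausible gap between the two cures
p_wrong_now = 0.5          # acting immediately, you pick the worse cure half the time
research_months = 1

certain_cost = annual_qalys * research_months / 12   # QALYs lost while deciding
expected_gain = p_wrong_now * gap_if_wrong           # QALYs/year gained in expectation

print(f"certain cost of a month's research: {certain_cost:.0f} QALYs")
print(f"expected gain from the research:    {expected_gain:.0f} QALYs per year")
# Whether the month is worth it depends on how long the better choice keeps
# paying off; the fallacy is defaulting to more data without doing this sum.
```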

Restricting the domain of considered options. Some ways of helping (funding food aid, vaccination, sanitation) are very easy to understand quantitatively and very well researched. Others (lobbying the UN, leading a socialist coup, becoming the new President of Somalia’s PA) are much, much harder to analyse. When faced with the choice between an intervention known to save 1000 ± 10 lives and an intervention whose effectiveness is wildly uncertain (say, believed to save 1100 ± 1000 lives), it is easy to bias your decisions towards the well-understood interventions. And that is if the uncertain possibility occurs to you at all: despite spending quite some time considering third-world interventions, I hadn’t even considered staging a socialist coup until I deliberately tried to come up with strange examples.
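For concreteness, here is that comparison as a sketch, treating the quoted ranges as rough one-standard-deviation error bars; that reading, and the normality assumption, are mine:

```python
from statistics import NormalDist

# The two hypothetical interventions from the paragraph above.
safe_mean, safe_sd = 1000, 10      # well-studied intervention
risky_mean, risky_sd = 1100, 1000  # poorly-understood intervention

# The difference (risky - safe) is then roughly normal with these parameters.
diff_mean = risky_mean - safe_mean
diff_sd = (safe_sd**2 + risky_sd**2) ** 0.5
p_risky_better = 1 - NormalDist(diff_mean, diff_sd).cdf(0)

print(f"expected extra lives from the risky option: {diff_mean}")
print(f"probability the risky option is actually better: {p_risky_better:.2f}")
# The risky option wins on expected value and is better slightly more often
# than not, yet its wide error bar makes it psychologically easy to dismiss,
# or never to generate as an option at all.
```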

Conclusion

There are many ways that a quantitative approach can lead us astray.

We can be misled into doing things that merely suit the numbers and are not in fact what we want to see. This is especially dangerous if we empower agents to act on our behalf who know only about the quantified goal. If we do empower agents to pursue anything outside the “inherently quantitative” category, it is important to put safeguards on them. Likewise, it is important to put safeguards on ourselves: running sanity checks every now and again matters. You need to make sure that neither you nor your agents are going to end up sending large shipments of pork to Somalia, no matter how much the numbers say that is the right way to solve a famine.

But once this is done, the more important question remains: are we missing out on important interventions because we are afraid to use anything we have trouble quantifying? Next time I will argue that, most of the time, a quantitative approach done correctly will show us the best strategies, and that an alternative qualitative approach is unlikely to do better on average.


(1) That said, this might not be inevitable. 80,000 Hours might well have a post about “how to produce great art” at some point. With a better understanding of physiology, the underlying principles that make art great might well reveal themselves to be quantifiable after all.