
# Quantification as a Lamppost in the Dark - Quantification - Part 3

#### by Adam Casey on October 2nd, 2012

Late one evening a police officer comes across a man on his way home from a party. He is quite drunk and searching for something under a lamppost. “What are you looking for?” asks the policeman. “My keys,” the man replies, pointing down the road a little way. “I dropped them over there.” The policeman is baffled: “Then why are you looking for them here?” “Because there’s no light over there.”

The joke is old but it gets to the heart of the debate over quantification. Is it best to look for keys under lampposts? Quantification lets us use a whole host of numerical tools to understand problems and develop solutions to them. But as I discussed last time there are dangers in using such tools on the wrong kinds of goals. In this way quantification is like the lamppost. It lets you see far more clearly what is going on, but only in some limited areas.

So what kinds of things tell us how good an idea it is to look under lampposts? (Or: what kinds of problems should we use quantitative methods on?)

• How many near misses there are. I’ll discuss this below.
• How bright the light is, i.e. how much information the tool gives you.
• Where you think you dropped the keys, i.e. if you suspect that the thing you are looking for cannot be analysed by these methods, don’t use them.

Last time we dealt with the biases and limitations of quantitative methods. With all of these in mind, we might think it at least possible that the best interventions are generally not quantifiable. We might think that the revolutionary overthrow of governments, or some other such project, is on average the best option, despite its risks being high and almost impossible to quantify. I want to argue that even if you think that the best things aren’t under lampposts, it can still be best to look there.

You can only do so much, so make it count.

The key fact about any intervention is that we have finite resources. You have only so many hours, only so much money, only so many favours you can call in. But there are many projects: there are 170,000 registered charities in the UK alone. And these are far from equally good; in health, some interventions are 1,000 times less effective than others. It isn’t like looking for one key, it’s like looking for the best thing you can find when you can only carry so much.

Imagine the street holds not keys but banknotes of all denominations. We can imagine a few \$1000 bills in amongst the many \$1 bills. Suppose you can only carry so many notes (you have only so many hours to work in a day). Now it most likely does make sense to look under a lamppost. If you look in the dark you pick up a random selection: a handful of notes with lots of \$1s and a few \$20s. If you look under a streetlamp you pick up only the \$1000 bills. In fact, for it to make any sense to look in the dark, it isn’t enough to expect the notes over there to be worth more on average; you’d have to expect them to be worth more than the very best notes under the lamppost.
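The banknote analogy can be made concrete with a small simulation. All the numbers here are hypothetical, chosen only for illustration: a street of mostly \$1 bills with a few \$1000s, a lamppost lighting a fifth of it, and a carrying limit of ten notes. Searching in the dark means grabbing notes at random; searching under the lamppost means seeing values and taking the best.

```python
import random

random.seed(0)

def make_street():
    # Hypothetical mix: mostly $1 bills, some $20s, a few $1000s.
    street = [1] * 950 + [20] * 45 + [1000] * 5
    random.shuffle(street)
    return street

CARRY = 10   # finite resources: you can only carry so many notes
LIT = 200    # the lamppost lights only a fifth of the street

def one_night():
    street = make_street()
    lit, dark = street[:LIT], street[LIT:]
    # Under the lamppost you see values, so you take the best visible notes.
    lit_haul = sum(sorted(lit, reverse=True)[:CARRY])
    # In the dark you grab at random, values unseen.
    dark_haul = sum(random.sample(dark, CARRY))
    return lit_haul, dark_haul

trials = [one_night() for _ in range(2000)]
avg_lit = sum(t[0] for t in trials) / len(trials)
avg_dark = sum(t[1] for t in trials) / len(trials)
print(round(avg_lit), round(avg_dark))  # the lit haul is far larger on average
```

Even though the lit area holds only a fifth of the money, being able to *select* beats picking blind by more than an order of magnitude under these assumed numbers.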

The point is that however much quantitative methods limit our focus they let us pick things that are vastly more effective than the average. Other methods might promise a dramatic effect in some cases but quantitative methods promise to give you the best intervention of one type almost every time. For it to make sense to reject quantitative methods we’d have to expect that the average intervention from the non-quantitative camp would be better than the best method in the quantitative camp.

[Graph. X-axis: effectiveness. Y-axis: number of interventions.]

Interventions are often distributed like a highly skewed log-normal distribution: a small number are vastly better than the rest. Choosing the best intervention is therefore a very significant gain.
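A quick sketch shows why a log-normal distribution makes selection so valuable. The parameters here are illustrative assumptions, not estimates from any real data:

```python
import random

random.seed(1)

# Hypothetical: 10,000 interventions with log-normally distributed
# effectiveness (mu=0, sigma=2, chosen purely for illustration).
effectiveness = sorted(random.lognormvariate(0, 2) for _ in range(10_000))

median = effectiveness[len(effectiveness) // 2]
mean = sum(effectiveness) / len(effectiveness)
best = effectiveness[-1]

# Under a heavy right skew, the mean sits well above the median,
# and the single best intervention dwarfs the typical one.
print(round(mean / median, 1))
print(round(best / median))
```

With this kind of skew, the best option is hundreds of times better than the typical one, so a method that reliably identifies the top of the distribution is worth a great deal.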

How good are the alternatives?

The next question is how much more information we get out of quantitative methods compared to other intuitive, philosophical or categorical tools. This is hard to measure exactly, as non-quantitative tools by definition resist analysis of their effectiveness. But we do know that, without care, human thinking is riddled with biases and bad heuristics. Most people will spend just as much money to save 2,000 birds from an oil spill as to save 200,000. This is typical of how we think, and it means that interventions which seem intuitively good can end up having no effect or even causing harm.

Different philosophical schools and other non-quantitative tools of analysis are not obviously good at combating these biases. Consider economics: academics who use non-quantitative approaches are as varied as Marxists and Friedmanites, while there is nowhere close to this diversity in quantitative econometrics. This raises the question of how confident the Friedmanite can realistically be that they are even close to a right answer. The light of quantification is very bright indeed.

Final thought:

For a non-quantifiable intervention to be a sensible choice, we have to be confident that we do not need the bright light of quantification. What interventions from hard-to-quantify areas like political lobbying do you expect to be 1,000 times more effective than a typical measure in something like public health? Comments are open for thoughts on that or any other part of this post.

Zander Redwood October 2nd, 2012

Great post! I’d been wanting to write something to this effect myself, but this is as good as anything I could have hoped to write. Interesting to compare it with Katja’s argument for estimation (http://80000hours.org/blog/4-estimation-is-the-best-we-have).

Lex October 12th, 2012

Awesome post! An elegant argument as well. I was discussing this the other day with Rich and it’s a really difficult topic to fully address.

Lex October 12th, 2012

Any chance of a legend on that graph?

Niel Bowerman October 21st, 2012

Political lobbying for increased spending on the most cost-effective health interventions would probably pass the test, provided you can get more than a \$1 increase in funding for each \$1 spent on lobbying. But this is just lobbying for a more quantified approach.

Peter Hartree May 5th, 2013

Thanks for a thought-provoking series of posts. I read the above with particular interest. Here’s my attempt at a critical response.

You write:

(C) “For it to make sense to reject quantitative methods we’d have to expect that the average intervention from the non-quantitative camp would be better than the best method in the quantitative camp.”

I think this is wrong. In the previous sentence, you write:

“Other methods might promise a dramatic effect in some cases but quantitative methods promise to give you the best intervention of one type almost every time.”

I want to pick up on the “almost every time” here. I think you only get to (C) by playing down the error rate associated with the selection and implementation of quantitative methods (you seem to treat it as negligible). The lamppost analogy gives the impression that identifying the best quantitative methods is like taking candy from a baby - but clearly, it isn’t. We should imagine the lamp as an old fluorescent bulb that flickers quite a lot, and so introduces a non-trivial error rate in our ability to pick up the most valuable dollar bills. Bear this in mind, and (C) looks much too strong.

It may be that we have to reformulate (C) as weakly as:

(C2) For it to make sense to reject quantitative methods we’d have to expect that the average intervention from the non-quantitative camp would be better than the average method in the quantitative camp.

This is a bit of a truism, but it’s still a decent challenge for the advocate of a non-quantitative approach. In some cases it could well be enough to generate the conclusion you’re aiming for. But I certainly don’t think that your argument against non-quantitative methods is strong enough to establish a general conclusion in favour of a quantitative approach. The regrettably brief argument you develop isn’t much more than an intuition pump (rather ironic, given the target) and invites the standard response to scepticism: the fact that our intuitions are sometimes misleading doesn’t mean we should radically distrust them.

In short, I think the lamp is prone to flickering, and the darkness is not as absolute as you suggest.