Let’s suppose there’s a cause that you care about much more than society at large. In your eyes, that cause is neglected. All else equal, you should have more positive impact by working on a neglected cause, because other people won’t already be taking the best opportunities within it. But how much more positive impact can you expect?
The following is a set of research notes we made while performing a case study, which we’re making available for feedback on our thinking. It argues for a simple result: if you care about an output K times more than society at large, then (all else equal) you should expect investing to produce that output to be K times more effective than making other investments.
For instance, most people don’t put a high weight on avoiding animal suffering. Let’s suppose you do. In fact, you estimate that you care about it roughly 10 times more than the average person (i.e. you would be satisfied investing 10 times the amount of resources to avoid the same amount of animal suffering compared to the average person). Then, you should expect that investing to end animal suffering is, all else equal, roughly 10 times more effective than making other investments.
This seems like it might be a highly relevant consideration in picking causes. If the argument is correct then, all else equal, we should expect more neglected causes to be more effective. Our current position is that the argument below shows that we should weight neglectedness to some extent in picking causes, but we’re not yet sure how highly we should weight it, because we’re not sure: (i) how important neglectedness, as modelled in this way, is compared to other considerations we could investigate; and (ii) how tractable it is to investigate.
The research note also explores how important this consideration is to members of 80,000 Hours, the effect of adding further considerations, and how the result might be applied in practice.
Suppose that we are interested in some output X (for example, ending factory farming) which is less popular in the world at large. More concretely, suppose that our preferences resemble broad social preferences in most ways, but that we care K times more about X than others. That means,
if we’re indifferent between some basket of outputs B and a unit of X, then society is indifferent between B and K units of X.
Intuitively it seems that we ought to invest in creating X, because good opportunities to create X won’t have been taken by society at large. But how much leverage should we expect to get? In fact, we think there’s good reason to think the answer is roughly K, and this is robust to changing assumptions.
A simple model
In a simple model, society has access to a broad variety of investments, each of which produces some valuable outputs and each of which is subject to diminishing marginal returns. Generically, there will then be some “price” on each output, such that every investment society pursues has the same efficiency, i.e. the total price of its outputs per unit of input is the same.
So suppose we are choosing between investing to create some basket B or to create some number of units of X. For simplicity, assume that the cost of creating basket B is the same as the cost of creating K units of X (the analysis would be unchanged with other numbers). Then by the equilibrium condition, society is indifferent between basket B and K units of X, so by the hypothesis about our values, we are indifferent between basket B and a single unit of X. Thus we get returns K times higher by purchasing X rather than other goods.
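The arithmetic above can be written out as a minimal numerical sketch. All quantities here are illustrative choices of units, not claims about any real cause:

```python
# Illustrative numbers only: a minimal sketch of the equilibrium argument.

K = 10.0  # we care about X ten times more than society at large

# Choose units so that, by our values, basket B and one unit of X are each
# worth 1 (we are indifferent between them, per the hypothesis in the text).
our_value_B = 1.0
our_value_per_unit_X = 1.0

# Society is indifferent between B and K units of X, so at the social price
# one unit of input buys either basket B or K units of X.
our_returns_from_B = our_value_B               # value to us per unit of input
our_returns_from_X = K * our_value_per_unit_X  # K units of X per unit of input

leverage = our_returns_from_X / our_returns_from_B
print(leverage)  # 10.0
```

The result does not depend on the particular units chosen: rescaling the value of B rescales both returns equally, leaving the ratio at K.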
Why this matters
Not just markets
Though the analysis above was described in terms of costs and investment opportunities, it seems to apply cleanly to other domains, such as politicians investing attention and political capital in causes, socialites or academics investing social capital in issues, etc. To the extent that these conclusions are relatively robust, they can give a sense of how strongly we should prioritize our idiosyncratic concerns.
If we are in the setting of the philanthropist who values X more than the rest of the world, it is tempting to simply assume that we should invest in X. But in general there are many competing considerations, and your preference for X is merely one. For example, there may be many things we care about more than the rest of the world, and we may differ from the rest of the world in ways beyond our preferences: we may have access to particular opportunities, have certain kinds of insight, have a comparative advantage at some tasks, and so on. Understanding how much leverage we can get by focusing on X therefore seems like a core consideration when deciding whether to spend limited capital on it.
Coping with uncertainty
In principle this kind of analysis is always superseded by a more detailed analysis of the actual opportunities available and their costs and benefits. But in practice there are a handful of reasons that relying on this kind of reasoning is important or necessary:
- We make investments in resources that will be deployed in the future. In order to determine the relative value of different resources, we need to evaluate future opportunities. Doing so directly involves making successful predictions, which may be much harder than relying on this kind of analysis.
- We need to prioritize our investigation of different opportunities. The world is big, and we need to use some preliminary considerations to understand where we should be looking.
- Even in the world of today, and even when we have time to look at an opportunity in detail, evaluating its impact precisely is typically impractical. Many decisions are made by leveraging the knowledge of others, often implicitly by looking at what kinds of opportunities they pursue. To the extent that we rely on such implicit estimates, this kind of analysis is necessary.
Different valuations of costs
In general, it might be the case that an investment opportunity creates X at the expense of some other good Y. If our values differ from social values, we might consider this a positive change where society would consider it a negative one, and it is no longer clear how to apply this analysis. For example, a regulatory change might be positive but uninteresting according to normal social values (because the costs and benefits are nearly balanced) but very promising by our standards. Then the world at large would accept such a change, but would not push for it at all, and according to the analysis above we might expect infinite rates of return.
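A toy sketch of this “infinite returns” case, with all numbers invented for illustration:

```python
# Illustrative sketch of a change that is a wash at social values but
# clearly positive at ours. All numbers are invented.

K = 10.0                # we weight X ten times more than society does

social_benefit_X = 1.0  # social value of the X the change creates
social_cost_Y = 1.0     # social value of the Y the change sacrifices

net_social = social_benefit_X - social_cost_Y    # 0.0: society won't push either way
net_ours = K * social_benefit_X - social_cost_Y  # 9.0: very attractive to us

# As net_social approaches zero, the effective cost of getting the change
# adopted also approaches zero while net_ours stays positive, so naive
# "returns per unit of social cost" grow without bound.
```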
The analysis above is inapplicable in this situation, though there seems to be a real effect here which can introduce an additional factor in favor of pursuing regulatory changes when we have differing values.
To see why the analysis breaks down, consider the origin of this social exchange rate. It is not generally the case that there are two types of agents in the world, you and everyone else. Instead, there is a broad spectrum of agents with varying values. For most types of agent, the decision amongst existing investment opportunities is not a marginal one. However, social values are determined by the “marginal” actor, who is roughly indifferent between two options (with everyone on one side picking one option, and everyone on the other side picking the other option).
So if there are opportunities with returns which are negative by social values but positive by your values, there are probably other actors who have preferences more like yours and who are inclined to pursue those opportunities. The analysis of this situation is quite a bit more complicated than the earlier one; certainly the resulting tradeoff is at least as favorable as the above analysis would suggest, but it is not clear whether it is radically more favorable.
The analysis above applies to marginal opportunities, which society is almost but not quite willing to adopt. For other opportunities, society may not even consider them seriously. In those cases, you may not get such a large factor (and indeed there is no guarantee that the opportunity isn’t arbitrarily bad). The above analysis suggests that you are best served by pursuing marginal opportunities rather than inframarginal opportunities.1
One reason that many opportunities might be marginal is if each experiences continuously diminishing returns, and all “reasonable” opportunities receive some funding. In this case, the analysis above will apply quite broadly, but still only to opportunities which are reasonable in this sense.
As we’ve mentioned, the situation is actually a bit more nuanced. Society at large is highly inhomogeneous, and what you really care about is how different your preferences are from those of the marginal investor in a cause: the person who cares least about the cause among those who still find it worth investing anything in it.
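A small numerical sketch of why the marginal investor, rather than the population average, sets the exchange rate. All weights here are invented for illustration:

```python
# Illustrative sketch: the social "exchange rate" for a cause is set by the
# marginal investor, not by the population average. All weights are invented.

# How much various agents value X relative to a numeraire good.
agent_weights = [0.2, 0.5, 1.0, 2.0, 5.0]

# Suppose diminishing returns have pushed the price of X to the point where
# the agent with weight 1.0 is exactly indifferent between X and other goods:
# that agent is the marginal investor, and the social exchange rate for X
# equals their weight.
marginal_weight = 1.0
your_weight = 10.0

# Your leverage is measured against the marginal investor's weight...
leverage = your_weight / marginal_weight  # 10.0

# ...whereas comparing against the average weight would misstate it.
average_weight = sum(agent_weights) / len(agent_weights)
```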
More complicated models
In general there might be more than two goods and more than two types of actors, and there might be many differences in values. As long as our measurements are of aggregate social concern (see the next section), the conclusion is not sensitive to any of these complications.
How to measure “caring”?
Ideally we would determine society’s willingness to pay for X (in terms of Y) by observing the actual tradeoffs that are made between X and Y in some simple cases. The work in this argument is being done by assuming rationality, and thereby using revealed preferences in simple situations to determine preferences in more opaque situations.
If we can understand the source of social concern for X, we can also try to directly estimate how much society “would care” about X. For example, if we know that a disaster would do $1B of property damage and kill a thousand people, we can evaluate society’s willingness to pay by using their willingness to pay for avoiding damages and saving lives, rather than having to observe their behavior with respect to similar events.
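As a rough sketch of that decomposition: the damage and death figures below come from the example above, while the value-of-statistical-life figure is an assumed placeholder, around the order of magnitude some US agencies use:

```python
# Decomposing society's willingness to pay to avoid the disaster in the text.
# The value-of-statistical-life figure is an assumed placeholder.

property_damage = 1e9            # $1B of property damage (from the example)
deaths = 1_000                   # fatalities (from the example)
value_of_statistical_life = 1e7  # assumed: roughly $10M per statistical life

willingness_to_pay = property_damage + deaths * value_of_statistical_life
print(f"${willingness_to_pay:,.0f}")  # $11,000,000,000
```

On these (assumed) figures, the lives lost dominate the property damage by an order of magnitude, which is why the choice of life-valuation matters much more to the estimate than the damage figure does.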
Neither of these measures is ideal or especially robust, but when a number of independent paths to estimation can be applied, the result is a useful if rough sense of how much society cares. In the future, we’ll aim to give an example of this kind of analysis.