How are the resources in effective altruism allocated across cause areas?

Knowing these figures, for both funding and labour, can help us spot gaps in the current allocation. In particular, I’ll suggest that broad longtermism seems like the most pressing gap right now.

This is a follow-up to my first post, where I estimated the total amount of committed funding and people, and briefly discussed how many resources are being deployed now vs. invested for later.

These estimates are for how the situation stood in 2019. I made them in early 2020, and made a few more adjustments when I wrote this post. As with the previous post, I recommend that readers take these figures as extremely rough estimates, and I haven’t checked them with the people involved. I’d be keen to see additional and more thorough estimates.

Update Oct 2021: I mistakenly said the number of people reporting 5 for engagement was ~2300, but actually this was the figure for people reporting 4 or 5.

Allocation of funding

Here are my estimates:

Cause area | $ millions per year in 2019 | %
Global health | 185 | 44%
Farm animal welfare | 55 | 13%
Biosecurity | 41 | 10%
Potential risks from AI | 40 | 10%
Near-term U.S. policy | 32 | 8%
Effective altruism / rationality / cause prioritisation | 26 | 6%
Scientific research | 22 | 5%
Other global catastrophic risk (inc. climate tail risks) | 11 | 3%
Other long term | 1.8 | 0%
Other near-term work (near-term climate change, mental health) | 2 | 0%
Total | 416 | 100%

What it’s based on:

  • Using Open Philanthropy’s grants database, I averaged the allocation to each area 2017–2019 and made some minor adjustments. (Open Phil often makes 3yr+ grants, and the grants are lumpy, so it’s important to average.) At a total of ~$260 million, this accounts for the majority of the funding. (Note that I didn’t include the money spent on Open Phil’s own expenses, which might increase the meta line by around $5 million.)

  • I added $80 million to global health for GiveWell using the figure in their metrics report for donations to GiveWell-recommended charities excluding Open Philanthropy. (Note that this figure seems like it’ll be significantly higher in 2020, perhaps $120 million, but I’m using the 2019 figure.)

  • GiveWell says their best guess is that the figures underestimate the money they influence by around $20 million, so I added $20 million. These figures also ignore what’s spent on GiveWell’s own expenses, which could be another $5 million to meta.

  • For longtermist and meta donations that aren’t Open Philanthropy, I guessed $30 million per year. This was based on roughly tallying up the medium-sized donors I know about and rounding up a bit. I then roughly allocated them across cause areas based on my impressions. This figure is especially uncertain, but seems small compared to Open Philanthropy, so I didn’t spend too long on it.

Neartermist donations outside of Open Phil and GiveWell are the most uncertain:

  • I decided to exclude donors who don’t explicitly donate under the banner of effective altruism, or else we might have to include billions of dollars spent on highly cost-effective global health interventions, pandemic prevention, climate change etc. I excluded the Gates Foundation too, though they have said some nice things about EA. This is a very vague boundary.
  • For animal welfare, about $9 million has been donated to the EA Animal Welfare Fund, compared to $11.6 million to the Long Term Future Fund and the Meta Fund (now called the Infrastructure Fund). If the total amount to longtermist and meta causes is $30 million per year, and this ratio holds more broadly, it would imply $23 million per year to EA animal welfare (excluding Open Phil) in total. This seems plausible considering that Animal Charity Evaluators says it influenced about $11 million last year, which would be about half that total. (A short sketch of this calculation follows the list.)

  • From looking at GiveWell’s metrics report, I guess that most EA-motivated donations to global health are tracked in GiveWell’s figures already. I’ll guess there’s an additional $5 million.

  • I guessed $1 million per year is spent on neartermist climate change and mental health (that’s not global health or global catastrophic risks).

  • Overall, all of these figures could be way off, but I haven’t spent much time on them because they seem small compared to the Open Philanthropy and GiveWell donations.
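To make the animal welfare extrapolation above concrete, here is a minimal sketch of the arithmetic. The $9 million and $11.6 million figures are the EA Funds donations quoted above; the $30 million longtermist/meta total is my rough guess, so the output inherits all of that uncertainty.

```python
# Rough extrapolation of non-Open-Phil EA animal welfare funding, using the
# ratio of EA Funds donations as a (very noisy) proxy for the wider donor pool.

animal_welfare_fund = 9.0    # $m donated to the EA Animal Welfare Fund
ltff_plus_meta_fund = 11.6   # $m donated to the Long Term Future Fund + Meta Fund
longtermist_meta_total = 30  # $m/year guess for non-Open-Phil longtermist + meta giving

implied_animal_welfare = animal_welfare_fund / ltff_plus_meta_fund * longtermist_meta_total
print(f"Implied EA animal welfare funding (excl. Open Phil): ~${implied_animal_welfare:.0f}m/year")
# Prints ~$23m/year, of which the ~$11m influenced by Animal Charity Evaluators
# would be about half.
```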

Some quick things to flag about the allocation:

  • Global health is the biggest area for funding, but it’s not a majority.
  • AI is only 10%, so it doesn’t seem fair to say that EA is dominated by AI.

  • Likewise, only 6% is broadly ‘meta’. Even if we add in the operating budgets of Open Phil and GiveWell at about $10 million, which would bring it to 8%, we still seem to be a long way from being in a ‘meta trap’.

Allocation of people

For the total number of people, see this estimate, which finds that in 2019 there were 2,300 people similar to those who answered ‘5’ (out of 5) for engagement in the EA Survey, and perhaps 6,500 similar to those who answered ‘4’ or ‘5’.

The 2019 EA Survey asked people which problem areas they’re working on. They could give multiple answers, so I normalised to 100%.
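To illustrate the kind of normalisation I mean, here is a minimal sketch with made-up responses (not the real survey data); I’m also not claiming this is exactly the variant used in the survey analysis.

```python
from collections import Counter

# Hypothetical multi-select answers (NOT the real survey data): each respondent
# can list more than one problem area they currently work on.
responses = [
    ["AI"],
    ["Movement building", "Cause prioritisation"],
    ["Global poverty"],
    ["AI", "Biosecurity"],
]

mentions = Counter(area for answer in responses for area in answer)
total_mentions = sum(mentions.values())  # normalise over mentions, not respondents

for area, count in mentions.most_common():
    print(f"{area}: {100 * count / total_mentions:.0f}%")
```

Because each share is rounded to the nearest percent, a column normalised this way can total slightly more or less than 100, which is why the table below sums to 101.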

For those who answered ‘5’ for engagement, the breakdown was:

Cause currently working in (5-engaged EAs, normalised) | %
AI | 18
Movement building | 15
Rationality | 12
Other near term (near-term climate change, mental health) | 12
Cause prioritisation | 10
Other GCRs | 10
Animal welfare | 10
Global poverty | 6
Biosecurity | 4
Other | 4
Total | 101

This question wasn’t asked in the 2020 EA Survey, so these are the most up-to-date figures available.

If I were repeating this analysis, I’d look at the figures for ‘4-engaged’ EAs as well, since I realised that ‘5’ is a pretty high bar for engagement. This would tilt things away from longtermism, but only a little. (I quickly checked the figures and they were so similar that it didn’t seem worth redoing all the tables.)

It’s interesting how different this allocation is from the funding allocation. For instance, global health accounts for only 6% of people compared to 44% of funding. This should be considered when asking what ‘representative’ content should look like.

People plus funding

If we totally guess that the value of each year of labour in financial terms will average to $100,000 per year over a career, then we can look at the combined portfolio. (This figure could easily be off by a factor of 10.)
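Here is a minimal sketch of that combination, using the funding figures above and the headcounts from the table below (a few causes only, for brevity). The $100,000-per-year conversion is the assumption just stated, and the printed numbers can differ from the table by a point or two because of rounding.

```python
# Combine funding and labour into one (very rough) portfolio, valuing a year
# of labour at an assumed $100,000 (a figure that could be off by a factor of 10).

LABOUR_VALUE_PER_PERSON_M = 0.1  # $100k/year, expressed in $ millions

# (funding in $m/year, number of 5-engaged people), taken from this post;
# the remaining causes are omitted for brevity.
causes = {
    "Global poverty": (185, 133),
    "Meta / cause prioritisation / rationality": (26, 885),
    "AI": (40, 430),
    "Animal welfare": (55, 221),
    "Biosecurity": (41, 96),
}

total_funding = 416  # $m/year across all causes
total_people = 2357  # 5-engaged people across all causes
total_combined = total_funding + total_people * LABOUR_VALUE_PER_PERSON_M

print(f"Funding vs. labour value: {total_funding / (total_people * LABOUR_VALUE_PER_PERSON_M):.1f}x")

for cause, (funding_m, people) in causes.items():
    combined = funding_m + people * LABOUR_VALUE_PER_PERSON_M
    print(f"{cause}: ~${combined:.0f}m/year combined ({100 * combined / total_combined:.0f}% of total)")
```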

Different surveys used different categories, so I’ve had to make a bunch of guesses about how they line up.

Totals across all categories | Funding per year ($ millions) | Number of 5-engaged people | Value of labour ($ millions) | Labour + funding ($ millions) | Percentage of total
Global poverty | 185 | 133 | 13 | 198 | 30%
Meta / cause prioritisation / rationality | 26 | 885 | 88 | 115 | 18%
AI | 40 | 430 | 43 | 83 | 13%
Animal welfare | 55 | 221 | 22 | 77 | 12%
Biosecurity | 41 | 96 | 10 | 51 | 8%
Other GCRs | 11 | 239 | 24 | 35 | 5%
Near-term U.S. policy | 32 | 10 | 1 | 33 | 5%
Other near term (near-term climate change, mental health) | 2 | 235 | 24 | 25 | 4%
Scientific research | 22 | 20 | 2 | 24 | 4%
Other | 2 | 88 | 9 | 11 | 2%
Total | 416 | 2,357 | 236 | 652 | 100%

It’s interesting to note that meta seems a bit people-heavy; AI is balanced; and biosecurity, global health, and farm animal welfare are funding-heavy.

These figures also suggest the value of the funding is about twice the value of the people, similar to what I found for committed funds as a whole. This comparison is particularly rough guesswork, so I wouldn’t read much into it, though it does match a general picture in which there’s more funding than labour (which seems to be the reverse of the broader economy).

What might we learn from this?

We can look at how people guess the ideal portfolio should look, and look for differences.

Below, the blue bars show the average response of the attendees of the 2019 EA Leaders Forum for what percentage of resources they thought should go to each area.

To check whether this is representative of engaged members of the broader movement, I compared it to the top cause preference of readers of the EA Forum in the 2019 EA Survey, which is shown in red. (The figures for all EAs who answered ‘4’ or ‘5’ for engagement were similar.) Note that ‘top cause preference’ is not the same as ‘ideal percentage to allocate’ but will hopefully correlate. See more about this data. You can also see cause preferences by engagement level from the 2020 EA Survey.

(Again, the categories don’t line up exactly.)

[Figure: Leaders Forum and EA Survey 2019]

There was a similar survey at the 2020 EA Coordination Forum, an event like the Leaders Forum but with a narrower focus and a greater concentration of staff from longtermist organisations, which may have influenced the results. These results have not been officially released, but here is a summary, which I’ve compared to the current allocation (as above). Note that the results are very similar to those from the 2019 Leaders Forum, though I prefer them since they use more comparable categories and are a little more up-to-date.

Cause | EACF 2020 ideal portfolio | Guess at current allocation (labour + money) | Difference (current minus ideal)
Global poverty | 9% | 30% | +22%
Meta / cause prioritisation / rationality | 23% | 17% | -6%
AI | 28% | 13% | -15%
Animal welfare | 8% | 12% | +4%
Biosecurity | 9% | 8% | -1%
Other GCRs | 4% | 5% | +2%
Other near term (near-term climate change, mental health) | 4% | 4% | 0%
Scientific research | 3% | 4% | +1%
Other (inc. wild animal welfare) | 4% | 2% | -2%
Broad longtermist | 9% | 1% | -8%

My own guesses at the ideal portfolio would also be roughly similar to the above. I also expect that polling EA Forum members would lead to similar results, as happened in the 2019 results above.

What jumps out at me from looking at the current allocation compared to the ideal?

1) The biggest gap in proportional terms seems like broad longtermism.

In the 2020 EA Coordination Forum survey, respondents were explicitly asked how much they thought we should allocate to “Broad longtermist work (that aims to reduce risk factors or cause other positive trajectory changes, such as improving institutional decision making)”. (See our list of potential highest priorities for more on what could be in this bucket.)

The median answer was 10%, with an interquartile range of 5% to 14%.

However, as far as I can tell, there is almost no funding for this area currently, since Open Philanthropy doesn’t fund it, and I’m not aware of any other EA donors giving more than $1 million per year.

There are some people aiming to work on improving institutional decision making and reducing great power conflict, but I estimate it’s under 100. (In the tables for funding and people earlier, this would probably mostly fall under ‘rationality’ within the meta category, or otherwise within ‘Other GCRs,’ so I subtracted it from there.)

This would mean that, generously, 1% of resources are being spent on broad longtermism. So, we’re maybe off by a factor of nine.
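To put that in dollar-equivalent terms (a very rough sketch, taking the ~1% current share and the ~$652 million combined portfolio from the earlier table at face value):

```python
total_portfolio_m = 652  # combined funding + labour, $m-equivalent per year (from the table above)
ideal_share = 0.09       # ~9-10% median ideal for broad longtermism in the 2020 survey
current_share = 0.01     # "generously, 1%" of current resources

ideal_m = ideal_share * total_portfolio_m      # roughly $59m-equivalent per year
current_m = current_share * total_portfolio_m  # roughly $6.5m-equivalent per year

print(f"Ideal: ~${ideal_m:.0f}m-eq/year; current: ~${current_m:.0f}m-eq/year; "
      f"gap factor: ~{ideal_share / current_share:.0f}x")
```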

Note that the aim of grants or careers in this area would mainly be to explore whether there’s an issue that’s worth funding much more heavily, rather than scaling up an existing approach.

2) Both the EA Leaders Forum respondents and the EA Forum members in the EA survey would like to see significantly more allocated to AI safety and meta.

3) Global health seems to be where the biggest over-allocation is happening. (It’s also the main reason neartermist issues currently receive ~50% of resources when the survey respondents think ~25% would be ideal.)

For global health, this is almost all driven by funding rather than people. While global health receives about 44% of funding, only about 6% of ‘5-engaged’ EAs are working on it. In particular, GiveWell brings in lots of funders for this issue who won’t fund the other issues.

I think part of what’s going on for funding is that global health is already in ‘deployment’ mode, whereas the other causes are still trying to build capacity and figure out what to support.

My hope is that other areas scale up over time, bringing the allocation to global health in line with the target, but without necessarily reducing the amount spent on it.

However, if someone today has the option to work on global health or one of the under-allocated areas, and feels unsure which is best for them, I’d say they should default away from global health.

Wrapping up:

I find it useful to look at the portfolio, but keep in mind that this ‘top-down’ approach is just one way to figure out what to do.

In comparison, it’s probably more important to take a ‘bottom-up’ approach that looks at specific opportunities and tries to compare them to ‘the bar’ for funding, or to assess them on more qualitative grounds.

For someone choosing a career who wants to coordinate with the EA community, the portfolio framework should play a minor role compared to other factors, such as personal fit, career capital, and other ways of evaluating which priorities are most pressing.

You might also be interested in:

Comment on this post on the EA Forum.

Get a weekly update on all new articles at 80,000 Hours.

I post draft research ideas on Twitter.