Effective altruism and the current funding situation

This is a cross-post from the Effective Altruism Forum written by 80,000 Hours cofounder Prof. Will MacAskill. We’re posting it here because we think it presents a particularly useful perspective on the current funding situation in the effective altruism community, which we’re part of.

In 2021 we wrote about the growth of the effective altruism movement, and how funding in effective altruism is allocated across issues that are of particular interest to us and the wider community.

In this article, Will reflects on the growth of the effective altruism (EA) community, particularly growth in the amount of funding that could be put towards new projects, and calls for a culture of judicious ambition to ensure that the moral opportunity offered by this funding is put to the best possible use.

This post gives an overview of how I’m thinking about the “funding in EA” issue, building on many conversations. Although I’m involved with a number of organisations in EA, this post is written in my personal capacity. You might also want to see my EAG talk which has a related theme, though with different emphases. For helpful comments, I thank Abie Rohrig, Asya Bergal, Claire Zabel, Eirin Evjen, Julia Wise, Ketan Ramakrishnan, Leopold Aschenbrenner, Matt Wage, Max Daniel, Nick Beckstead, Stephen Clare, and Toby Ord.

Main points

  • EA is in a very different funding situation than it was when it was founded. This is both an enormous responsibility and an incredible opportunity.
  • It means the norms and culture that made sense at EA’s founding will have to adapt. It’s good that there’s now a serious conversation about this.
  • There are two ways we could fail to respond correctly:
    • By commission: we damage, unnecessarily, the aspects of EA culture that make it valuable; we support harmful projects; or we just spend most of our money in a way that’s below the bar.
    • By omission: we aren’t ambitious enough, and fail to make full use of the opportunities we now have available to us. Failure by omission is much less salient than failure by commission, but it’s no less real, and may be more likely.
  • Though it’s hard, we need to inhabit both modes of mind at once. The right attitude is one of judicious ambition.1
  • Judicious, because I think we can avoid most of the risks that come with an influx of potential funding without compromising on our ability to achieve big things. That means: avoiding unnecessary extravagance, and conveying the moral seriousness of distributing funding; emphasising that our total potential funding is still tiny compared to the problems in the world, and there is still a high bar for getting funded; being willing to shut down lower-performing projects; and cooperating within the community to mitigate risks of harm.
  • Ambition, because it would be very easy to fail by thinking too small, or just not taking enough action, such that we’re unable to convert the funding we’ve raised into good outcomes. That means we should, for example: create more projects that are scalable with respect to funding; buy time and increased productivity when we can; and be more willing to use money to gain information, by just trying something out, rather than assessing whether it’s good in the abstract.

Introduction

Well, things have gotten weird, haven’t they?

Recently, I went on a walk with a writer, and it gave me a chance to reflect on the earlier days of EA. I showed him the first office that CEA rented, back in 2013. It looks like this:

[Photo: the Oxford estate agent building that housed CEA’s first office]

To be clear: the office didn’t get converted into an estate agent — it was in the estate agent, in a poorly-lit room in the basement. Here’s a photo from that time:

[Photo: the team at work in the basement office]

Normally, about a dozen people worked in that room. When one early donor visited, his first reaction was to ask: “Is this legal?”

At the time, there was very little funding available in EA. Lunch was the same every day: budget baguettes and plain hummus. The initial salaries offered by CEA were £15,000/yr pre-tax. When it started out, CEA was only able to pay its staff at all because I lent it £7,000, my entire life savings at the time. One of our first major donations was $10,000 from Julia Wise, a significant fraction of her annual salary as a mental health social worker at a prison. Every new GWWC pledge we got was a cause for celebration: Toby Ord estimated the expected present value of donations from a GWWC pledge at around $70,000, which was a truly huge sum at the time.2

Now the funding situation is… a little different. This post is about taking stock, and reflecting on how we should respond to that. It builds on thinking I’ve done over the last 14 months, many conversations I’ve had, and the many recent Forum posts and comments. I’m not trying to be prescriptive — you should figure out your own takes — but hopefully I can be a little helpful. I’m aiming for this post to convey what I hope to be the right attitude to the situation as a whole, rather than merely discussing one aspect or other of the issue, as most of the recent posts have done.

In a nutshell: our current situation is both an enormous responsibility and an incredible opportunity. If we’re going to respond appropriately, we need to act with judicious ambition, holding both of these frames in mind.

The current situation

Effective altruism has done very well at raising potential funding3 for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that. The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total.4

November 17, 2022 1:00 pm GMT: Until recently, we had highlighted Sam Bankman-Fried on our website as a positive example of someone ambitiously pursuing a high-impact career. To say the least, we no longer endorse that. See our statement for why.

Note: On May 12, 2023, we released a blog post on updates we made to our advice after the collapse of FTX.

There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. At least one person earning to give (and not related to FTX) has a net worth of over a billion; a number of others are on track to give hundreds of millions in their lifetime. Among Giving Pledge signatories, there are around ten who are at least somewhat sympathetic to either effective altruism or longtermism. And there are a number of other successful entrepreneurs who take EA or longtermism seriously, and who could increase the total aligned funding by a lot. So, while FTX’s rapid growth is obviously unusual, it doesn’t seem like a several-orders-of-magnitude sort of fluke to me, and I think it would be a mistake to think of it as a ‘black swan’ sort of event, in terms of EA-aligned funding.

So the update I’ve made isn’t just about the level of funding we have, but also the growth rate. Previously, it wasn’t obvious to me whether Dustin and Cari were flukes or not; if they were, all it would take is for their interests to move elsewhere, or for Facebook stock to tank, for the amount of EA-aligned potential funding to decline considerably.

Now I think the amount of EA-aligned funding is, in expectation, considerably bigger in the future than it is today. Of course, over the next five years, total potential funding could still decrease by a lot, especially if FTX crashes. But it also could increase by many tens of billions more, if FTX does very well, or if new very large donors get on board. So we should at least be prepared for a world where there’s even more EA-aligned potential funding than there is today.

There’s a tricky question about how fast we should be spending this down. Compared to others in EA, I think I’m unusually sympathetic to patient philanthropy: I don’t think the chance of a hinge moment in the next decade is dramatically higher than the chance of a hinge moment in 2072-82 (say); and I think our understanding of how to do good is improving every year, which gives a reason for delay.

But even I think that we should greatly increase our giving compared to now. One reason in favour of spending quickly is that, even if you endorse patient philanthropy, you should probably still distribute some significant proportion of your funding each year (perhaps in the low single-digit percentage points5), because philanthropic resources have diminishing returns. And if you think that we’re at a very influential time, perhaps because you think we’ll probably see transformative AI in our lifetimes, then it should be larger still. (I lay out some more reasons for and against faster spending in an appendix.)

A second reason is that we can fund community-building, which is a form of investment, and which seems to have very high returns. Indeed, the success of FTX, and of EA in general, should give us a major update in this direction. So far, we’ve generated more than $30bn for something like $200mn, a benefit:cost ratio of 150 to 1;6 and even excluding the success of FTX, the benefit:cost ratio looks very good (especially if we consider that funding raised is not all, or maybe even most, of the impact that outreach to date has generated). Further investment in outreach is likely to continue to raise much more money for the most pressing problems than it costs.7
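
To make the arithmetic explicit, here’s the calculation behind that ratio (a rough sketch using the round figures above, not a precise accounting):

```python
# Rough benefit:cost arithmetic for EA outreach to date, using the round
# figures quoted above (illustrative only).
funds_generated = 30e9   # more than $30bn in potential funding attributed to outreach
outreach_spend = 200e6   # roughly $200mn spent on outreach to date
ratio = funds_generated / outreach_spend
print(f"benefit:cost = {ratio:.0f} to 1")  # prints: benefit:cost = 150 to 1
```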

A third reason is option value: if we build the infrastructure to productively and scalably absorb funding, then we can choose not to use it if it turns out to not be the right decision; whereas if we don’t build the infrastructure now, then it will take time to do so if in a few years’ time it does turn out to be needed.

A final consideration that weighs on me in favour of spending faster isn’t based on impact grounds. Rather, it simply feels wrong to hold such financial assets when there’s such suffering in the world, and such grave risks that we face. Now, of course what we ought to do is whatever is impact-maximising over the long run; but at least in terms of my moral aesthetics, it really feels that the appropriate thing is for this money to get used, soon.

For the time being, let’s suppose that we aim just to spend the return on EA-aligned funding: about $2 billion per year (at 5% real rate of return). Spending even this amount of funding effectively will be a huge challenge. It’s a big challenge even within global health and wellbeing, where the biggest scale-up in giving is currently happening. GiveWell now aims to move $1 billion per year by 2025, including (as a tentative plan) an annual allocation from Open Phil of about $500 million. But even now, and even if they lower their funding bar from 8x the cost-effectiveness of GiveDirectly to 5x the cost-effectiveness of GiveDirectly, they still have more funding available than funding gaps to fill.
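
As a sanity check on that $2 billion figure, here’s the back-of-the-envelope arithmetic (the ~$40bn asset base is a rough total implied by the figures earlier in this post, not a number stated exactly):

```python
# Implied annual spending if EA distributes only the real return on its assets.
assets = 40e9        # approximate EA-aligned potential funding (an assumption)
real_return = 0.05   # 5% real rate of return, as assumed in the post
annual_spend = assets * real_return
print(f"annual spend = ${annual_spend / 1e9:.0f}bn")  # about $2bn per year
```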

Spending this funding will be a truly enormous challenge within cause areas such as AI risk and governance, worst-case biorisk, and community-building, which have few or no existing organisations that could productively use such sums of money. The Future Fund expressed a bold aim of giving between $100mn and $1bn this year. Let’s say that this ends up at $300mn in grants (which might be about 30% of total EA money moved this year). That’s a rapid scale-up from a standing start, but it’s a shortfall of $1.2 billion compared to what it would need to spend just to distribute the rate of return on the financial assets of Sam Bankman-Fried and Gary Wang.
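
For transparency, here’s how that $1.2 billion shortfall falls out of the figures above (a sketch, using the net worth estimates quoted earlier):

```python
# The $1.2bn shortfall, reconstructed from the figures quoted in this post.
sbf_assets = 24e9    # Sam Bankman-Fried's estimated net worth
wang_assets = 5.9e9  # Gary Wang's estimated net worth (footnote 4)
return_on_assets = 0.05 * (sbf_assets + wang_assets)  # about $1.5bn per year
grants = 300e6       # supposed Future Fund grantmaking this year
shortfall = return_on_assets - grants
print(f"shortfall = ${shortfall / 1e9:.1f}bn")  # about $1.2bn
```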

If the total potential funding grows, or if the right thing to do is to be spending down our assets, then that number increases, perhaps considerably. And we should be particularly prepared for scenarios where our potential funding increases by a lot, because we have more impact in those scenarios than in scenarios where our potential funding decreases considerably.

Meeting this challenge means that EA’s culture and norms will need to adapt. In 2013, it made sense for us to work in a poorly-lit basement, eating baguettes and hummus. Now it doesn’t. Frugality is now comparatively less valuable, and saving time and boosting productivity in order to make more progress on the most pressing problems is comparatively more valuable. Creating projects that are maximally cost-effective is now comparatively less valuable; creating projects that are highly scalable with respect to funding, and can thereby create greater total impact even at lower cost-effectiveness, is comparatively more valuable. Extensive desk research to evaluate a small or medium-sized funding opportunity is comparatively less valuable; just spending money to actually try something and find out empirically if it works is comparatively more worthwhile.

Now, I miss the basement days. It feels morally appropriate to be eating baguettes and hummus every day. But solving the big problems in the world isn’t about acting in a way that feels appropriate; it’s about doing the highest-impact thing.

Nonetheless, it’s natural to worry: is this really EA adapting to a new situation, or is it value-drift? Maybe we’re fooling ourselves! I think it’s good we’re being vigilant about this. So let’s discuss how we should adapt; how to respond appropriately to the situation we’re in, while not losing the mission-driven focus that was present in the basement of an Oxford estate agent.

There are two ways in which we could fail in response to this challenge. We could cause harm by commission: doing dumb things that end up net-negative overall. Or we could cause harm by omission: failing to do things that would have enormous positive impact. Let’s take each of these risks in turn.

Risks of commission: causing harm

As has been noted in a number of recent Forum posts, there are ways that scaling up our giving could cause harm, such as by damaging EA’s culture — destroying what makes EA distinctive and great — or by funding net-negative projects. There are a number of risks here; I’ll discuss a few, but this list is still incomplete.

Appearances of extravagance

Other things being equal, EA wants to appeal to morally-motivated people. It’s more valuable to have someone who intrinsically wants to make the world better than someone who’s just doing it for a paycheck: they’ll be in it for longer, and are more likely to make better decisions even in cases where their income doesn’t depend on them making the right decision. But morally-motivated people, especially on college campuses, often find seemingly-extravagant spending distasteful.

What’s more, this is a perfectly rational response. Living on rice and beans is a costly signal: it’s easier for genuinely morally motivated people to do it than people who are faking. So if I meet someone living on rice and beans and giving a chunk of their income away, I take seriously their claims to be morally motivated. If I meet someone who’s flying business class — they might be doing that because they’re morally serious and trying to maximise their output, but it’s much harder for an outsider to tell.8

So we don’t want to turn off morally dedicated people. This is a major worry for me; I think that EA developing a bad reputation is one of the leading existential risks to the community, and a reputation for extravagance would make that risk considerably worse.

But there’s a balancing act. Very few people will want to live on rice and beans forever. Initially, we disproportionately appealed to people who were willing to be very frugal, and turned off those who weren’t, so the current community is disproportionately constituted of such people. Now, we have the chance to appeal to people who are less willing to be ultra-frugal, but have many other great qualities and can contribute enormously to the world.9 And that is a great thing.

My impression is that most of the issues that have gotten people worried have been unforced errors: people getting carried away, or not thinking about how what they did would be perceived, or talking about spending in a way that seems flippant. (And sometimes the alleged situations have been misrepresented, distorted through a game of Telephone.) This, naturally, feels alienating to many people, especially those who are new to EA. The opportunity cost of spending is very real and very major, in absolute terms; not recognising that can seem suspicious.

Given that the most egregious errors seem unforced, I think there are some easy wins, such as:

  • Treating expenditure with the moral seriousness it deserves. Even offhand or joking comments that take a flippant attitude to spending will often be seen as in bad taste, and apt to turn people off. Similarly, luxurious aesthetics are generally not a good look, especially on college campuses, and often don’t have commensurate benefits.
  • Thinking carefully about what you signal as well as what you do, especially if you’re in a position of high visibility. ‘Signalling’ is often very important! For example, the funding situation means I now take my personal giving more seriously, not less. I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call. And, even though it won’t be the right choice for most of us, we can still celebrate those people who do make very intense moral commitments, like the many kidney donors in EA, or a Christian EA I heard about recently who lives in a van on the campus of the tech company he works for, giving away everything above $3,000 per year.

Harming quality of thought

Another worry is that funding will negatively impact EA’s ability to think; that, insidiously, people will be incentivised to believe whatever will help them get funding, or that particular worldviews will get artificially inflated in virtue of receiving more funding than they should receive.

My guess is that culture is an even bigger worry here: that people, including funders, will go too far in deferring to those regarded as particularly smart, or will be too worried about deviating from what they see as consensus views within the community. But either way, having incorrect beliefs or focusing on the wrong things is an easy way for the EA community to lose almost all its value. And we want to reduce that risk as much as possible.

Again, I think there are actions we can take to mitigate this risk:

  • One partial solution is to diversify funding. For example, the goal of diversifying and decentralising decision-making was a major motivation behind the launch of Future Fund’s regrantors program. Now, if you have some heterodox idea, you can potentially receive funding from lots of different sources, from different people with different perspectives and networks, rather than just one. And I expect the pool of large EA-aligned donors or grantmakers to increase in the future.
  • Another is to champion independence of thought. It seems to me we do pretty well at this (depending on how you count, three to five of the 10 most-upvoted Forum posts are ‘critical’ or at least ‘critically self-reflective’ posts). But I’d love to see more high-quality research that critically engages with views that are widely held within the community.

Resentment

There’s a tough messaging challenge around the funding situation. On the one hand, we want to convey that people should be developing big, ambitious plans, and convey how much we need to scale up the community’s giving. Given that, it’s natural to feel disappointed or even resentful if you create such plans but then don’t receive funding.

But there’s always an opportunity cost, and the bar for receiving funding is still extremely high. We could easily use up the entirety of our potential funding on cash transfers, or clean tech funding, or pandemic preparedness technology, or compute for the most safety-conscious AI labs. Given this opportunity cost, even despite the scale-up in funding, most well-meaning projects still won’t get funded. For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it rejected over 95% of applications.

As a proportion of the world’s resources, EA-aligned financial resources are still tiny: the return on EA-aligned financial assets is less than a hundredth of Alphabet’s yearly revenue, and about one four-hundredth of the US defence budget. And they’re also tiny compared to the problems we face: despite about $160bn in annual official development assistance, and $630bn in annual (public and private) climate spending, extreme poverty and climate change are still serious problems.

To address this issue, how we talk about the situation is important:

  • As well as emphasising the importance of forming big plans, and the potential for projects to scale, we need to emphasise that it’s not easy to get funding for projects; the bar for last-dollar funding is very high.
  • In my experience, EA funders are highly aware that they’re going to get a lot of funding decisions wrong. But more could be done to emphasise that any funding processes are going to be imperfect, and many great opportunities won’t get funded even though, in a perfect world, they would be.

Losing evolutionary forces towards greater impact

One worry I’ve had is that availability of funding could mean we lose incentives towards excellence. If it’s too easy to get funding, then a mediocre project could just keep limping on, rather than improving itself; or a substandard project could continue, even though it would be better if it shut down and the people involved worked elsewhere.

Within the nonprofit world, there’s a general problem where, unlike unprofitable companies, bad nonprofits don’t die. We should worry that the same problem will affect us.

That said, this is something that I think donors are generally keeping in mind; many seed grants won’t be renewed, and if a project doesn’t seem like a good use of the time of the people running it, then it’s not likely to get funded.

One way we as a community can mitigate this concern further is to celebrate failures. For example, No Lean Season was an impressive-looking global development nonprofit; it was incubated at Evidence Action, and went through Y Combinator (in the same batch as CEA). But, after an RCT found that its impact was lower than they’d hoped, and after they had to terminate their relationship with a partner organisation, they shut down and published their reasons for doing so.

This is a socially weird thing to do, and very unusual within the nonprofit world. But it was awesome, and should be praised as such.

Risks of harm

There’s one huge difference between aiming to do good and aiming to make profit. If you set up a company aiming to make money, generally the very worst that can happen is that you go bankrupt; there’s a legal system in place that prevents you from getting burdened by arbitrarily large debt. However, if you set up a project aiming to do good, the amount of harm that you can do is basically unbounded.

This is a common worry in EA, and it’s extremely important as far as it goes. The standard solution is to communicate and cooperate with others with shared goals; if there’s a range of opinions on whether something is a good idea, then following the majority view is the right strategy. And, in practice, all the major funders closely communicate and coordinate, and behave cautiously; similarly, when someone is starting a new project, in my experience they tend to get extensive feedback from the community on the risks and benefits of that project.

Indeed, my honest take is that EAs are generally on the too-cautious end. As well as the unilateralist’s curse (where the most optimistic decision-maker determines what happens), there’s a risk of falling into what we could call the bureaucrat’s curse,10 where everyone has a veto over the actions of others; in such a situation, if everyone follows their own best guesses, then the most pessimistic decision-maker determines what happens. I’ve certainly seen something closer to the bureaucrat’s curse in play: if you’re getting feedback on your plans, and one person voices strong objections, it feels irresponsible to go ahead anyway, even in cases where you should. At its worst, I’ve seen the idea of unilateralism taken as a reason against competition within the EA ecosystem, as if all EA organisations should be monopolies.

The suggested examples of harmful projects I’ve heard about tend, in my view, not to come from people who take the unilateralist’s curse seriously, stay in close communication with the community, and then go ahead anyway. Instead, they were power-grabs all along, or were run by people who just didn’t care what others thought of their plans. In contrast, I’ve found that, for those who are highly concerned about unilateralism, if they end up doing something that might be harmful, they quickly receive feedback and course-correct.

Overall, risks of harm are something I think we’re actually managing pretty well as a community, and can keep managing well if we:

  • Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can
  • Remember that, in the standard solution to the unilateralist’s curse, it’s the median view that’s the right one (rather than the most optimistic or most pessimistic view)
  • Are highly willing to course-correct in response to feedback

Risks of omission: squandering the opportunity

There are a number of ways in which the influx of funding could cause real harm. Despite this, I don’t think harm by commission is the most likely way we’ll fail.

It seems to me to be more likely that we’ll fail by not being ambitious enough; by failing to take advantage of the situation we’re in, and simply not being able to use the resources we have for good ends.

It’s hard to internalise, intuitively, the loss from failing to do good things; the loss of value if, say, EA continued at its current giving levels, even though it ought to have scaled up more. For global health and development, the loss is clear and visceral: every year, people suffer and lives are lost. It’s harder to imagine for those concerned by existential risks. But one way to make the situation more vivid is to imagine you were in an “end of the world” movie with a clear and visible threat, like the incoming asteroid in Don’t Look Up. How would you act? For sure, you’d worry about doing the wrong thing. But the risk of failing by being unresponsive and simply not doing enough would probably weigh on you even more heavily.

There are a couple of reasons why I’m particularly worried about risks of omission. First, it’s just very hard to seriously scale up giving while spending the money effectively. It’ll involve enormous amounts of work from hundreds or thousands of people. Often, it’ll involve people doing things that just aren’t that enjoyable: management and scaling organisations to large sizes are rarely people’s favourite activities, and it will be challenging to incentivise enough people to do these things effectively.

To see how hard this is, we can look at existing foundations. The foundation that has most successfully scaled up its giving is the Gates Foundation: it gives out about $6bn per year, which is extremely impressive — far more than any other foundation.11 But it seems to me they are falling far short of their goals. Bill Gates and Melinda French Gates have said they want the foundation to spend all its assets within 20 years of their deaths, commenting: “The decision to use all of the foundation’s resources in this century underscores our optimism for progress and determination to do as much as possible, as soon as possible, to address the comparatively narrow set of issues we’ve chosen to focus on.”

Even though the Gates Foundation spends far more than any other foundation, since 2000 their total assets have increased from $100 billion12 to $316 billion13 — over a factor of 3. They’re distributing close to $6bn per year, but that’s less than half the return they get on their total financial assets (at 5% real return per year). Given the ages of Bill Gates and Melinda French Gates, they should expect to live approximately another 30 years. But in order to spend down their assets within 50 years, if the foundation distributed a fixed amount every year, it would need to give out over $17bn per year.
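
The $17bn figure comes from a standard fixed-payment (annuity) calculation; here’s a sketch under the same 5% real-return assumption:

```python
# Fixed annual payout that exhausts a pot over n years at real return r,
# using the standard annuity formula: payment = r * PV / (1 - (1 + r) ** -n).
def spend_down_payment(present_value: float, r: float, years: int) -> float:
    return r * present_value / (1 - (1 + r) ** -years)

print(spend_down_payment(316e9, 0.05, 50) / 1e9)  # about 17.3, i.e. over $17bn per year
```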

I don’t want to make any claims about the tricky question of the optimal rate of giving over time. But we should at least feel the potential loss, here, if scaling up too slowly means that less good is done.

A second reason why I’m worried about scaling too slowly, or plateauing at too low a level, is that there are asymmetric costs to trying to do big things versus being cautious. Compare: (i) How many times can you think of an organisation being criticised for not being effective enough? and (ii) How many times can you think of someone being criticised for not founding an organisation that should have existed? (Or, if I hadn’t given a talk on earning to give at MIT in 2012,14 would anyone be berating me?) In general, you get public criticism for doing things and making mistakes, not for failing to do anything at all.

The asymmetric costs are especially worrying when salaries represent only a tiny fraction of the value you create, which is especially true for nonprofit projects. VCs struggle to get entrepreneurs to be ambitious and risk-taking enough: the solution that has emerged is to pay successful entrepreneurs huge amounts of money. A successful EA megaproject might generate far more value for the world than Uber (for example), but, even if EA salaries were to increase enormously, the founders and early employees would still get paid much less than the founders and early employees of Uber.15

The importance of finding ways to scale our giving also changes how we should think about grantmaking. Early EA culture was built on a highly skeptical mindset. This is still important in many ways (this post by Holden on ‘minimal trust investigations’ is one of my favourite blog posts of the last year). But it can cause us to go awry if it means we don’t take chances of upside seriously, or when we focus our concern on false positives rather than false negatives.

I worry we’ve made some errors in the past by not taking the chance of best-case scenarios seriously, out of a desire to be rigorous and skeptical. For example, I mentioned that Toby initially estimated the value of a Giving What We Can Pledge at $70,000 (as one example of quantifying the benefits of outreach and community-building more generally). I remember having arguments with people who claimed that estimate was too optimistic. But take the 7,000 Giving What We Can members, and assume that none of them give anything apart from Sam Bankman-Fried, who gives his net worth. Then a Pledge was actually worth $2 million — 30 times higher than Toby’s “optimistic” estimate at the time.16 In general, if our successes are sampling from a heavy-tailed distribution, the historical average value of our impact will very likely be lower than the true mean.
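
The arithmetic here is just total donations divided by the number of pledges. In the sketch below, the ~$14bn total is back-solved from the $2 million figure (it’s my hypothetical input, not a number stated in this post):

```python
# Average value of a GWWC pledge if one pledger's donations dominate the total.
pledgers = 7_000
total_donations = 14e9   # hypothetical: back-solved from the $2mn-per-pledge figure
value_per_pledge = total_donations / pledgers
print(value_per_pledge / 1e6)     # about 2.0, i.e. roughly $2mn per pledge
print(value_per_pledge / 70_000)  # roughly 30x Toby's original $70,000 estimate
```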

And when we look to future community-building efforts, the asymmetry of upside and downside suggests to me that, if we put the risk of harm to one side, we should be much more concerned about missing opportunities for impact than about spending money in ways that don’t have impact.

It’s easiest to quantify when looking at earning to give (but is in no way limited to that). We’ve seen, now, that EA outreach can inspire people to earn to give in ways that put them on track to donate hundreds of millions of dollars or more. That means the worry about missing out on opportunities to change people’s careers should, for the time being, loom larger than the worry about overspending.

(Quantitatively: suppose $200 is spent on an intro to EA retreat for someone. If that has more than a one in five hundred thousand chance of inspiring the attendee to earn to give and successfully donate $100 million over their lifetime, then the expected financial benefit is positive. Given the successes we’ve seen, both from FTX and outside of it, the real probability is orders of magnitude larger. That’s not to say $200 on a retreat is how much should be spent — if you can have the same impact at lower cost, you should. And excessive spending can even become counterproductive if it sends the wrong message. But it indicates just how small community-building spending is in comparison to the potential benefits from changing people’s careers for the better.)
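
The break-even probability in that parenthetical is simple expected-value arithmetic; here’s a sketch:

```python
# Break-even probability for the $200 retreat example: expected value is
# positive when p * payoff > cost.
cost = 200.0    # spend on an intro retreat for one attendee
payoff = 100e6  # lifetime donations in the success scenario
breakeven_p = cost / payoff
print(breakeven_p)                   # 2e-06, i.e. one in 500,000
print(breakeven_p * payoff >= cost)  # True: expected benefit covers the cost
```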

The need to scale changes the optimal approach to grantmaking in another way, too: it also means that making many small grants (small relative to the tens of billions of dollars per year we might need to spend) in order to find out, empirically, what things seem cost-effective, becomes well worth it.

Here’s a toy example. Suppose you give out 100 grants of $100,000 each. They all do nothing apart from one, which demonstrates a scalable way of absorbing $100 million at 120% of the cost-effectiveness of last-dollar spending. You’ve spent $10 million in order to gain impact equivalent to $20 million at last-dollar spending.17 It’s a good use of money, even though 99 of the grants achieved nothing.
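
Spelled out (and, as footnote 17 notes, ignoring inflation and investment returns):

```python
# The toy hits-based grantmaking example from the text.
n_grants, grant_size = 100, 100_000
total_spent = n_grants * grant_size  # $10mn across all grants
programme_size = 100e6               # the one "hit" absorbs $100mn...
relative_cost_effectiveness = 1.2    # ...at 120% of last-dollar cost-effectiveness
surplus_impact = programme_size * (relative_cost_effectiveness - 1)
print(total_spent / 1e6, surplus_impact / 1e6)  # 10.0 20.0: $10mn buys $20mn of extra impact
```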

I think this toy example often reflects reality. It’s much easier, and more reliable, to assess a project once it’s already been tried. If you need to scale giving dramatically, then often it makes sense to fund something and find out empirically how good it is, so that in two years’ time you can decide whether to stop funding altogether, or scale donations considerably. If the cost to fund and get the information is a small proportion of the giving you hope to scale up to, then it can be well worth just making the grant and figuring out how cost-effective it is later on if it seems potentially promising as something to scale. (A similar thought lies in part behind Future Fund’s 2022 goal of doing “bold and decisive tests of highly scalable funding models.”)

An incredible opportunity

It’s easy to feel stressed about the current situation. But paralysing anxiety or insomnia-inducing stress probably aren’t the attitudes that will help you have the most long-term impact.

So let’s reframe things, for a moment at least.

A classic reason why people feel unmotivated to do good things is that their contribution will be just “a drop in the bucket.” A helpful psychological response is to think of the impact that a community you’re part of, working together on that problem, will have.

When we think of the impact EA has had so far, it’s pretty inspiring. Let’s just take one organisation: Against Malaria Foundation. Since its founding, it has raised $460 million, in large part because of GiveWell’s recommendation. Because of that funding, 400 million people have been protected against malaria for two years each; that’s a third of the population of sub-Saharan Africa. It’s saved on the order of 100,000 lives — the population of a small city.

We did that.

And the current funding situation means that is just the beginning.

The amount of potential funding is still very small from the perspective of the world as a whole. But we’re now at a stage where we can plausibly have a very significant impact on some of the world’s biggest problems. Could we reduce existential risk from AI or pandemics by more than 10%, eradicate a global disease, or bring forward the end of factory farming by a year? Probably.

We should be judiciously ambitious. Achieving the sort of impact we’re now capable of means being sensitive to the risks of ambition, and to the downsides of spending funds, for sure. But it also means using the opportunities we have available to us. We should think big and be willing to take bold actions, while mitigating the risks. If we can manage to do both of these at once, we as a community can achieve some amazing things.

Appendix: how fast should we be scaling funding?

It’s non-obvious to me what the ideal rate of distributing funding should be, although my fairly strong best guess is that we should be scaling up to giving much more than we are at the moment.

Briefly, the main reasons I see in favour of giving more are:

  • To date, community-building has been an outstanding investment.
  • Even on fairly patient philanthropy models, we should be spending more than we currently are.
  • We have more impact in worlds where there is even more potential funding in a few years’ time, so we should be particularly prepared for those scenarios.
  • Option value: if we build the infrastructure to productively absorb funding, then we can choose not to use it if it turns out to not be necessary; whereas if we don’t build the infrastructure now, then it will take time to do so if in a few years’ time it does turn out to be necessary.
  • At least within longtermist funding, the lack of scalable funding opportunities is a turn-off for some potential donors; creating new scalable funding opportunities can lead to more funding overall in the long run.
  • The chance of near-term hinge moments, such as transformative AI within the next 20 years.

In my mind, the strongest cases against dramatically scaling up our giving are:

  • We might do it badly, with negative cultural consequences for our movement that are hard to reverse.
  • There might be some future time where it would be extremely valuable to spend truly enormous amounts of funding. This could be on compute in the run-up to AGI, or on rapid responses to a worst-case pandemic.
  • Perhaps the returns to giving diminish extremely rapidly, and scaling up our giving is a distraction from other things we could be doing.


Notes and references

  1. I considered suggesting the slogan “Move fast and don’t break things” to encapsulate this, but I thought that “move fast” isn’t really the right framing for ambition: setting up a massively scalable project might mean being small and testing things out for years in order to put yourself in a position to grow enormously.

  2. I don’t know exactly what the situation was like for GiveWell in New York or for the LessWrong / SingInst crowd in the Bay, but it wasn’t radically different: salaries were low, and funding was scarce.

  3. By “potential funding” I mean financial assets from people who plan to give those financial assets away to EA-aligned causes.

  4. For example, Gary Wang has a publicly estimated net worth of $5.9 billion, and plans to use the majority of that for EA-aligned goals.

  5. For example, in a very simple model by Phil Trammell, if you find yourself at a moderately influential time, you should spend 2.4% of your resources per year. On this simple model, if you think that “influentialness” (or “hingeyness”) will be roughly flat over the next fifty years, but will permanently fall to 1/10 of its current level soon after that, then you should give out 0.83% each year. (These numbers shouldn’t be relied on though; they are intended only to be illustrative.)

  6. If you were sceptical of replicating FTX-esque success, then that number might drop — but I think even so it should be much higher than 10:1.

  7. Note that the ‘community-building’ argument also justifies significant funding of direct work, simply because a movement that never does anything of actual value is not very compelling.

  8. What’s weird but important to bear in mind is that the perception of extravagance often has little to do with the amount of money actually being spent. One organisation I know hosted a conference at a very fancy venue: in reality, the venue is owned by a charitable foundation, so was a comparatively cheap option. But it looked extremely grand, and there were a number of complaints.

  9. In some cases, an extravagant lifestyle can even produce a lot of good, depending on the circumstances. I know of some people who have attended luxurious parties, met major philanthropists there, and gotten them involved in EA. It’s not my preferred moral aesthetic, but the world’s problems don’t care about my aesthetics. (Needless to say, if you find yourself in this unusual position, you should probably take special care to make sure that attending luxurious parties really is the way you can do the most good.)

  10. H/T Nick Beckstead

  11. I believe the foundation that distributes the second-largest amount per year is Wellcome, which in 2020/21 gave out £1.2 billion.

  12. Inflation-adjusted to today’s money

  13. Including the pledge from Warren Buffett to give almost all his wealth, which is currently $120 billion.

  14. https://80000hours.org/stories/sam-bankman-fried/

  15. Of course, relative to Uber there is the additional “incentive” of the impact that the project will create, which mitigates this issue.

  16. Of course, there’s plenty to argue with in the estimate, but I don’t think it changes the core point.

  17. In this toy example, I’m ignoring inflation and investment returns, and assuming that you can’t in advance identify which project is a “hit”. See also https://www.openphilanthropy.org/blog/hits-based-giving