Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

It feels kind of morally appropriate, relative to the problems of the world — the severity of poverty, the amount of suffering, the gravity of the risks we face — to be in a basement, working in this close environment.

What feels morally appropriate is different from what’s actually best.

Will MacAskill

Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you’re all bunched up on a few tables in a basement office.

But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You’re the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.

You suddenly have the opportunity to make more progress than ever before, but alongside the excitement, you also worry about the effects that large amounts of funding can have.

This is roughly the situation faced by today’s guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.

Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.

While this is surely a huge success, it brings with it risks that he’s never had to consider before:

  • Will and his colleagues might spend a lot of money trying to get more done more quickly — but actually just waste it.
  • Being seen as profligate could strike onlookers as selfish and disreputable.
  • Folks might start pretending to agree with their agenda just to get grants.
  • People working on nearby issues that are less flush with funding may end up resentful.
  • People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
  • Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

But all these ‘risks of commission’ have to be weighed against ‘risks of omission’: the failure to achieve all you could have if you’d been truly ambitious.

It’s unpleasant when people look askance at you for paying high salaries to attract the staff you want.

But failing to prevent the next pandemic because you didn’t have the necessary medical experts on your grantmaking team is worse than unpleasant — it’s a true disaster. Yet few will complain, because they’ll never know what might have been if you’d only set frugality aside.

Will aims to strike a sensible balance between these competing errors, which he has taken to calling ‘judicious ambition’. In today’s episode, Rob and Will discuss the above as well as:

  • Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
  • Why are so many nonfiction books full of factual errors?
  • How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
  • What does Will disagree with his colleagues on?
  • Should we focus on existential risks more or less the same way, whether we care about future generations or not?
  • Are potatoes one of the most important technologies ever developed?
  • And plenty more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore


A culture of ambition within effective altruism

Will MacAskill: Why, if you’re altruistic, should you try and be very ambitious, or have ambitious plans? I think there’s a few reasons for that. One is more theoretical: if you think the distribution of impact is a fat-tailed distribution, that means that the mean impact is bigger than the median. So if you just aim to do kind of a typical amount of good, then you’re going to do much less good in expectation — and I think plausibly it’s the expected value of the good that you do that matters — than if you’re really aiming for the best. And that’s because, in the nature of the distribution, the best outcomes are just way, way better than the typical outcomes.

Will MacAskill: So there’s this theoretical case, and we’ve seen that in practice as well. I think it’s clearest if you look at people earning to give, because there, we can just put a number on the impact they’re going to have — at least the impact in terms of donations — and the distribution of how much money you can make from the careers you might go into, earning to give, is just clearly this fat-tailed distribution.

Will MacAskill: When we look at the people who’ve earned the most in order to donate via earning to give, well, it’s enormously driven by those at FTX now. Dustin Moskovitz is obviously another major donor — my understanding is that he got into giving after cofounding Facebook; I don’t know about his motivations before that, whether he was always planning to give. But certainly for Sam Bankman-Fried and the other early employees at FTX, their aim was to make money in order to donate it. And the amount they’ve raised now is just very large compared to how much they could have made, if Sam had stayed working at Jane Street, for example, even though Jane Street’s a very well-paying quantitative trading firm.

Will MacAskill: So that’s just an illustration in practice of this fact that impact can be fat tailed. We focused on earning to give — that’s nice and quantifiable. But I’m sure this is true as well in other areas. Within politics, if you manage to become a head of state, then you’re having very unusually large impact. If you’re doing research, like being one of the seminal researchers, you’re going to have enormous impact compared to if you are a typical researcher.

Will MacAskill: And that really means that if you want to do as much good in expectation as possible, ideally you should be aiming for those best-case outcomes. And probably you’ll fail — more likely than not. When Sam set up FTX, he thought there was a 20% chance of it succeeding — and at the time, he thought he was being relatively optimistic. But obviously, he did end up being successful. But even if you divide that by five, expected value —

Rob Wiblin: It’s still a big number. Yeah.

Will MacAskill: It’s still bigger, and it’s bigger than what he would’ve made if he’d stayed at Jane Street. Similarly, when people have looked into this — like, “Should you go into politics?” — obviously it’s extremely hard to become an MP in the UK. And even harder again to become, say, the prime minister. But the chance of achieving one of these best-case outcomes is plausibly where much of the expected value comes from. And I think the same could be true for research and other career paths.
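The fat-tail point can be made concrete with a small simulation. This is only an illustrative sketch: the lognormal distribution and all the payoff numbers below are assumptions chosen to show the shape of the argument, not estimates of any real career.

```python
import random
import statistics

random.seed(0)

# A lognormal distribution is a simple stand-in for a fat-tailed
# impact distribution (the parameters are illustrative, not estimates).
samples = [random.lognormvariate(0, 2) for _ in range(100_000)]

mean = statistics.fmean(samples)
median = statistics.median(samples)
print(f"mean = {mean:.2f}, median = {median:.2f}")  # the mean far exceeds the median

# A long shot can beat a sure thing in expectation:
# an assumed 20% chance at 100 units vs. a guaranteed 5 units.
long_shot_ev = 0.20 * 100   # expected value of the long shot
sure_thing_ev = 1.00 * 5    # expected value of the sure thing
print(long_shot_ev > sure_thing_ev)  # True
```

With these parameters the lognormal’s true median is exp(0) = 1 while its mean is exp(2) ≈ 7.4, so someone who only aims for the "typical" outcome captures a small fraction of the expected value.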

Barriers to ambition

Will MacAskill: I think the biggest barrier is just taking seriously what some of these best-case scenarios could look like. A second is then often just asking how you could do that more quickly. There’s this classic Silicon Valley question of, “What’s the best possible success you can imagine for the project you’re working on? Now how could you do that in six months?” Often, at least, achieving very big things does mean having to go somewhat quickly as well. Again, for all of this stuff, I should emphasize that we need to pay attention to the risks of harm as well. So don’t do crazy things that are going to taint the well of some area, or promote infohazards, or something like that. I think the biggest thing is just actually thinking about, “What are some of the best-case outcomes I could imagine?”

Will MacAskill: A second important thing is doing things such that you feel OK about them even if you fail. It’s interesting: people go into science because they’re intellectually motivated, and they just want to do it for its own sake. And there’s this strong norm in science that that’s kind of what you ought to do as well — that it’s kind of weird to be going in with the aim of pursuing impact. I have this hypothesis that that’s actually kind of irrational as a norm, because the people who go in and try to have impact are trying to get a guarantee of impact, and that means doing something that’s not actually very good.

Rob Wiblin: You’re saying that, perversely, they’ll be motivated to be less honest with themselves than someone who’s motivated just by the truth.

Will MacAskill: Or they’ll just be motivated to do something that’s not very fundamental or important. So you could do something that is taking some existing science and applying it to some particular area. Whereas instead — this at least is the argument — having a 1 in 100 chance of some really fundamental contribution is actually just much more important in expectation.

Rob Wiblin: I see.

Will MacAskill: So I think actually, that doesn’t motivate this scientific norm of, “Just do whatever you’re interested in,” because I’d be really surprised if, say, a marginal string theorist was going to be as impactful as other things you could go into. But the underlying idea is that you should be pursuing something such that, even though you know you’re probably not going to have enormous impact — in fact, your impact might be close to zero — you’ll nonetheless still be motivated and able to pursue it. Because that’s the real challenge, I think. Some people are maybe just happy to go with the EV calculations, but I think that’s an unusual psychology.

Rob Wiblin: You’re saying that they’re happy if, after the fact, things have fallen apart and they haven’t done anything good, they can look back and say, “Well, I made the right decision ex ante. At every point, I made a reasonable call on what to do. And so I’m satisfied.”

Will MacAskill: Yeah. That can be difficult, but you could be doing something that you’re intrinsically motivated by, such that, “Well, OK, it didn’t have any impact in the end, but I feel very happy. It was an intellectually rewarding life, or a rewarding life in other ways.”

Why focus on using money now?

Will MacAskill: I mean, my views on the patient philanthropy arguments haven’t changed all that much. I still think that most of the good that we will be doing will come more than 10 years from now, rather than in the next 10 years. But there’s two things. One is that I think even if you have quite an extreme patient philanthropy view, you should still be donating some amount. And it turns out that the financial assets that we are advising have just grown so much in the last five years that we need to be rapidly scaling up the giving, even if you think we should only be giving a small percentage and still accumulating.

Will MacAskill: Because really there’s two updates from the success of FTX. One is obviously now there’s more total financial assets that we’re advising. But secondly, we should probably expect there to be even more in the future as well from other major donors getting involved, or other successful people earning to give. And that really means that —

Rob Wiblin: You have to run just to stand still. Or you have to make a lot of grants just to avoid —

Will MacAskill: Exactly. Yeah. And let’s just take the Gates Foundation as a foundation I really respect. Gates set up the Giving Pledge in, I think, 2010. Over that time he’s doubled his wealth.

Rob Wiblin: And he has been making a real effort to give it away.

Will MacAskill: He’s been making a real effort. They give more than, as I understand it, any other foundation per year. I think about six billion per year. They could have given more, and the world is developing — probably the best opportunities are drying up. Probably they ought to have been giving more, and potentially a lot more — like 10 billion per year or something, assuming the focus is on global health and development. And that’s just a real challenge. Because even at that rate, they’re like, “How do we spend even more?”

Will MacAskill: So a failure mode that we could get into, that I’m really worried about, is just being like any other foundation that ends up persisting for many, many decades — and losing out on most of the impact it could have by virtue of not really trying to decisively scale up its giving. So even to get to the point where, from the patient philanthropy point of view, you’re at the optimal level — maybe that’s donating a few percent per year — we’ve got to scale up our giving.

Rob Wiblin: That’s good. You’ve got a lot of work to do.

Will MacAskill: Exactly. Then the second thing is just that the returns from movement building seem much greater than the returns from financial investment. Again, if you look at the rate of return from some people going and earning to give, and from convincing others to do the same, and so on — I think Ben looked into this and suggested it was something like 30% per year.

Will MacAskill: But anyway, it’s certainly much higher than one can get as a market return. And obviously, as we discussed, even if you’re focused only on movement building, you should still be doing a lot of direct work — because a movement that’s only about growing itself is not a very convincing or effective movement.
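Two quantitative claims in this section come down to simple compounding: that assets can keep growing even while you give a fixed percentage away each year, and that a roughly 30%/yr movement-building return swamps a market return. A minimal sketch, with all rates as illustrative assumptions rather than estimates of real returns:

```python
# 1) "Run just to stand still": if assets compound faster than the
#    payout rate, holdings keep growing even while giving every year.
#    Rates below are illustrative assumptions.
assets = 100.0
growth_rate = 0.20   # assumed annual return on the assets being advised
payout_rate = 0.05   # assumed fraction donated each year
for _ in range(10):
    assets *= (1 + growth_rate) * (1 - payout_rate)
print(f"assets after 10 years of giving 5%/yr: {assets:.0f}")  # still above 100

# 2) Movement building vs. financial investment: an assumed 30%/yr
#    "return" from movement building dwarfs an assumed 7%/yr market return.
movement = market = 1.0
for _ in range(10):
    movement *= 1.30
    market *= 1.07
print(f"10-year multiple: movement {movement:.1f}x vs market {market:.1f}x")
```

Under these assumed rates, assets roughly triple over a decade despite the annual payout, and the movement-building multiple is several times the market multiple, which is the sense in which you have to scale grants rapidly just to hold assets steady.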

Downsides and risks from the increase in funding

Will MacAskill: I think there are many risks and many things we should be paying attention to. I’m very happy to see this discussion in the EA Forum a bit recently as well. One is just extravagance… It feels kind of morally appropriate, relative to the problems of the world — the severity of poverty, the amount of suffering, the gravity of the risks we face — to be in a basement, working in this close environment. What feels morally appropriate is different from what’s actually best. But there are risks if we just start throwing money around or something. Take a third party: they’re not going to be able to tell. So we want to make sure that we’re still conveying the level of moral seriousness that we really have.

Will MacAskill: And so, it does mean some things. Like my attitude to my own giving, for example: I’ve increased my commitment to that. It’s kind of this funny thing. When I first started donating, I was like, “That’s going to be most of my impact in my life, so this is really important.” And now it’s like, “Raising a lot of money is not really how I’m going to have impact.” So I don’t know, I felt more confused about it. Now, I’m like, “OK, no. It’s more important again.” Because now that we’ve got so much funding, I want to really demonstrate that I am not doing this for any sort of financial gain to myself — I’m doing it because I want to try and make the world better. And I think that could be true for other sorts of moral commitments as well, like being vegetarian or vegan. I know someone who donated a kidney, and I’m like, “OK, this guy is morally serious…”

Will MacAskill: I think there’s a few issues related to mission drift here. One is that other people might start joining the community, not with the best of intentions. That’s something we should be on guard for. Another thing that we should worry about is if people start having motivated reasoning. If there’s some things that donors believe, and others don’t, then, “Oh, maybe it’s more convenient. I’ll just believe that as well.”

Will MacAskill: So we are trying to combat that quite directly. Future Fund has this regrantors program. People now will have their own independent funds that they can distribute. And it’s presumptively approved, precisely in order to avoid this intense consolidation of funding power. We’re also considering having “change our worldview” prizes. So if you can shift our credences on important topics — like what’s the magnitude of AI existential risk or something — if you can shift them by a certain amount, you win a prize, you get money. I think that’s another way in which this influx of resources could be bad, and one we really need to guard against.

Will MacAskill: And then a final worry is something that’s less explicit motivated reasoning and more just that you lose an evolutionary pressure that we want to cultivate, and definitely previously had cultivated. So with startups and companies, those that aren’t profitable go out of business. And there’s this classic problem in the nonprofit world that that doesn’t happen for bad charities. There would definitely be a worry that if there’s plenty of funding for various EA-oriented projects, the bad ones might still manage to get funding. And they kind of putter along, even if the people would be better used somewhere else.

Will MacAskill: And everyone has good intentions. People are maybe a bit biased in favor of their own thing, but that’s extremely natural. So there’s nothing untoward going on. But you’ve lost a bit of what might be kind of healthy ruthlessness — where certain bad projects, or projects that are not as effective, shouldn’t keep going. I think that means we should really celebrate organizations that just choose to not exist.

Will MacAskill: No Lean Season was an example of this. It’s a bit of a shame, because people forget about them because they’re not around anymore. But No Lean Season was an excellent organization working on promoting seasonal migration to the cities, which was beneficial economically. They went through Y Combinator — I was in the same batch as one of them — so they were clearly very promising. They did an RCT on the program. It wasn’t as effective as they had initially hoped. They just shut down. I was just like, “This is amazing. When does this happen? This is so good.”

Will MacAskill: This is particularly important if we’ve got this kind of culture of ambition, framing this like: “Really try and aim if you can for the best possible outcome, while avoiding risks of doing harm, because most attempts will fail.” That means if you have 10 people, let’s say they all try and do their nonprofit startup. One of them perhaps really crushes it and is the best. Probably, what should happen is the other nine shut down and join that one. And that’s very hard psychologically. That takes some kind of cultural engineering to encourage that as something that can be really rewarded.


Rob Wiblin: One theme of the book is challenging the idea that there’s been this inevitable convergence on the better modern values that we have today: that as we got richer and more educated, we had to converge on the values that we have now. You argue that things could have gone off in many different directions. I think if there’s one thing you learn from studying history, it’s that people can believe all kinds of different, crazy, crazy things. Well, crazy things from my point of view, or just absolutely outrageously different cosmologies.

Will MacAskill: Yeah. Historians in general tend to emphasize this idea of contingency. Where when I say that something is contingent, I don’t mean it was an accident that it happened; I just mean that it could have plausibly gone a different way. So an example that I like to use of contingency and what I would call bad lock-in is the song “Happy Birthday.” The melody for “Happy Birthday” is really atrocious. It’s like a dirge. It has this really large interval; no one can sing it. I can’t sing it. And so really, if you were going to pick a tune to be the most widely sung song in the world, it wouldn’t be that tune. Yet, it’s the one that took off, and it’s now in almost all major languages. That’s the tune you sing for “Happy Birthday.”

Will MacAskill: You get one piece of music that becomes the standard, and then other people could create a different happy birthday song, but it won’t take off because that’s not the standard. And this is like, why does it persist? Well, partly because it’s just not that big a deal. But I think it’s an illustration of the fact that you can get something that is quite contingent. It could have been otherwise. “Happy Birthday” could have had one of many different tunes — it could have been better. We would all be slightly happier if we were singing a different happy birthday tune, but we’re stuck with it.

Will MacAskill: Then there’s other things if you look. Again, I like to think of non-moral examples to begin with. So it’s now fading out of fashion, but take the fact that we all wear neckties in business. It’s such a bizarre item of clothing. There’s obviously no reason why wearing a bit of cloth around your neck would indicate status or prestige. We could have had different indicators of that. Yet that is, as a matter of fact, the thing that took off. That’s another thing when we look at these non-moral examples —

Rob Wiblin: I guess there, people are pretty happy to accept that it is random.

Will MacAskill: Yeah, exactly. There’s just this fundamental arbitrariness. But then where things get really interesting is the question of how much is that true for moral views? Certainly, it’s not going to be true for many sorts of values. The idea that it’s important to look after your children or something. A society that didn’t value that probably wouldn’t do very well as a society.

Rob Wiblin: It’s at a competitive disadvantage. So there’s selection pressure going on there.

Will MacAskill: Yeah, exactly. Whereas other moral values, I think we can say they really do seem contingent, and it’s perhaps quite uncontroversial. Attitude to diet, for example. Most of the world’s vegetarians live in India, where something like a third of the population are vegetarian. Why is that? Well, it’s clearly just because of religious tradition. Similarly, why Muslims or Jews tend to not eat pork — again, we can just point to the religious tradition. And there’s clear variation in people’s diet, and that really makes a moral difference as well. So that’s, I think, a very clear-cut case of moral contingency.

How is Will doing?

Rob Wiblin: So basically, you’ve been burning out somewhat, or at risk of burning out, at least very recently.

Will MacAskill: A little bit. But the positive lesson is that having been so attentive to the idea of “it’s a marathon, not a sprint” for many years before that did allow me to find an extra gear for these last few months. And I will return to that again. I’m taking some time off literally next week. Maybe things are feeling more set up with the Future Fund. Obviously the book launch will be intense, but I’m going to move back to a more sustainable state. And because I had invested in myself, I’m now just far happier than I was, say, 13 years ago. And progressively so: I estimate I’m five to 10 times happier than I was.

Rob Wiblin: So you’ve been public about having had depression for quite an extended time, as a teenager and as an undergraduate as well, and about having spent a lot of time working on that. How have you ended up five or 10 times happier? It sounds like a large multiple.

Will MacAskill: One part of it is that back then I was still positive, but somewhat close to zero. But also, now I would say that if I think of my peer group, say friends from university and so on, I’d probably put myself in the happiest 10% or something. So that’s really pretty good.

Rob Wiblin: That’s so good.

Will MacAskill: I mean, I feel happy about it. And that’s been from just a wide variety of things, just over a very long time period. There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or just otherwise failing or something. And having a trigger action plan where, when that starts happening, I’m like, “OK, suddenly the top priority on my to-do list again is looking after my mental health.” Often that just means taking some time off, working out, meditating, and perhaps also journaling as well to recognize that I’m being a little bit crazy.

Rob Wiblin: Overriding the natural instincts.

Will MacAskill: Exactly, yeah. So perhaps I’ll be feeling on a given day, “I’m being so slow at this. Other people would be so much more productive. I’m feeling inadequate.” Then I can be like, “OK, sure. Maybe today is not going so well. But how were the last six months? Think about what you achieved in that time.” And then it’s like, “OK, that does seem better.” So I’ve just gotten better at these mental habits. It’s been a very long process, but it’s really paid off, I think.

Articles, books, and other media discussed in the show

Free book giveaway!

80,000 Hours is offering a free book giveaway! All you have to do is sign up for our newsletter. There are three books on offer:

  • Doing Good Better: How Effective Altruism Can Help You Help Others, Do Work that Matters, and Make Smarter Choices about Giving Back by today’s guest, Will MacAskill
  • The Precipice: Existential Risk and the Future of Humanity by Oxford philosopher Toby Ord (also available as an audiobook!)
  • 80,000 Hours: Find a Fulfilling Career That Does Good by 80,000 Hours cofounder and president Benjamin Todd


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
