Transcript
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and the tragedy of ‘Happy Birthday’ lock-in. I’m Rob Wiblin, Head of Research at 80,000 Hours.
If you know of one person involved in the effective altruism community, there’s a good chance it’s the Oxford philosopher Will MacAskill.
His previous appearances on the show were #68 on the paralysis argument, whether we’re at the hinge of history, and his new priorities; and #17 on moral uncertainty, utilitarianism, and how to avoid being a moral monster.
The first two episodes were audience favourites and I imagine the MacAskill charm will make this a favourite as well.
Here we preview his upcoming book, and discuss mental health, work-life balance, how contingent good moral values are, and the huge changes that we’ve seen in the effective philanthropy space in recent years.
Speaking of books, here at 80,000 Hours we recently started a free book giveaway, which you can take advantage of at 80000hours dot org slash freebook. There’s three books on offer.
The first is The Precipice: Existential Risk and the Future of Humanity by Oxford philosopher Toby Ord which we discussed with him in episode 72.
It’s about the greatest threats facing humanity, and the strategies we can use today to safeguard our future.
The second is called 80,000 Hours: Find a Fulfilling Career That Does Good by Benjamin Todd — that’s a book version of our guide to planning your career.
The third is Doing Good Better: Effective Altruism and How You Can Make a Difference by today’s guest Will MacAskill.
We pay for shipping and can send the book to almost anywhere in the world.
If, like me, you prefer audiobooks, then we can offer you an audio copy of The Precipice.
The only thing you need to do to get one of these free books is sign up to our email newsletter.
On average we send one newsletter email a week, usually letting you know about some new research on high-impact careers going up on the website, or about a new batch of job opportunities going up on our job board.
The email newsletter is pretty great, but if you decide you don’t like it you can always unsubscribe. Indeed, if you just want a book, I give you approval to immediately unsubscribe after ordering your book — that’s totally legit.
We actually launched this giveaway a bit earlier in the year, but we had to delay announcing it to you all because the offer was so popular our poor book orderers were flat out shipping all the books people were already asking for, and probably couldn’t handle the influx from all you podcast listeners.
But they’ve increased their capacity and so are now standing by to quickly turn around your requests.
If you’d like to take advantage of that and get one of those books, then just head to 80000hours dot org slash freebook.
We’d be more than happy for you to tell your friends about the giveaway if they might be interested in one of those books.
All right, without further ado, I bring you Will MacAskill.
The interview begins [00:02:41]
Rob Wiblin: Today I’m speaking with Will MacAskill, who will be well known to many people as a cofounder of the effective altruism community. Officially, Will wears many different hats, including Associate Professor of Philosophy at Oxford University, Director of the Forethought Foundation for Global Priorities Research, and now an advisor to the Future Fund. In his academic capacity, Will has published in philosophy journals such as Mind, Ethics, and The Journal of Philosophy. And in his capacity as entrepreneur, he cofounded Giving What We Can, the Centre for Effective Altruism, and our very own 80,000 Hours, where he remains a trustee on our various different boards.
Rob Wiblin: He’s the author of Doing Good Better, a coauthor of Moral Uncertainty, and in August, his third book will be out — titled What We Owe The Future — this time covering longtermism. This is also his third appearance on the show, the previous ones being episode 68, when we talked about the paralysis argument and whether we’re at the hinge of history, and episode 17, when we spoke about moral uncertainty and utilitarianism. Thanks for returning to the podcast, Will.
Will MacAskill: Thanks so much for having me on.
Rob Wiblin: I hope we’re going to get to talk about how the effective altruism community has been progressing in leaps and bounds over the last few years, and in what directions you would ideally like to nudge it. But first, what have you been up to since you were last on the podcast in January 2020? It feels like a lifetime ago.
Will MacAskill: Feels like a century ago. So the big thing was writing this book, What We Owe The Future. I’d been planning to write it for a couple of years. Before that, I’d been working on it part-time in the background. Then basically, when the pandemic hit, I thought, “Well, there’s just never going to be a better opportunity in my life for writing a book.” And I really just wiped everything else off my plate and went into a book-related hole for what turned out to be two years. So that was really 90% of my focus from about March 2020 until December of last year.
Rob Wiblin: Yeah. You had a whole team of people working with you on that, I guess. We’ve had Luisa Rodriguez on our show before, talking about some of the work that she did to help you out. But I think she wasn’t the only one.
Will MacAskill: Yeah. Luisa was enormously helpful, both in managing part of the book and then also being an expert on global catastrophe and collapse. But yeah, there was a small army of people working on this book, especially by the end, where I had Luisa, and Laura Pomarius, and then Max Daniel in a Chief of Staff role, managing the whole book. Then I had some full-time researchers employed as well. The structure there was they had some specialty — so John Halstead‘s was climate change, Stephen Clare‘s was great power war — but then would also be able to help on fact checking in general.
Will MacAskill: Then we did actually have a fact checker who was essentially full time for probably close to a year of work. And then we just had this large body of consultants: people who were either writing one of the reports — for example, Jaime Sevilla wrote a report on persistence studies; Lukas Finnveden wrote a report on value lock-in — or were just providing in-depth advice. So from the early stages, I had Anton Howes, a historian, advising me on what the most interesting historical case studies could be to look at.
Rob Wiblin: Yeah. A lot of nonfiction books seem to have a lot of problems with them, or the things that they say often don’t really hold up, and it’s tempting to blame the authors for that. But to some degree, I think it’s a systematic thing: that actually, writing a book that has hundreds and hundreds of pages in it, and checking it all to make sure that all of your claims actually are substantiated by the references, is beyond the abilities of any one person. I guess, to some extent, you might have been able to avoid this just by throwing so many person-hours at the project.
Will MacAskill: It’s wild. My guess is that over a decade’s worth of time went into the book. And my estimate is that almost two years of that was just fact checking, particularly because we aimed for a high level of rigor. I suspect there will still be many mistakes in it, and that keeps me up at night. But just saying, “Oh, this study says this” or “The IPCC says this” — that would be the normal, reasonable level of rigor. If you really care about the things you’re saying being true, you have to dig deeper than that. Very often these claims that are passed around, they’re just…
Rob Wiblin: Like citation chains that lead to nowhere in particular. Or you look at the paper and you’re like, “Wow, this is hot garbage. I don’t believe this at all.”
Will MacAskill: Yeah, absolutely. I mean, we really learned the lesson of this over the course of COVID with the five-micron rule.
Rob Wiblin: Oh, what’s that?
Will MacAskill: This is the idea that anything above five microns cannot be airborne. That was just a definitional matter, and it went back to these studies in the ’20s and ’30s. And there was just basically no —
Rob Wiblin: It came from nowhere. Or I think someone made it up by accident, because they’d misread something else. I’m trying to remember. We’ll stick up a link to the article for listeners to check it out. But this was an extraordinary citation chain to nowhere, on which most public health was based.
Will MacAskill: Yeah. It’s a really striking thing. And my book does cover a lot of history, and there, it’s really tough, because often there will be just not that much scholarship in an area and it’s very hard to know.
Rob Wiblin: And almost any important claims you make are contested to some degree.
Will MacAskill: Exactly. And then it’s a bit of a shame that often there’s an anti-correlation between which facts are the most interesting — exactly the sort of facts you want to put into a general audience book — and which are true. So a lot of very interesting claims were cut at the final stage because they turned out to not be true, unfortunately.
Rob Wiblin: I bet a common situation that comes up is that nonfiction writers write their book, and then maybe towards the end, they realize that a lot of the supporting claims don’t really hold up. But by that stage, it’s so costly to back out of saying these things — they’d have to go back and rewrite the entire book, and let down their publisher, and so on — that they just kind of look the other way. They’re like, “Well, I’m committed to this path now.”
Will MacAskill: Yeah. That’s if books even get to the fact-checking stage. So a tiny, tiny percentage of nonfiction books are actually fact checked, and so they can have really egregious errors. But then when you do see books that have been fact checked — an example being Why We Sleep by Matthew Walker: that was fact checked and found to have egregious errors. That was brought up, and it’s just not really paid attention to. People didn’t really seem to care.
Rob Wiblin: Yeah. I actually looked into this a couple of months ago, because I was curious about whatever became of someone checking the first chapter of that book and just finding that it was wrong all over the place, despite being written by a Harvard professor who supposedly is an expert in sleep? He’s just going around giving the same talks. Nothing has changed, and no one raises this in his interviews anywhere. As far as I can tell, it’s just completely neglected. [Update since recording: Matthew Walker did discuss this on Peter Attia’s podcast in 2020.]
Will MacAskill: Yeah. It’s a shame.
What We Owe The Future preview [00:09:23]
Rob Wiblin: So we’re going to do a full episode later in the year on What We Owe The Future when it’s actually available for listeners to read and follow along the conversation. But to whet people’s appetites, what parts of the book do you think the sorts of folk listening to this interview might be most excited to read when it comes out?
Will MacAskill: I think there’s a few aspects. So later in the book, I get into population ethics. It was very hard to try and give an overview of population ethics aimed at people who didn’t have a philosophy background, because it’s normally graduate-level study. But I hopefully managed to do that, and especially its relevance to thinking about how we can benefit the long term.
Will MacAskill: Similarly, I did a pretty deep dive into this question of the value of the future, and whether we should expect it to be good or bad — including just asking the question, “Is now good?” Just take the world in 2022, or if you want to pick a different year, 2018 or something: is that more positive than negative?
Rob Wiblin: I love that you did that. Because as far as I know, I’ve never seen anyone write that up before. At least not anyone who’s not a friend of mine.
Will MacAskill: Yeah, exactly. So there had been EA work on it that I had drawn on, but nothing that had done the deep dive in quite the way that we tried to do. And obviously, there’s still an enormous amount more that could have been written — it’s just one chapter. So later on I start to really deal with these more philosophical issues.
Will MacAskill: I also have a chapter on something that’s fairly neglected, which is the importance of long-run technological stagnation. The argument there is not so much that we might stagnate and never get out of it — I think that’s quite unlikely. But instead, if we stagnated during what Carl Sagan called the “time of perils” — just during a period of heightened extinction risk — that could greatly increase extinction risk overall. We might have the power to create dangerous bioweapons, but not the technological power to defend against them, for example. And that’s a reason for being concerned about technological stagnation.
Rob Wiblin: By getting stuck in a dangerous state for an extended period of time?
Will MacAskill: Exactly. Yeah.
Rob Wiblin: Leveling off for a while.
Will MacAskill: Exactly. Then the main thing is probably I focus much more on the idea of values, and the contingency of values, the importance of promoting better values, and the possibility of value lock-in than normally gets discussed in longtermism.
Rob Wiblin: Yeah. I guess you’re fixing a longstanding problem that we’ve recognized with effective altruism, or the research projects and the stuff that we talk about and prioritize. I feel like for years, people have said, including both of us, “We don’t really ever talk about this values thing.” But isn’t it possible that we could avoid going extinct and then produce no value because we just have the wrong ideas about what’s good? And now you’re really putting it front and center. It seems like the book will hopefully be big enough that it can maybe fix this problem in one fell swoop.
Will MacAskill: Yeah. I hope so. Towards the end of writing the book and having further discussions about it, I started to appreciate that there are certain issues where there’s really quite a lot of disagreement within those who consider themselves longtermists, which really makes a large difference to how you conceptually think about what we ought to do with respect to making the long-term future go better. And just to how we should categorize things as well.
Will MacAskill: And the key difference there, it seems to me, is whether — from a very wide variety of worlds, of ways the present could be — you expect us to converge onto basically the right thing to do, some very good outcome. Or do you think it’s actually really quite contingent, and depends on the values that are present at the moment that could really shape the value of the very long term?
Will MacAskill: You could think, as a simplification, “How much does it matter who gets AGI if the AI alignment problem is solved?” So assume that we’ve managed to solve the risk of the misaligned AI. Does it really matter if it’s me, or you, or Julia Wise, or Kim Jong-un, or Hitler? How big are the value differences there? One view is just that anyone who has that level of intelligence at their disposal will reflect and reason and converge onto probably the right answer. And then you can go all the way to a view that posits radical contingency, where you would think actually it’s just enormously important who is in charge during those critical moments for the long term.
Rob Wiblin: One theme of the book is challenging the idea that there’s been this inevitable convergence on the better modern values that we have today: that as we got richer and more educated, that we had to converge on the values that we have now. You argue that things could have gone off in many different directions. I think if there’s one thing you learn from studying history, it’s that people can believe all kinds of different, crazy, crazy things. Well, crazy things from my point of view, or just absolutely outrageously different cosmologies.
Will MacAskill: Yeah. Historians in general tend to emphasize this idea of contingency. Where when I say that something is contingent, I don’t mean it was an accident that it happened; I just mean that it could have plausibly gone a different way. So an example that I like to use of contingency and what I would call bad lock-in is the song “Happy Birthday.” The melody for “Happy Birthday” is really atrocious. It’s like a dirge. It has this really large interval; no one can sing it. I can’t sing it. And so really, if you were going to pick a tune to be the most widely sung song in the world, it wouldn’t be that tune. Yet, it’s the one that took off, and it’s now in almost all major languages. That’s the tune you sing for “Happy Birthday.”
Rob Wiblin: Is there an explanation for that?
Will MacAskill: I think it’s just —
Rob Wiblin: Chance and circumstance happened to us all?
Will MacAskill: Exactly. Yeah. You get one piece of music that becomes the standard, and then other people could create a different happy birthday song, but it won’t take off because that’s not the standard.
Rob Wiblin: It’s been done. Yeah.
Will MacAskill: And this is like, why does it persist? Well, partly because it’s just not that big a deal. But I think it’s an illustration of the fact that you can get something that is quite contingent. It could have been otherwise. “Happy Birthday” could have had one of many different tunes — it could have been better. We would all be slightly happier if we were singing a different happy birthday tune, but —
Rob Wiblin: We’re stuck.
Will MacAskill: Yes. But we’re stuck with it. Then there’s other things if you look. Again, I like to think of non-moral examples to begin with. So now it’s fading out of fashion, but the fact that we all wear neckties in business. It’s such a bizarre item of clothing. There’s obviously no reason why wearing a bit of cloth around your neck would indicate status or prestige. We could have had different indicators of that. Yet that is, as a matter of fact, the thing that took off. That’s another thing when we look at these non-moral examples —
Rob Wiblin: I guess there, people are pretty happy to accept that it is random.
Will MacAskill: Yeah, exactly. There’s just this fundamental arbitrariness. But then where things get really interesting is the question of how much is that true for moral views? Certainly, it’s not going to be true for many sorts of values. The idea that it’s important to look after your children or something. A society that didn’t value that probably wouldn’t do very well as a society.
Rob Wiblin: It’s at a competitive disadvantage. So there’s selection pressure going on there.
Will MacAskill: Yeah, exactly. Whereas other moral values, I think we can say they really do seem contingent, and it’s perhaps quite uncontroversial. Attitude to diet, for example. Most of the world’s vegetarians live in India, where something like a third of the population are vegetarian. Why is that? Well, it’s clearly just because of religious tradition. Similarly, why Muslims or Jews tend to not eat pork — again, we can just point to the religious tradition. And there’s clear variation in people’s diet, and that really makes a moral difference as well. So that’s, I think, a very clear-cut case of moral contingency.
Rob Wiblin: I’ve been on this kick learning about theology and old religions the last six months. And one of my favorite crazy facts about the history of religion is about the Jains, who have been around for a very long time, this Indian religion that is to some extent sort of a precursor of Buddhism. I think around 4,000 BC, they already had this concern that you couldn’t eat a particular kind of root vegetables, because they tended to attract insects onto them. If you pulled these carrots out of the ground and then ate them, you’d be harming the insects that are on these root vegetables. So that’s why they have particular dietary restrictions: not only do you have to be vegetarian, but you can’t eat plants that attract too many insects to them. Imagine being a subsistence farmer in rural India, 4,000 BC. Presumably no education. But this strikes you as a really important thing, insect wellbeing, and you’re willing to make material sacrifices.
Will MacAskill: It really shows the power of culture.
Rob Wiblin: Ideas.
Will MacAskill: Yeah. Culture and values where people can just make what seem like very large material sacrifices on the basis of these cultural practices or higher moral ideals. The most interesting cases are where currently the moral norm is closer to universal. You might think that’s some evidence of convergence: perhaps we’ve all just realized that this moral norm is correct, and so that shows moral progress. But not necessarily. There was convergence on the tune of “Happy Birthday” — that doesn’t mean that that’s the correct tune, or even the best tune.
Will MacAskill: In the book, I give this deep dive into the abolition of slavery. This was actually something that the historian Anton Howes first pointed me to. When he did, he made this claim that he thought if it weren’t for the very particular abolitionist campaign that did in fact happen — originating primarily out of the Quakers in North America, or Pennsylvania in particular, and then it got exported to the UK — the abolition of slavery would’ve taken decades or centuries longer, or maybe even just would not have happened. And I thought that was just wild when I first heard it.
Rob Wiblin: Sounds like a crazy claim.
Will MacAskill: It sounded like a crazy claim. Yeah. I was intrigued enough, though, that I really started to dig into it. With anything that’s a historical counterfactual like this, it’s very hard to know — you’re essentially assessing the counterfactual from one data point. But I certainly think it’s a non-crazy claim, and I certainly think that the standard arguments for the inevitability of the abolition of slavery are not good arguments. And then secondly, I think the fact that we find it a crazy claim is more a matter of our…
Rob Wiblin: Lack of imagination.
Will MacAskill: Lack of imagination, yeah. I think we tend to think of other people as more morally similar to ourselves than they in fact are.
Rob Wiblin: So in studying economics, I encountered this view that slavery went away because of the Industrial Revolution making it an obsolete method of production — that slavery no longer made sense, even for slave owners or for countries in terms of maximizing GDP — even setting aside moral concerns. But you make the case in the book that that is actually not an academically defended view these days, and also that it’s just clearly wrong.
Will MacAskill: Yeah. So this actually stems back to a scholar, Eric Williams, who made this claim in the ’40s. And that was an important piece of scholarship, but it’s really not accepted now, and I think for good reason.
One big thing is that slavery was booming as a trade. The slave trade was booming when it got abolished. So the number of slaves was increasing, and the economic value of a slave was increasing. The sugar trade, which the slave trade was primarily founded on, was enormously profitable, especially for Britain. So it actually looks like Britain took a hit of about 5% of its GDP, just because of the price of sugar increasing.
Rob Wiblin: What?
Will MacAskill: It’s wild. At the time, sugar was huge. I mean, think of it as the luxury good. And Britain consumed more sugar than the rest of Europe combined, because Britain was quite a bit more economically advanced than other countries at the time. I’m trying to think of a good analogy for sugar now.
Rob Wiblin: Yeah. Is there anything like that in the modern day?
Will MacAskill: It was almost like oil.
Rob Wiblin: OK. Like oil. Yeah.
Will MacAskill: Which seems mad.
Rob Wiblin: Both sources of calories.
Will MacAskill: Yeah. It seems mad, because obviously we take it for granted now. And then also, if you just start to think about the claim that it was mechanization that took it away: even in the US South, mechanization of agriculture didn’t really happen until well into the 20th century. Even now, enormous amounts of labor are engaged in unmechanized work. There’s also all sorts of labor that are still not mechanized, that slaves have traditionally been used for. Sex slavery, and —
Rob Wiblin: Housework?
Will MacAskill: Yeah. Housework as well, very common. And then also sometimes in history, enslaved people were used as shop managers, for example, or actually quite highly skilled —
Rob Wiblin: Or business managers. Yeah.
Will MacAskill: Business managers in classical Athens in particular, yeah. That was relatively common.
Rob Wiblin: My understanding is the Romans also had this system where slaves could potentially work in these more intellectually demanding professions. And once they made their owner a sufficient amount of money, then they were given their freedom, or at least their children were freed. But they could be running whole enterprises, effectively as CEOs of companies, for these oligarchs.
Will MacAskill: Absolutely. Yeah, exactly.
Rob Wiblin: So it doesn’t seem like there’s any principled reason why you can’t have highly skilled work done using slavery, unfortunately.
Will MacAskill: Yeah. Exactly. I don’t talk about this aspect in the book, because it’s a little bit more speculative, but technology takes away some of the economic incentive to use the labor of enslaved people because of mechanization. But it also potentially decreases some of the costs of using enslaved people as well. So you could have anklets that mean you can track people more accurately, or surveillance cameras that mean you can monitor people’s work more accurately. And it’s at least not obvious from the armchair how those two things shake out.
Rob Wiblin: Yeah. OK, I think I might try to cut off this conversation here, because we’ve got to do a full episode later in the year where we can talk about this part of the book and all the others, and really dive into the substance, which I’m looking forward to. An interesting thing about the book is that I think it’s fair to say I’ve spent an hour or two in my life thinking about longtermism and how to affect the long-term future — and I would still say that the great majority of stuff in this book was basically new to me, or inasmuch as it wasn’t new to me, it was because we’d talked about the book at various points over the last couple of years.
Will MacAskill: Wow. That’s so nice to hear. I wouldn’t have thought that.
Rob Wiblin: Yeah, it’s surprising that there is so much new stuff to say. Or that even doing an introduction or something that goes from the beginning, you can basically do it all with fresh material.
Will MacAskill: Yeah. I mean, it really updated me to thinking that there’s an enormous amount that one can do in terms of longtermist research that is not yet more work on AI timelines or AI safety or something — there’s a huge number of views, and sometimes knowledge, that does float around that has never been written up. And there’s also a lot of things that are just total gaps. So Luisa got involved because close to nothing had been written on this question of, “If there’s a catastrophe that kills the vast majority of people, what’s the chance of recovery?” Nick Beckstead had written a blog post on this. That was really —
Rob Wiblin: The state of the art.
Will MacAskill: — most of the literature. I really wanted to just come in and be like, “Look, this is of huge importance” — because if it’s 50/50 when you lose 99% of the population whether you come back to modern levels of technology, that potentially radically changes how we should do longtermist prioritization. Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
Will MacAskill: And that’s just one of very many particular issues that just hadn’t had sufficient investigation. I mean, the ideal for me is if people reading this book go away and take one little chunk of it — that might be a paragraph in the book or a chapter of it — and then really do 10 years of research perhaps on the question.
Longtermism vs x-risk [00:25:39]
Rob Wiblin: Yeah. Before we push on, there was one recurrent audience question for you about the book. Many people who are involved in longtermism, or people who are focused on preventing global catastrophes and existential risks, including me, think that there’s a 10% or higher probability that over the course of our lifetimes, things are going to go radically terribly, and we could die, and everything we care about could be destroyed. And if that’s what you think empirically, if that’s your model of the world, then you don’t really need the longtermist philosophy to motivate you to work on reducing these risks and tackling these problems. Because whether you care about future generations or not, this just seems to be a massive problem.
Rob Wiblin: And some people are making the case that maybe we should stop talking about all of this population ethics and philosophy, and trying to get people to care about their great, great, great, great, great grandchildren as much as the present generation. And instead, just point out, like, “The world is a super risky place and there’s lots of stuff that could go really wrong. And you could be killed by these things. And all of the things that you care about in the world could be destroyed.” Perhaps that’s a more compelling case, and one that’s easier to make and doesn’t require these difficult philosophical issues to be addressed.
Will MacAskill: I think it’s a great thing for us to be thinking about, and honestly, it’s the sort of thing that keeps me up at night as someone who’s focused fairly exclusively on longtermism. I have a bunch of thoughts. One is just that Toby’s book, The Precipice, at least the way I take it, focuses much more on existential risk and makes more of just the direct case. Like, “Hey, the risk is really quite large,” essentially. And so there are benefits to trying out both.
Will MacAskill: A second thing is just the question of how well do these things play? It’s just an empirical question. And there has been some message testing done on this, which isn’t public yet, but I think hopefully will be soon. It’s not really clear. Actually, from that testing, both existential risk and longtermism do similarly well. Once you dig into particular risks: risk from pandemics does very well now, as you might guess; risk from AI does quite poorly. And that’s pretty important for the “We should really just be focusing on existential risk” view — because most people who have that view think that almost all of the risks are from AI.
Rob Wiblin: So people are just skeptical that it’s a big deal?
Will MacAskill: Yeah, exactly. Because people aren’t familiar with the arguments. And so the question really should be, “Which is more compelling: the particular arguments for AI, or the arguments for a long-term outlook?” And again, that’s at least non-obvious to me, and in particular, I think it depends a lot on how you frame things. So, “We should care about the lives of our grandkids and our grandkids’ grandkids” — I remember telling someone about the book I was writing, and she was like, “Oh, are you also going to write that water is wet?” Because it just seemed so obvious, like, of course we should care about how things go. Not just now, but into the centuries that come.
Will MacAskill: Then the final thing — and this is really what moves me, and interestingly, I think Toby at least in conversation said he wanted to write more on longtermism for this reason — is just that I think it’s really important to convey what you believe and why. Here’s an example of that going wrong. Suppose you really care about people becoming vegetarian because you’re concerned about animal welfare. And you think, “We can convince even more people to become vegetarian if we talk about the health benefits.” And then perhaps people do that, and they don’t actually become vegetarian — they cut out beef and start eating more chicken. Perhaps you’ve actively done harm. I think empirically, that isn’t actually what happens, but at least there were major worries that it could.
Rob Wiblin: I suppose if you convince people of some specific conclusion under false pretenses, or at least not for the main reason that you believe in, it’s very hard for them to go out and operationalize what you think ought to be done on that basis. Because they’re walking away believing something that’s quite different potentially than what you believe.
Will MacAskill: Exactly. So if the environment changes, or if our information changes, then you ideally would want the people you’ve convinced to also update in the same way and change their behavior. But if you’ve just convinced them of this fairly narrow thing, then they won’t do that. So I do think it’s very important that people actually understand, “No, morally speaking, this is the fundamental thing that we should be aiming for, which is all the good that could be done over, not just this century, but the many centuries that come.” Because then I think people will just make far better decisions.
Will MacAskill: Then I guess the final thing is that I think your views on this will probably just depend on, one, how large you think existential risk is at the moment. There are some people who think that the probability of human extinction from AI within the next 20 years is like 90% or higher. And if you do believe that, it just seems pretty plausible that you should just be talking about that, rather than going via any detour. If you think the probability is much lower and it’s not as clear how to prioritize between, say, values changes or AI or reducing risk of pandemics or unknown unknowns, then getting the fundamental moral view across becomes more important. Then also the more that you think we’ve still got surprises ahead of us in terms of how we currently see the world, then the more important it is to get…
Rob Wiblin: Give people the right underlying understanding. Rather than just feed them the conclusions that you happen to have right now.
Will MacAskill: Exactly.
Rob Wiblin: Yeah. It is quite interesting that we’ve got these two different angles: one is extinction is a high risk over the next 50 years, the other is future generations matter a lot. And it seems like reactions to either of these vary so massively. To some people it’s like, “Is water wet?” for “Do future generations matter similarly to the present generation?” Whereas to other people, it seems like a wildly counterintuitive and completely bizarre claim.
Rob Wiblin: And likewise, to some people you say, “I think the risk of extinction during our lifetimes is 10%,” and they’ll say, “Absolutely. If anything, it’s higher.” And I don’t mean people who are already involved in longtermism, but just random strangers. To some people, it’s very intuitive, whereas other people will regard that as absolutely baffling. I suppose it’s hard to prove either of these claims, so maybe you could potentially just take a strategy of pointing out all of these things.
Will MacAskill: Yeah, that’s right. I also think there’s a big difference between saying that something is the most important thing, versus saying, “This is really important, we should act on it.” Extinction risk being high in our lifetimes certainly supports the claim that this is a very important thing and should be a key moral priority, I think on any reasonable moral view. But effective altruists are generally trying to figure out, “Is this the most important thing?” And perhaps it’s also justified, even if you’re just looking at the next century. But it is a much higher bar.
Rob Wiblin: Yeah. Makes sense.
Will MacAskill: I do think there’s one thing that is an issue in talking about longtermism, which is getting into these tiny probabilities of very large amounts of value.
Rob Wiblin: That was a terrible detour. I’m so sad that we ever got into that.
Will MacAskill: I know. It’s very theoretically interesting.
Rob Wiblin: Yeah. It’s fascinating if you are someone who studies edge cases in expected value theory as an economist or a philosopher.
Will MacAskill: Yeah, exactly. But it gives people the impression that people think that existential risk is 0.00001%. And I’m like, “No, no. Even if you’re on the skeptical end, you think it’s more likely than dying in a car crash in your lifetime,” or around that. And we certainly take a lot of measures to stop ourselves from dying in a car crash.
Rob Wiblin: Cool. All right. Let’s push on, lest we cannibalize the next episode that we’re going to do.
How is Will doing? [00:33:16]
Rob Wiblin: One thing that is always a little bit hard to make sense of when I’m updating your little bio at the top of the episode every time we do an interview, is just that you seem to have so much stuff going on at any point in time. There’s so many competing priorities pulling you every which way. At the moment, you’ve got the whole book thing coming up and you’ve been working on the book. You’re still at the University of Oxford, right? You’re still a philosophy professor as I understand it. You’re helping to run the Forethought Foundation, which you also helped to set up. Now you’re an advisor on the Future Fund, as of recently. And I know whenever 80,000 Hours has a particularly thorny problem that we can’t figure out between the 20 of us, you are one of the people we turn to for advice. And I think we’re in good company there.
Rob Wiblin: When I solicited questions for this episode, quite a lot of people responded with this similar sort of disbelief at the lifestyle you lead, the amount of work responsibilities that you have. People were curious, how are you doing on a personal level? How do you cope with it all?
Will MacAskill: It’s a good question. I very often feel too thinly spread and that is a cause of stress. We’ve talked a lot on previous podcasts about how “it’s a marathon, not a sprint” — the importance of looking after yourself and making sure that you’re working in a way that’s sustainable over the long term. I will acknowledge the last four or maybe six months have not been sustainable in terms of how hard I’ve been working. In particular, I really did a big sprint to finish the book over the course of 2021. I just really got quite obsessed by it. I was working extremely hard.
Will MacAskill: Then I was looking forward to some time off. But then there was this whole Future Fund, FTX Foundation thing. And I went to talk with Sam Bankman-Fried, who was just on the podcast, and Nick Beckstead, who I’m sure will be on sometime this year. And it just really did seem to me that this was just an enormous inflection point for EA, and that I could just be very unusually helpful, having worked with Nick for many years and getting on very well with him. So I did cancel all of those more fun plans and just worked on that. And that used up just quite a lot of my time over the last few months, and it means I’ve been traveling quite a lot as well.
Rob Wiblin: So basically, you’ve been burning out somewhat, or at risk of burning out, at least very recently.
Will MacAskill: A little bit. But the positive lesson, I think, is that having been so attentive to the idea of “it’s a marathon, not a sprint” for many years before that did allow me to pull out the extra gear for these last few months. I also will return to that again. I’m taking some time off literally next week. Maybe things are feeling more set up with the Future Fund. Obviously the book launch will be intense, but I’m going to move back to a more sustainable state. And because I had invested in myself, I’m now just far happier than I was, say, 13 years ago. And progressively so: I estimate I’m five to 10 times happier than I was.
Rob Wiblin: So you’re public about having had depression for quite an extended time, as a teenager, as an undergraduate as well, and having spent a lot of time working on that. How have you ended up five or 10 times happier? It sounds like a large multiple.
Will MacAskill: One part of it is that back then I was still positive, but somewhat close to zero. But also now I would say that if I think of my peer group, say friends from university and so on, I’d probably put myself in the happiest 10% or something. So that’s really pretty good.
Rob Wiblin: That’s so good.
Will MacAskill: I mean, I feel happy about it. And that’s been from just a wide variety of things, just over a very long time period. There’s the classics, like learning to sleep well and meditate and get the right medication and exercise. There’s also been an awful lot of just understanding your own mind and having good responses. For me, the thing that often happens is I start to beat myself up for not being productive enough or not being smart enough or just otherwise failing or something. And having a trigger action plan where, when that starts happening, I’m like, “OK, suddenly the top priority on my to-do list again is looking after my mental health.” Often that just means taking some time off, working out, meditating, and perhaps also journaling as well to recognize that I’m being a little bit crazy.
Rob Wiblin: Overriding the natural instincts.
Will MacAskill: Exactly, yeah. So perhaps I’ll be feeling on a day, “I’m being so slow at this. Other people would be so much more productive. I’m feeling inadequate.” Then I can be like, “OK, sure. Maybe today is not going so well. But how were the last six months? Think about what you have achieved in that time.” And then it’s like, “OK, that does seem better.” So I’ve just gotten better at these mental habits, and that’s just been this very long process, but it’s really paid off, I think.
Rob Wiblin: Was it a matter of finding out that that was something that’s important to do? I think a lot of people who tend to beat themselves up about work, or just about their performance in life in general, they might know in principle that those are exactly the moments when they need to ease off on themselves and focus on their wellbeing and their mental health. But of course, that’s the last thing you want to do if you feel like you’re not being productive. The natural instinct is, “Now I need to double down. I need to pull an all-nighter to finish this project.” And that instinct can be so strong that it can override what you’ve read in any mental health books.
Will MacAskill: I think this was a huge realization for me. And I have to thank a very excellent therapist, who I have subsequently put a bunch of EAs onto. I think she was really confused with why she gets so many referrals from me. Because when I came in there — in 2012, 2013, probably even earlier, 2011 — I definitely had this mindset that the self-flagellation, the negative blame propaganda was very important. I remember she said, “Well, you seem very stressed.” I was like, “Of course I’m stressed. I’m a utilitarian.”
Rob Wiblin: “We have to suffer!”
Will MacAskill: Exactly.
Rob Wiblin: That’s the core part of the philosophy.
Will MacAskill: Exactly. It felt like it’s a sacrifice of my own wellbeing, but it’s in order to achieve things. And she just called bullshit on that. And I think she was just totally correct. She was like, “No, you’re just beating yourself up. And you would do that, whether or not you were trying to do good in the world.”
Rob Wiblin: Did you only just start doing this when you discovered moral philosophy?
Will MacAskill: Exactly. No, I was doing it when I was a teenager and I wanted to be a poet. So that was the fundamental insight: “Oh no, actually, maybe this is bad for me doing good in the world.” And I started to learn that lesson, and it took a very long time, because by that point you’ve built up this mental habit.
Rob Wiblin: Such a strong urge.
Will MacAskill: This whole set of mental propaganda to back that thought of, “No, this is important, and the things you really care about, you will drop if you…”
Rob Wiblin: If you express any kindness to yourself.
Will MacAskill: Exactly. And so it was helpful having some role models as well. I think Holden [Karnofsky]’s really good in this regard. He’s someone I’ve always looked up to and respected an enormous amount, and he’s just hugely productive. And he would be like, “Yeah, I never feel guilt. I never beat myself up. How do I decide how many hours I do? Well, I look at what’s my average number of hours that I’ve worked the last few months and I work that number. Or I’d maybe try and be higher than that number.” I was like, “You can do this? It’s possible?”
Rob Wiblin: And when does the suffering come in? When do you schedule that?
Will MacAskill: Exactly. I think it is just important, and having those mental habits in place from this long time period of investment does mean that if there are particular moments, opportunities, where it’s like, “Oh wow, I can have real impact here,” you do have this extra kind of gas canister to use.
Rob Wiblin: Yeah, we should really do an episode on self-compassion at some point soon. I feel like I know a lot of people, because of the seriousness of the problems that we’re dealing with or trying to solve, who are extremely harsh on themselves all the time, and have exactly this mentality of, “If I wasn’t harsh on myself, if I wasn’t beating myself up all the time about ensuring that I get my work done, then I wouldn’t get anything done.” There’s a lot of research into this now, which shows that that’s mistaken and very counterproductive, which is unsurprising in a sense.
Rob Wiblin: If someone in an organization were managing a person through a difficult project, a difficult time when they’re taking on a big challenge professionally, you would never have them follow this person around and denigrate their efforts all the time. “You’re not working hard enough. You need to go harder. This just shows your stupidity, the work that you’ve done today.” Obviously you’ll get a lot more done if you have someone who’s supportive, coaching you through the challenges. Sometimes maybe geeing you up a little bit and saying, “No, you can do better” — but being brutal to someone is terrible management. And yet that is the thing that people are saying to themselves in their head all the time.
Will MacAskill: It’s incredible how much of an asymmetry people can have — certainly I had, and still have, with respect to other people versus myself. I remember I did a meditation that really had quite a big impact on me, which was just a gratitude meditation. I was very used to being grateful for other people and feeling that very strongly. The meditation asked me to be grateful for myself. I’d never thought of doing that before. And I was like, “Yeah, man, I really am thankful for the things earlier Will did.” And it was pretty striking that an attitude I would so easily have towards other people, I hadn’t even thought to have towards myself. And that’s just absolutely the same as well with being critical and forgiving. Very different attitudes by disposition to other people and myself.
Rob Wiblin: I used to do a bunch of this. I think I was never quite at the level that you were back in 2011, but I used to beat myself up pretty regularly about “I’m too lazy, I’m messing this and that up.” I think antidepressants made a huge difference for me, as did breaking that habit of waking up in the morning and spending half an hour in bed just thinking about how I’m failing to achieve various goals. And letting go of that did not damage my productivity at all, or lead to me getting less done. I’m more energetic and more enthusiastic to do things, because it doesn’t bring with it this anxiety and self-denigration.
Will MacAskill: That’s great to hear. And you do just seem a lot happier and more zen than back in the day.
Rob Wiblin: Yeah. It is just such a revelation to realize that it was completely unnecessary all along.
Rob Wiblin: Speaking of self-sacrifice and doing difficult things, a couple of weeks ago, you had a journalist from The New Yorker, who’s writing a profile on you, following you around for a week. Maybe you are used to this and this is just like water off a duck’s back at this point, but this sounds like living hell to me, having a journalist just following me around all week, watching my conversations, what I’m doing…
Will MacAskill: Definitely, I’m not used to this. It’s funny. I’m now aware that he’s going to be listening, so hi, Gideon.
Rob Wiblin: I’m sure he’s a lovely guy. If I asked Gideon if I could follow him around for a week, he probably would say no.
Will MacAskill: I’m thankful that he is a great guy, and genuinely engaged in these issues, and just a wonderful profile writer as well. So yeah, you can read some of his work in The New Yorker. He did a great profile of Paige Harden, who wrote the book The Genetic Lottery.
Will MacAskill: But yeah, it’s definitely intense. Like you, I find meetings and social interaction kind of draining. And I deliberately scheduled my time so that he could have more of an insight into what things are like, so if I had more meetings, it was going to be maybe more tiring anyway. And then I had EAGx Oxford as well. But certainly when you’ve got a third person observing, you’ve now got to track, “Is he OK, as well?” It was definitely pretty tiring. At EAGx Oxford I had a fireside chat, which I think was my worst fireside chat ever. I was pretty tired by the end of that week.
Rob Wiblin: Well, we’ll stick up a link to that so that listeners can check that out on YouTube.
Will MacAskill: I hope you do. There’s one point where I get asked a question, and then just, midway through, I’m like, “I have no idea where I’m going with this.”
Rob Wiblin: OK, so it was challenging. Are you excited to read the profile?
Will MacAskill: I’m a mix of excited and terrified. I mean, we’re going to have many more conversations, and he’s going to talk to my family and lots of my friends and other people in EA and so on. So yeah, we’ll see. Who knows where I’ll be.
Rob Wiblin: Well, I guess, thanks for suffering this so that the rest of us don’t have to have any New Yorker profiles.
Will MacAskill: Yeah, I guess the worst case is that you can get an enjoyable read.
Having a life outside of work [00:46:45]
Rob Wiblin: I think I do remember a couple of years ago you saying, “I’m kind of taking it easy this year, because I don’t feel like this is the most important year. So this is a year when I can kind of relax relative to the past and build up energy for the future.” And that strategy has now paid off, it seems.
Will MacAskill: Absolutely. I took several holidays that were three weeks long each, and did other things. I got much more into music, including making music. And for sure, I’m still at the moment not doing as much of that as I would like. But it makes it much easier because I’ve had this period of time where I was investing much more in non-work aspects of my life. That means I have cultivated really strong friendships, really wonderful relationships that mean I get extremely high-quality time off when I have it. Similarly with other interests as well. And the biggest part of that is having multiple identities. In the earlier days of EA, I really was so obsessed with the work that it was like my defining characteristic: my identity just was my work or something. And that means if it’s going badly, then…
Rob Wiblin: And I guess if your friendship group is the same as your work colleagues to a large extent.
Will MacAskill: That’s right. Whereas now, I spent years very deliberately cultivating a very strong social circle outside of that, including my old friends from school and university. And that’s just really nice, because I know that even if all the work stuff went badly, I’d still have a good life doing other stuff.
Rob Wiblin: Yeah. That’s a shift that’s been helpful for me the last couple of years, is meeting more people who have nothing to do with effective altruism or work. Just feeling like I have multiple aspects in my life is so refreshing. So, so relaxing. So reassuring as well.
Will MacAskill: Absolutely. For all this stuff, I think people should always just think “it’s a marathon, not a sprint.” That’s the mantra I use for myself. And I do know some people who just worked extraordinarily hard for short periods of time and then burned out after that point. It’s really something to guard against.
Rob Wiblin: I guess it’s a slightly perverse thing that the more successful your career goes, the more impactful each additional day of work is going to be. So any time that you take a holiday or anytime you take an afternoon off, the moral importance of that only gets larger and larger. And yet, you do still have to maintain that level of balance. At some point, someone has sent you an incredibly important problem and you have to say, “No, I’m not going to answer that email.”
Will MacAskill: It’s wild. It’s completely wild. And it’s a way in which doing good is just different from making money or just self-interest. If you are really successful in your business career, and you’re just aiming to make as much money or have as much wellbeing for yourself as possible, then at the point where you’ve done really well at that, then hey, you can retire. Money’s not worth that much anymore. Not for doing good. It’s the opposite. Once you’ve been very successful and you’re now in a position where you can have even more influence —
Rob Wiblin: Things just get worse.
Will MacAskill: Yeah, you can have even more influence now. So that is tough. And that’s something where the mental health training that I have done just becomes even more useful. I definitely reflect, because with Future Fund things feel very high stakes and for the rest of EA as well — it’s just enormous stakes compared to how things were, say in 2010. If I had the mental state I had in 2010, I would not be able to cope at all. Whereas being in a position where I can feel more empowered to look after myself and just be less anxious does allow you to become more comfortable with things that are so high stakes, and therefore have a bigger impact overall, I think.
Rob Wiblin: You are in this slightly odd position of having had a really huge influence over the careers of, I guess, hundreds, thousands of people probably out there. I imagine that conversations that you’ve had with people have led to dozens of people changing their careers, and probably in terms of stuff that you’ve written more broadly, it’s made a big change to the lives of plenty of people that you meet. How does that make you feel? Is that a high-pressure situation? How do you deal with that mentally?
Will MacAskill: Hopefully the influence I had is positive rather than negative, but I mean, it’s great. One thing that’s just been interesting is that way back in 2010, 2011, we were making these arguments. I was going around giving these high-impact careers talks, talking about earning to give. And the twist was, “But you can do even more good again, by going around and convincing other people.” And it’s so interesting looking back, because these were all on-paper ideas and we really had no clue how true they were going to be in reality. What was it actually going to be like?
Rob Wiblin: And this argument of, “No, I shouldn’t do X. I should convince everyone else to do X” sets off red flags for a lot of people. It feels wrong somehow.
Will MacAskill: For sure. It’s definitely true that not everyone can do that, so you’ve got to have some argument about why you’re in an unusual position. But I at least think for me that was just absolutely correct. So I gave a talk at MIT about earning to give and Sam Bankman-Fried was there and he went and earned to give, and now he’s doing rather well. That was really the important impact.
Will MacAskill: Similarly, I met Leopold Aschenbrenner because he got involved with the Global Priorities Institute, and I managed to convince him not to go to law school. Now he’s working at the Future Fund, having just enormous impact there. It kind of relates to the mental health thing a bit: something I at least find helpful in my own case is thinking, “Well, if I don’t have any impact, at least I’ll keep doing good through these people that I’ve helped to nudge in a more positive direction.” That is kind of reassuring.
Underappreciated people in the effective altruism community [00:52:48]
Rob Wiblin: There’s this general problem for the effective altruism community, or indeed any group of people whatsoever, that it’s very easy to focus on what things are going wrong and what people you think are making mistakes at any given point in time. But to motivate themselves, people need appreciation and encouragement, just as we’ve been talking about.
Rob Wiblin: And at any point, there’s hundreds or thousands of people in our broader social networks who are just going into work every day, doing things that aren’t always super appreciated or super understood, whose contributions ideally might get a bit more acknowledgement and understanding. Is there anyone or any groups you’d like to shout out to now as doing something that’s either especially cool, or just unsung heroes who are going in and doing the important work of maintaining and growing things every day?
Will MacAskill: I mean, there are loads of people I’d love to talk about. In terms of people whose work I just really value, and which is often just a very unpleasant job: I think of Nicole Ross and Julia Wise on the CEA Community Health Team. I just really, really value what they do. It’s often just really tough. You know, the whole job’s just dealing with messy issues and problems, often in a thankless way. And so, I just have huge, huge respect and gratitude for them.
Will MacAskill: In terms of people doing stuff that’s really cool, there’s a project that got set up fairly recently — I guess over the course of the pandemic, so I only properly learned about it a few weeks ago — which is the Lead Exposure Elimination Project. Clare Donaldson was giving the talk I was at. I think there’s three employees and they’re all directors. And this is just so good. My understanding is that a literature review was done, I think by Rethink, saying that reducing lead exposure seems like it could be this enormously cost-effective thing.
Will MacAskill: And then Jack Rafferty and Lucia Coulter set up this organization while being incubated at Charity Entrepreneurship. And then, on the basis of what’s so far a shoestring budget, they have now convinced the government of Malawi to eliminate lead paint. And it was amazingly simple the way they did this, which is going to Malawi, buying loads of paint, running a test — as I understand it, pretty simple test — to see the lead concentrations. For many types of paint, the lead concentrations were extremely high, and lead is extremely damaging to both health and brain development. And they conducted this study. Then they went to the government of Malawi. Turns out Malawi actually already had regulation, so it wasn’t that they even needed new regulation.
Rob Wiblin: They just weren’t enforcing it.
Will MacAskill: They just weren’t enforcing it. So from there, on the basis of that study, I think they were just relatively easily able to convince the government to say, “OK, we’re going to start enforcing this.”
Rob Wiblin: Yeah, that’s incredible.
Will MacAskill: That’s just amazing, what quick turnaround to impact, doing this thing that’s just very clearly, very broadly making the world better. So in terms of things that get me up in the morning and make me excited to be part of this community, learning about that project is definitely one of them.
Rob Wiblin: The new Charity Entrepreneurship organizations, they really punch above their weight in my mind. It’s amazing to see these cool new things going on that for some reason, no one else was really doing.
Will MacAskill: Absolutely.
Rob Wiblin: I met Andrés last night, who is one of the founders of the Shrimp Welfare Project. I don’t know that they’ve had quite as big wins as this lead paint thing, but I think they’re the first group of people ever focused on shrimp welfare in particular. Prawns and shrimp and crustaceans generally tend to get completely ignored, even among animal wellbeing groups. So they’re almost the first people looking at how shrimp are farmed, and what improvements could be made that perhaps don’t cost the farmers anything economically. So you could get a lot of uptake, but the shrimp don’t suffer as much from the incredible crowding that they’re typically exposed to.
Will MacAskill: I think one reason I just love stuff like this is that, for the EA community as a whole, the value of getting concrete wins is just really high. And you can imagine a community that is entirely focused on movement building and technical AI safety.
Rob Wiblin: One could imagine …
Will MacAskill: [laughs] One could imagine. I mean, obviously those are big parts of the EA community. Well, if the EA community was all of that, it’s like, are you actually doing anything? It is really helpful in terms of just the health of the overall community and culture of the community to be doing many things that are concretely, demonstrably making the world better. And I think there’s a misunderstanding that people often have of core longtermist thought, where you might think — and certainly on the basis of what people tend to say, at least in writing when talking in the abstract — “Oh, you just think everyone should work on AI safety or AI risk, and if not, then bio, and then nothing else really matters.”
Will MacAskill: It’s pretty striking that when you actually ask people and get them making decisions, they’re interested in a way broader variety of things, often in ways that are not that predictable necessarily from what they’ve just said in writing. Like the case of Lead Exposure Elimination Project. One thing that’s funny is EAs and names: there’s just always the most literal names. The Shrimp Welfare Project.
Rob Wiblin: “What are you doing?” So LEEP, right? So the Lead Exposure Elimination Project?
Will MacAskill: Lead Exposure Elimination Project. Anyway, very literal name. We know what it’s about. So I saw the talk, I made sure that Clare was applying to Future Fund. And I was like, “OK, we’ve got to fund this.” And because the focus is longtermist giving, I was thinking maybe it’s going to be a bit of a fight internally. Then it came up in the Slack, and everyone was like, “Oh yeah, we’ve got to fund this.” So it was just easy. No brainer. Everyone was just totally on board.
Will MacAskill: And why is that? Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where in AI safety the best stuff has already been funded, so the marginal stuff that could or could not be funded is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — especially if it’s like, “We’re actually making real concrete progress on this, on really quite a small budget as well,” that just looks really good. We can just fund it and there’s no downside. And I think that’s something that people might not appreciate: just how much that sort of work is valued, even by the most hardcore longtermists.
Rob Wiblin: Yeah. I think that the level of intuitive, emotional enthusiasm that people have about these things as well would actually really surprise folks who have the impression that, if you talk to you or me, we’re just like “AI or bust” or something like that.
Will MacAskill: Right. Exactly. Whereas, no, this is really getting people out of bed in the morning.
A culture of ambition within effective altruism [00:59:50]
Rob Wiblin: OK, let’s push on to the main topic for today, which is how the overall situation has shifted over the last couple of years for people who want to improve the world along the kinds of lines that we’ve discussed in the show, and maybe how this ought to affect the kind of culture that we’re aspiring to develop and the sorts of projects that we’re trying to get launched and off the ground and expanding. We’re recording a couple of days before Effective Altruism Global London in April 2022. Effective Altruism Global, for people who don’t know, is this get-together for people who are working on the question of how to do the most good in the world and trying to figure out how they can make the biggest contribution themselves. And I think this one is the biggest ever?
Will MacAskill: That’s right. 1,300 people.
Rob Wiblin: Wow, OK. I think this is actually the first one that I’m not going to, after all of these years.
Will MacAskill: Yeah. It’s like Burning Man for Rob now: “It’s not cool anymore. I was there at the beginning, and it was four people in the seminar.”
Rob Wiblin: Yeah. “Effective altruism, it’s too big now.” But yeah. It is actually incredible, just the number of people and the caliber of the people at these events.
Will MacAskill: Yeah, absolutely.
Rob Wiblin: I went to the last one, and I was just constantly blown away. But anyway, yeah, the unofficial theme for this particular conference is “A Culture of Ambition.” I know you have a whole lot of thoughts on that topic, and I think you’re giving a keynote about that?
Will MacAskill: I’m giving the opening talk. That’s right.
Rob Wiblin: Opening talk. So yeah, for people who are serious about improving the world, in a kind of effective altruism mindset or way, how should their attitude and approach potentially have shifted over the last couple of years?
Will MacAskill: So yeah, this theme is “A Culture of Ambition.” Why, if you’re altruistic, should you try and be very ambitious, or have ambitious plans? I think there’s a few reasons for that. One is more theoretical: if you think the distribution of impact is a fat-tailed distribution, that means that the mean impact is bigger than the median. So if you just aim to do kind of a typical amount of good, then you’re going to do much less good in expectation — and I think plausibly it’s the expected value of the good that you do that matters — than if you’re really aiming for the best. And that’s because, in the nature of the distribution, the best outcomes are just way, way better than the typical outcomes.
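A minimal numerical sketch of that point, assuming, purely for illustration, that impact follows a lognormal distribution as a stand-in for the fat tail Will describes (the parameters are invented): the mean sits well above the median, and a small share of outcomes accounts for much of the total.

```python
# Illustrative only: model 'impact' as lognormally distributed, a stand-in for
# the fat-tailed distribution described above. Parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
impact = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

print(f"median impact: {np.median(impact):.2f}")  # the 'typical' outcome
print(f"mean impact:   {impact.mean():.2f}")      # expected value, pulled up by the tail

# Share of total impact coming from the top 1% of outcomes
top_1pct = np.sort(impact)[-len(impact) // 100:]
print(f"share of total from top 1%: {top_1pct.sum() / impact.sum():.0%}")
```

With these made-up parameters the mean is around seven times the median, and the top 1% of draws accounts for over a third of the total, which is why aiming only at the typical outcome forgoes much of the expected value.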
Will MacAskill: So there’s this theoretical case, and we’ve seen that in practice as well. I think it’s clearest if you look at people earning to give, because there, we can just assign a number on the impact they’re going to have — at least the impact in terms of donations — where the distribution of how much money you can make from careers that you might go into, earning to give, is just clearly this fat-tailed distribution.
Will MacAskill: When we look at the people who’ve earned the most in order to donate via earning to give, well, it’s enormously driven by those at FTX now. Dustin Moskovitz is obviously another major donor — my understanding is that he got into giving after cofounding Facebook; I don’t know about his motivations before that, whether he was always planning to give. But certainly for Sam and the other early employees at FTX, their aim was to make money in order to donate it. And the amount they’ve raised now is just very large compared to how much they could have made, if Sam had stayed working at Jane Street, for example, even though Jane Street’s a very well-paying quantitative trading firm.
Rob Wiblin: Yeah. Many listeners will have listened to the interview with Sam Bankman-Fried, but a couple won’t have, so it might be worth saying something quickly about the kind of sums that we’re talking about.
Will MacAskill: Yeah, that’s right. So Sam intends to give away essentially all of his wealth, like 99%, as do the other early employees, is my understanding. I don’t know the details for each person. Sam’s net worth was recently estimated by Forbes to be $24 billion. Gary Wang, who helped create FTX, has a net worth that’s now public at about $6 billion. And there are a few other early employees of FTX who are also doing it for earning-to-give reasons. So we are talking about very large sums of money now.
Rob Wiblin: Yeah. Sorry, go on. What implications does that have?
Will MacAskill: So that’s just an illustration in practice of this fact that impact can be fat tailed. We focused on earning to give — that’s nice and quantifiable. But I’m sure this is true as well in other areas. Within politics, if you manage to become a head of state, then you’re having very unusually large impact. If you’re doing research, like being one of the seminal researchers, you’re going to have enormous impact compared to if you are a typical researcher.
Will MacAskill: And that really means that if you want to do as much good in expectation as possible, ideally you should be aiming for those best-case outcomes. And probably, you’ll fail — more likely than not. When Sam set up FTX, he thought it was a 20% chance of succeeding — and he was being relatively optimistic, he thought at the time. But obviously, he did end up being successful. But even if you divide that by five, expected value —
Rob Wiblin: It’s still a big number. Yeah.
Will MacAskill: It’s still bigger, and it’s bigger than what he would’ve made if he’d stayed at Jane Street. Similarly, when people have looked into this — like, “Should you go into politics?” — obviously it’s extremely hard to become an MP in the UK. And even harder again to become, say, the prime minister. But the chance of achieving one of these best-case outcomes is plausibly where much of the expected value comes from. And I think the same could be true for research and other career paths.
Rob Wiblin: So it seems like, over the last 10 years, we’ve had a decent number of people who’ve had this kind of “swing for the fences” approach to their careers or to doing good. Do you think we’re still short of what would be optimal, in terms of how risk-taking people typically are?
Will MacAskill: I think we should expect that to be the case, where this is a way in which altruistic reasoning might be quite different from, say, self-interest. Again, it’s easiest thinking about earning to give: if you’re just earning money for yourself, then the difference between making 100,000 and 200,000 is very great. The difference between making 1,000,000 a year and 1,100,000 a year is not nearly as great. Whereas altruistically, plausibly, those differences matter basically about as much. And that’s quite an unusual thing to kind of think about.
Will MacAskill: Certainly, my attitude coming into all of this was very much not thinking, “Oh, wow. What are the really big things I could potentially achieve?” — and then really going out to try and do them. When I started giving with Giving What We Can, I thought my personal donations were going to be most of my impact in the course of my life. I remember talking to Toby, and he thought, “Maybe one day we’ll have a part-time secretary.” And I was like, “No way. That will never be worth it.”
Rob Wiblin: Yeah. I guess you were planning to pursue a fairly normal career with a typical salary. Giving, I suppose more than 10% was your pledge, but still we’re talking like £10,000 or something a year.
Will MacAskill: Yeah, exactly. Well, maybe tens of thousands, but not enormous sums. And there’s also just a kind of modesty thing. Silicon Valley does a lot to encourage this culture of really trying to think big. And there are natural ways you can poke fun at that, which is often kind of justifiable when the ambition is grandiose or unjustified. But —
Rob Wiblin: They have a point.
Will MacAskill: Yeah. If you’re trying to do as much good as possible, I think we should take these facts seriously. And it is just kind of an unintuitive thing, where it’s like, “OK, I want to have some really big plan.” It might feel overly self-important or something, but I think it’s at least something that you’ve really got to think about.
Rob Wiblin: So the clearest examples here are in the money case, where it’s easily measurable. Another case where I think there are instances that we can’t talk about as much is in politics — where it seems like there’s been a few outlier cases, where a few people have done an enormous amount of really valuable work but can’t be as public about it, just because of the nature of the industry.
Rob Wiblin: But one lesson is, looking at all of those experiences, we’re kind of getting confirmation of this idea that we tentatively had: that maybe the right way to go, if you’re trying to maximize your impact, is to take these high-risk, high-return approaches to choosing your career. Start a business that will probably fail but could be enormous if it succeeds, or try to become a national senator rather than a state senator, that sort of thing.
Rob Wiblin: But I guess there’s this other thing, which is that now we’ve got all this extra funding. It’s enabling people, I guess, to be much more risk-loving than perhaps would have been reasonable or sensible 10 years ago.
Will MacAskill: Yeah, that’s right. We now are in this position where we’ve been fortunate enough that we have a lot of financial capital that we can use to do good things. And that gives us a moral responsibility to try to use it well. Effective altruism started with this idea of you have this tiny amount of funding — like from the Oxford thread of effective altruism, it’s me and Toby, and we’re thinking about how to spend £1,000 or something, maybe £10,000, among different preexisting organizations — so you’re just thinking about what marginal difference you can make. Whereas now, it’s like, “What projects could you start?” That’s a serious question, because you could have funding for them.
Will MacAskill: Then secondly, not just what project could you start that would have the highest cost effectiveness with a marginal use of funding, but what could scale? Because even if you’ve got one organization that’s half as cost effective but can use 10 times as much funding, by setting that up, you will do five times as much good. (Well, not quite, but to a first approximation.) That’s just a very different way of reasoning than what effective altruism started with. And this is more true, I should say, on the longtermist side of things. I think it is still true in neartermist giving as well because —
Rob Wiblin: And certainly things have shifted in that direction, relative to where they were before.
Will MacAskill: Yes. That’s right. Especially because the global health and wellbeing giving is scaling up much faster than the longtermist giving: GiveWell is now aiming to be moving a billion dollars a year. My guess is that the Open Philanthropy Global Health and Wellbeing team is shooting for something like half a billion a year. I think it’ll probably be a while before longtermist giving reaches that level. But the difference is that there are very few preexisting organizations focused on promoting long-term value, or explicitly focused on that. So there’s more in the way of creating new things to fund.
Rob Wiblin: That has to be done.
Will MacAskill: Yeah, that has to be done. Whereas in the case of global health and development, there’s still huge funding gaps in bed nets and cash transfers and so on.
Rob Wiblin: Yeah. Mathematically, the mental switch is from thinking about benefit divided by cost, which is like cost effectiveness, to thinking about benefit minus the cost, which is like total impact basically. The odd thing is that you can have a project that is half as cost effective — so the benefit divided by the cost is half as good. But if it can be 10 times as big — that is, if it can absorb 10 times as much cost — then it will do more good in total.
Rob Wiblin: So inasmuch as you’re finding it hard to find anything to fund that solves a particular problem, you care about the second one. Or inasmuch as you have close to unlimited funding on the margin, or you’re not giving up something incredible — like an incredible project that is actually working to solve the problem that you care about on the margin — then you care about this benefit minus cost.
Will MacAskill: Yeah, exactly. And creating scalable projects is something I think that’s particularly hard for the more longtermist giving.
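To make the arithmetic above concrete, here is a toy comparison with made-up numbers: project B is half as cost effective as project A but can absorb ten times as much funding, so it does five times as much good in gross terms, and a bit over four times as much once you net out the cost itself.

```python
# Toy numbers, purely illustrative: a highly cost-effective project that can only
# absorb a little funding, versus a less cost-effective one that scales.
project_a = {"cost": 1.0, "benefit": 10.0}   # cost effectiveness = 10x
project_b = {"cost": 10.0, "benefit": 50.0}  # cost effectiveness = 5x (half of A)

for name, p in [("A", project_a), ("B", project_b)]:
    ratio = p["benefit"] / p["cost"]   # benefit / cost: what matters when funding is scarce
    net = p["benefit"] - p["cost"]     # benefit - cost: what matters when funding is abundant
    print(f"Project {name}: cost effectiveness {ratio:.0f}x, net impact {net:.0f}")

# A: 10x cost effective, net impact 9.  B: 5x cost effective, net impact 40.
# So B does 5x as much good gross (50 vs 10), "well, not quite" once costs are netted out (40 vs 9).
```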
Massively scalable projects [01:11:40]
Rob Wiblin: Yeah. One term I only heard for the first time about a year ago was “megaprojects.” Is that a coinage of yours?
Will MacAskill: So in this context, it is a coinage of mine. I think it’s not necessarily the best coinage, because it does also refer to a term that Flyvbjerg uses.
Rob Wiblin: Oh, it’s like huge infrastructure.
Will MacAskill: These huge bureaucratic infrastructure projects that very famously massively overrun in cost, and they don’t justify themselves.
Rob Wiblin: It’s like the Burj Khalifa.
Will MacAskill: Yeah. Or like dams that are enormously expensive. And so, the better term might be “massively scalable projects,” because that gets across more of what we care about, which is: maybe you’re starting small, but the point is you’re starting small to build something that could have very large total impact. Rather than you’re starting off with a thousand-person organization that’s very bloated or something.
Rob Wiblin: Yeah. Are there any examples of massively scalable projects that plausibly longtermists or people involved in effective altruism should be piling onto, that are worth highlighting so people can get the concept?
Will MacAskill: One project I’m particularly excited about and advising on is media: so documentaries, potentially also TV shows and movies. And Natalie Cargill is leading on that, working with Joseph Gordon-Levitt. That’s something that I think can be hugely impactful. I mean, we’ve had this recently with existential risk: Don’t Look Up I think has had a kind of big influence. But the movies Deep Impact and Armageddon were helpful, in terms of getting the Spaceguard program set up to detect near-Earth asteroids. Novels such as Ghost Fleet and The Cobra Event have also been helpful.
Rob Wiblin: What’s Ghost Fleet? I haven’t heard of that.
Will MacAskill: Ghost Fleet is about lethal autonomous weapons, by a different Peter Singer, actually: Peter W. Singer. My understanding is that he wrote tons and tons of policy papers and academic articles, and just no one listened to him. So he wrote a novel that has these elaborate footnotes, explaining why it’s all representative. I’ve not read the book. I may be misrepresenting it.
Rob Wiblin: Yeah. But that got much more attention than the papers ever did.
Will MacAskill: Exactly. Yeah. So this is something where it often requires larger amounts of money in order to have an impact. You can often structure it as an investment as well. But it has potential for huge cultural influence, so that’s one thing I’m excited about.
Downsides and risks from the increase in funding [01:14:13]
Rob Wiblin: I suppose the virtue of massively scalable projects is just that they can potentially do more good in total — by absorbing lots of money in very useful ways — if not the very most useful ways imaginable.
Will MacAskill: Yeah, yeah.
Rob Wiblin: Are there any important downsides to having this mentality? I guess we’ve slightly alluded to one already with the megaprojects comparison.
Will MacAskill: Yeah, exactly. I think there are many risks and many things we should be paying attention to. I’m very happy to see this discussion on the EA Forum recently as well. One is just extravagance. So recently, when I had this writer visiting, I took a little trip down memory lane and showed him some of the old sites. And we went to the first CEA office we had, which was in this estate agent’s: you had to go in and then walk down into the basement. It was this almost lightless room, and we had like 12 people kind of stuffed in there, everyone on their laptops. I remember when Fred Mulder, an early donor to CEA, saw it, he was like, “Is that legal?” I think it probably wasn’t.
Rob Wiblin: No, and I think we thought it might not be legal at the time. And we decided just not to pursue that.
Will MacAskill: Yeah. There were probably some health and safety rules we were falling foul of.
Rob Wiblin: So there was one room that was OK to work in, I remember. But there was another room where I think I had to crouch in order to get in there, and it had no windows whatsoever. And you’d have people regularly working in there and taking meetings.
Will MacAskill: I was just so happy to see it again. Because it was the last time, actually, before it gets turned into a different office. So that was the kind of mentality in which effective altruism was founded. And I think it’s important that we maintain that, as best we can. It feels kind of morally appropriate, relative to the problems of the world — the severity of poverty, the amount of suffering, the gravity of the risks we face — to be in a basement, working in this cramped environment. What feels morally appropriate is different from what’s actually best. But there are risks if we just start throwing money around or something: a third party looking in isn’t going to be able to tell how serious we are. So we want to make sure that we’re still conveying the level of moral seriousness that we really have.
Will MacAskill: And so, it does mean some things. Like my attitude to my own giving, for example: I’ve increased my commitment to that. It’s kind of this funny thing. When I first started donating, I was like, “That’s going to be most of my impact in my life, so this is really important.” And then it was like, “Raising a lot of money is not really how I’m going to have impact,” so I felt more confused about it. Now, I’m like, “OK, no. It’s more important again.” Because now that we’ve got so much funding, I want to really demonstrate that I am not doing this for any sort of financial gain to myself — I’m doing it because I want to try and make the world better. And I think that could be true for other sorts of moral commitments as well, like being vegetarian or vegan. I know someone who donated a kidney, and I’m like, “OK, this guy is morally serious. I have both my kidneys, I should say…”
Rob Wiblin: OK, so there’s the appearances issue. That if you just start throwing money around, people might lose trust that actually you’re doing this for ethical-related reasons at all. Is there another aspect to it as well? Is it kind of important to signal to yourself that things are serious, so that you don’t just become blase about what you’re doing?
Will MacAskill: Yeah. I think there’s a few issues related to mission drift here. One is that other people might start joining the community, not with the best of intentions. That’s something we should be on guard for. Another thing that we should worry about is if people start having motivated reasoning. If there’s some things that donors believe, and others don’t, then, “Oh, maybe it’s more convenient. I’ll just believe that as well.”
Rob Wiblin: To just go along with it, yeah.
Will MacAskill: So we are trying to combat that quite directly. Future Fund has this regrantors program: people will have their own independent funds that they can distribute, and their grants are presumptively approved, precisely in order to avoid this intense consolidation of funding power. We’re also considering having “change our worldview” prizes. So if you can shift our credences on important topics — like what’s the magnitude of AI existential risk or something — if you can shift it by a certain amount, you win a prize, you get money. I think that’s another way in which this influx of resources could be bad, and one we really need to guard against.
Will MacAskill: And then a final worry is something that’s less explicit motivated reasoning and more that you lose the evolutionary pressure that we want to cultivate, and definitely previously had. So with startups and companies, those that aren’t profitable go out of business. And there’s this classic problem in the nonprofit world that that doesn’t happen for bad charities. There would definitely be a worry that if there’s plenty of funding for various EA-oriented projects, the bad ones might still manage to get funding. And they kind of putter along, even if the people would be better used somewhere else.
Rob Wiblin: Yeah. And the funding they keep absorbing is some cost. But the real problem is that they’re absorbing people who could be doing something substantially better.
Will MacAskill: Exactly.
Rob Wiblin: If only the funders said, “We think you can do better than this.”
Will MacAskill: Exactly. And everyone has good intentions. People are maybe a bit biased in favor of their own thing, but that’s extremely natural. So there’s nothing untoward going on. But you’ve lost a bit of what might be kind of healthy ruthlessness — where certain bad projects, or projects that are not as effective, shouldn’t keep going. I think that means we should really celebrate organizations that just choose to not exist.
Will MacAskill: No Lean Season was an example of this. It’s a bit of a shame, because people forget about them because they’re not around anymore. But No Lean Season was an excellent organization working on promoting seasonal migration to the cities, which was beneficial economically. They went through Y Combinator — I was in the same batch as one of them — so they were clearly very promising. They did an RCT on the program. It wasn’t as effective as they had initially hoped. They just shut down. I was just like, “This is amazing. When does this happen? This is so good.”
Will MacAskill: This is particularly important if we’ve got this kind of culture of ambition, framing this like: “Really try and aim if you can for the best possible outcome, while avoiding risks of doing harm, because most attempts will fail.” That means if you have 10 people, let’s say they all try and do their nonprofit startup. One of them perhaps really crushes it and is the best. Probably, what should happen is the other nine shut down and join that one. And that’s very hard psychologically. That takes some kind of cultural engineering to encourage that as something that can be really rewarded.
Rob Wiblin: Yeah. And there won’t be the pressure to create that culture, if everyone can get funding.
Will MacAskill: Exactly. Yeah, exactly.
Rob Wiblin: So like, “The marginal project is pretty mediocre. This is mediocre. We’ll just fund this as well.”
Will MacAskill: Exactly, yeah. So it’s very tough. On the one hand, we really want people to be starting projects, especially projects that can potentially scale. But we really want to maintain this culture where people have high moral standards, people are able to still demonstrate moral seriousness. And also, just that we still have these incentives and evolutionary mechanisms in place.
Rob Wiblin: Yeah. That last one makes sense to me. The idea that it’s more important to potentially be giving or making sacrifices or not spending money, the more money you have, in order to show other people that you’re serious and to maintain your own seriousness. There’s something that’s a bit perverse about it. From one point of view, you might say it’s important to show other people that I’m morally serious by continuing to give money to the point where it hurts — say, where I’m actually sacrificing money that I would’ve spent on myself in a way that I would’ve recognized as enjoyable.
Rob Wiblin: But on the other hand, sometimes I see people who are engaging in projects where they’re trying to make the world better, and they’re kind of cutting corners on spending here and there. They’ll get inconvenient flights, for example, because they don’t want to be too lavish. They don’t want to spend money on the flights that leave at a convenient time for them, so they can get a good night’s sleep. And that strikes me as not morally serious, because they’re not taking the work that they’re doing as being important enough to actually put appropriate resources into.
Rob Wiblin: The case might be clearest for you, where you’re doing enormously important work pretty clearly, and you’re overloaded with opportunities to do really valuable stuff. But maybe the morally serious thing to do is to bite the bullet and say, “No, I’m going to spend a whole lot of money on myself, so that I never have to worry about finances. I’m just going to get whatever flights I want, live wherever I like. And when people say I’m not serious, no: it’s because I’m serious that I’m doing this.”
Will MacAskill: Yeah. So there’s a couple of important things there. One is just that I was presenting this as a consideration — it’s not the overwhelming thing. And it’s not like I recommend my level of giving to everybody. I’m also just in a weird position in the world. The thing I will say, though, is giving a very large proportion of your income is a costly signal in the evolutionary biology sense: if in fact you are morally motivated, it’s easier for you to do that than if you’re not morally motivated. Whereas spending a lot of money on flights and other things to save productivity is not a costly signal, because it’s equally easy to do that whether or not you are morally motivated. That’s why there’s a difference in what you’re conveying to other people.
Will MacAskill: And sometimes, I don’t know, the word “signaling” has gotten this bad rep, but it’s unclear whether people are meaning it in the technical sense or not. And like virtue signaling is this really bad thing. I’m like, “Surely it’s good to have virtue. And then it’s good to signal it.”
Rob Wiblin: And if you’re credibly signaling it, then you actually are virtuous.
Will MacAskill: Exactly. Unfortunately, the word seems to mean fake signaling now — like you’re essentially lying. But if it’s a true costly signal, you’re giving information about what sort of person you are. There is a cost as well though, which is: do you just in fact do less good? So how much are you willing to pay for this costly signal? In my own case, there’s this balancing act: I expense things that are work related, so I have a relatively clean division between work-related expenses and personal expenses. Obviously, you can’t expense everything. It’s hard to estimate, but maybe I lose like two days’ worth of time as a result of having relatively high personal giving compared to if I was spending a lot more on myself. But that feels worth it to me.
Rob Wiblin: It’s probably a price worth paying.
Will MacAskill: Yeah.
Rob Wiblin: Yeah. That makes sense.
Will MacAskill: And for other people it wouldn’t be.
Rob Wiblin: Yeah. There have been some interesting articles out lately basically raising these points. It’s obviously good, all things considered, that there’s more funding available for great projects to prevent pandemics and prevent asteroids from hitting Earth and so on. But people have started being worried about these perverse effects potentially affecting the community and harming people’s behavior in years to come. When I read them, very often I can’t help but have this feeling that they’re motivated in part by an aesthetic judgment. Like you were saying, even if you rationally think that trying to become a senator is the best way for someone to try to do the most good, one can’t help but have an aesthetic revulsion to that because of the narcissism that it seems to embody.
Rob Wiblin: Likewise, even if you think, on reflection, that someone should be getting the business class flight so they can sleep on the plane and perform best in the meeting, there’s just something kind of disgusting about that too. But aesthetics is not a reason to do things. And it’s very hard to balance the rational side of things, or to make sure that you’re not indulging too much in just your aesthetic preferences and thereby making projects worse than they could be.
Will MacAskill: Yeah. I’m laughing because I had a conversation about this topic with this writer, who seems to keep coming up in this conversation. I had this sentence that was talking about how working in a basement feels like the morally appropriate thing. But then over the course of the sentence, I started using the word “aesthetic” rather than “moral,” and he thought that was pretty notable.
Will MacAskill: I agree that you could call it aesthetic. Nonconsequentialists at least would think there is this idea of, “What’s an appropriate response to the world?” It’s different from, “What does the most good?” I think, even with the nonconsequentialist hat on, the stakes are just so high, it’s just going to override this in an emergency situation. So doing what is appropriate, at least from one lens, is kind of morally indulgent or something. I think the important thing though, is we do want to appeal to morally serious, morally motivated people. And for them, the natural reaction —
Rob Wiblin: Is disgust.
Will MacAskill: Well, yeah. Is this —
Rob Wiblin: Or at least skepticism.
Will MacAskill: Yeah, exactly. And I think it’s also a very reasonable reaction. That’s not to say that we shouldn’t do all these things. In fact, investing in stuff that helps productivity — like, you should have the best laptop.
Rob Wiblin: Yeah. Have a good desk.
Will MacAskill: Have a good laptop, have a good desk, have a good office chair. When I was younger, I spent 18 hours in Doha airport to save myself £45.
Rob Wiblin: Nowadays that’s not the math.
Will MacAskill: Yeah. Nowadays that’s not a good use of money if you’re doing important work. But we should be sensitive to the fact that that’s a weird situation to be in. Even if we look at, say, salary norms: it’s a weird and I think unfortunate cultural norm that nonprofit salaries are radically below market rates, such that people can find it weird if a nonprofit is paying closer to market rates. So it’s at least something we need to bear in mind. And certainly where it’s easy to avoid appearing profligate, we should do so.
Barriers to ambition [01:28:47]
Rob Wiblin: If people are really taking on this lesson and trying to be as ambitious as they can in their career, what might be the barriers to doing that?
Will MacAskill: That’s a great question. I think the biggest barrier is just taking seriously what some of these best-case scenarios could look like. A second is then often just: how could you do that quicker too? There’s this classic Silicon Valley question of, “What’s the best possible success you can imagine for the project you’re working on? Now how could you do that in six months?” Where, often at least, achieving very big things does mean having to go somewhat quickly as well. Again, for all of this stuff, I’m emphasizing that we need to pay attention to the risks of harm as well. So don’t do crazy things that are going to taint the well of some area, or promote infohazards, or something like that. I think the biggest thing is just actually thinking about, “What are some of the best-case outcomes I could imagine?”
Will MacAskill: A second important thing is doing things such that you feel OK about it even if you fail. It’s interesting: people go into science because they’re intellectually motivated, and they just want to do it for its own sake. And there’s this strong norm in science that that’s kind of what you ought to do as well — that it’s kind of weird to be going in with the aim of pursuing impact. I have this hypothesis that that’s actually kind of rational as a norm, because the people who go in and try to do impact are trying to get a guarantee of impact, and that means you do something that’s not actually very good.
Rob Wiblin: You’re saying that, perversely, they’ll be motivated to be less honest with themselves than someone who’s motivated just by the truth.
Will MacAskill: Or they’ll just be motivated to do something that’s not very fundamental or important. So you could do something that is taking some existing science and applying it to some particular area. Whereas instead — this at least is the argument — having a 1 in 100 chance of some really fundamental contribution is actually just much more important in expectation.
Rob Wiblin: I see.
Will MacAskill: So I think actually, that doesn’t motivate this true scientific norm of, “Just do whatever you’re interested in,” because I’d be really surprised if, say, a marginal string theorist was going to be as impactful as people working on other things. But the underlying point is that you should be pursuing something such that even though you know you’re probably not going to have enormous impact — in fact, let’s say your impact might be close to zero — nonetheless, you’ll still be motivated and able to pursue it. Because that’s the real challenge, I think. Some people are maybe just happy to go with the EV calculations, but I think it’s an unusual psychology.
Rob Wiblin: You’re saying that they’re happy if, after the fact, things have fallen apart and they haven’t done anything good, they can look back and say, “Well, I made the right decision ex ante. At every point, I made a reasonable call on what to do. And so I’m satisfied.”
Will MacAskill: Yeah.
Rob Wiblin: But that can be difficult to do.
Will MacAskill: That can be difficult, but you could be doing something that you’re intrinsically motivated by, such that, “Well, OK, it didn’t have any impact in the end, but I feel very happy. It was an intellectually rewarding life, or a rewarding life in other ways.”
Rob Wiblin: Yeah. For a while there’s been this meme that, inasmuch as we want people to take a lot of risk in their careers in order to try to have more expected impact, we need to do more to reward people who have tried that and then haven’t had impact, for whom things haven’t worked out — which is going to be like 90% or 99% of people.
Will MacAskill: Yeah.
Rob Wiblin: Should we be giving people exit grants on particular paths? Where we’re saying, “Well, what you did didn’t turn out to work at all, but we think it was a good call ex ante so we’re going to pay you some money now”?
Will MacAskill: It relates to the projects-shutting-down idea. I have wondered, should there be a prize for honorable failures? Where it’s like, “Look, you did a really good ex ante thing. We think your thing should shut down now, but you should be rewarded for that.”
Rob Wiblin: Yeah. I guess on a personal level, to some extent you could do that. I suppose one way to make it more palatable would be to make it a transition grant, where you’re like, “You were doing this thing that hasn’t worked out, but now we’re going to give you a grant in order to move into something else, given that you’re acknowledging that your original path isn’t going to work.”
Will MacAskill: Yeah. I think that’s really good. I think, in general, people probably just stick to the existing paths longer than they should. At least that’s what you would expect, because moving is uncertain. You’ve got to have the time to think about what you’re going to do next, but you’re also working away in a full-time role.
Rob Wiblin: And also kind of bite the bullet and accept that this thing that you’ve been working at, you’re actually throwing in the towel.
Will MacAskill: Yeah, exactly. And so if there are ways of making transitions easier, that seems very promising as well.
Rob Wiblin: Yeah. Should we have someone who maintains kind of a hall of fame of good ex ante decisions?
Will MacAskill: Yeah. I think that could be very cool. The issue is, it’s often hard.
Rob Wiblin: To tell. Yeah.
Will MacAskill: Yeah. Because otherwise they do get forgotten. Before I was prepping for this talk, I realized I hadn’t really thought about No Lean Season for several years, and it’s just because they don’t exist anymore. But it’s absolutely great that they don’t exist. And yeah, we could do that on an individual level as well.
Rob Wiblin: This seems like a cool project for someone to take on, to create a website, a hall of fame of projects that didn’t work out but were good at the outset. Just to acknowledge the good decision-making of the people and also their good decision to give it up.
Will MacAskill: Yeah, exactly. Because I do have the sense that — I don’t know how much it’s true now, but certainly I was thinking this even a year ago — a lot of people in the effective altruism community feel nervous or even insecure about doing good. They’re really worried about the problems. It’s very natural. Very worried by the problems in the world, worried about making a mistake. And that was providing a motivation for choosing some safe option, where one worry is that the “safe” options are just the things that are on the list of Open Phil–funded organizations or the 80K priority paths. Whereas it’s more precarious-feeling to be saying, “No, I’m actually going to do something that’s not on the list. Because actually I think if it does work out, it’s going to be higher impact, and we get this amazing information about this new path being more important.”
Rob Wiblin: Yeah. But also, the most difficult case is when you have some idiosyncratic view that other people don’t agree with, and you’re saying, “I’m going to deviate from what is typical and what is accepted, to do this other thing that will probably fail.” Then not only are people not paying me any respect now, when I have the potential to succeed, but the most likely thing is that it doesn’t work out — and then people are definitely not going to pay me any. Or the concern you would have is that no one’s going to respect what you did, because they didn’t even like it at the beginning and now it hasn’t worked.
Will MacAskill: Yeah. That’s a great example. But the benefits if it does are so huge.
Rob Wiblin: Right. So you want them to do it.
Will MacAskill: You really want them to do it, yeah.
Rob Wiblin: I guess this is another reason why you want to have a culture of really appreciating people for what they’re doing, regardless of whether it’s delivering benefits right now.
Will MacAskill: Appreciating them for what they’re doing, and appreciating worldview diversity and disagreement. And again, I think this is an easy failure mode that people can fall into, where in general, you just shouldn’t judge someone’s epistemic virtue, as it were, by their beliefs.
Rob Wiblin: By whether they agree with you?
Will MacAskill: By whether they agree with you.
Rob Wiblin: Because that’s what it ultimately has to come down to, if that’s your approach.
Will MacAskill: Exactly. I mean, it is hard, and maybe you have someone who’s like, “Oh no, the moon landings were fake.” Or, “9/11 was an inside job.” Probably at that point, something’s gone wrong.
Rob Wiblin: Not going to take a second meeting.
Will MacAskill: But if someone instead is saying, “I think existential risk from AI is 1 in 1,000, or perhaps even less, or that actually biorisk is 90%,” or just lots of views where actually there’s just a large range of reasonable disagreement.
Rob Wiblin: Because these questions are so hard.
Will MacAskill: Because the questions are extremely hard. Especially if you’re a community that’s trying to take action. And if I think a particular existential risk is high and you think it’s low — and perhaps even your arguments are good, or seem good because you’re a smart guy, a quick reasoner — then from my perspective, you’re this existential threat, because you’re going around making arguments that seem serious but that I regard as badly wrong. Like, “Oh my God, I need to silence you.” That might be the kind of first-order thought.
Rob Wiblin: That’ll be the naive idea.
Will MacAskill: Exactly. But it’s enormously important, in fact, to counter the natural tribal instincts of just wanting to clump together on the basis of your shared views — to instead be like, “No, this person’s a good thinker and I disagree with their conclusions. Perhaps I trenchantly disagree with their conclusions, but we should really support them.” That’s a hard thing to do psychologically, but I think it’s very important for us to do.
The Future Fund [01:38:04]
Rob Wiblin: Yeah. So the Future Fund has been one of the main things you’ve been working on since What We Owe The Future wrapped up. Is that right?
Will MacAskill: That’s right.
Rob Wiblin: What’s your actual role in the whole project?
Will MacAskill: I’m an advisor. I was doing quite a variety of things, honestly. So early stages, I was just really helping work with Nick on: Who should the early hires be? What is the scope of what the fund is doing? How does it relate to other things happening at FTX and FTX Foundation? Within the fund, what are the most exciting projects? Where that’s prizes, versus having an open call, versus regrantors, versus just doing standard funding, versus should we develop projects in-house?
Will MacAskill: And then also starting to just write up the website, decide what name it should have. One of the very early things we did was start writing up this list of projects. And then subtle questions about, “What’s the culture?” as well. I felt like I was able to add quite a lot of value in those things in the very early stages.
Rob Wiblin: Yeah. What are you doing these days?
Will MacAskill: Well, we’ve just had this open call, which was a big lift. We received 1,800 applications, and we said we’d get back to everyone within 14 days.
Rob Wiblin: How’d it go?
Will MacAskill: We 90% hit that. For some people, it was more like 17 days.
Rob Wiblin: That’s incredibly impressive.
Will MacAskill: But to be clear, that’s like, everyone gets a response within 14 days — it’s not a yes or no. I mean, sometimes it was an easy yes. But then obviously there was a huge number that were clear noes. Sam tweeted it. We got a lot of crypto applications.
Rob Wiblin: I see. That makes sense.
Will MacAskill: And then we got a lot of things that are clearly just a nonprofit in a very different area submitting an application. But there’s just a lot that are hard judgment calls, so at the moment we’re still working through many of those more difficult cases as well.
Rob Wiblin: So you’re really at the brass tacks level here. You’re reading these applications and trying to decide.
Will MacAskill: Yeah. The last two weeks have been…
Rob Wiblin: Busy.
Will MacAskill: Yeah. A lot of what I’ve been doing.
Rob Wiblin: It’s very different than what you’ve been doing before. You’re really back at the sharp end of practical decision-making, relative to the philosophy and the global priorities research that you’ve been doing for the last five years.
Will MacAskill: That’s right. It’s pretty notable how driven by high-level strategy grantmaking is. Take bio grants, for example. This is something I’ve had to learn, rather than something I really knew about. What do you fund? As a first heuristic, if something is a biomedical response to pathogens, that is not very promising if you’re concerned about these worst-case pandemics. For two reasons: one is that it’s often dual-use technology; and secondly, the very worst-case pathogens will typically be able to get around medical countermeasures.
Rob Wiblin: Right.
Will MacAskill: In contrast, physical countermeasures — far ultraviolet radiation to sterilize surfaces, or super PPE like extremely good masks and so on that just literally don’t let anything in — it’s just like, this really could protect against absolutely everything. So just with that high-level strategic understanding of the nature of biorisk, you’ve actually cut down the search space enormously. I think that similar sorts of reasoning can apply in other areas as well. The thing that’s tough is where there are these strategic arguments for thinking that an area might be very good, but not like, “Definitely, we need to fund this.”
Rob Wiblin: Those are the things that absorb the time or that weigh on you.
Will MacAskill: Yeah, exactly. At least you really need to think about them. One example has been the applications we get to reduce the risk of war. From my own work, I think this is enormously important, and I would say still kind of neglected within the longtermist community. The real difficulty is just: how tractable is this? You can fund things that are doing activities that seem plausible, but it’s not like you get reliable feedback loops on this. And there’s a bit of a question of: should we just go in on this, even if it is shooting in the dark a little bit, or not getting such reliable feedback loops? Or do we just say, because of that, it’s not going to quite pass the bar for cost effectiveness? And yeah, it’s tough.
Rob Wiblin: Yeah. I did an interview with Chris Blattman last week. We were talking about what causes violence, what causes war. And in the last section, I was basically like, “We’ve thought that preventing war is really important for a long time, but we’re kind of at a loss about what to actually fund. Do you have any ideas?” And the reality was, I think, he didn’t have a ton of ideas. Which I suppose is to be expected.
Will MacAskill: Yeah. And especially when you’re concerned about war between nuclear powers or great powers. You can build these datasets of conflict in the past of what’s escalated and deescalated conflict. But does it apply?
Rob Wiblin: He’s got tons of things to say about preventing gang violence.
Will MacAskill: Exactly. You’ve got much larger datasets on gang violence. Whereas if you’re concerned about a war between the US and China, probably just details of the particular situation are going to matter a lot more than a dataset that includes several conflicts from the 1900s or something.
Rob Wiblin: Yeah, when I spoke with Sam, he was really advocating for this approach to evaluating the grants where you spend one minute reading the grant, and then you say, “If I spent another week evaluating this topic, how likely would it be to change my decision?” And if the answer is it probably wouldn’t change the decision, then you just go, “Yes” or “No” and then move on. I suppose with 1,800 applications, you kind of don’t have much choice but to adopt that.
Will MacAskill: Well, bearing in mind that 90% of them were easy noes. But there was one part of that conversation with Sam that really resonated with me, where you’ve got a group of people discussing something, and everyone thinks it should be funded, but some people “have concerns” and they just want to talk about their concerns. But even after the concerns, they still want to fund it, just like, “Yeah, but I think we should just feel a bit bad about that,” or something. That’s a mode I think is easy to get into and we should try and avoid.
Rob Wiblin: I guess just always be asking, “How is this decision-relevant? How is this going to affect what we’re going to do, exactly?”
Will MacAskill: Yeah.
Rob Wiblin: What were some of the most exciting applications, if you’re able to share them at this point?
Will MacAskill: We had a lot of applications in reducing biorisk, actually.
Rob Wiblin: Makes sense. So tractable.
Will MacAskill: Yeah. It’s better masks. It’s far ultraviolet radiation. So the bio applications have been very exciting. It’s been really nice seeing some applications that just fit the call for projects so well. So one of the project ideas was just getting more expert opinion. At the moment, there’s this IGM poll of economists and if you want to know what expert economists think about a certain issue, monetary policy or something, well, you can just look up a poll and you get the answer. But remarkably that exists really only for economists. We could have it for many more fields. So we wrote that up as a project idea and we got an application that’s just bang on, and it’s by someone who seems very well qualified to do it.
Rob Wiblin: Wow.
Will MacAskill: We’re like, “Oh, this is so nice. That’s actually great to hear.” There were a lot of grants in forecasting as well.
Rob Wiblin: That seems really hot at the moment.
Will MacAskill: It’s definitely a growth area. Including organizations to just start training superforecasters and employing them and really getting them to work on the most important decision-relevant forecasts. I’m really excited about that, because it’s still this quite nascent field.
Rob Wiblin: Yeah. It has a sense of a burgeoning area that needs to mature a little bit.
Will MacAskill: Yeah, absolutely. There was another application as well, another project for finding talented and morally motivated youth in India and China, run by someone at EA Cambridge, that I thought just seemed very strong and very exciting as well.
Rob Wiblin: Fantastic.
Will MacAskill: There was also Non-trivial Pursuits. That’s being set up by Peter McIntyre. One could consider that a competitor to 80,000 Hours.
Rob Wiblin: That’s true. Yeah. It’s a breakaway faction.
Will MacAskill: Exactly. Run by a former 80,000 Hours employee. It’s aimed at high school students, rather than people who are older, so there is product differentiation from 80,000 Hours. But honestly, we also just feel good about there being competition in the EA ecosystem. The number of times I’ve heard people say, “I shouldn’t do X because there’s already a thing on X” — I’m like, “Oh yeah, we shouldn’t start Google because Ask Jeeves has already got that covered.” It’s like, no, this is not how progress happens.
Rob Wiblin: “We shouldn’t start a second university. There’s already one.”
Will MacAskill: Exactly. “What’d be the point?” So yeah, we got this wide variety of excellent projects. And I think it did in some cases incentivize people to actually ask for funding. Like Kevin Esvelt, who’s just the leading person in terms of reducing the risk of worst-case pandemics, I think we did just inspire him to think like, “Oh yeah, actually there are things I could do.”
Rob Wiblin: “I’m so glad you asked.”
Will MacAskill: Yeah, exactly.
Rob Wiblin: Yeah. I guess one way you could make a big difference is, people are sitting on these ideas but they expect the process of finding funding or supporters to be incredibly painful and arduous, and to take them away from their normal job for longer than they can afford. And you’re saying, “No. Write us an email.”
Will MacAskill: Yeah, exactly. And if it is good enough, it can be just a very painless process.
Rob Wiblin: Yeah, the Future Fund has all kinds of interesting project ideas on the website. Are there any there that you haven’t already mentioned, maybe because no one has yet come forward to really take up the mantle, but that are really cool?
Will MacAskill: Yeah. We put this as the final one on the project list in order to give it emphasis: EA criticism. It’s kind of funny, because I think EA criticism just is doing effective altruism research. Though there’s obviously a spectrum between developing some already widely held view and criticizing it. But yeah, we wanted to give that special emphasis. I’m not familiar with all of the grants, but I don’t really know of anything that came through there that I was really excited about.
Will MacAskill: Also on this, I made a comment on one of the EA Forum posts that was suggesting there wasn’t enough criticism of core or widely held EA beliefs. My comment was something like, “I’m really sorry to hear that. I would love to fund this stuff. Here’s my email.” And I got a pretty disappointing response, actually. A few people did get in touch, and there’s one thing I’m going to fund. But there was surprisingly little uptake, even though I think it was the most upvoted comment ever on the Forum. So there’s a lot of support for it.
Rob Wiblin: For the idea.
Will MacAskill: But at the moment, people are maybe still not feeling incentivized to work on it perhaps.
Rob Wiblin: Yeah. I was surprised. It sounded like some people had the impression that if they were too critical of ideas commonly held among people involved in longtermism, this would be bad for their career or make it harder to get grants. Maybe I’m just totally naive, but I know a lot of the people who make the grants, and I’m like, “No, they love this stuff.”
Will MacAskill: Because honestly, yeah, I think it’s the opposite.
Rob Wiblin: Yeah.
Will MacAskill: When we were developing the website, we got feedback from a bunch of different sources. But one was Max Roser, who I know reasonably well, and Our World in Data is a potential grantee of the Future Fund. And he gave brutal feedback on the website, like, “I probably shouldn’t do this as a potential grantee.” And it was just pages and pages of really deep, insightful, but biting criticism.
Rob Wiblin: Wow. Max is German, so he tends to play it pretty straight.
Will MacAskill: He plays it very straight. And Nick, who doesn’t know Max, was just like, “I love Max Roser.” He was just feeling so good about this.
Rob Wiblin: Yeah.
Will MacAskill: And similarly, if you look at the Forum and the most upvoted posts, almost half of them are posts that are being critical of some widely held worldview.
Rob Wiblin: To be honest, I think we might have slipped too far into the opposite direction, where you can get a lot of applause just for criticizing without… And people feel like if they were too discerning about what critiques were good, then that would shut down criticism, so they just tend to upvote anything that’s negative. And then that worries me.
Will MacAskill: I think that can happen. I agree. It can be a bit of an applause light of its own. But you will remember the days of Holden Karnofsky’s critique of MIRI.
Rob Wiblin: Oh yeah.
Will MacAskill: This is 2013 or something. And that became the most upvoted LessWrong post of all time at the time. And it was brutal. It was really brutal.
Will MacAskill: So it’s a shame. I think there are two things that are a shame. One is that people tend to perceive EA views as a monolith, especially longtermist views. This was true for me as well: a few years ago, when I was like, “I’m going to start writing this book on longtermism; there are lots of areas I want to look into,” I just had this sense of “Everyone believes X.” It takes a while of digging in before you realize, no, actually, it’s just that a couple of people believe X, and they happen to have been going around championing it. Or it’s the most extreme view perhaps, or the most interesting, and therefore is being tossed around.
Will MacAskill: So one thing that’s a shame is just that people regard the views as more monolithic and more of an orthodoxy than I think they are. And then the second thing is that people also just feel worried about providing criticism, when really it’s very often very well received.
Patient philanthropy [01:52:50]
Rob Wiblin: Yeah. So FTX, as people might have heard, seems keen to get money out the door pretty quickly, if it’s possible to do that sensibly. When we’ve spoken in the past, you’ve seemed more sympathetic to arguments for more patient philanthropy, the kinds of things that Phil Trammell outlined in episode 73 and that people might have seen online elsewhere. And that kind of mindset would potentially suggest that instead of making grants as quickly as possible to get resources applied in the real economy quickly, instead we should take the money, invest it in the stock market, let it accumulate for a long time and then give much bigger grants in future. Why the focus on using money now, if you’re quite sympathetic to this patient philanthropy point of view?
Will MacAskill: I mean, my views on the patient philanthropy arguments haven’t changed all that much. I still think that most of the good we will do will come more than 10 years from now, rather than in the next 10 years. But there are two things. One is that I think even if you have quite an extreme patient philanthropy view, you should still be donating some amount. And it turns out that the financial assets that we are advising have just grown so much in the last five years that we need to be rapidly scaling up the giving, even if you think we should only be giving a small percentage and still accumulating.
Will MacAskill: Because really there are two updates from the success of FTX. One is obviously that there are now more total financial assets that we’re advising. But secondly, we should probably expect there to be even more in the future as well, from other major donors getting involved or other successful people earning to give. And that really means that —
Rob Wiblin: You have to run just to stand still. Or you have to make a lot of grants just to avoid —
Will MacAskill: Exactly. Yeah. And let’s just take the Gates Foundation as a foundation I really respect. Gates set up the Giving Pledge in, I think, 2010. Since then he’s doubled his wealth.
Rob Wiblin: And he has been making a real effort to give it away.
Will MacAskill: He’s been making a real effort. As I understand it, they give more per year than any other foundation: I think about 6 billion per year. They could have given more, and the world is developing — probably the best opportunities are drying up. Probably they ought to have been giving more, and potentially a lot more — like 10 billion per year or something, assuming the focus is on global health and development. And that’s just a real challenge. Because even at three and a half billion per year, they’re like, “How do we spend even more?”
Will MacAskill: So a failure mode that we could get into, that I’m really worried about, is just being like any other foundation that ends up persisting for many, many decades — and just losing out on most of the impact it could have by virtue of not really trying to decisively scale up its giving. So even to get to the point where, from the patient philanthropy point of view, you’re at the optimal level — maybe that’s donating a few percent per year — we’ve got to scale up and give it away.
Rob Wiblin: That’s good. You’ve got a lot of work to do.
Will MacAskill: Exactly. Then the second thing is just that the returns from movement building seem much greater than the returns from financial investment. Again, if you look at the rate of return from some people going and earning to give, and them then convincing others, and so on, I think Ben looked into this and suggested it was something like 30% per year.
Rob Wiblin: Yeah.
Will MacAskill: But anyway, it’s certainly much higher than one can get as a market return. And obviously, even if you’re only focused on movement building, as we discussed, you should still be doing a lot of direct work — because a movement that’s only about growing itself is not a very convincing or effective movement.
Will’s disagreements with Sam Bankman-Fried and Nick Beckstead [01:56:42]
Rob Wiblin: A final one on the Future Fund. Do you have any interesting, exciting disagreements with Sam Bankman-Fried or Nick Beckstead?
Will MacAskill: It’s a great question. I actually find myself remarkably aligned with Sam, in interesting ways, on a couple of issues. One is this sympathy to broad longtermism. I think the two issues relate to each other quite closely. One worldview you might have is that there are things that are narrowly focused on extinction risk, and those things have longer-term impact, and basically nothing else does. And then you’ve got this different worldview, which Sam was espousing on this podcast, which is like, the world’s really connected — loads of things you can do have indirect impact on things that then do impact the very long term.
Will MacAskill: I’m much more in the latter camp. And hopefully the book gives some kind of background explanation for that. But part of that is this kind of contingency versus convergence-of-values idea as well. Where if you think it really matters who’s in charge, who has the power at these critical junctures — like development of AGI or first space exploration or formation of a world government — then loads of things change that. And that means you can affect the long-term future in just an enormous number of ways.
Will MacAskill: Probably the biggest disagreement I’d have with Sam is like, it’s kind of a spectrum of how cautious versus gung-ho you are. Sam is very keen on building things, just making things happen. And I’m really pretty sympathetic to that compared to many people, but he’s definitely further on the tail of that than me.
Rob Wiblin: And Nick?
Will MacAskill: Oh yeah. And then Nick, plenty of disagreements actually. I also just have more time to talk about them. A few recently. One on this contingency-versus-convergence thing: Nick is much more sympathetic to the idea of convergence.
Will MacAskill: Then we also have some differences in how we think about making grants or assessing organizations. That’s been an interesting and ongoing conversation, where I am more inclined to look at past track records and not pay that much attention to future plans, whereas Nick is more inclined than me to really dig into the plans going forward. And Nick, when he’s on the podcast, perhaps can defend his view. Partly I find it just from setting up organizations myself and being the recipient, where I’m like, “Look, I can make these plans. They’re going to change. Probably most of the impact will come via these weird things we don’t expect.” Whereas you can look at track record, and the people involved.
Rob Wiblin: Right. That’s more measurable.
Will MacAskill: More measurable. Exactly. So that’s pretty interesting. Then there’s also just a question of what cautiousness means in the current funding context: is that erring on the side of giving people more or giving people less? I tend to be a little bit happier on the side of giving people more. Again, these are matters of detail, but they really matter for grantmaking.
Rob Wiblin: Yeah. OK, we’re almost done. But we got a ton of audience-submitted questions for you, and I’d like to throw a few of them at you quickly in a kind of rapid-fire session.
Will MacAskill: I’ll try my best to be brief.
Astronomical risks of suffering (s-risks) [02:00:02]
Rob Wiblin: What’s your take on the idea that we should give special priority to s-risks, possible futures in which there’s a lot of suffering rather than just nothing?
Will MacAskill: I think it depends exactly on what you mean by “special priority.” I definitely think we should give more priority to avoiding it than, in some sense, to creating a world that has an equal quantity of happiness. Under moral uncertainty, you’ve got a kind of classical utilitarian view, which, let’s say, weighs one unit of happiness equally against one unit of suffering. There’s a reasonable case to think that on the classical utilitarian view, the best possible future is as good as the worst possible future is bad, but we shouldn’t be extremely confident in that.
Will MacAskill: There are possible views on which you should give more weight to suffering, or in general to worst-case outcomes. You could be risk averse too. You could also have a prioritarian view, which, depending on exactly how you model it, can give extra weight to the very worst lives. I think we should take that into account too, but then what happens? You end up with kind of a mix between the two. Suppose you were 50/50 between the classical utilitarian view and a strict negative utilitarian view.
Will MacAskill: Then I think on the natural way of making the comparison between the two views, you give suffering twice as much weight as you otherwise would. I think that’s a completely reasonable conclusion to come to. Something that’s pretty interesting is to interrogate yourself about what tradeoffs you would in fact make if you’ve got these decisions ahead of you — like some chance of heaven and some chance of hell — at what probability are you indifferent?
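To make the arithmetic behind that “twice as much weight” claim explicit (this is just a sketch of one natural way to run the 50/50 calculation, using my own notation rather than anything spelled out in the conversation): normalise both views so a unit of suffering gets weight 1, then take the expectation over your credences.

\[
w_{\text{suffering}} = 0.5 \times 1 + 0.5 \times 1 = 1, \qquad
w_{\text{happiness}} = 0.5 \times 1 + 0.5 \times 0 = 0.5,
\]

so under the mixed view a unit of suffering ends up counting for twice as much as a unit of happiness.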
Rob Wiblin: Yeah. I mean, I feel very risk averse about that sort of thing. I still feel quite risk averse even in more normal cases, where it’s like you have a great day or a terrible day. You get more and more risk averse the more extreme the case.
Will MacAskill: Yeah. That does suggest that there’s this common-sense argument for caring more about avoiding the downsides than getting the upsides.
Will MacAskill: Putting all of that together, I think there are some arguments on which s-risks are all that matters, but I think you only get that if you assign very high credence to a pretty narrow range of moral views. Whereas if you also have credence on views on which positives count a lot as well, then you get more of this kind of mixed view, where you give some extra weight to downsides compared to upsides. I think this does give some extra weight to trajectory changes versus extinction risk reduction, but not overwhelming importance.
Will’s future plans [02:02:41]
Rob Wiblin: Yeah. I guess for the rest of the year, the book and the Future Fund are going to be pretty time absorbing, but what do you think you might be working on in two or three or four years’ time?
Will MacAskill: There are two things that kind of weigh on me. One is I do just really enjoy getting very bright, younger people engaged in these ideas. I find that personally very motivating and inspiring. It’s like these people have their whole lives ahead of them and they’ve got this youthful energy and stuff. And I feel like often relatively low amounts of effort can really help people and put them on the right path.
Rob Wiblin: I feel like we should get you in very occasionally to be a one-on-one career advisor, and people could just randomly get Will as their advisor. Perhaps if that’s energizing for you.
Will MacAskill: I used to do career advice and I loved it. I really enjoyed it. That’s one thing. And given my views about the world, just growing the number of people who are morally serious and intellectually curious, who are trying to make the world better, it’s plausibly the most important thing.
Rob Wiblin: Yeah.
Will MacAskill: Then the second thing is the thing that I feel is lacking most. I call it “weird macro-strategy research,” but I should come up with a better name than that.
Rob Wiblin: Bit of branding there.
Will MacAskill: Yeah. But Holden has this post, “Important, actionable research questions for the most important century.” He’s focusing in particular on AI, and he’s more negative on issues that are a bit broader than that — at least in the part that says it’s super important to have more people working on this. And the style of investigation he’s encouraging is actually kind of bold, ambitious research, where you’re not just saying, “Hey, I’m just going to look at one case study and just pass that to everyone.” Instead, it’s like, “I’m trying to develop a view here,” like a real view on AI timelines, or the importance of who develops AGI versus the risk from misaligned AI, or how fast we should expect an AI takeoff to be. Just like, “I’m going to try and have a view on this and defend it, and I’m going to go really deep as well.”
Will MacAskill: He points out very few people are doing this. People often say EA is full of researchers, but not that many people are doing that sort of research. And it just seems enormously important to me. So that was just one of my favorite Forum posts that I’ve read in a long time.
Will MacAskill: I would go a bit broader than Holden. He was more critical of attempts to find cause X or to uncover crucial considerations — basically on the grounds that it’s really hard and he didn’t expect people to succeed. I agree it’s hard and people probably won’t succeed. But, well, this whole conversation has been me saying, “Well, the payoff would be very large.” So at least I’m more sympathetic to that, though I don’t really want to claim it’s more important than some of the AI topics that he was talking about. Then there’s another one, which was to try to form your own worldview. There I feel even more sympathetic to defending the worldview thing.
Rob Wiblin: What do you mean by that?
Will MacAskill: There, I think it’s even broader than just tackling one of these questions like AI takeoff speeds or AI timelines. Instead it’s having a pretty broad view of kind of everything. So it’s a little hard to convey. Carl Shulman is the person that I think most has this.
Rob Wiblin: Most embodies that.
Will MacAskill: Yeah. But he manages to have both incredible breadth and depth.
Rob Wiblin: He’s a special guy.
Will MacAskill: I think it’s because, since he was a teenager, he was just like, “I want to figure out how the world works and what’s most important and how it all fits together.” And how many people are trying to do that in the world, like truly? Almost none. It’s extremely out of fashion in academia. There’s very few people who have the opportunity to do that. And I just think it has enormous potential upside.
Will MacAskill: And so putting both of those things together, one idea that I do come back to — and Leopold Aschenbrenner in particular has recently been pushing me on — is having my own university or research institute to combine and do both of those things.
Rob Wiblin: Oh yeah. Where you fund people to come up with their own worldview or really, really dig deep?
Will MacAskill: Yeah. You’d find super, super promising people at an early stage. They’d come to this rather than like a standard university.
Rob Wiblin: And say, “All right, you’ve got 30 years.”
Will MacAskill: That would be an extreme version of it, but there’s a really difficult incentives question. On the one hand, the best research just does take a long time. But on the other hand, you don’t want to just simply say, “You are now funded forever.”
Rob Wiblin: Yeah. Some people are super intrinsically motivated by this, and I think you would have to be in order to stick with it.
Will MacAskill: Yeah, for sure. But then you could also have it where most people go off and don’t pursue that kind of research path. So yeah, that’s something I could imagine myself getting very excited by. Something I think about. Other obvious paths are more public intellectual work, perhaps working more with the Future Fund, maybe writing more books. Another one that I feel a bit of an urge towards is really going deep on AI as well. It’s kind of the in thing to do.
Rob Wiblin: Finally sync up.
Will MacAskill: I know, I’ve been trying to hold off for so long, but it just is very important in expectation. It’s also where a lot of the focus of many researchers is at the moment. It’s something where there are just lots of things I feel confused about, especially what’s coming out of the research area, and there are these open threads that I really want to dig into. And I’m always like, it’s not the thing I should be doing. But those are some of the things most on my mind.
What is it with Will and potatoes? [02:08:40]
Rob Wiblin: So listeners are dying to know, what is it with you and potatoes?
Will MacAskill: I just think they’re neat.
Rob Wiblin: This might be a little bit in jest. I think when I say “listeners are dying to know,” I mean two listeners.
Will MacAskill: Yeah, this did come up at the last EA Global London. So yeah, during May and June 2020 — kind of the peak of lockdown madness — I was working hard on the book, and I was often going down rabbit holes that were usually very useful. But one I went down was the history of the potato, and the potato’s long-term impact. And an early draft of the book had a lot of potato-related content. People didn’t really like it.
Rob Wiblin: It’s mostly just a book about potatoes at this point.
Will MacAskill: I do now own many books about potatoes.
Rob Wiblin: Yeah.
Will MacAskill: And it all stemmed from how the potato was one of the most important transformative technologies of all time.
Rob Wiblin: Yeah? Tell me more.
Will MacAskill: Well, when it was first imported to Europe from South America, it was actually regarded with quite a lot of skepticism, because people thought that it would give you leprosy because the skin was —
Rob Wiblin: Looked leprous.
Will MacAskill: Yeah, looked like the skin of a leper. But it was also just a radically new sort of vegetable. So it took a while to take off, but then those areas that were suitable for potato-based agriculture started using it. There’s one study that suggests that they had radically more urbanization and population growth. You could get three times as many calories per acre from the potato as you could from —
Rob Wiblin: From wheat, or…?
Will MacAskill: Yeah, from wheat, or…
Rob Wiblin: It’s just so fast growing, or just so efficient at converting sunlight into calories?
Will MacAskill: Yeah, basically. It’s also quite close to a superfood. There’s this paper by Nathan Nunn and Nancy Qian in The Quarterly Journal of Economics, it’s got like thousands of citations, and it’s got a “Section II: The virtues of the potato.” It’s comparing potatoes to turnips. And at least if you’re an agricultural worker who needs a lot of calories per day, you can live on nothing but buttery mashed potatoes.
Rob Wiblin: In reasonable health?
Will MacAskill: In reasonable health, yeah. You get all the relevant nutrients apart from vitamins A and D, which you can get from milk. You also need an occasional supplement of lentils or oats for the molybdenum. But basically, to a first approximation, buttery mashed potatoes can just be your life. So it seemed to be actually just very good nutritionally as well.
Rob Wiblin: Huh.
Will MacAskill: The joke I started making was these analogies between the potato and AI. Because it was this discontinuous technological advancement that was in some ways more “general” than previous vegetables, but it was also recursively self-improving.
Rob Wiblin: Because we kept selecting the best potatoes?
Will MacAskill: Well, no. Because the potato was also very good feed for livestock, which produced manure, which allowed you to grow many more potatoes. Were you thinking that’s a bit of a stretch?
Rob Wiblin: I was thinking, how many cycles of improvement do you get out of that?
Will MacAskill: You do plateau. But I think that is also relevant.
Rob Wiblin: Right. Could happen with AI.
Will MacAskill: Exactly. Getting a bit of self-improvement doesn’t mean you go forever necessarily.
Rob Wiblin: Yeah, totally.
Will MacAskill: You can plateau. And it is relevant for thinking about automation or productivity improvements more generally. In agriculture, we’ve had these enormous productivity gains and automation, and actually that’s meant agriculture’s become a much smaller part of the economy rather than a bigger one — because it’s the stuff that’s hard to automate, but essential, that ends up becoming the bottleneck and swells to become the whole economy.
Rob Wiblin: Yeah.
Will MacAskill: And that could well happen with AI as well — there’s this economic model of how things might go and how that kind of bottleneck could arise. It’s all kind of singularity-esque, in both cases. But people thought it was a little flippant.
Rob Wiblin: Didn’t make it into the final cut. I think there are almost no mentions of potatoes in the book.
Will MacAskill: Sadly, it’s all gone, but I have been approached by people asking for perhaps a standalone article. Maybe it will still see the light of day.
Rob Wiblin: Yeah.
Will MacAskill: There’s a bit. I have commissioned research into this, because the other part of the issue is the core persistence study. If you take the persistence study and just extrapolate it out, then if the potato had never existed or hadn’t been imported to Europe, a billion fewer people would be alive today.
Rob Wiblin: Wow. It kind of makes it surprising that the civilizations in the Americas weren’t more powerful or weren’t more populous. I mean, if it’s so much more efficient at producing calories, you’d think that’d be a huge advantage in terms of getting economies to scale and being at the forefront of technological development.
Will MacAskill: Yeah. That’s actually maybe a good argument for some of the skepticism about the studies. So I have asked Jaime to look into it, because I’m confident the effect size will not be as large as stated in the paper.
Rob Wiblin: Yeah.
Will MacAskill: And then is it zero? I’m not sure.
Rob Wiblin: OK, well, we’re out of time, but our traditional last question is: do you have three books about potatoes that you can recommend to listeners?
Will MacAskill: I’ll have to get back to you. But there was a book called The Potato King that my team got me.
Rob Wiblin: Original edition, first edition?
Will MacAskill: No, sadly not. We did once have a party that was potato-themed, where everyone dressed in potato sacks that we’d made into various costumes. So that was my pandemic.
Rob Wiblin: We’ve all changed in our own way. All right. We’ll be back soon with another interview about What We Owe The Future. But for now, thanks for coming back on The 80,000 Hours Podcast, Will.
Will MacAskill: Cool. Thanks for having me, Rob.
Rob’s outro [02:14:50]
Rob Wiblin: A few little factual corrections to make on this one.
- Will gave some thanks to people who’d contributed to the book, which is always risky as you can easily forget someone, and in this case he omitted Laura Pomarius, who was his Chief of Staff before Max took over.
- Eric Williams’s book Capitalism and Slavery came out in the ’40s, not the ’50s.
- The Gates Foundation gives out about $6 billion per year, not $3.5 billion.
For my part, I got my dates for Jainism wrong. Jainism’s origins are a bit hard to pin down, but they’re more like 500–800 BC rather than 4,000 years ago. For a religion that might be as much as 4,000 years old, you’d have to go back to the ancient Egyptians, and even that would be pushing it.
Oh well, what’s a few millennia between friends?
If things go to plan, the next of my interviews to go out will be on the 80k After Hours feed rather than this one. It’s a conversation with two expert forecasters about how they go about predicting inter-state conflict, and what’s hot in the ‘predicting the future’ scene at the moment. If that sounds interesting to you, subscribe to 80k After Hours wherever you get this show.
And just a reminder about the book giveaway I mentioned in the intro. If you’d like a free copy of The Precipice: Existential Risk and the Future of Humanity by Toby Ord, 80,000 Hours: Find a Fulfilling Career That Does Good by Benjamin Todd, or Doing Good Better: Effective Altruism and How You Can Make a Difference by Will, then you can get it at 80000hours dot org slash freebook in exchange for joining our weekly email newsletter.
Or in exchange for subscribing and insta-unsubscribing — that’s also cool.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio mastering and technical editing by Ben Cordell.
Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.