Transcript
Rob’s intro [00:00:00]
Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and what really knocked off the giant ground sloth. I’m Rob Wiblin, Head of Research at 80,000 Hours.
This is Will’s fourth appearance on the show and he’s one of our most consistently popular guests, so I’ll forgo the usual introduction here.
Today we’re discussing Will’s new book What We Owe The Future, which is all about longtermism and which I expect to make a big splash over the next month.
If you already know a fair bit about longtermism or have listened to Will talk about the book on another show, you can potentially skip forward to about minute 22, or the chapter called “Will’s personal journey” to get to things you very likely haven’t heard before.
But… ya know, the first 22 minutes is still pretty interesting whether you’re new to all this or not, and it has some stuff that would have been new to me, so I wouldn’t say you should skip forward, just that you’ve got the option if you prefer.
Here’s a teaser for 2 quick notices I’ll cover in full in the outro.
First, we haven’t gotten as many responses to our user survey as we’d like in order to know we’re getting a full picture of all your experiences, so we’re extending the deadline for that one by 2 days to give you more time to fill it out. This year’s entries will now close on Wednesday, so please go to 80000hours.org/survey before then.
We’re currently looking for a Marketer to join the 80,000 Hours team and applications for that close on the 23rd. You can find out about that role at 80000hours.org/latest or hear a bit more about it by skipping to the outro for this episode.
Alright, with that little bit of housekeeping out of the way, I bring you Will MacAskill.
The interview begins [00:01:36]
Rob Wiblin: Today I’m speaking with Will MacAskill, who will be well known to many people as a cofounder of the effective altruism community. Will is an associate professor of philosophy at Oxford University, director of the Forethought Foundation for Global Priorities Research, and an advisor to the new Future Fund.
Rob Wiblin: In his academic capacity, Will has published in philosophy journals such as Mind, Ethics, and The Journal of Philosophy. In his capacity as an entrepreneur, he cofounded Giving What We Can, the Centre for Effective Altruism, and our very own 80,000 Hours, and remains a trustee on all those various boards. Back in 2015, he published Doing Good Better. In 2020, he published Moral Uncertainty. And now, in August 2022, he’s releasing his third book, titled What We Owe the Future — which is the topic of today’s conversation. Thanks for returning to the podcast, Will.
Will MacAskill: Thanks so much for having me on.
Rob Wiblin: Today we are basically just going to be talking about all sorts of different aspects of longtermism, considerations in its favour, arguments against it, what it currently implies (if anything), whether we should expect the future to be good or bad, whether anything we can do now can actually change the values of people hundreds of thousands of years in the future, and so on.
What longtermism actually is [00:02:31]
Rob Wiblin: But before that, we should really talk about what longtermism actually is. When you talk about longtermism today and in the book, What We Owe the Future, what specifically do you mean by that term?
Will MacAskill: Longtermism is the view that positively influencing the long-term future is a key moral priority of our time. So it’s about taking seriously just how much is at stake when we look to humanity’s future, and then trying to figure out what are the events or challenges that could be pivotal in humanity’s long-run trajectory and ensuring we act responsibly and carefully to navigate civilisation onto a better path.
Rob Wiblin: OK. So you say it’s a top priority. Why not the overwhelming priority or the only priority? That might seem a natural extension of this idea.
Will MacAskill: Well, there are a variety of strengths of view you could have. One is just saying it’s one of the things that we as a society should really care about — that’s “longtermism.” Saying it’s the key priority — it’s the most important thing — would be what I call “strong longtermism.” And then you could imagine an even stronger view, saying it’s overwhelmingly important.
Will MacAskill: And I think there’s two reasons for just focusing on the weaker claim: just that it’s a priority. First, from a practical perspective, it doesn’t really make any difference. At the moment, how much does society spend trying to preserve and safeguard and navigate the world and challenges for future generations? Maybe it’s 0.1% of society’s resources. I think that should be higher. Should it be 1%, 10%, more? At the moment, it doesn’t really matter, because if we get to 1% of society’s resources, I will be a very happy man.
Will MacAskill: Then the second reason is just how confident am I in different views? I feel very confident in the weak form: just among different priorities we should have, the long-term future of our species should be one. The idea that it’s the most important thing is much more controversial, and so it’s something I’m kind of less happy to stand up in public and defend.
The case for longtermism [00:04:30]
Rob Wiblin: Yeah. OK, let’s dive straight into the case for longtermism. What’s the number one argument you want people to keep in mind in favour of having a longtermist worldview?
Will MacAskill: I think the core argument is very simple. It’s that future people matter morally. It’s that there could be enormous numbers of future people. And then finally, it’s that we can make a difference to the world they inhabit. So we really can make a difference to all of those lives that may be lived.
Rob Wiblin: OK. I guess we should probably take those piece by piece. The first one is just that the future could be really big in expectation. Did you want to explain why you think that?
Will MacAskill: Sure. So Homo sapiens have been around for about 300,000 years. If we live as long as typical mammal species, we will survive for hundreds of thousands of years. If we last until the Earth is no longer habitable, we will last for hundreds of millions of years. If one day we take to the stars and have a civilisation that is interstellar, then we could survive for hundreds of trillions of years. I don’t know which of those it will be. I think we should give some probability to all of them, as well as some probability to near-term extinction, maybe within our lifetimes or the coming centuries.
Will MacAskill: But taking that all into account, and even on the kind of low estimates — such as us living as long as a typical mammal species — the future is truly vast. So on that low estimate, there are about 1,000 people in the future for every person alive today. When we look at those longer time scales that civilisation could last for, there are millions, billions, or even trillions of people to come for every person alive today.
Rob Wiblin: So on one level, it’s pretty natural for people to care about how the future’s going to go. You give some nice examples to try to make this intuitive in the book. Do you mind going through one or two of those?
Will MacAskill: For sure. One thing is just simply imagine you’re hiking on a trail and you drop some glass. Suppose you know that in 100 years’ time, someone will cut themselves on that glass. Is there any reason at all for not taking the time to clean up after yourself, just because the person who will be harmed lives in 100 years’ time?
Rob Wiblin: Or hasn’t been born yet.
Will MacAskill: Maybe hasn’t even been born. And it seems like the answer is no. Or if you could prevent a genocide in 1,000 years versus 10,000 years versus 100,000 years, and it will kill 100,000 people, does it make any difference when those lives will be lived? Again, it just seems like intuitively not. Harm is harm wherever it occurs. And in that way, distance in time is quite like distance in space. The fact that someone will suffer is bad in and of itself, even if they live on the other side of the world. The fact that someone will suffer is bad in and of itself, even if they will live 10,000 years from now. So I think when we reflect on thought experiments like this, we see that, yeah, we want to give a lot of moral weight to future people.
Will MacAskill: And the same goes in many other areas where we know we’ll have long-term impact: disposal of radioactive nuclear waste, for example. It’s just perfectly common sense that, given this waste will be radioactive to some extent for hundreds of thousands of years, we should think a little bit about how we ensure that we’re not harming people in the far future. So I think there is a strong element of common sense, at least in this idea that future people matter morally — that’s just really a pretty common-sense idea.
Rob Wiblin: Yeah. I think that the thing that strikes even deeper for me on a gut level is that lots of projects that we participate in — like trying to do scientific research to cure cancer, or building buildings that are going to last for a very long time, or writing novels that people might read in the future — obviously, part of the motivation is knowing that the benefits of the work that we’re doing will accrue for a very long time. If we just found out that humanity was going to disappear in 2040, or 2070 even, I think it would make doing scientific research at great cost today, or pursuing costly innovations, or building infrastructure that ideally could last for a very long time, all feel a lot less worthwhile, at least in my mind.
Will MacAskill: Absolutely. There’s this proverb that I love: “A society grows great when old men plant seeds for trees under whose shade they will never sit.” And I think that’s just a very resonant idea, that what makes for a meaningful great society is one where we’re playing a sort of relay race with our ancestors and our descendants. We’re taking the projects that they have bestowed on us, we’re making them even better, and we’re creating a world that future generations can thrive in.
Rob Wiblin: Yeah. But at the same time, we shouldn’t pretend that longtermism is just mundane and completely common sense. There is something that’s more distinctive and a bit more controversial about it. Do you want to draw out that aspect of it?
Will MacAskill: For sure. I think the thing that’s most distinctive is what we mean by “long term” — the sheer scale of how we’re thinking about things. People criticise current political thought for being short-termist, or companies for being short-termist, and what they mean is, “Oh, companies are focused on the next quarter’s profits. Our political cycles are focused on the next election. And they should be thinking further out, on the order of years or decades.”
Will MacAskill: But I think we should not be so myopic. In fact, we should take seriously the whole possible scale of the future of humanity. So one thing that we’ve just learned in the last 100 years of science is that there is a truly vast future in front of us. And it feels odd, possibly even grandiose, to start thinking about the things that could happen in our lifetime that could have an impact — not just over decades, but over centuries, thousands, millions, or even billions of years. But if we’re taking seriously that future people matter morally, and that it really doesn’t matter when harms or benefits occur, then we really should take seriously this question of whether there could be events that occur in our lifetimes that have not just long-lasting, but indefinitely persistent effects.
Rob Wiblin: OK. So that’s the reason why, given that people will exist in the future, we should try to make their lives better. Do you want to say anything about the idea that it would be good to make there be more people in the future? Which is one subset, I suppose, of the activities that people engage in to try to make the future better.
Will MacAskill: Yeah. So there are two ways you can positively influence the long-term future. You can increase the duration of civilisation — so you can make sure we don’t go extinct or civilisation doesn’t irrecoverably collapse. And you can improve the quality of the future — so for however long civilisation will last, you can make sure that the people who do live in that future have better lives.
Will MacAskill: If you’re looking at the first of those — reducing risk of extinction or catastrophe — then you’re not actually benefiting future people; you’re just enabling future generations to exist. And even if you think they’ve got very good lives, you might think, “Well, there’s not really a moral loss if they don’t exist.” I think there’s something intuitive about that view, but I think ultimately it’s not correct. And that’s for a few arguments, some of which get technical, and I talk about it in the book.
Will MacAskill: One way of thinking about it is just to imagine lives with intense suffering. So if you could bring into existence a life that lasts 10 years and just has the most tortured, extreme suffering, and you have the option of not bringing into existence that life, it seems pretty clear that you ought to just say, “No. A life full of suffering just shouldn’t be lived.” And in fact that would be just a harm for the person whose life you’d bring into existence.
Will MacAskill: So if we think that about lives that are negative, that have more suffering than happiness, well, why should we not think the same, symmetrically, about lives that are positive? This becomes more intuitive if you think about really, really good lives. Just imagine your best day, then imagine you could create a life where every day is just as good as that, and they just have this wonderful flourishing life. Then it becomes more intuitive — that actually it is just a loss if that person doesn’t get to live and experience this world.
Rob Wiblin: Yeah. I guess population ethics, which is kind of the philosophy of what you’re just talking about, is a huge area in itself that we probably won’t be able to dive into very much today. But we can stick up a link to the best summary that we can find on that in the show notes.
Will MacAskill: For sure. And I should say it’s one of the hardest and most complex and challenging areas of moral philosophy. I have a chapter on this in the book, chapter eight, I think, and it’s the best overview I could do, at least.
Rob Wiblin: OK. The third plank in the argument is that we actually can do something to help people in the future. Do you want to explain why you think this issue is tractable, as we say?
Will MacAskill: I think there are two ways in which we can impact the long-term future. The first is we can ensure that we have a future at all, by reducing risks of extinction of human beings or of the collapse of civilisation. A second is that we can help ensure that the values and ideas that guide future society are better ones rather than worse.
Rob Wiblin: OK, so we’ve got avoiding extinction, which is pretty intuitive that that would have persistent impacts. And then there’s trying to change what people opt to do, what activities they’re engaged in in the future — you’re saying by shifting the values that they have, so the things that they think are valuable to do. I suppose it’s a little bit counterintuitive that we might be able to affect what people care about thousands of years in the future, but as we’ll come back to later, the case that that is possible is actually almost overwhelming. Is it only values that can shift what people do in the future? Or is there also changing people’s practices or some other way of shifting what people are up to in the future?
Will MacAskill: There are some other things I think we can do: preserving information, preserving species, digitising things, preserving internet archive backups. I think those have a very mild benefit just forever, because I think information can persist forever. There are also sometimes arguments about economic or technological lock-in. So QWERTY keyboards versus Dvorak keyboards is this paradigm example, allegedly of lock-in — where a certain standard was chosen and just persisted for a long time.
Rob Wiblin: And arguably it’s way worse, but hard to shift now.
Will MacAskill: Exactly. From my investigation, I found approximately zero examples of plausible technological lock-in. On QWERTY versus Dvorak, there aren’t good arguments that Dvorak is really superior to QWERTY. The arguments that you’ve heard came from Dvorak himself — he was a good self-publicist. And then when we’re thinking millions of years into the future and we ask, “Why would future civilisation not switch to some better standard?” — it seems pretty hard to believe it wouldn’t.
Rob Wiblin: Just because they won’t be in a hurry at that point. At some point things will stabilise.
Will MacAskill: Exactly. Yeah. I mean, imagine in your own case, if you were going to be a typist for a million years, you would probably invest the time in setting up your keyboard in the way that makes most sense.
What longtermists are actually doing [00:15:54]
Rob Wiblin: Yeah. Makes sense. So, so far we’ve been talking at a pretty abstract level. It might help people to think about longtermism if they have a more concrete idea in mind of what it actually implies, or what longtermists are actually doing. Do you have a couple of useful illustrative examples of that to hand?
Will MacAskill: For sure. One focus area is pandemic prevention, in particular preventing worst-case pandemics. You might think this is a very trendy thing to be hopping on the bandwagon of, but we have been concerned about this for many, many years. 80,000 Hours started recommending this as a career area in 2014, I believe. And why are we so concerned about this? Well, developments in synthetic biology could enable the creation of pathogens with unprecedented destructive power — such that the COVID-19 pandemic, while killing tens of millions of people, wreaking trillions of dollars of damage, and being an enormous tragedy, would look kind of small scale by comparison. In fact, at the limit we could create pathogens that could kill literally everyone on the planet. And if the human race goes extinct, that’s a persistent effect. We’re not coming back from that.
Will MacAskill: So what are we doing? Well, there are various possible options. One thing we’re doing is investing in technology that can be used to prevent worst-case pandemics. Something I’m particularly excited about at the moment is far-UVC. This is quite a narrow band of comparatively high-energy light, and the hope is that it basically sterilises rooms that the light is shining on. Because it’s a physical means of sterilising surfaces and even sterilising air, if it really works, it’s protective against a very wide array of pathogens. It’s not necessarily something that clever and mal-intentioned biologists could guard against. Yet if this were implemented as a standard in all light bulbs around the world, then potentially we could just actually be protected against all pandemics, ever, as well as all respiratory diseases, just as a bonus. So this is very early stage, and we’re going to fund this a lot, but potentially at least it’s extremely exciting.
Rob Wiblin: Yeah. What’s another example?
Will MacAskill: Another example within pandemic preparedness would be early detection. At the moment we do very little screening for novel pathogens, but you could imagine a programme all around the world that is constantly screening wastewater, for example, just to see: is there anything in these samples that we don’t know about that looks like a new virus or a new bacterial infection? It could then ring the alarm bell. It means we could respond much, much faster to some new pandemic outbreak.
Rob Wiblin: Yeah. Any non-pandemic stuff?
Will MacAskill: Yeah. There’s lots, but the other big focus by far is on artificial intelligence. So the development of artificial intelligence that’s not just narrow — we already use AI all the time when we’re using Google search, let’s say — but AI that is more advanced, more able to do a very wide array of tasks and able to act basically like an agent, like an actor in the world in the way that humans do. There’s good arguments for thinking that could be among the most important inventions ever, and really pivotal for the long-run trajectory of civilisation.
Will MacAskill: And that’s kind of for two reasons. One is that technological progress could go much, much faster. At the moment, why does technological progress go at the pace it does? Well, there’s just only so much human research labour that can go onto it, and that takes time. What if we automate that? What if now AI is doing R&D? Economic models produce the result that technological progress could go much, much faster. So in our lifetimes, perhaps actually it’s the equivalent of thousands of years of technological advancement that are happening.
Rob Wiblin: I suppose it could be thousands of years, but even if it was just decades occurring in a single year, that would still be massive.
Will MacAskill: That would still be absolutely massive.
Rob Wiblin: And I guess we’ll be going into this process not knowing how much it’s going to speed things up, which is pretty unnerving.
Will MacAskill: Which is pretty unnerving, exactly. It could lead to enormous concentrations of power — perhaps with a single company, single country, or the AI systems themselves. And so this is the scenario kind of normally referred to as “The Terminator scenario.” Many researchers don’t like that. I think we should own it.
Rob Wiblin: It’s a good movie.
Will MacAskill: It’s a good movie. The time travel is maybe unrealistic, but other elements actually do map onto the worries. Where the thought is, you’ve got very rapid technological progress happening, that’s being driven by the AIs themselves. The AI systems aren’t bottlenecked in terms of the peaks of the intelligence that they can reach in the way that we are. There’s only so much brain you can fit inside a human skull, whereas AIs could have more and more computational power that they’re using, better and better software, and so it might not be a very long time period at all that these AI actors move from being about human-level intelligence to being much, much smarter than we are. And we might find ourselves in a situation where it’s really AI systems that are controlling the future, rather than human beings.
Will MacAskill: That seems like a pivotal moment, because we might ask, “Well, what do those AI systems care about? What are they aiming to do?” It could be they’re benevolent and they work together with humans and we all together build this flourishing future society. It could be that they care about things that are just very alien to us, and that perhaps have no moral value at all, and humanity is left entirely disempowered. And the future just gets driven by things that just might seem kind of arbitrary to us. That could just be this indefinite loss of value.
Rob Wiblin: Yeah, that’s a lot of concreteness there. If you’re about to stop listening, then don’t walk away thinking that longtermism is synonymous with preventing pandemics and worrying about AI, because the longtermist mentality can go in all kinds of different directions, and we’ll come back to some other possible concrete ways of cashing out the ideas later on. But let’s come back to the abstract philosophy for a little bit longer.
Will’s personal journey [00:22:15]
Rob Wiblin: Something that might not be totally obvious to people — until they read your book, anyway — is that you actually took quite a long time to come around to being bought into longtermism, at least on a kind of emotional level or being motivated by the ideas. What were your reservations when you first heard about the idea?
Will MacAskill: Yeah. It wasn’t just emotional disinclination; intellectually, I wasn’t bought in for quite a while. In fact, the first time I met Toby [Ord], and he presented to me the idea of Giving What We Can, which we later cofounded, I asked him, “What are your biggest worries about this project? The ways in which you think it could be fundamentally mistaken?” And he said among his biggest worries was that perhaps focusing on global health and development wasn’t the right thing to do, and that instead we should focus on existential risks — that is, risks that could dramatically reduce humanity’s long-term potential, such as catastrophic pandemics, catastrophic AI.
Will MacAskill: And I thought this was crackpot. I thought this was totally crackpot at the time. In terms of my intellectual development, maybe over the course of three to six months, I moved to thinking it was non-crackpot. And then over the course of a number of years, by the time I wrote Doing Good Better, I became intellectually bought in. So I first met Toby Ord in 2009. When I wrote Doing Good Better, I included some material on global catastrophic risks in there. I considered having much more, but decided I didn’t want to make the book too broad, too diffuse. Then by 2017 I thought I just wanted to really pivot to focusing on these issues.
Rob Wiblin: Why did you initially think it was crackpot? I guess this is 2009. So it was a weirder idea at that point, far fewer people were into it.
Will MacAskill: Yeah, so in 2009, it just was a lot more crackpot then, in two senses. One was just that very few people were into this idea. The second is that the particular activities that were being discussed were much more speculative than now. One thing that’s striking is actually the correlation between the biggest risks that were identified at that point, almost from the armchair, and what we now regard as the biggest risks. AI was regarded as a very major risk, pathogens were regarded as a very major risk. I think the most notable difference is that nanotech was regarded as a very major risk.
Rob Wiblin: It’s kind of dropped off the radar.
Will MacAskill: Dropped off the radar. People don’t really see it as pivotal now. Having said that, what do we do about such things? That was much, much less well developed. I certainly could not have told you about far-UVC or early detection programmes in 2009. So it was much more like a bunch of people on the outskirts of academia — or not in academia, kind of on blogs — really speculating about such things.
Rob Wiblin: Yeah. Writing up their pet theories.
Will MacAskill: Exactly. That’s right. So why was I not intellectually convinced to begin with? I think there are a few reasons. Firstly, I understood existential risks as just extinction risks, with that being the main pathway. And the case for thinking that extinction risk is enormously important from a very long-term perspective does rely on some philosophical assumptions. In particular, population ethics assumptions that the loss of a future life is a moral loss — in the sense of that life never existing in the first place, not in the sense of someone dying. Then a second reason is the assumption that the future will be net positive, which I guess I have just always thought, but at least it’s another question mark that you could have.
Will MacAskill: And then thirdly, even if you accept that the very long-term future is of enormous importance, that it’s where almost all value is, is the best way to promote that value by reducing extinction risk? What about speeding up economic growth? Both in terms of there being more value in the future, and also just from a practical perspective: even if you think that reducing extinction risk is the pathway to increasing future value, that doesn’t mean you should work on that directly. Maybe we’re just unsure enough about how to reduce extinction risk that doing much broader things, like generally making the world better, is the best thing to do.
Rob Wiblin: Yeah. Something that’s a little bit funny about this is recently I’ve gone back and been looking at some old books that discussed existential risks or global catastrophic risks. In 2003, the famous lawyer and jurist Richard Posner wrote this book called Catastrophe: Risk and Response, which I think just lays out this incredibly boring common-sense case that humanity is vulnerable to big disruptions and we’re not really doing very much about that. We don’t really dedicate any people to thinking about that or preventing that, and there’s stuff that could be done. It’s a lot more common sense in flavour and more embedded within mainstream academic discussion than the existential risk stuff was in 2009.
Rob Wiblin: And I almost think it’s a real shame that — I don’t know, maybe if we’d stuck with the kind of common-sense argument, that would’ve closed our mind to more interesting things — but I wonder whether in fact this whole process was slowed down by not just adopting a more boring economics cost-benefit analysis way of talking about existential risks.
Will MacAskill: I mean, I think that’s very possible. Certainly the early discussion of this had a feeling of kind of fringiness to it that wasn’t helpful and wasn’t really necessary as well. I should say with that book, Catastrophe: Risk and Response, it’s focused on catastrophes — where that could mean 10% of the population dying or 90% — which are distinct from the category of existential risks and distinct from longtermism. Posner also endorses a discount rate, such that the long-term future does not have enormous moral value. Past a few centuries, it loses almost all its value, because on his model you discount the value of the future by 2% or so every year, even in terms of welfare. And that means after a few hundred years —
Rob Wiblin: You just don’t care.
Will MacAskill: — you just don’t care. Exactly. And that’s how he gets around the conclusion that extinction risk should be a particular focus. And how big a difference this makes in practice is not hugely obvious, but at least in terms of conceptual understanding, I think it’s very important. There’s a very important difference between what Richard Posner was doing and what the people working on existential risk were doing.
Rob Wiblin: Yeah. So coming back to your gradual journey towards being excited by longtermism, did you find it intuitive to care about future generations to quite the degree that supporters like Toby Ord were suggesting in 2009?
Will MacAskill: I certainly found it intuitive to care about future generations. I think the best way of doing that was very much an open question to me. In particular, I think I would’ve endorsed the idea that maybe there are these things that have very long-lasting effects, but who knows if we can really get any traction on them. Perhaps just the best thing you can do, even for the long term, is building a flourishing society, increasing economic growth, improving education, that sort of thing.
Rob Wiblin: Making democracy work better, yeah.
Will MacAskill: But then the biggest thing was just looking at what options I had available to me in terms of what to focus my time on. One was building up this idea of Giving What We Can, kind of a moral movement focused on helping people and using evidence and data to do that. It just seemed like we were getting a lot of traction there.
Will MacAskill: Alternatively, I did go to these five-hour seminars at the Future of Humanity Institute that were talking about the impact of superintelligence. Actually, one way in which I was wrong is that the book that that turned into — namely Superintelligence — was maybe 100 times more impactful than I expected.
Rob Wiblin: Oh, wow.
Will MacAskill: Superintelligence has sold 200,000 copies. If you’d asked me how many copies I expected it to sell, maybe I would have said 1,000 or 2,000. So the impact of it actually was much greater than I was thinking at the time. But honestly, I just think I was right that the tractability of what we were working on at the time was pretty low. And doing this thing of just building a movement of people who really care about some of the problems in the world and who are trying to think carefully about how to make progress there was just much better than being this additional person in the seminar room. I honestly think that intuition was correct. And that was true for Toby as well. Early days of Giving What We Can, he’d be having these arguments with people on LessWrong about whether it was right to focus on global health and development. And his view was, “Well, we’re actually doing something.”
Rob Wiblin: “You guys just comment on this forum.”
Will MacAskill: Yeah. Looking back, actually, again, I will say I’ve been surprised by just how influential some of these ideas have been. And that’s a tremendous testament to early thinkers, like Nick Bostrom and Eliezer Yudkowsky and Carl Shulman. At the same time, I think the insight that we had, which was that we’ve actually just got to build stuff — even if perhaps there are some theoretical arguments that you should be prioritising in a different way — was right: there are many, many positive indirect effects from just doing something impressive and concrete and tangible, as well as the enormous benefits that we have succeeded in producing, which is tens to hundreds of millions of bed nets distributed and thousands of lives saved.
Rob Wiblin: OK, that’s kind of the setup for how there was a decent degree of resistance to longtermism when you first encountered it. What was one of the first steps that you took towards embracing it on a deeper level?
Will MacAskill: Intellectually, there were two big changes, but the starting point was just appreciating the scale of the future, and that really came pretty early on. I think that the future, in expectation, is very big indeed. “In expectation” is a bit of technical terminology that means taking both probabilities and scale into account. Life expectancy works like this: if you’ve got a 10% chance of living 100 more years, that would increase your life expectancy by 10 years. And in that sense, in expectation, the future is very big.
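To spell out the arithmetic behind “in expectation” (a gloss on the example just given, with the implicit assumption that the other 90% of outcomes add no extra years): an expectation is a probability-weighted sum, so

0.10 × 100 + 0.90 × 0 = 10 extra years of life expectancy

The expected size of the future is computed the same way: weight each possible duration of civilisation by its probability and add them up.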
Will MacAskill: Yes, perhaps we’ll go extinct in the next few centuries, but if we don’t and we get to a kind of safe place, things could be really big indeed. As I said, hundreds of millions of years until the Earth is no longer habitable, hundreds of trillions of years until the last stars fade. So there’s just this enormous amount of moral value there. In fact, whatever you care about, essentially — whether that’s joy, adventure, achievement, the natural environment — it’s almost all in the future. That idea I liked; I bought into it fairly early on.
Will MacAskill: And then the two things that really changed were, firstly, the philosophical arguments becoming just more robust over time, I think. The issue of population ethics, for example. Questions about whether it’s a moral loss if we fail to create someone at a future date who would have a happy, flourishing life? That’s actually essentially irrelevant to the case for longtermism, because I think there are things that we can do that aren’t just about increasing the number of people in the future, but about how well or badly the future goes, given that the future is long. So the difference between a flourishing utopian future and one of just perpetual dictatorship.
Rob Wiblin: Yeah, OK. So it might affect where you focus, whether you focus on creating a future that has way more people in it versus not. But even if you weren’t bought in on there being more people — because there will probably be tonnes of people, or at least in expectation there’ll be lots of people in the future — then you’d still care about the long-term impacts, because you’d want to improve their quality of life and leave them in a better situation.
Will MacAskill: For sure, exactly. That’s one. Secondly, I think getting more clarity on the arguments. Why economic growth, for example, is at least not directly a way of positively influencing the very long-term future: because at some point in time we will plateau. Perhaps you speed up a little bit the point at which we get as well off technologically as we ever will, but that’s something that we will achieve over the course of not that long, certainly within thousands of years. That’s not something that’s affecting the really long-run trajectory of civilisation. Whereas the things that actually do affect it are perhaps a much narrower set of activities, such as AI and pandemic prevention.
Will MacAskill: But then the second category of things was less philosophical and more empirical, where the sorts of actions that one would recommend as a longtermist moved out of the category of, “Let’s sit and think more about this in a philosophy seminar” or something equivalent — where there’s just a real worry that’s like, “Are we really achieving anything here?” — and instead are now just very concrete. So within AI, there’s this huge boom in machine learning research. We are now able to experiment with models to see if they display deceptive behaviour. Can we make them more truthful? Can we make them less likely to cause harm? In the case of pandemics —
Rob Wiblin: We’ve learned a lot.
Will MacAskill: We’ve learned a lot over the last few years. We have very concrete ways of making progress. So the actions that we now recommend have really moved out of something where it feels very brittle, it feels very like we could easily be fooling ourselves, and more to just, “Look, it’s just this list of things that we can really get traction on.”
Rob Wiblin: Yeah. It’s very interesting. You’re a philosopher by training, and in 2009, were you doing your philosophy PhD then?
Will MacAskill: I was doing my master’s degree, so just before my PhD.
Rob Wiblin: Before that, yeah. But even given that, if you’re in a philosophy seminar room where people are talking about really big, important, potentially crucially decisive ideas that might affect what you should do, if there’s no actual way of cashing it out, if there’s no project that seems tractable and viable to build, then no matter how important the ideas seem on paper, you had this intuition that you want to hit the eject button, basically, and say, “I’m not going to let this blackboard philosophy determine my direction.” And you almost insist that, “Until there’s something practical, I’m just going to refuse to engage with this on this level.”
Will MacAskill: For sure. And this is actually something that I think is great about the intersection between academic research and the effective altruism movement: it’s extremely easy for academic research or similar to just get lost in the weeds of some intellectual project that actually just doesn’t cash out, even if it feels like it’s very practical. Philosophy engages a lot with the question of the value of equality and the disvalue of inequality. And there are various different models of how you should take the disvalue of inequality into account. At the time, this really felt like a very important thing. It actually really does influence economists, to some extent.
Will MacAskill: But actually, I think it basically doesn’t make any difference. Because look at the world: the scale of inequality is just so vast that whether you are simply adding up benefits to people, or whether you also give extra weight to benefits to people who are less well off than others — that is, you give weight to the disvalue of inequality itself — it just doesn’t make any difference for practical purposes. So that’s a case where something that seems like it’s going to be practically relevant does not become practically relevant.
Rob Wiblin: Yeah, I guess economists would say it’s practically relevant, but not on the margins.
Will MacAskill: Yes, exactly.
Rob Wiblin: It’s like the case we’re in with spending to reduce global catastrophic risks. If you’re having a debate about whether you should increase spending 30-fold or 70-fold, on some level, that is a practical question. On another level, you could just say, “Well, let’s just start by doubling it and then we could discuss this more.”
Will MacAskill: Exactly. And this is one of the things that I think most often gets misunderstood about effective altruism: we’re always thinking on the margin. And so, if I’m going around championing a very long-term perspective — so greater investment in pandemic preparedness and AI safety and AI governance — that’s saying that relative to where we are now, we should be spending and focusing a lot more on this. Whereas people might respond and say, “Are you saying that all of our resources should be spent on this?” I’m like, “No.”
Will MacAskill: I really don’t know the point at which the arguments for longtermism just stop working because we’ve just used up all of the best targeted opportunities for making the long term go well, such that there’s just no difference between a longtermist argument and just an argument that’s about building a flourishing society in general. Maybe you hit that at 50%, maybe it’s 10%, maybe it’s even 1%. I don’t really know. But given what the world currently prioritises, should we care more about our grandkids and their grandkids, and how the next few millennia and millions of years go? Yes. And that’s the claim.
Rob Wiblin: Yeah. Coming back to your gradual persuasion, what time scale are we talking about here? In 2009, you first kind of learned about it, and it sounds like gradually you came to think it’s not crazy. But in 2014 you wrote Doing Good Better, and that somewhat soft pedals longtermism when you’re introducing effective altruism. So it seems like it was quite a long time before you got fully bought in.
Will MacAskill: Yeah. I should say, for 2014, when writing Doing Good Better: in some sense, the most accurate book, one that fully represented my and my colleagues’ EA thinking, would’ve been broader than the book’s particular focus. And especially for my first book, there was a lot of the equivalent of trade — negotiation with the publishers about what got included. I also wanted to include a lot more on animal issues, but the publishers really didn’t like that, actually. Their thought was you just don’t want to make it too weird.
Rob Wiblin: I see, OK. They want to sell books and they were like, “Keep it fairly mainstream.”
Will MacAskill: Exactly. And there was also just the thought of starting to talk about the issues around existential risk at more length; again, it’s not that clear what you tell people to do. Having said that, yeah, in 2014, I think I would’ve just presented all of global health and development, animal welfare, and existential risk work on an even playing field, as it were. And it was after then, after the publication of that book and the increase in uptake of the effective altruism movement, that I felt like, “OK, that project has now been kind of wrapped up. Now I want to focus in a much more deliberate way on these longtermist issues.” I remember 2017 in particular, because I felt like I was at a bit of a moment where I’d wrapped up some projects. It was also a moment of particularly energetic discussion about AI timelines: AlphaGo had appeared in 2016, and then some people started making arguments in 2017 for very short AI timelines.
Rob Wiblin: As in, we might have human-level AI within years or something?
Will MacAskill: Yeah. There were a couple of people who were arguing for five-year timelines, as in 50/50 chance of human-level intelligence within five years.
Rob Wiblin: Bold prediction.
Will MacAskill: You might notice that was exactly five years ago. One pathway into the book was that these were people I respected a lot, so I thought, “I’m going to dive into this a bit more.” I started diving into it, and I was not particularly convinced by the ultra-short AI timelines arguments. That led to pulling on a bunch of other threads, and I thought, “OK, I just actually really want to figure this all out for myself.” And then I guess five years later, I’ve now got this book, What We Owe the Future.
Strongest arguments against longtermism [00:42:28]
Rob Wiblin: Yeah. So that’s some of the areas where you’ve kind of come around to the longtermist perspective. What still feel like the strongest arguments against the long term being a key moral priority? That still seem kind of plausible to you?
Will MacAskill: I think I have a few. Maybe the most powerful, but not that action-relevant, is just maybe nothing matters. Maybe the world is just entirely meaningless, and anytime you say we should do something, that’s just false.
Rob Wiblin: Yeah, OK. It’s a misunderstanding.
Will MacAskill: Yeah, because nihilism is true.
Rob Wiblin: OK. Well, let’s bracket that.
Will MacAskill: We can bracket that. The one that I think really weighs on me most is just the following argument: look over our intellectual progress in the last few hundred years. There’s a series of just enormous intellectual changes that Nick Bostrom would call “crucial considerations,” that greatly change how we should think about our top priorities. Probability theory, even in its very early stages, was only developed in the 17th century by Blaise Pascal. The idea that the future might be enormously large and the universe might be enormously big, yet unpopulated — that was only the early 20th century. The idea of AI actually has a pretty decent history going. The first computer science pioneers, Alan Turing and I. J. Good, basically just understood the risks that AI posed.
Rob Wiblin: Seems like almost immediately.
Will MacAskill: Immediately, it was incredible. I mean, they were very smart people.
Rob Wiblin: That’s true, yeah.
Will MacAskill: I mean, they didn’t have the arguments really well worked out, and I think a lot of intellectual contribution comes from really going deep. But you look at the quotes from Turing and I. J. Good in the ’50s and early ’60s, and it does seem like they’re getting some of the key issues. The idea that artificial beings could be much smarter than us, and that there’s a question of what goals we give them. Also, the idea that they’re immortal, because any AI system can just replicate itself indefinitely.
Rob Wiblin: And the positive feedback loop really jumped out at them. They’re like, “Oh, you make these smart machines. Then they can improve themselves better than we could and so it could take off.”
Will MacAskill: Exactly. That was I. J. Good, stated that just very cleanly. Pandemics as well. The first-ever piece of dystopian science fiction was by Mary Shelley, called The Last Man. That was in the 19th century, and was about a pandemic that killed everyone in the world. So actually, there is a good track record of some people being strikingly prophetic. Having said that, it was still only in the ’80s that population ethics really became a field. Nick Bostrom’s astronomical waste argument was in the 2000s. You know, nanotech was still one of the top causes in 2010.
Will MacAskill: And then I honestly feel like we’re learning a lot in terms of what are the right ways to tackle priorities. Even if the priorities aren’t changing, like the high risk of AI, we’re learning a lot about how best to tackle them. So I think we’re still learning a lot. In 100 years’ time, might there be very major, crucial considerations — such that people in the future would look back at us today and think, “Oh, they were really getting some major things wrong”? In the same way that we would look back at people in the 19th century and say, “Oh wow, they really misunderstood things.” Actually, I have an example. The early utilitarians, John Stuart Mill and colleagues, had this brief longtermist phase.
Rob Wiblin: Really?
Will MacAskill: Yep. John Stuart Mill has this wonderful quote, speaking to Parliament: “You might ask, why should we care about posterity? After all, what has posterity ever done for us?” And then he has this beautiful discussion of how, actually, posterity has done lots of things for us, because we build projects. Life only has meaning because we are doing things for the benefit of posterity.
Rob Wiblin: So the thing that they’re doing for us is holding all the value that we’ll create?
Will MacAskill: Exactly.
Rob Wiblin: Yeah, OK.
Will MacAskill: But interestingly, the focus there is keeping coal in the ground, because the thought at the time is that the reason Britain in particular is so rich is that it’s able to burn all this coal. And they have this extraordinarily, extraordinarily low estimate of how much coal reserves there are. And so they think, “We need to keep coal in the ground so that future generations have energy.”
Rob Wiblin: Will have some energy.
Will MacAskill: Yeah. So they’ve got an incredibly bad understanding of economics, an incredibly bad understanding of coal reserves. So it’s a very good thing that they were not spending all of their time keeping coal in the ground, for those reasons at least.
Rob Wiblin: Oh, it’s even better than you’re saying, because it seems like it would’ve been actively detrimental, perhaps to us, if they’d done the thing that they think would be good. Because they just would’ve slowed down the Industrial Revolution and everything would have been set back, maybe.
Will MacAskill: Yeah, exactly. And you might worry about climate change, but one thing to say is the amount of fossil fuels being burned at the end of the 19th century is just basically tiny, compared to the amount we’re burning now. I think there’s still a very strong argument that the economic benefits were outweighing the climate change effects at that point in time.
Rob Wiblin: Sure.
Will MacAskill: So yeah, in fact I think you’re right. I think it would’ve been actively harmful. Now, the people in 100 years’ time might look at us and be like, “Oh whoa, they thought the AI was the thing to focus on, and pandemics….”
Will MacAskill: Yeah. The worry is that we’re sufficiently confused that our actions are doing harm. Again, possibly with AI as well, where we’re focused on the power of AI, and therefore perhaps that speeds up its development. So the thought here is just that there’s a very different perspective that one could take on how we should be taking action, and I call that “robust effective altruism.”
Rob Wiblin: I see.
Will MacAskill: So we’ve got longtermist effective altruism, which is focusing on the long-term future as a key moral priority of our time. Robust effective altruism is a little bit different — though it might be very overlapping — where you’re trying to do things that, even if they do not look optimal on a naive cost-benefit calculus, look pretty good from a wide variety of perspectives.
Rob Wiblin: You have clean energy as an example of this, right?
Will MacAskill: Yeah. Clean energy, I just think of as weirdly, robustly good. It makes sense. Most things are just, “Are there arguments on either side? OK, I think it is good, but maybe it could be bad.” With clean tech, in the book I describe it as a win-win-win-win-win. But I actually think I missed out on a win, so I think it’s six wins.
Rob Wiblin: I don’t want to talk about this for too long, so maybe just list them quickly.
Will MacAskill: Very briefly, very near term: particulates from fossil fuels kill about three and a half million people a year. It’s just really bad, and this is like this enormous non-internalised externality. And there’s a strong case for getting off fossil fuels as quickly as possible, just for those reasons.
Rob Wiblin: Yeah. I think economists think that speeding up the transition away from fossil fuels a lot would pay for itself just through the health gains. At least in rich countries, it would be well worth it.
Will MacAskill: Yeah, for sure. Even in the EU, we lose about a year of life expectancy just from the particulates from fossil fuels. OK, so that’s one. A second, of course, is climate change, which will be very familiar. A third is that clean technology, investment in innovation, speeds up technological progress. I think that’s a good thing for many reasons, including to avoid the chance of very long-term stagnation, which we might talk about later in this podcast. A fourth is that it reduces energy poverty in poor countries, so it’s also making poor countries richer.
Will MacAskill: Then we get to some somewhat more esoteric reasons, but a fifth is that it helps keep coal in the ground, and we might need it for future generations. That’s John Stuart Mill coming back again. But in particular, we’ve got a lot of fossil fuels remaining. Imagine there was some catastrophe, and humanity got sent back to agricultural-level technology and had to rebuild again. Well, we couldn’t rely on easily accessible oil, because we’ve used up almost all of it, but we would have, at current levels of consumption, about 300 years’ worth of coal. In rebuilding to current levels, though, we would use up all of those fossil fuels again. So it’s kind of like we have one life remaining.
Rob Wiblin: Like in Super Mario.
Will MacAskill: Like in Super Mario, exactly. And so if we were to have a second catastrophe, then we would have to rebuild civilisation without relying on fossil fuels. Which I think, again, we probably could do.
Rob Wiblin: Probably, yeah.
Will MacAskill: People in general are very innovative, there are very strong incentives for economic growth, but I think it would be harder. That’s the one thing that most gives me pause when I’m thinking about recovery after collapse. And so this is another reason. Again, I should be clear. I’m not talking at all about the magnitudes of any of these, I’m just talking about the sign.
Rob Wiblin: Yeah. And then what do you have on the other side?
Will MacAskill: OK, then the sixth one is about the distribution of values in the world. So think about the countries that are authoritarian, or even leaning dictatorial, where some really kind of toxic ideologies can persist. There’s a pretty strong correlation between those cultures or countries and countries where leadership can take power and literally fuel themselves — pay for themselves indefinitely by extracting natural resources and selling them. And they don’t have to be responsive to the public.
Will MacAskill: This idea of the resource curse is familiar in economics: at least in Africa, actually having a lot of natural resources, namely fossil fuels, seems to have been a bad thing, because it gave an incentive for dictators. Similarly, Russia is able to persist as this oligarchy because, as one economist describes it, it’s “a gas station with a military.” Whereas, if you’re relying economically on the innovation that’s being produced by your people, it’s much harder to be kind of authoritarian.
Rob Wiblin: Yeah. Well, I suppose you have a strong incentive to educate people and make sure that the population broadly is flourishing and the power is widely distributed, which tends to lead to a different, and hopefully a more humane, set of values. And then I guess the other thing with clean energy is that then you try to ask, what’s on the other side? What are the risks here? How could it go wrong? And it’s just kind of a tumbleweed, I guess.
Will MacAskill: Yeah, for sure. The strongest case would be if you thought that economic development in general is bad, which I don’t.
Rob Wiblin: Even if it’s driven by solar panels?
Will MacAskill: I know, exactly. Solar panels, enhanced geothermal. I’m just really struggling to see that as a really big risk to the world. But there is one line of argument you could make — which I think doesn’t ultimately cash out — which is: we’re going really fast, technologically. Our societal wisdom is not going fast enough. If technological growth was just in general going slower, there would be more time for moral reasoning and political development to kind of catch up. So actually, we just want a slower growth rate over the coming centuries. I think that’s not true, but that’s the one argument you could make against it.
Rob Wiblin: Yeah. Do you want to elaborate a bit more on robust effective altruism versus the alternative? What do we call that?
Will MacAskill: I was contrasting it with longtermist effective altruism. Although really, the two things are overlapping, because you might want to do robustly good actions because you think longtermism is true, but you’re just not sure what are the best ways of promoting long-term value. But you also might want to do robustly good actions because maybe longtermism isn’t true. So I do think they’re kind of distinct ways of viewing the world and what we should prioritise.
Rob Wiblin: OK, that was an example of robust effective altruism. Do you have any others that stand out?
Will MacAskill: Yeah, many. Another one I think is just reducing the risk of war. And I think “war is really bad” is something that would’ve been accessible for basically all of history. If you look at the Mohists, for example, who are kind of like very early effective altruists — or at least there are some notable similarities, in that they had a broadly consequentialist moral philosophy. They were an influential school in the period of the Hundred Schools of Thought, around 500 BC in ancient China. So, somewhat intellectually similar. And their view was, “War is the worst thing. We need to stop war.” And so they became very good at defensive warfare.
Rob Wiblin: Yeah, they got really good at building walls around cities, so that attackers wouldn’t even think it was worth their while to have a go?
Will MacAskill: Yeah, exactly. It’s amazing. They were a group of altruists, and they would wear rags because they didn’t want to spend money on luxuries. And they formed a paramilitary group in order to defend cities that were under siege.
Rob Wiblin: New org idea.
Will MacAskill: Yeah, I know. When we’re talking about concrete implementations… [laughs] I think it’s very accessible. And later, when we talk more about some of the risks we face — like risks from worst-case pandemics via artificial pathogens; risks from people just being really dumb when it comes to developing AI, going too fast, being in an arms race dynamic; risks from some single country trying to take over the world and implement a dictatorship; risks from unknown unknowns as well — all of these things look much worse, in my view, in either a hot or cold war scenario. And I think there’s a plausible explanation for that, which is that people start doing really dumb things.
Rob Wiblin: Or at least reckless things.
Will MacAskill: Yeah, reckless things, exactly, in a war scenario. It’s also very strongly negative sum. So if you think people’s goals in general are normally good, then the fact that we’re in this world of trade — where in general, countries are cooperating, at least at the moment — that’s way better than this aggressive, competitive scenario. So I think even for things I haven’t really figured out, reducing the risk of war looks pretty good.
Rob Wiblin: Yeah. Yeah, a really nice example of that, which has come up on the show once or twice before, is the Manhattan Project. Obviously, the US was fighting the Nazis, fighting the Japanese. Some of the physicists thought about this a bunch, they did some modelling, and they were like, “We’re pretty sure that a nuclear blast won’t ignite the atmosphere, but we’re not totally sure about it.” And they went ahead and tested the nuclear bomb anyway. If they hadn’t been at war, they might have pressed pause on that and thought a little bit longer about whether they wanted to take a 1-in-100 chance of killing everyone.
Will MacAskill: Exactly. Yeah, exactly. And perhaps even from their perspective, they were right to do so.
Rob Wiblin: Totally, yeah.
Will MacAskill: The risk was attempted world domination by the Nazis. That’s not the sort of thing that I —
Rob Wiblin: That one does in peacetime.
Will MacAskill: Exactly. It’s just far more reckless. And then a final set of things is just trying to generally build a community and movement of people who are morally motivated, care a lot about the truth and just want to have true beliefs, are cooperative and willing to respond to new evidence as it comes in. And so I think the growth of effective altruism as an idea and a community, as long as we can preserve those virtues, also looks good and robustly good across a wide array of scenarios. Because you know, in 20 years’ time, maybe Rob Wiblin, Jr. comes up with some amazing kind of insight.
Rob Wiblin: Much better than the trash that we could think of.
Will MacAskill: Yeah. We’ve really got to just update our views. Well, if you’ve built a community that is capable of updating in that way, changing its mind in light of good arguments or evidence, then great. We’ve built these resources, we’ve given those resources to people who are morally motivated and smarter and better informed than us, and then they take action on the right things.
Rob Wiblin: Yeah. OK, so you’ve written a book about effective altruism, as well as now a book about longtermism. What is the relationship between these ideas, if anything? Does effective altruism imply longtermism, or longtermism imply effective altruism? Or are these just kind of logically independent ideas that you can accept and reject as you like?
Will MacAskill: For longtermism, I think they’re separate ideas. So obviously, many people in effective altruism are not longtermists; they focus on near-term issues. There are a number of people who we could consider longtermists, who are not part of the effective altruism community. The Long Now Foundation is like that.
Rob Wiblin: The Long Now Foundation, I guess they don’t really identify as part of effective altruism. But wouldn’t we say maybe the reason why they’re doing it is actually effective altruist-flavoured reasons, whether they see it that way or not?
Will MacAskill: Well, I think whether they see it that way or not is kind of crucial. It might be that they think the work they’re doing is just the best thing they know of as a way of doing good. In which case, they’re effective altruists, even if they don’t know it. It might be just that they thought this stuff was cool and they want to promote discussion of the long term, but they’re not particularly making a claim that this is the best way of doing good.
Rob Wiblin: OK, I see. So someone who was interested in improving the long term, but wasn’t claiming that this was the best way for them to do good, or among the very best ways for them to do good, would count as someone who’s embracing longtermism as a practice, without being an EA?
Will MacAskill: Exactly, yeah.
Preventing extinction vs. improving the quality of the future [00:59:29]
Rob Wiblin: OK, so that’s been an introduction to the reasons why people might take longtermism seriously as a theory, as well as some of the potential weaknesses with it. We’ll talk more about the weaknesses later on. Let’s now push on to fleshing out these two approaches that you’ve mentioned a little bit more, these two different schools of thought within longtermism about how one can potentially have impacts that are very persistent.
Rob Wiblin: I suppose the setup here is that it’s kind of counterintuitive, the idea that you could have impacts over millions of years. Many people have the intuition that anything we do now might help someone today, but in the long term it’s all just going to wash out. So you need some special theory for why any actions you’re going to take now won’t just wash out as society returns to some long-run trend that it would have gotten to regardless of what you did.
Rob Wiblin: Of course the first approach that stands out is preventing extinction, because there’s just such a clear case that if all the humans disappeared, then we’re not going to be able to return to some previous trend. I mean, maybe there’ll be some future species on Earth, kind of Planet of the Apes style, but most plausibly there just isn’t going to be a species like humans on Earth again.
Will MacAskill: Yeah, exactly.
Rob Wiblin: And then the second one — which is perhaps a more emerging, more modern school of thought within longtermism — is this idea of kind of changing the trajectory that we’re on, without changing whether we go extinct or not. So humans are going to be around, but maybe we could improve what people will decide to do in the future. So this is more about improving the quality of society or the quality of lives that will exist in the future.
Will MacAskill: That’s right, with one caveat about the framing of whether or not humans are around. I think most people, when they think about the very long-term future, suspect that human beings — in the sense of like flesh-and-blood human beings — might be only a very small proportion of future beings. And instead it’s mostly digital beings — whether they are our successors in some way and we feel good about those beings, or whether there’s been some hostile event where digital beings have kind of aggressively taken over control of the future.
Rob Wiblin: I see. So I guess we could talk about humanity and its descendants.
Will MacAskill: Humanity and its descendants, or just civilisation.
Rob Wiblin: OK, civilisation. There’s humanity broadly construed, which includes uploaded people and any other descendants that we might produce. So on the first one, the extinction risk, we’ve had a lot of episodes on that over the years, including a couple that stand out, like episode 72 with Toby Ord, where he talks about his book, The Precipice, which is pretty focused on these extinction scenarios. There’s also, more recently, episode 112 with Carl Shulman, where he makes the common-sense case for focusing on the risk of extinction. So we’re not going to go into that in quite as much detail, because it’s covered elsewhere.
Rob Wiblin: But one thing I wanted to bring up with you about this is that recently, at least like this year, I’ve seen a lot of people make the case that in their mind, the risk of humans going extinct during their lifetime, or at least over the next 100 years, is so high, like 10% or more, that one doesn’t really need to give a damn about future generations in order to be really worried about that.
Rob Wiblin: If you think there’s a 10% chance or a 20% or 30% chance that civilisation is going to be wiped out during your lifetime, then obviously that is a massive emergency. And it’s an emergency for you and people you know, not just because of some philosophical argument about future generations. And those folks are maybe even worried that focusing on the longtermist philosophy is kind of a distraction. Maybe it undersells the case for worrying about extinction risk if the risk is so large, because it gets people thinking about whether they care about future generations — whereas really what they should be thinking is, “I’m at great risk and everything that I care about is at great risk right now.”
Will MacAskill: Yeah.
Rob Wiblin: So you might actually get more buy-in if you just point to the empirical case for why we’re on really shaky ground right now. What do you make of this idea?
Will MacAskill: Yeah, there’s lots to say on that. One important thing is to distinguish between whether something is a good thing to do and whether it’s the best thing to do. The core idea of effective altruism is that we want to focus on the very best thing. And I entirely buy that even if you’re just concerned about what happens over the next century, reducing the risks of extinction and other sorts of catastrophes, like reducing the risk of misaligned AI takeover, are extremely good things to do. And even just concerned with the next century, society should be investing a lot more in making sure those things don’t happen.
Will MacAskill: Effective altruism is about doing the best we can. And certainly on its face, it would seem extremely suspicious and surprising if the best thing we could do for the very, very long term were also the very best thing we can do for the very short term. And then secondly — at least given my own estimates of the risks of human extinction, of misaligned artificial intelligence, and of other sorts of catastrophes — I don’t think they would be justified as the very best thing to do if we’re just looking at the next century, say. Though I would think so if we were thinking on much longer timescales. Exactly what timescale you need before your priorities really start to shift, whether you’re looking a million years out or 10 billion years out, maybe doesn’t make much of a difference. I’m completely on board with that.
Rob Wiblin: Yeah.
Will MacAskill: But I think if we’re really ruthlessly focused on doing the best we can, then the fact that almost all value is in the future is something one would expect to really make a difference. I also just think in practice it does really make a difference. A big underlying thing is just: what do we see as the size of extinction risk within our lifetimes? The people you’re referencing think that by far the most likely way they’re going to die is at the hands of a misaligned artificial intelligence. I think that’s probably a risk we should take really seriously, but I think I’m more likely to die of cancer.
Rob Wiblin: But by a decent margin?
Will MacAskill: Yeah. I mean, I think I’ve got about a 20% chance of dying of cancer.
Rob Wiblin: OK, yeah.
Will MacAskill: Whereas in the book, I put it at something like a 3% chance of misaligned AI takeover — and conditional on misaligned takeover, I think there’s like a 50/50 chance that it involves literally killing human beings, rather than just disempowering them.
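[Note: to make the comparison concrete, here is the rough arithmetic implied by the figures Will gives in this exchange. These are his illustrative estimates from the conversation, not precise claims:]

```latex
P(\text{die from misaligned AI}) \approx P(\text{takeover}) \times P(\text{killed} \mid \text{takeover}) \approx 0.03 \times 0.5 = 0.015
```

[That is, roughly a 1.5% chance, versus the roughly 20% chance of dying of cancer he mentions: about an order of magnitude apart, which is the “decent margin” Rob asked about above.]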
Rob Wiblin: Yeah. I suppose the other argument you might make is that if longtermism is true as a philosophical proposition, then it’s important for people to know that. Even if it’s not relevant to this specific decision, because it might have all sorts of other implications. We can’t just ignore it.
Will MacAskill: For sure. This again relates to the question of, “Oh, maybe we’re badly mistaken about certain things.” I think there’s an important role for people to go out and say, “Catastrophic risk from pandemics is way higher than you probably think.” Or, “The risk from AI is way higher than you think.” And other people have gone and done that. I think that’s just really excellent.
Will MacAskill: But I think it’s also important to just try and convey, fundamentally, what’s going on. So that perhaps we get information that biorisk is much higher than we thought, or AI risk is higher or lower than we thought, or there’s some new cause area that’s even more important, or perhaps we should do very different things. If you’re concerned about very worst-case pandemics, the sort of thing that could really kill us all, perhaps that calls for a very different set of actions than pandemics that could kill 90% of people.
Will MacAskill: I think in general, people really understanding what the goals are, what we’re aiming towards, is just going to produce a better outcome, rather than telling people, “Adopt this particular policy,” when they don’t really understand why that particular path of action was chosen and don’t understand the ultimate goals. Because then, if the environment changes or our information changes, people end up doing the wrong things.
Is humanity likely to converge on doing the same thing regardless? [01:06:58]
Rob Wiblin: OK, let’s move on from extinction and talk about the second approach, which is trying to make sure that humanity does something really good, conditional on us surviving. As I said, you might think that if humanity survives for a really long time, then we’re going to just converge on the same values and the same activities no matter what, because we’ll just think about it a lot. And no matter your intellectual starting point, ultimately the right arguments are going to win out. What do you think of that idea?
Will MacAskill: I think it’s something we should be open to at least. But one way in which I differ from certain other people who work on longtermism is that I put a lot less weight on that. In that sense, I’m a lot less optimistic. I think it might be very hard for society to figure out what the correct thing to do is. It might require a very long or at least reasonably long period of no catastrophes happening — no race dynamics of people fighting over control of the future, a very cooperative approach. And in particular, people who just really care about getting the right moral views — and once they’re in positions of power, are then thinking, “OK, how do we figure out what’s correct? Understanding that probably I’m very badly wrong.” Rather than getting into a position of power and saying, “OK, great. Now I can implement my moral views.”
Rob Wiblin: “The stuff that I care about.” It’s not the historical norm.
Will MacAskill: Exactly. How many people have gotten to a position of power and then immediately hired lots of advisors, including moral philosophers and other people, to try and change their ideology? My guess is it’s zero. I certainly don’t know of an example. Whereas I know of lots of examples of people getting power and then immediately wanting to just lock in the ideology.
Rob Wiblin: Yeah. In building this case that what people decide to do in the future could be contingent on actions that we take now, I guess that the main argument that you bring is that it seems like the values that we hold today, and the things that we think today, are highly contingent on actions that people took in the past. And that’s maybe the best kind of evidence that can be brought to bear on this question. What are some examples of that sort of contingency that we can see in views that we hold and practices that we engage in today?
Will MacAskill: There are some that I think are pretty clearly contingent. Attitudes to diet, attitudes to animals are things that vary a lot from country to country. Most of the world’s vegetarians live in India. That goes back thousands of years to Vedic teachings. It’s an interesting question: imagine if the Industrial Revolution had happened in India rather than in Britain. Would we be on this podcast talking about, “Whoa, imagine a possible world in which animals were just incarcerated and raised in the billions to provide food for people”? And you were saying, “Wow, no. That’s just way too crazy. That’s just not possible.” It seems like a pretty contingent event that that’s the way our moral beliefs developed.
Rob Wiblin: Yeah. So that’s an interesting case, where we don’t know the identities of the people, potentially many different people, who contributed to the subcontinent — India and other nearby areas — taking this philosophical path where they were much more concerned about the wellbeing of animals than was the case in Europe, or indeed most other places. But we know that there must have been some people who made a difference, because evidently, it didn’t happen everywhere. It’s not something that everyone converges on.
Will MacAskill: Yeah.
Rob Wiblin: So it seems like almost necessarily there has to have been decisions that were made by some people there, and not elsewhere, that caused this path to be taken.
Will MacAskill: Yeah. Absolutely. Then when we look at other things too. I think the rise of monotheistic religions, certainly the rise of the Abrahamic religions, seems pretty contingent. There’s not really a strong argument, I think, one can give about why monotheism should have become so popular rather than polytheism.
Rob Wiblin: Yeah. I suppose people do sometimes make arguments about how monotheism is either better for cooperation or more persuasive and appealing to people somehow. But the effect can’t be that strong, because it hasn’t taken over everywhere. You think of Japan, China, the Indian subcontinent, the Americas — I don’t think any of them adopted monotheism, at least not until very recently, when Europeans basically forced them to.
Will MacAskill: For sure. And one of the interesting things, that I think I hadn’t fully appreciated before I started to get to grips with the historical literature more, is that things you might think of as convergence are much more like a single culture getting very powerful and then that culture getting exported across the world.
Will MacAskill: So another one is monogamy versus polygamy, where monogamous societies are the exception rather than the norm across history. But the European, Western cultures promoted monogamy. And it seems like there was a lot of direct exporting of that via colonialism, and in some cases imitation as well — where countries see very economically and militarily successful cultures and just start trying to mimic them, including in all sorts of ways that maybe aren’t very important. I mean, there are some arguments about why monogamous cultures are more economically successful, but that’s certainly non-obvious, certainly to the people adopting them.
Will MacAskill: The example that I focus on most in the book is the abolition of slavery. I go deepest into this because, firstly, it’s just the most important moral change that I know of — certainly among the most important moral changes in all history. And secondly, I think the case for it being, in some important way, contingent — that is, it could have gone either way, such that we could have current levels of technology and very widespread slavery — is much stronger than one might think.
Will MacAskill: We certainly shouldn’t be very confident that current levels of technological development would lead to a society that had banned slavery. Maybe one thinks it’s 50/50. Maybe actually you think it’s more likely than not that we wouldn’t have. And we talked about this more in the last podcast. I go deep into it in the book. One thing I should say is I’m not some philosopher imperialistically going into history and then making all sorts of pontifications.
Rob Wiblin: This was the view among people who’ve studied it?
Will MacAskill: Yeah. I couldn’t say definitively what’s the median view among academic historians, but certainly the idea that the abolition of slavery was economically determined is very, very out of fashion among historians now.
Rob Wiblin: I see.
Will MacAskill: The general view is that it was a cultural change primarily. And then there’s a question of, why did that cultural change happen? Was it actually just really quite a contingent particular thing? There’s some real evidence for this. The fact that you really don’t see abolitionist campaigns occurring outside of Britain. Abolitionist sentiment, you don’t really see outside of Britain and France, and the United States as well.
Will MacAskill: You look at the Netherlands, which in some sense was the first modern economy, and they had these petition campaigns that got almost no signatures. There was almost no abolitionist sentiment, almost no movement there. The Industrial Revolution could easily have happened in the Netherlands. It could have resulted in a very different kind of moral landscape.
Will MacAskill: So that’s one view, which is that no, it’s really quite particular. There’s another view that there was some broader trend towards liberalism, democracy, and an ideology in favour of markets, and abolition was kind of part of that.
Rob Wiblin: Even if it’s not necessarily entailed, it’s very adjacent conceptually?
Will MacAskill: Exactly.
Rob Wiblin: So it was likely that it would take off at some point?
Will MacAskill: Exactly. And so then there’s two questions. One is just where on that spectrum the truth lies. And then secondly there’s a question of: did we need this movement towards liberalism and egalitarianism in order to get to current levels of technology? And my honest guess there is also no. My guess is that it was helpful, but there are many things that are helpful. And if the Western Protestant culture that then led to liberal culture had just never happened, I think China and other countries would’ve gotten to current levels of technological development. Maybe it would’ve taken a bit longer, but probably not radically longer. And their culture might just be very different in some important ways.
Rob Wiblin: OK, so we’ve got all of these examples where it seems like what we believe today, what kind of culture is dominant globally, seems super contingent on events in the past that easily could have gone some other way. The fact that that is the case, and that there doesn’t seem to be overwhelming pressure to converge on doing the right thing, it’s quite disturbing on some level, I feel. Because it implies, firstly, that we should be super suspicious of the stuff that we think now.
Will MacAskill: For sure.
Rob Wiblin: If someone had fallen off a horse at the right time in a different century, then maybe we’d just have completely different views. It makes everything seem very arbitrary. But it also makes it seem alarming, the possibility that we could kind of lock in ideas now, because the ideas we have now are, in that case, kind of arbitrary. Different than what people thought in the past. If people in the past had been able to lock us into their ideas, we would think that was awful.
Will MacAskill: Yeah.
Rob Wiblin: It makes the path to the future feel a little bit more like a shitshow, if you buy this idea.
Will MacAskill: Yeah. An analogy I often think of is to just imagine we’re doing all the same things we’re trying to do now, facing the same risks, but we’re in second-century Rome. Ten percent of the population own slaves. The elites attend the Colosseum to watch people get disembowelled or fight each other to death, and they just love it intensely. Extremely patriarchal culture; the idea of women’s rights is just ludicrous.
Rob Wiblin: They’re basically completely out of the picture politically.
Will MacAskill: Exactly. Yeah. Now imagine some incredibly powerful technology is being developed. I would be really worried that Nero or Caligula would develop artificial general intelligence.
Rob Wiblin: Yeah.
Will MacAskill: I would really worry that even if AI is navigated safely — in the sense that the AI does not destroy us all and it’s in human control — I would still be worried about what happens to that future. The case of slavery is a case in point. Again, at some point in the future, digital beings will probably — well, it’s hard to know, but probably — have sentience, and will be kind of beings of moral status. At the moment, computer models like AI models are just property.
Rob Wiblin: Right.
Will MacAskill: So in some sense, the default trajectory is towards them continuing to be property, indefinitely into the future. If you then also have an ideology where slavery is fine, well, you could imagine a future where the vast, vast majority of beings are slaves, and not necessarily happy ones. The Romans didn’t have cosmopolitanism as an ideal, didn’t have moral impartiality as an ideal. That would be pretty scary. And one can debate how scary that is compared to how scary takeover by AI agents is, but I think both are pretty scary.
Will MacAskill: And then how much better are we than the Romans? I don’t know. If you have the appropriate level of humility, I think we’re better, for sure. But still very, very far away from the best moral views. And that means a world in which potentially even well-meaning people — that we would regard as good people now — if they were to lock in their unreflective values, it would be very bad indeed.
Rob Wiblin: Very, very far short of the best world that would be possible. Yeah.
Will MacAskill: Yeah. I think it would still be a good future, but an enormous loss of potential.
Rob Wiblin: Yeah. Is there anything you can say to cheer us up? It sounds so grim. Or it sounds like we’re in a very bad situation. I don’t know.
Will MacAskill: I’ll give the strongest arguments against my view.
Rob Wiblin: OK.
Will MacAskill: So there’s one argument that I don’t often hear, which is perhaps the moral truth is just like a strong attractor. Just in the same way I expect in 10,000 years to have beings —
Rob Wiblin: They’ll have great chemistry.
Will MacAskill: — to have really great chemistry and really great physics. There’s an argument you could make that they’ll have really great ethics too. And one could argue that if there’s no such thing as moral truth, then things don’t matter anyway. There is obviously an intermediate view, where you’ve got kind of a subjectivist view of metaethics, where there’s no ultimate fact of the matter, but —
Rob Wiblin: You still want things to go one way or the other.
Will MacAskill: Yeah, exactly. A second thought though, is just that a future world where we are much more advanced technologically, in particular via AI, just has a lot of advantages compared to the world today. One thing is just that we know everything, all the empirical facts. So discrimination against other beings on the basis of believing them to be intellectually inferior or something will not happen, unless people are deliberately deceiving themselves in some way, because we’ve got ideal information there.
Rob Wiblin: I see. So some things that look like they’re barbaric and were justified by barbaric philosophy might actually have been because of bad empirical beliefs. So at least they would’ve become untenable in the long run if people had had a full understanding of the world.
Will MacAskill: That’s right. Yeah. And some people say that that’s like most moral disagreement. I strongly disagree with that view. I think, in general, it’s like the moral ideals are the horse and the empirical beliefs are the cart that’s being dragged along — where people come up with some empirical reasons to justify their moral views, rather than the other way around.
Will MacAskill: But in the future, we’ve got people who have figured out everything empirically. And then if we can get something that’s broadly a kind of egalitarian, liberal world — where lots of people have substantial amounts of power — well, at least some proportion of them will be morally motivated and try to figure out what’s best.
Will MacAskill: And in a world with future tech and AI, there’ll be lots of gains from trade. So perhaps there are the people who are kind of selfish and maybe they have some bad ideas. But others are the altruists, and they’re like, “No, I really care about what happens.” Perhaps they can just make compromises, such that one group wants great monuments built in their honour and will produce all of these workers to do so. Then another group says, “OK, but can you make the workers kind of happy?” And we’ve got such technological abundance that it’s very cheap for them to do that.
Rob Wiblin: I see.
Will MacAskill: So perhaps a lot of these trades are very easy. You can get enormous gains from trade, potentially, because you can figure out all possible trades that could happen and then choose whatever ones are getting like 99% of what both parties want. That’s kind of optimistic.
Rob Wiblin: Yeah. I guess another way to phrase what you’re saying there is that there could be in the future a lot of indifference to the wellbeing and flourishing of others, but there probably will also be a decent bunch of sincere altruism and concern for other people. And there won’t be an offsetting amount of sadism, where people are desperate to harm other people as much as possible. And so if you combine kind of indifference with nice altruism, then on average, that actually works out to an OK situation.
Will MacAskill: Exactly. And in general, when we’re thinking about the value of the long-term future, that’s the crucial asymmetry — where I think there’s an enormous number of ways the future could go, such that we lose almost all of our potential and those things are close to zero. They might be like a little bit good or a little bit bad.
Will MacAskill: There are some ways in which the world could go that are very, very good. Where we’ve got people who are just really morally concerned and they’re trying to do the best thing possible. And it’s very hard to see ways in which we end up in worlds where things go as badly as possible. Like, how many people in history are effective anti-altruists?
Rob Wiblin: Yeah, yeah. Effective sadism.
Will MacAskill: Yeah. Exactly. Effective sadists.
Rob Wiblin: How would they get along with one another? I guess they need to limit their sadism to people outside the effective sadism movement. Otherwise, it would be very difficult to coordinate.
Will MacAskill: Yeah, I guess so. I mean, yeah. They just love suffering as an ideal and want to produce as much suffering as possible. Things have gone really badly wrong if those are the people that control the future.
Will MacAskill: So I think that’s the way we should think about it. And this is true for futures in general. Then also within a future, something like this more liberal, egalitarian future, where, as you say, there are many people who are like, “Hey, I want castles for myself.” And then there are other people who are like, “Look, we just want whatever is morally best, a morally just world.” And then they trade and compromise, such that if something like wellbeing is the thing that matters, they trade away suffering and trade towards the promotion of happiness.
Rob Wiblin: To date, it seems like there’s been a much bigger focus on extinction risk than there has been on improving moral values. I think one reason for that is that the idea of trying to convince other people to have better values — which in practice almost always cashes out to the values that the speaker holds, or trying to basically persuade people of your perspective on things — feels more hostile than trying to prevent a disaster that in principle everyone would be against. Should we worry that this kind of moral improvement sounds good from one point of view, but isn’t as cooperative as other activities, in a way?
Will MacAskill: Yeah. I think there’s two things I want to say. One is that I count AI safety and AI alignment as in the values bucket, or the trajectory change bucket — where even in the scenario where a misaligned AI takes over and kills everyone, civilisation continues. Precisely the risk, the threat, is that the AI has its own goals and builds a civilisation that’s maximising for paperclips or something very alien compared to us. And that’s still of moral value. It could be positive, could be negative. And it really matters whether that AI builds happy paperclips or unhappy paperclips or whatever else. So I think that thinking of AI as an extinction risk in the sense of “that happens, and then there’s nothing afterwards” is just, from a moral perspective, the wrong kind of framing.
Will MacAskill: Secondly, onto your actual question of promoting values and moral advocacy. Is that hostile? I think it really depends on the methods one uses. So if it’s that I go around and I just brainwash people or trick people into having my moral views, which has happened a lot throughout time — the dictators take power and then brainwash their population as best they can with propaganda — that seems very hostile.
Will MacAskill: If, in contrast, it’s like what the abolitionists did, which is you make arguments, you write books, perhaps you demonstrate, you make more salient the suffering that is happening, then you try and change both public and political elite opinion that way. I don’t really see how that is hostile, in the same way as trying to get people to have better empirical beliefs is also not hostile.
Rob Wiblin: Yeah. I guess the thing that might be missed in the hostile framing is that people have specific ideas about what is good, but at least most people also have this broader goal of forming more accurate ideas, or wanting to engage in a continuous reflective process where their views will change — and should change if they encounter new arguments that they find persuasive, or new information that shifts their ideas. So they’re not just like machines that want to keep exactly the same values and exactly the same goals that they happen to hold at this instant, no matter what else comes up.
Will MacAskill: Exactly. And then if they’re not… I don’t know, take Stalin: he has this ideology where he’s like, “I will never change my mind.” That’s not really the sort of agent that I think I should be cooperating with.
Rob Wiblin: I see.
Will MacAskill: It’s more like a terrorist.
Rob Wiblin: Because there’s no reciprocity there?
Will MacAskill: Yeah. I think that was more just a bold intuition. The lack of reciprocity, I think, seems correct. There’s not a relevant sense of like, “If we were in each other’s shoes, would Stalin be looking out for me?” I don’t think so.
Lock-in scenario vs. long reflection [01:27:11]
Rob Wiblin: Yeah. It makes sense. So within the framework of wanting to improve the trajectory that we’re on, wanting to improve values, there’s maybe two different stories that people sometimes tell. One is this story of us being on a very long-term journey, where we want to engage in a very long-term reflection. In this picture, humans are around for a very long time, and we want to have really good philosophy departments doing very good moral philosophy, as well as collecting relevant empirical information.
Rob Wiblin: The other is this kind of emergency situation, where we might be about to create artificial intelligence quite soon. That creates the potential for lock-in of whatever dumb stuff we happen to believe today. It’s not about just persuading people that we need to engage in some extended reflection at some future time — although possibly you could take that approach. We might also just be like, “We just have to fix the most barbaric, dumb ideas that people hold now so we don’t accidentally lock them in in like the 2040s.” Is that how you picture it?
Will MacAskill: Yeah. Both of those things just seem correct to me. I mean, in the case of AI, there’s obviously a risk of some misaligned AI takeover, risks from lock-in. And the sort of thing we want to do is we want to have some process that’s just careful — a carefully navigated trajectory from where we are today to then having very well-informed and reflective moral beliefs.
Will MacAskill: It’s possible that happens very quickly in calendar time. Again, we mentioned earlier that AI could very rapidly speed up technological progress. Potentially it could very rapidly speed up ethical reflection as well. So if the arguments that AI leads to this kind of tech or intelligence boom are right, maybe in calendar time this all happens over a decade or something. But at least in terms of how much intellectual progress is being made, there could be a huge difference between this kind of immediate lock-in and this broader reflective process.
Rob Wiblin: Yeah. It seems in the “lock-in risk soon” scenario, we do not have time to persuade everyone in the world of the importance of moral philosophy. I mean, we don’t even have time for you and I to spend years trying to improve our moral views and root out all of the errors that we might be making right now, let alone the entire world.
Rob Wiblin: And most people are focused on feeding themselves and getting things done. They don’t have time for constant philosophy. So it seems like really the only path is — if you do have AIs that are substantially smarter than humans, able to do research faster than us — we need to find some way to explain what we think of as the project of philosophy, and find some way to hand it over to them so they can do a better job of it than we’ve ever been able to. Basically have them take over philosophy and fix it for us before they then go and change everything.
Will MacAskill: Yeah. There’s this idea of differential technological progress, which again comes from Nick Bostrom: that there are certain technologies you want before other technologies. And having AI systems that can model your reasoning very well, but aren’t agents in the relevant sense — you just put in a question, they give out an answer, but they’re not trying to make big changes in the world — ensuring that that kind of AI comes first. Which is more kind of like, if you see GPT-3 —
Rob Wiblin: Or Google Maps.
Will MacAskill: Yeah. Or Google Maps. Exactly. That’s kind of like: you put in text, you get out text. Compared to, say, AIs that play StarCraft or something, which is more like you give it an environment and then it’s taking actions to maximise its reward function. You really want those AIs that are non-agentic to come first.
Rob Wiblin: Just oracles that answer questions within the constraints.
Will MacAskill: Exactly, yeah. Able to give you answers. You really want them to come first. One thing I think I should say about the values framing is that the natural thing to think is like, “OK, cool. I care about the values of the future. I’m going to go and take to the streets and convince people.” And that’s definitely one thing that one can do. But it’s not the only implication, by any means, of thinking about the importance of values.
Will MacAskill: Take the example of nuclear war. So normally the standard longtermist analysis of nuclear war is, what’s the probability of nuclear winter? Given a nuclear winter, how many deaths? What’s the chance of this leading to extinction? But here’s another thing: nuclear war, even putting nuclear winter to the side, could result in just most of the liberal democracies in the world being wiped off the map.
Will MacAskill: Then what’s the chance that we get back to this current level of technology? What’s the chance that liberal democracy is the predominant way of organising society? I’m not sure. Again, it’s one of these tough questions. It’s on the order of 50/50, is my guess. And then would you prefer artificial general intelligence to be developed in a liberal democracy, or would you prefer it to be developed in some dictatorship or authoritarian state?
Will MacAskill: Again, I think it’s much more likely to go well if it’s the former rather than the latter. And it’s quite plausible, actually, when we look to the very long-term future, that that’s the biggest deal when it comes to a nuclear war: the impact of nuclear war and the distribution of values for the civilisation that returns from that, rather than on the chance of extinction.
Will MacAskill: So I’m just using that as an example of how the potential sensitivity of the long-term future to the values that are predominant today starts to impact lots of things that we might care about.
Is the future good in expectation? [01:32:29]
Rob Wiblin: OK, zooming out for a second: a potentially crucial question that might shift someone between wanting to work on extinction versus wanting to work on this “improving values” cluster is whether they think the future is likely to be good — like right now, conditional on us surviving. If you think that on our current track, if we don’t go extinct, we’re likely to do more harm than good — likely to produce more suffering, or just more bad things than offsetting positives — then in fact you might be indifferent to extinction, from a longtermist perspective anyway. You’re definitely going to want to focus on the values improvement. What are the key arguments either way on whether we should expect there to be more good stuff or bad stuff in the future, given where we stand today?
Will MacAskill: I think there are two categories of argument that one can make on this. I should say both are pretty weak as far as evidence goes. One is an empirical argument: you can say, “OK, throughout history, has the world been good or bad? Then secondly, what’s the trendline looking like? How does that change over time?” The second category is more theoretical arguments. So I can take these in turn?
Rob Wiblin: Yeah. One thing I really like in the book, that I hadn’t seen elsewhere, is that you try to do this sort of accounting of all the goods and bads that have existed in the universe up to this point. Maybe you can walk us through that.
Will MacAskill: Sure. Yeah, so I do this deep dive, and it’s pretty wild to answer this question of just, “Has life so far been good or bad?” The thing that normally happens is that this is evaluated from a pure utilitarian standpoint, whereas I think we should look at this from a variety of moral perspectives. I think on that question — of have things to date been good or bad? — I think the answer is just that it’s unclear.
Rob Wiblin: Such a cop out, Will. Come on, pick a side.
Will MacAskill: I know. Actually, all things considered, I think I would probably say it’s good, actually.
Rob Wiblin: OK.
Will MacAskill: That’s slightly controversial in these parts. That comes a fair bit from moral uncertainty. But here’s the thing: looking backwards, let’s say it’s unclear where you draw the line on which beings are conscious or not. Let’s say you allow invertebrates in there. Almost all beings are roundworms, or nematodes. You’ve got to ask the question — in fact, most of the question is just — “What’s a nematode life like?” If I say that’s unclear, I think it doesn’t sound so much like a cop out anymore, but more just like this is really hard to know.
Will MacAskill: Even if you exclude invertebrates and you just look at vertebrates, well, 80% of all life years that have ever been lived — and again, this is extremely approximate — are fish. 20% are amphibians and reptiles. That adds up to 100%, rounding up. I think humans are less than one thousandth of a percentage point. So now the main question is, “What is life like as a fish?” If you could be a randomly selected fish from all history, is that net good, or would you prefer to have never been born? Fish have terrible suffering. Obviously, their deaths are very bad. The deaths are short though, in general. Most of the time, swimming around as a fish seems about neutral. So the main lesson is I think it’s pretty close to the zero level, compared to how good an experience could be or how bad it could be on net.
Will MacAskill: Then when we broaden out from utilitarianism, at least total utilitarianism, and consider other moral perspectives, we get considerations that push in both directions. A total utilitarian says that adding a being to the world, if that being has more happiness than suffering, that’s good. If it has more suffering than happiness, that’s bad. But many views of population ethics say that actually, in order for adding a being to the world to be good, it doesn’t just have to be more positive than negative in terms of its experiences — it has to be sufficiently more positive than negative, like actually quite a bit better than zero.
Will MacAskill: And views have that implication in order to escape what Derek Parfit called the “Repugnant Conclusion,” where you conclude that a very, very large number of beings with lives that are just a little bit positive add up collectively to more value than 10 trillion beings living lives of utter bliss, just because lots of little drops of water add up to more than a tank of water. But if you have this view that a life has to be sufficiently good for it to count as positive, and below that it counts as negative, then you get the conclusion that life to date has been strongly negative.
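[Note: a toy calculation may help show how the two views diverge. The numbers here are purely illustrative, not from the book:]

```latex
\text{Total view: } V = \sum_i w_i \qquad\qquad \text{Critical-level view: } V = \sum_i (w_i - c), \quad c > 0
```

[Under the total view, any population of more than 10^17 lives at wellbeing 0.01 outweighs 10 trillion (10^13) lives at wellbeing 100, since its total then exceeds 10^15. That is the Repugnant Conclusion. Under a critical-level view with, say, c = 1, each life at wellbeing 0.01 scores 0.01 − 1 < 0, so the same vast population of barely-positive lives sums to a strongly negative total, which is how such views can imply that life to date has been bad.]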
Rob Wiblin: Just because you think relatively few people or relatively few beings are going to be above whatever this critical threshold is, where it’s good for someone to exist.
Will MacAskill: Exactly. Yeah, so if you think that what Parfit calls the Repugnant Conclusion world is a bad world — you know, this enormous population of beings with lives just barely worth living; and, at best, that’s what history looks like over the 600 million years in which we’ve had creatures with brains — you would conclude pretty strongly that the history of life has been bad. On the other hand, you might want to count goods that are not just about wellbeing. I’m sympathetic to views that don’t just count wellbeing — actually, these move me a fair bit, at least intuitively: just the fact of complexity and social organisation and having conscious experience at all.
Rob Wiblin: Things are valuable in themselves.
Will MacAskill: Yeah, exactly. At least intuitively, I find that a pretty plausible moral view.
Rob Wiblin: I feel like if you’re someone who values justice and disvalues injustice, then things look almost even worse than they do to the utilitarian. Like, has there been justice in the world? OK, you disagree. Interesting.
Will MacAskill: Yeah, so this is getting into what philosophers call “objective goods” — things other than wellbeing. I do think there’s an asymmetry there, where it’s plausible that, for many things, there are more objective goods than objective bads. So people talk about knowledge being good, where having a justified true belief is a good thing. I mean, I find it hard to get an intuition about that, but some people do. Whereas having a false belief being a bad thing, I find it hard to find that intuitive. Or the creation of complexity or consciousness being good in and of itself — what’s the inverse of that? I don’t think there really is one.
Rob Wiblin: I see.
Will MacAskill: So something that didn’t make it into the book is this incredibly startling passage from Robert Nozick, who talks about injustices like the Holocaust and the slave trades and so on. His view is that they were just so bad that they outweigh… He actually puts it in even more poetic language. He says that, “Humanity has lost its claim to continuation” — anything that you thought was distinctively valuable about humanity has been struck from the record because of these great injustices.
Will MacAskill: All this is to say, that once you get into these kind of objective goods and bads, how on Earth do you do this accounting? You’re just throwing intuitions around. And that’s maybe part of the reason why I’m sceptical that such goods and bads actually exist, but I want to be ecumenical in the analysis here.
Rob Wiblin: Yeah. OK, so that’s kind of the state of the world up until now. I guess if people want to hear more about that, they can read the book. The other important consideration you mentioned is the trajectory: so are things getting better rather than worse? We’ll probably have to do this one a little bit more quickly, but do you want to summarise what you think the trajectory is?
Will MacAskill: Yeah, I think two things: there’s the trajectory of the world, and my guess is that it is getting better, even after you factor in the enormous amount of suffering that humanity has brought via factory farms. I think if you look at the underlying mechanisms, it’s more positive. Where in the long-term future, how many beings do you expect to be “agents” — as in, they’re making decisions and they’re reasoning and making plans — versus “patients,” which are beings that are conscious, but are not able to control their circumstances?
Will MacAskill: The trajectory to date is pretty good for agents. Certainly since the Industrial Revolution, I think human wellbeing has just been getting better and better. Then for animals, it’s quite unclear. There are far fewer wild animals than there were before. Whether that’s good or bad depends on your view on whether wild animals have lives that are good or bad. There are many more animals in factory farms, and their lives are decidedly bad. However, when you extrapolate these trends into the distant future, I think it’s unlikely that you’d expect there to be very large numbers of moral patients, rather than the future mainly consisting of moral agents who can control their circumstances.
Rob Wiblin: Yeah. It’s interesting. On this model, basically until there were humans, basically everyone was a moral patient. In the sense that maybe wild animals, with some conceivable partial exceptions, had neither the intellectual capacity nor the practical knowhow to control the situation they’re in in order to make their lives much better. The fraction of all conscious experiences being had by agents has been going from zero, gradually up. And so long as we don’t allow there to be new patients, by effectively prohibiting future forms of slavery, then we might expect it to reach 100%, and then for most of their lives to be reasonably good.
Will MacAskill: Yeah. Absolutely. And it makes sense that their lives are reasonably good, because they want their lives to be good and they’re able to change it. Then that relates to the kind of final argument — which is the more “sitting in an armchair thinking about things” argument, but which I think is very powerful. This is just that, as we said earlier, some agents systematically pursue things that are good because they’re good. Very, very few agents systematically pursue things that are bad because they are bad. Lots of people do things because they’re self-interested and that has negative side effects. Sometimes people can be confused and think what they’re doing is good, but actually it’s bad.
Will MacAskill: I think there’s this strong asymmetry. If you imagine the very best possible future: how likely is that? What happened to get there? I can tell a story, which is like, “Hey, we managed to sort out these risks. We had this long deliberative process. People were able to figure things out and people just went and tried to produce the best world.” That’s not so crazy. But then if I tell you the opposite of that, it’s like, “We have the worst possible world. Everyone got together and decided to make the worst possible world.” That seems very hard indeed.
Will MacAskill: It’s very plausible to me that we squander almost all of the value we could achieve. It’s totally on the table that we could really bring about a flourishing near-best society. And then it seems much, much less plausible that we bring about the truly worst society. That’s why I think the value of the trajectory of the future skews positive.
Rob Wiblin: Yeah. Just before we move on, these questions of “Will the future be good or bad?” or “Has the universe been good up until now, or bad up until now?” — these seem like such important, fundamental questions for understanding existence, understanding our situation, that you’d think there’d be lots of academics who specialised in them. Like, “I’m in the subdiscipline of understanding what it’s like to be a fish, in order to help answer this broader question of whether the universe has been good or bad.” Yet it’s funny, because you know there are more or less no academics who specialise in this question. I don’t think there actually are any, or that this is a discipline exactly. It just seems crazy.
Will MacAskill: It’s completely crazy to me. If philosophers are going to talk about anything, you’d think it would be this. David Benatar has made some arguments, Parfit makes some casual comments, but yeah, very, very few people have really given a sustained treatment of this topic.
Will MacAskill: The problem is wider than that. Within psychology and economics, here’s a question: how many people in the world today have lives that are above zero, such that they are actually happy to have lived? This is enormously important. Forget all the longtermist stuff. Suppose you’re just doing public health: if you are weighing lifesaving interventions against interventions that improve quality of life, the answer to that question really matters.
Rob Wiblin: It’s basically the same logic as at the species level, because if people’s lives are actually just bad, then let’s set aside extending life for now and just improve their lives, so that it’s good when their lives are longer.
Will MacAskill: Exactly. Yeah, exactly. It means you might be sympathetic to people who smoke, or people who take drugs that shorten their life expectancy but improve their quality of life while they are alive — obviously, smoking is also an addiction and so on — but it applies to lots of things. If your life is close to zero in wellbeing, then shortening your life expectancy in order to make your wellbeing greater just makes sense.
Rob Wiblin: Makes sense. Yeah.
Will MacAskill: And so before this book was written, how many published studies were there addressing this question?
Rob Wiblin: I’m going to guess zero?
Will MacAskill: Zero is the answer. Exactly. There are still zero published studies. There’s one unpublished study by Joshua Green and Matt Killingsworth, which somewhat addresses this question, looking at people in the US in particular. And then a commissioned one by Abigail Hoskin, Lucius Caviola, and Joshua Lewis, that just directly asks people a variety of questions, including “Does your life contain more happiness than suffering?” We asked that of people in the US and India. Among the respondents, 16% in the US said that their life contained more suffering than happiness; about 40% said it was more happiness than suffering. In India, it was 9% who said they had more suffering than happiness, so Indian respondents actually rated their lives better than those in the US.
Rob Wiblin: By that measure.
Will MacAskill: Yes, and then these aren’t representative of the country necessarily. But the standard economic method of doing cost-effectiveness analysis, the quality-adjusted life year, assumes that death is the worst state you can be in. That’s zero. You can’t go below zero. Whereas the empirical evidence is that lots of people think their lives are worse than zero. That would really mean that we should have much more focus on improving quality of life rather than saving lives.
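[Note: a minimal sketch of the point Will is making about the zero floor. The quality weights below are illustrative assumptions, not figures from any actual QALY tariff:]

```latex
V = \sum_{t=1}^{T} q_t, \qquad \text{standard assumption: } q_t \in [0, 1] \text{ with death at } 0, \qquad \text{relaxed: } q_t \in [-1, 1]
```

[With the zero floor, extending any life can only add value. If wellbeing can be negative, extending a life at quality −0.2 for 10 years subtracts 2 from the total, while raising that person’s quality from −0.2 to +0.3 over the same 10 years adds 5. That is the sense in which allowing lives below zero shifts the priority from saving lives towards improving them.]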
Rob Wiblin: Yeah, which is quite an intuitive implication in a way. I mean, lots of people have said we massively underrate the importance of treating mental health, for example, over physical ailments, for reasons kind of along these lines.
Will MacAskill: Yeah, absolutely.
Can we actually predictably influence the future positively? [01:47:27]
Rob Wiblin: OK, let’s push on and talk about some of the various concerns or objections, or clusters of worries that people have about longtermism, either philosophically or in practice. One strain of criticism that I’ve heard a tonne over the last 10 or 15 years is that because longtermists are trying to have positive effects that are very far down the road — hundreds, thousands, tens of thousands, millions of years in the future — there isn’t really a very good feedback loop to figure out whether what we’re doing is helping or harming. This basically makes the entire enterprise far less tractable than, say, trying to improve the health of people around today, where you can clearly see whether you cured someone of cancer or whether they died of cancer. You get this very rapid feedback loop that allows you to improve what you’re doing very quickly.
Rob Wiblin: Whereas even just on the measure of trying to prevent nuclear war right now, you’ll never really know whether you’ve succeeded — it’s very hard to tell what things will work and what things won’t. But then thinking about the longer-term chain of consequences that you’re trying to bring about, if you are trying to prevent a nuclear war in order to then make the future go better through some of the indirect channels that you’ve been describing, you’ll be long dead before you ever figure out whether that worked. You’re just kind of floundering around in the dark. I’ll let you give your favourite formulation of this concern first, and then maybe you can respond to it.
Will MacAskill: I thought your formulation was very good. You might just think, look, most activities have very little impact indeed, and the way you manage to avoid having very little impact is precisely by testing and learning and iterating. If you’re just trying to do good on the basis of these armchair arguments, then, as the criticism would go, you’re just as likely to do harm as you are to do good, and so really you should think that you’re having about zero impact.
Rob Wiblin: Yeah. Why isn’t this a decisive objection?
Will MacAskill: Well, I think it’s very important.
Rob Wiblin: Yeah, it’s a great objection.
Will MacAskill: Yeah, it’s a great objection, exactly. I think it’s something we should be really worried about. There’s two thoughts. One is a kind of sad response. A second is somewhat more optimistic.
Will MacAskill: The more pessimistic response is just, unfortunately, this bleeds into everything. Once you’ve endorsed the evaluative claims that most value is in the future — because most people are in the future and they matter too — then we’re not getting feedback loops on anything. You might be working in development and you’re providing benefits. You’re getting feedback loops on whether bed nets work, but almost all the value is in the very long-term future. How does saving someone’s life with insecticide-treated bed nets impact the very long-term future? Well, there’s just many, many, many causal consequences. You’re not getting any feedback loops on whether those causal consequences are good or bad.
Rob Wiblin: Yeah. You’re just as in the dark empirically as the folks who are doing more peculiar longtermist stuff.
Will MacAskill: Yeah, exactly.
Rob Wiblin: Both of you are stuck just making theoretical arguments, based on your model of the world about which things you expect to have better long-term consequences.
Will MacAskill: Exactly. And that’s even if you say, “But I just think that all washes out. All of those knock-on effects of improving economic development or saving lives, they all just cancel out.” That is itself a claim, in fact, and actually quite a bold claim. The pessimistic response is just, “Yeah, we’re all in that situation,” and if you think, “OK, well that means that we’re just completely shooting in the dark,” that just applies to everything.
Will MacAskill: So there’s a little bit of a dominance argument you can make against this. Perhaps we can’t learn because we don’t have feedback loops; then everything’s on the table, everything’s equally good or equally bad, depending on your perspective. In that world there’s nothing we can do, so we shouldn’t really think about it. We should at least act on the assumption that well-meaning people who are really trying to reason carefully about things are doing at least better than neutral, doing a little bit of good. Which is what I think.
Rob Wiblin: OK, so the argument here is we could live in a world where people who try to improve the long-term future because of the unforeseeability of things, in fact, their actions on average are of neutral value. They neither help nor hurt. Some help and some hurt, and they cancel out. We could live in a world where people who try to make the long-term future go better, in fact, on net make things worse, maybe because they’re drawn to counterproductive activities by some mechanism. Or we could live in a world where things are very obscure, and it’s hard to tell what impact your actions have, but nonetheless, people have some ability to discern actions that are more likely to make things better than to make them worse.
Rob Wiblin: Now, you want to say there’s an asymmetry between the second and third ones — that it’s unlikely that we live in a world where people like you and me, trying to make the long-term future go better, are in fact drawn systematically to harmful activities. Instead, you kind of are choosing between the first and the third — and then on average, that suggests that things are positive.
Will MacAskill: Yeah, exactly.
Rob Wiblin: Yeah. Do you want to explain why the second one is unlikely?
Will MacAskill: Well, I just think, again, you’ve got people who are aiming to achieve something and carefully reasoning about how to achieve it. In general, if I learn this person is trying to achieve X, does that make X happening more or less likely? It seems like obviously more likely, because now there’s an agent trying to make X happen. And people in general do achieve their goals, or at least make them more likely rather than less likely.
Will MacAskill: Then similarly in this case, what does the evidence say about how best to achieve our goal of making the long-term future go better? Well, evidence in general points in a good direction, as in it makes your beliefs more likely to be true than false. Do I have reason to think that we’re more likely to believe true things than false things? I think yes, and I think that’s amply demonstrated in cases where we do get feedback effects — forecasting makes this fairly clear. Between those two, you get things tilted clearly in a positive direction.
Tiny probabilities of enormous value [01:53:40]
Rob Wiblin: Yeah. Makes sense. Another kind of broad cluster of concerns that you hear quite a bit about longtermism is that in some ways, it’s too close to being a fanatical idea. This comes in various different flavours, but basically, if you buy that we should care about future generations, at least somewhat, per person, and you also buy that trillions and trillions — or trillions of trillions — of people could exist in the future, then it seems like you might plausibly believe that you should do really extreme stuff, just because it might have some tiny influence on humanity’s trajectory over all these things in the future.
Rob Wiblin: So in practice that might mean totally changing your job or giving away all your money or breaking the law, just for some negligible chance of producing some vast amount of value far in the future. I think a lot of people find it even more weird and unappealing than that, because the impact that you might be having if you quit your job and did something completely different is so totally unknown. It’s not just uncertain in the way a coin flip is uncertain — where you don’t know whether it’ll come up heads or tails — it’s much more deeply unknown than that.
Rob Wiblin: It feels super speculative, betting your entire career on your estimate of what are the chances that there are aliens that look like horses somewhere in the Andromeda Galaxy. It just feels like it’s pulled out of your rear end. Do you want to give a formulation of this fanaticism objection that you like? Or maybe I did an OK job of it there?
Will MacAskill: I think there are a few different objections that maybe got run together there. So the first one is about tiny probabilities of enormous amounts of value, and that’s a problem for decision theory. Very intuitively: I can either produce some guaranteed good, like saving a life, or take a one in a trillion trillion trillion trillion trillion trillion trillion chance of producing a sufficiently large amount of value that the expected value — that is, the amount of value multiplied by the probability of achieving it — is even greater than the expected value of saving one life.
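For concreteness, here is the arithmetic of that kind of case, with arbitrary illustrative numbers rather than anything from the book:

```python
# A toy version of the comparison Will describes; the numbers are arbitrary,
# chosen only so the long shot comes out ahead in expectation.

value_of_a_life = 1.0      # measure value in "lives saved"

p_long_shot = 1e-84        # roughly "one in a trillion, seven times over", i.e. (10**12)**7
payoff_long_shot = 1e90    # a sufficiently enormous payoff, in lives

ev_sure_thing = 1.0 * value_of_a_life
ev_long_shot = p_long_shot * payoff_long_shot

print(f"expected value of the sure thing: {ev_sure_thing:,.0f} life")
print(f"expected value of the long shot:  {ev_long_shot:,.0f} lives")

# Expected value theory says take the long shot (a million lives in expectation);
# intuition says take the guaranteed save. That clash is the paradox.
```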
Will MacAskill: So what do we do? I’m like, “Eh, it really doesn’t seem you should do the incredibly-low-probability thing.” That seems very intuitive. And then the question is, can you design a decision theory that avoids that implication without having some other incredibly counterintuitive implications? It turns out the answer is no, actually.
Rob Wiblin: Right.
Will MacAskill: Now, this isn’t to say that you should do the thing that involves going after tiny probabilities of enormous amounts of value. It’s just that we’re in a state where we can formally prove that there’s a paradox here — that, I’m sorry, the philosophers have failed you: we have no good answer about what to do here.
Rob Wiblin: We’re stuck in this respect. Yeah.
Will MacAskill: I think in practice, thankfully, we don’t actually need to really encounter this issue, because the probabilities we’re talking about are not astronomically small in this way. So, what’s my probability of extinction from engineered pandemics this century? I tend to say about 0.5%. You could easily say it’s higher or lower. That’s not a low probability. What’s my chance of dying in a car crash in the course of my life?
Rob Wiblin: It’s about the same.
Will MacAskill: Might be about the same. Yeah. Do I wear a seatbelt? I absolutely wear the seatbelt. And in fact, I wear the seatbelt even if I’m taking just a one-mile drive — that’s about a one in 300 million chance of dying. Still seems like a good thing to do. And then I think the risks from AI are much higher again. So society as a whole is really taking some quite meaningful risks without anything like a proportionate response. And so we’re really not at all in the world where it’s like, “This is a one in a trillion trillion trillion chance.”
Will MacAskill: In fact, it’s a world where these are really quite meaningful probabilities. And as a community, I think we’re actually going to make a reasonably large dent in these probabilities, while also probably making the world better in the near term along the way. So even though I think there’s just not really a good answer about what to do about these tiny probabilities of enormous amounts of value, if a probability is one in 1,000 or one in 1 million, that’s not so small as to be getting into these terrible issues.
Rob Wiblin: Yeah. Just to calibrate people, political scientists try to estimate the probability of an election being swung by a single vote. And in a smallish country, in a moderately close election, it’s often around one in a million. In the biggest countries, it’s more like one in 10 million, if you live in a swing seat or a swing state and so on. And I think many, many people will accept that it can be reasonable to go and vote in an election in the hope that you will swing it, because the impact will be really massive if you do, even though the probability of swinging the election is really low. Or at least, the fact that the odds of swinging the election are low is not an in-principle argument for why it must always be irrational to vote, because you’re somehow getting messed around by impossibly low probabilities.
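A back-of-the-envelope version of that voting calculation, using Rob's ballpark probability and invented figures for the benefit and the cost:

```python
# Back-of-the-envelope expected value of voting, using Rob's ballpark odds and
# invented figures for the benefit of the better outcome and the cost of voting.

p_swing = 1e-7              # ~1 in 10 million, big country, swing seat or state
benefit_if_swung = 1e9      # assume $1 billion of social value from the better result
cost_of_voting = 20         # say, an hour of your time valued at $20

expected_benefit = p_swing * benefit_if_swung
print(f"expected benefit: ${expected_benefit:,.0f} vs cost: ${cost_of_voting}")

# At these made-up (but not crazy) numbers the expected benefit is about $100,
# so "the probability is tiny" is not by itself a reason never to vote.
```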
Will MacAskill: Yeah, absolutely.
Rob Wiblin: And I suppose we might think that the probability of some of the projects that we’re engaged in succeeding — preventing human extinction or putting us on a better track — is at least on the order of one in 10 million, given how few people are working on some of these problems that you’re saying have a one in 200, or maybe higher, chance of ending humanity.
Will MacAskill: Yeah. And I think that often the right way to think about it is as a member of the community that’s taking action. In other contexts, if you’re like, “I’m going on this protest,” it’s like, “Well, what’s the chance of effecting change?” That’s just not the way to think about it, because we’re very bad at reasoning in that way. Instead you think: there’s this whole movement doing climate protests or whatever, and that movement maybe has a significant chance, like a 10% chance, of changing policy, and that would be really huge. And I’m maybe a thousandth of the movement or something. Then you can be like, “OK, yeah, this is actually looking like a good use of time.”
Rob Wiblin: Yeah. So some people brought this up in response to the last time that we talked about this general idea. They were saying, “Well, sure, maybe the risk of humanity going extinct because of bioweapons or whatever is one in 200, but your individual chance by contributing by working on preventing that from happening is much, much lower. More like a chance of one in a million, one in 10 million, one in 100 million.”
Rob Wiblin: And there we think they’re thinking about it the wrong way, as you’re saying, because you could apply this to all kinds of actions that require more than one agent, or even require you to engage in sustained effort over a period of time where you’re kind of coordinating with yourself on different days. And we think it’s actually worth thinking at a more group level, where you think: given the full cost of a project, given all of the people who might have to participate in it for it to reach a reasonable scale, and given the probability of that project as a whole, with all of those inputs succeeding, is it worth it in aggregate? And then if it is, then it’s probably worth it for each of the individual contributors to participate in it.
Will MacAskill: Exactly.
Rob Wiblin: And that’s a much more natural way of evaluating whether something is worthwhile than thinking about whether it’s worth you going in for one individual day more to work on the project. It’s too granular.
Will MacAskill: Exactly, and we don’t have good intuitions about it. And if you don’t take that view, then you have this weird thing where it’s like, “Oh yeah, the climate protest community as a whole should be doing this protest, but every individual is wrong to do so.” It’s like, “What?” There’s something weird there, if you think a group of people is above the probability threshold for acting, but every individual is below it. Come on, something’s gone wrong in your reasoning.
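A small sketch of that "evaluate the whole project, then your share of it" move, with invented numbers:

```python
# A sketch of evaluating a project in aggregate and then attributing a share to
# each contributor. All numbers are invented for illustration.

p_success = 0.10              # chance the movement as a whole changes policy
value_if_success = 1_000_000  # value of that change, in arbitrary units
n_contributors = 1_000        # you are roughly a thousandth of the effort
cost_per_contributor = 50     # your cost of taking part, same units

aggregate_ev = p_success * value_if_success        # 100,000
your_share = aggregate_ev / n_contributors         # 100

print(f"aggregate EV: {aggregate_ev:,.0f}; your share: {your_share:,.0f}; your cost: {cost_per_contributor}")

# If the project clears the bar in aggregate and costs and credit are spread
# roughly evenly, each contributor clears it too; the group and the individuals
# can't sensibly end up on opposite sides of the threshold.
```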
Rob Wiblin: Yeah. And also it has nothing to do with longtermism in particular.
Will MacAskill: Exactly. This is just a general issue for decision theory. Then the other thing that I find wild is that sometimes people call even quite large probabilities fanatical. I had someone say — I’m not going to reveal who, but they have power — that 5% is a fanatical probability. I’m like, “OK, do you not take out insurance? Wear a seatbelt?”
Rob Wiblin: Hold on, what do you mean by a “fanatical probability”?
Will MacAskill: As in if you’re doing something and there was a 5% chance of catastrophe resulting from it, that’s too low a probability — just ignore that.
Rob Wiblin: OK. Just in principle, you should dismiss such possibilities.
Will MacAskill: Exactly. Yeah. And surely they can’t have been serious, in the deep sense of how that would affect their actions. But in general, people are just very poor at reasoning about probabilities. There’s this other thing where, even if I’m saying, “It’s a one in 100 chance,” people are like, “Oh, I don’t know how to reason about that.” Whereas look at things like taking out house fire insurance, wearing a seatbelt, exercising: loads and loads of things we do make far, far smaller differences to your chance of dying or losing your house, for example.
Rob Wiblin: Yeah. OK, so that’s one strain of the fanaticism concern. What’s another one?
Will MacAskill: Then there are a couple of other things you alluded to. One was just: should we sacrifice all of the present generation’s wellbeing for the sake of posterity? It’s like, “There’s so much value at stake, and future people outnumber us a thousand, a million, a trillion to one. So even if the present generation is entirely impoverished, surely that would be the right thing to do, if it makes the future go even a little bit better.”
Rob Wiblin: Yeah. Sounds bad.
Will MacAskill: Sounds bad. Yeah.
Rob Wiblin: What’s the response?
Will MacAskill: Well, I think there’s two again. The first one, which is really the most important thing, is again, maybe we figure that out when we come to it. Which will be never, because at the moment we spend — and I picked this number out of the air — 0.1% of global resources in making the long term go better. It’s probably even less than that in the relevant sense. Let’s bump that up to 1% or something and then we’ll see. The world where we’re really getting to, “Should it be 50%? Should it be 90%? Or should it be even higher?” I think is just very far away. And again, I think that probably by that point, we’re just generally building a flourishing society, rather than anything that’s very targeted.
Rob Wiblin: And we’re not just far away from that world by happenstance. It’s because people care about themselves and are not so altruistic that they tend to completely work themselves to the bone and make themselves miserable in order to help people who aren’t even around.
Will MacAskill: Exactly. Yeah.
Rob Wiblin: It’s kind of like saying, “If you argue that climate change is so important, then as a small nation, we should completely impoverish ourselves in order to prevent climate change.” But it’s like, you know that’s never going to happen. We can’t even get people to coordinate to do basic stuff.
Will MacAskill: Yeah, exactly. And so there is this more general philosophical question of: what are the demands of morality? And it really maps onto the classic Peter Singer arguments. Say I’ve given 90% of my income away and now I’m on, let’s say, £5,000 per year. But that’s still a lot more than the poorest people in the world have. And by donating half of that over the course of a couple of years, I can save a life. Well, it’s only half your income. You can still live on that. And being a bit poorer is not comparable to saving a life. So from the Peter Singer arguments, just looking at extreme poverty, it seems you get this conclusion that you should just keep giving and giving until…
Rob Wiblin: You’re in absolute penury.
Will MacAskill: Exactly. I think a lot of people think that’s too extreme, but at the same time, we should be doing more than we are now. There’s some kind of middle ground. And again, I think moral philosophers have just failed us on that one. I don’t really know of a good, principled philosophical account of where one draws the line. But there’s plenty of practical advice, which we’ve talked about a lot: reflect, settle on a good line for how much one plans to give…
Rob Wiblin: Choose something sustainable. Choose something that other people can emulate.
Will MacAskill: Yeah, exactly. There’s lots of things like that. But in terms of a deep philosophical principle, it’s really hard for the answer not to be that you either should give nothing or everything you can.
Rob Wiblin: Yeah. I suppose the alternative would be to say it’s a threefold difference: you can value yourself exactly three times as much as people in severe poverty.
Will MacAskill: But actually, even that wouldn’t work.
Rob Wiblin: OK. Because if people get poorer?
Will MacAskill: Well, no. Because let’s say you value yourself three times as much. You still give almost everything. You no longer have to give down to exactly the level at which you are as poor as someone living in extreme poverty — but it’s very close, because at the moment you can provide something like 100 times as much benefit to them as you can to yourself. So if you weight yourself three times as much: well, the poorest people in the world live on about $2 per day, which is about $700 per year. So, OK, you end up living on about £1,000 per year. That’s the 3x equivalent.
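One simple way to formalise that back-of-the-envelope reasoning, assuming logarithmic utility of income (an assumption of this sketch, not something Will commits to):

```python
# One way to formalise the sketch above: assume logarithmic utility of income,
# so the marginal value of a dollar at income Y is proportional to 1/Y, and you
# weight your own wellbeing `self_weight` times as much as that of someone living
# on `poorest_income`. Giving another dollar is then worthwhile while
# 1/poorest_income > self_weight * (1/Y), i.e. until your income falls to roughly
# self_weight * poorest_income. These are illustrative assumptions, not the book's model.

def giving_floor(self_weight: float, poorest_income: float) -> float:
    """Income at which further giving stops being worthwhile under this model."""
    return self_weight * poorest_income

poorest = 700   # roughly $2/day, the figure used in the conversation
for w in (1, 3, 10):
    print(f"self-weight {w:>2}x: keep giving until you're on ~${giving_floor(w, poorest):,.0f}/year")

# Even a 10x self-weight only stops the argument at about $7,000 a year:
# whatever ratio you pick, you end up close to the income of the people
# you're helping, which is the point being made here.
```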
Rob Wiblin: I see. Yeah, I suppose the problem is no matter what number philosophers pull out of a hat for that ratio, then it just becomes an empirical question of how severe is the poverty and how rich are you? And in totally imaginable worlds, you’re still going to be making enormous sacrifices.
Will MacAskill: Exactly. And you get a parallel issue when you’re thinking about the long-term future. So should we weigh the interests of the present generation more than future generations? I think that’s true: I have special relationships with people in the present, and they have given me benefits that I should return. So how much more weight do you give? Well, this has come up in the context of the discount rate. If you discount the wellbeing of future generations — by basically any number at all, but by the same number every year — then beyond a few centuries you give them essentially no weight at all. And that seems clearly wrong.
Rob Wiblin: Seems harsh.
Will MacAskill: Exactly. If you give no discount rate at all, then they’re of overwhelming importance and we should sacrifice everything. There’s basically no really satisfactory theoretical middle ground.
Rob Wiblin: Can you have a geometrically decreasing discount rate? Like the standard rate itself reduces each year into the future?
Will MacAskill: Yes, you can.
Rob Wiblin: Because then you’ll have time-inconsistency problems.
Will MacAskill: Yes. I think that is how you should discount — it’s called a “declining discount rate schedule” — but you’re still going to be completely hostage to how many future people there will be.
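To illustrate the dilemma numerically (the rates here are arbitrary, not taken from the book):

```python
# Numerical sketch of the dilemma: any constant positive discount rate drives the
# weight on far-future wellbeing to essentially zero, while a zero rate leaves
# the future overwhelmingly large. Rates below are arbitrary examples.

def weight(years: int, annual_rate: float) -> float:
    """Weight on wellbeing `years` from now under constant exponential discounting."""
    return (1 - annual_rate) ** years

for rate in (0.03, 0.01, 0.001):
    print(f"rate {rate:.1%}: weight at 500 years = {weight(500, rate):.2e}, "
          f"at 10,000 years = {weight(10_000, rate):.2e}")

# Even a 0.1% annual rate leaves year 10,000 with under 0.005% of today's weight;
# at exactly 0% the weight never declines, and sheer numbers of future people dominate.
```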
Rob Wiblin: I see. Yeah.
Will MacAskill: And so again, you get either the conclusion that you do nothing, because there aren’t enough future people to outweigh this, or it is of overwhelming importance.
Rob Wiblin: Yeah. I suppose, because it becomes empirical again. You could live in a world where you think that, say, there’s only the Milky Way galaxy. And in that world, you’re entitled to have a good life. And then you discover that there’s other galaxies, and now you’re like, “Oh no. Now, based on the discount rate that I’d chosen before, I just have to give away absolutely everything.”
Will MacAskill: Exactly. Which just doesn’t seem that plausible.
Rob Wiblin: It’s not intuitive.
Will MacAskill: But on the other hand, surely the amount we ought to do is sensitive to the stakes. So let’s say, Rob, you’ve donated so much that you’re now on £1,000 per year. You can go down to £500 per year. The world will end if you don’t. Seems like morally you ought to do that. And that’s because the stakes really are relevant.
Rob Wiblin: Yeah.
Will MacAskill: So yeah, I think we face the same issue when we’re thinking about, from the philosophical perspective, how much ought the present give for future generations. I think there’s no really firm line that isn’t between zero, which is implausible, and everything, which is also implausible.
Rob Wiblin: It feels like what people want to do intuitively is say something like, in a normal world, you should dedicate at most a third of your time or a third of your money to helping other people. And then unless things become way worse than what we’re used to, then that is an acceptable amount and you shouldn’t be very sensitive on the margin to how the world is. It’s like we want to do buckets somehow, and then just not have the buckets spill over between the personal and the altruistic buckets, except in very extreme circumstances.
Will MacAskill: Yeah. But then it’s hard to argue that we’re not in those very extreme circumstances. So in the fanaticism case, with those tiny probabilities, I’m like, “We’ve just got no idea.” In this case, I’m at least more sympathetic to the idea that, in terms of pure moral philosophy, in principle we should just do as much as we can to do the best thing. But again, in practice, I just don’t think it makes a difference, because the idea of all of society dedicating itself to making the long-term future go well is so ridiculously far off. What’s actually happening now is just, “Do we give it essentially no concern or do we give it some concern?” And that’s just robustly a good thing to do.
Rob Wiblin: Yeah. I thought you might give kind of a different response to this, which is that there are maybe different classes of moral concern. So this impartial concern for the wellbeing of others that is kind of driving longtermism, that’s one class of moral concern, and we should maybe bracket how demanding that can be. And then it has to be weighed up against other classes of moral concern: concern for people we know, concern for the world, concern for people around today. That has a different driving motivation and it should get some significant fraction of our resources as well, because it’s a different magisterium and it can’t just be traded off one for one against the impartial one.
Will MacAskill: Yeah, that’s a way you could go. But again, it just seems weird for it to be completely stakes-insensitive. Let’s say it’s a third on making things go better, impartially speaking. And I spend a third of my income, or the present generation spends a third of its GDP. But now some new opportunity arises.
Rob Wiblin: Much better than the previous ones.
Will MacAskill: Yeah. And really it’s an emergency situation. The entire future’s going to get turned into a dystopian hellscape if only we’d spend the extra percent. Obviously that matters. So having these kind of hard cutoffs is, again, from this philosophy perspective, not something that ends up theoretically satisfying at all.
Rob Wiblin: Yeah. I suppose on the demandingness thing, you kind of have to bite one of the bullets: either the potential for fanatical demandingness, or the potential for fanatical indifference to the harm that will be done if you don’t act. And it is very hard to find any middle ground between these.
Will MacAskill: Yeah, for sure. Exactly.
Rob Wiblin: Yeah. OK. I thought you might suggest that the idea of moral uncertainty, which you’ve written a lot about, might in some way bail us out here and help us find a reconciliation.
Will MacAskill: So it’s helpful on a different issue. In general, very broad brush, there are these two major camps within moral philosophy: consequentialism and non-consequentialism. Consequentialism differs from non-consequentialism in two ways. The first is that consequentialism says you don’t really have personal prerogatives — so if there’s something that would be good for the world, but it would benefit me more to spend this money on myself rather than helping others, consequentialism says, no, you should just always do the thing that’s impartially best, even if that’s best for other people rather than for you. Non-consequentialism says sometimes it’s OK: it can be permissible to benefit yourself rather than promote the impartial good.
Will MacAskill: The second way in which consequentialism and non-consequentialism differ is on side constraints. So consequentialism says, in principle at least, that it’s just always about what outcome you produce. So if you can save more lives at the cost of sacrificing one life, then you ought to do that — and there’s big debates on how much the thought experiments that philosophers talk about are ever applicable in real life. Whereas non-consequentialism says the ends don’t always justify the means. Sure, if it’s a billion people at stake, maybe you need to sacrifice one, but if it’s just 10 people, then people have a right to life. And similarly for rights against other things, like side constraints against stealing or telling lies and so on.
Will MacAskill: And I kind of think, once you properly take moral uncertainty between the two into account, it’s one win each for consequentialism and non-consequentialism. On demandingness, as per the consequentialist, morality becomes very demanding, because it’s like, “I could spend money on myself or I could give it to help others.” While the non-consequentialist says, “It’s permissible either way,” the consequentialist says, “Oh no, it’s wrong if you don’t give it.” So under moral uncertainty, the best compromise is to give the money.
Will MacAskill: On the other hand, for side constraints, I think that non-consequentialism weighs quite heavily there. Let’s say it’s sacrificing one to save two others: a consequentialist says, “Yeah, you ought to do that.” The non-consequentialists though, in my view, are saying, “Oh, it’s really wrong to do that.” Given that you’re giving some weight to non-consequentialism, which I think you should, then you should take side constraints pretty seriously, and often not act in a way that will do the most good — because it would involve violating rights or breaking some other moral side constraint.
Rob Wiblin: Yeah. I guess the issue is exactly how you aggregate between those two, or how you figure out what the tradeoff is in any specific cases.
Will MacAskill: Yeah. That’s much more complicated, but at least it points in a particular direction.
Rob Wiblin: Yeah.
Will MacAskill: And that relates to the final aspect of fanaticism that you mentioned. The idea of: could concern for the long-term future just justify anything? What about atrocities? People sometimes associate “concern for the long term” with dictators who had some ideology or grand vision: utopianism, which people can think of as kind of dangerous. I think the key thing there is strongly distinguishing between questions. Do you have a positive vision of a good future? Longtermists often don’t necessarily — you don’t even have to have that, apart from a very vague sense of…
Rob Wiblin: The future could be really good.
Will MacAskill: It could be really good. We want people to be happier, want people to be reasoning about things. But even if you do have a utopian vision, that does not mean that any ends justify the means. And one thing we’ve learned through history is just that people who are doing bad things for a greater good — that doesn’t go well. So I think on pure consequentialist grounds, taking extremely cooperative, common-sense approaches to doing good is just exactly right.
Will MacAskill: And we’ve seen this with certain animal rights campaigning. There were extreme wings of the animal rights movement that would send bombs in the mail to MPs and things. It was just so bad for the animal rights movement, as well as being morally wrong in and of itself. So similarly, we have good arguments in favour of taking the long term seriously. We should promote those. We should take these cooperative actions that make the short term better as well as making the long term better. That seems really good, as well as having moral reasons not to violate side constraints in pursuit of a greater good. So I think that’s what really ought to determine it.
Rob Wiblin: Yeah. Every so often I complain that there’s kind of no strain of utopianism in society anymore, or people just focus constantly on dystopias and don’t really think about how the future could be far better. I think one reason for that is that utopianism has developed a very bad reputation, because in the past it’s been used to justify all kinds of horrific things. It’s motivated people to engage in horrific actions as part of political revolutions that basically 100% of the time backfire and are really catastrophic.
Rob Wiblin: As for the lack of positive visions for the future: even among longtermists, there’s not that much interest in spelling out how the future could be far better. I think there are downsides to that, but a really positive thing is that this horrific history of utopianism has really inoculated us against the idea that you can justify atrocities, or otherwise really morally bad actions, because the consequences will be good. It’s so hard to point to people who advocate for that in practice. Even if in some sense their philosophy might imply it, no one’s willing to go there.
Will MacAskill: Yeah. I agree, it would be good to have more of a strain of positive utopian thinking. So a little side project I have done with the book is called Future Voices, which is working with authors to write short stories that are voices of future people. And they can write on anything, but I think there will be a bit of a positive bent, precisely to fill this gap where there’s a lot of dystopian fiction. So Ian McEwan, who wrote Atonement, is one of the authors. Jeanette Winterson is another, and Naomi Alderman. I’m quite excited to see how it turns out. And yeah, as you know, I did a little bit of dabbling into utopian fiction as well — there’s an Easter egg in the book that you can maybe find. But that’s just a very different thing from this “ends justify the means” mentality, which is systematically very bad, I think.
Rob Wiblin: OK, so we’ve dug into two strands of objection there: the fanaticism, as well as the intractability. To keep this interview at a sane length, we’ve had to somewhat constrain what objections we’re going to bite into today. Are there any other ones that you are keen to highlight or at least flag for the audience?
Will MacAskill: Going back to our previous discussion, there’s this idea that maybe we’re really missing things — missing crucial moral or empirical insights. And so generations to come — maybe that’s ourselves later in our lives, or maybe it’s the people who take up the torch for trying to do good — could do even more good than us. I think that’s an important consideration, and it can affect how much of our resources we want to go all-in with on the priorities that seem most pressing now. I think it’s a lot. But I still also think we should be trying to create a kind of community and movement such that people can — in 50 years’ time, once they’ve learned a lot more — take action to make the world better, even if that’s in a way we wouldn’t call longtermist now, perhaps.
Stagnation [02:19:04]
Rob Wiblin: OK, let’s push on to a specific chapter of the book that I thought was particularly interesting. In particular, it covered a bunch of stuff that we’d never talked about on the show before, so I was keen to dive into it. We had quite a smorgasbord of chapters and topics that we could go into and I just had to basically pick one, and this is the one I went with.
Rob Wiblin: So one trajectory that you contemplate in What We Owe the Future is the scenario where there’s no big global disaster or anything like that — the future is in a sense a little bit boring compared to the stuff that we’ve been talking about — but instead, we just see technological and economic and possibly cultural improvement peter out and stagnate for an extended period of time, over the next 100 or 200 years. So it’s a stagnation scenario. I think the chapter is called “Stagnation.” Why should we think that might happen?
Will MacAskill: So the core argument is this. What causes economic progress, or technological advancement, to occur? Well, it’s having more ideas about how to combine raw materials into new things. And a significant driver of how many new ideas get developed is how much time people spend trying to develop them — which we can think of, as an abstraction, as the total number of person-hours going towards R&D.
Will MacAskill: What’s the story then of why we have had — at least in frontier economies — fairly steady technological progress over the last couple of centuries, fairly steady economic growth? It turns out that there’s two things that have been going on, and they approximately balance each other out.
Will MacAskill: On the one hand, ideas have been getting harder and harder to find. This makes sense qualitatively. Einstein was able to develop his theory of special relativity just sitting in his spare time as a patent clerk. It was really just armchair reasoning. Now the latest advances in physics require the multibillion-dollar Large Hadron Collider. And the discovery of the Higgs boson is really pretty minor compared to the theories of special and general relativity.
Rob Wiblin: Yeah. So the returns per dollar or per hour have gone way, way, way down in physics.
Will MacAskill: Yeah, exactly. But then there are arguments — in particular the paper “Are Ideas Getting Harder to Find?” — suggesting this is true really quite generally. So that’s one thing: ideas are getting exponentially harder to discover.
Will MacAskill: At the same time, we’ve been throwing exponentially more time at doing the equivalent of R&D. So there’s two ways in which that’s happened. One is just the population has grown a lot — by approximately seven billion over the last couple of hundred years — so we’ve just got a larger labour pool for people to be doing R&D. And then secondly, we’ve been putting a larger and larger proportion of the population into R&D. So the proportion of the population who are, for example, trying to design new computer chips: I can’t remember it quantitatively, but it’s tens, maybe hundreds of times greater than it was even just a few decades ago. And that’s been required to keep Moore’s law going.
Rob Wiblin: Yeah. So more broadly across society, we’ve had population growth, but we’ve also had explosive growth in the fraction of the population that has finished high school and then has finished university and then has finished PhDs and people who are doing knowledge jobs. That’s gone from basically a negligible fraction to quite a significant number now.
Will MacAskill: Yeah, exactly. So that’s been the trajectory so far. Looking ahead, do we expect these trends to continue? Well, population projections are that the population will plateau at about 11 billion. You can argue with precisely the number, but the world as a whole is only barely above replacement now. In fact, it’s below replacement in all continents apart from Africa. So that method — increasing even further the number of researchers — that’s going to level out.
Will MacAskill: And then secondly, how much larger can the percentage of the population put towards R&D get? Well, that’s got a natural cap of 100%, and the actual cap is probably much lower than that. But again, we know it can’t increase indefinitely.
Will MacAskill: And so the prediction from the leading long-run growth model — semi-endogenous growth theory — is that given this plateauing of fertility and the limited gains you can have by increasing the proportion of the population dedicated to R&D, you get declining growth rates that then plateau, and you get a long-run stagnation.
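A toy version of that model, in the spirit of Jones-style semi-endogenous growth theory rather than a faithful reproduction of it, with arbitrary parameters:

```python
# A toy model in the spirit of semi-endogenous growth theory (not the book's own
# numbers): new ideas per year scale with the research workforce L, but each
# increase in the knowledge stock A makes further ideas harder to find (phi < 1).

def simulate(years, L0, L_growth, delta=0.02, lam=1.0, phi=-1.0, A0=1.0):
    """Return the yearly growth rate of the idea stock A."""
    A, L, rates = A0, L0, []
    for _ in range(years):
        dA = delta * (L ** lam) * (A ** phi)   # idea production this year
        rates.append(dA / A)
        A += dA
        L *= 1 + L_growth
    return rates

growing = simulate(300, L0=1.0, L_growth=0.01)   # researcher pool grows 1% a year
flat = simulate(300, L0=1.0, L_growth=0.0)       # researcher pool has plateaued

print(f"growing workforce: growth rate {growing[0]:.2%} initially, {growing[-1]:.2%} after 300 years")
print(f"plateaued workforce: growth rate {flat[0]:.2%} initially, {flat[-1]:.2%} after 300 years")

# With a plateaued workforce the growth rate keeps decaying towards zero;
# with an ever-growing workforce it settles at a positive rate instead.
```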
Rob Wiblin: Yeah. Another reason why someone might think that this could happen or will happen is that, if you look over long periods in history, you have the hunter-gatherer era in which our productivity was rising very slowly. And this was then reflected in the very gradually growing global human population. Then you had a farming era where growth rates rose to a new level, but they were still very low by today’s standards. And so, again, over the long haul, over 10,000 years, you had very gradual growth in human population.
Rob Wiblin: And then we have the Industrial Revolution, which really kicks things into the gear that we’re familiar with. Things are really changing over a single person’s lifetime after 1750. There’s all kinds of new things going on. Global economic growth rates are something like 3% or 4% per year.
Rob Wiblin: Then, however, a lot of people have argued that since about 1970, the rate of productivity improvement or the rate of innovations that are actually useful has declined relative to what it was between 1800 and 1970. And so possibly we’re already seeing the effect that you’re talking about, where we can’t increase the number of people doing interesting R&D quickly enough to offset the fact that the ideas are getting harder and harder to find. It might have already bitten to some degree.
Will MacAskill: That’s right. So economists measure technological progress with a metric called total factor productivity, and the growth of that has shown a pretty consistent downward trend over, as I understand it, the last 50 years. So that’s what the data suggest.
Will MacAskill: And then there’s this qualitative argument of just, if you look at all of the changes that happened between, let’s say, 1870 and 1920, and between 1920 and 1970, each interval is just enormous. Electrification is huge, indoor plumbing is huge.
Rob Wiblin: Is it revolutions in social structure as well, and how people were coordinated?
Will MacAskill: What are you thinking of in particular here?
Rob Wiblin: I was thinking, going from 1800 to 1970, we’ve gone from a world where almost everyone lives in a monarchy and almost no one in a democracy to a world where we have communism and liberal democracy. There’s big changes in how we think about how humanity ought to be organised.
Will MacAskill: Huge changes. Yeah. And then from 1970 to 2020, there’s been huge changes in communications and information technology, undoubtedly, but then pretty incremental changes everywhere else. So some people argue that that shows we’re already in a great stagnation or approaching a great stagnation. I’m sympathetic to that. Having said that, that’s not necessary for the argument to work.
Rob Wiblin: Yeah, OK. So as longtermists, thinking about the very long term, you might think, does it really matter that much whether the next 20 or 30 or 40 years are a bit quiet technologically? Why should we care from a longtermist perspective about the possibility that there is a stagnation for decades, possibly centuries?
Will MacAskill: There are two main reasons, I think. One is that if we get stuck in what Carl Sagan called the “time of perils” — a period of heightened extinction risk — then that increases total extinction risk. So imagine that in 50 years’ time, we now have weapons that can create engineered pathogens. We don’t have the defences against them; that requires more advanced technology. And then we get stuck there. Then let’s say it’s 0.1% annual risk. If we get stuck there for centuries or even longer, then that adds up to a really significant risk overall.
Rob Wiblin: That we’ll never make it out.
Will MacAskill: Yeah, exactly. So, that’s one reason. That it’s just an existential risk factor.
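The arithmetic behind that "adds up" point, using the illustrative 0.1% annual figure:

```python
# A small annual extinction risk, sustained through a long stagnation,
# compounds into a large cumulative risk. The 0.1% figure is the illustrative
# number used in the conversation, not a precise estimate.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Chance of at least one catastrophe over `years`, assuming independent years."""
    return 1 - (1 - annual_risk) ** years

for years in (50, 200, 500, 1000):
    print(f"0.1% per year for {years:>4} years: {cumulative_risk(0.001, years):.0%} total")

# Roughly 5% over 50 years, but about 39% over 500 years and 63% over 1,000 years:
# the longer we stay stuck in the "time of perils", the worse the total risk.
```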
Will MacAskill: The second is more unclear in which direction it goes, but it’s this values consideration again. What are the values that are predominant today, versus the values of whatever society restarts economic progress hundreds of years from now? You might think they’d be better; perhaps that’s a chance for moral progress to march on. But if you think that liberal democracy is somewhat fragile, that it’s not a guarantee at all, you might well think the values would actually be worse after a long stagnation.
Rob Wiblin: Yeah. I think this argument is famously made in Benjamin Friedman’s book, The Moral Consequences of Economic Growth.
Will MacAskill: Exactly. Yeah. His argument’s a little bit different. He’s just saying, basically, when the economy’s growing, people are happy. You get cooperation because you can work together and you’re going to grow the pie together. If the economy is stagnant, then people start behaving worse.
Rob Wiblin: Bickering about shares.
Will MacAskill: Exactly. And yeah, I think that’s plausible. And he claims the evidence supports him. I think it’s plausible at least. I’m not sure I’m very confident in that hypothesis.
Rob Wiblin: How is your theory different from what Friedman said?
Will MacAskill: Well, mine’s different because what Friedman’s talking about was very short timescales — they were like years or decades — and within, say, a liberal democracy. Whereas I’m saying, over the course of hundreds of years, the predominant social structures could be extremely different. So let’s say we do think that liberal democracy is this contingent, fragile thing. Maybe we have a dictatorship or authoritarian rule that evolves in hundreds of years’ time.
Rob Wiblin: I see. So your argument is less that the stagnation itself will specifically cause us to get worse — merely that if we wait around for ages, then things could end up worse.
Will MacAskill: Absolutely.
Rob Wiblin: Inasmuch as we think humanity is in a reasonable position now, perhaps we don’t want to roll the dice and just see how things are in 2300.
Will MacAskill: Yeah. And in the book, actually I just come out as agnostic on the values question. I think I more want to point to this as a really important consideration either way, and maybe more research is needed. Because it’s a big deal; it’s just that I’m not super confident what the sign is.
Rob Wiblin: Yeah. What probability do you place on a stagnation scenario of some sort actually happening?
Will MacAskill: In the book, in an embedded footnote, I gave it one in three, as in the chance that over the course of the century we tend towards stagnation. I think that’s probably a little high now, but not wildly high. I think it’s a scenario that’s very plausibly on the table.
Rob Wiblin: OK, yeah. Is there anything that you think that we should or could plausibly do now that will help in the stagnation scenario that doesn’t make things worse in other scenarios?
Will MacAskill: Well, there’s this question of should you speed up or slow down AI progress? And that’s a very difficult question in general. Lots of factors on either side. This is one factor on the side of it going faster.
Rob Wiblin: Or at least not actively shutting it down or slowing it down.
Will MacAskill: Yeah, exactly. There’s another risk from delaying it massively. These arguments I’ve given — how on Earth could you increase the amount of labour going into R&D when the population has plateaued? — well, one answer is that you’ve automated it, so effectively you have AI researchers. And then I just think we’re off to the races. The stagnation world is a long-AI-timeline world for sure.
Rob Wiblin: I see. Yeah. It seems that producing artificial intelligence that can do its own R&D is the main path out of stagnation, or at least the main way to speed things up again. Because it doesn’t seem we’re on track to figure out how to get people to have massive families and grow the population again, even if that were desirable. Nor does it seem like we have a tonne of potential to increase the number of PhD students tenfold or a hundredfold in the future, or super educated people.
Will MacAskill: Yeah. I think ultimately it’s about AI, and when do we get it? So yeah, the stagnation worries in general are long-timeline worries. I think we get an extra century of technological progress, is my understanding, given the standard models. Do we get to the point in time at which we can automate R&D by then?
Rob Wiblin: Wait. Hold on. I guess we can project this, can’t we? Because we can say, well, population’s going to be this. This is how we expect education to shift.
Will MacAskill: Exactly. Yeah.
Rob Wiblin: And then you’re saying, after the equivalent of 100 more years of the growth rate that we have now, things just basically cap out.
Will MacAskill: Yeah. That’s my understanding.
Rob Wiblin: OK, cool. I hadn’t heard that one. Sorry, carry on.
Will MacAskill: Yeah. So there’s a question of, do we get to automated R&D in that point in time? And yep, pretty good chance we do. But maybe we don’t. Then what are the solutions to ensure we don’t stagnate? For example, what could we be doing now? What things can we do that increase the chance that we do get to the point at which you can have automated R&D?
Will MacAskill: There are some things you could do. There are still enormous numbers of people around the world who are not able to meaningfully contribute to the research frontier, because they’re born into poverty. They just don’t have the opportunities. And yet they could make huge contributions. So that’s a really big thing.
Will MacAskill: And as you intimated, attempts to increase population size. There’s a weird thing at the moment where people are having fewer kids than they want. So even just getting people to have the number of kids they want would be a huge positive.
Rob Wiblin: Yeah. I saw a graph recently showing household income against how many children you have. And perhaps not shockingly, it initially goes down — as I suppose people get more career focused up until the point where they’re upper middle class — and then as you get very wealthy, then it goes back up again, people start having more children. Presumably because now they actually have the financial slack and perhaps they have the leisure time from earning enough per hour that now they can have the family size that they originally wanted.
Will MacAskill: OK, I didn’t know that fact. But it is very hard to increase family sizes if you’re a government. So Hungary has spent 5% of its GDP trying to incentivise larger families. And I think they moved the fertility rate from 1.5 to 1.7. That was the impact. So it’s very hard indeed.
Will MacAskill: I think another thing one could do though, in combination with just finding the potential research talent from all around the world, is also just having more accelerated programmes — things that are just much more targeted. We do this with athletics, where they scout: they go all around the world, they try and find the most promising people, and the most promising footballers get put on these fast tracks to become professional athletes. You could do the same, but with professional researchers, and they get directed towards areas of R&D that seem potentially likely to pay off in terms of further technological progress.
Rob Wiblin: Yeah, interesting.
Concrete suggestions [02:34:27]
Rob Wiblin: OK, we’re getting to the end of the conversation. It probably is worth turning our minds to what this might all mean for what you and I and listeners should be getting up to. There’s tonnes of ideas out there about this, and a decent fraction of them we’ve talked about in lots of previous episodes — when it comes to bio, AI, preventing war. I guess one you’ve flagged that we haven’t talked that much about is improving clean energy research. It’d be good to do some episodes on that.
Rob Wiblin: To try to add some value in this episode, above and beyond all of the other resources we have on the question of what longtermism implies: how do your personal ideas about what we ought to prioritise differ from those of other people who research or write about longtermism?
Will MacAskill: It’s tough, because people are already pretty in favour of this, but I think I’m even more in favour of the growth of effective altruism as an idea and a community. Where I think values promotion is important.
Will MacAskill: Growing EA is kind of two things. One, it’s a sort of investment, where you’ve got more resources to put towards good things in the future. But it’s also, I think, a path, hopefully, to a good future. Imagine a world in 200 years, let’s say, where there’s just this general culture of people being motivated by impartial moral concern. They use evidence and careful reasoning to try and work out what that implies. When they disagree with people, they have these level-headed arguments that really try to understand the other point of view. When the disagreement is intractable, there’s a strong default of cooperation.
Rob Wiblin: Or compromising.
Will MacAskill: Or compromise. And that’s just a global culture. It’s looking pretty good. I’m feeling kind of happy about the world there, compared to now. So I think that becomes the even better thing. Another thing that I’m particularly concerned about is preserving and promoting liberal democracies. I had this consideration of, how do you evaluate nuclear war? Do you just look at the extinction risk? I think no. The fact that it would be a loss of potential, just wiping liberal democracy off the map, I think is…
Rob Wiblin: It’s a downside?
Will MacAskill: Huge, huge downside. I think that also means other things that might seem like very broad activities — like helping Indian economic growth, given India is the world’s largest democracy, and perhaps guiding it away from Hindu nationalism and some of the more authoritarian strains of Indian political thought — could be very important. Similarly with authoritarian trends in the US too. Not going to name any names.
Will MacAskill: So those things seem extremely important to me too. And then I’m not sure quite how much this pops out in the book, but I’m also just generally more concerned by war, I think, than perhaps other people are. What’s my credence that there’s an equivalent of World War III, like a war between great powers, in our lifetimes? I don’t know, 40% or something.
Rob Wiblin: Wow. That’s high. I suppose that’s the base rate.
Will MacAskill: It’s kind of the base rate. Also, if you look at Metaculus and the predictions it makes, and extend them over the course of a lifetime, out to 2100 say, it’s, I don’t know, one in three or something. It’s really surprisingly high.
Will MacAskill: Then I have this view that, in a war, just a lot of things go a lot worse. I do have lower existential risk numbers than some other people, such as Toby. And part of that is just this view that society is really messy and can be hard, but is self-correcting. In the sense that there are risks from novel pathogens, but then you’ve also got people like Kevin Esvelt making sure that we reduce those risks. And there’s this big asymmetry where people don’t want to catastrophically die, and so those arguments are going to systematically win out over the coming decades.
Will MacAskill: Then I feel like when people are in a state of going to war, all bets are off. You have something like the USSR biological weapons programme, which employed 60,000 people — really, really devoting itself to trying to figure out the nastiest viruses you can.
Rob Wiblin: Yeah. It was terrible for the world, and also probably terrible for the USSR. And yet, because of the Cold War situation, they were able to make mistakes like that.
Will MacAskill: Yeah, absolutely. Again, I think in general, people act fairly rationally in their own self-interest. In war scenarios, even that can go out the window to some extent. And that’s on top of the fact that they’re now making decisions with stakes where it’s like, “We’re maybe just wiped off the map unless we do this crazy thing.”
Rob Wiblin: Is it bad that when you say there’s like a 40% chance of us having a great power war in our lifetimes, my first thought is, why am I saving so much for retirement? I think I need to spend more on my holidays, instead of for a future that won’t exist.
Will MacAskill: Yeah, that’s probably bad.
Rob Wiblin: It’s a little self-focused, isn’t it?
Will MacAskill: I think your thought should maybe be, “Yeah, let’s make that war not happen.”
Rob Wiblin: OK, yeah, that’s fair. Probably won’t succeed at that, but I can definitely pull money out of my retirement account.
Where Will donates [02:39:40]
Rob Wiblin: Where do you donate personally?
Will MacAskill: So my last donation was to the Lead Exposure Elimination Project.
Rob Wiblin: Nice.
Will MacAskill: There are a few grounds. For those who don’t know, it’s a new organisation incubated within the effective altruism community, which tries to eliminate lead paint and ultimately lead exposure from all sorts of sources. Lead exposure is really bad: bad from a health perspective, and it also lowers people’s IQ and general cognitive functioning, with some evidence that it increases violence and social dysfunction. And LEEP, as they’re called, have already taken a tonne of action, going to Malawi and basically getting the government of Malawi to enforce regulation against lead paint. So it seems like they’re really getting traction.
Will MacAskill: This is an example of very broad longtermist action, where I think this sort of intervention is maybe kind of different from certain other sorts of global health and development programmes. If I imagine a world where people are a bit smarter, they don’t have mild brain damage from lead exposure that has lowered their IQ and made them more impulsive, more violent, it just broadly seems like a much better society.
Will MacAskill: That was the first argument. And then the second was just that I think it’s really good for EAs to be doing things in the world — making it better, achieving concrete wins. And I’m aware that my donation has a symbolic value, as well as just being a place where money goes. So I really wanted to recognise and reward that.
Will MacAskill: And then the final thing is just that they actually seem to me to be in real need of further funding, in a way that a lot of the more core, narrowly targeted longtermist work currently is not. So my sense is that a lot of the best giving opportunities are in the stuff that’s a bit broader, because that really hasn’t been as much of a focus for grantmakers.
Potatoes and megafauna [02:41:48]
Rob Wiblin: Yeah. OK, well, we have sufficiently exhausted you, so it’s time to release you back into the wild. Last interview, we spoke a bunch about potatoes and your obsession with potatoes, and you justified that. Is there any topic that you’ve recently gotten obsessed with that’s maybe taking the baton from potatoes, in terms of pointless, massive interests?
Will MacAskill: Well, I should say, in the last podcast I made a comment about this persistence study about how potatoes had this enormous long-run impact. And I commented, “Oh, it probably doesn’t check out.” Turns out it checks out! So my interest in potatoes was vindicated all along. It’s the most important technology ever invented. Well, among the most important.
Rob Wiblin: We’ll go and issue a correction to that episode.
Will MacAskill: OK, please do. Another topic that I went really far down the rabbit hole on and got slightly obsessed with — again, coincidentally, during the pandemic lockdown period — was megafauna. In particular, extinct megafauna. I include this at the start of chapter two of the book. Megafauna are normally defined as animals that are human sized or larger; I think it’s above 45 kilograms. There are a lot of them in Africa: you’ve got rhinoceroses, hippopotami, giraffes, elephants, just a diversity of really large animals. But there are far fewer in other parts of the world. And you might think that’s because of something to do with Africa’s ecology. No, it’s because of humans. Human beings evolved in Africa, so we co-evolved with these large animals, and they evolved to evade humans as a predator.
Rob Wiblin: Whoa.
Will MacAskill: But then there was the great migration out of Africa. Humans, very quickly in evolutionary terms, spread to all corners of the globe. And systematically, when humans arrived in a given area, it wasn’t long afterwards that most of the large animals went extinct. This is interesting in a few ways. One is that it’s just an example of early humans having extremely persistent effects. Because again, as with the human species itself, once some other species is extinct, it’s very hard to come back from that. It’s possible. There are efforts to de-extinct certain animals.
Will MacAskill: Second is just that the animals were wild. In South America there were the glyptodonts — they’re my personal favourite — which are a sister family of the armadillos, but the size of a car, like a Ford Fiesta. They would weigh somewhere between 800 kilograms and two tonnes, and were covered in this carapace of hard shell. The hypothesis is that humans would kill them and use the shell (as well as the meat) — kill them for this protective armour, essentially. They were also blind during the day, which is bizarre, so they were just adapted to low light.
Will MacAskill: That was just one example of these megafauna. There was also the giant ground sloth, Megatherium.
Rob Wiblin: How big are they?
Will MacAskill: Two tonnes. So about the size of an African elephant, but a ground sloth could stand on two legs to get leaves from trees, because they were herbivorous.
Rob Wiblin: I guess we don’t know if it moved really slowly.
Will MacAskill: Yeah. Great question. I don’t know.
Rob Wiblin: It doesn’t show up in the fossil record.
Will MacAskill: There were many elephant-like creatures in South America. Camels in Europe. So if they hadn’t gone extinct, it would be, “Oh, go to Spain, ride a camel.”
Will MacAskill: Haast’s eagles — they were actually a little later, going extinct around 1400 — were the largest eagle species we know of. They would hunt moa, these large flightless birds, which were themselves really quite large: the number I’ve got in my head is about 200 kilograms, which is fairly big. Haast’s eagles were, I think, half the weight of a human or something, and had a 2.6-metre wingspan. And they would hunt these enormous flightless birds. But then the moa were driven extinct by humans, so the Haast’s eagles died off too.
Will MacAskill: And so in these cases, it’s not just that humans were killing the animals directly; they were also causing untold ecological damage. There’s often this narrative of, “Early humans, before we had technology, lived in accordance with nature.” No, no, no: we were responsible for the extinction of most megafauna, burning vast swathes of forest land in order to make hunting easier. It’s actually terrifying.
Will MacAskill: And then the most striking of all is the other Homo species — multiple other human species: the Denisovans, Neanderthals, Homo floresiensis, and a few others, alongside Homo sapiens. In some cases, with the Denisovans and Neanderthals, there was some interbreeding, but mostly out-competition. And it’s kind of a contingent fact that there’s only one human species. There’s not necessarily a lesson here, apart from just boggling at the fact that we could have had a world with two different human species. Think how different our morality would’ve been. In particular, there’s a strain of moral thought which really places humans on a pedestal, far above other animals. Well, if there’d been two human species, would that have happened? Maybe not.
Rob Wiblin: I know. I think what I take away is that humans are cold-blooded killers.
Will MacAskill: Yeah. Have you heard about persistence hunting?
Rob Wiblin: Oh, that’s the thing where they outrun them, right?
Will MacAskill: Yeah. This is another thing I got a little bit obsessed by. Humans are actually very strange physically. We’re one of the only creatures that sweat — us and horses. And we’re incredibly good long-distance runners. Humans evolved, in significant part, to hunt animals over very long time periods.
Will MacAskill: So you’re the zebra, I’m an early human, we’re on the plains in Africa. I will see you and I’ll start chasing you down. It’s like 11:00am, blazing hot sun. And I’ll just start jogging after you, slowly. You run away out of the horizon. I can track you. But not just that — some zebra is running away from me and I can identify one particular zebra and how that’s different from others. And I just keep jogging after you. You see me again on the horizon, you run away. This happens again and again, over the course of many hours, until you just collapse from heatstroke and exhaustion. And then I stand over you, strangle you to death and then eat you. Terrifying. Utterly terrifying. And this was early human hunting.
Rob Wiblin: That’s what we’re born to do.
Will MacAskill: That’s what we’re born to do, exactly. That’s why we’re such good long-distance runners. We also had the advantage of very early tool use: being able to carry water meant we didn’t have to rely only on the water in our bodies; we could take it with us some distance. That was very helpful for this kind of hunting.
Rob Wiblin: Well, this is a positive note to end the podcast on. We usually try to go for something fun and positive to inspire people to action. People always say humans lived in harmony with nature. I feel like we should modify that to: humans lived in harmony with nature after they’d killed everything they could. Then whatever was left that they couldn’t get rid of, no matter what they did, that’s what they lived in harmony with.
Will MacAskill: Yeah, absolutely. So we were doing fact checking [for the book]. We really wanted everything to be rigorous. Probably the issue we got the most heat on out of the whole book was not the abolition of slavery. That’s a sensitive topic. No, it was the question of what killed off the megafauna. Where I really think that if you look at the literature, there are just very, very strong arguments that humans played a decisive role. I’m not claiming humans caused every single megafaunal extinction, but the large majority.
Will MacAskill: A lot of people really hate that. They claim it was climate change. There’s this current of thought that we should not overemphasise humans’ role. But the more I learned about evolution, early human history, and cultural evolution, the more I think humans are just a radically different species. We’ve been having an enormous environmental impact since Homo sapiens first evolved and spread across the globe.
Rob Wiblin: Wow. Here’s to that. My guest today has been Will MacAskill, and the book is What We Owe the Future. Thanks so much for coming back on the show.
Will MacAskill: Yeah. Thank you.
Rob’s outro [02:50:41]
Rob Wiblin: Alright, a few closing notices.
This one I didn’t tease at the start because it’s just for people dedicated enough to make it to the end of a long episode like this: I’m looking to chat to 5-10 randomly selected subscribers to this show in order to get a better sense of who’s out there and what you’re looking to get out of these episodes.
If you’re open to helping with that, go to 80000hours.org/podcastchat (all one word), put in your name and email, and if the random number generator picks you out then I’ll send you a link to book a 20 minute call with me at some hopefully convenient time. I look forward to virtually meeting a few of you and understanding who’s really listening in on these conversations!
———
Second, we’re extending the closing date for this year’s user survey because we haven’t gotten as many responses as we need or hoped for. So you’ve two more days to get those in, which means the end of Wednesday (August 17th, 2022).
It’s important because the user survey is the main way we figure out which of the things 80,000 Hours does are most helpful, and which are not useful or even harmful to folks. You can fill it out at 80000hours.org/survey.
Naturally the team has to be constantly thinking about what to write next, what roles to hire for, which podcast episodes to produce next, and so on, and your input can help with all that.
The previous survey helped get me working on the podcast full time, and people telling us what they wanted from future episodes got us to interview David Denkenberger a second time, and to cover new problems in interviews with Cal Newport and Nina Schick among others.
We have a fair few things going on at 80,000 Hours including this show, our other show 80k After Hours, our job board, our various different kinds of research articles, our one-on-one advising and our marketing efforts to reach new folks. So prioritisation can be a challenge and there’s a lot of topics you might be able to give feedback on.
Normally we do this once a year, but we skipped it in 2021 so we could stay focused on just delivering our projects, and that means we’re particularly keen to know how things have shifted for users over the last 2 years, during a time when 80,000 Hours has grown and changed a lot.
If you filled it out a few years ago, it’s very likely that your plans and opinions have changed since then, so if you’re open to filling it out again we’d really appreciate that.
We’re keen to hear how 80,000 Hours might have affected your plans for doing good, both in your career and otherwise, and also to get feedback from anyone who has engaged with us and hasn’t changed their plans.
On average people take 25 minutes to fill it out — if you were moving fast or had simple things to say you could probably get through in 10-15 minutes, while someone with a complex story and subtle things to communicate would take longer.
However much you write, I can promise you every entry gets read all the way through by multiple people. That’s 80000hours.org/survey.
———
Pushing on, as I mentioned in the intro, 80,000 Hours is currently looking for a new marketer.
From the start of this year we’ve begun investing much more heavily in getting the word out about what we have to offer, since that’s a natural way to do more good as an organisation.
Since then, we’ve found a few things that we think are working, and so want to try doing more of them – but to make that happen we need more people focused on marketing than just Bella Forristal.
Someone who’d be right for this role would be pretty excited about effective altruism or 80,000 Hours’ mission, and might have a background in marketing, especially digital and influencer marketing, but our experience is that that’s not essential and we want to hear from people without jobs like that on their CV as well.
This isn’t exactly a traditional marketing position, since we’re a nonprofit & we’re not actually selling anything for money — so we’d love you to apply even if you aren’t otherwise thinking of yourself as a professional marketer per se.
The role is also full-time, ideally it would be done in person at our office in London, and it would pay around £60k assuming you have little to no prior experience – but more if you do.
If that piques your interest at all you can find plenty more about the role at 80000hours.org/marketer. Applications close pretty soon on the 23rd August 2022.
All right, kudos to you if you stuck with us through all of that!
The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio mastering and technical editing by Ben Cordell.
Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.
Thanks for joining, talk to you again soon.