#136 – Will MacAskill on what we owe the future

  1. People who exist in the future deserve some degree of moral consideration.
  2. The future could be very big, very long, and/or very good.
  3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
  4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for ‘longtermism’ put forward in What We Owe The Future, the latest book from today’s guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for the sake of people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he’s willing to contemplate timescales far beyond what’s typically discussed:

If we last as long as a typical mammal species, that’s another 700,000 years. If we last until the Earth is no longer habitable, that’s hundreds of millions of years. If we manage one day to take to the stars and build a civilisation there, we could live for hundreds of trillions of years. […] Future people [could] outnumber us a thousand or a million or a trillion to one.

A natural objection to thinking millions of years ahead is that it’s hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn’t matter how important something might be if you can’t predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but all future generations.

He highlights two effects that could be very enduring: “…reducing risks of extinction of human beings or of the collapse of civilisation, and ensuring that the values and ideas that guide future society are better ones rather than worse.”

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren’t coming back.

But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

For thousands of years, almost everyone — from philosophers to slaves themselves — regarded slavery as acceptable in principle. At the time the British Empire ended its participation in the slave trade, the industry was booming and earning enormous profits. It’s estimated that abolition cost Britain 2% of its GDP for 50 years.

So why did it happen? The global abolition movement seems to have originated within the peculiar culture of the Quakers, who were the first to argue slavery was unacceptable in all cases and campaign for its elimination, gradually convincing those around them with both Enlightenment and Christian arguments. If a few such moral pioneers had fallen off their horses at the wrong time, maybe the abolition movement never would have gotten off the ground and slavery would remain widespread today.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don’t eliminate a bad practice now, it may be with us forever. In today’s in-depth conversation, we discuss the possibility of a harmful moral ‘lock-in’ as well as:

  • How Will was eventually won over to longtermism
  • The three best lines of argument against longtermism
  • How to avoid moral fanaticism
  • Which technologies or events are most likely to have permanent effects
  • What ‘longtermists’ do today in practice
  • How to predict the long-term effect of our actions
  • Whether the future is likely to be good or bad
  • Concrete ideas to make the future better
  • What Will donates his money to personally
  • Potatoes and megafauna
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Highlights

The case for longtermism

Will MacAskill: I think the core argument is very simple. It’s that future people matter morally. It’s that there could be enormous numbers of future people. And then finally, it’s that we can make a difference to the world they inhabit. So we really can make a difference to all of those lives that may be lived.

Will MacAskill: Homo sapiens have been around for about 300,000 years. If we live as long as a typical mammal species, we will survive for hundreds of thousands of years more. If we last until the Earth is no longer habitable, we will last for hundreds of millions of years. If one day we take to the stars and have a civilisation that is interstellar, then we could survive for hundreds of trillions of years. I don’t know which of those it will be. I think we should give some probability to all of them, as well as some probability to near-term extinction, maybe within our lifetimes or the coming centuries.

Will MacAskill: But taking that all into account, and even on the kind of low estimates — such as us living as long as a typical mammal species — the future is truly vast. So on that low estimate, there are about 1,000 people in the future for every person alive today. When we look at those longer time scales that civilisation could last for, there are millions, billions, or even trillions of people to come for every person alive today.
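
One rough way to see where ratios like these come from, as a back-of-envelope worked equation (the formula and round numbers are illustrative assumptions, not calculations quoted from the episode or the book):

$$\frac{\text{future people}}{\text{people alive today}} \;\approx\; \frac{T_{\text{future}}}{\ell} \times \frac{\bar{P}_{\text{future}}}{P_{\text{today}}}$$

where $T_{\text{future}}$ is how long civilisation lasts, $\ell$ is an average lifespan, and $\bar{P}_{\text{future}}$ and $P_{\text{today}}$ are the average future population and today’s population. With $T_{\text{future}} \approx 700{,}000$ years, $\ell \approx 100$ years, and a future population no larger than today’s, the ratio comes out in the thousands; stretching $T_{\text{future}}$ to hundreds of millions or hundreds of trillions of years pushes it to millions or trillions, the range quoted above.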

Will MacAskill: Imagine you’re hiking on a trail and you drop some glass. Suppose you know that in 100 years’ time, someone will cut themselves on that glass. Is the fact that the person who will be harmed lives in 100 years’ time any reason at all not to take the time to clean up after yourself?

Will MacAskill: Maybe they haven’t even been born yet. And it seems like the answer is no. Or suppose you could prevent a genocide that will kill 100,000 people in 1,000 years versus 10,000 years versus 100,000 years: does it make any difference when those lives will be lived? Again, intuitively it just seems not. Harm is harm wherever it occurs. And in that way, distance in time is quite like distance in space. The fact that someone will suffer is bad in and of itself, even if they live on the other side of the world. The fact that someone will suffer is bad in and of itself, even if they will live 10,000 years from now. So I think when we reflect on thought experiments like this, we see that, yeah, we want to give a lot of moral weight to future people.

Will MacAskill: And there are many other areas where we know we’ll have a long-term impact: disposal of radioactive nuclear waste, for example. It’s just perfectly common sense that, given this waste will be radioactive to some extent for hundreds of thousands of years, we should think a little bit about how we ensure that we’re not harming people in the far future. That just seems really pretty common sense. So I think there is a strong element of common sense here, at least in the idea that future people matter morally.

What's distinctive about longtermism

Will MacAskill: I think the thing that’s most distinctive is what we mean by “long term” — the sheer scale of how we’re thinking about things. People criticise current political thought for being short-termist, or companies for being short-termist, and what they mean is, “Oh, companies are focused on the next quarter’s profits. Our political cycles are focused on the next election. And they should be thinking further out, on the order of years or decades.”

Will MacAskill: But I think we should not be so myopic. In fact, we should take seriously the whole possible scale of the future of humanity. So one thing that we’ve just learned in the last 100 years of science is that there is a truly vast future in front of us. And it feels odd, possibly even grandiose, to start thinking about the things that could happen in our lifetime that could have an impact — not just over decades, but over centuries, thousands, millions, or even billions of years. But if we’re taking seriously that future people matter morally, and that it really doesn’t matter when harms or benefits occur, then we really should take seriously this question of whether there could be events that occur in our lifetimes that have not just long-lasting, but indefinitely persistent effects.

What longtermists are actually doing

Will MacAskill: One focus area is pandemic prevention, in particular prevention of worst-case pandemics. You might think this is a very trendy bandwagon to be hopping on, but we have been concerned about this for many, many years. 80,000 Hours started recommending this as a career area in 2014, I believe. And why are we so concerned about this? Well, developments in synthetic biology could enable the creation of pathogens with unprecedented destructive power, such that the COVID-19 pandemic — while killing tens of millions of people, wreaking trillions of dollars of damage, and being an enormous tragedy — would look kind of small scale by comparison. In fact, at the limit we could create pathogens that could kill literally everyone on the planet. And if the human race goes extinct, that’s a persistent effect. We’re not coming back from that.

Will MacAskill: So what are we doing? Well, there are various possible options. One thing we’re doing is investing in technology that can be used to prevent worst-case pandemics. Something I’m particularly excited about at the moment is far-UVC. This is quite a narrow band of comparatively high-energy light, and the hope is that it basically sterilises the rooms it shines on. Because it’s a physical means of sterilising surfaces and even air, if it really works, it’s protective against a very wide array of pathogens. It’s not necessarily something that clever and ill-intentioned biologists could get around. And if this were implemented as a standard in all light bulbs around the world, then potentially we could just actually be protected against all pandemics, ever, as well as all respiratory diseases, just as a bonus. So this is very early stage, and we’re going to fund it a lot, but it’s potentially at least extremely exciting.

Will MacAskill: Another example within pandemic preparedness would be early detection. At the moment we do very little screening for novel pathogens, but you could imagine a programme that is constantly screening wastewater all around the world, for example, just to see: is there anything in these samples that we don’t know about that looks like a new virus or a new bacterial infection? It could then ring the alarm bell, meaning we could respond much, much faster to a new pandemic outbreak.

Will MacAskill: The other big focus by far is on artificial intelligence. So the development of artificial intelligence that’s not just narrow — we already use AI all the time when we’re using Google search, let’s say — but AI that is more advanced, more able to do a very wide array of tasks, and able to act basically like an agent, like an actor in the world in the way that humans do. There are good arguments for thinking that could be among the most important inventions ever, and really pivotal for the long-run trajectory of civilisation.

Will MacAskill: And that’s kind of for two reasons. One is that technological progress could go much, much faster. At the moment, why does technological progress go at the pace it does? Well, there’s just only so much human research labour that can go into it, and that takes time. What if we automate that? What if AI is now doing R&D? Economic models produce the result that technological progress could go much, much faster. So in our lifetimes, perhaps it’s actually the equivalent of thousands of years of technological advancement that happens.

Rob Wiblin: I suppose it could be thousands of years, but even if it was just decades occurring in a single year, that would still be massive. And I guess we’ll be going into this process not knowing how much it’s going to speed things up, which is pretty unnerving.

Will MacAskill: Which is pretty unnerving, exactly. It could lead to enormous concentrations of power — perhaps with a single company, single country, or the AI systems themselves. And so this is the scenario kind of normally referred to as “The Terminator scenario.” Many researchers don’t like that. I think we should own it.
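
To make the R&D-automation point above a little more concrete, here is a minimal Python sketch of one stylised mechanism: a toy semi-endogenous growth model in which the technology level A grows as dA/dt = delta * R * A^phi. The functional form, the parameter values, and the assumption that AI research effort scales with A itself are illustrative choices, not a model discussed in the episode.

```python
# Toy comparison: how fast does the technology level A grow if research effort
# is a fixed pool of human researchers, versus research effort that scales with
# A itself (i.e. AI systems doing R&D)? All parameter values are arbitrary.

def years_to_reach(target, research_effort, a0=1.0, delta=0.02, phi=0.5,
                   dt=0.01, max_years=10_000):
    """Euler-integrate dA/dt = delta * R(A) * A**phi; return years until A >= target."""
    a, t = a0, 0.0
    while a < target and t < max_years:
        a += delta * research_effort(a) * a ** phi * dt
        t += dt
    return t

fixed = years_to_reach(100, lambda a: 1.0)  # constant human research effort
scaled = years_to_reach(100, lambda a: a)   # research effort grows with A

print(f"Years to reach 100x today's technology, fixed research effort: {fixed:,.0f}")
print(f"Years to reach 100x today's technology, A-scaled research effort: {scaled:,.0f}")
```

The only difference between the two runs is whether research effort is held fixed or grows with A; in the second case progress compounds on itself, which is the kind of feedback behind the “centuries of progress within a lifetime” scenarios Will and Rob discuss.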

How Will started to embrace longtermism on a deeper level

Will MacAskill: The first [step] was just starting to appreciate the scale of the future, and that really came pretty early on. I think that the future, in expectation, is very big indeed. “In expectation” is a bit of technical terminology that means once you take both the probabilities and the scale into account. Life expectancy works like this: if you’ve got a 10% chance of living 100 more years, that increases your life expectancy by 10 years. And in that sense, in expectation, the future is very big.

Will MacAskill: Yes, perhaps we’ll go extinct in the next few centuries, but if we don’t and we get to a kind of safe place, things could be really big indeed. As I said, hundreds of millions of years until the Earth is no longer habitable, hundreds of trillions of years until the last stars fade. So there’s the idea that there’s just this enormous amount of moral value there. In fact, whatever you care about, essentially — whether that’s joy, adventure, achievement, the natural environment — it’s almost all in the future. That idea I liked, and bought into fairly early on.
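
To spell out the “in expectation” arithmetic above as a worked equation (the notation and placeholder probabilities are ours for illustration, not Will’s):

$$\mathbb{E}[\text{future duration}] \;=\; \sum_i p_i \, T_i$$

Just as a 10% chance of living 100 more years adds $0.1 \times 100 = 10$ years to a life expectancy, even a modest probability $p_i$ on a scenario with an astronomically long duration $T_i$ (say, hundreds of trillions of years of interstellar civilisation) makes the expected size of the future enormous.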

Will MacAskill: And then the two things that really changed were, firstly, the philosophical arguments becoming more robust over time, I think. The issue of population ethics, for example: questions about whether it is a moral loss if we fail to create someone at a future date who would have a happy, flourishing life. That’s actually essentially irrelevant to the case for longtermism, because I think there are things that we can do that aren’t just about increasing the number of people in the future, but about how well or badly the future goes, given that the future is long. So the difference between a flourishing utopian future and one of just perpetual dictatorship.

Rob Wiblin: Yeah, OK. So it might affect where you focus, whether you focus on creating a future that has way more people in it versus not. But even if you weren’t bought in on there being more people — because there will probably be tonnes of people, or at least in expectation there’ll be lots of people in the future — then you’d still care about the long-term impacts, because you’d want to improve their quality of life and leave them in a better situation.

Will MacAskill: For sure, exactly. That’s one. Secondly, I think, getting more clarity on the arguments. Why economic growth, for example, is at least not directly a way of positively influencing the very long-term future: because at some point in time we will plateau. Perhaps you speed up a little bit the point at which we get as well off technologically as we ever will, but that’s something we will reach before too long in any case, certainly within thousands of years. That’s not something that’s affecting the really long-run trajectory of civilisation. Whereas the things that actually do affect it are perhaps a much narrower set of activities, such as AI and pandemic prevention.

Will MacAskill: But then the second category of things was less philosophical and more empirical, where the sorts of actions that one would recommend as a longtermist moved out of the category of, “Let’s sit and think more about this in a philosophy seminar” or something equivalent — where there’s just a real worry that’s like, “Are we really achieving anything here?” — and are now just very concrete. So within AI, there’s this huge boom in machine learning research. We are now able to experiment with models to see if they display deceptive behaviour. Can we make them more truthful? Can we make them less likely to cause harm? In the case of pandemics, we’ve learned a lot over the last few years. We have very concrete ways of making progress. So the actions that we now recommend have really moved out of something where it feels very brittle, where it feels like we could easily be fooling ourselves, and towards just, “Look, it’s just this list of things that we can really get traction on.”

Strongest arguments against longtermism

Will MacAskill: The one that I think really weighs on me most is just the following argument: look at our intellectual progress over the last few hundred years. There’s been a series of just enormous intellectual changes, what Nick Bostrom would call “crucial considerations,” that greatly change how we should think about our top priorities. Even the very early stages of probability theory were only developed in the 17th century, by Blaise Pascal. The idea that the future might be enormously large and the universe might be enormously big, yet unpopulated — that was only the early 20th century. The idea of AI actually has a pretty decent history. The first computer science pioneers, Alan Turing and I. J. Good, basically just understood the risks that AI posed.

Rob Wiblin: Seems like almost immediately.

Will MacAskill: Immediately, it was incredible. I mean, they were very smart people. They didn’t have the arguments really well worked out, and I think a lot of the intellectual contribution comes from really going deep. But you look at the quotes from Turing and I. J. Good in the ’50s and early ’60s, and it does seem like they were getting some of the key issues: the idea that artificial beings could be much smarter than us, and that there’s a question of what goals we give them. Also the idea that they’re immortal, because any AI system can just replicate itself indefinitely.

Rob Wiblin: And the positive feedback loop really jumped out at them. They’re like, “Oh, you make these smart machines. Then they can improve themselves better than we could and so it could take off.”

Will MacAskill: Exactly. That was I. J. Good; he stated it very cleanly. Pandemics as well: the first-ever piece of dystopian science fiction, Mary Shelley’s The Last Man, was published in the 19th century and was about a pandemic that killed everyone in the world. So actually, there is a good track record of some people being strikingly prophetic. Having said that, it was still only in the ’80s that population ethics really became a field. Nick Bostrom’s astronomical waste argument was in the 2000s. You know, nanotech was still one of the top causes in 2010.

Will MacAskill: And then I honestly feel like we’re learning a lot in terms of what the right ways to tackle priorities are. Even if the priorities aren’t changing, like the high risk of AI, we’re learning a lot about how best to tackle them. So I think we’re still learning a lot. In 100 years’ time, might there be very major crucial considerations, such that people would look back at us today and think, “Oh, they were really getting some major things wrong”? In the same way that we look back at people in the 19th century and say, “Oh wow, they really misunderstood.”

Is humanity likely to converge on doing the same thing regardless?

Rob Wiblin: You might think that if humanity survives for a really long time, then we’re going to just converge on the same values and the same activities no matter what, because we’ll just think about it a lot. And no matter your intellectual starting point, ultimately the right arguments are going to win out. What do you think of that idea?

Will MacAskill: I think it’s something we should be open to at least. But one way in which I differ from certain other people who work on longtermism is that I put a lot less weight on that. In that sense, I’m a lot less optimistic. I think it might be very hard for society to figure out what the correct thing to do is. It might require a very long or at least reasonably long period of no catastrophes happening — no race dynamics of people fighting over control of the future, a very cooperative approach. And in particular, people who just really care about getting the right moral views — and once they’re in positions of power, are then thinking, “OK, how do we figure out what’s correct? Understanding that probably I’m very badly wrong.” Rather than getting into a position of power and saying, “OK, great. Now I can implement my moral views.”

Rob Wiblin: “The stuff that I care about.” It’s not the historical norm.

Will MacAskill: Exactly. How many people have gotten to a position of power and then immediately hired lots of advisors, including moral philosophers and other people, to try and change their ideology? My guess is it’s zero. I certainly don’t know of an example. Whereas I know of lots of examples of people getting power and then immediately wanting to just lock in the ideology.

Rob Wiblin: Yeah. In building this case that what people decide to do in the future could be contingent on actions that we take now, I guess that the main argument that you bring is that it seems like the values that we hold today, and the things that we think today, are highly contingent on actions that people took in the past. And that’s maybe the best kind of evidence that can be brought to bear on this question. What are some examples of that sort of contingency that we can see in views that we hold and practices that we engage in today?

Will MacAskill: There are some that I think are pretty clearly contingent. Attitudes to diet and attitudes to animals vary a lot from country to country. Most of the world’s vegetarians live in India, and that goes back thousands of years, to Vedic teachings. It’s an interesting question: imagine if the Industrial Revolution had happened in India rather than in Britain. Would we be on this podcast talking about, “Whoa, imagine a possible world in which animals were just incarcerated and raised in the billions to provide food for people”? And you were saying, “Wow, no. That’s just way too crazy. That’s just not possible.” It seems like a pretty contingent fact that that’s the way our moral beliefs developed.

Rob Wiblin: Yeah. So that’s an interesting case, where we don’t know the identities of the people, potentially many different people, who contributed to the subcontinent — India and other nearby areas — taking this philosophical path where they were much more concerned about the wellbeing of animals than was the case in Europe, or indeed most other places. But we know that there must have been some people who made a difference, because evidently, it didn’t happen everywhere. It’s not something that everyone converges on.

Will MacAskill: Yeah.

Rob Wiblin: So it seems like almost necessarily there has to have been decisions that were made by some people there, and not elsewhere, that caused this path to be taken.

Will MacAskill: Yeah. Absolutely. Then when we look at other things too. I think the rise of monotheistic religions, certainly the rise of the Abrahamic religions, seems pretty contingent. There’s not really a strong argument, I think, one can give about why monotheism should have become so popular rather than polytheism.

Are things getting better rather than worse?

Will MacAskill: Yeah, I think two things. There’s the trajectory of the world, and my guess is that it is getting better, even after you factor in the enormous amount of suffering that humanity has brought about via factory farms. I think if you look at the underlying mechanisms, it’s more positive. In the long-term future, how many beings do you expect to be “agents” — as in, they’re making decisions and they’re reasoning and making plans — versus “patients,” which are beings that are conscious, but are not able to control their circumstances?

Will MacAskill: The trajectory to date is pretty good for agents. Certainly since the Industrial Revolution, I think human wellbeing has just been getting better and better. Then for animals, it’s quite unclear. There are far fewer wild animals than there were before, and whether that’s good or bad depends on your view on whether wild animals have lives that are good or bad. There are many more animals in factory farms, and their lives are decisively worse. However, when you extrapolate these trends into the distant future, I think it’s unlikely that you’d expect there to be very large numbers of moral patients, rather than the future mainly consisting of moral agents who can control their circumstances.

Rob Wiblin: Yeah. It’s interesting. On this model, until there were humans, basically everyone was a moral patient, in the sense that wild animals, with some conceivable partial exceptions, had neither the intellectual capacity nor the practical knowhow to control the situation they’re in in order to make their lives much better. The fraction of all conscious experiences being had by agents has been going from zero gradually upwards. And so long as we don’t allow there to be patients (by effectively prohibiting future forms of slavery), then we might expect that fraction to reach 100%, and for most of those lives to be reasonably good.

Will MacAskill: Yeah. Absolutely. And it makes sense that their lives are reasonably good, because they want their lives to be good and they’re able to change them. Then that relates to the kind of final argument — which is the more “sitting in an armchair thinking about things” argument, but which I think is very powerful. This is just that, as we said earlier, some agents systematically pursue things that are good because they’re good. Very, very few agents systematically pursue things that are bad because they are bad. Lots of people do things because they’re self-interested and that has negative side effects. Sometimes people can be confused and think what they’re doing is good, but actually it’s bad.

Will MacAskill: I think there’s this strong asymmetry. If you imagine the very best possible future, how likely is that? What’s happened? I can tell a story, which is like, “Hey, we managed to sort out these risks. We had this long deliberative process. People were able to figure things out and people just went and tried to produce the best world.” That’s not so crazy. But then if I tell you the opposite of that, it’s like, “We have the worst possible world. Everyone got together and decided to make the worst possible world.” It seems very hard indeed.

Will MacAskill: It’s very plausible to me that we squander almost all of the value we could achieve. It’s totally on the table that we could really bring about a flourishing near-best society. And then it seems much, much less plausible that we bring about the truly worst society. That’s why I think the value of the trajectory of the future skews positive.

