#6 – Toby Ord on why the long-term future of humanity matters more than anything else, and what we should do about it
Of all the people whose well-being we should care about, only a small fraction are alive today. The rest are members of future generations who are yet to exist. Whether they’ll be born into a world that is flourishing or disintegrating – and indeed, whether they will ever be born at all – is in large part up to us. As such, the welfare of future generations should be our number one moral concern.
This conclusion holds true regardless of whether your moral framework is based on common sense, consequences, rules of ethical conduct, cooperating with others, virtuousness, keeping options open – or just a sense of wonder about the universe we find ourselves in.
That’s the view of Dr Toby Ord, a philosophy Fellow at the University of Oxford and co-founder of the effective altruism community. In this episode of the 80,000 Hours podcast Dr Ord makes the case that aiming for a positive long-term future is likely the best way to improve the world.
We then discuss common objections to long-termism, such as the idea that benefits to future generations are less valuable than those to people alive now, or that we can’t meaningfully benefit future generations beyond taking the usual steps to improve the present.
Later the conversation turns to how individuals can and have changed the course of history, what could go wrong and why, and whether plans to colonise Mars would actually put humanity in a safer position than it is today.
This episode goes deep into one of the most distinctive features of 80,000 Hours’ advice on doing good. It’s likely the most in-depth discussion of how we and the effective altruism community think about the long-term future, and why we so often give it top priority.
Articles, books, and other media discussed in the show
- Why despite global progress, humanity is probably facing its most dangerous time ever
- If you want to do good, here’s why future generations should be your focus
- How can we actually predict the long-term consequences of our actions: Making sense of long-term indirect effects – Robert Wiblin, EA Global 2016
- Our profile on positively shaping the development of artificial intelligence
- Dr Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future
- List of Dr Toby Ord’s research
- Should we discount future health benefits when considering cost-effectiveness? by Dr Toby Ord and Robert Wiblin
- Our research into the world’s most pressing problems
- How to choose a problem to work on
- Existential Risk website and website of long-term future research leader Prof Nick Bostrom
Other 80,000 Hours articles relating to the future and long-term thinking
- Where are the aliens? Three new resolutions to the Fermi Paradox. And how we could easily colonise the whole universe.
- Oxford University’s Dr Anders Sandberg on if dictators could live forever, the annual risk of nuclear war, solar flares, and more.
- Podcast: You want to do as much good as possible and have billions of dollars. What do you do?
- Guide to careers in artificial intelligence policy and strategy
- Career review: working at effective altruist organisations (pros, cons, comparing alternatives, and how to get hired)
- Our computers are fundamentally insecure. Here’s why that could lead to global catastrophe
- The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.
- Prof Yew-Kwang Ng is a visionary economist who anticipated many key ideas in effective altruism decades ago. Here’s his take on ethics and how to create a much happier world.
Transcript
Hey podcast listeners, this is Rob Wiblin, Director of Research at 80,000 Hours. Today’s episode is one my colleagues have been hammering away at me to make for months.
It dives into the aspect of 80,000 Hours’ ideas that is most often misunderstood. Without the ideas in this episode, much of the rest of our advice will make little sense. Outside of academic papers, this is now the most in-depth discussion of the ideas effective altruists use to think about the long-term future.
It presents a complex perspective that’s hard to summarise briefly, and works through the bad objections that people instinctively raise.
The general idea is that the impacts we have on future generations are extremely important, and the main thing we should focus on if we want to greatly improve the world.
We discuss the justification and implications of this, across population ethics, economics and government.
In the show notes I’ve also linked to a YouTube video where I expand on one aspect of the discussion – how we can actually predict the long-term consequences of our actions.
If you’re motivated to work on improving the long-term prospects for humanity by helping to solve the problems we discuss, then there’s a link in the show notes or blog post about the episode which you can use to apply for personalised coaching from us.
Rather than sitting in front of your computer for two hours listening to this, I strongly recommend subscribing to the show on whatever podcasting app you use by searching for ‘80,000 Hours’. That way you can speed us up, so long as you can think faster than we can talk.
Finally if you enjoy the show please do share it on Facebook and let your friends know to subscribe!
And now I bring you, Toby Ord.
—
Today I’m speaking with Toby Ord. Toby is a moral philosopher at Oxford University. His work focuses on the big-picture questions facing humanity, like what are the most important issues of our time, and how can we best address them?
Toby’s earlier work explored the ethics of global health and global poverty, and this led him to create an international society called Giving What We Can whose members have pledged over $1.4 billion to the most effective charities, hoping to improve the world. He also co-founded the wider Effective Altruism movement, of which 80,000 Hours is a part, encouraging thousands of people to use reason and evidence to help others as much as possible.
Toby has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, the Cabinet Office, and the Government Office for Science, and his work has been featured more than 100 times in the national and international media.
I should add that Toby is a trustee of the charity that I work at, 80,000 Hours, but any sycophancy here is entirely sincere, I assure you.
His current research is on avoiding the threat of human extinction and thus safeguarding a positive future for humanity, so thanks for coming on the podcast, Toby.
Toby Ord: Thank you, Rob. It’s great to be here.
Robert Wiblin: You’re actually working on a book about preserving the future of humanity at the moment, right?
Toby Ord: Yeah, that’s right. I think it’s one of the most important topics of our time, and it’s something that I’ve worked on for quite a few years at the Future of Humanity Institute at Oxford. We realised that it’s missing a book to really tell the story.
Robert Wiblin: Maybe just lay out the broad arguments that you’re making in the book. Why is the long-term future of humanity such a big deal, and perhaps the most important issue for us to be thinking about?
Toby Ord: Okay. Allow me to start by recapping human history so far. I think that’s a good way of letting us see where we’ve got to. About 200,000 years ago, we had the rise of humanity, our species, Homo sapiens. We’re not that remarkable on our own, but we had enough ability to learn, and to share knowledge with a few other humans and small groups, and to cooperate with them. This led to the slow but steady accumulation of knowledge over time.
If you jump forward about 190,000 years — 95% of the way to the present — there was a very major transition, which was the Agricultural Revolution. By moving from foraging to farming, we were able to get enough food to have enough people in one place that we could develop cities and writing, and these two technologies let us gather enough people in one location, on the order of a million people, to enable specialisation and cooperation on a grand scale. Writing enabled us to share information much better between generations, and so the number of people cooperating, instead of being about 100 people, was millions of people over dozens of generations. This really meant that we had civilisation, and things really took off with mathematics, and law, and money, and metalworking, and lots of other things.
Then if we go forward quite a long way again, another big breakthrough was the Scientific Revolution. This was a huge change in our ability to understand the world. It led to a massive rise in technology which continues to this day, and also to this idea of revolutionary progress, where people are aware that within their own lifetime, the very foundations of how the world works may change dramatically.
Another 100 years later, so 300 years ago, this blossomed into the Enlightenment, which was when we applied these ideas of reason and evidence to the social and political world, as well as to the natural world. This led to these massive improvements in political systems, and great improvements in the well-being of people who were under these more liberal systems.
Then about 200 years ago, we had the Industrial Revolution, where we used coal, which enabled us to capture the power of millions of years of condensed sunlight, leading to a huge increase in our energy supply, and to automation. This gave us a huge increase in income, that for the first time in human history outstripped population growth, so we grew richer per capita, and for the first time in history, this meant that prosperity spread beyond a small elite.
Okay, so that’s a lot of things that we will have read about in the history books, but I think also that there’s an era of human history that’s even more important than any of these famous transitions, and that is something that started about 70 years ago. I think the key to that was that this increasing power, created by cities, and the scientific revolution, and the Industrial Revolution, this massive increase in technological power, meant that we finally were so capable that we had the ability to wield destructive forces that could destroy humanity itself.
This came with the rise of nuclear weapons in the 20th century, and it continues now. In the 21st century, we’re seeing the rise of synthetic biology, artificial intelligence, maybe nanotechnology, creating ever-newer technologies which could pose threats to humanity itself. This is a time where, unless we get our act together as a species, there are only so many of these centuries that we’re going to be able to survive. There’s this precipitous rise in the risk that we’re undergoing, threatening the very thread of humanity itself. We hopefully will see a whole lot more of these eras in the future, and a whole lot of great additional improvements in the lot of humanity, but unless we get this time right, then we’re not going to see any of that at all. It’ll be the end of our story.
Robert Wiblin: There’s a sense in which things have gotten a lot better over the last few hundred years. Obviously people are richer and they’re living longer, but there’s also a sense in which humanity’s situation has become more precarious, because 200 years ago, there was no way that the stupid action of a single person could drive humanity into the ground really quickly. Today, there are just a few people who have the authority to launch nuclear weapons, and if they caused an all-out nuclear war, it would basically drive humanity back into the Stone Age very quickly, within a day.
Toby Ord: That’s exactly right, and it might drive us even further than that. It might be the end of humanity itself.
Robert Wiblin: We will consider some philosophical and practical objections to thinking about the long run future, later on, but maybe you just want to lay out how bad it could be, if humanity went extinct. How much could we potentially be losing?
Toby Ord: Sure. I think that the key way to think about this is really in terms of how good it would be to have the continued flourishing of humanity, and then to see extinction being bad by denying that to us. I think it is pretty intuitive that if there’s really a substantial chance of humanity’s future being extinguished in the coming few centuries, a lot of people would accept that that is probably one of the most pressing issues that we face. I doubt that you would see key geopolitical figures, such as Angela Merkel saying “Oh, you know, the fact there’s a 50% chance that humanity will be destroyed during my tenure in this job is kind of irrelevant, because there’d be no one around for it to be bad for.” It helps to see how absurd that is.
Similarly, when we think of a catastrophe killing thousands of people, we see clearly that that’s terrible, and it’s even worse when it kills millions or billions of people, and also that generally, killing more people can’t make it less bad. It’s not that once it kills another billion people it becomes good, or okay. That’s very implausible.
So that’s just a few points about the intuitiveness of this. Here are a few different arguments, as to where the value is in preserving humanity. I think that the strongest argument is that it’s in the well-being of the future generations. All of the hopes and dreams that they have in their lives, the great experiences they have, the things they do, that there’s so much more of human history that may be yet to come. If you think about this, there have only been about 10,000 years of human civilisation, this time period where we’ve had these kinds of advances in art and culture and so forth, where our lifespans have gone up so much, and our quality of life and health is so high. Violence has gone down, and we’re free from illness.
Our species itself is 200,000 years old, so there’s every reason to expect that if we don’t cause our own extinction, we could naturally survive for another 2,000 centuries (200 times longer than civilisation so far), or potentially much longer. The Earth looks like it will remain habitable. It’s been inhabited for about three billion years, possibly longer, and it looks like it will remain habitable for another 500 million years to a billion years, so there may well be potentially a billion years of human history to come, which is vastly longer than what we’ve had so far. Even if there was only a small chance of that, or even if our future was only going to be as long as our past, there would still be so much more of it to come than we’ve had in civilisation so far. That’s one argument: it’s about all of these future people who could exist, and all of the great lives they would have.
You might think that our happiness isn’t what matters. What matters are the great achievements that humanity reaches — these great moments of art and culture that we’ve had in the past. Again, there’s every reason to think that if we’ve only had a small fraction so far of the human civilisation that we could have, that most of these great achievements are actually going to be in the future. We’ll achieve even greater things than the great works of music, and art, and science that we’ve had so far.
A third approach is to think in terms of a partnership across the generations. There was a politician and political theorist, Edmund Burke, who had this great analogy, thinking about society as a partnership across the generations, where the types of things that we do are too big for any generation to do on their own, but what we do with a country, or with our global civilisation, is that we build up institutions and knowledge, build up these norms for which to live by, and our art and culture.
Each generation does their part, improves it, and hands this down to the next generation, who continues this grand project across the generations. We could be the first generation to ruin that, and just to destroy this legacy which we’ve been handed down, and to have it go nowhere from here. That’s another angle on this, a deontological angle on which failing to preserve humanity, and in particular, actively destroying humanity would make us pretty clearly the worst generation that had ever lived.
Robert Wiblin: Make us really uncooperative jerks in the scheme of things.
Toby Ord: Indeed! And so that’s another argument for the importance of this, and also for why this would be a pivotal time, among the most important times to live, and why failing here would mean shirking the most important of all responsibilities.
Another approach is to think about virtue, and virtue ethics. You could think of this at the individual level: trying to safeguard human civilisation shows a real imaginative compassion, a fairness between generations, and also a certain kind of generalisation of love. If you think about the type of love that a parent has for their child, or the kind sometimes called procreative love, of bringing into existence a new being and creating a wonderful life for them, that’s one way you could think about this yearning to create a great and glorious future ahead of us.
In terms of virtue, I think that there’s an even stronger argument in terms of what I call civilisational virtues. If you think of humanity itself, or human civilisation as a group agent, and you think about what qualities would be its virtues, you can see that we’re really lacking in wisdom at the moment. We have some, and we perhaps have more than we did a while ago, but it’s growing much more slowly than our power. We’ve had this problem of having rapid increases in power driven by this amazing exponential increase in technology, which has outstripped our growth of wisdom and coordination. We could instead try to really push for more wisdom as a civilisation. In particular, we could be less reckless, and think about the virtue of prudence at the civilisation level. If it really is the case that we could have a billion years of civilisation, and that we risk that for a century of improving our own lot, then that’s equivalent, at the human level time scale, to risking your entire life because of what’s happening in this minute of your life. Also, the virtue of patience means thinking about this really long future that we could have, and trying to work out how to really achieve things over the stretch of time, instead of being very impatient.
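To make that analogy concrete, here is a rough arithmetic check. The figures (a billion-year potential future, a century of risk-taking, an 80-year lifespan) are illustrative assumptions, not numbers from the episode.

```python
# Rough check of the "minute of your life" analogy (assumed figures, not from the episode).
potential_future_years = 1_000_000_000   # a billion years of potentially habitable future
years_at_risk = 100                      # roughly a century of risk-taking
print(potential_future_years / years_at_risk)   # 10,000,000

lifespan_years = 80
lifespan_minutes = lifespan_years * 365.25 * 24 * 60
print(round(lifespan_minutes))                  # ~42,000,000 minutes in an 80-year life

# The two ratios are within a factor of a few of each other, so risking a
# billion-year future over one century is roughly like risking an entire
# lifetime over a few minutes of it.
```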
Finally, people have suggested that another reason involves our cosmic significance. For all we know, we’re alone in the universe and the life here on Earth may well be the only life. Even if it isn’t, humanity quite possibly is the only intelligent life in the universe. It’s very possible that we might be the most amazing and rare part of the whole universe, the only part of the universe capable of understanding the universe itself and appreciating its wonders. In which case it might be even more important that we don’t squander this, and destroy this one most special part of all of creation.
Robert Wiblin: We’ve got an argument there from the positive consequences that the future could have, arguments about how being foolhardy, and taking enormous risks with human civilisation violates the rules of decent conduct of how generations ought to treat one another, an argument from it being a virtuous thing to do: to think about the long-term consequences of your actions, and care about future generations, and nurturing them. Also perhaps a more spiritual argument, in a way, about the potential cosmic significance of complicated life, and intelligence, and not wanting to squander that without really having thought of whether it would be the right thing to do.
Toby Ord: That’s right, and I think some people who’ve questioned it were pretty much just thinking about the well-being of the future generations, and may not have realised that there’s actually reasons from many of the different traditions of ethical thinking, which converge to say that the destruction of humanity would be one of the very worst things that could happen.
Robert Wiblin: Another line of argument I’ve heard is just that even if you didn’t care intrinsically about the welfare of future generations, the current generation does just care a lot that their actions today have a long-term meaning behind them, and you can think about this. Imagine that you found out that everyone was going to die in the year 2040. Then so much of the significance of life is completely stripped away, because an enormous amount of what we do is about trying to build up future generations, and improve the long-term trajectory of civilisation. A lot of the research we’re doing, having children, building buildings, trying to produce great works of art: if you know that it’s all just going to come to an end in 20 or 30 years’ time, then the whole point of life is much reduced.
Toby Ord: Yeah, and one can take that even further if you think about this aspect: that we’re currently in a very special time by the standards of human history, when our actions could destroy our world, or at least it’s very plausible that they could. In this very special time, not only is it the case that, as you say, there might be less meaning in our lives if we knew that humanity wasn’t going to continue, but I think this fills our time with a particular meaning: what we have to do is to stop this happening, to navigate this period, and to get out the other side, to a world that has really got its act together over these issues, and is no longer under the threat that a single person, say the President of the United States of America, could launch a massive nuclear attack or do something else which might lead to the destruction of everything that we value.
Robert Wiblin: You’ve pointed out that basically from every mainstream philosophical position, it would be a tragedy, perhaps the greatest tragedy, if we drove ourselves extinct for no good reason. Maybe let’s just dive in on the consequences argument, and look at the case that’s sometimes made that the long-term future is just overwhelming. You’ve pointed out that even if humans just stayed with their present population on Earth for thousands of years, then that would make this issue of ensuring civilisation or continuity very important, but how far can you push the argument?
Toby Ord: That’s a good question. I’m not sure, so one way that I think about it is that a lot of the value of the future comes from duration, that the future could be very much longer than the present, just as the past has been very much longer than the present time, or the present generation. That gives a very large multiplier in terms of how much good we can do by helping to preserve that very long period of time. There’s also the aspect that the scale might be very much larger. It might be that instead of just one planet, that we’ve managed to spread to other planets, and spread perhaps through the 100 billion stars of our galaxy, or even through billions of galaxies around us. It could be that the type of thing that we’re giving up is much more than this pale blue dot, and the civilisation as we know it, we might be giving up something much, much larger. These things suggest that the value could be much bigger than this.
However, I think it’s tricky here. You’ve got to be careful with these conclusions. I think some of the early thought on this was a bit rash. Nick Beckstead actually clarifies a lot of this. He has a very good PhD thesis on the topic. One of the points that he makes is that this argument about extinction suggests that it’s a much bigger deal than the very immediate effects of other actions that we could take — that’s fairly clear. However, if we do something else — suppose we save an individual’s life — if we look at that just over a short period of time, it’s much smaller than saving the life of humanity itself. But it will have effects that go on for a long time as well. A simple model of that could be that actually there’s an extra person in every generation for all time, or even that the population is one seven-billionth larger in every generation for all time. That would’ve been a reasonable model so far with the exponential growth of human population. And then if you actually look at the long-term effects of other actions, they could have very big effects over the whole future.
It’s not actually that clear which is bigger, so the key there is to think that these aspects, which say the future may be a lot bigger, apply to everything when you’re thinking about the very long-term effects of your actions. What they really say is that if you are a longtermist, and you don’t think that people matter different amounts just based on when they live, then it follows that the long-term consequences of actions are much more important than you would’ve otherwise thought. If these are systematic long-term consequences, we can predict something about the value of them. Then they may well actually be the best way of judging your actions rather than based on the short-term, immediate consequences. A very clear one that could be very important is preserving humanity itself, but perhaps there are other ways of having very good long-term impact based on path dependence but short of saving most of the value in civilisation — maybe you just make it a hundredth better for all time to come. Nick Beckstead calls these trajectory changes.
I think there is still an open question about whether there are techniques like this of affecting the long term, which could rival trying to avoid extinction.
Robert Wiblin: There’s an interesting thing that goes on when you make the more modest argument that it would be good to ensure we don’t go extinct, so that your children and your grandchildren and your great-grandchildren can continue to live and have good lives. People are inclined to go along with that. But then you make a seemingly much stronger argument — that it would be good if your children, your grandchildren, and your great-grandchildren, plus another 100,000 generations, and perhaps post-human descendants once we’ve changed humanity such that we’re no longer immediately recognisable, could continue to live and have good lives. Even though that is a strictly stronger argument, in a sense, because it includes all of the things that you cared about before, plus additional things that might be even more valuable, people are more skeptical about it, tend to push back, and feel like they’re getting tricked here somehow. Do you have any thoughts on that?
Toby Ord: Yeah. I think one aspect is that it just starts to get very abstract, and often when things are more abstract and less concrete, we start to say some pretty strange things. Maybe we use it more as an opportunity to say something clever-sounding, or to express some of our values rather than to actually just push for the things that are clearly of value. That’s my best guess on that.
Robert Wiblin: Yeah, so that’s your explanation for why someone who really cares about ensuring that the healthcare system saves a lot of lives might at the pub say, “Ah, who cares if we all die?” Are they fully thinking this through? Maybe we should just abolish the national healthcare service, because who really cares one way or the other?
Toby Ord: Yeah. Exactly. I think it’s getting too abstract for them, and also those conversations at the pub normally don’t really matter. It’s not that someone’s going to implement the thing that they said. If they were told, “Someone is going to implement the thing that you say,” they’d probably be quite a lot more careful before saying things like that.
Robert Wiblin: Yeah. If they were told, “Oh, well, that’s an interesting argument you made. We’re going to remove all of the safeguards on the use of nuclear weapons now, cause it just doesn’t really matter whether they’re used or not, whether humanity goes extinct,” maybe they would want to do some further research before really committing to that.
Toby Ord: I think so, and it is interesting to trace these consequences, and try to see where the inconsistencies are between the different things people say, and then try to see how they can make them consistent. I think that this is actually a lot of what moral philosophy is about. A lot of people who don’t study this do wonder what the hell it is that we do, but a lot of it is trying to think about our intuitions, and to notice when they’re in conflict with each other, then to try to work out the best ways of resolving those conflicts.
Another interesting example is climate change: this is going to have a lot of bad effects for many generations, and some people suggest that it may even be so bad as to perhaps be able to cause human extinction itself. Every time I’ve heard that brought up, people I think correctly say, “Wow. That’s even worse than we thought.”
They don’t say, “Oh, well, I guess to the extent to which it could cause human extinction, it’s actually better than I thought, because that wouldn’t be bad at all if it happens.”
Robert Wiblin: I think it would certainly raise eyebrows, as you said, if you ran on that as a political platform.
Toby Ord: Exactly. Outside of the pub, no one would say that. I think it’s a kind of … Yeah.
Robert Wiblin: There’s a certain glibness to it.
Toby Ord: Exactly.
Robert Wiblin: We’ll move on to objections to this view in a minute, but are there any other reasons to prioritise thinking about the long-term future of humanity that you wanted to highlight first?
Toby Ord: Sure. I think it’s interesting to just see where all of this comes from. The way I see it is that all of this is an immediate consequence of what I call longtermism, which is just this very natural view that people matter just as much, no matter when they exist. I think most people would agree with that as a principle, if stated clearly like that.
I don’t think that people reflect much on what would follow from it, and I think that what’s surprising is that there are ways of affecting the well-being of people who might live millions of years hence. This comes from the fact that there are ways of locking in a terrible outcome, that the people of the future can’t do anything to recover from. You can see that if it wasn’t locked in like that, then there would be this question about, well, how is it that I’m more capable of helping someone in a million years’ time than I am of helping someone now?
You can see that in a case like extinction, where there’s no people left to be able to recover from extinction, that that would be a case where we — before the event — are uniquely privileged to be able to do something about it. We’re in this position of very high leverage. And that is a way in which it’s also fairly clear that no improvements in technology in the intervening time period would get the other people out of the mess that they’re in, because there won’t be any people to invent these technologies — there will be no way to come back from it. It’s a fairly clear case of why it could be that our actions now could affect the lives — whether there are lives at all — of people over millions of years. Then combined with this longtermism, it really has this very dramatic effect. That’s true whether you think about the well-being of these lives, or whether you think about the achievements within them.
There are also other cases beyond extinction, for example, a perpetual global totalitarianism, where some stable system of ensuring dictatorial rule is set up in such a way that people within the regime can never break out. If people want to see a good example of that, Orwell’s 1984 is basically set up exactly to do that, building a very detailed and very stable system where it is quite unclear about whether humanity could break out of it over many thousands of years, and so these cases —
Robert Wiblin: That situation kind of exists in North Korea today.
Toby Ord: Yeah. That’s a good point. North Korea may be like that. I think it’s not as clear whether it would remain stable if the leaders were to die. People were seriously worried about this reaching a global level last century with the rise of Nazism, where they attempted to create a perpetual global totalitarianism. People were trying for this. North Korea is probably less stable to threats from without, but if such a regime reached a global level, then it could be stable.
There are other, more benign-sounding ways in which we could reach that. Imagine a civilisation where we think about ethics, and we get very excited by some ethical system which actually misses most of the value that we could have. I think everyone probably has their own ideas about that, their own favourite examples of systems that they think miss out on most of what really matters, whether that’s because those systems perhaps just count pleasure (and don’t count other deeper things in life), or because they’re systems that just want everyday life as it is today to continue, rather than things which could be much better.
If we prematurely converged to a really problematic ethical system, which actually misses out on most of the value that we could’ve created, then we may even enthusiastically teach this system to our children via their moral education, and extol its virtues, and be very worried that someone will create a different system. We might deliberately lock ourselves into a set of values which stop us producing most of the value we could’ve created.
There are various other forms of lock-in where something like our values gets locked in, or a particular form of political institution that’s oppressive gets locked in. Maybe that just represents a very small minority’s values. You can even have cases where no one’s values are being supported. It’s just certain kinds of horrible institutional equilibria, where no one can really do anything unilaterally to break out of it.
There’s a whole lot of different ways in which we could have this lock-in, and irrevocably lose almost all of the value that we could add. These are called existential risks. They include extinction, and all of these other types of cases that are logically similar in that there’s this irrevocable loss of almost all the value that we could’ve had.
Robert Wiblin: You use the term ‘longtermism’. Is that the name now for this school of thought?
Toby Ord: That’s the word I’m using. I haven’t heard much detailed conversation around this connection to existential risk, but I think it’s very useful to actually just have a name for this wider set of ideas: that what we really want to be thinking about is the long term. Some people in the effective altruism community have played around with terms like thinking about the ‘far future’, or the ‘distant future’, and this captures one aspect very well, which is that we’re not merely interested in the next 100 years, or 200 years — that it could be a very long time period — which really gives power to this argument. However, it also conjures up this idea that we’re just concerned with what people do at that time, and not at any of the intervening times. So it seems as if we’re preoccupied with what some possibly small group of people do a million years hence, whereas I think thinking about the long term takes a lot of the mystery out of it, and gets more to the root of what we’re talking about.
We’re just thinking about every generation equally, rather than having this connotation that we’re thinking about generations that are distant rather than generations that are nearby. We’re just actually impartial between them.
Robert Wiblin: I’ve heard other terms like preservationism, and I suppose you could also … We could’ve called it conservationism, except that that has a clear environmental meaning now.
Toby Ord: That’s right. I think there’s still interesting questions about what’s the best terminology for some of this — I totally agree. I also think that there’s some very interesting connections between this very simple, basic idea about safeguarding civilisation and avoiding existential risk, compared to environmentalism. Environmentalism is fundamentally about avoiding the irrevocable destruction of ecosystems, and avoiding the irrevocable loss of species. These are two of the key ideas of environmentalism, and very similar regarding this irrevocable loss. The fact that it would happen for all time, is this very deep aspect of it. It’s also to do with a kind of lock-in. It’s something that you lose, and that you can see why it is that you can’t get it back. It is very similar in that regard. It’s just that what we’re talking about here is something that’s much bigger than that.
I also think that conservatism, as a political philosophy, actually has some similarities as well. It’s partly based on this idea that there’s something very special about this society, and that if we were too radical, too progressive, and we changed too quickly, maybe we’d lose some of these aspects, these norms and institutions that actually enable us to function very well and have these great aspects of our society. It could be an irrevocable loss of what really made things work, and maybe we need to go somewhat more slowly and carefully when we’re changing things. I think that these ideas are very attractive to people, and they make sense: there is something to be said for prudence in these areas. In fact, when I think of conservatism like that, I think it’s a much more reasonable view than I do when I hear conservatives talking about it…
Robert Wiblin: The current generation, some people I guess count as longtermists, and some people don’t. I guess if you’re being negative about that, you might think that the current generation has become a bit narcissistic, and is extremely focused just on its own welfare, and perhaps isn’t thinking enough about what’s going to come of our children, and our grandchildren. What do we know, though, about the views of previous generations? Was this view how most people thought about civilisation, and the future and the past? Were they more concerned than we are with ensuring that things kept on going?
Toby Ord: That’s a good question. I think that for most of human history, they weren’t, but ultimately the natural risks that they faced were relatively small. We can see that natural risks to humanity must be fairly small, because humanity has survived about 2,000 centuries so far, which means that the risk can’t really be that much higher than about one in 2,000 per century, or you have to explain why we coincidentally managed to get through all of those centuries. You can make that argument tighter, and I’ve written a paper on this. You end up with the idea that for natural risks, the mean time between extinction events is something between about 200,000 years all the way up to about 100 million years, but it really can’t be in the area of 1,000 years, or even 10,000 years. It must be the case that we could expect to survive many, many more centuries of natural risks, even if we didn’t do anything about them.
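A minimal sketch of that survival argument: the 2,000-century figure is from the conversation, while the candidate risk levels below are illustrative.

```python
# Sketch of the argument above: if natural extinction risk were p per century,
# the chance of surviving the ~2,000 centuries of our species' history so far
# would be (1 - p) ** 2000. (Candidate values of p are illustrative.)
centuries_survived = 2_000

for denominator in [100, 1_000, 2_000, 10_000]:
    p = 1 / denominator
    survival = (1 - p) ** centuries_survived
    print(f"risk of 1 in {denominator:,} per century -> "
          f"chance of surviving {centuries_survived:,} centuries: {survival:.3g}")

# A 1-in-100 per-century risk makes our track record astronomically unlikely
# (~2e-9), while risks around 1-in-2,000 or lower are consistent with it.
```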
In contrast, it’s only since the development of nuclear weapons that we’ve really had enough power as a species to be able to destroy ourselves, ushering in these new anthropogenic risks, which I think are at much higher levels. This is the reason why common-sense morality (the kinds of ideas about morality that we heard growing up, and that our parents and our grandparents instantiate) doesn’t talk much about the conservation of humanity itself. This just wasn’t really an issue until the 1940s. There was one generation from then until the end of the Cold War, and then another generation since that. We’re actually at the vanguard of people who’ve seen a particular threat — nuclear weapons — and are now starting to see the rise of new threats like this, and are generalising it to the idea of existential risk.
When it comes to that first threat of nuclear weapons, there was a lot of action on this. My parents went on anti-nuclear marches, and I think took me along as a baby on these marches. It was a huge topic. It swept through popular culture. There’s lots of songs, satirical songs, and more earnest songs about avoiding this nuclear threat, and there are many different parts of our culture it pervaded, and caused a lot of people to really fight this fight. It did all of that under the banner of ‘nuclear weapons’ and the ‘anti-nuclear movement’, rather than generalising to the idea of trying to preserve humanity. That idea was only really able to be grasped once we started to think about additional threats beyond nuclear weapons.
I really see this as growing out of that anti-nuclear movement, and I think that was the first time when humanity really started to rise to that challenge. And then our generation is a generation that mostly grew up outside of the Cold War, and this shadow of nuclear war. We’ve missed out on a bit of that moral seriousness that happened back then.
Robert Wiblin: Just thinking about the attitudes of past generations a little bit more, it seems like in the Middle Ages, there would’ve been at least a lot of concern about the continuation of a particular cultural or ethnic group, and ensuring that it wasn’t wiped out by other groups. Such as the Scots wanting to preserve their integrity, and make sure that Scotland continued to exist. That might’ve been more common at that time because it was a more violent era in which some groups really did displace others completely through war. Looking back even further, we’re both Australians, and I imagine we’re both familiar with the idea that indigenous Australians, before Europeans arrived, had a very long-term perspective within their culture about the connection to the land and the need to preserve it, so that many, many future generations of their own tribe could continue to benefit from it. I think it’s when looking at groups like that that the current generation does seem a little bit narcissistic.
Perhaps that’s an idealised impression of what indigenous Australians thought, and on a day-to-day basis, their concerns were more prosaic…
Toby Ord: It’s a good question. That could be a good place to look to for inspiration. The figure when I was growing up was that the Australian Aboriginal people had been around for about 40,000 years before now, and I believe there has just been some very strong new evidence that extends that up to 70,000 years, so that’s about a third of the lifespan of Homo sapiens that they’ve been around in Australia. They have really managed to survive for an incredibly long time. In Tasmania, the population there — like you were talking about with Scotland or something — was driven completely to extinction by white settlers, so they actually experienced a localised version of an existential disaster. There could be a lot to think about from the Australian example.
Robert Wiblin: Yeah. Just to build on that, I think almost everyone regards that as virtuous. Maybe it’s true, maybe it’s not, but if it was true that indigenous Australians spent tens of thousands of years passing the torch from one generation to another, making sure that at no point did they damage the land so much that future generations couldn’t survive on it and flourish on it, that … I think almost everyone thinks that that is an impressive achievement, and a good thing that they did, and it speaks well of them. It’s just interesting that those same people might not push for the current generation to share that attitude as much as perhaps they should, if they really thought about it.
Toby Ord: Yeah. I think that there could be an inconsistency there.
Robert Wiblin: By contrast, the current generation is just constantly inventing new, dangerous things, changing culture incredibly rapidly, taking risks, and running the risk of international war. It looks quite foolhardy by comparison.
Toby Ord: Yeah.
To be clear: I’m no opponent of technology. This rise in technology was associated with an amazing rise in prosperity, bringing ever more people out of poverty in absolute terms. If you exclude the possibility of these existential risks, then it’s very clear that technology was actually very positive. The smaller risks that it had were dwarfed by the improvements, and you can see that just by the fact that human lifespan basically doubled over this period of rapid technological improvement, even taking into account all of the ill health that the technology produced as well. The net effect was to double lifespan, so it’s produced huge benefits. I don’t know if you can really blame people that much, at least individuals, for not having seen that it’s exposing us to these risks — risks that haven’t yet eventuated — but this is the double-edged sword of technology.
Because they haven’t yet eventuated, it also makes it very hard to work out the actual probabilities of these things. Although there was an interesting attempt by John F. Kennedy who, after the Cuban Missile Crisis, wrote down that he’d put the risk of precipitating a thermonuclear war with the Soviet Union as somewhere between one-in-three and evens, and he was one of the two people who could’ve precipitated that war. We have some very serious judgments by people who should know, which suggests that the chance of massive nuclear war in that situation was very high. Those words are not that widely known, though.
Robert Wiblin: We’ve made the case here — and I think that the audience can probably tell if they didn’t know already — that I’m in pretty strong agreement. Maybe let’s consider some of the objections: the serious, thoughtful objections that people make, both to the underlying idea philosophically, and then to whether this actually has any practical implications for us. Probably the most common philosophical objection I hear is called the person-affecting view, which is the idea that we should be concerned with raising the welfare of the current generation, but not so much changing the number of people who exist in the future. Do you want to talk about that for a minute?
Toby Ord: Sure. This is a pretty popular intuition that people have when it comes to population ethics (which is the study of how to assess acts that change who it is who will come into existence). The slogan that person-affecting views often appeal to is that ‘ethics is about making people happy, not about making happy people’. And a lot of the intuition comes from a related idea, which is that we shouldn’t make sacrifices just for the sake of merely possible people. For example, if someone who is a longtermist suggested that maybe we should try to avoid existential risk, and someone else said:
“It doesn’t matter: if the disaster happens then those people never will exist — they remain merely possible. They’re not actual, and so how can it be wrong if it doesn’t affect any real, actual people? It would only affect these possible people.”
I think that that sounds rhetorically good, but if you think about it, it doesn’t really work. There are a couple of reasons for this. One is that there’s a bit of extra rhetorical effect that’s coming from the idea of being merely possible. For example, a unicorn is a merely possible being. There are no actual unicorns, and indeed, therefore, they don’t make any difference in ethics. You shouldn’t have any ethical role for unicorns. But the possible people who would exist if we avoided a disaster are not merely possible in the sense that a unicorn is. They would exist if we were to take a different action — in fact, their existence hinges upon the action that we’re about to take. So it’s a little bit like saying, “I’m not going to consider a merely possible healthcare system: because I’m not going to sign the bill, it doesn’t matter, since it will remain a merely possible healthcare system”, or something like that. It’s a bit ridiculous.
So that’s one aspect. Another is that ultimately, the only plausible person-affecting views also care about these merely possible people. The way to see this is to consider people whose lives are worse than nothing, people who have a tortured existence. Maybe they’re born with a very rare and terrible condition, or maybe they grow up in some terrible, totalitarian regime or something, where their life is worse than nothing, filled with pain and suffering. Everyone agrees that it’s a bad thing if these people come into existence rather than not. We should go to some effort to avoid there being additional tortured lives. Most people with person-affecting views also agree with that, and they would make sacrifices to avoid these people coming into existence, which means that they would make sacrifices on behalf of what end up being merely possible people. The whole idea of ‘merely possible’ people is ultimately a red herring, and one shouldn’t appeal to that intuition at all, because everyone ultimately agrees that in some cases you should help those merely possible people.
The people with person-affecting views generally instead move to the following kind of claim called the Asymmetry: that it is bad to add lives that aren’t worth living, but it isn’t good to add lives that are worth living. There’s an asymmetric situation, they would argue. It’s just neutral to add lives that are worth living. It’s actively bad to add those that are tortured lives. I think that this sounds plausible, though it’s not as rhetorically powerful as the mere possibility argument that ultimately didn’t work.
But it’s actually very difficult to get this asymmetry to work as well. Here’s the nub of it. Consider the following three options. You could not add anyone to the population, or you could add a new person who has a modest quality of life, or you could add that exact same person, but at a higher quality of life.
Clearly it’s better to add them at a higher quality than to add them at a modest quality. Everyone’s got to agree with that, otherwise you get the ridiculous consequence that once you create people, it doesn’t matter how well off they are. They agree that it’s better to add them at a higher quality than at a modest quality, but they also want to say that adding them at a modest quality is equal to not adding them, and adding them at a high quality is equal to not adding them. If you hold all those views simultaneously, then you’ve got a contradiction, because it can’t be better to have them at a high quality than at a modest quality while both those options are also equally good as not adding anyone. So you have a contradiction there, and we can try to get around that, but it’s actually very difficult to get it to work.
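To make the structure of that contradiction explicit, here is a compact restatement. The notation is mine, not from the episode: A is adding no one, B is adding the person at a modest quality of life, C is adding the same person at a high quality of life; $\succ$ means "better than" and $\sim$ means "equally good".

```latex
% A = add no one, B = add the person at modest welfare, C = the same person at high welfare.
\begin{align*}
\text{(1)}\quad & C \succ B && \text{better to add them at the higher quality of life}\\
\text{(2)}\quad & B \sim A  && \text{adding a worthwhile life is merely neutral}\\
\text{(3)}\quad & C \sim A  && \text{likewise for the better life}
\end{align*}
% From (2) and (3), transitivity of "equally good" gives B \sim C, contradicting (1).
```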
I won’t get into the exact technicalities, but you can try to have partial orderings, and you can try to have various bits of nonstandard decision theory. People have been trying to get a version of this off the ground since the Asymmetry was first discussed in 1967 (so the last 50 years) and there’s no plausible, worked out theory that’s been developed, and no consensus has arisen. In fact, I’m not aware of any philosophers who in their papers use the person-affecting theory worked out by another philosopher. It seems to be just that people create their own system, and then people find problems with that, and people sometimes support their own view, but they never support each other’s views. It’s just really scattered, so it doesn’t seem to be working out. And among other things, any attempt to do this basically requires modifying standard decision theory in order to even state these theories. They violate a number of axioms of rationality, which leads to time-inconsistency and cyclic preferences.
They get into situations where they think that if you have options A and B available, you should do B. If you have options B and C available, you should do C, and if you have A and C available, you should do A. This leads them to have all kinds of problems. People who study rationality tend to say that a system really can’t have these properties and still be rational. I think it’s very hard to get a version of these person-affecting views to work. It may be possible, and maybe I’ll be surprised, and a consensus will emerge in the next 10 years or so, and people will work out how to do it.
Even if that’s true, person-affecting views only argue against the idea that it’s the opportunity cost of the well-being of people in future generations which makes avoiding existential risk really important. All of the other types of arguments, about the achievements that we could have for humanity, about virtues, about the contract across the unbroken chain of generations, and about our cosmic significance, would still be left untouched by this, I think. Ultimately, I don’t think it’s too much of a danger.
Robert Wiblin: I’ve never actually heard a description of the person-affecting view that is clear enough to make sense to me, and that isn’t incredibly vague just in order to avoid directly contradicting itself. But even if the person-affecting view were true, it seems like it would have much stronger consequences than most of its supporters expect. One thing would be: what if you believe, as I think is sensible, that people’s identity changes over time as they age, and that in different futures, if you go down different paths in life, then in a sense you’re a different person? You could then end up indifferent between those two different paths, right, because there’s no common person between the two of them. Have you heard that argument?
Toby Ord: I haven’t seen that in the literature, but it’s what you get if you think about person-affecting views combined with the kind of personal identity ideas that Derek Parfit has, and so maybe he talks about this somewhere. I’ve thought about it a bit myself before, and it does seem to be an issue for those views.
Robert Wiblin: Another line of objections that I encounter is that we shouldn’t be too concerned about future generations, because we should have a high discount rate on their welfare, or just on the future in general. My training is in economics, so when you bring up this issue with economists, they tend to quickly start talking about discount rates. Do you want to describe why you think that is a misguided concern?
Toby Ord: Sure. I can say quite a bit about that!
A discount rate, if any of the audience haven’t heard the term, is where you have some percentage amount per unit time that the value of things happening in the future gets discounted by — it makes them less important. For example, you might say that there’s a discount rate of 5% per annum, so things that happen a year from now only matter 95% as much as things that happen this year. Things that happen a year after that only matter 95% squared as much, so that’s around about 90% as much as things that happen now. The things that happen in 100 years matter a very tiny amount compared to things that happen now. Effectively, you’re just multiplying things by an exponentially decreasing function. In fact, it’s equivalent to thinking that there was a 5% chance of extinction every year, because if you thought that there was a 5% chance of extinction every year, then you’d think that the value of those times was smaller by this same factor.
There are a whole lot of good reasons to do discounting, and in fact most of the time when economists discount, they’re discounting things which are measured in dollars. There are good reasons to discount things that are measured in dollars. The main reason this is done is that in the future, we’ll be richer — economic growth will have continued — and also there are diminishing marginal returns on additional income or consumption. One measurement of this suggests that the value you get out of money is logarithmic in the amount of money that you have access to, such that doubling your income gives you one more unit of happiness, no matter where you started. If you combine those ideas of diminishing marginal returns on money, and the fact that we’ll be richer in the future, it means that financial benefits in the future have to be bigger in order to make the same impact as ones that happen now.
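As an illustrative sketch of those mechanics (the specific numbers are mine, not from the episode): a constant annual discount rate weights a benefit t years out by an exponentially shrinking factor, which is formally the same as assuming a constant annual extinction probability, and with logarithmic utility the Ramsey-style rate applied to money is roughly the growth rate plus any pure time preference.

```python
# Illustrative sketch of the discounting mechanics above (numbers are assumptions).

def weight(annual_discount: float, years: int) -> float:
    """Weight on a benefit `years` out, matching the '95% as much per year' framing."""
    return (1 - annual_discount) ** years

# A 5% annual discount rate and a 5% annual chance of extinction treat the
# future identically: both multiply a year-t benefit by 0.95 ** t.
print(weight(0.05, 1))     # 0.95
print(weight(0.05, 2))     # 0.9025 -- "95% squared", about 90%
print(weight(0.05, 100))   # ~0.006 -- benefits a century away barely register

# Ramsey-style discounting of money: with logarithmic utility (eta = 1) and
# per-capita growth g, the consumption discount rate is roughly
#     r = delta + eta * g,
# where delta is the contested "pure rate of time preference" discussed next.
delta, eta, g = 0.0, 1.0, 0.02
print(delta + eta * g)     # 0.02 -- 2% purely from growth plus diminishing returns
```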
That all just makes sense, and this is the Ramsey method of discounting, but on top of that very sensible thing, they often add an additional discount rate called the pure rate of time preference. That’s the contentious thing. They just add in this extra 1% or so of discounting on top of that, and pretty much all philosophers who have ever considered this — possibly 100% of them — think that the extra 1% or whatever is just a mistake. This 1% gets produced by economists asking people questions about whether you’d rather have this benefit in a year or this other benefit in two years, and a whole lot of questions like that. If you look at the literature on this, there’s no stable answer to these questions. They depend a very large amount on the framing, and also on the precise way the question’s asked.
In fact, if you look at the more detailed literature that’s come out in the last 20 years on this, you will see that the pure time preference people have is actually not an exponential rate of decay. It’s a hyperbolic rate of decay, which leads to problems with rationality and time inconsistency, and we’re basically just measuring a form of human impatience and irrationality, then trying to add it into political decision making. It doesn’t seem to be the kind of thing that one should be respecting at all. It’s just like finding a cognitive bias that we have, and then adding it back into your economic analysis in order to make your analysis biased in the same way. That’s my summary of pure rate of time preference.
You should discount future well-being by the chance that we’re not around to realise it. That definitely makes sense, so you could discount by the extinction rate. There are some very good articles on this by Yew-Kwang Ng, who did some foundational work on it, I think in the nineties. Ultimately, for the long-term future, we should just be discounting by the extinction rate. This is the basic idea that Nicholas Stern incorporated into his famous Stern Review as well.
Robert Wiblin: I think we needn’t dwell on this too long, because as you say, it has basically 0% support among people who have seriously thought about it. But just to give an idea of how crazy it is: if you applied a pure rate of time preference of just 1% per annum, that would imply that the welfare of Tutankhamun was more important than that of all seven billion humans alive today, which I think is an example of why basically no one, having thought about this properly, believes this is a sensible moral or philosophical view.
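A quick back-of-the-envelope check of that claim; the roughly 3,300-year gap and the seven billion figure are round numbers added here for illustration.

```python
# Rough check of the Tutankhamun example under a 1% pure rate of time preference.
rate = 0.01
years_ago = 3_300                        # approximate time since Tutankhamun (assumption)
weight_of_one_life_then = (1 + rate) ** years_ago

print(f"{weight_of_one_life_then:.2e}")         # ~1.8e14
print(weight_of_one_life_then > 7_000_000_000)  # True
print(round(weight_of_one_life_then / 7e9))     # ~26,000: one ancient life outweighs
                                                # everyone alive today thousands of times over
```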
Toby Ord: Exactly, and I should add that with this experimental work that’s being done to try to measure people’s impatience, if you ask a different question you get very different answers — if you don’t ask about yourself but you ask about someone else. For example, we know that people have these problems of time inconsistency. The classic one is dieting: that people in the moment want to eat the cake, but they later wish they hadn’t done that. We know that people have these kinds of problems of impatience, but if they’re trying to advise their child or their partner, they won’t exhibit the same kind of time preference. They’ll actually be thinking about the long-term benefits to that person when they’re being altruistic about it. Economists have actually asked the question: would it be better if there was a social programme that would produce a whole lot of benefits now, or some other programme that produces these benefits at a later time?
They found that people are indifferent between these, so they actually don’t exhibit this pure time preference when asked about altruistic questions, at least in the early research on this. That seems to be exactly the type of question we’re asking now, and so I think it is pretty clear that one shouldn’t be using this pure time preference on moral questions about how we should value future generations.
Robert Wiblin: A third class of objections is about whether the future will actually be desirable, because if you thought that it was as likely for the future to be bad morally (an unpleasant kind of place), as to be good, then that really would I think shift your focus. Perhaps you would still be focused on the long-term consequences and trying to make it more likely to be positive than negative, but it would certainly make you less concerned about extinction. Do you want to comment on that?
Toby Ord: Ultimately that is right. At least if we were taking the well-being type justification. If the expected well-being in future generations was zero or negative, then yes, there wouldn’t be any point in preserving it. That would bite.
I don’t think that there’s that much justification for believing this. If you look through the history of writing on this topic, there have been people at all times who have been very pessimistic and thought that perhaps it’s all not worth it. Schopenhauer is a famous example, who thought maybe it would be better if the earth had been as lifeless as the moon. I do think that if you’re not in a moment of clinical depression, and you actually read some things about the great achievements of human history, and our civilisation, and what it has been able to do, or if you think about various moments of tenderness and peak experiences in your life, and others — I think it’s pretty clear that it’s very largely net positive. And that it was a while back, and it’s got even better subsequently with all of the improvements to the human condition that have come from modern prosperity.
I think the best argument that it’s actually bad overall (or that it’s going to be bad) comes from thinking about our treatment of animals, which has gotten worse over time. You might also say the same about our treatment of the environment, but actually in a lot of ways the treatment of the environment got worse, peaked in badness in the 20th century, and then started to become less bad. This shape of curve for how much damage we’re doing, starting small, becoming big, and then going back to being small again, is what economists call a Kuznets curve. My guess is that a similar thing is going to happen with the treatment of animals: it has got worse, but there are people who value the well-being of animals and almost no one who goes out of their way to actually hurt them; it’s mainly just that people want slightly cheaper chicken. As prosperity rises, it will become less and less important to people to get the slightly cheaper chicken. Then, when it effectively costs just a penny more to get the ethically raised one, either they will just do that, or governments will find that there’s not much political resistance to banning these practices, whereas there are a lot of votes to be gained from banning them.
That would move us to humane practices, so I think it really does make sense that our treatment of animals will become better in the future. There’s a really good quote by Carl Sagan about this, so I’ll read out what he had to say on this question.
“I do not imagine that it is precisely we, with our present customs and social conventions, who will be out there. If we continue to accumulate only power and not wisdom, we will surely destroy ourselves. Our very existence in that distant time requires that we will have changed our institutions and ourselves. How can I dare to guess about humans in the far future? It is, I think, only a matter of natural selection. If we become even slightly more violent, shortsighted, ignorant, and selfish than we are now, almost certainly we will have no future.”
I think that that is right, that this is a very difficult time that we’re in now, and it’s going to be tricky. We’ll have to rise to the challenge if we want to get through it.
I think that, conditional on us being the type of species and civilisation that does rise to that challenge and manages to achieve more civilisational wisdom, we’re even more likely to produce a positive outcome rather than a negative one than you might have thought before.
Robert Wiblin: I’m fairly optimistic about the future, because as our technology improves, and our wisdom improves (admittedly at a slower rate), we’re getting more and more capable of shaping our environment and ensuring that it’s pleasant for us. It seems fairly clear that human well-being has risen over the last few hundred years, although I guess you could object that perhaps we now have weaker community even though we’re richer, and so in fact we’re not more fulfilled. Some people do make that argument, but even if well-being hasn’t risen in the past, I think that if we do stick around for many hundreds of years, it’s unlikely that we won’t eventually figure out the science or the technology required to create very high levels of human flourishing, higher than what people today enjoy. If you look at farm animals, as you say, it seems pretty likely that their lives have more unpleasant things in them than pleasant things. If we want to predict what the future will be like, one option is to look at the past, see what that has been like in general, and project it forward.
The largest group of beings in the world today, and certainly throughout history, has been wild animals. They are neither quite like humans nor like farmed animals. Do you think their lives overall have been positive or negative or neutral?
Toby Ord: That’s a very good question. I’ll first say, just to modify that slightly: if you count them, there are definitely more wild mammals than there are humans, more wild birds than there are humans, and so on. But they’re generally very small, so it’s the smallest ones that account for most of these population numbers. They’re tiny little voles and things like that among the mammals, rather than (as we tend to imagine) the tigers and so on. If you look at the brains of all of these animals, humanity’s brains collectively actually have more neurons in them than those of all the other mammals put together, including the wild mammals. The same is true for the wild mammals plus birds put together. Because we don’t really know what it is about a brain that produces conscious experiences, it’s plausible that more neurons means more suffering, or more pleasure. We don’t really know how all of that works, and it is actually plausible that we are, from this kind of intrinsic perspective, more important than all of these other beings put together. I don’t know if that’s true or not.
Then to speak to the real question, which is the suffering in nature. Yeah, there’s a tremendous amount of it. We sometimes have these idealised pictures of nature, but if you read what biologists who go out in the field write about it, there is a tremendous amount of suffering: nature red in tooth and claw.
I don’t know whether ultimately the lives of all of these beings are net positive or net negative. I do know that that’s not going to sort itself out on its own, so if we are worried that it might be negative on balance, then I think that the only way we can fix that, and have a positive natural world, lies in a long and great future with humanity doing something about it. As we grow more and more powerful, and require a smaller amount of our resources to do something about this, we might be able to actually make the natural world better in this regard.
Robert Wiblin: My guess is that we’re not going to continue to have farmed animals for that much longer, as long as humanity continues, because just the rate of progress in developing alternatives to animal agriculture is quite startling. It’s possible that it could be phased out in 100, 200 years, and then you’re saying as well that at some point in the future, humanity might have the technology to make the wilderness better for wild animals, if we conclude that it’s actually very unpleasant for them.
Toby Ord: Yeah. It’s quite unclear how we would do that, and you could obviously seriously mess it up if you did anything short-sighted or misguided. Humanity would want to be extremely careful. But there may be ways of doing some minor genetic engineering so that animals feel less pain in situations which would otherwise be very painful, or so that there’s less direct suffering from cold, and hunger, and things like this. That is a grand project, probably on the scale of interplanetary settlement or something like that; it’s something we might be able to do if our civilisation lives long enough. It’s been 2,000 years since the birth of Christ, and less than that since the collapse of the Roman Empire, and ultimately, who knows what we’ll be doing in another 2,000 years, let alone if we can survive for another million years? Maybe we’ll have the ability to actually intervene and improve the lot of the natural world.
Robert Wiblin: What if you’re not sure how good the future is going to be, or you’re not sure yet whether the future is really important? What do you think someone who’s agnostic about these questions ought to think and do?
Toby Ord: That’s a good point. Suppose you’ve heard all of these different reasons I listed for why the future might be very important (the well-being, the achievement, the partnership of the generations, virtue, cosmic significance), and suppose you are skeptical of all of them. Will MacAskill has written a great piece about imagining you’re in that situation, and there’s still a very strong argument that existential risk is one of the most pressing issues of our time. This comes from the idea of moral uncertainty and option value. The idea is that in the future, you would be able to get more information about how good these arguments are. If they’re really bad arguments, you might see them contradicted; if they’re better arguments than you thought, you might hear additional supporting reasons which are very compelling. If we let the world be destroyed, then we lose all of this option value. There’s no coming back from that, but if we preserve the world, then we may find out more information about the value of humanity.
We may find that it’s particularly important, but we need to preserve humanity to be in with a chance of really getting to that great future.
Robert Wiblin: I guess the point is, if we’re all dead in 100 years, then even if it would turn out that we would’ve discovered that the long-term future is incredibly morally important, we’ll never get to do that. On the other hand, if we stick around, then we can still figure out whether it’s really important, and decide in the future whether this is something that we want to prioritise.
Toby Ord: That’s right, and it’s particularly striking if you are entertaining the possibility that maybe humanity is ultimately net negative: that either we cause destruction to everything else around us, or that our lives contain more bad than good. If you are entertaining a view like that, then it’s particularly important that we don’t decide now to end human history; we could always decide that later, once we’ve got more information, but if we get it wrong in the other direction, there’s no coming back. It becomes particularly stark: it creates a very asymmetrical argument that it’s very important to preserve humanity long enough to really sort out those arguments, and only act to let humanity go extinct if we, in the future, had very strong evidence for that, which was compelling to everyone.
Robert Wiblin: We’ve discussed a few philosophical objections there. Let’s maybe move on to more practical issues that people raise about whether this really has any meaningful implications. Many people might now think, “There are enough arguments here that the long-term future is a really important issue, and I’m convinced of that, but I’m not really sure that there’s anything I can do today that will make things better in hundreds or thousands of years in a predictable way, other than just trying to improve the present day in ways that people are already familiar with, like improving education or improving people’s health.” What do you say to that? Are there ways of making the long term go better that are actually likely to succeed, and that are different from what people are already trying to do?
Toby Ord: Okay, so there are a few key issues here behind this question. I think it’s a great question. One of them is a distinction that Nick Beckstead has made, between what he calls broad and targeted interventions. A targeted intervention to deal with existential risk is one that takes a source of risk (such as nuclear weapons) or a new technology (such as artificial intelligence), and tries to work on that directly in order to limit the risk it produces. A broad intervention might be something that tries to improve, say, the wisdom of civilisation, or our coordination, or maybe education, or even the economy, and just tries to use these very long-run productive methods that societies use, and that have really helped us get to where we are now. It’s not immediately obvious which of those is the best approach, whether the targeted things are better than the broad things. I think that they are, but I don’t think it’s an obvious, open-and-shut case.
Okay, that’s the first. The second distinction I think is really important is the question ‘What can humanity do about this?’ versus the question ‘What can I do about it?’ I think it’s pretty clear that humanity can do a lot about this. In particular, the risks that I’m worried about are anthropogenic risks: human-caused risks, such as nuclear war. Humanity itself can do a lot about nuclear war. We can just not have one, right? If we imagine how we could collectively act, it’s pretty clear that we could act such that we don’t have a nuclear war. There is some group of humans who, if they coordinated their actions, could just destroy all the nuclear weapons.
I think it’s more difficult, but also possible, that if humanity got its act together, with large numbers of people who really took this seriously, we could continue to develop new technologies, the equivalents of developing atomic power, and so forth. Perhaps we could develop synthetic biology, and we could develop artificial intelligence, without incurring a very large amount of risk while doing so. We could take it slow and steady, and we could devote a lot of hours not just to speeding up the process and making it all happen quickly, but to thinking through the implications, working out the key policy approaches, and also working out a lot of the technical aspects of safety. So I think it’s fairly clear that we could do something about human-caused risk as humanity, and that’s the main way I’m going to be addressing it in the book: in terms of saying that safeguarding human civilisation is a central issue of our time, one of the key things that humanity needs to deal with.
The way that you particularly asked the question was how could I as an individual do much about this? I do think that is a bit harder to think about, but I’m also not sure it’s the best framing to think about it in. Ultimately, I think what you want to consider is what would the best portfolio of action look like, and then what role could I play in that? I think that helps to make it a bit easier to work out whether something’s worth doing.
An example would be: maybe you think that if there was a big protest, with 100,000 people marching in the street against a particular policy that’s just been enacted and that’s fairly unpopular, there’s a reasonable chance it could get overturned. You could think about whether it would be worth 100,000 people’s time to march in the street to stop this thing, rather than trying to think, ‘Well, what could I do by changing the number from N people to N plus one people? What’s the chance that that additional one person overturns this thing?’ Maybe it’s incredibly small, and I think that trying to estimate those things just replaces a relatively sensible problem with a very difficult-to-understand problem, which is much less intuitive. Sometimes within effective altruism we go a little bit too far in asking what difference I alone would make. It becomes very intractable to measure, even though one can see what the best portfolio of action is, and the role you could take in it.
Robert Wiblin: Yeah. Just, if I could interrupt: they’re very similar questions, and they should spit out very similar answers, but in our minds we treat them very differently. It can’t be the case that it’s worth 100,000 people showing up to the protest, but not worth it for any individual one of them to do so. That doesn’t really make sense; that’s a contradiction. So if it’s worth it for the group to do it, then it has to be worth it for at least some of the individuals to do it, and I think that’s one way of resolving these paradoxes people end up with in their minds, where they think, “It’s not worth me showing up, because no one person can make the difference, even though it is worth it for everyone collectively.” No: if it’s worth it for everyone, then it may well be worth it for you.
Toby Ord: Yes. Barring kind of some unusual situations, like if we were all really sure that the other people are not going to turn up or something like that — then maybe we could all end up being rational not to turn up, but only in virtue of us having this silly belief about what everyone else thinks. You can get certain kind of bizarre cases, but ultimately I think that we do pretty well just with this fairly common sense way of approaching it, and thinking about large-ish groups of people. In the case of your listeners, sometimes that will be the wider effective altruism movement, trying to think about what could we do — not on the margins of an individual life, or an individual career — but what could we do on the margin of the thousands of careers that your listeners have, and are thinking of having? I actually think a lot could be done with that.
Even if we just go with the harder-to-think-about version of the question, where we’re thinking about what I could do as an individual: could I realistically make this massive difference? Could any individual do that?
One certainly shouldn’t think that the standard for deciding to work on something is that you will literally save the world, the whole of humanity, and that otherwise you’re not going to do it, right? That’s a ridiculous standard to set; far too high a bar for getting off the sofa. We should instead be thinking something like, “Could I move the needle on the chance of this happening by even some very small amount?” It’s pretty clear that, again, not that many individuals can move the needle by, say, one part in a thousand, because if more than a thousand people were each trying to do that, there would be nowhere else for the needle to go. Ultimately, if it’s an individual, we’re talking about some pretty small chances, but at critical moments in human history, individuals have done this type of thing, and we do have examples of this.
One case is thinking about Leo Szilard in the development of the atomic bomb. He was a visionary scientist who in the 1930s came up with the idea for the chain reaction that would lead to the fission bomb. He was 10 years ahead of his time, and he had years to think about what would happen with this. He was an exile from Europe with the rise of fascism, and he was immensely worried about Hitler getting the bomb. He ended up going over to America, working with the Americans to build an atomic bomb, but also he worked with the atomic scientist community. He helped urge them to secrecy. This in some cases worked, and some cases it didn’t. Sometimes the people he told to not publish their work on nuclear chain reactions published it anyway, leading to this work being much more widely known, and the Germans commencing a nuclear programme, but in other cases, he convinced the American scientific establishment to classify all of their scientific research on those topics, which helped to protect secrets such as the secret behind the plutonium bomb from being discovered elsewhere. This was very critical, as that turned out to be the much easier bomb to create. History relates this episode in great detail, and we can actually see the effects that he had.
There are other individuals who had a large effect. Niels Bohr is another case. He didn’t succeed in the main thing he was trying to do, which was to convince Roosevelt and Churchill to share the nuclear secrets with the Soviet Union in order to avoid a Cold War. He had some good ideas about how to avoid a Cold War, and he foresaw one years before the end of World War II, but he wasn’t listened to on this. He had a very good shot at it, and we know exactly the details of all the meetings he had and so forth. He and a few other people almost achieved this, and it could’ve had dramatic effects.
And then there’s Stanislav Petrov, who in the 1980s, in the autumn equinox incident, was an officer in the Soviet military in charge of a missile early-warning station. He witnessed several flashes of light on his screen that looked like launches coming from America. I think it was just a few; it may have just been one. He was puzzled as to why they would launch so few missiles, and he thought, “It probably is a mistake of some sort,” but his orders were that if this happened, he needed to escalate towards a retaliatory strike on America, and he decided not to escalate. Indeed, it was just sunlight: in the configuration the Earth was in at the autumn equinox, sunlight reflecting off clouds at a certain altitude looked like missile launches. Given his decision to go against protocol, it’s not clear how much further this would’ve escalated; maybe someone else would’ve stopped it, but certainly a very small number of people were in a position to stop this, and he was there, and he did.
There were a whole lot of other near-miss scenarios like this during the Cold War, where we got very close to a hot war. John F. Kennedy was involved in some of them. All of his military advisers urged him to invade Cuba during the Cuban Missile Crisis; it was unanimous. He was the only dissenter, and he said, “No, we won’t do it.” Many other presidents would have, and it turned out that Castro had asked Khrushchev to initiate a nuclear strike on the mainland United States, using the missiles in Cuba, if Cuba were invaded. It’s not clear whether Khrushchev would’ve followed through with that, but if he had, that would’ve precipitated a major nuclear war.
Robert Wiblin: Another detail is that the Soviet field commanders in Cuba had independent launch authority. I’ve read that this only came out quite recently, when that information was declassified. It means that they could have decided, independently of Khrushchev, to start a nuclear war if they were invaded.
Toby Ord: That would’ve been very bad. Castro later said that he would have ordered a nuclear retaliatory strike, and moreover, when he was asked whether he knew what would’ve happened if he’d done that, he replied, “It would’ve led to the complete annihilation of Cuba.” And he not only could have, but did make this kind of request to Khrushchev to carry out a retaliatory strike.
Robert Wiblin: He was very committed to Communism. You can certainly give him that, at least.
Toby Ord: He was a maniac. Kennedy and his council had also decided that if a U-2 spy plane was shot down (something that could only be done with Soviet equipment in Cuba), they would immediately invade, without needing to reconvene the war council. The next day one of their planes was shot down, and Kennedy called to reconvene the war council rather than immediately invading. There were a lot of moments like this during the crisis.
Another famous one was with Arkhipov, Vasili Arkhipov, who was the commander of a flotilla of nuclear-armed submarines which the Soviets had sent to take some nuclear materials into Cuba. They were deeply submerged, they didn’t have radio contact with the world, and the Americans started depth charging them. The Americans were using what they called ‘practice depth charges’, which are low-yield depth charges, to try to annoy them and force them to surface so that they could be searched, or something like that.
The Soviets didn’t know that. They thought they were real depth charges, that they were under attack, and that a third world war had probably started at that point, since there were active hostilities against them. The submarine’s commander decided to launch their tactical nuclear weapon, which had about the yield of the Hiroshima bomb. Although it wouldn’t have gone off on land, which would’ve been at least some kind of good news, it would’ve destroyed a fleet, so maybe it wouldn’t have precipitated an all-out nuclear war… He was trying to launch this nuclear torpedo, but on that particular submarine, because the flotilla commander was aboard as well as the submarine’s commander, he had the ability to override. That was Arkhipov, and he overrode that order.
So various stories like this have only just recently come out as the relevant people have either retired or died, and the memoirs have been released, and we’re finding out a lot more about these very close near-misses.
There are all kinds of other cases where live nuclear bombs were accidentally dropped out of bombers: in one case over the US (by a US bomber), in one case over Spain. One of these bombs was never recovered. It just sank into some marshland, and they could never excavate it. All kinds of crazy stuff happened, and a lot of near misses. A lot of these are cases where just a few individuals were involved, and made some really important decisions at the last moment.
Robert Wiblin: I imagine that some people listening will be thinking, “Sure, if you’re a nuclear launch officer or the President of the United States, then yes, you can have an effect on the probability of extinction, but I’m not one of those people, and I’m not likely to become one.” I suppose because we’ve been focused on the nuclear threat, which is the oldest extinction threat, at least in the modern era, we’ve been thinking a lot about the military, advanced scientific research, government, and international diplomacy. Should we expect it to be possible to predict what kinds of positions, and what kinds of people, will be able to have an effect in similar future situations, if other technologies like nuclear weapons are developed? Can people realistically position themselves to be in the right place at the right time?
Toby Ord: A good question! The cases where history lets us most clearly judge the impact of individuals are the ones where they acted at the last possible moment before a disaster. But that doesn’t mean that people only have causal consequences when they’re the final part of the chain; it’s just that those are the cases we can clearly assess. There will be a lot of people earlier in these chains that would or wouldn’t have led to a disaster, and one can go into positions like that. Also, as you suggest, nuclear weapons are a classified technology, but various other new technologies, which I think are going to be the atomic energy of their century, with very good aspects but also very big threats, include synthetic biology and artificial intelligence.
These are technologies which are not classified, and which people can actually work on now, and can start engaging with. I think that there are a whole lot of really important types of roles related to that, which people could very easily put themselves into positions related to. I’ll give you a few examples.
In the world of government and policy making, there are relatively few people who come from a science background: lots of people from a humanities or social sciences background, very few from the natural sciences. People from those backgrounds need to actually come into that world, and be the science-interfacing people within policy making, so that when critical moments present themselves, government has more scientifically literate policy capacity.
Another example is on the other side: you need people with a good understanding of ethics and policy within science and technology. So the people who are, say, working in synthetic biology, or people in roles at the leading scientific societies, like the National Academy of Sciences or the Royal Society. You need people there who have an eye to future generations and the ethics of these questions, and who can write clearly and put these ideas out from the science perspective. Or people who are leading the professional societies of synthetic biology and artificial intelligence.
And also people who are working more closely with these technologies. They can do good technical work on safety regarding those technologies, and they could also work on strategy for how to safely manage these issues. There’s a whole lot of work to do on technical safety, strategy, governance, and policy connected to these new technologies. Since I think a lot of these threats are going to come from radical new technologies that we invent, I think there’s a lot of potential there.
Robert Wiblin: We’ve just been discussing what you would do if you wanted to position yourself to have a really large impact in the situation where synthetic biology ends up being a really big deal, and we’re at some kind of critical juncture in history, where who develops synthetic biology, or how it’s applied, or how it’s regulated, ends up being really essential. There’s another approach you could take, of course, which is to try to just improve society in general. One version that has been suggested is to improve our ability to foresee future problems in general, so that we’ll be able to anticipate them. Do you want to describe some of the more promising avenues that you see for improving the long term there?
Toby Ord: Yeah. I think that there’s a lot of different issues here, so here are a few more. One can work on research, so research could include, as we have just said, work on particular risks. There’s also a lot of really good research that can happen on these general questions about existential risk in the long-term future — the type of research that I do, and that’s going into my book.
Then there’s also work that could be done on addressing risk factors. If we think about particular risks, such as the risk of synthetic biology, it’s a technology that might be able to destroy humanity or humanity’s potential, and we could assign some probability to that. I don’t know, maybe it has a 1% chance that it will do that this century. There are other things that aren’t themselves existential risks, but which might increase existential risk by a certain amount.
Take, for example, great power war — war between great powers, such as the USA and Russia, or the USA and China. Such a war may not in itself be an existential risk. It’s not just because they had a war that we went extinct, but it may increase the amount of risk overall. In fact you can think about this in terms of, ‘What would the risk of human extinction over the century be conditional upon there not being a great power war, compared to if we didn’t condition upon that?’ My guess is that if there is a great power war this century, then the chance of getting through the century intact goes down by a percentage point or more compared to if there isn’t. In which case great power war itself might instrumentally create more risk than the particular technology of synthetic biology. That’s just an example, and those numbers are made up, but it’s actually plausible that some of these risk factors, these things that exacerbate risk, could themselves be larger than the various particular risks. Great power war this century is definitely a larger factor than asteroid risk, which we know is 0.0001% per century, or something like that.
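As a purely illustrative sketch of that risk-factor framing, here is the conditional decomposition in Python; every number below is invented, echoing the ‘those numbers are made up’ caveat above.

```python
# Illustrative decomposition of this century's extinction risk by whether a
# great power war occurs. All probabilities are invented for the example.
p_war = 0.3              # chance of a great power war this century (assumed)
p_ext_given_war = 0.04   # extinction risk if there is such a war (assumed)
p_ext_given_peace = 0.02 # extinction risk if there isn't (assumed)

p_ext = p_war * p_ext_given_war + (1 - p_war) * p_ext_given_peace
extra_risk_from_war = p_ext - p_ext_given_peace
print(round(p_ext, 4))               # 0.026
print(round(extra_risk_from_war, 4)) # 0.006, i.e. ~0.6 percentage points of added risk
# On these made-up numbers, great power war as a *risk factor* contributes far
# more than the ~0.0001% per century quoted above for asteroids as a direct risk.
```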
That gives you a whole lot of additional levers and things to work on, such as working on peace, international cooperation, and things like that, and we could also work on fostering these civilisational virtues of increased prudence and patience, and wisdom for civilisation. Maybe that sounds a lot like the broad approach of doing the types of general service for the world that you might’ve thought were fairly natural if you hadn’t considered existential risk.
Robert Wiblin: Are there some approaches that you think are just obviously too broad?
Toby Ord: Good question. Take just saying, “Okay, what about improving science?” My guess is that because science begets technology, which begets some of the risk, just pushing on that is unlikely to particularly help. While I really like this distinction between targeted approaches and really broad approaches, my guess is that targeted ones are often better, but they don’t need to be targeted directly at the technology that might destroy us. They might be targeted at some other mechanism which is important to getting through all of this. Again, it’s really early days on this, and coming up with some really strong arguments one way or the other would be very helpful. It’s sometimes pointed out, and I think this is a really interesting kind of argument: what if this were 200 years ago, or 300 years ago? What would someone who cared about these issues have been working on back then?
They probably wouldn’t even have known about asteroid risk, or supervolcano risk, or various other natural risks. The anthropogenic risks hadn’t even been created. And it seems pretty plausible that broad approaches back then, like continuing with the Enlightenment, were among the best things they could’ve done to put us in a good position for a long-term future.
Although I actually think that stockpiling food probably would’ve been a key thing to do back then, as a very generic kind of protection. It wouldn’t have been impossible to guess that if you’re worried about the end of human civilisation, maybe stockpiling food could help. It turns out that it would’ve helped quite a bit with an asteroid impact, so maybe even back then some targeted things could’ve been helpful. But I do agree with the general point that very broad things were also very helpful back then.
Robert Wiblin: I suppose there’s also the somewhat broad approach of just promoting the kind of views that you’re promoting now. Someone 200 years ago could’ve said, “It’s very important that we preserve the long term. We should think more about how we could do this, and make sure that future generations are concerned about this.”
Toby Ord: That’s exactly right, and that is also fairly close to things that people did work on. There were various moral philosophers who tried to work out what the most important issues facing humanity were, and then to promulgate these ideas in order to shape how future generations would make ethical decisions. That was something that was done by various people, and therefore, if they’d realised that preserving human civilisation was one of these things, they could’ve tried to incorporate that, and it probably would’ve worked. It would’ve helped make common sense morality much more focused on this than it is now. And maybe, as you mentioned, the approach of the Australian Aboriginal people was in that vein, in terms of really thinking about the long term.
Robert Wiblin: We’ve talked a lot about reducing risks to the future. What about the opposite of that, which is extremely large upsides? Are there any practical ways people might go about not so much preventing extinction or something horrible, but instead trying to create something that’s much more positive than what we currently have reason to hope for?
Toby Ord: It’s a good question. I’m not sure.
On some level, we have had something like this happen, if you track the expected value of the future over the past. Once we found out how long the world had been around, and how long it was expected to be around for in the future, the scope of humanity’s future might have increased by a huge amount. Similarly, when people discovered that these tiny dots that moved around in the sky, the planets, were other worlds like the Earth, the scope might’ve increased a lot. And particularly when they found out that the myriad points of light, the stars, were other suns which might contain their own planets, the scope increased by a factor of 100 billion with that discovery. So there are at least cases where the expected value of the future went up by a lot, but that was more a matter of realising there was much more we could achieve, rather than working out some opportunity that is perhaps only here once, and that we could grab.
The key behind these ideas of existential risk, as opposed to just any old way of trying to help the long term, is the really high leverage: because it would be an irrevocable loss if it happened, avoiding it is an opportunity you only get once to create a huge amount of benefit. In order to have the reverse, an existential hope scenario, it wouldn’t be enough that there was some really clever thing we could do, say, to rearrange our society in some way that’s much better. It would have to be the case that if we didn’t do it now, there would be no other opportunity to get it. I’m not sure of things like that, but very little thought has gone into this, and maybe if people thought about it for more than an hour (perhaps more than a year!) they’ll find something there, or at least the hints of something, and might be able to make some progress.
Robert Wiblin: Another practical objection that people sometimes raise is that, because you and I don’t want to die any time soon, and countries don’t want to disappear off the face of the Earth, we already have a pretty reasonable incentive to try to reduce the risk of a catastrophic disaster. So it’s not a neglected problem: it might be a problem, but it’s something we’re already dealing with as well as we can. What do you think of that?
Toby Ord: I think that that generally is a very good way of reasoning, and it’s a very useful sanity check, particularly if someone comes up to you and says, “Hey, here’s this area that may be actually one of the central issues of our time, and yet is neglected.” You could question that, and say, “Well, why would it be so neglected?”
This is a case where we have some pretty good answers to that. Take a very large and powerful country like the United States of America. The US only has 5% of the world’s population within its borders (a little bit less, actually), so when it thinks about these risks, it’s only internalising about 5% of the damages in terms of the world’s population. We’d expect it to underweight this by about a factor of 20, and it gets even worse when you consider that most of the terribleness I’ve been talking about comes from losing the entire future of humanity, whether that be hundreds or thousands of additional generations or more, or the great achievements that they’ll produce.
This is called a global public good, and in particular, an intergenerational global public good. It’s the exact type of thing that we would expect to have a market failure, and to be under-prioritised by individual nation states. The UN on the whole might be in a position to do this, but it’s not all that smoothly functioning, and it’s very difficult to get all of the different heads of state round the table to agree on things. It’s very slow to act, so I think you can actually challenge this efficient market idea quite successfully here, and debunk it, and show why this could still be happening.
On top of that, I should say that I’ve talked to government about these things, to many people in the British government, and they have all basically said the same thing to me, which is, “Wow, this sounds like really serious stuff, and I would love to be able to spend more time thinking about it.” They were really serious about this. It was clearly the most interesting meeting that they’d had all week, but they didn’t have the time to think about it. They had to go back to dealing with the kind of fires that need to be put out, and the burning issues where the Minister really needs your report on their desk by next Wednesday. They don’t have the time to deal with these things. Even among the people the government has in charge of assessing risks, these are not issues that are able to get that much attention in a very short news-cycle political process.
Robert Wiblin: Another objection that people raise is just a very high level of skepticism that catastrophic risks are likely to occur. Perhaps that sounds a bit surprising given the cases you’ve mentioned, where it seemed like we were very close to having an all-out nuclear war, that presumably would’ve put us at some risk. Are there any other points that you want to make to people who think, “Nah, things are fine. Things are secure. Humanity will continue on more or less no matter what.”
Toby Ord: I think that this is a somewhat reasonable empirical belief to have. I think that if you’re familiar with the evidence on the near-misses during the Cold War, you should think that there’s a realistic chance that it could’ve turned into an all-out nuclear war, maybe on the order of one in three. However, it’s not clear at all that an all-out nuclear war would’ve led to our extinction. The early work on nuclear winter suggested that this was really quite likely, that it would be very severe, but more recent analysis suggests that while it would be very severe, and clearly a global catastrophe of unprecedented proportion, it would be unlikely to cause the actual extinction of humanity.
The main way that extinction could happen is if the models that are currently being used just aren’t the right models, which is fairly plausible. It is a bit of an unprecedented situation. We can’t demand that we have so many examples of massive global nuclear wars to get our p-values down to 0.05 and know exactly what’s going to happen.
Robert Wiblin: Come outside. Just run the experiment.
Toby Ord: We have to decide this in our position of uncertainty. But it wouldn’t be unreasonable to think that maybe it’s just very hard to actually cause human extinction — even with an all-out nuclear war, with all of the massive global cooling and things that would happen. I think that that’s somewhat plausible. So it’s unknown. And then they might think, “Well, maybe that’s going to be true for the new technologies as we’re gaining power at this accelerating rate.”
It starts to get less plausible with the technologies we’re getting this century, and then the ones we’ll have the century after, and so on. Unless we’re likely to run out of radical new technology soon (having just invented all the ones that will ever be invented, or something), it does look like this process is going to produce some pretty large risk. That’s what I think, and I would put the risk for this century at something like one in six, like Russian roulette.
If someone said, “No, I think it’s much lower,” then consider the following idea. When you’re thinking about the value of the future of humanity, there’s actually an interesting tension between a couple of things. One is whether there’s enough risk to actually reduce, and the other is how long the future will be. If people think that it’s almost impossible to destroy humanity (maybe they think that the risk per century is something like one in a million), then they also seem to be somewhat committed to thinking that we’re going to get about a million centuries if we survive this century, because that’s the expectation if you have a constant hazard rate of one in a million per century. In which case the amount of value we get if we avoid extinction goes up roughly proportionately, such that even though there’s only a small amount of risk, avoiding that risk would have a similar value to avoiding a 1% risk (in which case there’d be fewer expected centuries).
This doesn’t work out as cleanly with more realistic models, but those more realistic models actually suggest the future is even more important than this. It is interesting, if you actually play around with these numbers, and try to work it out, that people who think that there’s very little risk really are kind of committed to there being a very long future as well, unless they hold a very elaborate series of beliefs: if there’s very little risk now, but there will be a lot of risk later, and that risk later will be impossible to do anything about now, and some things like that. I would say, “Well, you previously sounded like a skeptic. Now you sound like someone who has a lot of very precise beliefs about the distant future, and what’s going to happen then.” It is actually tricky to get this to work out, trickier than you might think, and on some of the most sensible models of looking at this, the smaller the amount of risk there is, the more important it becomes to eliminate it, somewhat surprisingly.
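Here is a minimal sketch of the constant-hazard-rate argument above; the simple geometric model is the deliberately simple one described in the conversation, and the two hazard rates are just the illustrative figures mentioned.

```python
# Constant hazard rate per century: survival time is geometric, so the expected
# number of future centuries is roughly 1/p. Rates are the illustrative ones above.
def expected_future_centuries(hazard_per_century: float) -> float:
    return 1.0 / hazard_per_century

for p in (1e-6, 0.01):
    centuries = expected_future_centuries(p)
    # Approximate expected centuries saved by fully removing this century's risk:
    value_of_removing_this_risk = p * centuries
    print(p, centuries, value_of_removing_this_risk)
# 1e-06 -> 1,000,000 expected centuries; 0.01 -> 100 expected centuries.
# In both cases p * (1/p) = 1: the lower the assumed risk, the longer the expected
# future, so the value of removing this century's risk stays roughly the same.
```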
Robert Wiblin: I guess that’s unless you adopt one of the views we discussed earlier, where you’re not convinced by any of these arguments that the long-term future has a lot of value. Or you’ve chosen a whole lot of philosophical positions, perhaps some of them contradicting one another, that give you the view that only the present generation matters.
Toby Ord: Yeah. If you’re that impatient, and sufficiently near-termist to think that it’s completely irrelevant what happens after this point, you would normally be a bit of a pariah for holding beliefs like that. People who say we shouldn’t do anything about climate change appeal to the empirical questions about it; they would look very bad if they said, “Oh, no. It’s going to destroy the lives of all the people in the future, but that just doesn’t matter at all”, or something like that.
Robert Wiblin: Yeah. You might not invite them to your next party.
Toby Ord: No, so at least it’s not publicly acceptable to hold views like that. That doesn’t immediately imply that they don’t work, but I start to get pretty skeptical.
Robert Wiblin: We’ve been a little bit vague about exactly which kind of existential risks we’re concerned about, or exactly which ones we think are most likely to put an end to human civilisation. Do you want to just kind of give your ranked list of the things that you’re most troubled by?
Toby Ord: Sure. As a specific technology or threat, I’m particularly troubled by artificial intelligence. I think that it’s an extremely powerful technology. It’s very likely to be the most powerful technology of our century, but it also potentially comes with a very big downside as well. I think Nick Bostrom has detailed this very well in his book Superintelligence, for anyone who’s interested, and I’m very worried about that technology. I think that we will get through it, and that we’ll manage it successfully, and that’s why I’m devoting quite a lot of my time to trying to do just that, but that is something that keeps me awake at night.
I’m also worried about synthetic biology, although to a slightly smaller degree, as another extremely powerful technology that has the potential to either accidentally or deliberately cause massive destruction of the ecosystem, or to have engineered viruses, which could potentially cause our extinction.
When thinking about this, it’s clear that in most time periods, if you’d been trying to predict what the biggest sources of risk would be, you wouldn’t have been able to get it right, and ultimately most of the risk would have sat in some category of unknown unknowns: things you didn’t even know what to call, like synthetic biology or artificial intelligence. I think that’s quite likely to still be the case, that most of the future threats will be things we haven’t yet contemplated.
However, I also think that when it comes to our strategy for the future, what we really need to do is to move to a safe position, to go as quickly as we can while incurring as little risk as we can to get humanity to a position where we’re taking these things seriously. And we can then move forward very slowly and steadily, and incur very little risk from that point on. I think that it’s not easy, but I think that we can get to a point this century where we’re doing that.
If so, then maybe the overall risk from the unknown unknowns would actually be smaller than some of these other things, because they would strike at a point after we’d already got our act together, instead of some of these technologies which are coming up very soon, where we clearly do not have our act together as we face them.
Robert Wiblin: When you describe this safe position, some people I guess are trying to create that by colonising Mars, so you kind of have a backup of humanity that could recolonise Earth if things went really badly here. I imagine you’re thinking more about a world in which we’re extremely patient and prudent, and we’re simply not going to take risks as a species.
Toby Ord: That’s right. I think that the idea of colonising other planets is often put forward as a solution to this, and I think that that’s actually a mistake. I’m very excited about spreading beyond Earth in the long-term future, and I think that a lot of humanity’s destiny is ultimately in the heavens above us. But I don’t see it as a good solution to these risks, and that’s for two different reasons.
I think in the short term, it’s not a very cost-effective way of reducing risk. If you look at the benefits you get from having a colony of people living on Mars, and how long it would take, it compares quite unfavourably to creating a similar kind of refuge somewhere in a desert on Earth, with a shielded atmosphere and so forth. Perhaps people would go into a kind of bunker for two years at a time, overlapping by a year with another bunker, such that at any point there are people who have been down there and protected for more than a year, so that circulating viruses would have been discovered before they could infect them. I think there’s a whole lot of very clever things you could do with much less expense on Earth, ultimately.
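As a toy illustration of the overlapping-bunker idea just described: the one-year entry interval and two-year stays below are assumptions added here for the sketch, and of course the real design questions are far harder.

```python
# Toy check of the overlapping refuge schedule: a new crew enters every year,
# each crew stays underground for two years.
STAY_YEARS = 2
ENTRY_INTERVAL_YEARS = 1

def crews_underground(t: float):
    """Entry times of crews still underground at time t (years since the scheme began)."""
    entries = [e * ENTRY_INTERVAL_YEARS for e in range(int(t) + 1)]
    return [e for e in entries if e <= t < e + STAY_YEARS]

# From the end of year one onwards, some crew has always been isolated for at
# least a year, so a pathogen circulating on the surface would have been
# noticed before it could reach them.
for t in (1.0, 1.5, 2.0, 5.5, 10.0):
    longest_isolation = max(t - e for e in crews_underground(t))
    print(t, longest_isolation >= 1.0)   # True at every checked time
```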
In the very long term, I think that having multiple planets helps a lot with all of the uncorrelated types of risk, so that if things go wrong on one planet, you could always repopulate it with people from elsewhere, unless all the planets went wrong simultaneously, which becomes extremely unlikely as we increase the number of locations people are in.
Uncorrelated risks would basically go away, but there are still a lot of correlated risks, such as a terrible ideology, or some kind of war with catastrophic weapons: the type of thing that may affect all of the locations through the same causal mechanism. Even having a galaxy of worlds wouldn’t offer all that much protection from these things, and what you really need for protection is a sufficiently unified civilisation that is never going to go to war with itself, or something like that.
I think that ultimately, to really get these risks down very close to zero, where they have to be in order for us to survive for very long periods of time, requires this kind of coordination. In the short term, you could imagine trying to achieve this. If people, perhaps young, dedicated people who really take these ideas seriously, found their way into government, perhaps became diplomats, or ambassadors, or other kinds of decision makers who could help shape international policy on this, then you can imagine a world where, say, several of the UN Security Council member countries all have governments that take this very seriously. Then we may be a long way along the path to stability.
And it may also be that there are periods like now where it’s very difficult to have innovation in global institutions. You can’t just create a new UN, or a big body like that. But there are other times, like just after the end of World War II, where there was a massive appetite for creating new global institutions, and a lot more things could happen. If we enter another period like that, we want to have a lot of people who are very well-placed to be able to take these ideas about how to protect the world, and use them to build new protective institutions that help us achieve global coordination in order to avoid these threats.
Robert Wiblin: Over the last few hours, on the one hand, we’ve talked about how wonderful the future might be, with better technology, how much better life could be, and how long things might go on. On the other hand, you’ve also said that you think there’s something like a one in six chance that human civilisation won’t make it through the next 100 years. Do you see the glass as half-full or half-empty?
Toby Ord: Ultimately, I’m much more on the half-full side. The reason it’s particularly alarming that we might not make it through is because we have such a great and glorious future at stake. I think it’s likely, more likely than not, that we’ll make it through this very difficult period that we’re in at the moment, with these anthropogenic existential risks. That we’ll make it through, reach a safe position, and be able to have this very long reflection where we can work out what our considered moral views about the future are, and be able to act on them, and produce a much better world. That will probably see us spread beyond the Earth, and bring life to lifeless worlds around us. This will probably be a tremendous thing, and possibly, with no exaggeration, the most amazing thing in our universe.
That’s why I think it’s so important to fight for this, having this chance, and to not let down all of the generations that came before us, and helped to pass on this flame from generation to generation, and to really try to bring it about that we have this great and glorious future.
Robert Wiblin: My guest today has been Toby Ord. Thanks for coming on the podcast, Toby.
Toby Ord: Thank you very much.