“The Precipice” is a time where we’ve reached the ability to pose existential risk to ourselves, which is substantially bigger than the natural risks, the background that we were facing before. And this is something where I now think that the risk is high enough, that this century, it’s about one in six.

Dr Toby Ord

This week Oxford academic and 80,000 Hours advisor Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It’s about how our long-term future could be better than almost anyone believes, but also how humanity’s recklessness is putting that future at grave risk: in Toby’s reckoning, a 1 in 6 chance of being extinguished this century.

I loved the book and learned a great deal from it.

While preparing for this interview I copied out 87 facts that were surprising to me or seemed important. Here’s a sample of 16, plus a bonus:

  1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined.
  2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s.
  3. In 2008 a ‘gamma ray burst’ reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren’t sure what generates gamma ray bursts but one cause may be two neutron stars colliding.
  4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped pursuing the Bomb.
  5. If we eventually burn all the fossil fuels we’re confident we can access, the leading Earth-system models suggest we’d experience 9–13°C of warming by 2300, an absolutely catastrophic increase.
  6. In 1939, the renowned nuclear scientist Enrico Fermi told colleagues that a nuclear chain reaction was but a ‘remote possibility’. Four years later Fermi himself was personally overseeing the world’s first nuclear reactor. Wilbur Wright predicted heavier-than-air flight was at least fifty years away — just two years before he himself invented it.
  7. The Japanese bioweapons programme in the Second World War — which included using bubonic plague against China — was directly inspired by an anti-bioweapons treaty. The reasoning ran that if Western powers felt the need to outlaw their use, these weapons must be especially good to have.
  8. In the early 20th century the Spanish Flu killed 3–6% of the world’s population. In the 14th century the Black Death killed 25–50% of Europeans. But that’s not the worst pandemic to date: that’s the passage of European diseases to the Americas, which may have killed as much as 90% of the local population.
  9. A recent paper estimated that even if honeybees were completely lost — and all other pollinators too — this would only create a 3 to 8 percent reduction in global crop production.
  10. In 2007, foot-and-mouth disease, a high-risk pathogen that can only be studied in labs following the top level of biosecurity, escaped from a research facility leading to an outbreak in the UK. An investigation found that the virus had escaped from a badly-maintained pipe. After repairs, the lab’s licence was renewed — only for another leak to occur two weeks later.
  11. Toby estimates that great power wars effectively pose more than a percentage point of existential risk over the next century, making them a much larger contributor to total existential risk than all the natural risks like asteroids and volcanoes combined.
  12. During the Cuban Missile Crisis, Kennedy and Khrushchev found it so hard to communicate, and the long delays so dangerous, that they established the ‘red telephone’ system so they could write to one another directly, and better avoid future crises coming so close to the brink.
  13. A US airman claims that during a nuclear false alarm in 1962, which he himself witnessed, two airmen from one launch site were ordered to run through the underground tunnel to the launch site of another missile, with orders to shoot a lieutenant there if he continued to refuse to abort the launch of his missile.
  14. In 2014 GlaxoSmithKline accidentally released 45 litres of concentrated polio virus into a river in Belgium. In 2004, SARS escaped from the National Institute of Virology in Beijing. In 2005 at the University of Medicine and Dentistry in New Jersey, three mice infected with bubonic plague went missing from the lab and were never found.
  15. The Soviet Union covered 22 million square kilometres, 16% of the world’s land area. At its height, during the reign of Genghis Khan’s grandson, Kublai Khan, the Mongol Empire had a population of 100 million, around 25% of the world’s population at the time.
  16. All the methods we’ve come up with for deflecting asteroids wouldn’t work on one big enough to cause human extinction.
  17. Bonus: here are fifty-one ideas from the book for reducing existential risk.

While I’ve been studying this topic for a long time, and have known Toby for eight years, a remarkable amount of what’s in the book was new to me.

Of course the book isn’t a series of isolated amusing facts, but rather a systematic review of the many ways humanity’s future could go better or worse, how we might know about them, and what might be done to improve the odds.

And that’s how we approach this conversation, first talking about each of the main risks, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which my colleague Arden Koehler and I barely even had to work for.

For those wondering about the pandemic just now, this extract about diseases like COVID-19 was the most read article in The Guardian USA on the day the book was launched.

Some topics Arden and I bring up:

  • What Toby changed his mind about while writing the book
  • Asteroids, comets, supervolcanoes, and threats from space
  • Why natural and anthropogenic risks should be treated so differently
  • Are people exaggerating when they say that climate change could actually end civilization?
  • What can we learn from historical pandemics?
  • How to estimate the likelihood of nuclear war
  • Toby’s estimate of unaligned AI causing human extinction in the next century
  • Is this century the most important time in human history, or is that a narcissistic delusion?
  • Competing visions for humanity’s ideal future
  • And more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Highlights

The Precipice

Carl Sagan talked about this stuff in the ’80s, when he realized these risks, and he had some great conceptual work on it that’s very similar to the work done by Derek Parfit. He attributed the problem to humanity growing powerful without growing commensurately wise. I think that’s a pretty good way to look at it. It’s not an argument that we shouldn’t have high technology because high technology will doom us, so we should be Luddites. Rather, you need to be aware of the fresh responsibilities that come with these new levels of power. We’re capable of growing our power exponentially on a lot of metrics, but there aren’t many metrics where we’d say humanity’s wisdom or institutional ability has been growing exponentially. We generally think it’s growing falteringly, if at all. So he sees this as the problem, and I kind of agree. He calls it a “Time of Perils”. I call this time “The Precipice”: a time where we’ve reached the ability to pose existential risk to ourselves, which is substantially bigger than the natural risks, the background that we were facing before. And this is something where I now think the risk is high enough that, this century, it’s about one in six.

So I think that there’s a time period, The Precipice, that can’t go on for all that long. One reason it can’t go on for all that long is that if the risk keeps increasing, or if it stays the same, then you just can’t survive that many centuries with a risk like this. That changes a bit depending on how much risk you think there is now. You might think that the case for existential risk is still really strong and important, but that there’s only a 1% risk, and that therefore the time period could perhaps go on for a very long time. I think it could only go on for a handful of centuries. So it’s a bit like thinking about something like the Enlightenment, or the Renaissance: some kind of named time period like that is what I’m talking about. I think that either the risk is going to go higher, or we fail out of this time period, or we get our act together and lower these risks.

Estimating total natural risk

We have this catalog of risks that we’ve been building up: these things that we have found that could threaten us, a lot of which we only found in the last hundred years. So you might think, “Well, hang on, what’s the chance we’re going to find a whole lot more of those this century or next century”? Maybe the chance is pretty reasonable. If you plotted these over time by when they were discovered (which maybe some enterprising EA should do), you could see whether it looks like we’re running out of them. I don’t think there are particular signs that we are, but there is an argument that can actually deal with all of them, including ones that we haven’t yet discovered or thought about, which is that we’ve been around for about 2,000 centuries as Homo sapiens. Longer, if you think about the genus Homo. And suppose the existential risk per century were 1%. Well, what’s the chance that you would get through 2,000 centuries of 1% risk? It turns out to be really low, because of how exponentials work, and you have almost no chance of surviving that. So this gives us a kind of argument that the risk from natural causes, assuming it hasn’t been increasing over time, must be quite low. In the book I go through a bit more about this, and there’s a paper I wrote with Andrew Snyder-Beattie where we go into a whole lot of mathematical detail. But, basically speaking, with 2,000 centuries of track record, what we can learn is that the chance ends up being something less than one in 2,000. And this applies to the natural risks.
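To make the arithmetic behind that argument concrete, here is a minimal sketch (our illustration, not the model from the book or the Snyder-Beattie paper): with a constant per-century risk p, the chance of surviving n centuries is (1 − p)^n, and over 2,000 centuries that only stays non-negligible if p is very small.

```python
# Minimal sketch of the survival argument above (our illustration, not the
# model from the book or the Snyder-Beattie paper): with a constant
# existential risk p per century, the chance of making it through n
# centuries is (1 - p) ** n.

def survival_probability(risk_per_century: float, centuries: int) -> float:
    """Probability of surviving `centuries` centuries of constant per-century risk."""
    return (1 - risk_per_century) ** centuries

centuries_survived = 2000  # roughly how long Homo sapiens has been around

print(survival_probability(0.01, centuries_survived))      # ~2e-9: 1% per century is very hard to square with our track record
print(survival_probability(1 / 2000, centuries_survived))  # ~0.37: a risk around 1 in 2,000 is still compatible with surviving this long
```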

Climate change

So, when I first looked into it, one thing I wanted to be able to say when writing a section on it is, well, some people talk about the Earth becoming like Venus. So having this runaway climate change where the oceans start evaporating, creating more and more water vapor in the air and that this can run to completion and basically we’d have no oceans and the temperature goes up by hundreds of degrees and all life ends. Or at least all complex life. So I wanted to be able to at least say, “Don’t have to worry about that”.

It turns out that there is a good Nature paper saying that that can’t happen no matter how much CO2 is released: you’d need brightening of the sun, or to be closer to the sun, for it to happen at any CO2 level. Now, normally one Nature paper saying something is enough to say, “Yeah, probably true”, though there’s a limit to how much epistemic warrant can be created by a single Nature paper. But it still seems like that probably isn’t going to happen, and no one’s really suggesting it is. There’s another thing that was a bit alarming, though, called a ‘moist greenhouse effect’, which is similar but doesn’t go quite as far, though you could still get something like 40 degrees Celsius of extra temperature. And the scientists are like, “Oh yeah, I mean you can’t get this runaway, but you might be able to get this moist one”. And from a lay person’s perspective, you think, “Well hang on a second, why didn’t you include that in the other category? I thought when you were giving reassurance that the other thing wasn’t possible, you weren’t saying there’s a thing that’s, for all intents and purposes, identical, which is perhaps possible.” And that one also probably can’t happen, but people are less sure, and there are some models that suggest maybe it can.

Biological threats

Arden Koehler: So it seems like you think that natural pandemics, like the ones you’ve listed, although extremely serious, pose a pretty tiny chance, one in 10,000, of causing extinction in the next century. Whereas you think engineered pandemics pose a dramatically higher risk, one in 30. Why the huge difference?

Toby Ord: Yes. So the main reasons are that there is this natural risk argument that we discussed earlier, whereby the total amount of natural risk can’t really be much higher than about one in 10,000 per century. Part of that comes down to how much that argument applies to these natural pandemics. I suggested earlier that it doesn’t quite apply, because we may have become less safe in some ways, but there are also many ways in which we’ve become more safe: we understand diseases much better with the germ theory of disease, we have antibiotics, we have quarantine and so on, and we’re spread much further across the world. So we have a whole lot of reasons why we’re actually more safe or less safe, and it’s hard to be sure how that all balances out. I think it leaves natural pandemics somewhere in the same ballpark as the other natural risks. Whereas when it comes to engineered pandemics, there are several different ways they could arise. There are cases of gain-of-function research, where scientists make diseases more deadly or more infectious in the lab. So that’s a case where they’re being engineered for these bad qualities.

The idea is obviously to help us overall, by better understanding what genetic mutations need to happen for a disease to become more lethal or more infectious, so that we can then do better disease surveillance in the wild and check for those mutations, things like that. But this research poses its own risks. I don’t think much of the risk comes from that, though, partly because even if such a pathogen did escape, it’s still very difficult for it to kill everyone in the world, because it’s not that different from the wild types of these diseases. It’s somewhat worse, but they’re not making it thousands of times worse. Then there are also diseases engineered for use in war: biowarfare. That is quite alarming, because there they are trying to make them much worse, and we have a lot of known cases of it. And then there’s also the possibility of what we often think of as terrorist groups, but perhaps cults, that are omnicidal and have some plan to kill everyone in the world. They are at least attempting to design exactly the thing that could kill everyone, though they’re much less resourced than a military is. So there are a few different ways pandemics could be engineered, but the thing they have in common is that they’re working towards an actual objective of widespread destruction, potentially aiming to kill everyone, in a way that natural pathogens are not. Natural pathogens are being optimized to maximize their inclusive genetic fitness, which is not the same as killing the host: they only kill the host inasmuch as that helps them spread. So it’s quite different to what’s happening there, and it’s that agency, the fact that someone is actually trying to make something that would wipe out humanity, that makes the chance so much higher.

Artificial Intelligence

I think that a bird’s eye view of it all is actually helpful. You can divide it into two parts. One of them is, “What’s the chance that we will develop something more intelligent than humanity in the next 100 years”? And then, if we did, as I point out, humanity has got to this privileged position, where it’s humans who are in control of our destiny, and of the future of Earth-based life, in a way that, say, blackbirds or orangutans aren’t, because we have unique cognitive abilities. That’s both what we think of as our intelligence and also our ability to communicate, to learn and so forth, and to work together as teams, both within a generation and between generations.

It’s something broadly like our intelligence that put us in this unique position, and we’re talking about creating something that knocks us out of that position, so we lose what was unique about us in controlling our potential and our future. Then one question is, what’s the chance that we develop something like that? And the other question is, conditional on that, what’s the chance that we lose our whole potential? Basically, you can look at my 10% like this: there’s about a 50% chance that we create something that’s more intelligent than humanity this century, and then there’s only an 80% chance that we manage to survive that transition while remaining in charge of our future. If you put that together, you get a 10% chance that this is the time when we lose control of the future in a negative way.
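Spelled out, that 10% is just the product of the two estimates Toby gives. Here’s a minimal sketch of the arithmetic (the variable names are our own labels, not Toby’s):

```python
# Toby's decomposition of existential risk from unaligned AI this century
# (variable names are our own labels for the two estimates he gives).
p_smarter_than_human_ai = 0.5  # chance we develop something more intelligent than humanity this century
p_survive_transition = 0.8     # chance we stay in charge of our future, given that we do

p_existential_risk_from_ai = p_smarter_than_human_ai * (1 - p_survive_transition)
print(p_existential_risk_from_ai)  # 0.1, i.e. the ~10% estimate
```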

Articles, books, and other media discussed in the show

The Precipice Reading Group — an effective altruism virtual program.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.