We don’t always know exactly what is important, and often people end up with better taste for choosing important things to work on if they’ve had a space where they can step way back and say, “Okay. So what’s really going on here? What is the game? What do I want to be focusing on?”

Owen Cotton-Barratt

From one point of view, academia forms one big ‘epistemic’ system — a process which directs attention, generates ideas, and judges which are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems.

How these systems absorb, process, combine and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it essentially determines the direction of the future.

With that in mind, today’s guest Owen Cotton-Barratt founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers the freedom to try to understand how the world works.

Instead of you having to pay for a master’s degree, the RSP pays you to spend significant amounts of time thinking about high-level questions, like “What is important to do?” and “How can I usefully contribute?”

Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible.

The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics.

It also attracts people without much experience, but who have a lot of ideas. If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?”

Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory.

Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years by not pursuing an unproductive path could mean you ultimately have a much bigger impact with your career.

RSP applications are set to open in the spring of 2021 — but Owen thinks it’s helpful for people to think about it in advance.

In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover:

  • Extinction risk classification and reduction strategies
  • Preventing small disasters from becoming large disasters
  • How likely we are to go from being in a collapsed state to going extinct
  • What most people should do if longtermism is true
  • Advice for mathematically-minded people
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript: Zakee Ulhaq

Highlights

The high bar for human extinction

So if you think of pandemics, for example: if you have a disease which has a 10% case fatality rate, so 10% of people who get the disease die, then maybe it can spread to a lot of people, and 10% of all of the people it spreads to die. So in the scale-up phase, it’s easy to see how by infecting more and more people, it’s going to lead to more and more deaths. And it could get up to a very large number of deaths. But even if it infected everybody in the world, it’s not going to kill more than 10% of people. There could be other features. Maybe you could have something bad which happens to everybody who lives in cities. I think more than half of the world’s population live in cities.

And so if you had something which could kill everybody who lived in cities, that would kill a lot of people. But humans are just spread out, and they’re in all sorts of different circumstances. And some of those might be kind of hard to get to. And if you’re asking what could kill everybody, then you need to get into all of those little niches.
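To make the arithmetic behind that 10% ceiling explicit, here is a one-line worked bound (a minimal sketch; the symbols are illustrative, not from the episode):

```latex
% N   = world population
% a   = attack rate: the fraction of people ever infected, 0 <= a <= 1
% CFR = case fatality rate, here 0.10
% Deaths grow with infections during the scale-up phase,
% but even universal infection (a = 1) cannot exceed the CFR ceiling:
\text{deaths} = \mathrm{CFR} \cdot a \cdot N \le 0.10\, N
```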

Preventing small disasters from becoming large disasters

The probability that we end up getting to extinction is just equal to the probability that we have a small disaster in the first place, multiplied by the conditional probability that it turns into a large disaster, conditional on having a small disaster, and then multiplied also by the conditional probability that it causes extinction, conditional on being a large disaster. And so if you halve any one of those probabilities, you’ll halve the thing that you get when you multiply them all together. And the reason that this may be useful at all is that it gives a rule of thumb for thinking about… Okay, how should we balance our resources between what’s going on at the different layers?
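Written out as an equation, the decomposition looks like this (a minimal restatement of the reasoning above; the event names are ours, and it assumes extinction only arises via this small-to-large chain):

```latex
% S = a small disaster occurs
% L = the small disaster escalates into a large disaster
% E = the large disaster causes extinction
% Assuming E can only happen via L, and L only via S, the chain rule gives:
P(E) = P(S) \cdot P(L \mid S) \cdot P(E \mid L)
% Halving any one factor halves the product, so a 50% reduction at any
% layer (prevention, containment, or resilience) halves extinction risk:
\tfrac{1}{2} P(S) \cdot P(L \mid S) \cdot P(E \mid L) = \tfrac{1}{2} P(E)
```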

One thing that I think can often come up is the idea of diminishing returns from looking into what you can do about something. So say we were thinking about the risk of terrorists releasing an engineered pandemic, and we noticed that we had done lots of work on trying to reduce terrorist access to the kind of biological agents which could be used to do this, but we hadn’t done any technical work on whether there were technological solutions that could be used to slow down an engineered pandemic, which is maybe different from what happens with just natural pandemics. Then that might be a clue that there could be low-hanging fruit in working on that stage of things: maybe there’s something we can do to reduce the risk conditional on the first thing happening.

How likely are we to go from being in a collapsed state to going extinct?

I think it is plausible that the correct number is pretty big, like more than 10%. I also think that it’s pretty plausible that the correct number… Well, what I mean by correct number is maybe where I’d end up on this if I spent several years looking into it and had a team of really competent researchers at my command looking into some questions for me – but yeah, I think it’s plausible that the risk I’d end up at is below 0.1%.

And there are questions both about what will be technologically feasible for a society in this state and about what will be economically feasible. Would we be able to recapitulate the industrial revolution? Some people have argued that maybe this will be hard because the fossil fuels will all be burnt up. On the other hand, it’s likely that humanity would be approaching things from just a pretty different state of the world.

For most disasters we’d be imagining, there would still be a bunch of metal lying around in these human artifacts, which has already been refined and made into things. And the type of technology that you’re maybe able to build, starting with scavenged things from that, will look rather different from what you’d build starting from scratch.

What should most people do if longtermism is true?

So I think that if longtermism is true, my tentative answer is that people should, first of all, try to be good citizens themselves. And I mean ‘citizen’ in some broad way: I don’t mean citizen of their country, and I don’t even mean citizen of the world as it is today, but citizen of humanity stretching backwards and forwards through time, looking to be helpful to others around them and trying to have good decision-making processes.

And secondly, people should try to spread those properties and encourage others to do the same. And then I also think that sometimes it’s right to be more strategic about it and to say, “Okay, I actually have a good understanding of how things might turn out here, and so the fog of uncertainty about the future is clearing a bit. We should follow a particular path”, and to be prepared to do that when the fog clears enough that that’s the right thing to do.

Advice for mathematically-minded people

I think that learning a bunch of mathematics can be useful for getting better intuitions about some kinds of systems which might be important. This can be for the types of things we want to build mathematical models of, which comes up a bunch in bits of economics. It can also be for understanding the details of technologies that are being developed and what’s going on there. I do think that mathematics can provide a pretty important toolkit. I also think mathematical training can provide some other useful habits of thought. And this is something which I feel like I got out of doing mathematics degrees.

I think that in a mathematics degree, you get trained to be very careful about your arguments and to notice, “Okay, which bits are solid, and which bits do we need to go and dig in on?” If we’re trying to get new knowledge in a domain where it’s easy to go out and test things, then that care isn’t so important, because at the end of the day we can say, “Well, okay, let’s come back to testing this”. But sometimes we want to understand things where we don’t have good access to empirics, and I think understanding the effects of actions on the long-term future is a pretty important case like this.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
