Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

A particularly bad pandemic could incur costs which were up in the trillions, so you can’t insure the whole thing. Insurers have a much bigger bankroll and more to lose than the individual researchers, and so scope insensitivity only starts kicking in for them at a much larger scale than when we were thinking about the individual researchers doing this.

Owen Cotton-Barratt

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and the lives of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?

In a new paper, Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s unrealistic to expect them to make the correct adjustments. If they have an accident that kills 5 people, they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.

So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.

The insurer would assess how much damage a particular project could cause, and with what likelihood. In order to proceed, the researcher would then need to take out insurance against that predicted risk. In return, the insurer promises to pay out – potentially tens of billions of dollars – if things go really badly.

This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards at a level that individual researchers can’t be expected to impose on themselves.
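To make the pricing logic concrete, here is a minimal sketch in Python with entirely made-up numbers. The loading factor, coverage cap, and probability figures are illustrative assumptions, not values from the paper or the episode.

```python
# Rough illustration (not from the paper): pricing a research project's
# liability premium as expected loss plus a loading factor.

def liability_premium(p_accident: float,
                      damage_if_accident: float,
                      loading_factor: float = 1.5,
                      coverage_cap: float = 30e9) -> float:
    """Premium = probability of a serious accident x covered damages,
    scaled by a loading factor for the insurer's costs and risk margin.
    The insurer's payout is capped at coverage_cap (an assumed figure)."""
    covered_damage = min(damage_if_accident, coverage_cap)
    return p_accident * covered_damage * loading_factor

# Hypothetical numbers: a 1-in-100,000 chance per year of an accident
# causing $30 billion in covered damages.
print(f"Annual premium: ${liability_premium(1e-5, 30e9):,.0f}")
```

Under these made-up inputs the premium comes to $450,000 a year – small next to the capped payout, but large enough that a project only goes ahead if its expected benefits justify bearing that priced-in risk.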

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Owen is currently hiring for a selective, two-year research scholars programme at Oxford.

In this wide-ranging conversation Owen and I also discuss:

  • Are academics wrong to value personal interest in a topic over its importance?
  • What fraction of research has very large potential negative consequences?
  • Why do we have such different reactions to situations where the risks are known and unknown?
  • What are the downsides of waiting for tenure to do the work you think is most important?
  • What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
  • How should people balance the trade-offs between having a successful career and doing the most important work?
  • Are there any blind alleys we’ve gone down when thinking about AI safety?
  • Why did Owen give to an organisation whose research agenda he is skeptical of?

Highlights

I don’t hear stories about people just doing these things which are deadly boring to them. Maybe sometimes somebody trawls through the data set even though it’s not interesting, but they have some kind of personal drive to do that and that makes it interesting in the moment to them.

It’s a bit complicated. On the one hand you might think that academics are being kind of unvirtuous by never going and looking at things which aren’t interesting to them. On the other hand, I actually think that having a sense of what is interesting and what is boring is a powerful intellectual tool for making real progress on questions.

I think that the general principle of working out how do we build intellectual communities working on the problems that we want to, is actually a really important one. Many of the most important problems in the world it seems to me are things that we’re still a bit confused about. Things where research is going to be a key aspect of getting towards a solution. But to get good research, we need to have really good people who have a good sense of what it is that we actually need answers to and are motivated and excited to go and work on that. And so if we can work out how to set up the systems that empower those people to address those questions, that seems really valuable to me.

A PhD is a long time investment and I know a lot of people who do PhDs and then think, “This isn’t what I want to be working on.” And often people enter into PhD programs, because that’s the thing you do if you want to go and be a researcher.

I think that if you are talented, often PhD supervisors will want to work with you. And if you have ideas of, ‘actually, I think that this is a particularly valuable research topic’, you can quite likely find somebody who would be excited to supervise you doing a PhD on that. And there you’ve used some of your selection power on choosing the topic and going for things that seem particularly important. Then you can still spend a lot of your time on working out, okay, how do I actually just write good papers in this topic?

Articles, books, and other media discussed in the show

Related episodes

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

Subscribe here, or anywhere you get podcasts:

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.