A particularly bad pandemic could incur costs up in the trillions, so you can’t insure against the whole thing. But insurers have a much bigger bankroll, and more to lose, than individual researchers – so scope insensitivity only starts kicking in for them at a much larger scale than it does for the researchers themselves.
A researcher is working on creating a new virus – one more dangerous than any that exists naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and those of their colleagues will be in danger. But if an accident could trigger a global pandemic, hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?
In a new paper, Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s unrealistic to expect them to make the correct adjustments. If they have an accident that kills 5 people, they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.
So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance.
First, an insurer would assess how much damage a particular project could cause, and with what likelihood. To proceed, the researcher would then need to take out insurance against that predicted risk. In return, the insurer promises to pay out – potentially tens of billions of dollars – if things go really badly.
This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards at a level that individual researchers can’t be expected to impose on themselves.
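To see how risk gets priced into such a scheme, here’s a toy sketch of an actuarially fair premium – probability of an accident times expected damages. The numbers and the `fair_premium` function are purely hypothetical illustrations, not taken from the paper:

```python
def fair_premium(probability_of_accident: float, expected_damages: float) -> float:
    """Actuarially fair annual premium: chance of accident times expected payout.

    This is a deliberate simplification; a real insurer would add loading
    for uncertainty, administration, and profit.
    """
    return probability_of_accident * expected_damages

# Hypothetical project: a 1-in-a-million annual chance of an accident
# that would cause $10 billion in damages.
premium = fair_premium(1e-6, 10e9)
print(premium)  # 10000.0 -> a $10,000 annual premium
```

Even under these made-up numbers, the point comes through: a project with a tiny chance of a catastrophic outcome still carries a substantial premium, which is exactly the cost signal individual judgement fails to produce.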
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
Owen is currently hiring for a selective, two-year research scholars programme at Oxford.
In this wide-ranging conversation Owen and I also discuss:
- Are academics wrong to value personal interest in a topic over its importance?
- What fraction of research has very large potential negative consequences?
- Why do we have such different reactions to situations where the risks are known and unknown?
- What are the downsides of waiting for tenure to do the work you think is most important?
- What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?
- How should people balance the trade-offs between having a successful career and doing the most important work?
- Are there any blind alleys we’ve gone down when thinking about AI safety?
- Why did Owen give to an organisation whose research agenda he is skeptical of?