Some of the deadliest events in history have been pandemics.1 Due to developments in technology, as well as our growing interconnectedness, we face the possibility of future biological disasters that are even worse.
For reasons we discuss below, we think the chances of such a biological catastrophe are uncomfortably high. There are also a number of practical options for reducing these risks. So we think working to reduce global catastrophic biological risks (GCBRs) is a promising way to safeguard the future of humanity right now.
Note: This page gives a broad overview of the problem area and links to many resources on the topic. For an in-depth review of the issue and additional details about work in this area, see our full report on global catastrophic biological risks. (That report was published in March 2020 and largely written prior to the COVID-19 pandemic — but we think its conclusions still stand today.)
If you’re keen to work on this area in your career, we may also be able to help through our one-on-one advising.
Pandemics — alongside other global catastrophic biological risks, like bioterrorism or biological weapons — pose a substantial existential threat to humanity. As biotech progress continues, it looks increasingly plausible that it will become easier to manufacture extremely dangerous pathogens (whether deliberately or accidentally), potentially far worse than the SARS-CoV-2 virus that causes COVID-19.
We can prepare for the next pandemic — and hopefully head it off before it happens. We’re excited about a number of approaches to reduce these risks. For example, we could find technological solutions that make it easier to prevent and treat infections, and policy solutions that ensure countries and institutions respond better to pandemics. While there’s lots of work going on in this area, very little of this work is focused on the worst-case risks, and as a result, we think work to prevent potentially existential pandemics is highly neglected.
Our overall view
Recommended - highest priority
We think this is among the most pressing problems in the world.
Pandemics — especially engineered pandemics — pose a significant risk to the existence of humanity. We think there is a greater than 1 in 10,000 chance of a biological existential catastrophe within the next 100 years.2
There are promising existing approaches to improving biosecurity, including both developing technology that could reduce these risks (e.g. better bio-surveillance), and working on strategy and policy to develop plans to prevent and mitigate biological catastrophes.
Why focus your career on preventing severe pandemics?
Advances in biotechnology may pose catastrophic risks.
COVID-19 has highlighted our vulnerability to worldwide pandemics and revealed weaknesses in our ability to respond in a coordinated and sophisticated way. And historical events like the Black Death and the 1918 flu show that pandemics can be some of the most damaging disasters for humanity.
It is sobering to imagine the potential impact of a pandemic pathogen that is much more contagious than any we’ve seen so far, more deadly, or both.
Unfortunately, the emergence of such a pathogen is not out of the question, particularly in light of recent advances in biotechnology, which have allowed researchers to design and create biological agents much more easily and precisely than was possible before. If the field continues on this trajectory, it may become possible over the coming decades for someone to create a pathogen engineered to be substantially more contagious than natural pathogens, more deadly, and/or more difficult to address with standard countermeasures.3
At the same time, it may become easier for states or malicious individuals to access these pathogens, and potentially use them as weapons, because the relevant technologies are also becoming more widely available and easier to use.4
Dangerous pathogens engineered for research purposes could also be released accidentally through a failure of lab safety.5
Either scenario could result in a catastrophic ‘engineered pandemic.’ Although making a pathogen as dangerous as possible will not generally be in the interest of states or other actors (in part because it would likely threaten their own forces), a purposefully engineered pandemic pathogen has the potential to be significantly more deadly and transmissible than natural ones. The possibilities of accidents, recklessness, and unusual malice mean we can’t rule out the prospect of a pathogen being released that could kill a large percentage of the population.
How likely we are to face such a pathogen is a matter of debate. But over the next century the likelihood doesn’t seem negligible.6
Could an engineered pandemic pose an existential threat to humanity? Again, there is reasonable debate here. In the past, societies have recovered from pandemics as severe as the Black Death, which killed around one-third to one-half of Europeans.7 But from what we’ve seen, GCBRs look like some of the larger contributors to existential risk this century.8
Reducing the risk of biological catastrophes, whether by reducing the chances of potential outbreaks or by preparing to mitigate their worst effects, therefore seems very important.9
There are clear actions we can take to reduce these risks.
Work with government, academia, and industry to improve the governance of gain-of-function research involving potential pandemic pathogens, commercial DNA synthesis, and other research and industries that may enable the creation of (or expand access to) particularly dangerous engineered pathogens. At times this may involve careful regulation.
Develop broad-spectrum testing, therapeutics, and other technologies and platforms that could be used to quickly test, vaccinate, and treat billions of people in the case of a large-scale, novel outbreak.11
Most existing work is not aimed at reducing risks of the worst outcomes.
The broader field of biosecurity and pandemic preparedness has made major contributions to GCBR reduction. Many of the best ways to prepare for more probable but less severe outbreaks will also reduce GCBRs, so many people who are not concerned with GCBRs in particular still do work that is useful for reducing them. For this reason, we think advancing parts of the broader field — especially in areas like vaccine research or broad-spectrum treatments — can be very valuable, even from the perspective of just trying to reduce the chances or severity of the worst potential outbreaks.
There may be even more valuable opportunities. It seems to be relatively uncommon for people in the broader field of biosecurity and pandemic preparedness to aim their work specifically at reducing GCBRs. Projects that disproportionately reduce GCBRs also seem to receive a relatively small proportion of health security funding.12 In our view, the costs of biological disasters grow nonlinearly with severity because of the increasing potential for the event to contribute to existential risk. This suggests that projects that reduce GCBRs in particular should receive more funding and attention than they currently seem to.
Moreover, insofar as more targeted interventions would be useful (and we’d guess they would be13) the fact that there is comparatively little work targeted toward reducing GCBRs right now suggests that the area is somewhat neglected. This means that if you enter the field of biosecurity and pandemic preparedness aiming to reduce GCBRs, there may be particularly good opportunities to do so that others have not already pursued.
If you do enter the field aiming to reduce GCBRs, it might be easier to work on broader efforts that have more mainstream support first, and then transition to more targeted projects later.
If you are already working in biosecurity and pandemic preparedness (or a related field), this might be a good time to advocate for a greater focus on measures likely to help us with whatever outbreak surprises us next. There may be a greater openness to ideas in this area now, as people reflect on how underprepared we were for COVID-19.14
What kinds of work are most needed?
Biosecurity and pandemic preparedness are multidisciplinary fields. To address these threats effectively, we need at least:
Technical and biological researchers to investigate and develop tools for controlling outbreaks, such as broad-spectrum testing and antivirals. (See examples of research questions.)
Strategic researchers and forecasters to develop plans, such as for how to develop or scale up vaccines quickly.
People in government to pass and implement policies aimed at reducing biological threats.
The Future of Humanity Institute (FHI) at Oxford University conducts multidisciplinary research on how to ensure a positive long-run future. With the recent hire of Piers Millett, FHI is looking to expand its research and policy functions to reduce catastrophic risks from biotechnology.
The Nuclear Threat Initiative is a US non-partisan think tank that works to prevent catastrophic attacks and accidents with nuclear, biological, radiological, chemical and cyber weapons of mass destruction and disruption. See current vacancies.
Open Philanthropy, a foundation that broadly shares our values, has made grants to both of these organisations based on assessments of the quality of their work, staff, and structure; their global influence; and how they are likely to use the grant.
However, both organisations might still have room for more funding.15 You can help fill any gaps by ‘topping up’ Open Philanthropy’s grants with your own donations.16 (Disclosure: Open Philanthropy is our single largest funder.)
We’ve looked at the reasoning behind these estimates and are uncertain about which ones we should most believe. Overall, we think the risk is around 0.1%, and very likely to be greater than 0.01%, but we haven’t thought about this in detail.↩
Gene sequencing, editing, and synthesis are all now possible and are becoming increasingly systematised, making it feasible in principle to engineer and produce biological agents in a way not too dissimilar to how we design and produce computers or other products. This may allow people to design and create pathogens that combine properties of natural pathogens or ones with wholly new features. (Read more)
Scientists are also investigating what makes pathogens more or less deadly and contagious. This improved understanding may help us better prevent and mitigate outbreaks. But it also means that the information required to design more dangerous pathogens is increasingly available.
In past decades, genetic engineering could be described as a ‘craft’ that involved a lot of uncertainty, tacit knowledge, and trial-and-error. The ambition of some synthetic biologists (1, 2, 3, and ‘BioBricks’) has been to make this process more systematic and modular, which would allow more people with less extensive experience to create biological material reliably and economically — more like how we manufacture other products.
There has been steady progress on this front. Innovations in the last decades have made it easier to design and manufacture genetic material. Commercial synthesis is increasingly available and economical. Increasingly large libraries of genetic sequences are available, and sequencing costs are decreasing. Some steps have been taken to manage the risks from this availability, such as screening commercial synthesis orders, though more will need to be done as the industry continues to advance.
Again, it’s worth emphasising that there are benefits to these developments as well as risks. More people being able to sequence and synthesise genetic material means faster progress in a range of areas, such as innovative new drug therapies.↩
Why would well-intentioned researchers deliberately create unnaturally dangerous pathogens? One purpose is ‘gain-of-function research,’ in which scientists try to increase the contagiousness or virulence of a pathogen in order to better understand its characteristics, including whether particular mutations should be treated as a warning sign if they occur in nature. The result is typically a slightly more dangerous pathogen that’s still well within the bounds of what virologists work with on a day-to-day basis. However, when the research is performed on potential pandemic pathogens, or particularly virulent ones, the potential to create something unnaturally dangerous becomes a concern. The most publicly prominent example of gain-of-function research was a 2011 experiment to increase the transmissibility of avian flu (H5N1) in mammals. The experiment was controversial and triggered a review by the National Science Advisory Board for Biosecurity.↩
In a 2008 survey, the median expert estimated that there was a 10% chance of 1 billion people dying in an engineered pandemic before 2100, and a 2% chance of an engineered pandemic causing extinction. The authors stress that for various reasons these estimates must be taken with a grain of salt. Nonetheless, arguments like the ones presented here suggest these numbers are relatively plausible.↩
Luke Muehlhauser’s writeup on the Industrial Revolution, which also discusses some of the deadliest events in history, reviews the evidence on the Black Death as well as other outbreaks. Muehlhauser’s summary: “The most common view seems to be that about 1/3 of Europe perished in the Black Death, starting from a population of 75-80 million. However, the range of credible-looking estimates is 25%-60%.” See footnote 1 for a table of Muehlhauser’s estimates of world fatalities due to different historical events.↩
Here we are presenting the case for working on GCBRs. But of course whether this is the area you should focus on in your career depends (among other things) on your fit for the area as well as how it compares to others you could focus on instead. For example, as Greg Lewis points out, working to increase the chances of safe and beneficial AI seems orders of magnitude more neglected than work on GCBRs, and seems at least as important for safeguarding the future of humanity. This suggests that if your circumstances and fit are equally good for both areas, working to ensure safe and beneficial AI is likely to be the better choice between the two. (Of course, there are many other potential focus areas that might be even better for you.)↩
Greg Lewis estimates that a quality-adjusted ~1 billion USD is spent annually on GCBR reduction. Most of this comes from work that is not explicitly targeted at GCBRs, but is rather disproportionately useful for reducing them. The US budget for health security in general is ~14 billion USD per year. Worldwide, the budget is probably something like double or triple that, so spending that’s particularly helpful for GCBR reduction is probably just a few percent of the total; the spending for explicit GCBR reduction would be much less. See the relevant section of our GCBR profile, including footnote 21.↩
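The back-of-the-envelope arithmetic above can be made explicit. A minimal sketch (the figures are the rough annual estimates quoted in this footnote, not precise data):

```python
# Rough figures quoted in the footnote above (USD per year; estimates, not precise data).
gcbr_spending = 1e9             # quality-adjusted spending useful for GCBR reduction
us_health_security = 14e9       # approximate US health security budget

# "Double or triple" the US budget gives a worldwide range.
world_low = 2 * us_health_security
world_high = 3 * us_health_security

# GCBR-relevant spending as a share of the worldwide total.
share_low = gcbr_spending / world_high   # lower bound (larger worldwide budget)
share_high = gcbr_spending / world_low   # upper bound (smaller worldwide budget)

print(f"GCBR-relevant share of worldwide spending: {share_low:.1%} to {share_high:.1%}")
# With these figures the share comes out at roughly 2-4%, i.e. "a few percent of the total."
```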
Why might people focus less on work targeted toward GCBRs, even though they are the risks of the worst catastrophes? One answer is that people with the power to allocate resources are not sufficiently aware of GCBRs, or think that the risks are extremely low. Another answer is short-term thinking: since the technologies most worrying for GCBRs haven’t yet been fully developed, it’s very unlikely we’ll see a biological catastrophe in the next few years; people subject to political and other pressures to prioritise the near future may therefore be less inclined to focus on them. Finally, GCBRs are a type of ‘public good’ problem, so we generally have reason to expect them to be somewhat neglected. Read more.↩
How useful more targeted work is for reducing GCBRs vs growing the broader field of biosecurity and pandemic preparedness is a matter of debate. In a recent podcast episode, Marc Lipsitch argued that the best way to address GCBRs may be to simply build up the broader field, because of the substantial overlap between biological threats of all sizes and the tools needed to combat them. In our writeup on reducing GCBRs, Greg Lewis suggested that this strategy — which he calls “buying the index” of conventional biosecurity — would probably be less effective than trying to complement the existing portfolio with work that’s particularly important for reducing GCBRs.
We suspect Greg’s view is closer to the truth, though it’s not obvious, and Greg also expresses uncertainty on the matter. We have a general heuristic: all else equal, a more targeted intervention — one whose primary goal is to make progress on a smaller number of issues — is likely to have a bigger effect on those issues than a less targeted intervention that has more goals. In pursuing the less targeted intervention, you can face more tradeoffs between the different goals, which can reduce your impact on each one considered separately.
Furthermore, with regard to this particular case: the people who have shaped the broader field of biosecurity and pandemic preparedness seem to have generally been optimising for reducing the risks of smaller, more likely pandemic outbreaks. It would be surprising if in doing so they also optimised the field for reducing GCBRs, such that just building the field in general was the best thing someone could do to reduce GCBRs.
That said, because there’s already a lot of support for the kinds of interventions favoured by the broader field, including interventions that do reduce GCBRs, it could in some cases be higher impact to expand the field or in some other way make it more effective at achieving its goals. For example, if you could manage to expand the entire field by 1% in terms of funding and labour, that might easily be better than a more targeted project aimed at reducing GCBRs.
Indeed, Greg also guesses that it’s sometimes better to complement the existing biosecurity portfolio with work that’s especially useful for reducing GCBRs (but which is also helpful for addressing threats of less severe events) than it would be to explicitly target reducing GCBRs. This suggests that the argument for targeting interventions can be taken too far.↩
Preventing the next unknown would-be pandemic is, of course, an existing field of research. The COVID-19 pandemic may raise this field’s profile, or it might lower it. This depends on whether enough people are inspired to focus on preventing anything like COVID-19 (or worse) from happening again, or whether the large majority of people instead prioritise combating the current crisis. The publication of this New York Times Magazine article suggests that there is mainstream interest in pursuing broader prevention efforts.↩
Open Philanthropy generally doesn’t aim to provide 100% of an organisation’s funding, as doing so might make the organisation too dependent on them alone. This creates a need for smaller donors to match or ‘top up’ their funding.↩
Because of Open Philanthropy’s research capacity, we find that one of the most efficient ways to donate effectively is to simply ‘top up’ their grants — filling the funding gap left by their grants to organisations they’ve selected. Read more about this methodology.↩