Preventing catastrophic pandemics

Some of the deadliest events in history have been pandemics. COVID-19 demonstrated that we’re still vulnerable to these events, and future outbreaks could be far more lethal.

In fact, we face the possibility of biological disasters that are worse than ever before due to developments in technology.

The chances of such catastrophic pandemics — bad enough to potentially derail civilisation and threaten humanity’s future — seem uncomfortably high. We believe this risk is one of the world’s most pressing problems.

And there are a number of practical options for reducing global catastrophic biological risks (GCBRs). So we think working to reduce GCBRs is one of the most promising ways to safeguard the future of humanity right now.

Summary

Scale

Pandemics — especially engineered pandemics — pose a significant risk to the existence of humanity. Though the risk is difficult to assess, some researchers estimate that the chance of a biological catastrophe leading to human extinction within the next 100 years is greater than 1 in 10,000, and potentially as high as 1 in 100. (See below.) And a biological catastrophe killing a large percentage of the population is even more likely — and could contribute to existential risk.

Neglectedness

Pandemic prevention is currently under-resourced. Even in the aftermath of the COVID-19 outbreak, spending on biodefense in the US, for instance, has only grown modestly — from an estimated $17 billion in 2019 to $24 billion in 2023.

And little of existing pandemic prevention funding is specifically targeted at preventing biological disasters that could be most catastrophic.

Solvability

There are promising approaches to improving biosecurity and reducing pandemic risk, including research, policy interventions, and defensive technology development.

Why focus your career on preventing severe pandemics?

COVID-19 highlighted our vulnerability to global pandemics and revealed weaknesses in our ability to respond. Despite advances in medicine and public health, around seven million deaths from the disease have been recorded worldwide, and many estimates put the figure far higher.

Historical events like the Black Death and the 1918 flu show that pandemics can be some of the most damaging disasters for humanity, killing tens of millions of people — significant portions of the global population.

It is sobering to imagine the potential impact of a pandemic pathogen that is much more contagious and deadly than any we’ve seen so far.

Unfortunately, such a pathogen is possible in principle, particularly in light of advancing biotechnology. Researchers can design and create biological agents much more easily and precisely than before. (More on this below.) As the field advances, it may become increasingly feasible to engineer a pathogen that poses a major threat to all of humanity.

States or malicious actors with access to these pathogens could use them as offensive weapons or wield them as threats to obtain leverage over others.

Dangerous pathogens engineered for research purposes could also be released accidentally through a failure of lab safety.

Either scenario could result in a catastrophic ‘engineered pandemic,’ which we believe could pose an even greater threat to humanity than pandemics that arise naturally, as we argue below.

Thankfully, few people seek to use disease as a weapon, and even those willing to conduct such attacks may not aim to produce the most harmful pathogen possible. But the combined possibilities of accident, recklessness, desperation, and unusual malice suggest a disturbingly high chance of a pandemic pathogen being released that could kill a very large percentage of the population. The world might be especially at risk during great power conflicts.

But could an engineered pandemic pose an extinction threat to humanity?

There is reasonable debate here. In the past, societies have recovered from pandemics that killed as much as 50% of the population, and perhaps more.1

But we believe future pandemics may be one of the largest contributors to existential risk this century, because it now seems within the reach of near-term biological advances to create pandemics that would kill more than 50% of the population — not just in a particular area, but globally. It’s possible they could be bad enough to drive humanity to extinction, or at least be so damaging that civilisation never recovers.

Reducing the risk of biological catastrophes by constructing safeguards against potential outbreaks and preparing to mitigate their worst effects therefore seems extremely important.

It seems relatively uncommon for people in the broader field of biosecurity and pandemic preparedness to work specifically on reducing catastrophic risks and engineered pandemics. Projects that reduce the risk of biological catastrophe also seem to receive a relatively small proportion of health security funding.2

In our view, the costs of biological disasters grow nonlinearly with severity because of the increasing potential for the event to contribute to existential risk. This suggests that projects to prevent the gravest outcomes in particular should receive more funding and attention than they currently do.

In the rest of this section, we’ll discuss how artificial pandemics compare to natural pandemic risks. Later on, we’ll discuss what kind of work can and should be done in this area to reduce the risks.

We also have a career review of biorisk research, strategy, and policy paths, which gives more specific and concrete advice about impactful roles to aim for and how to enter the field.

Natural pandemics show how destructive biological threats can be

Four of the worst pandemics in recorded history were:3

  1. The Plague of Justinian (541-542 CE) is thought to have arisen in Asia before spreading into the Byzantine Empire around the Mediterranean. The initial outbreak is estimated to have killed around 6 million people (about 3% of the world population)4 and contributed to reversing the territorial gains of the Byzantine Empire.
  2. The Black Death (1335-1355 CE) is estimated to have killed 20–75 million people (about 10% of the world population) and is believed to have had profound impacts on the course of European history.
  3. The Columbian Exchange (1500-1600 CE) was a succession of pandemics, likely including smallpox and paratyphoid, brought by European colonists, which devastated Native American populations. It likely played a major role in the loss of around 80% of Mexico’s native population during the 16th century. Other groups in the Americas appear to have lost even greater proportions of their communities, in some cases as much as 98% of their people to these diseases.5
  4. The 1918 Influenza Pandemic (1918–1920 CE) spread across almost the whole globe and killed 50–100 million people (2.5%–5% of the world population). It may have been deadlier than either world war.

These historical pandemics show the potential for mass destruction from biological threats, which are worth mitigating in their own right. They also show that the key features of a global catastrophe, such as high proportional mortality and civilisational collapse, can be driven by highly destructive pandemics.

But despite the horror of these past events, it seems unlikely that a natural pandemic could be bad enough on its own to drive humanity to total extinction in the foreseeable future, given what we know of events in natural history.6

As philosopher Toby Ord argues in the section on natural risks in his book The Precipice, history suggests humanity faces a very low baseline extinction risk — the chance of being wiped out in ordinary circumstances — from natural causes over the course of, say, 100 years.

That’s because if the baseline risk were around 10% per century, we’d have to conclude we’ve gotten very lucky for the 200,000 years or so of humanity’s existence. The fact of our existence is much less surprising if the risk has been about 0.001% per century.
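
To see the force of this argument, it helps to run the numbers. Here is a minimal sketch (our illustration, using the same figures as above) of how likely humanity would have been to survive its roughly 2,000 centuries of history under each assumed level of risk:

```python
# Back-of-the-envelope check of the survival argument (illustrative only).
# Humanity has existed for roughly 200,000 years, i.e. about 2,000 centuries.
centuries = 2_000

for risk_per_century in (0.10, 0.00001):  # 10% vs 0.001% per century
    p_survive = (1 - risk_per_century) ** centuries
    print(f"{risk_per_century:.3%} risk per century -> "
          f"P(surviving to today) = {p_survive:.2e}")

# Output:
# 10.000% risk per century -> P(surviving to today) = 3.05e-92
# 0.001% risk per century -> P(surviving to today) = 9.80e-01
```

Surviving 2,000 centuries would be astronomically unlikely at 10% risk per century, but entirely unsurprising at 0.001%, which is why our long track record suggests the natural baseline risk is low.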

None of the worst plagues we know about in history was enough to destabilise civilisation worldwide or clearly imperil our species’ future. And more broadly, pathogen-driven extinction events in nature appear to be relatively rare for animals.7

Is the risk from natural pandemics increasing or decreasing?

Are we safer from pandemics now than we used to be? Or do developments in human society actually put us at greater risk from natural pandemics?

Good data on these questions is hard to find. The overall burden of infectious disease in human society is on a downward trend, but this doesn’t tell us much about whether rare but massive pandemics are getting worse.

In the abstract, we can think of many reasons that the risk from naturally arising pandemics might be falling. They include:

  • We have better hygiene and sanitation than past eras, and these will likely continue to improve.
  • We can produce effective vaccinations and therapeutics.
  • We better understand disease transmission, infection, and effects on the body.
  • The human population is healthier overall.

On the other hand:

  • Trade and air travel allow much faster and wider transmission of disease.8 For example, air travel seems to have played a large role in the spread of COVID-19 from country to country.9 In previous eras, the difficulty of travelling over long distances likely kept disease outbreaks more geographically confined.
  • Climate change may increase the likelihood of new zoonotic diseases.
  • Greater human population density may increase the likelihood that diseases will spread rapidly.
  • Much larger populations of domestic animals can potentially pass diseases on to humans.

There are likely many other relevant considerations. Our guess is that natural pandemics are becoming more frequent, but less severe on average.10 A further guess is that the fall in average severity matters more than the rise in frequency, netting out to reduced overall danger. There remain many open questions.

Engineered pathogens could be even more dangerous

But even if natural pandemic risks are declining, the risks from engineered pathogens are almost certainly growing.

This is because advancing technology makes it increasingly feasible to create threatening viruses and infectious agents.11 Accidental and deliberate misuse of this technology is a credible global catastrophic risk and could potentially threaten humanity’s future.

One way this could play out is if some dangerous actor wanted to bring back catastrophic outbreaks of the past.

Polio, the 1918 pandemic influenza strain, and most recently horsepox (a close relative of smallpox) have all been recreated from scratch. The genetic sequences of these and other pathogens are publicly available, and the progress and proliferation of biotechnology opens up terrifying opportunities.12

Beyond the resurrection of past plagues, advanced biotechnology could let someone engineer a pathogen more dangerous than those that have occurred in natural history.

When viruses evolve, they aren’t naturally selected to be as deadly or destructive as possible. But someone who is deliberately trying to cause harm could intentionally combine the worst features of possible viruses in a way that is very unlikely to happen naturally.

Gene sequencing, editing, and synthesis are now possible and becoming easier. We’re getting closer to being able to produce biological agents the way we design and produce computers or other products (though how long it takes remains unclear). This may allow people to design and create pathogens that are deadlier or more transmissible, or perhaps have wholly new features. (Read more.)

Scientists are also investigating what makes pathogens more or less lethal and contagious, which may help us better prevent and mitigate outbreaks.

But it also means that the information required to design more dangerous pathogens is increasingly available.

All the technologies involved have potential medical uses in addition to hazards. For example, viral engineering has been employed in gene therapy and vaccines (including some used to combat COVID-19).

Yet knowledge of how to engineer viruses to be better as vaccines or therapeutics could be misused to develop ‘better’ biological weapons. Properly handling these advances involves a delicate balancing act.

Hints of the dangers can be seen in the scientific literature. Gain-of-function experiments with influenza suggested that artificial selection could lead to pathogens with properties that enhance their danger.13

And the scientific community has yet to establish strong enough norms to discourage and prevent the unrestricted sharing of dangerous findings, such as methods for making a virus deadlier. That’s why we warn people going to work in this field that biosecurity involves information hazards. It’s essential for people handling these risks to have good judgement.

Scientists can make dangerous discoveries unintentionally in lab work. For example, vaccine research can uncover virus mutations that make a disease more infectious. And other areas of biology, such as enzyme research, show how our advancing technology can unlock new and potentially threatening capabilities that haven’t appeared before in nature.14

In a world of many ‘unknown unknowns,’ we may find many novel dangers.

So while the march of science brings great progress, it also brings the potential for bad actors to intentionally produce new or modified pathogens. Even with the vast majority of scientific expertise focused on benefiting humanity, a much smaller group can use the community’s advances to do great harm.

If someone or some group has enough motivation, resources, and sufficient technical skill, it’s difficult to place an upper limit on how catastrophic an engineered pandemic they might one day create. As technology progresses, the tools for creating a biological disaster will become increasingly accessible; the barriers to achieving terrifying results may get lower and lower — raising the risk of a major attack. The advancement of AI, in particular, may catalyse the risk. (See more about this below.)

Both accidental and deliberate misuse are threats

We can divide the risks of artificially created pandemics into accidental and deliberate misuse — roughly speaking, imagine a science experiment gone wrong compared to a bioterrorist attack.

The history of accidents and lab leaks that have exposed people to dangerous pathogens is chilling:

  • In 1977, an unusual flu strain emerged that disproportionately sickened young people. It was found to be genetically almost identical to a 1950 strain — as if frozen in time — suggesting a lab origin, possibly via a faulty vaccine trial.
  • In 1978, a lab leak at a UK facility resulted in the last smallpox death.
  • In 1979, an apparent bioweapons lab in the USSR accidentally released anthrax spores that drifted over a town, sickening residents and animals and killing about 60 people. The incident was initially covered up, but Russian President Boris Yeltsin later revealed it was an airborne release from a military lab accident.
  • In 2014, dozens of CDC workers were potentially exposed to live anthrax after samples that were supposed to be inactivated were improperly sterilised and shipped to lower-level labs that didn’t always use proper protective equipment.
  • We don’t really know how often this kind of thing happens because lab leaks are not consistently tracked. And there have been many more close calls.

And history has seen many terrorist attacks and state development of mass-casualty weapons. Incidents of bioterrorism and biological warfare include:

  • In 1763, British forces at Fort Pitt gave blankets from a smallpox ward to Native American tribes, aiming to spread the disease and weaken these communities. It’s unclear if this effort achieved its aims, though smallpox devastated many of these groups.
  • During World War II, the Japanese military’s Unit 731 conducted horrific human experiments and biological warfare in China. They used anthrax, cholera, and plague, killing thousands and potentially many more. The details of these events were only uncovered later.
  • In the 1980s and early 1990s, the South African government developed a covert chemical and biological warfare programme known as Project Coast. The programme aimed to develop biological and chemical agents targeted at specific ethnic groups and political opponents, including efforts to develop sterilisation and infertility drugs.
  • In 1984, followers of the Rajneesh movement contaminated salad bars in Oregon with Salmonella, causing more than 750 infections. It was an attempt to influence an upcoming election.
  • In 2001, shortly after the September 11 attacks, anthrax spores were mailed to several news outlets and two U.S. Senators, causing 22 infections and five deaths.

So should we be more concerned about accidents or bioterrorism? We’re not sure. There’s not a lot of data to go on, and considerations pull in both directions.

It may seem that releasing a deadly pathogen on purpose is more concerning. As discussed above, the worst pandemics would most likely be intentionally created rather than emerge by chance. Plus, there are ways to make a pathogen’s release more or less harmful, and an accidental release probably wouldn’t be optimised for maximum damage.

On the other hand, many more people are well-intentioned and want to use biotechnology to help the world rather than harm it. And efforts to eliminate state bioweapons programmes likely reduce the number of potential attackers. (But see more about the limits on these efforts below.) So it seems most plausible that there are more opportunities for a disastrous accident to occur than for a malicious actor to pull off a mass biological attack.

We guess that, all things considered, the considerations favouring concern about deliberate misuse are the more significant factors.15 So we suspect that deliberate misuse is more dangerous than accidental releases, though both are certainly worth guarding against.

(Image borrowed from Claire Zabel’s talk on biosecurity.16)

Overall, the risk seems substantial

We’ve seen a variety of estimates regarding the chances of an existential biological catastrophe, including the possibility of engineered pandemics.17 Perhaps the best estimates come from the Existential Risk Persuasion Tournament (XPT).

This project involved getting groups of both subject matter experts and experienced forecasters to estimate the likelihood of extreme events. For biological risks, the ranges of median estimates across forecasters and domain experts were as follows:

  • Catastrophic event (meaning an event in which 10% or more of the human population dies) by 2100: ~1–3%
  • Human extinction event: 1 in 50,000 to 1 in 100
  • Genetically engineered pathogen killing more than 1% of the population by 2100: 4–10%18
  • Note: the forecasters tended to have lower estimates of the risk than domain experts.

Although they are the best available figures we’ve seen, these numbers have plenty of caveats. The main three are:

  1. There is little evidence that anyone can achieve long-term forecasting accuracy. Previous forecasting work has assessed performance for questions that would resolve in months or years, not decades.
  2. There was a lot of variation in estimates within and between groups — some individuals gave numbers many times, or even many orders of magnitude, higher or lower than one another.19
  3. The domain experts were drawn from people already working on catastrophic risks — the typical expert in some areas of public health, for example, might generally rate extreme risks lower.

It’s hard to be confident about how to weigh up these different kinds of estimates and considerations, and we think reasonable people will come to different conclusions.

Our view is that given how bad a catastrophic pandemic would be, how few limits there seem to be on how destructive an engineered pandemic could be, and how broadly beneficial mitigation measures are, many more people should be working on this problem than currently are.

Reducing catastrophic biological risks is highly valuable according to a range of worldviews

Because we prioritise world problems that could have a significant impact on future generations, we care most about work that will reduce the biggest biological threats — especially those that could cause human extinction or derail civilisation.

But biosecurity and catastrophic risk reduction could be highly impactful for people with a range of worldviews, because:

  1. Catastrophic biological threats would harm near-term interests too. As COVID-19 showed, large pandemics can bring extraordinary costs to people today, and even more virulent or deadly diseases would cause even greater death and suffering.
  2. Interventions that reduce the largest biological risks are also often beneficial for preventing more common illnesses. Disease surveillance can detect both large and small outbreaks; counter-proliferation efforts can stop both higher- and lower-consequence acts of deliberate misuse; better PPE could prevent all kinds of infections; and so on.

There is also substantial overlap between biosecurity and other world problems, such as global health (e.g. the Global Health Security Agenda), factory farming (e.g. ‘One Health’ initiatives), and AI.

How do catastrophic biorisks compare to AI risk?

Of those who study existential risks, many believe that biological risks and AI risks are the two biggest existential threats. Our guess is that threats from catastrophic pandemics are somewhat less pressing than threats stemming from advanced AI systems.

But they’re probably not massively less pressing.

One feature of a problem that makes it more pressing is whether there are tractable solutions to work on in the area. Many solutions in the biosecurity space seem particularly tractable because:

  • There are already large existing fields of public health and biosecurity to work within.
  • The sciences of disease and medicine are well-established.
  • There are many promising interventions and research ideas that people can pursue. (See the next section.)

We think there are also exciting opportunities to work on reducing risks from AI, but the field is much less developed than the science of medicine.

The existence of this infrastructure in the biosecurity field may make the work more tractable, but it also makes the field arguably less neglected — which would make it a less pressing problem. In part because AI risk has generally been seen as more speculative and represents an essentially novel threat, fewer people have been working on it. This has made AI risk more neglected than biorisk.

In 2023, interest in AI safety and governance began to grow rather rapidly, making these fields somewhat less neglected than they had been previously. But they’re still quite new and so still relatively neglected compared to the field of biosecurity. Since we view more neglected problems as more pressing, this factor probably counts in favour of working on AI risk.

We also consider problems that are larger in scale to be more pressing. We might measure the scale of the problem purely in terms of the likelihood of causing human extinction or a comparably bad outcome. 80,000 Hours assesses the risk of an AI-caused existential catastrophe to be between 3% and 50% this century (though there’s a lot of disagreement on this question). Few if any researchers we know believe the comparable biorisk is that high.

At the same time, AI risk is more speculative than the risk from pandemics, because we know from direct experience that pandemics can be deadly on a large scale. So some people investigating these questions find biorisk to be a much more plausible threat.

But in most cases, which problem you choose to work on shouldn’t be determined solely by your view of how pressing it is (though this does matter a lot!). You should also take into account your personal fit and comparative advantage.

Finally, a note about how these issues relate:

  1. AI progress may be increasing catastrophic biorisk. Some researchers believe that advancing AI capabilities may increase the risk of a biological catastrophe. Jonas Sandbrink at Oxford University, for example, has argued that advanced large language models may lower the barriers to creating dangerous pathogens. AI biological design tools could also eventually enable sophisticated actors to cause even more harm than they otherwise would.
  2. There is overlap in the policy space between working to reduce biorisks and AI risks. Both require balancing the risk and reward of emerging technology, and the policy skills needed to succeed in these areas are similar. You can potentially pursue a career reducing risks from both frontier technologies.

If your work can reduce risks on both fronts, then you might view the problems as more similarly pressing.

There are clear actions we can take to reduce these risks

Biosecurity and pandemic preparedness are multidisciplinary fields. To address these threats effectively, we need a range of approaches, including:

  • Technical and biological researchers to investigate and develop tools for controlling outbreaks
  • Entrepreneurs and industry professionals to develop and implement these tools
  • Strategic researchers and forecasters to develop plans
  • People in government to pass and implement policies aimed at reducing biological threats

Specifically, you could:

  • Work with government, academia, industry, and international organisations to improve the governance of gain-of-function research involving potential pandemic pathogens, commercial DNA synthesis, and other research and industries that may enable the creation of (or expand access to) particularly dangerous engineered pathogens
  • Strengthen international commitments to not develop or deploy biological weapons, e.g. the Biological Weapons Convention (see below)
  • Develop new technologies that can mitigate or detect pandemics, or the use of biological weapons,20 including:
    • Broad-spectrum testing, therapeutics, and vaccines — and ways to develop, manufacture, and distribute all of these quickly in an emergency21
    • Detection methods, such as wastewater surveillance, that can find novel and dangerous outbreaks
    • Non-pharmaceutical interventions, such as better personal protective equipment
    • Other mechanisms for impeding high-risk disease transmission, such as anti-microbial far UVC light
  • Deploy and otherwise promote the above technologies to protect society against pandemics and to lower the incentives for trying to create one
  • Improve information security to protect biological research that could be dangerous in the wrong hands
  • Investigate whether advances in AI will exacerbate biorisks, and develop potential solutions to this challenge

For more discussion of biosecurity priorities, you can read our article on advice from biosecurity experts about the best way to fight the next pandemic.

The broader field of biosecurity and pandemic preparedness has made major contributions to reducing catastrophic risks. Many of the best ways to prepare for more probable but less severe outbreaks will also reduce the worst risks.

For example, if we develop broad-spectrum vaccines and therapeutics to prevent and treat a wide range of potential pandemic pathogens, this will be widely beneficial for public health and biosecurity. But it also likely decreases the risk of the worst-case scenarios we’ve been discussing — it’s harder to launch a catastrophic bioterrorist attack on a world that is prepared to protect itself against the most plausible disease candidates. And if any state or other actor who might consider manufacturing such a threat knows the world has a high chance of being protected against it, they have even less reason to try in the first place.

Similar arguments can be made about improved PPE, some forms of disease surveillance, and indoor air purification.

But if your focus is preventing the worst-case outcomes, you may want to focus on particular interventions within biosecurity and pandemic prevention over others.

Some experts in this area, such as MIT biologist Kevin Esvelt, believe that the best interventions for reducing the risk from human-made pandemics will come from the world of physics and engineering, rather than biology.

This is because for every biological countermeasure to reduce pandemic risk, such as vaccines, there may be biological tools to overcome it — just as viruses can evolve to evade vaccine-induced immunity.

And yet, there may be hard limits on the ability of biological threats to overcome physical countermeasures. For instance, it seems plausible that there may just be no viable way to design a virus that can penetrate sufficiently secure personal protective equipment or survive under far-UVC light. If this argument is correct, then these or similar interventions could provide some of the strongest protection against the biggest pandemic threats.

Two example ways to reduce catastrophic biological risks

We illustrate two specific examples of work to reduce catastrophic biological risks below, though note that many other options are available (and may even be more tractable).

1. Strengthen the Biological Weapons Convention

The principal defence against proliferation of biological weapons among states is the Biological Weapons Convention. The vast majority of eligible states have signed or ratified it.

Yet some states that signed or ratified the convention have also covertly pursued biological weapons programmes. The leading example was the Biopreparat programme of the USSR,22 which at its height spent billions and employed tens of thousands of people across a network of secret facilities.23

Its activities are alleged to have included industrial-scale production of weaponised agents like plague, smallpox, and anthrax. They even reportedly succeeded in engineering pathogens for increased lethality, multi-resistance to therapeutics, evasion of laboratory detection, vaccine escape, and novel mechanisms of disease not observed in nature.24 Other past and ongoing violations in a number of countries are widely suspected.25

The Biological Weapons Convention faces ongoing difficulties:

  • The convention lacks verification mechanisms for countries to demonstrate their compliance, and verification is both technically and politically fraught.
  • It also lacks an enforcement mechanism, so there would be no consequences even if a state were found to be out of compliance.
  • The convention struggles for resources. It has only a handful of full-time staff, and many states do not fulfil their financial obligations. The 2017 meeting of states parties was only possible thanks to overpayment by some states, and the 2018 meeting had to be cut short by a day due to insufficient funds.26

Working to improve the convention’s effectiveness, increasing its funding, or promoting new international efforts that better achieve its aims could help reduce the risk of a major biological catastrophe.

2. Govern dual-use research of concern

As discussed above, some well-meaning research has the potential to increase catastrophic risks. Such research is often called ‘dual-use research of concern,’ since the research could be used in either beneficial or harmful ways.

The primary concerns are that dangerous pathogens could be accidentally released, or that dangerous specimens and information produced by the research could fall into the hands of bad actors.

Gain-of-function experiments by Yoshihiro Kawaoka and Ron Fouchier raised concerns in 2011, when they published results showing they had modified avian flu to spread between ferrets — raising fears that it could also become transmissible between humans.

The synthesis of horsepox is a more recent case. Good governance of this kind of research remains more aspiration than reality.

Individual investigators often have a surprising amount of discretion when carrying out risky experiments. It’s plausible that typical scientific norms are not well suited to managing the dangers intrinsic to some of this work.

Even in the best case, where the scientific community is composed solely of people who only perform work they sincerely believe is on balance good for the world, we might still face the unilateralist’s curse. This occurs when a single individual mistakenly concludes that a dangerous course of action should be taken, even when all their peers have ruled it out. It only takes one person making an incorrect risk assessment to impose major costs on the rest of society, so the chance of disaster grows with the number of people in a position to act unilaterally.
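
The arithmetic behind this is simple and worth making explicit. In a toy model (ours, with made-up numbers), if each of n researchers independently misjudges a dangerous experiment as safe with probability p, the chance that at least one of them proceeds is 1 − (1 − p)^n, which grows quickly with n:

```python
# Toy model of the unilateralist's curse (illustrative numbers only).
# Each of n researchers independently misjudges a dangerous experiment
# as safe with probability p; it goes ahead if at least one proceeds.

def p_someone_proceeds(n: int, p: float) -> float:
    """Probability that at least one of n researchers misjudges and acts."""
    return 1 - (1 - p) ** n

p = 0.01  # each individual researcher is 99% reliable
for n in (1, 10, 100, 500):
    print(f"{n:>3} researchers -> {p_someone_proceeds(n, p):.1%} chance someone proceeds")

# Output:
#   1 researchers -> 1.0% chance someone proceeds
#  10 researchers -> 9.6% chance someone proceeds
# 100 researchers -> 63.4% chance someone proceeds
# 500 researchers -> 99.3% chance someone proceeds
```

Even highly reliable individuals, in large enough numbers, make it likely that someone somewhere will eventually act on a mistaken risk assessment.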

And in reality, scientists are subject to other incentives besides the public good, such as publications, patents, and prestige. It would be better if safety-enhancing discoveries were made before the easier-to-make dangerous discoveries arise. But the existing incentives may encourage researchers to conduct their work in ways that aren’t always optimal for the social good.

Governance and oversight can mitigate risks posed by individual foibles or mistakes. But the track record of such oversight bodies identifying concerns in advance is imperfect. The gain-of-function work on avian flu was initially funded by the NIH (the same body that would subsequently declare a moratorium on gain-of-function experiments) and passed institutional checks and oversight — concerns only arose after the results of the work became known.

When reporting the horsepox synthesis to the WHO advisory committee on variola virus research, the scientists noted:

Professor Evans’ laboratory brought this activity to the attention of appropriate regulatory authorities, soliciting their approval to initiate and undertake the synthesis. It was the view of the researchers that these authorities, however, may not have fully appreciated the significance of, or potential need for, regulation or approval of any steps or services involved in the use of commercial companies performing commercial DNA synthesis, laboratory facilities, and the federal mail service to synthesise and replicate a virulent horse pathogen.

One challenge is that there is no bright line one can draw to rule out all concerning research. List-based approaches, such as select agent lists or the seven experiments of concern, may be increasingly unsuited to current and emerging practice, particularly in such a dynamic field.

But it’s not clear what the alternative to necessarily incomplete lists would be. The consequences of scientific discovery are often not obvious ahead of time, so it may be difficult to say which kinds of experiments pose the greatest risks or in which cases the benefits outweigh the costs.

Even if more reliable governance could be constructed, geographic scope would remain a challenge. Practitioners inclined toward more concerning work could migrate to more permissive jurisdictions. And even if one journal declines to publish a new finding on public safety grounds, a researcher can resubmit to another journal with laxer standards.27

But we believe these challenges are surmountable.

Research governance can adapt to modern challenges. Greater awareness of biosecurity issues can be spread through the scientific community. We can construct better means of risk assessment than blacklists (cf. Lewis et al. (2019)). Broader cooperation can mitigate some of the dangers of the unilateralist’s curse. There is ongoing work in all of these areas, and we can continue to improve practices and policies.


What jobs are available?

For our full article on pursuing work in biosecurity, you can read our biosecurity research and policy career review.

If you want to focus on catastrophic pandemics in the biosecurity world, it might be easier to work on broader efforts that have more mainstream support first and then transition to more targeted projects later. If you are already working in biosecurity and pandemic preparedness (or a related field), you might want to advocate for a greater focus on measures that reduce risk robustly across the board, including in the worst-case scenarios.

The world could be doing a lot more to reduce the risk of natural pandemics on the scale of COVID-19. It might be easiest to push for interventions targeted at this threat before looking to address the less likely, but more catastrophic possibilities. On the other hand, potential attacks or perceived threats to national security often receive disproportionate attention from governments compared to standard public health threats, so there may be more opportunities to reduce risks from engineered pandemics under some circumstances.

To get a sense of what kinds of roles you might take on, you can check out our job board for openings related to reducing biological threats. It isn’t comprehensive, but it’s a good place to start.


    Want to work on reducing risks of the worst biological disasters? We want to help.

    We’ve helped people formulate plans, find resources, and get in touch with mentors. If you want to work in this area, apply for our free one-on-one advising service.


    We thank Gregory Lewis for contributing to this article, and thank Anemone Franz and Elika Somani for comments on the draft.


    Notes and references

    1. Luke Muehlhauser’s writeup on the Industrial Revolution, which also discusses some of the deadliest events in history, reviews the evidence on the Black Death, as well as other outbreaks. Muehlhauser’s summary: “The most common view seems to be that about one-third of Europe perished in the Black Death, starting from a population of 75–80 million. However, the range of credible-looking estimates is 25–60%.”

    2. Gregory Lewis, one of the contributors to this piece, has previously estimated that a quality-adjusted ~$1 billion is spent annually on global catastrophic biological risk reduction. Most of this comes from work that is not explicitly targeted at GCBRs but is rather disproportionately useful for reducing them.

      For the most up-to-date analysis of biodefense spending in the US we’ve seen, check out the Biodefense Budget Breakdown from the Council on Strategic Risks.

    3. All of the impacts of the cases listed are deeply uncertain, as:

      • Vital statistics range from at best very patchy (1918) to absent. Historical populations (let alone their mortality rate or the mortality attributable to a given outbreak) are very imprecisely estimated.
      • Proxy indicators (e.g. historical accounts, archaeology) have very poor resolution, leaving a lot to educated guesswork and extrapolation.
      • Attribution of the historical consequences of an outbreak is highly contestable. Other events can offer competing (or overdetermining) explanations.

      Although these factors add ‘simple’ uncertainty, we would guess academic incentives and selection effects introduce a bias toward over-estimates for historical cases. For this reason, we used Muehlhauser’s estimates for ‘death tolls’ (generally much more conservative than typical estimates, such as ‘75–200 million died in the Black Death’), and reiterate that the possible historical consequences are ‘credible’ rather than confidently asserted.

      For example, it’s not clear the Plague of Justinian should be on the list at all. Mordechai et al. (2019) survey the circumstantial archaeological data around the time of the Justinian Plague and find little evidence of a discontinuity over this period suggestive of a major disaster: papyri and inscriptions suggest stable rates of administrative activity, and pollen measures suggest stable land use. They also offer reasonable alternative explanations for measures that did show a sharp decline — new laws declined during the ‘plague period’, but this could be explained by government efforts at legal consolidation having coincidentally finished beforehand.

      Even if one takes the supposed impacts of each at face value, each has features that may disqualify it as a ‘true’ global catastrophe. The first three, although afflicting a large part of humanity, left another large part unscathed (the Eurasian and American populations were effectively separated). The 1918 flu had a very high total death toll and global reach, but not the highest proportional mortality, and its historical impact was relatively limited. The Columbian Exchange, although having a high proportional mortality and a crippling impact on the affected civilisations, had comparatively little effect on global population owing to the smaller population in the Americas and the concurrent growth of the immigrant European population.

    4. The precise death toll of the Justinian plague, common to all instances of ‘historical epidemiology,’ is very hard to establish — note for example this recent study suggesting a much lower death toll. Luke Muehlhauser ably discusses the issue here. Others might lean towards somewhat higher estimates given the circumstantial evidence for an Asian origin of this plague (and thus possible impact on Asian civilisations in addition to Byzantium), but with such wide ranges of uncertainty, quibbles over point estimates matter little.

    5. It may have contributed to subsequent climate changes, including the ‘Little Ice Age.’

    6. There might be some debate about what counts as a pandemic arising ‘naturally.’ For instance, if a pandemic only occurs because climate change shifted the risk landscape, or because air travel allowed a virus to spread more than it would have had planes never been invented, it could arguably be considered a ‘human-caused’ pandemic. For our purposes, though, when we discuss ‘natural’ pandemics, we mean any pandemic that doesn’t arise from either purposeful introduction of the pathogen into the population or the accidental release of a pathogen in a laboratory or clinical setting.

    7. “We used the IUCN Red List of Threatened and Endangered Species and literature indexed in the ISI Web of Science to assess the role of infectious disease in global species loss. Infectious disease was listed as a contributing factor in [less than] 4% of species extinctions known to have occurred since 1500 (833 plants and animals) and as contributing to a species’ status as critically endangered in [less than] 8% of cases (2,852 critically endangered plants and animals).”

      But note also: “Although infectious diseases appear to play a minor role in global species loss, our findings underscore two important limitations in the available evidence: uncertainty surrounding the threats to species survival and a temporal bias in the data.”

      From: Smith KF, Sax DF, Lafferty KD. Evidence for the role of infectious disease in species extinction and endangerment. Conserv Biol. 2006 Oct;20(5):1349-57.

    8. Although this is uncertain. Thompson et al. (2019) suggest greater air travel could be protective, the mechanism being that greater travel rates and population mixing allow wider spread of ‘pre-pandemic’ pathogen strains, and thus build up cross-immunity in the world population, giving it greater protection against the subsequent pandemic strain.

    9. See, for instance, “COVID-19 and the aviation industry: The interrelationship between the spread of the COVID-19 pandemic and the frequency of flights on the EU market” by Anyu Liu, Yoo Ri Kim, and John Frankie O’Connell.

    10. One suggestive datapoint comes from an AIR Worldwide study that modelled what would happen if the 1918 influenza outbreak happened today. It suggests that although the absolute number of deaths would be similar — in the tens of millions — the proportional mortality of the global population would be much lower.

    11. One can motivate this by a mix of qualitative and quantitative metrics. On the former, one can talk of recent big biotechnological breakthroughs (CRISPR-Cas9 genome editing, synthetic bacterial cells, the Human Genome Project, etc). Quantitatively, metrics of sequencing costs or publications show accelerating trends.

    12. Although there’s wide agreement on the direction of this effect, the magnitude is less clear. There remain formidable operational challenges beyond the ‘in principle’ science to perform a biological weapon attack, and historically many state and non-state biological weapon programmes stumbled at these hurdles (Ouagrham-Gormley 2014). Biotechnological advances probably have lesser (but non-zero) effects in reducing these challenges.

    13. The interpretation of these gain-of-function experiments is complicated by the fact that the resulting strains had relatively ineffective mammalian transmission and lower pathogenicity, although they were derived from highly pathogenic avian influenza.

    14. See, for example, the following reports: (1, 2, 3, 4, 5).

    15. There is some loosely analogous evidence with respect to accidental/deliberate misuse of other things. Although firearm accidents kill many (albeit fewer than ‘deliberate use’, at least in the United States), it is hard to find such an accident that killed more than five people; far more people die from road traffic accidents than vehicle-ramming attacks, yet the latter are over-represented among large-casualty events; aircraft accidents kill many more (relatively) than terrorist attacks using aircraft, yet the event with the largest death toll was in the latter category; etc.

      By even looser analogy, it is very hard to find accidents which have killed more than 10,000 people, less so deliberate acts like war or genocide (although a lot turns on both how events are individuated, and how widely the consequences are tracked and attributed to the ‘initial event’).

    16. Claire Zabel is a program director at Open Philanthropy, which is the largest funder of 80,000 Hours.

    18. This risk surpasses the risk of non-genetically engineered pathogens by 2100.

    19. See for example this figure, for a more than 1% population death event from a genetically engineered pathogen by 2100:

      The superforecasters (orange) tend to give much lower estimates than the biorisk experts (~4% vs. 10%, and the boxplots do not overlap). Each point represents a forecaster, so within each group the estimates range from (at least) 1% to 25%.

    20. Much has been written on specific technologies. For example, Broad-Spectrum Antiviral Agents: A Crucial Pandemic Tool (2019). A number of the podcast episodes listed discuss some of the most promising ideas, as do many of the papers in the “Other resources” section.

    21. Though note that, as discussed above, researchers working on these technologies should be careful to mitigate risks from any dual-use findings that could cause harm.

    22. Russia admitted in 1992 that the USSR had conducted an offensive biological warfare programme since joining the BWC and undertook to cease it immediately (see e.g. the 1992 Joint US/UK/Russian statement, Dahlberg (1992)).

      Notably, Russian sources of this time admit only research rather than the possession or stockpiling of offensive weapons. The two Russian (non-governmental) sources widely cited by western academic work on Russia’s 1992 admission are a report by Victor Livotkin in Izvestiia (a broadsheet) and an interview with Anatoly Kuntsevich.

    23. The USSR’s aggregate military budget was kept secret and can only be estimated. A further challenge is that the Soviet system distinguished between ‘hard currency’ budgets (denominated in dollars, which could be spent internationally) and ‘domestic’ budgets (denominated in rubles). Alibek (a defector from the Soviet programme) claimed $1 billion was spent on BW R&D in 1990 alone; Pasechnik (another defector) estimated a hard currency budget of $50–150 million per year from 1974–1989 for foreign equipment and supplies. A confidential interview reported in Mangold and Goldberg’s Plague Wars implies a decision in 1969 to spend 3% of the military budget on BW. Guessing from these gives order-of-magnitude estimates of around 0.1% to 1% of total GDP.

    24. Public information on any state’s biological warfare activity is shrouded in secrecy (especially in the post-BWC era). The most accessible (if not the most reliable) source on Biopreparat is Alibek’s book Biohazard. The activities I note are also alleged in the much more authoritative The Soviet Biological Weapons Program: A History, by Leitenberg and Zilinskas.

    25. Establishing that a state possesses or is pursuing biological weapons is very difficult. Such programmes are inevitably conducted in secret, and accusations can be expected to be fiercely denied regardless of their truth. Further, given the political sensitivity, states may have reasons independent of the evidence for claiming that another state is or is not pursuing a BW programme.

      One commonly cited source is the US State Department report on arms control compliance. The declassified 2019 version accuses only North Korea of possessing a biological weapons programme. It also notes US concerns with respect to Russia’s, China’s, and Iran’s compliance with the BWC. Carus (2017) gives a contemporary review of suspected past and ongoing BW programmes.

    26. See this public ‘pitch’ to Bill Gates to cover the funding shortfall.

    27. In a pre-submission enquiry the horsepox synthesis group made to a journal, the editor replied:

      While recognizing the technical achievement, ultimately we have decided that your paper would not offer Science readers a sufficient gain of novel biological knowledge to offset the significant administrative burden the manuscript represents in terms of dual-use research of concern. [Emphasis added]