Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already.

And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it’s too late?

Kevin Esvelt

In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.

They cover:

  • Why it makes sense to focus on deliberately released pandemics
  • Case studies of people who actually wanted to kill billions of humans
  • How many people have the technical ability to produce dangerous viruses
  • The different threats of stealth and wildfire pandemics that could crash civilisation
  • The potential for AI models to increase access to dangerous pathogens
  • Why scientists try to identify new pandemic-capable pathogens, and the case against that research
  • Technological solutions, including UV lights and advanced PPE
  • Using CRISPR-based gene drive to fight diseases and reduce animal suffering
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

Risks from biological weapons are worse than they used to be

Kevin Esvelt: In the past, people were mainly concerned with nasty things that you could aerosolise and spray over a city: think crop dusters. And this is why, if you look at the select agent list in the United States, it’s full of things like anthrax. And indeed, Aum Shinrikyo tried to mass produce anthrax, aerosolise it, and spray it over a city. Turns out that’s hard. It’s hard to make that much pure anthrax. It’s hard to aerosolise it without killing it. It’s hard to disperse it over a large area — and, you know, the wind conditions have to be right, it needs to be done at the right time, whatever.

A lot of complexity goes into all stages of that. But above all else, you do need to make a lot of it. You need large-scale fermenters, not the kind of thing that you can buy and put in a garage lab. And it’s complicated, it’s an optimisation process, and there are no protocols.

Luisa Rodriguez: I find that very reassuring. Is there a reason to think that’s going to get easier?

Kevin Esvelt: Maybe. But at the end of the day, that’s the kind of thing that can kill maybe 10⁵ people, even if they do it right. That’s bad — traditional security people need to worry about that — but that doesn’t meet my minimum bar for “I need to do something about this.”

But if you think about a pandemic virus, it spreads on its own. So how many people do you need to infect in order to trigger a new pandemic? Depends on the virus. If it’s highly contagious, more often than not, one could be enough.

Even if it’s not very contagious, if you infect a dozen people, that is almost certainly enough if it is a pandemic-capable virus. So now, I guess the benefit of COVID is that everyone understands what R0 means, the basic reproductive number: How many people does the typical infected person go on to infect? If it’s above 1, it’s likely to take off. But of course there’s chance, there’s randomness: maybe this person will infect five people, maybe they won’t infect anyone.

And SARS-CoV-2 relied heavily on superspreaders. So any one person is pretty unlikely to infect anyone, but you infect six or eight people and one of them is likely to be a superspreader, who is going to infect a lot more than that. So it depends on how contagious the virus is and how much it relies on superspreading — with lower contagiousness and more superspreading meaning less likely to cause a pandemic per infected person.
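
The arithmetic Esvelt is gesturing at can be made concrete with a toy branching-process simulation (all numbers here are illustrative, not from the episode): with heavy superspreading, any single introduction usually fizzles, but a dozen introductions of a pandemic-capable virus almost certainly ignite an outbreak.

```python
import math
import random

def sample_offspring(r0, k):
    """Negative-binomial offspring count (mean r0, dispersion k) drawn via
    a gamma-Poisson mixture; small k means heavy reliance on superspreaders."""
    lam = random.gammavariate(k, r0 / k)  # individual infectiousness
    # Knuth's Poisson sampler for the number of people this case infects
    threshold, n, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return n
        n += 1

def ignition_probability(r0, k, seeds, trials=1000, cap=2000):
    """Fraction of simulated outbreaks that reach `cap` cases before dying out."""
    ignited = 0
    for _ in range(trials):
        active = total = seeds
        while active and total < cap:
            active = sum(sample_offspring(r0, k) for _ in range(active))
            total += active
        ignited += total >= cap
    return ignited / trials

random.seed(0)
p_one = ignition_probability(r0=2.0, k=0.1, seeds=1)     # one introduction usually fizzles
p_dozen = ignition_probability(r0=2.0, k=0.1, seeds=12)  # a dozen rarely all fizzle
```

With R0 of 2 and strong superspreading (k = 0.1), most single chains go extinct by chance, yet the probability that all twelve independent chains die out is small — which is the point about a dozen infections being "almost certainly enough."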

But note that now we’re in a very different ballgame. How much purified virus do you need to infect four people? Twelve people at most? That’s just not very much.

And this is why the bottom line is I think the game is different now. Yeah, we still need to worry about aerosolised anthrax, because there are new technologies that could plausibly make that easier. But it’s not the kind of thing where the scientific community is deliberately making it as easy as possible for scientists around the world to obtain the relevant agent in the quantities necessary to start a self-perpetuating spread of death.

Omnicidal actors in history

Luisa Rodriguez: I imagine some of our listeners will be confused about the idea that people might be trying to kill everyone. I think when I first heard this argument, I found it very counterintuitive and just really hard to wrap my head around. Can you help make it a bit more intuitive? Why would any individual or group want to actually do this? Kill billions of people, maybe everyone, including themselves?

Kevin Esvelt: Well, I think there’s a big difference between people who want to kill everyone and people who just want to bring down civilisation. Simplest possible example: Suppose you believe that most of the value in the world comes from the beautiful complexity of nature, that the tapestry of the world and all of the different species and the siren song of life is what’s most important. Well, humanity is currently perpetrating the sixth great mass extinction. When I was a teenager, I was a fairly radical environmentalist. I was not very sympathetic to humanity’s right to severely damage other ecosystems and extinguish the amazingly beautiful, awe-inspiring wonders that nature creates all the time. And if you’re a sufficiently extreme deep ecologist, you might reason that nature would be better off without humanity. Many, many people have expressed this attitude.

If you’ve had a particularly heinous day or you’re in a very low spot, you might think that life for most people is really not worth living, and people who think that it is must just be deluding themselves. Because if you’re depressed and you look around at the world, it’s just hard to imagine that there could be enough light in it to make up for all the despair.

Indeed, rather than the ecologists who want to preserve nature, you can take the opposite perspective and say, “I’m concerned with suffering, and I’m worried that nature has too much suffering.” And if you’re concerned with eliminating suffering, you may not be able to do much about the nature part of it, but you can certainly do something about the human part of it, and possibly humans making nature worse if you’re pessimistic about where technology is going.

Supposing you do care about humans, you think life as humans is worth living, but obviously evolution shaped us to live in ways very different from the way we live now. Right now, life is confusing. Things move incredibly quickly. A lot of people feel like their basic existence is outside of their control. They feel helpless. They’re stressed all the time. There’s a barrage of negative information. Perhaps we would all be happier as hunter-gatherers. Perhaps the market is causing us to warp our inherent dignity, causing us to do things that are different from what we evolved to do, the things that would make us happy. And suppose that we will eventually even engineer ourselves to remove the things that truly make us human to satisfy the dictates of the market.

If you view it that way, perhaps civilisation is the problem and humanity would be better off if we started over. In which case, you don’t want to wipe out all humans, but you do want to bring down civilisation. And there’s one very famous individual who thought this way: Ted Kaczynski, the Unabomber.

I hate to recommend a manifesto written by a mass murderer, but it was pretty darn prescient considering that he wrote it in the early 1980s. And the basic thesis is exactly what I described: He viewed the market system and technology as creating socioeconomic, sociotechnical incentives that would eventually cause us to use what he called the “immense power” of biotechnology to change who we fundamentally are, to make ourselves less than human in order to compete more effectively in the marketplace — and that we would thereby make ourselves increasingly miserable and an increasing travesty relative to what humanity should have been.

This is what got him against technology. And this is a man who went to Harvard, who became a mathematics professor at Berkeley, and then threw it all over to live in a cabin in the woods and develop his philosophy and try to thwart progress by murdering people with incredibly sophisticated mail bombs that completely threw off the FBI for over a decade.

Why we might deliberately design new pandemic-capable pathogens

Kevin Esvelt: It is sexy to be able to say, “This virus could cause the next pandemic.” And I can point you to a Cell paper published late last year that said, “This primate arterivirus is primed to spill over and cause the next pandemic. Here’s all of our molecular characterisation that suggests that it could.” And all that controversial research at the Wuhan Institute of Virology. What were they doing? They were taking natural viruses that they thought could cause the next pandemic, they were shuffling them and making chimaeras of different components of them, and then they were testing their growth potential in the laboratory.

And the controversial DARPA proposal that got turned down to insert a furin cleavage site into these coronaviruses: that was what they were hoping to do, and then they were going to measure transmission in mouse models expressing the human receptor. They wanted to know which viruses could cause pandemics, they wanted to know which ones could evade the existing immune system.

To this day, many virologists are trying to predict what the next variant of SARS-CoV-2 is going to be. So they’re collecting tonnes of data, running structural studies, running mutational studies — where they make a mutation in every residue of different existing strains and see which ones are important for recognition by antibodies that are common in current people, and which ones are tolerated by the virus.

And you put all this information together and you can do a pretty fair job of predicting which set of mutations will escape immunity and yet still remain functional for getting into our cells. And you put all that together, and if you can make the next variant, you just made something that could plausibly infect most of humanity. And then if you made that more virulent, because somebody else perhaps stumbled across some way of making things very virulent — or you can use standard virology techniques for doing that, but that’s research, so I’m not really worried about terrorists doing that — I think there’s going to be many ways of increasing the virulence of pathogens artificially. Some that nature does by co-opting them, some using different ways that nature is not going to try — because, again, nature is not trying to kill us.

But we will learn how to do that. We will publish that information, because we have a very strong prior that open data and open science are very important. And sooner or later, someone will put the pieces together. And, naive but well-meaning, they’ll say, “We should really be concerned about this. I think if you combine this, this, and this, it could cause a 90%-lethality measles.” And they will try to warn the world so that we do something about it, right?

And then there will be controversy. Would it actually work? Well, controversy in science induces journals to sit up and take interest — because if it’s a topic in the news and it’s controversial, then that means that we want to resolve the controversy. We want experiments that will determine who is right. So scientists correctly appreciate that, when there is controversy, you can get a paper in Nature, Science, or Cell — the top journals which are the best for your career.

Therefore, the incentives favour scientists identifying pandemic-capable viruses and determining whether posited cataclysmically destructive viruses and other forms of attack would actually function. That is, I expect it would be: “I think this would work.” “No it wouldn’t.” “Yes it would.” “No it wouldn’t.” “All right, I’m going to test it.” And then you get a high-profile publication for testing it. And then they would say, “Well, the other piece wouldn’t work though.” “Yes it would.” “No it wouldn’t.” “Yes it would.” “No it wouldn’t.” “I’m going to test it.”

And I have not seen any appreciable counter-incentives that could be anywhere near as powerful as the ones favouring our desire to know. Because almost all the time, it is better for us to know. And in biology, unlike physics, we tend to trust the institutions. Because at the dawn of recombinant DNA, partly because many biologists at the time had been physicists, they called a moratorium on all recombinant DNA research and then all got together to hash it out. They decided at the time that we were decades away from learning to build things that would spread on their own, and we were decades away from editing the human germline. And therefore, here is a set of self-regulation principles that we will follow, biosafety principles. And that became the basis for the NIH guidelines on recombinant DNA that have governed us ever since. Success! They were right on all counts.

That was 47 years ago. I invented CRISPR-based gene drive a decade ago. Now we can reliably make things spread on their own, as best we can tell. And I would be willing to bet that there are many other ways of doing that. Gene drive favours defence; pandemics decidedly do not. And if you happen to care that “we don’t know how to edit the human germline” — well, we can do that now too.

Can a virus be very deadly and very transmissible at the same time?

Luisa Rodriguez: It sounds like you don’t have some exact percent lethality that you need to have to definitely be in a wildfire scenario — where people are unwilling to go to work, causing something like civilisational collapse. I’ve heard something like the reason we haven’t seen pandemics that have both high transmissibility and high lethality before in a way that causes this kind of particularly horrible situation is because those things come with evolutionary tradeoffs. Is that right?

Kevin Esvelt: That’s probably right for some pathogens and not for others. Certainly the Black Death had both. Certainly smallpox has both — or at least the variola major strain is 30% lethal, with an R0 between 3.5 and 6. So is that in wildfire territory? That’s the only one, though, that we label as being probably transmissible enough — because the Black Death, even if it weren’t susceptible to antibiotics, was just not transmissible enough in the modern world.

So the only one we know about is smallpox that we think would possibly be wildfire level today, at 30% lethality. And it’s worth noting that the Soviets almost certainly enhanced it, to the point where when there was an accidental outbreak in the Aralsk region, out of 10 known victims, the three who were unvaccinated all died and it was transmitted efficiently by vaccinated people, which wild-type smallpox does not do. So clearly it is possible. And again, this caused an outbreak. They managed to contain it: they shut down all the trains, they got it under control.

But we have to assume that you can go from an existing natural thing to something higher. And again, that is still sort of playing by nature’s rules, using natural-like things. And once we get good enough at programming biology, such that these other capabilities can apply, we just don’t know what is going to become possible. I would not assume that whatever natural tradeoff exists between contagiousness and virulence is necessarily going to always apply. That is one of the things where, if you’re governed primarily by natural selection, then for many classes of pathogens that appears to be a thing — though probably not for all of them, as best we can tell; it’s not a hard-and-fast rule.

And what’s more, even if there is a pathogen that is not evolutionarily stable — that is, mutants will accumulate that will, say, reduce the lethality over time — that doesn’t mean it can’t crash civilisation first. Because it doesn’t take that many transmission events for a sufficiently high-R0 virus to go from release across a bunch of airports to infecting enough essential workers to bring down civilisation. That’s just because if you have a high enough multiplier, where every person infects six or eight additional people, you don’t require that many transmission events in that chain until you get to very large numbers.
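
The multiplier arithmetic is easy to check. Taking roughly seven secondary cases per infection as a stand-in for "six or eight" (an illustrative figure, not a claim about any real pathogen), the chain passes a billion cumulative infections in about eleven generations of transmission:

```python
# Illustrative chain-of-transmission arithmetic: each case infects ~7
# others; count generations until cumulative infections pass one billion.
r0 = 7
generation = 0
new_cases = 1   # a single initial release
total = 1
while total < 1_000_000_000:
    new_cases *= r0
    total += new_cases
    generation += 1
print(generation)  # 11 — at a serial interval of days, that is weeks, not years
```

Eleven generations at a serial interval of, say, five days is under two months, which is why "not that many transmission events" gets to very large numbers so quickly.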

Luisa Rodriguez: Right. Just to make sure I understand the evolutionary tradeoff, is it basically at some high enough level of lethality, it can’t actually spread very far, because it’s killing people before it spreads?

Kevin Esvelt: Yes, it seems to be linked to whether or not it kills you before you have a chance to transmit it. If the transmission window ends and then it kills you — a stealth pandemic is an extreme version of that — then there’s no such limit. Some known pathogens often kill you after the transmission window is mostly closed, so those ones don’t seem to be particularly subject to the tradeoff.

Luisa Rodriguez: Got it. That’s helpful. It sounds like it is a bit of a tradeoff, though not a rule. And to what extent does engineering change that? You’ve made it sound like it might change the extent to which the tradeoff keeps existing.

Kevin Esvelt: Here is where I have to acknowledge there’s a difference between my view and that of traditional biosecurity researchers. As an actual biotech practitioner, if nature usually does something and it doesn’t always apply, then it definitely doesn’t apply to engineering. A lot of people would disagree with that, but from the engineering perspective, that’s just how it is. If nature has some way around an apparent restriction, we can absolutely leverage that, and probably come up with more ways around it. If nature flat-out never does something, that does not mean that we can’t do it: that just means it’s not necessarily going to be trivial. It might be challenging, it might be impossible, but I would not assume that we can’t do it — because nature is subject to fundamental limitations on how it discovers things, how it samples mutations, and the number of possible ways it can combine them in its discovery strategy that just do not limit us.

Using metagenomics to stop a "stealth" pandemic

Kevin Esvelt: Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it’s too late?

Luisa Rodriguez: Yeah, that’s pretty horrific. In the case of HIV, while it’s horrific, we’ve gotten very lucky in that it’s not nearly as transmissible as respiratory illnesses. But it doesn’t have to be that way. There could be something that was much more transmissible but had this long lag. But it sounds like you think there are ways of actually detecting these pathogens early, despite the fact that people aren’t having any symptoms. Can you explain how we do that?

Kevin Esvelt: The first way is we look for things that we think are suspicious, ways that we imagine such a thing might be created, what viruses or bacteria it might be based on — and we look for those together with signatures of engineering. So we figure this is probably not going to happen naturally, although we should be looking for it, right? The notion that NIH will fund tonnes and tonnes of research to cure or prevent HIV and basically none on detecting the next one suggests that our society is a little bit overly obsessed with cures at the expense of prevention, which we all know is better. But we can look for suspected signatures. The problem is that that’s not reliable, because if an adversary knows what we’re looking for, they can engineer something that we won’t detect.

You can look for these in one of two ways. You can take clinical samples — imagine SARS-CoV-2-class nasal swabs — and then just do metagenomic sequencing of everything that’s in there. The problem is you’ll always get some of the patient’s DNA, and therefore some of their genome, and there are privacy concerns because they’re individual people. The other way to do it is to sequence wastewater. You can imagine just municipal wastewater plants, but the one we’re probably more excited about is sequencing aeroplane lavatory wastewater, because we know that all human pathogens spread through the air traffic network. So you can get a leg up on them if you specifically look for them in aeroplane lavatory wastewater.

Luisa Rodriguez: Genius. And then is that reliable? Do you have to know what you’re looking for in order to pick up on it? Or are you just like, “This is kind of a weird, unexpected thing,” and then you happen to notice it looks engineered?

Kevin Esvelt: You can look for specific signatures of particular sequences that you think will be present if you have some idea of how to build one. But that’s still not reliable, because the adversary can engineer around that sort of thing. If they know what genetic engineering detection algorithm you’re using, then obviously they can check the thing they’re making to make sure that it doesn’t trigger it.

But what we’re also looking into is the reliable way — because I always want to be cautious when it comes to the possible end of civilisation, and my children dying, and me dying, and all the hopes and dreams of everyone being shattered. Here’s the genius thing: Whatever the threat is, if it’s biological, it’s made of nucleic acids in its genome, and it needs to spread rapidly. Which means it needs to become more common in our samples and across the world in a pattern that should match that of novel variants of SARS-CoV-2 — or, on different timescales, other things. That is, we should see some pattern of exponential-like growth.
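
That growth signature is, at heart, just a log-linear fit: a sequence whose read counts multiply week over week stands out from stable background flora regardless of whether it looks engineered. A minimal sketch with made-up counts (a real pipeline would normalise by sequencing depth and handle weeks with zero reads):

```python
import math

def weekly_fold_change(counts):
    """Least-squares slope of log(read count) vs. week, returned as the
    implied weekly fold-change; exponential spread gives values well above 1."""
    xs = range(len(counts))
    ys = [math.log(c) for c in counts]
    n = len(counts)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return math.exp(slope)

spreading = weekly_fold_change([3, 7, 12, 26, 55, 110])    # roughly doubling weekly
background = weekly_fold_change([40, 38, 41, 39, 42, 40])  # stable commensal
```

Here `spreading` comes out near 2 (doubling every week) while `background` sits near 1 — anything persistently well above 1 is a candidate for the paranoid human expert described below.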

And if you build a system and look for those signatures, you should see every new variant of every existing human pathogen. For example, you’ll also see some weird spikes when the airline changes its food sourcing, and you start seeing plant viruses from whatever lettuce they’re serving now. So you’ll see some weird spikes and there will be some background; it won’t just be human. But the nice thing about the aeroplane lavatory is that it is almost all human samples, plus whatever the airline just fed people.

And the point is, you should see everything spreading through humans: every human pathogen, every new variant, every new mutation that they’re accumulating that is starting to spread and eventually to everywhere in the world. We should see them all. And there’s just not that many of them. There’s only 200-something viruses that are known to infect humans, period. So once you’re monitoring all of them, that’s not so many that you can’t have a human look at them.

And so even if it’s designed to evade your engineering detection algorithms, you can still have an expert human look at anything that is new. Anything, anything, anything that is new, it is worth having an expert — who is paranoid and suspicious and very good at engineering biology — look at it and say, “Do I think there is anything at all concerning about this? Is it a baseline pathogen or even commensal mutualist that’s spreading rapidly? What do we think the fitness advantage is here that’s causing it to spread rapidly? Is it doing anything unusual? Is it expected to interact with any biological system? Are there any signs of genes that would not normally be there, based on all of our other samples of things like this?” Maybe they look natural, but it’s really statistically unusual to see a gene there for this viral family.

Using CRISPR to solve a range of world problems

Kevin Esvelt: It might matter to some listeners; they might be concerned about the moral implications of actually driving a species to extinction. Which, of course, is also what we’re proposing for the malaria parasite (but not the mosquitoes) and also for the schistosoma. But here we’d be proposing eradicating something that is not a major human disease and not a microbe: the screwworm itself — the fly, the macroscopic thing — from the ecosystem everywhere in the world.

But it’s worth noting that this is actually reversible, because screwworm is one of those comparatively few insects whereby you can freeze the larvae and unfreeze them decades later and they’re perfectly viable. So we don’t have to drive them extinct, we just need to remove them from the wild and then we can keep them on ice. So if for some reason we decide we need them again later, we can reintroduce them. It’s just we’ve got to ensure, if you want the animal welfare benefit…

One of the things that I really find attractive is, when you think about how much suffering humans have inflicted on animals in the course of our species, it almost certainly does not outweigh that of 10¹⁵ mammals and birds devoured alive by flesh-eating maggots. So to the extent that we’re now net negative on the scale, all we have to do is, before civilisation collapses, or we disassemble the Earth or whatever futurists think we’re going to be doing — or even if we lose, even if we fail and civilisation collapses, or even if we go extinct — as long as we remove the New World screwworm first, we will be in morally net positive territory when it comes to our impacts on other species’ wellbeing. That’s tremendously inspiring.

Kevin Esvelt: But it’s none of my business, because I don’t live in South America. It’s their environment; it’s their call. And so I would urge folks, if you want to reach out and know who to support in South America to fund that project, I’d be happy to connect folks — but moralising about how they have this moral duty to do this for the benefit of all humanity, probably not very helpful. If they decide to do it, it’s going to be for their own reasons, and us hectoring them is not going to be useful to the cause if you care about seeing it happen.

Luisa Rodriguez: Yeah, it sounds like in this case it’s a win-win. Doesn’t sound like anyone in South America is enjoying their livestock being eaten alive by these worms. But yes, that sounds totally right. It does sound like we, on our podcast, should not spend too much time moralising when it is their land.

Kevin Esvelt: And in the long run, you can imagine using gene drive for a series of more elegant tweaks. So got problems with pests eating your crops? Program them to not like the taste and otherwise go about their normal ecological business. Got a problem with predators devouring their prey in ways that cause suffering? Program them to secrete anaesthetic from their fangs.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.