Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

The system doesn’t seem like it’s actually crucial but these are the people thanklessly doing the work to try to prevent people from getting infected at a population level … when epidemics happen in a city, hospitals pick up the phone and say “What is it we should be doing? How should we be operating now that an epidemic’s underway?”

Tom Inglesby

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola from spreading around the world. She’s the best of the best. So good, in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr. Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help.)

But there are positive signs as well – the center Inglesby leads recently received a $16 million grant from Open Philanthropy to further their work preventing global catastrophes. It also runs the Emerging Leaders in Biosecurity Fellowship to train the next generation of biosecurity experts for the US government. Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security, and also discuss:

  • Should more people in medicine work on security?
  • What are the top jobs for people who want to improve health security and how do they work towards getting them?
  • What can people do to protect funding for the Global Health Security Agenda?
  • Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
  • Should we be allocating more attention and resources to global catastrophic risk scenarios?
  • Why are senior figures reluctant to prioritize one project or area at the expense of another?
  • What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
  • Are the main risks and solutions already understood, so that it’s just a matter of implementation? Or is the principal task still to identify and understand them?
  • How is the current US government performing in these areas?
  • Which agencies are empowered to think about low probability high magnitude events?
  • Are there any scientific breakthroughs that carry particular risk of harm?
  • How do we approach safety in terms of rogue groups looking to inflict harm? How is that different from preventing accidents?
  • If a terrorist group were pursuing biological weapons, how much would the CIA or other organizations then become involved in the process?
  • What are the biggest unsolved questions in health security?

The 80,000 Hours podcast is produced by Keiran Harris.


I don’t think it’s a good approach to think about it [catastrophic biological risk] as zero sum with other epidemic problems, and here’s why: I think in many cases it’s gonna be similar communities that are thinking about these problems. Even if we really decided to get very serious as a world – and I don’t want to say never, because it could happen – I don’t think it’s likely that there will be a robust, enduring community of professionals solely dedicated to global catastrophic biological risks alone.

The reason I think that is because to understand global catastrophic risks you really need to understand the fundamentals of normal epidemics. Catastrophic risks, at least in the natural world, are going to arise out of this frothy ocean of continual infectious diseases happening in the world, something that ultimately meets the conditions for breaking free of the normal froth and becoming an epidemic or a pandemic of terrible proportions. It’s true that on the deliberate side or the accidental side we could see things that have absolutely no precedent and seem completely different from everything that has come before, but ultimately the principles that we use to try to control epidemics are going to be similar. People are going to be in hospitals. They’re going to be spreading disease. They’re going to need medicines, they’re going to need vaccines, they’re going to need diagnostics. Those communities are going to overlap very heavily, no matter how you slice it, with the communities that worry about much smaller events.

My view at this point – it could evolve – is that what we should try to do is open the aperture further, to gather more people into the field who care a lot about global catastrophic events. But bring them into the larger field, as opposed to creating a separate field that, in some ways, feels like it has to be zero sum with people who focus on more limited or more common events.

What I would say is that there is not a called-out responsibility in government at this point for preparing for extraordinarily large events. I think what people would say, if asked about that, is: we have so much trouble just thinking about the day-in, day-out problems in front of us, we don’t have any time or people or money to think about things that are bigger than that. So our job, if we’re not in the actual job of running that agency, our job outside of that agency, is to build an argument for that work, to talk about tractable things that could be done in that space, and to generate momentum around those things.

It may seem like that’s too remote, too hard to do from the outside, but if you look at how things have changed in the past, I think there are a couple of interesting examples. In the 70s and 80s, when people began to think about nuclear winter, it was really a lot of discussion happening almost entirely outside of government that created pressure in society and in government to take that problem seriously. It’s not like we’ve solved that problem, but it certainly caused a lot of discussion and a lot of thinking about nuclear arms control, and influenced government thinking hugely.

So norms themselves – the development of scientific norms, the development of expectations of behavior – can be a powerful tool. Over time we have agreed as scientific communities not to do certain kinds of things in science, or to do certain things only under certain kinds of conditions: clinical trials are performed in very specific ways, human subjects research is performed in a very particular way, research related to radiation is performed in particular ways. So we have agreed as a community to follow certain sets of guidelines when it comes to certain kinds of experiments.

At this point, we haven’t really had good national and global scientific dialogues about what to do about new technologies that substantially change pandemic risks. So we’re still talking about those risks in the same way we talk about the risks that we’ve had up until now: biosafety risks or other kinds of laboratory risks. In my view, what we should be moving towards now – and we’ve had some of these discussions in the community – is asking what the appropriate scientific community reaction is to experiments that could increase pandemic risks. So one category of work would be to engage the science world, possibly the world of practice that uses these tools. There are plenty of examples of people who are very powerful users of these tools who themselves have put their hands up and said, “I think we should be having different kinds of discussions about this. I think we should be managing these technologies in a special way.”

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.