Anonymous answers: How can we manage infohazards in biosecurity?

This is Part Three of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Four: AI and biorisk.

In the field of biosecurity, many experts are concerned with managing information hazards (or infohazards). This is information that some believe could be dangerous if it were widely known — such as the gene sequence of a deadly virus or particular threat models.

Navigating the complexities of infohazards and the potential misuse of biological knowledge is contentious, and experts often disagree about how to approach this issue.

So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the third instalment of our biosecurity anonymous answers series. Below, we present 11 responses from these experts addressing their views on managing information hazards in biosecurity, particularly as it relates to global catastrophic risks.

To make the experts we spoke to feel comfortable speaking candidly, we offered them anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don’t endorse every position they’ve articulated below.

We think, though, that it’s helpful to lay out the range of expert opinions from people who we think are trustworthy and established in the field. We hope this will inform our readers about ongoing debates and issues that are important to understand — and perhaps highlight areas of disagreement that need more attention.

The group of experts includes policymakers serving in national governments, grantmakers for foundations, and researchers in both academia and the private sector. Some of them identify as being part of the effective altruism community, while others do not. All the experts are mid-career or more senior. Experts chose to provide their answers either in calls or in written form. As we conducted the interviews almost one year ago, some experts may have updated their views in the meantime.

Note: the numbering of the experts is not consistent across the different parts of the series.

Some key topics and areas of disagreement that emerged include:

  • How to balance the need for transparency with the risks of information misuse
  • The extent to which discussing biological threats could inspire malicious actors
  • Whether current approaches to information hazards are too conservative or not cautious enough
  • How to share sensitive information responsibly with different audiences
  • The impact of information restrictions on scientific progress and problem solving
  • The role of public awareness in biosecurity risks

Here’s what the experts had to say.

Expert 1: Strike the right balance

I think we underestimate bad actors. We have to talk about these threats.

On the other hand, we need to be very careful not to give away recipes or step-by-step descriptions or very specific ideas. And that’s always about striking a balance between what you can and should talk about to advocate for more resources on biodefense while not revealing too much about what’s required to make biological weapons.

Now, in terms of information hazards, I think we’re in a new world. In 2004, the National Academies of Science in the United States released the Fink Report, which basically recommended that all sequence data of pathogens should be made publicly available. They argued that the benefit to research was greater than the potential risk.

I think in today’s world, as DNA synthesis becomes more accessible, that’s no longer the case. I don’t know what we can do about the fact that the sequences for some of the more deadly strains of variola virus, which causes smallpox, are already available. I don’t know if we can put the horse back in the barn, but we can certainly decide not to publicise the sequences of new potential pandemic pathogens.

Expert 2: Help researchers use information responsibly

As someone who works in the government sector, I can get critical information into the hands of people who can actually effect change, like developing policies or creating new governance structures. But I think researchers who aren’t in that sector, like academics, really struggle with handling sensitive information. Potentially, we should create avenues for them to safely share this kind of thing.

If you’re not in the government but you work with the government, there are two types of information that are classified. First, there are government documents with secret or confidential information. If you want to make a new document using that information, there are rules about how to do that.

But also, if you’re working on a government contract and you come up with something new that elucidates a vulnerability, you have to classify it using a classification guide. So if you, for example, just figured out how to kill a million people with a paperclip, that might be automatically secret: new vulnerabilities involving paperclips are suddenly classified, even though you just made the idea up. We need guidelines for non-government researchers who uncover information that could cause great harm, so that they can use the information responsibly and disclose it to those who could use it to mitigate the vulnerability.

Expert 3: Limit secrecy to improve problem solving

I run into this issue frequently and find it frustrating. When I started working in biosecurity, I found it surprisingly difficult even to define the problem I was trying to solve because people weren’t willing to articulate it. I think we currently keep information too contained out of a faulty assumption that getting a small number of people in a room from very similar backgrounds will solve the problem effectively.

Secrecy results in suboptimal problem solving, under- or over-estimation of risks, fewer people working on these issues, and scepticism about whether worries about global catastrophic biological risks are justified. Placing barriers in the way of people doing work they care about, for secret reasons you can’t disclose, is frustrating. It leads to resentment and some degree of ridicule, rather than buy-in and collaboration.

I think a binary framing of information being ‘safe’ or ‘unsafe’ to share is unhelpful, because there’s a big difference between sharing information in small circles and issuing a press release. Whether we talk about them or not, serious threats will arise, and the threat landscape will worsen over time.

It’s challenging to prevent everyone in the world from releasing a piece of information, and ideas and knowledge get rediscovered independently all the time. Making people in key disciplines aware of these risks expands the pool of minds developing solutions, and solving these problems requires examining the issue from many different perspectives: biology, engineering, social science, security, and policy. I’ve met a lot of early-career researchers outside of the effective altruism community who were delighted to hear I was working on these issues because they’d had concerns about the misuse of their own research. But they didn’t know anyone was working in this area, or they had trouble breaking into it because they didn’t know the right people.

Expert 4: Keep the goal in mind when sharing information

I don’t think I have a really disciplined way that I deal with information hazards. Generally, I try to think of what the goal is and what I am trying to get done. If it involves sharing information with key people, then maybe that is worthwhile.

But I also don’t do activities in which I intentionally try to think of all the bad things that could happen. That is not part of my job.

When I think someone is wrong about something, it is usually just that something hasn’t occurred to them and no one has shared the relevant information with them. I think that’s generally true for things around dual-use work. Maybe a key difference is how likely malicious actors are to act, and that is unclear to me. If you think that most people are generally good, or that bad actors are not going to try very hard, then a lot of the concern around dual use is overblown. I don’t have good numbers for that, so that might be part of the disconnect.

Expert 5: Discuss threats more openly to reduce risks

The concern is that drawing attention to certain dangerous biological pathogens would inspire bad actors. I think the science-fiction version of these threats is way more inspiring than the reality of developing biological weapons.

When you say someone could build a weapon in a garage in two months that would kill a billion people, that sounds amazing to those who want to cause harm. But in reality, it would take two years; there would be tons of obstacles. It’s really expensive. You’d probably get caught. It’s a terrible idea. You wouldn’t want to engage in that.

To me, talking about the reality of what it would take to build a weapon is a much more compelling way to avoid inspiring a lone-wolf attacker. So I’m much less concerned about dual-use information.

It drives me nuts when people start telling all these exciting, exotic stories because that’s what’s interesting. That’s what’s going to drive dangerous people toward this idea, even if, when they get into it, they find they can’t make any progress. But you’re going to motivate them, right? Because you’re telling them the sexy version of the story, not the reality of it.

So if you could get the Ebola virus genome, there are viral rescue strategies in the literature for getting from genome to live virus. There is information that would help you along that stepwise progression. But we also understand the genome of that organism really well. We can detect synthetic orders for that DNA. So it’s really hard for me to imagine that you could assemble that genome and engage in all of the rescue strategies and get to the point where you had a bunch of live virus ready to release and weaponise. And that’s perhaps the comparatively easier side of things.

The really hard part is if you want to invent a novel sci-fi virus that’ll kill a billion people. And to me, you’re completely out of your mind if you’re going to try that. It takes virologists decades of specialised training to just figure out how to work with a specific family of viruses. I really doubt that a dangerous actor is going to be able to achieve this.

Do I think navigating information hazards can affect the day-to-day work of people in the field? Absolutely. When people handle DNA sequence orders, for example, they have to make risk assessments about whether a given sequence is weaponisable. They can ask the government for information to carry out this risk assessment, but the government only provides some, not all, of the DNA sequences it believes may be dangerous.

So there are sequences that the government may know can be used to cause harm and people might be ordering them online, but they won’t tell the people handling the orders because of this information hazard concern.

The implication is that the people responsible for policing the digital-to-biological transition are treated as if they’re untrusted third parties. This makes no sense to me. That is probably the number one argument against this way of handling information hazards.

Expert 6: Communicate based on the audience

I am very aware of infohazards. They influence what work is done and how the results are communicated. Communicating about potentially sensitive information is not binary: there are choices about whom it is communicated to, how, in what format, and in what terms. For example, communication with national security professionals might be factual and detailed, communication with the broader biosecurity community might be more generic while still highlighting risks, and communication with the general public might require not discussing the potential for malign misuse.

Expert 7: Consider the effects of withholding information

Navigating information hazards certainly affects my work. But I think the problems caused by people keeping important information from me are bigger than the difficulties I have managing sensitive information.

There are real risks, but generally I think people who believe that they have lots of this information should be significantly less conservative. I’m generally pretty unimpressed when people share things with me that I shouldn’t share onwards, and often think the costs of secrecy outweigh the benefits. To be clear, I’ll continue to keep these things secret because I think it’s important to be cooperative and trustworthy in this environment. But I think the problems caused by people being excessively careful are much bigger currently than the problems with people not being careful enough.

For many problems, it’s very hard to prioritise when you don’t have a detailed model of what your job actually is. It’s very hard to reason for yourself about what you should and shouldn’t be doing or what the most important things are. And if this is a problem for me or other people at my seniority level, it’s much more severe for people one or two levels down. It’s very demoralising and can be a significant contributor to a lot of people’s psychological difficulties working in biosecurity. It creates very unhealthy status hierarchies that are much stronger than in other cause areas.

A small number of people have used information hazard concerns to accumulate a great deal of power and influence in biosecurity. I like most of these people, but when I see them work and reason in domains where I do have enough information to evaluate them, their error rate is not low enough for me to give them that kind of trust. I suspect there are probably severe mistakes in people’s threat models, but it’s impossible to know what they are or call them out because we don’t have enough information.

I also think there are a lot more social reasons why secrecy is bad, and systematic psychological reasons for humans to overrate secrecy and underrate openness. In my opinion, the thinking around infohazards in biosecurity is often pretty unsophisticated and underrates a lot of the reasons why this is harmful.

Expert 8: Handle information sensitively to support diplomacy

I’m always kind of irritated when my colleagues find a scary paper and then write another paper that explains exactly how to misuse the results in that paper. I want to say, “Maybe you should think about shutting the hell up instead and just leaving it as an exercise for the reader?”

Information hazards are a major concern for me, and these risks often influence what work I do or what question I choose to work on.

But often, the issue is more likely to be sensitivities than directly dangerous information. The reason why people don’t accuse countries of having bioweapons programs is usually not because of infohazard considerations. It’s because they want to work with these countries in the future and don’t want to piss them off by accusing them of conducting war crimes or crimes against humanity. So it’s often standard diplomacy that prevents information from being shared, rather than the risk posed by the information itself.

I do think it’s really worth being careful about information hazards in general, but these concerns have had a lot of unpleasant side effects in the biosecurity and effective altruism communities. It’s kind of irritating when you can’t talk about important issues, or when others can’t talk with you about them.

Expert 9: Share information more widely (but responsibly)

We have to take seriously the extent to which talking about threats and risks and vulnerabilities increases the threat that we’re worried about. On the flip side though, I’m a big believer in transparency. Transparency is particularly important when the vulnerabilities we identify require political action and resources to solve. These vulnerabilities usually don’t fix themselves.

So trying to keep all of this secret and not talk about it is counterproductive, because then you are denying governments the ability to actually fix the problems that we’re examining. I’ve had that direct experience looking at the synthesis of horsepox virus, for example. And I think the calculus I made was that the threat posed by the synthesis of variola major and other orthopoxviruses was not getting the level of attention it deserved. There are policy solutions that could be implemented, and therefore you need to publicise the risks that are out there and mobilise action.

In a lot of these cases, if you’re assuming that there’s an adversary who is able to take advantage of advances in biotechnology, then they’re probably smart enough to read and understand the relevant literature. Just saying that there’s a theoretical possibility of X or Y is not actually giving anyone who knows anything a new idea. And if you are giving new ideas to somebody who doesn’t have that knowledge or skills, they’re not going to be able to capitalise on it, or it’ll be very hard for them.

So overall, I’m more in favour of publicising information about vulnerabilities and highlighting threats that are out there. But I try to do so in a constructive way that points toward solutions. I think you need to do it responsibly. And I think that’s key to navigating the dilemma about how much information you make public or what you withhold.

Responsible sharing also means getting the tone right when you talk about risks. There are ways to talk about risks that are highly inflammatory and alarmist — that’s counterproductive. There are other ways to talk about the risks that are measured and objective and are not designed to provoke an overreaction.

And the audience you’re talking with matters a lot. Engaging policymakers and scientists is one thing. If your immediate reaction is to publish an op-ed, then that’s different.

Discussions about information hazards need to be multidisciplinary. You can’t have it just be the researchers talking amongst themselves about it. They need to bring in the biosecurity experts, they need to bring in the ethicists, they need to bring in the public policy people to understand the magnitude of the risk, how imminent it is, what its scope is, and what can be done about it.

Communities that try to keep this stuff entirely in-house and don’t engage other stakeholders lose out on the other perspectives that are really vital for devising pragmatic strategies that actually reduce the risks.

Expert 10: Consider the purpose of sharing and manage credibility

Dealing with dual-use information and the risk of inspiring bad actors — or drawing attention to truly terrible outcomes — is something that I constantly struggle with. One way I try to think through the choice to share information with others is by clarifying the purpose of sharing it. Is it necessary to inspire action? Is the person I am sharing the information with someone who has the power to enact said action? Is the person I am sharing with able to be discreet?

If the answer is no to any of those questions, I think quite hard about whether it’s necessary to spread a particular message and will often consult with trusted colleagues first.

Information should never be shared to make yourself look smarter or more important, or simply to be provocative. In fact, those are exactly the situations in which it probably shouldn’t be shared in the first place.

Something else to consider when you are sharing dual-use information is whether the perspective will make the recipient feel uneasy about you and your priorities. It takes a delicate approach to communicate clearly about these scary and (to some) outlandish scenarios. Without some skill, discussing these ideas can decrease your personal credibility (which can be hard to regain).

Expert 11: Disclose information publicly only when absolutely necessary

Navigating the delicate balance of information hazards in biosecurity is a difficult but important issue. I worry in particular about attention hazards — the risk that raising awareness about biological threats could perversely inspire more bad actors to exploit these vulnerabilities. In the early 1940s, one of the most closely guarded secrets wasn’t just the design of the atomic bomb, but the very fact that such a weapon was possible. Similarly, in the realm of biosecurity, sometimes the most dangerous information is the mere acknowledgment that it is possible to cause harm. While increasing understanding of biosecurity risks is crucial, there’s a real dilemma in the possibility that public awareness might do more harm than good, actually escalating the risks we face.

My approach to mitigating this is targeted audience engagement. I aim to be circumspect about what information is disseminated and to whom. This includes threat models and specific examples, but also some broader categories of information and framing. The goal is to disclose information publicly only when it’s absolutely necessary for reaching the intended audience — be it policymakers, experts in the field, or other relevant stakeholders. Broad public discussions might draw attention but can inadvertently make the situation riskier. Biosecurity needs to recruit more experts, but I do not think it should ever become a mass movement.
