Anonymous answers: could advances in AI supercharge biorisk?

This is Part Four of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Three: Infohazards.

One of the most prominently discussed catastrophic risks from AI is the potential for an AI-enabled bioweapon.

But discussions of future technologies are necessarily speculative. So it’s not surprising that there’s no consensus among biosecurity experts about the impact AI is likely to have on their field.

We decided to talk to more than a dozen biosecurity experts to better understand their views on the potential for AI to exacerbate biorisk. This is the fourth and final instalment of our biosecurity anonymous answers series. Below, we present 12 answers from these experts about whether recent advances in AI — such as ChatGPT and AlphaFold — have changed their biosecurity priorities and what interventions they think are promising to reduce the risks. (As we conducted the interviews around one year ago, some experts may have updated their views in the meantime.)

To make the experts feel comfortable speaking candidly, we offered them anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don’t endorse every position they’ve articulated below.

We think, though, that it’s helpful to lay out the range of expert opinions from people who we think are trustworthy and established in the field. We hope this will inform our readers about ongoing debates and issues that are important to understand — and perhaps highlight areas of disagreement that need more attention.

The group of experts includes policymakers serving in national governments, grantmakers for foundations, and researchers in both academia and the private sector. Some of them identify as being part of the effective altruism community, while others do not. All the experts are mid-career or more senior. Experts chose to provide their answers either in calls or in written form.

Note: the numbering of the experts is not consistent across the different parts of the series.

Some key topics and areas of disagreement that emerged include:

  • The extent to which recent AI developments have changed biosecurity priorities
  • The potential of AI to lower barriers for creating biological threats
  • The effectiveness of current AI models in the biological domain
  • The balance between AI as a threat multiplier and as a tool for defence
  • The urgency of developing new interventions to address AI-enhanced biosecurity risks
  • The role of AI companies and policymakers in mitigating potential dangers

Here’s what the experts had to say.

Expert 1: A unique policy window to mitigate harms

The intersection of artificial intelligence and biosecurity has shifted my work focus and priorities. We are at a pivotal moment; the current and imminent applications of narrow AI pose significant biosecurity risks, creating an urgent need for action. Moreover, there’s a unique policy window open now, as governments globally are demonstrating increased interest in this issue.

It’s worth noting that AI-biosecurity is a largely nascent field. Some interventions from the (still relatively new, but longer-standing) AI safety field are promising, but others do not readily apply, especially when it comes to smaller AI models and biological tools. It has become increasingly obvious that, despite the gravity of the issue, only a small cohort of experts worldwide has spent more than a few hundred hours thinking about AI-bio problems and exploring solutions. This presents an invaluable opportunity for capable young individuals to make meaningful contributions. However, rushing and a lack of scrutiny carry risks, such as the potential for the field to become prematurely anchored to certain paradigms or to overlook risks altogether.

My approach to addressing this complex issue is to tackle it step by step. At this juncture, the focus should first be on situational awareness and defining scope. Governments must develop robust capabilities for both passive and active monitoring of dangerous capabilities in AI development and deployment. This is foundational; meaningful governance strategies will only emerge if we have a comprehensive understanding of which AI systems pose risks that need to be regulated. If the scope is too narrow (for example, limited to large language models only), some dangerous capabilities risk being overlooked. Likewise, if the scope is too broad, regulation could stifle non-risky scientific progress and be very difficult to enact.

It’s important to recognise that any intervention strategy will likely resemble a ‘Swiss cheese model’: each layer is filled with gaps and imperfections, so together the layers reduce, but do not eliminate, the risk. Interestingly, the most effective solution may not lie in AI regulation but rather at the point of material access — robust DNA synthesis screening requirements for both customers and sequences could make it exceedingly difficult for malicious actors to acquire pathogens. Although this approach presents its own set of challenges, it may be more straightforward to enact and enforce compared to regulating AI tools in the life sciences, especially models released as open source.

Expert 2: Restricting digital to physical capabilities

I can imagine an argument for restricting the use of AI models. And I can also imagine a really good argument for restricting the ability to go from information directly to physical things — like having a system that comes up with a DNA sequence and synthesises a virus from it. I think that this is the approach that will probably get the most traction, but it can be hard to implement in practice.

Expert 3: A plausible emerging threat

My hot take is that AI is obviously a big deal, but I’m not sure it’s actually as big a deal in biosecurity as it might be for other areas. The intuition behind that is that bio is just quite messy and hard for machine learning to understand in the same way it can understand chess, for example.

Now obviously I could be easily proven wrong with the passage of time, but it’s notable that AlphaFold has not transformed biology just yet. It’s been a few years since its release. It’s not clear that large language models are really giving you that much yet in terms of performance in bio, in general. It does obviously seem to be a plausible emerging threat. I’m very pleased that other people are working on it.

I don’t think the current AI models are that capable. Anything a large language model can say is something which you could currently discover with Google.

However, a common theme in this field is you’re worried not just about raising the ceiling of misuse but lowering the floor in terms of tacit knowledge. I think there is some risk of that.

Expert 4: Reducing ‘meta’ knowledge barriers

Like many people, I am disappointed by how fast these machines have become so good. And honestly, we got super lucky. Because a few years ago, you might have assessed the state of the art in AI and thought it had no chance of doing anything dangerous with biology. And a year from now, it might be incredibly skilled at creating bioweapons; the genie may have gotten out of the bottle.

But we happen to be observing AI systems at a very critical time, when they have some kernels of the right answers about biotechnology. Some questions they answer very well, while for others, the answers are terrible. The clear trend is that they are getting better, and we got super lucky to be observing this now, before it’s too late.

I think knowledge barriers are going to be reduced faster than I thought. This includes the “meta” knowledge barriers, which are the ones I worry the most about. These are the answers to questions like: what do I have to know to create harm?

Some malevolent actor might, for example, want to create a pandemic with a specific virus — and they could ask how to do that with that specific virus. But there are also a bunch of other questions you’d have to ask to make sure you really get what you want. And I think most actors don’t even know how to think about that problem, how to think about a biological weapon as a system. And so I think very soon, if we don’t do anything about it, these AI systems will be able to just do that without much specific prompting at all.

Expert 5: Large language models are not a threat, but other AI tools may be

So my biosecurity priorities have not changed at all since the release of ChatGPT. I think it’s completely uninteresting. I don’t think it poses any threat at all. Some researchers argue that language models could increase access to dangerous biotechnology, and I think these are just hyperbolic sci-fi stories. It’s much, much more difficult than those stories suggest to learn how to do complex and potentially dangerous biolab work from a language model.

AlphaFold and the other biodesign-tool-based AI systems have changed my priorities in the sense that I now see that the homology-based screening that we do will, at some point in the not-too-distant future, no longer be up to the task of assessing the risk of the DNA sequences that have been ordered for synthesis. We’ll need AI tools that can themselves assess whether a particular DNA sequence being ordered would be dangerous based on a functional biological analysis, rather than just matching the sequences to known dangerous pathogens. It will be too easy to get around the simple threat screening. And so that has made me think we need federal government investment to build AI-enabled tools that do these risk assessments, because companies don’t have the kind of money or domain expertise that it would take to build those tools. We need these programs to start up yesterday, because they need to keep pace with the advancement of the biodesign tool space, which of course is fast and furious.

I assume in the future everyone is going to have access to reasonably powerful open-source models. They’re going to be able to design whatever constructs they want. And the only real point of access control is going to be when those designs are instantiated in the physical world in the form of DNA. And that borderline between digital and physical in terms of synthetic DNA and screening is going to be incredibly important.

Expert 6: Using AI tools to help detect threats

My priorities have changed. I’m less worried about the threat of chatbots than many people, but AI applied to biology has created new risks and increased the number of actors who could create a given threat. I see the potential for AI to greatly accelerate the pace of scientific discovery across many disciplines, which could make it very difficult to stay on top of emerging risks.

I’m optimistic, though. The tools used to design biological threats can also be used to detect them. Therefore, we need to be proactive with national security plans and ensure the infrastructure for detecting and responding to biological threats is sophisticated and stable enough to address what’s coming. We’re not even very well placed to deal with natural threats, so we have a long way to go.

Pathogen surveillance programs won’t, by default, be geared toward looking for AI-generated sequences that don’t resemble natural ones or asking whether a biological agent has been generated or manipulated in a lab. Fixing this will require us to work collaboratively with other players. We can develop add-ons to public health and surveillance efforts to use existing infrastructure and investments to detect new threats. We can also work with AI developers to make it easier for them to safeguard their tools. Expecting AI developers to become experts in biosecurity or stop working on AI is unrealistic — we should give them tools and resources to do their work more safely.

Expert 7: Enabling both lone and state actors

I think the intersection of AI and biology, or AI-enabled biology, is a new risk. I think it falls into two baskets. One is enabling the lone actor or terrorist group, as the internet did, but on steroids. It gives them access to how-to information. So we have to work with the AI companies that are developing these large language models to put in some guardrails, which is hard because of the dual-use nature of biotechnology. But some companies that make safety a priority, like Anthropic, are doing some really important work in this area.

And hopefully all of the AI companies, at least the Western ones, will agree to some guardrails. And obviously UK Prime Minister Rishi Sunak tried to galvanise that by hosting a summit on this.

The other area is the more sophisticated, advanced bioweapons actors, which at this point are mostly state actors. They can use bioengineering tools to enhance pathogens in a much more sophisticated way than in the past. And that’s very disturbing.

Expert 8: Accelerating risk and mitigation approaches

AI may really accelerate biorisk. Unfortunately, I don’t think we have yet figured out great tools to manage that risk. There are some no-brainer things we should do, like universal gene synthesis screening, model evaluations, and controlled access to highly capable AI models through an API.

I think the tougher questions are whether there should be controls on the data that can be used to train the next generation of AI (and on the generation of that data), or whether there should be norms around how those kinds of annotated datasets are generated, used, and published. That’s a really tough question, and people will have strong feelings on many sides of it.

And we could do all the things that I just mentioned and still only partially mitigate the risk at best. So this is also an area where we need new thinking and new ideas.

Expert 9: New technologies and overhype

I’m sceptical about a lot of the hype, just because every new technology is supposedly a game-changer that will transform and democratise biotechnology and make everyone capable of becoming a bioterrorist. I’ve heard this too much over the last 20 years.

Every new technology gets overhyped, and then people realise it’s not as good as we thought it was. We’re in the hype part of that cycle now. There’s also a long history of people from the technology field oversimplifying how easy it is to misuse biology and overstating the risks new technologies pose for biosecurity. So again, that’s very common.

However, some of the AI stuff is potentially very powerful, and it is becoming very accessible very quickly.

So it does raise some concerns. I would favour multidisciplinary discussions about these risks that involve people who actually understand AI, people who understand biosecurity, and people who understand the actual security issues. And I think we need to have that discussion among those experts to get a better baseline. And what I’ve been hearing so far has been a little too alarmist.

Expert 10: Collapsing timelines and missing datasets

The timelines in which we will need to tackle major technical challenges have collapsed. Work that was anticipated to take decades (e.g. designing novel pathogens) may be feasible in months or years. The datasets needed to empower tools that could be used to cause harm exist or could be created in the short-to-medium term. The datasets required to identify malign misuse do not, and probably cannot, be produced.

We need to ensure we integrate high-throughput approaches so that we can develop, produce, and deploy novel medical countermeasures against unknown threats.

Expert 11: Early days and space to contribute

I haven’t worked on this personally, so I don’t have very detailed takes other than it seems really important. A lot of people have noticed that it’s really important and are still kind of scrambling to figure out exactly what would be useful to do, if anything. So it’s a pretty good time to be getting into this, and it’s something that people with AI and biology expertise can both contribute to. There isn’t anyone yet who’s an established expert. It’s very early days, and there’s a lot left to figure out, and thus a lot of space for people to make useful contributions.

Recent AI advances have shortened everyone’s timelines in a way I don’t appreciate. They’ve also shortened timelines to catastrophic pandemics. This has made everything a lot scarier and more urgent in a field that was already fairly scary and urgent. It’s hard to think about, and I think a lot of people in biosecurity are not really thinking about it. I and a lot of other people have mostly continued to work on things we were already working on. I think this often makes sense, but there’s at least some fraction of people who should stop what they’re doing and reorient towards AI. More people should probably make this switch than have already done so, though I’m not very confident about this.

On the other hand, I know several people who have pivoted fully away from biosecurity and into AI work in the last year. And I don’t know how I feel about this. AI sure is important, but we have a pretty strong need for people in biosecurity, and I think the loss of lots of highly capable people to AI might be a mistake if it lasts.

Expert 12: Renewed urgency

My priorities have not changed significantly in light of the developments in AI. I think that the latest advances in AI demonstrate clearly the need for stronger controls and policies to prevent access to biological materials, deter and disrupt progress in biological design for bad actors, and prepare for biological incidents of any origin. We need to keep doing what we are doing but with renewed urgency.

Learn more