Updates to our research about AI risk and careers

This week, we’re sharing new updates on:

  1. Top career paths for reducing risks from AI
  2. An AI bill in California that’s getting a lot of attention
  3. The potential for catastrophic misuse of advanced AI
  4. Whether to work at frontier AI companies if you want to reduce catastrophic risks
  5. The variety of approaches in AI governance

Here’s what’s new:

1. We now rank AI governance and policy at the top of our list of impactful career paths

It’s swapped places with AI safety technical research, which is now second.

Here are our reasons for the change:

  • Many experts in the field have become increasingly excited about “technical AI governance”: people using technical expertise to inform and shape policies. For example, technical experts can help develop sophisticated compute governance policies and norms for evaluating advanced AI models for dangerous capabilities.
  • We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
  • It’s become clearer that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
  • We’re seeing a particularly large increase in the number of roles available in AI governance and policy, and we’re now keener than before to encourage (even) more people to get involved. Governments also seem more poised to take action than they did just a few years ago.
  • AI governance is still a less developed field than AI safety technical research, so additional people entering it may be able to make a bigger marginal difference.
  • We now see clear pushback from industry against attempts to create risk-reducing AI policy, so it’s plausible that more work is needed to advocate for sensible approaches.
  • Good AI governance will be needed to reduce a range of risks from AI — not just misalignment but also catastrophic misuse (discussed below), as well as emerging societal risks, like the potential suffering of digital minds or stable totalitarianism. It’s plausible (though highly uncertain) that these other risks could make up the majority of the potential bad outcomes in worlds with transformative AI.
  • As AI progress accelerates and competition intensifies, it’s become increasingly clear that strategic decision making about AI development may be necessary to give humanity additional time to hone technical safety measures. This could help us resist competitive pressures that would otherwise drive up the risk of catastrophe.
  • Even if researchers make technical breakthroughs that significantly reduce the risk of catastrophic misalignment from AI systems, we will likely need governance measures and effective policies to ensure those safety measures are deployed consistently. Some people in the field have long expected this, and we think it seems increasingly plausible that they’re right and that such governance is feasible.

To be clear, we still think AI safety technical research is extremely valuable and will easily be many people’s best option if they are a good fit for it. There’s also a blurry boundary between the two fields, and some kinds of work could go under either umbrella.

Check out our overview of AI governance careers

2. New interview about California’s AI bill

This one is particularly timely: we’ve just released an interview on our podcast with Nathan Calvin on SB 1047, California’s AI regulation bill. The bill was passed by the California State Assembly and Senate this week, which means the governor now has to decide whether to sign it.

Nathan and host Luisa Rodriguez discussed what’s in the bill, how it’s changed, why it’s controversial, and what it aims to do. Nathan is senior policy counsel at the Center for AI Safety Action Fund, which has done work on the bill.

If you’re interested in hearing his case for the bill, as well as his response to a series of objections Luisa raised, we recommend listening to the episode or checking out the transcript.

Check out the interview

3. Catastrophic misuse of AI

While a lot of our work has focused on the potential risk of unintentionally creating power-seeking AI systems, we don’t think that’s the only way advanced AI could have catastrophic consequences for our future.

Humans might use advanced AI in ways that threaten the long-term future, including:

  • Bioweapons: AI might lower barriers for creating dangerous pathogens that extremist or state actors could use.
  • Empowering authoritarianism: Advanced AI could enable unprecedented levels of surveillance and control, potentially leading to stable, long-term totalitarian regimes.
  • Transforming war: AI could destabilise nuclear deterrence, lead to the development of autonomous weapons, and create strategic advantages that might lead to catastrophic conflict.

We think working on some of these risks (including continuing to investigate how high they are) might be just as impactful as trying to reduce the risk of unintentionally creating power-seeking systems.

Learn more about AI misuse risks

4. Working at a frontier AI company: opportunities and downsides

If you want to help reduce the largest risks from advanced AI, does it make sense to work for a frontier AI company like OpenAI, Google DeepMind, or Anthropic?

There’s ongoing debate about this among people interested in AI safety, and we don’t have definite answers. We surveyed experts in the field and spoke to a wide range of people with different roles at different organisations. Even among people who have similar views about the nature of the risks, there’s a lot of disagreement.

In our updated article on the subject, we discuss some key considerations:

  • Potential positive role impact: Some roles at these companies may be among the best for reducing AI risks, even though other roles (possibly even most of them) could make things worse.
    • We think roles aimed at reducing catastrophic risks (e.g., AI safety research, security roles, and some governance roles) are much more likely to be beneficial than others, especially those that clearly accelerate AI progress and don’t reduce major risks. But deciding whether any particular role is more beneficial than harmful requires weighing up many interrelated and contested considerations.
  • Potential positive company impact: We think it’s possible for a responsible frontier AI company to be a force for good by leading in safety practices, conducting valuable research, and influencing policy in positive ways. (But some companies seem like they’re probably more responsible than others.)
  • Risk of harm: There’s a very real danger that most roles at these companies accelerate progress towards powerful AI systems before adequate safety measures are in place.
  • Career capital: Working at these companies can provide excellent industry insights and career advancement opportunities. (Though there are also some downsides.)

We also give advice on ways you can mitigate the downsides of working at a frontier AI company if you do decide to do so, as well as factors to consider for your particular case.

We originally published this article in June 2023, and we’ve updated it now to reflect more recent developments and thinking.

Read more about working at frontier AI companies

5. Emerging approaches in AI governance

When we first recommended that readers consider pursuing careers in AI governance and policy, there were very few roles actually working on the most important problems. Working on AI governance largely meant researching nascent ideas.

That’s not the case anymore — AI policy is now an active and exciting field with lots of concrete ideas. In fact, there has been a surprising amount of action taken in a short period of time — for example, international summits, export controls on AI hardware, President Biden’s Executive Order on AI, and the passage of the EU AI Act.

Several new approaches that could shape the future of AI policy are actively being debated:

  • Creating standards and evaluation protocols
  • Requiring companies to prepare ‘safety cases’ before deploying models
  • Information security standards
  • Clarifying liability law
  • Compute governance
  • Societal adaptation strategies

We give an overview of these and other policy approaches in an updated section of our AI governance career review:

Learn more about policy approaches

