The idea this week: technical expertise is needed in AI governance and policy.

How do you prevent a new and rapidly evolving technology from spiralling out of control? How can governments, policymakers, and civil society ensure that we’re making the best decisions about how to integrate artificial intelligence into our society?

To answer these kinds of questions, we need people with technical expertise — in machine learning, information security, computing hardware, or other relevant technical domains — to work in AI governance and policymaking.

Of course, people from many different backgrounds have roles to play in AI governance and policy. Experience in law, international coordination, communications, operations management, and more can all be valuable in this space.

But we think people with technical backgrounds may underrate their ability to contribute to AI policy. We’ve long regarded AI technical safety research as an extremely high-impact career option, and we still do. But this emphasis sometimes gives readers the impression that, if they have a technical background or aptitude, safety research is the main path to consider if they want to help prevent an AI-related catastrophe.

But this isn’t necessarily true.

Technical knowledge is crucial in AI governance for understanding the current landscape and likely trajectories of the technology, as well as for designing and implementing policies that can reduce the biggest risks. Lennart Heim, an AI governance researcher, provides more details about why these skills are useful in a recent blog post.

We’ve spoken to experts working in Washington, D.C., who say that technical credentials some may regard as fairly modest — such as a bachelor’s degree in computer science or a master’s in machine learning — can be highly sought after in policy roles.

People with deeper experience in AI or related fields could be especially impactful in governance work, particularly if they have backgrounds in the following areas:

Other specific technical backgrounds can also be highly valuable. People with knowledge of virology, for example, could work on reducing the risk from AI-related biological threats.

If you have a technical background and want to work on reducing catastrophic risks from artificial intelligence, we’d encourage you to apply for 1-1 advising with our team to discuss how you could use your skills and what opportunities might be available to you.

We also recommend checking out Emerging Tech Policy Careers, which offers extensive resources for learning about opportunities in US policy.

And we’ve been excited to see many promising opportunities coming out of the UK government’s new AI Safety Institute. The US Department of Commerce is also setting up its own AI Safety Institute, and we feature many AI-related roles in the US federal government on our job board. What’s more, the European Union is poised to pass a new AI Act, which will likely require new personnel to implement its provisions — offering even more opportunities.

But important work in AI governance can be done without working directly for a particular government. For example:

These kinds of roles can also help people with technical experience pick up the policy skills and knowledge they may currently lack.

For more information on these and related paths, read our career review of AI governance.

This blog post was first released to our newsletter subscribers.

Join over 400,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also mail you a free book!

Learn more: