AI governance and policy
As advancing AI capabilities gained widespread attention in late 2022 and 2023, interest in governing and regulating these systems grew. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.
There’s still a lot of uncertainty about which AI governance strategies would be best. Many have proposed policies and strategies aimed at reducing the largest risks, which we discuss below.
But there’s no roadmap here. There’s plenty of room for debate about what’s needed, and we may not have found the best ideas yet in this space. In any case, there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and policy.
Summary
In a nutshell: Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks. There are opportunities in the broad field of AI governance to positively shape how society responds to and prepares for the challenges posed by the technology.
Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.
Recommended
If you are well suited to this career, it may be the best way for you to have a social impact.
Review status
Based on an in-depth investigation
Table of Contents
- 1 Summary
- 2 Why this could be a high-impact career path
- 3 What kinds of work might contribute to AI governance?
- 4 What policies and practices would reduce the largest risks?
- 5 Examples of people pursuing this path
- 6 How to assess your fit and get started
- 7 Where can this kind of work be done?
- 8 How this career path can go wrong
- 9 What the increased attention on AI means
- 10 Read next
- 11 Learn more
“What you’re doing has enormous potential and enormous danger.” — US President Joe Biden, to the leaders of the top AI companies
Why this could be a high-impact career path
Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.
And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.
We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems, raising living standards and helping us build a more flourishing society.
But there are also substantial risks. Advanced AI could be used to do a lot of harm. And we worry it could accidentally lead to a major catastrophe — and perhaps even cause human disempowerment or extinction. We discuss the arguments that these risks exist in our in-depth problem profile.
Because of these risks, we encourage people to work on finding ways to reduce the danger through technical research and engineering.
But we need a range of strategies for risk reduction. Public policy and corporate governance in particular may be necessary to ensure that advanced AI is broadly beneficial and low risk.
Governance generally refers to the processes, structures, and systems that carry out decision making for organisations and societies at a high level. In the case of AI, we expect the governance structures that matter most to be national governments and organisations developing AI — as well as some international organisations and perhaps subnational governments.
Some aims of AI governance work could include:
- Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
- Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
- Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
- Reducing the risk of an “AI arms race” between nations and between companies
- Ensuring that advanced AI developers are incentivised to be cooperative and concerned about safety
- Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control
We need a community of experts who understand modern AI systems and policy, as well as the severe threats and potential solutions. This field is still young, and many of the paths within it aren't yet well defined and may not pan out. But there are relevant professional paths that will provide you valuable career capital for a variety of positions and types of roles.
The rest of this article explains what work in this area might involve, how you can develop career capital and test your fit, and some promising places to work.
What kinds of work might contribute to AI governance?
There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.
We generally don’t think people early in their careers should aim for a specific high-impact job. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can use later to have an impact.
This may involve following a standard career trajectory or moving around in different kinds of roles. Sometimes, you just have to apply to many different roles and test your fit for various types of work before you know what you’ll be good at. Most importantly, you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.
In the AI governance space, we see at least six broad categories of work that we think are important:
- Government work
- Research on AI policy and strategy
- Industry work
- Advocacy and lobbying
- Third-party auditing and evaluation
- International work and coordination
Thinking about the different kinds of career capital that are useful for the categories of work that appeal to you may suggest some next steps in your path. (We discuss how to assess your fit and enter this field below.)
You may want to move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.
We have also reviewed career paths in AI technical safety research and engineering, information security, and AI hardware expertise, which may be crucial to reducing risks from AI. These fields may also play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these subjects as well.
Government work
Taking a role within an influential government could help you play an important role in the development, enactment, and enforcement of AI policy.
We generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the most prominent AI companies such as Anthropic, OpenAI, and Google DeepMind. It also has jurisdiction over key parts of the AI chip supply chain. Much of this article focuses on US policy and government.1
But other governments and international institutions matter too. For example, the UK government, the European Union, China, and others may present opportunities for impactful AI governance work. Some US state-level governments, such as California, may have opportunities for impact and gaining career capital.
What would this work involve? Sections below discuss how to enter US policy work and which areas of the government you might aim for.
In 2023, the US and UK governments both announced new institutes for AI safety — both of which should provide valuable opportunities for career capital and potential impact.
But at the broadest level, people interested in positively shaping AI policy should gain skills and experience to work in areas of government with some connection to AI or emerging technology policy.
This can include roles in: legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.
If you can get a role already working directly on this issue, such as in one of the AI safety institutes or working for a lawmaker focused on AI, that could be a great opportunity.
Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact. Try to establish yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to go quite far.
One way to advance your career in government on a specific topic is what some call “getting visibility.” This involves using your position to learn about the landscape and connect with the actors and institutions in the policy area. You’ll want to engage socially with others in the policy field, get invited to meetings with other officials and agencies, and be asked for input on decisions. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.
Career trajectories within government can be broken down roughly as follows:
- Standard government track: This involves entering government at a relatively low level and climbing the seniority ladder. For the highest impact, you’d ideally reach senior levels by sticking around, forming relationships, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
- Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory also work in nonprofits, think tanks, the private sector, government contractors, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise can sometimes be greater than the value of seniority.
- Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might involve getting tapped to lead an important commission or providing valuable input on an urgent project. This isn’t necessarily a strategy you can plan a career around, but it’s good to be aware of it as an option that might be worth taking at some point.
Read more about how to evaluate your fit and get started building relevant career capital in our article on policy and political skills.
Research on AI policy and strategy
There’s still a lot of research to be done on AI governance strategy and implementation. The world needs more concrete policies that would really start to tackle the biggest threats; developing such policies and deepening our understanding of the strategic needs of the AI governance space are high priorities.
Other relevant research could involve surveys of public and expert opinion, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI.
Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.
However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless. So it’s important to be reflective and seek input from others in the field about what kind of contribution you can make. We list several research organisations below that we think pursue promising research on this topic and could provide useful mentorship.
One approach for testing your fit for this work — especially when starting out — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t received much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how you enjoy the work and how you might contribute to this field.
But don’t spend too long testing your fit without making much progress, and note that some people contribute best when working on a team. So don’t over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without funding or some really encouraging feedback from people working in the field.
If you have the experience to be hired as a researcher, work on AI governance can be done in academia, nonprofit organisations, and think tanks. Some government agencies and committees, too, perform valuable research.
Note that universities and academia have their own priorities and incentives that often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — your position may even give you a credible platform from which to advocate for important ideas.
But if you’re just starting out in a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people on this path to pursue generic academic-track roles without a clear idea of how they can do important research on AI governance.
Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.
You can also learn more in our article about how to become a researcher.
Industry work
Internal policy and corporate governance at the largest AI companies themselves is also important for reducing risks from AI.
At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and the incentives the organisation faces can have a major impact on a company’s choices. Many of these roles are filled by people with extensive management and organisational leadership experience, such as founding and running companies.
If you’re able to join a policy team at a major company, you can model threats and help develop, implement, and evaluate proposals to reduce risks. And you can build consensus around best practices, such as strong information security, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.
And if, as we expect, AI companies face increasing government oversight, ensuring compliance with relevant laws and regulations will be a high priority. Communicating with government actors and facilitating coordination from inside the companies could be impactful work.
In general, it seems better for AI companies to be highly cooperative with each other2 and with outside groups seeking to minimise risks. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about catastrophic risks and have even called for regulation of the frontier technology they’re creating.
That said, cooperation will likely take a lot of effort. Companies creating powerful AI systems may resist some risk-reducing policies, because they’ll have strong incentives to commercialise their products. So getting buy-in from the key players, increasing trust and information-sharing, and building a consensus around high-level safety strategies will be valuable.
Advocacy and lobbying
People outside of government or AI companies can influence the shape of public policy and corporate governance with advocacy and lobbying.
Advocacy is the general term for efforts to promote certain ideas and shape the public discourse, often around policy-related topics. Lobbying is a more targeted effort aimed at influencing legislation and policy, often by engaging with lawmakers and other officials.
If you believe AI companies may be disposed to advocate for generally beneficial regulation, you might work with them to push the government to adopt specific policies. It’s plausible that AI companies have the best understanding of the technology, as well as the risks, failure modes, and safest paths — and so are best positioned to inform policymakers.
On the other hand, AI companies might have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that’s right, it may be better to join or create advocacy organisations unconnected to the industry — perhaps supported by donations — that can take stances opposed to commercial interests.
For example, some believe it might be best to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the companies themselves or of the government. But pushing for this step may be difficult for those involved with the companies creating advanced AI systems.
It’s also possible that the best outcomes will result from a balance of perspectives from inside and outside industry.
Advocacy can also:
- Highlight neglected but promising approaches to governance that have been uncovered in research
- Facilitate the work of policymakers by showcasing the public’s support for governance measures
- Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way
- Pressure companies to proceed more cautiously
- Change public sentiment around AI and discourage irresponsible behaviour by individual actors
However, note that advocacy can sometimes backfire because predicting how information will be received isn’t straightforward. Be aware that:
- Drawing attention to a cause area can sometimes trigger a backlash
- Certain styles of rhetoric can alienate people or polarise public opinion
- Spreading mistaken messages can discredit yourself and others
It’s important to keep these risks in mind and consult with others (particularly those you respect but might disagree with tactically). And you should educate yourself deeply about the topic before explaining it to the public.
You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them. And you may find it useful to read our article on the skills needed for communicating important ideas.
Case study: the Center for AI Safety statement
In May 2023, the Center for AI Safety released a single-sentence statement saying: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Most notably, the statement was supported by more than 100 signatories, including leaders of major AI companies — OpenAI, Google DeepMind, and Anthropic — as well as top researchers in the field, such as Geoffrey Hinton and Yoshua Bengio. Signatories also included a member of the US Congress, other public officials, economists, philosophers, business leaders, and more.
This statement drew media attention at the time, and UK Prime Minister Rishi Sunak and the White House press secretary both reacted to it with expressions of concern. Both the UK government and the US government have subsequently moved forward with efforts to start to address these risks.
The statement has also helped clarify and inform the discourse about AI risk, as evidence that being concerned about catastrophes on the scale of human extinction is not a fringe position.
Third-party auditing and evaluation
If regulatory measures are put in place to reduce the risks of advanced AI, some agencies and outside organisations will need to audit companies and systems to make sure that regulations are being followed.
Governments often rely on third-party auditors when regulating because they lack much of the expertise that the private sector has. We don’t yet know of many opportunities for AI-related auditing roles, but such roles could play a critical part in an effective AI governance framework.
AI companies and the AI systems they create may be subject to audits and evaluations out of safety concerns.
One nonprofit, Model Evaluation and Threat Research (METR, formerly known as ARC Evals), has been at the forefront of work to evaluate the capabilities of advanced AI models.3 In early 2023, the organisation partnered with two leading AI companies, OpenAI and Anthropic, to evaluate the capabilities of the latest versions of their chatbot models prior to their release. They sought to determine if the models had any potentially dangerous capabilities in a controlled environment.
The companies voluntarily cooperated with METR for this project, but at some point in the future, these evaluations may be legally required.
Other types of auditing and evaluation may be required as well. METR has said it intends to develop methods to determine whether models are appropriately aligned — that is, whether they will behave as their users intend — prior to release.
Governments may also want to employ auditors to evaluate the amount of compute that AI developers have access to, their information security practices, the uses of models, the data used to train models, and more.
Acquiring the technical skills and knowledge to perform these types of evaluations, and joining organisations that will be tasked to perform them, could be the foundation of a highly impactful career. This kind of work will also likely have to be facilitated by people who can manage complex relationships across industry and government. Someone with experience in both sectors could have a lot to contribute.
Some of these types of roles may have some overlap with work in AI technical safety research.
International work and coordination
US-China
For someone with the right fit, working to improve coordination with China on the safe development of AI could be a particularly impactful career path.
The Chinese government has been a major funder of AI development, and the country has giant tech companies that could potentially drive forward advances.
Given tensions between the US and China, and the risks posed by advanced AI, there’s a lot to be gained from increasing trust, understanding, and coordination between the two countries. The world will likely be much better off if we can avoid a major conflict between great powers and if the most significant players in emerging technology can avoid exacerbating any global risks.
We have a separate career review that goes into more depth on China-related AI safety and governance paths.
Other governments and international organisations
As we’ve said, we focus most on US policy and government roles. This is largely because we anticipate that the US is now and will likely continue to be the most pivotal actor when it comes to regulating AI, with a major caveat being China, as discussed in the previous section.
But many people interested in working on this issue can’t or don’t want to work in US policy — perhaps because they live in another country and don’t intend on moving.
Much of the advice above still applies to these people, because roles in AI governance research and advocacy can be done outside of the US.4 And while we don’t think it’s generally as impactful in expectation as US government work, opportunities in other governments and international organisations can be complementary to the work to be done in the US.
The United Kingdom, for instance, may present another strong opportunity for AI policy work that would complement US work. Top UK officials have expressed interest in developing policy around AI, a new international agency, and reducing extreme risks. And the UK government announced the creation of its own AI Safety Institute in late 2023 to develop evaluations for AI systems and coordinate globally on AI policy.
The European Union has shown that its data protection standards — the General Data Protection Regulation (GDPR) — affect corporate behaviour well beyond its geographical boundaries. EU officials have also pushed forward on regulating AI, and some research has explored the hypothesis that the impact of the EU’s AI regulations will extend far beyond the continent — the so-called “Brussels effect.”
And any relatively wealthy country could fund some AI safety research, though much of it requires access to top talent and state-of-the-art tech. Any significant advances in AI safety research could inform researchers working on the most powerful models.
Other countries might also develop liability standards for the creators of AI systems that could incentivise corporations to proceed cautiously before releasing models.
And at some point, there may be AI treaties and international regulations, just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats.
Efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenarios. The Organisation for Economic Co-operation and Development (OECD), for instance, has already created the AI Policy Observatory.
Third-party countries may also be able to facilitate cooperation and reduce tensions between the United States and China, whether around AI or other potential flashpoints.
What policies and practices would reduce the largest risks?
People working in AI policy have proposed a range of approaches to reducing risk as AI systems get more powerful.
We don’t necessarily endorse all the ideas below, but what follows is a list of some prominent policy approaches that could be aimed at reducing the largest dangers from AI:5
- Responsible scaling policies: some major AI companies have already begun developing internal frameworks for assessing safety as they scale up the size and capabilities of their systems. These frameworks introduce safeguards that are intended to become increasingly stringent as AI systems become potentially more dangerous, and they ensure that AI systems’ capabilities don’t outpace companies’ abilities to keep systems safe. Many argue that these internal policies are not sufficient for safety, but they may represent a promising step for reducing risk. You can see versions of such policies from Anthropic, Google DeepMind, and OpenAI.
- Standards and evaluation: governments might develop industry-wide benchmarks and testing protocols to assess whether AI systems pose major risks. The nonprofit METR and the UK AI Safety Institute are among the organisations currently developing these evaluations to test AI models before and after they are released. This can include creating standardised metrics for an AI system’s capabilities and potential to cause harm, as well as its propensity for power-seeking or misalignment.
- Safety cases: this practice involves requiring AI developers to provide comprehensive documentation demonstrating the safety and reliability of their systems before deployment. This approach is similar to safety cases used in other high-risk industries like aviation or nuclear power.6 You can see discussion of this idea in a paper from Clymer et al. and in a post from Geoffrey Irving at the UK AI Safety Institute.
- Information security standards: we can establish robust rules for protecting AI-related data, algorithms, and infrastructure from unauthorised access or manipulation — particularly the AI model weights. RAND released a detailed report analysing the security risks to major AI companies, particularly from state actors.
- Liability law: existing law already imposes some liability on companies that create dangerous products or cause significant harm to the public, but its application to AI models and their risks in particular is unclear. Clarifying how liability applies to companies that create dangerous AI models could incentivise them to take additional steps to reduce risk. Law professor Gabriel Weil has written about this idea.
- Compute governance: governments may regulate access to and use of high-performance computing resources necessary for training large AI models. The US restrictions on exporting state-of-the-art chips to China are one example of this kind of policy, and others are possible. Companies could also be required to install hardware-level safety features directly into AI chips or processors. These could be used to track chips and verify they’re not in the possession of anyone who shouldn’t have them, or for other purposes. You can learn more about this topic in our interview with Lennart Heim and in this report from the Center for a New American Security.
- International coordination: fostering global cooperation on AI governance to ensure consistent standards may be crucial. This could involve treaties, international organisations, or multilateral agreements on AI development and deployment. We discuss some related considerations in our article on China-related AI safety and governance paths.
- Societal adaptation: it may be critically important to prepare society for the widespread integration of AI and the potential risks it poses. For example, we might need to develop new information security measures to protect crucial data in a world with AI-enabled hacking. Or we may want to implement strong controls to prevent handing over key societal decisions to AI systems.7
- Pausing scaling if appropriate: some argue that we should currently pause all scaling of larger AI models because of the dangers the technology poses. We have featured some discussion of this idea on our podcast. It seems hard to know if or when this would be a good idea. If carried out, it could involve industry-wide agreements or regulatory mandates to pause scaling efforts.
The details, benefits, and downsides of many of these ideas have yet to be fully worked out, so it’s crucial that we do more research and get more input from informed stakeholders. And this list isn’t comprehensive — there are likely other important policy interventions and governance strategies worth pursuing.
You can also check out a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy,8 an article about AI policy proposals from Vox’s Dylan Matthews, and a survey of expert opinion on best practices in AI safety and governance.
Examples of people pursuing this path
How to assess your fit and get started
If you’re early on in your career, you should focus first on getting skills and other career capital to successfully contribute to the beneficial governance and regulation of AI.
You can gain career capital for roles in many ways. Broadly speaking, working in or studying fields such as politics, law, international relations, communications, and economics can all be beneficial for going into policy work.
And expertise in AI itself, gained by studying and working in machine learning and technical AI safety, or in related fields such as computer hardware and information security, should also give you a big advantage.
Testing your fit
Try to find relatively “cheap” tests to assess your fit for different paths. This could mean, for example, taking a policy internship, applying for a fellowship, doing a short bout of independent research, or taking classes or courses on technical machine learning or computer engineering.
It can also involve talking to people doing a job and finding out what the day-to-day experience of the work is and what skills are needed.
All of these factors can be difficult to predict in advance. While we grouped “government work” into a single category above, that label covers a wide range of roles. Finding the right fit can take years, and it can depend on factors out of your control, such as the colleagues you work closely with. That’s one reason it’s useful to build broadly valuable career capital that gives you more options.
Don’t underestimate the value of applying to many relevant openings in the field and sector you’re aiming for to see what happens. You’ll likely face a lot of rejection with this strategy, but you’ll be able to better assess your fit for roles after you see how far you get in the process. This can give you more information than guessing about whether you have the right experience.
Try to rule out certain types of work if you gather evidence that you’re not a strong fit. For example, if you invest a lot of effort trying to get into reputable universities or nonprofit institutions to do AI governance research, but you get no promising offers and receive little encouragement, this might be a significant signal that you’re unlikely to thrive in that path.
That wouldn’t mean you have nothing to contribute, but your comparative advantage may lie elsewhere.
Read the section of our career guide on finding a job that fits you.
Types of career capital
A mix of people with technical and policy expertise — and some people with both — is needed in AI governance.
While anyone involved in this field should work to maintain an understanding of both the technical and policy details, you’ll probably start out focusing on either policy or technical skills to gain career capital.
This section covers:
- Generally useful career capital
- Policy-related career capital
- Technical career capital
- Other specific forms of career capital
Much of this advice is geared toward roles in the US, though it may be relevant in other contexts.
Generally useful career capital
The chapter of the 80,000 Hours career guide on career capital lists five key components that will be useful in any path: skills and knowledge, connections, credentials, character, and runway.
For most jobs touching on policy, social skills, networking, and — for lack of a better word — political skills will be a huge asset. These skills can probably be learned to some extent, but some people may find they don’t have them and can’t or don’t want to acquire them.
That’s OK — there are many other routes to having a fulfilling and impactful career, and there may be some roles within this path that demand these skills to a much lesser extent. That’s why testing your fit is important.
Read the full section of the career guide on career capital.
Policy-related career capital
To gain skills in policy, you can pursue education in many relevant fields, such as political science, economics, and law.
Many master’s programmes offer specific coursework on public policy, science and society, security studies, international relations, and other topics; having a graduate degree or law degree will give you a leg up for many positions.
In the US, a master’s, a law degree, or a PhD is particularly useful if you want to climb the federal bureaucracy. Our article on US policy master’s degrees provides detailed information about how to assess the many options.
Internships in DC are a promising route to test your fit for policy and gain career capital. Many academic institutions now offer a “Semester in DC” programme, which can let you explore placements in Congress, federal agencies, or think tanks.
The Virtual Student Federal Service (VSFS) also offers part-time, remote government internships that students can complete alongside their studies.
Once you have a suitable background, you can take entry-level positions within parts of the government and build a professional network while developing key skills. In the US, you can become a congressional staffer, or take a position at a relevant federal department, such as the Department of Commerce, Department of Energy, or the Department of State. Alternatively, you can gain experience in think tanks (a particularly promising option if you have an aptitude for research). Some government contractors can also be a strong option.
Many people say Washington, D.C. has a unique culture, particularly for those working in and around the federal government. There’s a big focus on networking, bureaucratic politics, status-seeking, and influence-peddling. We’ve also been told that while merit matters to a degree in US government work, it is not the primary determinant of who is most successful. People who think they wouldn’t feel able or comfortable working in this kind of environment for the long term should consider whether other paths would suit them better.
If you find you can enjoy government and political work, impress your colleagues, and advance in your career, though, you may be a good fit. Just being able to thrive in government work can be a valuable comparative advantage.
US citizenship
Your citizenship may affect which opportunities are available to you. Many of the most important AI governance roles within the US — particularly in the executive branch and Congress — are only open to, or will at least heavily favour, American citizens. All key national security roles that might be especially important will be restricted to those with US citizenship, which is required to obtain a security clearance.
This may mean that those who lack US citizenship will want to avoid pursuing roles that require it. Alternatively, they could plan to move to the US and pursue the long process of becoming a citizen. For more details on immigration pathways and the types of policy work available to non-citizens, see this post on working in US policy as a foreign national. Consider also entering the annual diversity visa lottery if you’re from an eligible country; it’s low effort, and if you’re lucky, you could win a US green card.
Technical career capital
Technical experience in machine learning, AI hardware, and related fields can be a valuable asset for an AI governance career. So it will be very helpful if you’ve studied a relevant subject at the undergraduate or graduate level, or have completed a particularly productive course of independent study.
We have a guide to technical AI safety careers, which explains how to learn the basics of machine learning.
Working at an AI company or lab in technical roles, or other companies that use advanced AI systems and hardware, may also provide significant career capital in AI policy paths. Read our career review discussing the pros and cons of working at a top AI company.
We also have a separate career review on how becoming an expert in AI hardware could be very valuable in governance work.
Many politicians and policymakers are generalists, as their roles require them to work in many different subject areas and on different types of problems. This means they’ll need to rely on expert knowledge when crafting and implementing policy on AI technology that they don’t fully understand. So if you can provide them this information, especially if you’re skilled at communicating it clearly, you can potentially fill influential roles.
Some people who were initially interested in pursuing a technical AI safety career, but who have since lost interest in that path or found more promising policy opportunities, may be able to pivot effectively into a policy-oriented career.
It is common for people with STEM backgrounds to enter and succeed in US policy careers. People with technical credentials that they may regard as fairly modest — such as a computer science bachelor’s degree or a master’s in machine learning — often find their knowledge is highly valued in Washington, DC.
Most DC jobs don’t have specific degree requirements, so you don’t need to have a policy degree to work in DC. Roles specifically addressing science and technology policy are particularly well-suited for people with technical backgrounds, and people hiring for these roles will value advanced credentials like a master’s or, even better, a terminal degree like a PhD or MD.
There are many fellowship programmes specifically aiming to support people with STEM backgrounds to enter policy careers; some are listed below.
Policy work won’t be right for everybody — many technical experts may not have the right disposition or skills. People in policy paths often benefit from strong writing and social skills, as well as being comfortable navigating bureaucracies and working with people holding very different motivations and worldviews.
Other specific forms of career capital
There are other ways to gain useful career capital that could be applied in this career path.
- If you have or gain great communication skills as, say, a journalist or an activist, these skills could be very useful in advocacy and lobbying around AI governance.
- Since advocacy around AI issues is still in its early stages, the field will likely need people with experience advocating in other important cause areas to share their knowledge and skills.
- Academics with relevant skill sets are sometimes brought into government for limited stints to serve as advisors in agencies such as the US Office of Science and Technology Policy. This may or may not be the foundation of a longer career in government, but it should give an academic deeper insight into policy and politics.
- You can work at an AI company or lab in non-technical roles, gaining a deeper familiarity with the technology, the business, and the culture.
- You could work on political campaigns and get involved in party politics. This is one way to get involved in legislation, learn about policy, and help impactful lawmakers, and you can also potentially help shape the discourse around AI governance. Note, though, that there are downsides to potentially polarising public opinion around AI policy (discussed more below), and entering party politics may limit your potential for impact whenever your party doesn’t hold power.
- You could even try to become an elected official yourself, though this is very competitive. If you take this route, make sure you find trustworthy and informed advisors to help you build expertise in AI, since politicians have many other responsibilities and can’t focus as much on any particular issue.
- You can focus on developing specific skill sets that might be valuable in AI governance, such as information security, intelligence work, diplomacy with China, etc.
- Other skills: Organisational, entrepreneurial, management, diplomatic, and bureaucratic skills will also likely prove highly valuable in this career path. There may be new auditing agencies to set up or policy regimes to implement. Someone who has worked at high levels in other high-stakes industries, started an influential company, or coordinated complicated negotiations between various groups, would bring important skills to the table.
Want one-on-one advice on pursuing this path?
Because this is one of our priority paths, if you think this path might be a great option for you, we’d be especially excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities.
Where can this kind of work be done?
Since successful AI governance will require work from governments, industry, and other parties, there will be many potential jobs and places to work for people in this path. The landscape will likely shift over time, so if you’re just starting out on this path, the places that seem most important now might be different by the time you’re ready to use your career capital to make progress on the issue.
Within the US government, for instance, it’s not clear which bodies will be most impactful when it comes to AI policy in five years. It will likely depend on choices that are made in the meantime.
That said, to help you orient, it seems useful to share our understanding of which parts of the government are generally influential in technology governance and most involved right now. Gaining AI-related experience in government should still serve you well if you later want to move into a more impactful AI-related role once the highest-impact areas to work in are clearer.
We’ll also give our current sense of important actors outside government where you might be able to build career capital and potentially have a big impact.
Note that this list has by far the most detail about places to work within the US government. We would like to expand it to include more options over time. (Note: the fact that an option isn’t on this list shouldn’t be taken to mean we recommend against it or even that it would necessarily be less impactful than the places listed.)
We have more detail on other options in separate (and older) career reviews, including the following:
- China-related AI safety and governance paths
- Party politics (UK-focused)
- Policy-oriented government jobs (UK-focused)
Here are some of the places where someone could do promising work or gain valuable career capital:
Congress
In Congress, you can either work directly for lawmakers themselves or as staff on legislative committees. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re also more competitive. If you don’t have much experience, you could start out in an entry-level job staffing a lawmaker and later try to transition to staffing a committee.
Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.
House of Representatives
- House Committee on Energy and Commerce
- House Judiciary Committee
- House Committee on Space, Science, and Technology
- House Committee on Appropriations
- House Armed Services Committee
- House Committee on Foreign Affairs
- House Permanent Select Committee on Intelligence
Senate
- Senate Committee on Commerce, Science, and Transportation
- Senate Judiciary Committee
- Senate Committee on Foreign Relations
- Senate Committee on Homeland Security and Government Affairs
- Senate Committee on Appropriations
- Senate Committee on Armed Services
- Senate Select Committee on Intelligence
- Senate Committee on Energy & Natural Resources
- Senate Committee on Banking, Housing, and Urban Affairs
The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.
Executive branch agencies
In general, we don’t recommend taking low-ranking jobs within the executive branch for this path, because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or a relevant graduate degree, which can give you the opportunity to start with more seniority.
The influence of different agencies over AI regulation may shift over time. For example, in late 2023, the federal government announced the creation of the US Artificial Intelligence Safety Institute, which may be a particularly promising place to work.
Whichever agency may be most influential in the future, it will be useful to accrue career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.
We have a lot of uncertainty about this topic, but here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:
- Executive Office of the President (EOP)
- Office of Management and Budget (OMB)
- National Security Council (NSC)
- Office of Science and Technology Policy (OSTP)
- Department of State
- Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
- Bureau of Cyberspace and Digital Policy (CDP)
- Bureau of Arms Control, Verification and Compliance (AVC)
- Office of Emerging Security Challenges (ESC)
- Federal Trade Commission
- Department of Defense (DOD)
- Chief Digital and Artificial Intelligence Office (CDAO)
- Emerging Capabilities Policy Office
- Defense Advanced Research Projects Agency (DARPA)
- Defense Technology Security Administration (DTSA)
- Intelligence Community (IC)
- Intelligence Advanced Research Projects Activity (IARPA)
- National Security Agency (NSA)
- Science advisor roles within the various agencies that make up the intelligence community
- Department of Commerce (DOC)
- The Bureau of Industry and Security (BIS)
- The National Institute of Standards and Technology (NIST)
- The US Artificial Intelligence Safety Institute
- CHIPS Program Office
- Department of Energy (DOE)
- Artificial Intelligence and Technology Office (AITO)
- Advanced Scientific Computing Research (ASCR) Program Office
- National Science Foundation (NSF)
- Directorate for Computer and Information Science and Engineering (CISE)
- Directorate for Technology, Innovation and Partnerships (TIP)
- Cybersecurity and Infrastructure Security Agency (CISA)
Readers can find listings for roles in these departments and agencies at the federal government’s job board, USAJOBS; a more curated list of openings for potentially high-impact roles and career capital is on the 80,000 Hours job board.
We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through, many people competing for places, and initially you have to spend all of your time doing work unrelated to AI.
However, existing military experience can be valuable career capital for other important roles in government, particularly national security positions. This route is likely most promising for military personnel who have attended an elite military academy, such as West Point, or for commissioned officers at rank O-3 or above.
Policy fellowships
Policy fellowships are among the best entryways into policy work. They offer many benefits, like first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.
Think tanks and research nonprofits
- Center for Security and Emerging Technology (CSET)
- Center for a New American Security
- RAND Corporation
- The MITRE Corporation
- Brookings Institution
- Carnegie Endowment for International Peace
- Center for Strategic and International Studies (CSIS)
- Federation of American Scientists (FAS)
- Alignment Research Center
- Open Philanthropy8
- Rethink Priorities
- Epoch AI
- Centre for the Governance of AI (GovAI)
- Center for AI Safety (CAIS)
- Legal Priorities Project
- Apollo Research
- Centre for Long-Term Resilience
- AI Impacts
- Johns Hopkins Applied Physics Lab
AI companies
- Anthropic is an AI safety company working on building interpretable and safe AI systems. They focus on empirical AI safety research. Anthropic cofounders Daniela and Dario Amodei gave an interview about the lab on the Future of Life Institute podcast. On our podcast, we spoke to Chris Olah, who leads Anthropic’s research into interpretability, and Nova DasSarma, who works on systems infrastructure at Anthropic.
- Google DeepMind is probably the largest and most well-known research group developing artificial general intelligence, and is famous for its work creating AlphaGo, AlphaZero, and AlphaFold. It is not principally focused on safety, but it has two teams focused on AI safety: the Scalable Alignment Team, which works on aligning existing state-of-the-art systems, and the Alignment Team, which pursues research bets for aligning future systems.
- OpenAI, founded in 2015, is a company that is trying to build artificial general intelligence that is safe and benefits all of humanity. OpenAI is well known for its language models like GPT-4. Like DeepMind, it is not principally focused on safety, but has a safety team and a governance team. Jan Leike (head of the alignment team) appeared on the 80,000 Hours Podcast to discuss how he thinks about AI alignment.
(Read our career review discussing the pros and cons of working at a top AI company.)
International organisations
- Organisation for Economic Co-operation and Development (OECD)
- International Atomic Energy Agency (IAEA)
- International Telecommunication Union (ITU)
- International Organization for Standardization (ISO)
- European Union institutions (e.g. European Commission)
- Simon Institute for Longterm Governance
Our job board features opportunities in AI safety and policy.
How this career path can go wrong
Doing harm
As we discuss in an article on accidental harm, there are many ways to set back a new field that you’re working in when you’re trying to do good, and this could mean your impact is negative rather than positive. (You may also want to read our article on harmful careers.)
There’s a lot of potential to inadvertently cause harm in the emerging field of AI governance. We discussed some possibilities in the section on advocacy and lobbying. Some other possibilities include:
- Pushing for a given policy to the detriment of a superior policy
- Communicating about the risks of AI in a way that ratchets up geopolitical tensions
- Enacting a policy that has the opposite impact of its intended effect
- Setting policy precedents that could be exploited by dangerous actors down the line
- Funding projects in AI that turn out to be dangerous
- Sending the message, implicitly or explicitly, that the risks are being managed when they aren’t or that they’re lower than they in fact are
- Suppressing technology that would actually be extremely beneficial for society
We have to act with incomplete information, so it may never be entirely clear when or whether people in AI governance are falling into these traps. Still, being aware of these potential ways of causing harm can help you stay alert to them, and you should remain open to changing course if you find evidence that your actions may be doing damage.
And we recommend keeping in mind the following pieces of general guidance from our article on accidental harm:
- Ideally, eliminate courses of action that might have a big negative impact.
- Don’t be a naive optimiser.
- Have a degree of humility.
- Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom.
- Follow cooperative norms.
- Match your capabilities to your project and influence.
- Avoid hard-to-reverse actions.
Burning out
We think this work is exceptionally pressing and valuable, so we encourage our readers who might be interested to test their fit for governance work. But going into government, in particular, can be difficult. Some people we’ve advised have gone into policy roles with the hope of having an impact, only to burn out and move on.
At the same time, many policy practitioners find their work very meaningful, interesting, and varied.
Some roles in government may be especially challenging for the following reasons:
- The work can be very fast-paced, involving relatively high stress and long hours. This is particularly true in Congress and senior executive branch positions and much less so in think tanks or junior agency roles.
- It can take a long time to get into positions with much autonomy or decision-making authority.
- Progress on the issues you care about can be slow, and you often have to work on other priorities. Congressional staffers in particular typically have very broad policy portfolios.
- Work within bureaucracies faces many limitations, which can be frustrating.
- It can be demotivating to work with people who don’t share your values. Note, though, that policy work can select for altruistic people, even if they have different beliefs about how to do good.
- The work isn’t typically well-paid relative to comparable positions outside of government.
So we recommend speaking to people in the kinds of positions you might aim to have in order to get a sense of whether the career path would be right for you. And if you do choose to pursue it, look out for signs that the work may be having a negative effect on you and seek support from people who understand what you care about.
If you end up wanting or needing to leave and transition into a new path, that’s not necessarily a loss or a reason for regret. You will likely make important connections and learn a lot of useful information and skills. This career capital can be useful as you transition into another role, perhaps pursuing a complementary approach to AI governance.
What the increased attention on AI means
We’ve been concerned about risks posed by AI for years. Based on the arguments that this technology could potentially cause a global catastrophe, and otherwise have a dramatic impact on future generations, we’ve advised many people to work to mitigate the risks.
The arguments for the risk aren’t completely conclusive, in our view. But they are worth taking seriously, and given that few others in the world seemed to be devoting much time even to figuring out how big the threat was or how to mitigate it (while progress in making AI systems more powerful was accelerating), we concluded it was worth ranking among our top priorities.
Now that there’s increased attention on AI, some might conclude that it’s less neglected and thus less pressing to work on. However, the increased attention on AI also makes many interventions potentially more tractable than they had been previously, as policymakers and others are more open to the idea of crafting AI regulations.
And while more attention is now being paid to AI, it’s not clear it will be focused on the most important risks. So there’s likely still a lot of room for important and pressing work positively shaping the development of AI policy.
Read next
If you’re interested in this career path, we recommend checking out some of the following articles next.
- US policy master’s degrees: These degrees are highly valuable for those hoping to take on important roles in the US federal government.
- Working in US AI policy: The US government is likely to be a key actor in how advanced AI is developed and used in society, whether directly or indirectly.
- Working at a top AI company: Working at a leading AI company is an important career option to consider, but the impact of any given role is complex to assess.
Learn more
Top recommendations
- AI Governance Course – AGI Safety Fundamentals from BlueDot Impact
- Podcast: Tantum Collins on what he’s learned as an AI policy insider
- US AI policy resources, think tanks, fellowships, and more
Further recommendations
- Article: Working in US AI policy
- Podcast: Tom Kalil on how to have a big impact in government & huge organisations, based on 16 years’ experience in the White House
- Podcast: Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk
- Podcast: Nick Joseph on whether Anthropic’s AI safety policy is up to the task
- Podcast: Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy
- Podcast: Carl Shulman on the economy and national security and government and society after AGI
- Podcast: Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT
- Podcast: Lennart Heim on the compute governance era and what has to come after
- Podcast: Sella Nevo on who’s trying to steal frontier AI models, and what could they do with them
- Podcast: Nathan Labenz on the final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models
- Podcast: Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government
- Career review: China-related AI safety and governance paths
- Podcast collection: The 80,000 Hours Podcast on Artificial Intelligence
- Emerging Tech Policy Careers
- Jobs that can help with the most important century by Holden Karnofsky
- 12 tentative ideas for US AI policy by Luke Muehlhauser of Open Philanthropy
- Why and how governments should monitor AI development by Jess Whittlestone and Jack Clark
- AGI safety career advice by Richard Ngo of OpenAI
- The longtermist AI governance landscape: a basic overview on the Effective Altruism forum
- Four Battlegrounds: Power in the Age of Artificial Intelligence by Paul Scharre
- The New Fire: War, Peace, and Democracy in the Age of AI by Ben Buchanan and Andrew Imbrie
- Think tank reports, such as from CSET, CNAS, CSIS
- Government strategies, such as the White House’s 2023 US National Artificial Intelligence R&D Strategic Plan, NIST’s 2023 AI Risk Management Framework, the DOD’s 2022 Responsible AI Strategy and Implementation Pathway, and the 2021 Final Report of the National Security Commission on AI
- Lessons from the Development of the Atomic Bomb by Toby Ord
- Collection of work on ‘Should you focus on the EU if you’re interested in AI governance for longtermist/x-risk reasons?’ on the Effective Altruism Forum
Read next: Learn about other high-impact careers
Want to consider more paths? See our list of the highest-impact career paths according to our research.
Notes and references
- If you are not a United States citizen but aim to work in US policy, we think this article offers solid advice.↩
- There may be good reasons in favour of the companies cooperating to reduce risks, but there might also be legal obstacles to some forms of cooperation — such as anti-trust laws. Figuring out how companies can act responsibly while also complying with all relevant laws may be an impactful course of action.↩
- METR is advised by Holden Karnofsky, a co-founder of Open Philanthropy, which is 80,000 Hours’ largest funder.↩
- There are a few important caveats to this claim. A lot of important AI policy research and advocacy appears likely to happen in DC-based think tanks, and it can be difficult to do this work right if you lack the local context.↩
- This list is not exhaustive. There are likely many other policy approaches that would be both worthwhile and justified to pursue, but are not targeted at reducing the biggest risks.
While such policies are likely worth doing alongside policies that reduce catastrophic risks, they are not the primary focus of this article.↩
- See, for instance: Bishop, P. G. & Bloomfield, R. E. (1998). A Methodology for Safety Case Development. In: Redmill, F. & Anderson, T. (Eds.), Industrial Perspectives of Safety-critical Systems: Proceedings of the Sixth Safety-critical Systems Symposium, Birmingham 1998. London: Springer. ISBN 3540761896.↩
- For more information, see: Bernardi, Jamie, et al. “Societal Adaptation to Advanced AI.” arXiv preprint arXiv:2405.10295 (2024).↩
- Open Philanthropy is 80,000 Hours’ largest funder.↩