At 80,000 Hours we think a significant number of people should build expertise to work on United States (US) policy relevant to the long-term effects of the development and use of artificial intelligence (AI).

In this article we go into more detail on this claim, as well as the arguments for and against it.1 We also briefly outline which specific career paths to aim for and discuss which sorts of people we think might suit these roles best.

This article is based on multiple conversations with three senior US Government officials, three federal employees working on science and technology issues, three congressional staffers, and several other people who have served as advisors to government from within academia and non-profits. We also spoke with several research scientists at top AI labs and in academia, as well as relevant experts from foundations and nonprofits.

We have hired Niel Bowerman as our in-house specialist on AI policy careers. If you are a US citizen interested in pursuing a career in AI public policy, please let us know and Niel may be able to work with you to help you enter this career path.

Summary

  • The US Government is likely to be a key actor in how advanced AI is developed and used in society, whether directly or indirectly.
  • One of the main ways that AI might not yield substantial benefits to society is if there is a race to the bottom on AI safety. Governments are likely to be key actors that could contribute to an environment leading to such a race, or could actively prevent one.
  • Good scenarios seem more likely if there are more thoughtful people working in government who have expertise in AI development and are concerned about its effects on society over the long-term.
  • This is a high-risk, high-reward career option, and there is a chance that pursuing this path will result in little social impact. However, we think there are scenarios in which this work is remarkably important, and so the overall value of work on AI policy seems high.
  • We think there is room for hundreds of people to build expertise and career capital in roles that may one day allow them to work on the most relevant areas of AI policy.
  • If you’re a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)

Still in her 20s, Terah Lyons has risen to the top of the artificial intelligence (AI) policy world.

Less than two years after finishing her undergraduate degree at Harvard, she was working in the Obama White House, writing a report laying out the administration’s policies on AI. There she set up the White House Initiative on AI, leading a national public outreach campaign on AI policy.

Today, she leads the Partnership on AI, a non-profit founded by Facebook, Amazon, Google, and other tech giants. The non-profit intends to develop and share best practices on AI, and its partners range from the ACLU to Apple to the Future of Humanity Institute at Oxford University.

Terah is at the vanguard of a growing group of policy experts trying to grapple with the impact AI will have in the 21st Century.

The problem to solve

“As we consider all the positive ways AI will be used in the future, the government must also consider the ways it could be used to harm individuals and society and prepare for how to mitigate these harms.”

— Subcommittee on Information Technology, Committee on Oversight and Government Reform, US House of Representatives, September 2018

At 80,000 Hours we think that one of the most impactful things that people can work on is ensuring that the transition to a world with advanced AI technology benefits all of humanity. We do not attempt to justify this claim here. Rather we refer the reader to a soon-to-be-published article by OpenAI on the importance of, and difficulties associated with, this transition. Until that article is published, we refer the reader to our problem profile and Nick Bostrom’s ‘Superintelligence’ to make this case.

We can categorize the progress that needs to be made into two broad areas: AI safety and AI policy. We address each area in turn below.

AI safety

“[T]he failure modes of AI technologies are poorly understood. [Work to make AI more robust] is essential for the Department [of Defense] to deploy AI technologies, particularly to the tactical edge, where reliable performance is required.”

— AI Next Campaign description, DARPA, US Department of Defense, September 2018

One type of progress is on what is often referred to as AI safety research. As our profile on this problem explains, this subfield within computer science addresses the technical question of how to ensure advanced AI systems do what we want them to do without unwanted side-effects. It’s being studied by groups like Prof. Stuart Russell’s Center for Human-Compatible AI at UC Berkeley, OpenAI’s safety team, DeepMind’s safety team, Oxford University’s Future of Humanity Institute’s safety group,2 and MIRI.

The DeepMind safety team gives a good high-level overview of the work required in this field in this post. An alternative overview of the landscape is given in ‘Concrete Problems in AI Safety’ (summary). We also have a podcast with OpenAI safety researcher Paul Christiano which goes into detail on his perspective on this field.

AI policy

The other type of progress required is on what OpenAI is calling the AI policy challenge. Below we list a non-exhaustive, partially-overlapping categorization of the issues central to the AI policy challenge that concern us most:3

  • Ensuring broad sharing of the benefits from developing powerful AI systems.
  • Avoiding exacerbating military competition or conflict caused by increasingly powerful AI systems.
  • Ensuring that the groups that develop AI are working together to avoid rivalrous development and are instead developing and implementing safety features.

When we discuss AI policy here, we are referring not just to law and formal policy, but also to norms shared between actors, best practice, private contracts, and so on.

We do not go into detail on the AI policy challenge here, but will instead refer the reader to OpenAI’s article on this topic when it is published.

At 80,000 Hours, we think that ensuring that advanced AI benefits all humanity requires substantial progress on both AI safety and AI policy, particularly work with a longer-term perspective that considers more advanced AI systems. Our impression is that the two topics are similarly important: both seem difficult to solve, and it looks like we will need to solve both in order to see beneficial outcomes from the deployment of advanced AI systems. But because AI safety is the older field, more people currently work on it than on AI policy. We have written about AI safety before, and in the remainder of this article we focus on AI policy.

Advanced AI as a coordination problem

“We have a coordination challenge, and need to find a focal point… that allows us to work constructively together to wrestle these problems tomorrow — with more information, and when our security isn’t in jeopardy.”

— Brendan McCord, Joint Artificial Intelligence Center, US Department of Defense, speaking at The Aspen Institute in November 2018

Like many other modern problems, the AI policy challenge constitutes a coordination problem.4 One element of this coordination problem is that the perceived rewards from accelerating the development of AI capabilities may create a race to the bottom: an incentive to cut research into AI safety and instead focus on increasing AI capabilities in order to get ahead in a perceived race.

Racing in this way may be counterproductive even from actors’ self-interest.5 Additionally, perceptions or misperceptions of a race could exacerbate rivalrous development,6 as the nuclear arms race did during the cold war, potentially even leading to conflict. In such a scenario, a lack of coordination risks a worst-case outcome for all actors, while a coordinated response in which parties credibly pre-commit to the broad sharing of benefits could allow a good outcome for all.7
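The strategic structure described above resembles a prisoner’s dilemma, and can be made concrete with a toy payoff matrix. The numbers below are purely illustrative assumptions of ours, not drawn from any source:

```python
# Illustrative only: a two-actor "safety race" modeled as a prisoner's
# dilemma. Payoff numbers are invented for illustration.
# Each actor chooses to "invest in safety" or "race" (cut safety work).
payoffs = {
    # (A's choice, B's choice): (A's payoff, B's payoff)
    ("safe", "safe"): (3, 3),   # coordinated outcome: good for both
    ("safe", "race"): (0, 4),   # the racer gets ahead; the safety investor loses
    ("race", "safe"): (4, 0),
    ("race", "race"): (1, 1),   # race to the bottom: worst collective outcome
}

# Without coordination, "race" strictly dominates "safe" for each actor...
for other in ("safe", "race"):
    assert payoffs[("race", other)][0] > payoffs[("safe", other)][0]

# ...yet mutual racing leaves both worse off than mutual safety investment.
assert payoffs[("race", "race")][0] < payoffs[("safe", "safe")][0]
print("Racing dominates individually, but mutual racing is collectively worse.")
```

Under these (invented) payoffs, each actor is individually better off racing whatever the other does, yet mutual racing is worse for both than mutual investment in safety, which is why credible pre-commitments and coordination mechanisms matter.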

Solving a coordination problem of this nature would require coordination between a number of key players. It is difficult to know which actors might be involved in the development of advanced AI systems, but plausible candidates include leading AI development labs and states.

Definitions: artificial general intelligence and other types of advanced AI system

Artificial general intelligence (AGI) is a hypothetical highly autonomous system that we define as being able to outperform humans at most economically valuable work.8 This term is often used in contrast to narrow AI, which can outperform humans at a specific intellectual task, such as playing the board game Go, but cannot perform outside its area of expertise, for example if the algorithm were used to control a robot attempting to set a table.

Both AGI and narrow AI lie on a spectrum, and we use the term advanced AI system as a vaguer, broader term that includes both AGI and future narrow AI systems that are less capable than AGI but still substantially more capable than AI systems in existence today.

Key actors

At the moment most cutting-edge AI R&D is done by private, non-state actors. Leading AI labs that are working towards developing AGI include DeepMind and OpenAI.

Researchers at DeepMind and OpenAI are already studying the AI policy challenge, and conversations on how to make progress on these issues are happening via industry groups such as Partnership on AI where Terah Lyons works.

Governments often have a mandate to address pressing social problems, and so tend to invest less in long-term, more speculative issues. With AI, the combination of uncertainty and the potentially rapid rate of technological advancement means that long-term effects might become decision-relevant soon. However, we have not seen evidence of much engagement by governments with more speculative, long-term risks. We discuss the US Government’s relative lack of engagement with advanced AI systems in a later section.

There are a number of ways to improve coordination between the actors that may be involved in this challenge. One promising option, given China’s ambitions to lead the world in AI by 2030, is becoming a “China specialist” with a focus on AI; we cover this path elsewhere.

In this article, we address what we think is one of the most important and neglected paths: helping develop policies relevant to the development and societal implementation of advanced AI systems, with a focus on the US government.

This work could involve researching strategies that could help increase the chances of good outcomes, or helping to formulate or implement government policy around existing promising strategies, or some combination of the two.

We think that for our readers who are US citizens, an AI public policy career may be one of the most impactful career opportunities currently available.

What do we mean by US AI public policy careers?

We are defining US AI public policy careers as any career path aimed primarily at improving the US government’s eventual engagement with, and response to, AI.

Which parts of AI policy do we think are most important to work on?

Within the category of US AI public policy careers, we are particularly interested in careers aimed at improving the eventual government response to, and engagement with, advanced AI systems, as we think this is likely to include the highest-impact opportunities within the field of AI public policy. This is because more powerful systems are likely to have more societal impact, and the worst-case scenarios are more likely to be catastrophic with advanced AI systems.

While advanced AI systems will likely have outsized impact, decisions made today could be key to shaping the environment in which they are developed and deployed. There are likely to be path dependencies and feedback loops in how people think about and respond to current AI issues that will affect the eventual development and deployment of advanced AI systems.9

Relatedly, working on near-term issues today lets us learn, and build trust and collaboration between key actors, before tackling the trickier policy challenges associated with advanced AI systems. Current issues in AI also provide opportunities to build expertise, influence, relationships, and career capital.

Overall we are somewhat uncertain as to which are the most impactful areas within AI policy to work on. However, to be clear, this does not mean that we think people should go and work on just about any area of AI policy. Rather we recommend trying hard to figure out what is and isn’t going to have leverage over the highest-stakes developments, and we think that some guesses are more plausible than others based on the current state of knowledge.

As we discuss later, one way of thinking about contributing to AI policy is that the most valuable contributions may be in spotting policy traps, path dependencies, or feedback loops before they become overt,10 because when they are overt it is often too late.

When thinking about where within AI policy to work, you can probably develop career capital and insights in a variety of areas, and so we think you should primarily be asking “which area is most likely to contain path-dependencies that might be important to the development and societal integration of advanced AI systems?”

Based on our conversations with experts, as well as the above heuristic, here are some of the areas of AI policy that we think are likely to be particularly impactful to work on now:

  • Issues related to AI, global cooperation and diplomacy, such as developing research collaborations and reducing risks from races for AI capabilities.
  • Issues relevant to AI race dynamics, such as foreign investment controls on AI startups, visa restrictions on AI talent, and government spending on AI R&D.
  • The US government’s approach to AI safety, such as the verification and validation agenda, funding of AI safety research, and the use of AI in critical systems, particularly in the military and intelligence communities.

Policy careers involve moving between different types of roles and institutions

US AI public policy careers could involve moving between a number of different types of institutions, from industry labs to think tanks to advocacy organizations to academia. They may also involve working for the US government directly, perhaps in the Department of Defense (DoD), the intelligence community, White House Office of Science and Technology Policy (OSTP), Department of State, or elsewhere. We go into more depth about which roles to aim for later.

These careers don’t necessarily involve working on AI in the near term, but could instead involve developing skills and expertise that would be relevant to AI roles in the future. For example, one might do graduate study, or work at DoD in non-AI roles, with the aim of building the experience required to excel in AI roles later in one’s career.

The aim of these roles is to build up expertise relevant to AI public policy, and then contribute to solving the AI policy challenge described earlier. Eventually this will likely involve activities such as developing and implementing coordination mechanisms, and brokering verifiable agreements between key players that may develop advanced AI systems.11

There is a spectrum of activities from research to implementation

Broadly, we can categorize work on AI policy as falling on a spectrum from research to implementation.

By research we mean identifying potential problems and developing solutions to the problems posed by the development and deployment of advanced AI systems. By implementation we mean helping to make sure existing policy proposals are successfully put into practice. A more fine-grained description of the spectrum between research and implementation is given in this footnote.12

At the research end of the spectrum are people like Prof. Allan Dafoe, who leads the Center for the Governance of AI at Oxford University’s Future of Humanity Institute, doing international relations research on the AI policy challenge. Researchers tend to focus on higher-level questions about how key actors should respond.

At the implementation end of the spectrum are people who translate high-level academic policy papers into concrete proposals for how policy and laws should be changed. It also includes the people responsible for getting those proposals passed, and for making sure policy changes are actually implemented and enforced once they’re agreed on. Terah Lyons arguably fit into this category when she worked at OSTP, developing policy on AI within the US Government.

In practice, many roles involve some of both research and implementation, and we anticipate that many US AI public policy careers will involve moving between roles in a range of positions on this spectrum.

Why pursue this path?

At 80,000 Hours, we currently rank pursuing an AI public policy career as one of our few top-recommended priority paths.

We think US government AI public policy roles both have high potential for impact, and build up some of the most in-demand skills in the community. In both the 2017 and 2018 80,000 Hours’ talent surveys, leaders of organizations in the broader effective altruism (EA) community ranked government and policy expertise as the skill most needed by the EA community as a whole.

Why do we rank US AI public policy careers so highly? In brief, because we think helping the US Government navigate the transition to a world with advanced AI systems is both important and neglected. Below we give four reasons why we think US AI public policy roles are more impactful than many people might expect, and we go into each in more detail in the sections that follow:

  1. The US government is a powerful actor that may play a key role in the development and deployment of advanced AI systems.
  2. Your chances of reaching a government role in which you can have a large positive impact on the world are probably high enough that, given government’s huge influence, the expected value of your work is likely to be substantial.
  3. There are very few people with relevant expertise working on issues related to advanced AI systems in the US Government, and the government is somewhat keen to hire, so these roles are unusually high-impact and offer plenty of opportunities to advance.
  4. Working for the US government in an influential policy role gives you many options for what to do afterwards.

Because of the large uncertainties regarding career decisions in this domain and our relative lack of experience with government careers, there is a chance that 80,000 Hours’ recommendations here are not sound advice. Nonetheless, they represent our current best guess on this potentially highly impactful career path, and we will endeavor to express our uncertainty whenever we make recommendations.

1. The US government is a powerful actor that may play a key role in the development and deployment of advanced AI systems

The US Government is likely to be involved with the development and deployment of advanced AI systems

AI and society

Whenever technology causes rapid societal change, it is likely that government will get involved in developing or governing the use of that technology.

AI is a general purpose technology with wide-ranging applications. In the coming decades we are likely to see AI revolutionizing sectors across society, from transport to healthcare, and from education to manufacturing.

Advanced AI systems could have myriad societal impacts, from substantially increasing economic growth and finding cures for many illnesses to, if poorly managed, mass unemployment and extreme wealth inequality.13 As AI systems become more advanced and more societal issues arise, governments may become increasingly involved.14

Currently AI development happens primarily within private companies, and the ultimate involvement of governments, whether direct or indirect, remains uncertain.

AI and national security

AI is also likely to transform intelligence gathering, warfighting and the domain of national security more broadly. The US military may get involved in the development and deployment of powerful AI systems for this reason.

The US military has kept its superpower status over the past half-century in large part through maintaining technological superiority over its adversaries. The US is now looking to AI to help it maintain a technological edge.

In the national security domain, AI systems are already being used in applications ranging from cybersecurity and military logistics to image processing. AI systems are being tested for use in unmanned aerial vehicles and undersea warfare.15 An AI system can already beat a retired professional fighter pilot in simulated dogfights.

We have argued above that the US national security community is likely to get involved in the development and deployment of advanced AI systems because AI is likely to be a powerful technology with wide-ranging applications. Below we argue that the US could end up influencing the long-term trajectory of AI simply because it is an involved powerful actor.

The US Government is a powerful actor

The US Government is well resourced

The US Government’s federal budget is approximately $4 trillion per year, roughly equal to the combined annual revenue of the world’s 14 largest companies by revenue.

The US’s military expenditure is larger than that of the next 7-10 countries combined, depending on how you count. To give a sense of scale, the Department of Defense (DoD) employs almost 3 million people, half a million of whom are on active duty overseas at any time. DoD also has a budget of over $700 billion per year. It is worth noting, however, that while this funding is substantial, government budgets are much less flexible than corporate or nonprofit budgets.

DoD has been ramping up its spending on AI technologies, spending $2.4 billion in 2017 on AI-related projects, a figure that has been growing by roughly 15% per annum since 2012.16 However, most of this funding appears to be going into development and implementation rather than research; we have not looked into the exact breakdown.

Despite these large budgets, the level of technical AI expertise at leading AI labs is substantially higher than that within the US military.

The US Government has unique abilities

In addition to being well resourced, the US Government can wield forms of influence unavailable to other actors. It has many tools at its disposal, including taxation and tax breaks, regulation, antitrust action, requiring companies to prioritize government contracts, and the many other levers of influence available to it.

The use of these levers of influence is of course kept in check by the three branches of Government, and in turn by public opinion and the many other forces that influence political decisions.

Nonetheless, because of its resources, abilities, and strong incentives to engage with the issue, the US Government is likely to be relevant, whether directly or indirectly, to the development and deployment of advanced AI systems.

2. Individuals can have a large positive impact in government

Relevant experts on positively shaping AI have told us that they think US AI public policy careers are likely to have high expected value. People who understand government careers have argued that they expect ambitious people pursuing this path to move into roles where they can have a large positive impact on the world substantially faster than the typical rate of advancement in government. That said, it has also been suggested to us that advancing up the career ladder generally takes longer in government than elsewhere, because government is often less meritocratic than industry or academia.17

Can individual federal employees influence government?

Most mid-level and senior federal employees that we spoke with were able to give us an example of how they had a large positive impact through their role. Some of their examples involved starting impactful new programs worth tens of millions of dollars, saving Americans hundreds of millions, or redirecting billions to something potentially more impactful. We have not been able to vet these stories, but they persuaded us that, at the very least, mid-level and senior federal employees feel as though they can sometimes have a large positive influence on government.

One thing we found interesting was that federal employees repeatedly argued that it was relatively rare for them to be responsible for remarkably good things happening, and instead more frequently they thought they had positively influenced government by preventing unintentionally harmful policies from being adopted.18 We expect this pattern to repeat itself for people working in the domain of AI policy. This phenomenon makes it easier to have a positive influence than it would be if you could only have impact by actively making positive things happen (which is what people often think of when they imagine having an impact in government).

A different type of argument that federal employees can influence government comes from looking at elected officials. One might naively assume that elected officials make the decisions and federal employees simply carry out the wishes of the elected officials. We do not think this is how government works in practice. Elected officials are necessarily generalists so they rely heavily on specialists within government not just to implement things, but also to heavily inform their priorities, strategies and overall policy goals.

Yet another line of argument that individual federal employees can impact government comes from looking at the budgets controlled by different types of people within government.

How large are the budgets that government officials oversee?

This is only a very rough heuristic, but by dividing the $1.2 trillion discretionary federal budget by the number of people at different levels of seniority, we can estimate the average budget that different subsets of people in government oversee.

| Subset of people | Approximate number | Budget per person per year within this subset |
| --- | --- | --- |
| All federal employees (except US Postal Service workers) | 2.1M | $600,000 |
| Federal employees working in Washington D.C. | 173,000 | $6.9M |
| Senior Executive Service and political appointees | 11,000 | $109M |
| Political appointees | 4,000 | $300M |

Note that this method gives more of an upper bound on the average influence that people in each group have over budgets. There is substantial double-counting: the political appointee, the SES employee, and the rank-and-file federal employee cannot all have complete control over the same budget. In practice each person will be heavily constrained by their superiors, and managers will often need to delegate control of parts of their budgets to their teams. We do not know how budgetary control is divided between the different levels of seniority within government, but we would guess that this method greatly overestimates the influence of junior people and underestimates that of senior people.
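The heuristic behind the table is simple division of the discretionary budget by headcount. As a quick sketch (headcounts as quoted above; the article rounds the resulting figures):

```python
# Rough upper-bound heuristic: divide the ~$1.2 trillion discretionary
# federal budget by the headcount of each group to estimate the average
# budget "overseen" per person per year. Headcounts are the approximate
# figures from the table above.
DISCRETIONARY_BUDGET = 1.2e12  # dollars per year

groups = {
    "All federal employees (except USPS)": 2_100_000,
    "Federal employees in Washington D.C.": 173_000,
    "SES and political appointees": 11_000,
    "Political appointees": 4_000,
}

for name, headcount in groups.items():
    per_person = DISCRETIONARY_BUDGET / headcount
    print(f"{name}: ~${per_person:,.0f} per person per year")
```

Note the double-counting caveat above: every dollar appears in each group’s average, so these figures cannot all represent real control simultaneously.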

Nevertheless, these figures are high enough to make it plausible that some people in government have influence at a significant scale, especially if you can be promoted into the SES or picked as a political appointee.

Additionally, much of the influence that you will have as a federal employee is probably not best thought of as influencing budgets. Governments have all sorts of abilities that do not involve spending money, some of which are described above. In the case of AI, we can imagine many important actions that would not involve much government expenditure, such as negotiating agreements to avoid a race-to-the-bottom on AI safety.

We have argued that individual federal employees can have considerable influence over government, especially in more senior roles. But what are your chances of moving into one of these senior roles?

What are your chances of moving into senior roles in government?

A very simplistic model in which only Washington D.C. employees are promoted into Senior Executive Service (SES) roles suggests that in expectation you would spend about 6% of your career in the SES. Of course this is only an average: most employees will never be promoted into the SES, while some will spend much of their career in it. Because no one spends their entire career in the SES, and based on our understanding of the demographics of our readership, we would guess that the average 80,000 Hours reader has a >6% chance of being promoted into the SES if they become a federal employee in Washington, D.C.
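The 6% figure falls out of the headcounts in the table above. A quick sketch (note that the 11,000 figure includes political appointees as well as SES members, so this slightly overstates the SES share):

```python
# Simplistic model: if SES roles are filled only from the Washington D.C.
# federal workforce, the expected share of a D.C. employee's career spent
# in the SES equals the SES share of that workforce.
ses_and_appointees = 11_000   # from the table above; includes political appointees
dc_employees = 173_000

expected_share = ses_and_appointees / dc_employees
print(f"Expected share of career in the SES: {expected_share:.1%}")  # prints "6.4%"
```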

If you have been or can go to a top law or policy school, then your chances of making it into the SES or becoming a political appointee are particularly good. We have not collected data to support this view, but it is received wisdom in DC that Yale and Harvard Law grads have better chances of being promoted into the SES and/or becoming political appointees. To illustrate this point, one person we spoke with said that if you go to Yale Law, then when people in government find out about your educational background they sometimes awkwardly joke that you will probably be their boss one day.

This fits with our impression of other political domains. In the past we have estimated the chances of people with the most predictive academic credentials being elected to Congress or the UK Parliament, and found that they are surprisingly high.19

Comparing the competitiveness of government roles against those in industry

Our perception is that government roles are generally less competitive than similarly influential roles in industry, in part because many people prefer the freedom and pay of industry to the influence that comes with roles in government.

To illustrate with an example, we can compare working in government with earning to give. Earning money in order to give it away is a competitive endeavor, because everyone would like more money. In contrast, the personal incentives to influence government budgets are far weaker, so we should expect the influence per role, as measured in dollars, to be higher in government than in earning to give. Of course government money is less flexible than personal donor dollars, but even once we factor this in, we think the effect holds.

Despite this positive outlook, elections and other political factors make career progress somewhat unpredictable, which can be tough to bear even if the expected value is high. We discuss this and various other downsides of US AI public policy careers below.

How meritocratic is career progression within government?

Civil servants we spoke with thought that while government careers are perhaps less meritocratic than those in industry or academia, they are still fairly meritocratic, and the best-suited people have better odds of rising to the top.20 Senior officials in government are often very impressive individuals, and have on occasion had very successful careers in academia or the corporate world beforehand. As mentioned elsewhere, particularly talented and hardworking individuals may be promoted faster by rising up through think tanks or academia and then moving into government.

3. There is an AI skills shortage in the US Government

AI skills are in demand in government

Senior staff within governments and top AI labs tell us that they are struggling to find experienced and qualified AI talent to employ. At every national security and technology conference that 80,000 Hours has attended, speakers have lamented the government’s lack of expertise on AI, and noted the substantial demand for such expertise within government. For example, DoD’s new Joint AI Center alone is apparently looking to hire up to 200 people.

While much of this demand is probably for machine learning engineers and other technical specialists, policy practitioners with technical AI expertise also seem to be in demand.

On the other hand, there currently seems to be less demand for technical expertise in general in the US Government than there was in previous administrations, though this may be confounded by the fact that many scientists appear reluctant to work for this administration. We expect demand for technical expertise to increase again in future administrations.

This demand for AI expertise is not limited to the Department of Defense. Because AI has so many uses and impacts, many parts of government are likely to be involved in the development of policy related to AI. For example, to date we have seen activity on AI from dozens of different agencies, offices and committees within congress and the executive branch.

In addition to roles in government, there are likely to be dozens of US AI public policy roles in industry, think tanks, non-governmental organizations, inter-governmental organizations, intelligence and defense contractors, academia, and elsewhere in the US AI policy community.

Very few people in government are thinking about advanced AI systems

There are perhaps four levels at which US AI public policy relevant to advanced AI systems is currently neglected. We outline them below:

  1. Safety and policy issues related to advanced AI systems are very neglected relative to the scale of the issue. We argue this point in more detail in our problem profile on this issue.
  2. Within issues related to advanced AI systems, the AI policy challenge described above is neglected relative to work on AI safety, despite the fact that they seem equally important to us. We currently know of fewer than a dozen people working on this issue full-time, and there is almost no published work on the topic because it is so new.21
  3. While a fair bit of work is happening within AI policy, work relevant to long-term AI public policy currently seems neglected. Additionally, there are few AI policy practitioners with a technical AI background, leaving this perspective neglected.
  4. Within work on the AI policy challenge, we think that working within key governments, such as those of the US and China, is more neglected than working in top AI labs.

There are many more roles available for talented people

Eventually we think it would be valuable for hundreds of people to build expertise and career capital in policy areas relevant to the development and deployment of advanced AI systems.

We can motivate this claim with an estimate looking at senior positions. We think it would be valuable to have at least 20 senior people with a background in and familiarity with issues related to the development and societal integration of advanced AI systems, as we think at least this many senior offices are likely to be involved. If a typical entrant has about a 10% chance of making it into a senior role, then it would be worth perhaps 200 people developing expertise and pursuing these careers.
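The back-of-the-envelope estimate above can be sketched as a one-line calculation. The 20 senior roles and 10% success rate are the rough assumptions from the text, not precise figures:

```python
# Rough Fermi estimate: how many people should enter the pipeline so that
# roughly 20 senior AI-policy roles end up filled with relevant expertise?
senior_roles_needed = 20   # assumed number of relevant senior offices
p_reach_senior = 0.10      # assumed chance a typical entrant reaches a senior role

entrants_needed = senior_roles_needed / p_reach_senior
print(int(round(entrants_needed)))  # 200
```

Halving or doubling the assumed success rate moves the answer to roughly 400 or 100, so the figure is best read as an order-of-magnitude estimate.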

Additionally we would like to see an academic field of dozens or perhaps hundreds of people studying the AI policy challenge described above, similar to the fields that have emerged developing policy related to nuclear weapons, biosecurity, or cybersecurity.

It is worth highlighting that thinking about the ideal numbers of people in an area ignores opportunity cost, and that there are diminishing returns to the number of people with expertise on these issues. If enough people enter this broad career path then additional people would become less impactful, so it might cease to be a top option on the margin, despite still being valuable. But we think we are currently a long way away from that point, and we expect that the first few dozen people to pursue US AI public policy careers are likely to be particularly impactful in expectation. So on the current margin we encourage people to consider pursuing this option if they have the right background and can develop relevant expertise.

Given that there are perhaps a dozen people currently pursuing US AI public policy careers focused on advanced AI systems, there is room for far more people to pursue this option.

Additionally, the small size of the field means you can advance faster, and play a key role in the field early on.

4. Government careers were already a good default option

We expect many of the positions in this career path to open up opportunities to have a large positive impact even in important policy areas besides AI policy:

  • There are many pressing global problems you could work on in these roles. For example, from within the national security community, you could help reduce biological global catastrophic risks, or you could work to reduce the chances of a great power war.
  • As discussed above, governments usually control large quantities of resources per decision-maker, and so if you can become one of those decision-makers then you can have outsized impact if you make good decisions.
  • Government careers have reasonable exit options into other social impact jobs. In government you learn a range of widely applicable skills such as management, stakeholder engagement, and quantitative analysis of real-world problems. Working in influential policy roles in the US Government is generally seen as relatively prestigious when applying for future jobs, though this depends on the audience and your exact role (e.g. entry-level jobs at top management consulting firms are often seen as more prestigious than entry-level policy jobs).

This means that even if a career in AI policy doesn’t work out, you’ll have plenty of opportunities to work on other key global problems within government, or to exit into other relevant social impact jobs.

As mentioned above, US AI public policy roles are not just in government. Similar arguments on career capital and impact apply to think tanks and academia (see links for our career profiles of these options); however, these options are generally more competitive. For US AI public policy roles in nonprofits, we discuss how well some of these arguments apply here. Note that none of these links discusses career capital in the context of AI.

Arguments against

Your social impact in this career path is high risk, high reward

Your chances of having a large social impact by working in AI policy are highly uncertain. You would be intentionally taking a big, low-probability bet, not just with regard to your own contribution (which is uncertain in any cause), but also with regard to the overall case that this area is promising to work in. While in expectation we think this is one of the highest-impact career paths, it may be a bad fit if you'd be demotivated by the high probability that your impact will end up very small.

In order for you to have outsized impact you would need to:

  • Be in the right place at the right time
  • Have the foresight to take the right actions in government
  • Have the right expertise, in addition to being thoughtful with good judgment; the right expertise can be somewhat difficult to predict in advance

Additionally, the following conditions would need to be met:

  • The development and societal integration of AI would need to have major potential downsides and/or upsides
  • Coordination problems would need to be solvable, but not too solvable
  • There would need to be critical junctures, path dependencies, policy traps, or strong feedback loops in the development and governance of AI systems

As we discuss in the rest of this section, all of these things happening in concert seems unlikely, so we think the median outcome is that individuals working on AI policy will have little social impact. However, in the scenarios in which all of the above and other caveats are satisfied, your impact could be very large indeed, and so we still recommend this career because of its large expected impact.

AI policy research might be intractable

It is difficult to know what actions to take now to improve the development and deployment of advanced AI systems decades in the future. The correct actions to take could be highly dependent on the hard-to-predict specifics about the future. This is one reason why we think that people with foresight who are well-calibrated when assessing uncertainty are likely to be able to make valuable contributions in this domain.

If AI policy research does turn out to be intractable, then it might be better to focus on positioning your career so that you can do AI policy research and implementation in this area in the future when it becomes more tractable. However climbing the career ladder to position yourself well for future impact can also have its challenges, as described below.

You need to be in the right place at the right time

It is difficult to predict exactly when and where key AI policy decisions might be made. To be in the relevant part of government to participate in these decisions, you need to either get lucky, or you need to predict how jurisdiction disputes between parts of government will play out and position yourself well in advance. You can mitigate this risk somewhat by becoming someone whose opinion people seek out for advice, so that you will be consulted regardless of where AI policy is made within government.

Relative to other industries, it can take a long time to climb the career ladder in government to get into positions of influence. You need to be in a position of influence at a time when AI technologies have advanced enough that key decisions on AI policy need to be made. You may be able to shortcut this career climb by rising up through academia or think tanks and then transitioning into roles that are politically appointed, however this route is extremely competitive.

Governments can be difficult to predict

Governments can be difficult to predict, in large part because there are so many moving parts in any decision. As a consequence it can be difficult to predict whether a given action is on balance positive or negative. We hope that individuals will get better at this over time as their political instincts improve. Nonetheless, it may be relatively unclear how to use any influence you gain to effect positive change in the world, with the possible exception of helping to prevent obviously bad things from happening. Additionally, we don’t know to what extent having additional concern for the long-term outcomes of AI will allow you to make better decisions than a typical talented government employee or political appointee.

International coordination problems often haven’t been solved historically

Earlier we argued that advanced AI systems pose an international coordination problem. We argued that helping solve this coordination problem could be one of the impactful things that you could work on. Unfortunately, humanity doesn’t have a good track record of solving hard international coordination problems, though this argument cuts both ways as described below.22

Climate change presents a classic example of the challenges of international coordination. Nations have discussed agreements to limit carbon emissions for nearly three decades. Despite this effort, and modest wins in the form of the Kyoto Protocol and the Paris Agreement, only a handful of countries have legally binding goals to reduce their emissions, and global carbon emissions continue to increase.

On the other hand we can point at successful efforts to limit emissions of ozone-depleting gases through the Montreal Protocol as a positive example of international coordination. The agreement took effect in 1989, causing dramatic reductions in the emissions of many ozone-destroying CFC gases, and consequently the ozone hole over the Antarctic is slowly shrinking.23 However it is arguably much easier to reduce emissions of CFCs than CO2, the greenhouse gas contributing most to global warming.24

The agreements to limit emissions of CFCs and CO2 described above involved almost every nation in the world coordinating through the UN. Additionally, within nations, almost everyone emits CO2. In general it is harder to reach agreement between all parties when there are more parties involved. Coordinating on advanced AI systems might be easier if only a few actors have the capabilities to develop or deploy the technology.25

The development of advanced AI systems could be difficult to coordinate on, because the benefits of being ahead in its development could be substantial. On the other hand AI might be easier to coordinate on than climate change if the consequences of failing to coordinate on AI looked likely to occur sooner than the worst impacts of climate change,26 or if the worst-case scenarios from developing advanced AI systems could be even worse than climate change.27

If we are unable to make progress on the international coordination issues around advanced AI systems, then this career path becomes less tractable and less appealing. However note that this argument cuts both ways, and working on this problem is also lower impact if the problem is likely to solve itself. Additionally, if a coordinated solution is necessary to avoid the worst potential risks from advanced AI, then working on the coordination problem becomes more important. From a consequentialist perspective, your contribution will be most impactful if the problem is precisely hard enough that your contribution makes a difference to whether it is solved.
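As a toy illustration, which is our own construction rather than anything from the sources above, a race to the bottom on safety has the structure of a prisoner's dilemma: each actor prefers to cut safety investment whatever the other does, yet both are worse off when both cut. The payoff numbers below are purely illustrative:

```python
# Toy 2x2 payoff matrix for a hypothetical "race to the bottom" on AI safety.
# Each player chooses to invest in safety ("safe") or cut corners ("race").
# Payoffs are illustrative only: (row player, column player).
payoffs = {
    ("safe", "safe"): (3, 3),   # both invest: good joint outcome
    ("safe", "race"): (0, 4),   # the racer gains an edge; the investor loses out
    ("race", "safe"): (4, 0),
    ("race", "race"): (1, 1),   # mutual racing: worst joint outcome
}

def best_response(opponent_action):
    # The action maximizing the row player's payoff against a fixed opponent.
    return max(["safe", "race"], key=lambda a: payoffs[(a, opponent_action)][0])

# Racing dominates even though (safe, safe) beats (race, race) for both players.
print(best_response("safe"), best_response("race"))  # race race
```

This is why coordination (an enforceable agreement that shifts the payoffs) can matter: without it, individually rational choices can produce the jointly worst outcome.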

Will you be able to influence any critical junctures associated with the development of AI?

'Critical juncture' is a term used in political science to describe a point at which path dependencies arise. Critical junctures could occur in the development and deployment of advanced AI systems for a variety of reasons, detailed in this footnote.28

If we are looking for opportunities to have a large positive impact on the development of AI then critical junctures look particularly relevant, especially if you subscribe to the long-term value thesis.29 Critical junctures seem important because they are moments when, for example, we might lock ourselves out of development trajectories that are particularly good, or into trajectories that are particularly bad. However there is no guarantee that critical junctures will exist in the development of advanced AI systems,30 and even if there are we may not be able to affect them.31

At 80,000 Hours we tend to think it is plausible but unlikely that there will be no critical junctures associated with the development and deployment of advanced AI systems.

Critical junctures seem most plausible to us when advanced AI systems of interest constitute AGI, and the development of AGI is discontinuous or unexpected. This suggests that you might want to focus your efforts more on discontinuous AGI scenarios because these are the scenarios in which you might be able to have the most positive impact.32

In addition to the risk of having little impact, there are other impact-based reasons against pursuing a US AI policy career now.

It is probably better to work in AI policy at a top AI lab

We would generally recommend taking an AI policy role at Google, Microsoft, Facebook, Intel, DeepMind or OpenAI over a similarly senior role in DC for the following reasons:

  • In the near-term leading AI labs are likely to be more relevant to the development of AI than the US Government
  • There are fewer people in senior roles at leading AI labs, so a given role likely has more impact over the development of AI
  • We think it will likely be relatively easy to move into the US Government having worked on AI policy within a top AI lab
  • We think there is a reduced risk of needing to work on less important problems to advance your career in AI labs
  • You are likely to develop more domain-specific knowledge about AI at a top AI lab than at almost any other organization

Note that these arguments point against working in government today, but they do not rule it out as an impactful place to go later in your career after working in a top AI lab, for example if the US government decides to take a more active role in the development and deployment of AI.

If you receive an offer for a policy role from a top AI lab then we recommend strongly considering it. Nonetheless, we think that roles focused more on US AI public policy are also likely to be highly impactful.

Other arguments against

  • Senior decision-makers in the US Government need to be experts on a range of issues beyond AI. In order to gain the skills required to be promoted into more senior positions you will likely spend most of your time on issues that are not directly related to AI. Some of these skills will not be transferable to new contexts if you leave, making this route provide less flexible career capital. However many of these things will likely be more relevant than one might think, helping you learn how to craft policy in different contexts, improving your political instincts, developing stakeholder engagement skills, learning how to make things happen in large bureaucracies, etc. Nonetheless, this may be an argument in favor of working in, for example, a top AI lab where you can focus your attention more on learning about AI, and then later transition into a government role where you can also focus full-time on AI.

  • Becoming a political appointee for one administration can hurt your ability to work for another administration of the opposing political party. As a result it is worth thinking through your political allegiances relatively carefully before declaring them. You can also work as an apolitical civil servant until you are senior enough to need to ‘pick a side’ to get political appointments.

  • There are also some downsides to this career that will affect your personal life, such as low salaries, which we cover below.

How to pursue a career in US AI public policy

We hope to discuss how to pursue AI public policy careers in more detail in future content. Here we briefly outline where to aim your career long-term, before listing eight initial routes into US AI public policy careers.

Where to aim long-term

Political appointees and the Senior Executive Service

Ultimately, we think that the roles that are most likely to have influence over US policy relevant to the development and societal integration of advanced AI systems are likely to be senior positions within the US Government, particularly those within the national security community broadly construed.

Senior roles in the US government broadly fall into two categories: political appointees and the senior executive service.

Political appointees are selected by the President, Vice President, or agency heads. There are approximately 4,000 political appointees in an administration.33

The Senior Executive Service (SES) forms the level just below political appointees, and serves as a link between the political appointees and the rest of the Federal (civil service) workforce. There are approximately 7,000 SES positions in an administration. Additionally, up to 10% of these SES roles may be occupied by political appointees.

Political appointees have a variety of backgrounds, and are frequently drawn from industry, academia, think tanks, retired military officers, and former federal employees.

The SES, on the other hand, is more typically made up of federal employees who have worked their way up within government.

Ultimately we think that political appointees are likely to have the most influence over US AI policy, though we think that the SES can also wield significant influence while also acting as a valuable stepping stone to political appointments.

Which parts of government are most relevant?

After talking with several people with knowledge of DC and AI, we put together this list of the parts of government that we guess are likely to be most relevant to US policy on the development and deployment of advanced AI systems:

  • Congress
    • Armed services committees
    • Intelligence committees
    • Appropriations committees
    • Commerce committees
  • White House
    • National Security Council
    • Office of Science & Technology Policy (OSTP)34
  • Department of Defense
    • Office of the Undersecretary of Defense for Policy
    • Office of the Undersecretary of Defense for Research & Engineering
    • DARPA
    • Joint AI Center (JAIC)
    • Defense Science Board
    • Defense Innovation Board
    • Defense Innovation Unit
    • Office of Net Assessment35
    • Research laboratories of each of the armed services
    • Cyber Command
    • JASON36
  • Intelligence Community
    • IARPA
    • In-Q-Tel
    • National Security Agency (NSA)
    • Science advisors within the various agencies that make up the intelligence community
  • Other agencies
    • Department of State
    • Office of Management and Budget (OMB)
    • Committee on Foreign Investment in the United States (CFIUS), within Treasury
    • National Institute of Standards and Technology (NIST), within the Department of Commerce

Other relevant actors

A variety of other organizations and actors are likely to have influence over US policy relevant to the development and societal integration of advanced AI systems. However, from talking with people in government, it seems that most outside organizations and actors are likely to be less influential than the key actors within the US Government. The other actors likely to be involved include:

  • Research organizations
    • Think tanks
    • Universities
    • Federally funded research and development centers
  • Industry
    • AI labs, such as OpenAI and Google
    • Industry groups, such as Partnership on AI
  • Media organizations
  • Advocacy organizations

In summary we think that the roles that are likely to have the biggest influence over US policy towards advanced AI systems are likely to be political appointee positions within parts of the national security community.

To get into these roles, you must build expertise, experience, and a reputation at the intersection of AI and national security, within and/or outside government. Below we list eight ways to start doing this.

Key routes in

Below we rank the early-career routes that we currently think are most promising:

  1. If you can get a position working on policy or ML research at a top AI lab, such as OpenAI or DeepMind, then this could be a valuable opportunity to skill up before moving into government in the future.

  2. We recommend applying for a prestigious fellowship that comes with a job in the US government, such as the Presidential Management Fellows for recent graduates of advanced degrees, the AAAS fellowship for people with science PhDs or engineering master’s,37 or the TechCongress fellowship for mid-career tech professionals.

  3. A postgraduate course where you can work primarily on policy questions relevant to advanced AI systems, such as the Research Scholars program or a PhD in collaboration with the Center for the Governance of AI, both at the University of Oxford’s Future of Humanity Institute.38 This will give you a good grounding in many of the concepts relevant to AI policy in the context of advanced AI systems, as well as experience doing novel research and useful career capital.

  4. Work directly on AI policy through a top DC think tank such as the Heritage Foundation,39 the Wilson Center, the Brookings Institution, or the Center for a New American Security. These roles are usually very competitive. The Belfer Center at Harvard University, and CISAC and the Hoover Institution at Stanford University, are also worth considering for their focus on the intersection of science, technology, and international affairs, despite being outside DC.

  5. For many people we think one of the best routes into the US Government is to do a prestigious graduate degree and then enter government as a federal employee. Programs that we currently recommend include:

    • A JD from Yale or Harvard Law School, or possibly another top 6 law school.
    • Security studies, public policy, or international relations at one of the top 6 policy schools.
    • A master's in machine learning at a top 10 CS department.40
    • Or possibly a master's in another relevant subject, such as S&T policy, war studies, computer science, economics, or cybersecurity, though we often recommend law, security studies, public policy, international relations, and machine learning over these options.
  6. We also recommend interning and working on the Hill as a congressional staffer, with the aim of working your way onto relevant committee staff. We have a detailed career profile on this path. We particularly recommend working on the Hill as an intern, during or after a master's degree, or straight after undergrad if you have reasons not to do a master's. We generally recommend working as a congressional staffer over a graduate-entry federal employee role, because we think you'll learn more and build better connections.

  7. Taking an entry-level role as a federal employee, ideally working on something relevant to AI. This option could rank much higher in this list if the role were more senior or more relevant to AI. For example, we would probably rank an entry-level role at OSTP working on AI at or near the top of this list, especially with an administration that prioritized S&T policy.

  8. Working as a defense contractor can also be a valuable route into government jobs. Many government agencies find it easier to hire a contractor than to hire a federal employee, so this can often be an easier way to obtain a security clearance and get a job.

The specifics of which option ranks where in this list are highly uncertain, and will depend heavily on your background and personal fit, as well as specifics of the available role. We have highlighted a selection of promising roles that you can apply for now on our job board.

One piece of advice we heard regularly was to:

  1. Identify someone who is rising up the ranks quickly and has a decent chance of becoming the most senior government official in your domain of interest
  2. Work closely with them, win their trust, and become their mentee
  3. Make yourself indispensable to them

Government officials we interviewed thought that pursuing the above strategy well, especially if you get lucky with the person you pick as a mentor, can lead to having a larger positive impact faster than pretty much any other route.

Alternative routes that we have investigated less

If you are already a tenure-track academic in a relevant subject at a prestigious US university, then there are alternative routes into AI public policy; however, we have investigated them in less detail. We would be interested to hear from you if you fit this description, and we can put you in touch with other academics in a similar position.

We are also interested in investigating working on political campaigns as a potential route into AI public policy careers in the US Government. We have not investigated this route in detail, but currently think that it may be promising.

Routes we do not recommend

We do not currently recommend attempting to join the US Government via the US military if you are aiming for a long-run career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you would have to spend all of your time on work unrelated to AI. We would consider this route more viable for military personnel who have attended an elite military academy such as West Point, or for commissioned officers at rank O-3 or above.

Be careful to avoid doing harm

We think that working on US AI policy has many opportunities for substantial impact, however for similar reasons there are also many opportunities to do substantial harm. We have a much longer article on this topic which we recommend reading here.

One of the most robust ways to avoid doing harm is to be thoughtful and honest. Scheming of any sort can be harmful, especially in uncertain environments such as AI policy. This relates to a more general point: you shouldn't be motivated by identity-based loyalties, but rather by a genuine desire to improve the long-run future.

There are more obvious ways of doing harm, such as pushing for bad policies, but below we address three other risks that are particularly salient in the context of US AI policy.

If you are associated with a niche community such as the effective altruism community, then any negative actions that you take in government can end up reflecting poorly on the broader community and even harming others from that community who might want to work with government. For this reason we think it is particularly important to be trustworthy, to have integrity, and to be very careful about inadvertently or intentionally burning community capital. This article discusses this issue in more detail.

Information security is important, and holding a security clearance and being trusted by stakeholders such as top AI labs requires being extremely careful with confidential information. You will need to track where each piece of confidential information you receive came from, and not reveal it to people outside its intended audience. This is a skill important enough that it is worth practicing now.

Finally there is a risk that some types of government involvement in AI could exacerbate a race to the bottom on AI safety as discussed here. If this were true, then one would want to be careful to avoid inadvertently increasing the chances of such a race to the bottom.

Fit: Who is especially well placed to pursue this career?

In short, we recommend that US citizens interested in building experience and technical expertise in AI policy consider this option. If you can get a policy job at a top AI lab, then we recommend doing that instead and potentially transitioning into US policymaking in the future. If you have, or can get, a PhD from a top 10 computer science department, then we recommend considering working on a safety team at a top lab.

What predicts fit?

Below, we outline some of the key features of this work, and explain what they suggest about predicting personal fit.

Have relevant expertise in an area relevant to AI policy

People who are a good fit for US AI policy careers will have developed, or be developing, expertise in a topic relevant to AI policy so that they can make valuable contributions to the field. At this stage we are not sure what areas will be most valuable to develop expertise in, but some plausible options include: international relations, security studies, public policy, machine learning, AI safety, economics, S&T policy, game theory, law, diplomacy, industrial integrated circuit design and manufacturing, and cybersecurity.

Many roles in US AI public policy careers require excelling at research-related activities. Research activities generally require a combination of intelligence, conscientiousness and curiosity, among other traits. For a longer discussion of personal fit in research roles, see our career profile on academic research.

The importance of relationships

As Ed Feulner, founder and former president of the Heritage Foundation, says, “people are policy”. He argues that your ability to make things happen is determined at least as much by who you know as by what you know. Successful policy practitioners tend to have extensive rolodexes of friends and acquaintances in the DC community whom they can collaborate with when a relevant issue comes up.

One US Government employee told us that the ability to appear extroverted when first meeting someone was key to building such a strong network. An academic mentioned how in DC people are constantly looking for ways to help each other and add value, much more than in other communities. This person argued that policymaking requires buy-in from so many stakeholders that cooperation is key to getting things done, probably even more so than in other industries.

We have some practical advice on building and maintaining a strong network of relationships with others here.

Conventional credentials matter

Perhaps because the US Government and policymaking world is so large, people in the policy community will frequently have to make snap judgments about you. As a result, the DC world tends to rely heavily on conventional credentials, potentially more than other industries. This is especially true when policymakers have to make judgments about people their connections are working with, whom they have never interacted with. People from a few courses at a handful of elite universities are heavily overrepresented in the upper echelons of the US Government.41

As a result, if you have already graduated from a prestigious school, then you may be at a natural advantage. If you haven’t, then doing a top law, policy, international relations, or machine learning master’s can be a useful first step before entering government. Even if you don’t have the right credentials now, it is quite possible to get them either before or while working within the US Government.

For similar reasons, federal employees tend to stick to relatively conventional styles of dress and behavior, so if you are not willing to dress in a suit every day, this might not be a good career option for you, and likewise with other unconventional lifestyle choices.

Governments are not nimble

It can be frustrating to work within a system as politically sensitive and bureaucratic as the US policymaking process. This is particularly noticeable in comparison with, say, a start-up.

In US policymaking, you can occasionally have substantial influence over major decisions. However, there are so many stakeholders involved in any decision that most of the time your work will not have much effect on the outcome. This is one of the ways in which US AI public policy is a high-risk, high-reward career path.

Former President Barack Obama makes an analogy between the US Government and a supertanker: any steering you do changes its direction very slightly, but doing so can eventually end up with the tanker in a completely different location.42

Good judgment and calibration

Good judgment is important to pursuing all of our priority paths, but we are including it here to emphasize its importance to this career path as well.

One of the key ways of contributing to AI policy is likely to be using foresight to spot possible policy traps and path-dependencies before they get locked in, which is often before they become overt. Doing this well requires judgment, good calibration, and extensive domain knowledge.

Other traits

  • A couple of people we spoke with emphasized the importance of communication, especially being able to translate between technical and non-technical language. Very few decision-makers in the US Government have technical backgrounds, and so being able to explain technical topics in layman’s terms is a valuable skill.

  • In conversations on this topic, multiple people mentioned how government employees frequently have to work on a wide range of issues, most of which will be outside their area of expertise. This is especially true in small teams that have to cover a wide range of issues, such as a congressperson’s staff. We were told stories about people with science or tech backgrounds becoming responsible for important issues outside their area of expertise, simply because others frequently assumed that they were the most knowledgeable person in the room on any science or tech issue.

  • Additionally, you need to be comfortable taking a risk on the impact you will have over your career. There may not be any critical junctures relevant to AI during your career, and even if there are, there may not be opportunities to affect them. As a result, this path carries a significant risk of having little impact. Nonetheless, in expected value terms we think work to help make the deployment and societal integration of advanced AI systems go smoothly is still extremely high impact.43
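
The expected value reasoning above can be made concrete with a toy calculation. All numbers below are invented purely for illustration; this is a sketch of how a low-probability, high-impact path can still come out ahead in expected value terms:

```python
# Hypothetical illustration of expected-value reasoning for a
# high-risk, high-reward career path. All figures are invented.

def expected_value(outcomes):
    """Sum of probability-weighted impacts for (probability, impact) pairs."""
    return sum(p * impact for p, impact in outcomes)

# A "safe" path: near-certain, modest impact (arbitrary impact units).
safe_path = [(1.0, 10)]

# A risky policy path: usually little impact, occasionally very large.
risky_path = [(0.9, 1), (0.1, 1000)]

print(expected_value(safe_path))   # 10.0
print(expected_value(risky_path))  # roughly 100.9
```

Here the risky path has a 90% chance of near-zero impact, yet its expected value is roughly ten times that of the safe path, which is the sense in which this career is “high-risk, high-reward”.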

Other personal fit considerations

  • US Government jobs have substantially lower pay than corporate options.44
  • More senior roles often have very long hours, and people who want to advance quickly often work up to 80 hours per week.
  • You will sometimes need to be willing to work on policies you don’t care about, or even actively disagree with. You may occasionally find yourself responsible for executing a plan, created by someone more senior, that you disagree with.
  • Most jobs are located in Washington, D.C., so you’ll probably need to move to get a good job if you live outside D.C.

Which paths could be even better?

US AI policy roles have the potential to be highly impactful, and span a wide range of activities, from research to operations to stakeholder engagement. As such, there are roles within this domain for people with many different skill sets. Nonetheless, for some people we think there are likely to be even better options.

Questions for further investigation

We still have many uncertainties about this career path that we would like to look into in the future, including:

  • Is it better to work on campaigns and/or eventually pursue political office?
  • Where, more specifically, should people try to work in their first roles in government?
  • How should you trade off between seniority and role relevance when selecting roles? How easy is it to move between agencies?
  • How valuable are roles in Congress relative to roles in the executive branch?
  • How important is it to be in DC, given that AI-relevant government entities such as DIU and JAIC exist or are being set up outside of DC?
  • How should we rank the various grad school options that we recommend?
  • Which types of people are likely to be promoted into senior roles faster within government vs outside it?
  • When are the ideal times to transition into or out of government roles?
  • How valuable is technical expertise relative to other skills when working on US AI policy?
  • How much does technical expertise help you move quickly into more senior roles in government?
  • Which better prepares you for US AI policy: a law JD at Yale or Harvard, or a master’s in public policy, security studies, or international relations from a top school?
  • Are there ways of signaling your intelligence during a policy master’s that are as much of a positive signal to people in government as going to Yale Law or Harvard Law?

Conclusion

We’ve argued that the US Government is likely to be a key actor in the development and deployment of advanced AI systems. One of the main mechanisms that could lead to negative outcomes from deploying advanced AI systems is a race to the bottom on AI safety. Governments are likely to be key actors that could contribute to an environment leading to such a race, or could actively prevent one. As a result we argued that US AI public policy could be one of the most important issues this century.

We gave some arguments for and against working on US AI policy, but overall we argued that if you have the following traits, then you may have a decent shot at becoming a senior AI policymaker, and that a career in US AI policy may be one of the most impactful things that you could do:

  • Are thoughtful and have good judgment
  • Are an American citizen (though similar roles can be good options for people of other nationalities)
  • Have or are interested in developing technical expertise in a field relevant to AI policy
  • Are good at building and maintaining relationships
  • Are comfortable interfacing with a large bureaucracy
  • Have or can get a law JD or a master’s in policy, international relations, or machine learning from a top school

If you are interested in pursuing a US AI public policy career path, or are interested in learning more, then we would like to hear from you below.

American interested in working on AI policy?

We’ve helped dozens of people transition into policy careers. We can offer introductions to people and funding opportunities, and we can help answer specific questions you might have.

If you are a US citizen interested in building expertise to work on US AI policy, apply for our free coaching service.

Apply for coaching

Further reading

Getting started in this career

Learning more about AI policy

Notes and references

  1. Thanks to Ben Buchanan, Jack Clark, Allan Dafoe, Carrick Flynn, Ben Garfinkel, Keiran Harris, Roxanne Heston, Howie Lempel, Terah Lyons, Brenton Mayer, Luke Muehlhauser, Michael Page, Nicole Ross, Benjamin Todd, Helen Toner, Rob Wiblin, Claire Zabel and Remco Zwetsloot for useful conversations and comments on this article. Thanks also to Richard Batty, Nick Beckstead, Miles Brundage, Owen Cotton-Barratt, Tom Kalil, Jade Leung, Alex Mann, Carl Shulman, and many current and former US Government employees and researchers at AI labs for conversations that helped shape this article. This article does not represent any of these people's views and all mistakes are our own.
  2. We are affiliated with the Future of Humanity Institute at Oxford University, but we have not included them in this list because of this affiliation.
  3. Carl Shulman, Howie Lempel and Benjamin Todd adapted this description of the collection of challenges arising from the development of advanced AI systems from the OpenAI charter.
  4. This framing is taken from Page, M, et al. ‘Advanced AI as a global collective action problem,’ unpublished.
  5. See forthcoming analysis by Amanda Askell for more detailed discussion of this issue.
  6. As Ben Buchanan points out, it is not enough to commit to not racing. One must actually convince the other side that it is not racing, and one's actions must not be misperceived. In international security this is difficult to do, and history is rife with examples of misperception.
  7. It is not clear that credible commitment mechanisms of this nature currently exist; this is an active area of research within the field of AI policy, for example by researchers like Miles Brundage at OpenAI.
  8. This definition is taken from the OpenAI Charter.
  9. Thinking of AI policy as a coordination problem, we can imagine that there might be bi-modal equilibrium outcomes (or basins of attraction) depending on actions taken over the coming decades. If this is the case, then we should be very concerned about the location of tipping points between good and bad outcomes.
  10. See, for example, the Security Dilemma or more generally vicious and virtuous cycles.
  11. How any verification mechanisms might work is still an active area of research. Miles Brundage at OpenAI, among others, is working on this problem.
  12. Following the policy change model, we can break the policy process down into different stages. We can think of at least the stages up until and including policy implementation as lying on a spectrum, from research to implementation:

    1. Research into possible problems: in broad strokes, researching potential problems related to advanced AI systems.
    2. Research into possible solutions: researching potential solutions to the problems described above
    3. Agenda-setting: raising the profile of problems within government
    4. Policy formulation: developing ideas for solutions into specific policy proposals
    5. Legitimation: work to gain legislative, executive, or other types of formal approval for a policy proposal
    6. Policy implementation: implementing a legitimized policy proposal

    Once a policy has been implemented we can add two more stages:

    7. Evaluation
    8. Policy maintenance, succession or termination.

    Other models of policy change in the context of existential risk reduction are presented here.
  13. See Garfinkel, B. 'The impact of artificial intelligence', forthcoming.
  14. We are already seeing activity from government on AI, including half-a-dozen pieces of AI-focused legislation, a new commission, a new center on AI within the Department of Defense, and $2bn in new DARPA AI spending.
  15. Scharre, P. 'Army of none'. Tantor, (2018)
  16. Figures are taken from ‘Department of Defense artificial intelligence, big data and cloud taxonomy’, govini.
  17. This would actually be an argument in favor if you were worse than the typical federal employee, however if you are likely to be worse than the typical federal employee then we would recommend against entering government in most roles!
  18. A couple of examples of this phenomenon are given here: ‘The story of a lucky economist’
  19. For instance, we estimated that graduates of Harvard and Yale law schools in the US had between a 1 in 8 and 1 in 12 chance of successfully being elected to congress if they tried. Other elite law schools such as those at Georgetown and University of Texas had odds that were not much worse. Similarly we estimated that congressional staffers have perhaps a 1 in 40 chance of being elected to congress if they tried.

    We estimated similarly strong odds for Oxford University PPE students who wanted to be elected into the British Parliament, and even better odds for former presidents of Oxford University’s two ‘unions’. While the British estimates obviously don’t apply to the US context, we see these estimates as additional evidence that there are pockets of society that have strong chances of making it into influential positions in government. If you are in one of these pockets, or if you can get into them, then we think you have a decent chance of making it into an influential role.
  20. Different industries select for different traits, and we think that promotion in government may select more for social intelligence than other industries.
  21. A few articles that have been published on the coordination problem caused by advanced AI systems include:

    * Armstrong, S., N. Bostrom, and C. Shulman ‘Racing to the precipice: a model of artificial intelligence development’, AI & Society (2016)
    * Bostrom, N., A. Dafoe, and C. Flynn, 'Public policy and superintelligent AI: a vector field approach', in 'Ethics of Artificial Intelligence', Oxford University Press, forthcoming (2019)
    * Cave, S. & Ó hÉigeartaigh, S. ‘An AI Race for Strategic Advantage: Rhetoric and Risks’, AAAI (2018)
    * Dafoe, A. 'AI Governance: A Research Agenda'. (2018)
    * Danzig, R. 'Technology roulette: managing loss of control as many militaries pursue technological superiority' (2018)
    * Meserole, C., 'Artificial intelligence and the security dilemma', Lawfare, (2018).
  22. Ben Garfinkel points out that if you define collective action problems broadly, as cases where parties fail to achieve Pareto optimal outcomes, then nearly all international security issues are examples. The world wars are particularly salient examples from the past century, where the (in principle unnecessary) harms were nearly existential for the participating countries.
  23. And other ozone-depleting substances controlled by the protocol.
  24. Processes producing CFCs as a waste gas contributed less to global GDP than the industrial sectors producing CO2 as a waste gas. Additionally the cost of switching over from CFCs to alternative gases that were less polluting was far cheaper than the cost of phasing out carbon emissions.
  25. It is difficult to know how many actors are likely to be involved in the development of advanced AI systems. Costs of training AI systems have been increasing rapidly as they require more compute. If this trend continues, then this line of argument gestures at a possible future in which only the best-resourced governments and companies can afford to train the most advanced and largest deep learning systems.
  26. This statement assumes that we will get good leading indicators on advanced AI systems and possible coordination failures, which seems plausible but not certain.
  27. Global temperatures are projected (see ch. 5) to peak at the earliest late this century, and most likely in a future century. Experts expect the full automation of all human jobs to occur sometime later this century or beyond, with a median estimate of this occurring in 122 years.
  28. Ben Garfinkel gives some ways in which critical junctures could occur during the development and adoption of advanced AI systems, including:

    * The risk of irreversible harm to civilization from wars partly caused by tensions around AI development
    * The possibility that there will be a fleeting and non-repeated opportunity to establish a more unitary or generally well-coordinated international order, to deal with or pre-empt various issues caused by AI
    * The possibility that certain developments around AI will enable the ‘lock-in’ of particular political institutions, maybe most saliently robust totalitarianism, without it being inevitable which ones will lock in
    * The possibility that there will be a big ‘winner’ of the AI race, who will have disproportionate influence over other critical juncture-ish choices (like how to handle the development/deployment of other important technologies, how to initiate space colonization, etc.)

    More generally, we could see critical junctures as a result of lock-in, positive feedback, increasing returns (the more a choice is made, the bigger its benefits), self-reinforcement (which creates forces sustaining the decision), and reactive sequences (a series of events triggering one-another in a rapid near-unstoppable sequence like dominoes falling).
  29. For a more detailed discussion of this topic, see p6 of Beckstead, N. 'On the overwhelming importance of shaping the far future', PhD thesis, Rutgers (2013)
  30. There could be no critical junctures because development and adoption of AI could be fairly gradual without any substantive discontinuities. Paul Christiano discusses a number of topics relevant to the likelihood of this scenario on the 80,000 Hours podcast here.

    Critical junctures could also fail to materialize if the long-run impacts from the development of AI are not particularly path dependent. To motivate this point, consider the analogy of the industrial revolution, which has given some countries in the West an economic and military advantage that has lasted for a couple of centuries. At first glance this looks like it might be a path dependency; however, the West's economic and military lead is now eroding, and it seems plausible that major economies in the East will catch up in the future, at least in the absence of advanced AI systems. This example illustrates how the advantage from technological progress to one actor or group of actors can erode away over time. We give this as one example of how, with the benefit of hindsight, the path dependencies or critical junctures that one might initially expect from technological progress may not be as strong as originally thought.
  31. If critical junctures exist in history, in which a person could take path dependent actions that steer humanity onto a better or worse trajectory, then the industrial revolution is a plausible candidate. And yet, without the benefit of hindsight, it seems plausible that a smart, motivated person would have struggled to make the industrial revolution go better or worse from a long-term perspective. This motivates the intuition that it may be difficult to affect critical junctures even when they do occur. Nonetheless, critical junctures seem like particularly important opportunities to improve the long-run future, so they seem worth paying attention to in case you can affect them.
  32. Toby Ord and Owen Cotton-Barratt discuss this phenomenon in more detail in the general case here and here. One note of caution, though, is that if very few people are working on an issue, then people should coordinate their efforts and shouldn't all focus on discontinuous scenarios.
  33. Of course, the US government is vast, and very few of these roles would be at all relevant to AI.
  34. OSTP is heavily understaffed in the current administration, and people we interviewed thought that it is not particularly influential at present. In the previous administration it was an influential part of government though. We recommend OSTP more strongly in administrations when it is more influential.
  35. The Office of Net Assessment is only likely to be relevant if it manages to continue to have the level of influence that it had under its retired, legendary founder Andy Marshall.
  36. JASON is not part of the Department of Defense, but it advises the department, so we are including it here.
  37. The deadline for applications each year is 1 November by 11:59pm EST.
  38. 80,000 Hours is affiliated with the University of Oxford's Future of Humanity Institute, however this affiliation has no effect on our recommendation of this program.
  39. The Heritage Foundation is known as being ideological, and working there will probably brand you as a staunch conservative to the point where it will likely be difficult to work for a future democratic administration.
  40. Ideally at a school with strong links to government, such as CMU, Harvard, or MIT. We are currently uncertain about how to rank this option relative to the others in this list. It also has a wider range of options of things to do afterwards, so if you are not confident that you want to work on AI policy then this looks like a particularly good option.
  41. Rothkopf, D., ch. 1 of ‘Running the World’, Public Affairs, NY
  42. Skidmore, D. ‘The Obama Presidency and US Foreign Policy: Where's the Multilateralism?’ International Studies Perspectives (2012) doi:10.1111/j.1528‐3585.2011
  43. See chapter 6 of 80,000 Hours co-founder William MacAskill’s book Doing Good Better for more discussion of the importance of expected value when assessing social impact.
  44. We have a discussion of congressional staffer salaries here.
  45. The Open Philanthropy Project is a funder of 80,000 Hours, however we have not recommended them because they fund us.