This article was originally published in 2019. It was revised in July 2023 to reflect more recent developments and thinking.

At 80,000 Hours, we think a significant number of people should build expertise to work on United States policy relevant to the long-term effects and potentially catastrophic risks of the development and use of artificial intelligence.

In this article we go into more detail on this claim, as well as discussing arguments in favour of and against working in this area.1 We also briefly outline which specific career paths to aim for and discuss which sorts of people we think might suit these roles best.

We have separate articles that cover a broader set of potential career paths in the AI governance and coordination space and in AI safety technical research.

Summary

  • The US government is likely to be a key actor in how advanced AI is developed and used in society, whether directly or indirectly.
  • One of the main ways that AI might not yield substantial benefits to society is if there is a race to the bottom on AI safety. Governments are likely to be key actors that could contribute to an environment leading to such a race or could actively prevent one.
  • Good scenarios seem more likely if there are more thoughtful people working in government who have expertise in AI development and are concerned about its effects on society over the long term.
  • In some ways, this may be a high-risk, high-reward career option. There’s a significant chance that pursuing this career path could result in little social impact. But we think impactful progress in this area could be extremely important, so the overall value of aiming to work on US AI policy seems high.
  • We think there is potentially room for hundreds of people to build expertise and career capital in roles that may allow them to work on the most relevant areas of AI policy.
  • If you’re a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest-impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US may be much harder for you to get.)

Realising the promise of AI by managing the risk is going to require some new laws, regulations, and oversight.

President Joe Biden, remarks on artificial intelligence from the White House on July 21, 2023

The aims of AI policy

In our career review on AI governance and coordination, we outline a number of potential aims for impactful policy in this area, while noting that there are a lot of remaining uncertainties about the best approach going forward:

  • Preventing the deployment of any AI systems that pose a significant and direct threat of catastrophe
  • Mitigating the negative impact of AI technology on other catastrophic risks, such as nuclear weapons and biotechnology
  • Guiding the integration of AI technology into our society and economy with limited harms and to the advantage of all
  • Reducing the risk of an “AI arms race” — between nations and between companies — in which competition leads to technological advancement without the necessary safeguards and caution
  • Ensuring that those creating the most advanced AI models are incentivised to be cooperative and concerned about safety
  • Slowing down the development and deployment of new systems if the advancements are likely to outpace our ability to keep them safe and under control

When we discuss AI policy here, we are referring not just to law and formal policy but also to norms shared between actors, best practice, private contracts, and so on.

Advanced AI as a coordination problem

Like many other modern problems, AI governance constitutes a coordination problem.2 One element of this coordination problem is that the perceived rewards from accelerating the development of AI capabilities may create a race to the bottom. That is, they create an incentive to cut research into AI safety and instead focus on increasing AI capabilities in order to get ahead in a perceived race.

Racing in this way may be counterproductive even for the actors’ own interests.3 Additionally, perceptions or misperceptions of a race could exacerbate rivalrous development,4 as the nuclear arms race did during the Cold War, potentially even leading to conflict. In such a scenario, a lack of coordination risks a worst-case outcome for all actors, while a coordinated response in which parties credibly pre-commit to the broad sharing of benefits could allow a good outcome for all.
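
To make the incentive structure concrete, here is a minimal sketch of this race dynamic as a two-actor game. The payoff numbers are purely illustrative assumptions of ours, not estimates from any source:

```python
# A stylised "race to the bottom" on AI safety, modelled as a prisoner's
# dilemma between two developers. Payoffs are illustrative, not estimates.
# Each actor either "invests" in safety or "races" (cuts safety for speed).
payoffs = {
    # (A's choice, B's choice): (A's payoff, B's payoff)
    ("invest", "invest"): (3, 3),  # coordinated, careful development
    ("invest", "race"):   (0, 4),  # B pulls ahead; both bear the added risk
    ("race",   "invest"): (4, 0),
    ("race",   "race"):   (1, 1),  # rivalrous, unsafe development
}

for a in ("invest", "race"):
    for b in ("invest", "race"):
        print(f"A {a:6s} / B {b:6s} -> {payoffs[(a, b)]}")

# "Race" strictly dominates "invest" for each actor (4 > 3 and 1 > 0), so
# self-interested play lands both at (1, 1), even though (3, 3) is better
# for everyone. Credible pre-commitments to share benefits change the game.
```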

Solving a coordination problem of this nature would require coordination between a number of key players. It is difficult to know which actors might be involved in the development of advanced AI systems, but plausible candidates include leading AI development labs and states.

Definitions: artificial general intelligence and other types of advanced AI system

Artificial general intelligence (AGI) is a hypothetical highly autonomous system that we define as being able to outperform humans at most economically valuable work.5 This term is often used in contrast to narrow AI, which can outperform humans at a specific intellectual task, such as playing the board game Go, but cannot perform outside its area of expertise. For example, a Go-playing algorithm could not control a robot attempting to set a table.

Both AGI and narrow AI lie on a spectrum, and we use the term advanced AI system as a vaguer, broader term that includes both AGI and future narrow AI systems that are less capable than AGI but still substantially more capable than AI systems in existence today.

Key actors

At the moment, most cutting-edge AI R&D is done by private, non-state actors. Leading AI labs that are working towards developing AGI include Google DeepMind, Anthropic, and OpenAI, among others.

Researchers at Open Philanthropy,6 the Centre for the Governance of AI (GovAI), the Institute for AI Policy and Strategy, Google DeepMind, Anthropic, OpenAI, and elsewhere are already studying AI policy. Academic research groups, including those at MIT, Oxford, Cambridge, Carnegie Mellon University, and UC Berkeley, also do important work on AI safety and related issues.

Governments often have a mandate to address pressing social problems and thus invest less in long-term, more speculative problems. With AI, the combination of uncertainty and the potentially rapid rate of technological advancement means that long-term effects might be decision-relevant soon.

There are a number of ways to improve coordination between the actors that may be involved in this challenge. We have written separately about why US-China relations may be particularly important to focus on to reduce risks from advanced AI.

In this article, we address what we think is one of the most important paths: helping create policies relevant to the development and societal implementation of advanced AI systems, with a focus on the US government.

This work could involve researching strategies that could help increase the chances of good outcomes, or helping to formulate or implement government policy around existing promising strategies, or some combination of the two.

We think that for our readers who are US citizens, an AI public policy career may be one of the most impactful career opportunities currently available.

Which parts of AI policy do we think are most important to work on?

We have no choice but to acknowledge that AI’s changes are coming, and in many cases are already here. We ignore them at our own peril. Many want to ignore AI because it’s so complex. But with AI, we cannot be ostriches sticking our heads in the sand.

Senate Majority Leader Chuck Schumer, delivering remarks to launch the SAFE Innovation Framework for Artificial Intelligence at the Center for Strategic and International Studies on June 21, 2023

Within the category of US AI public policy careers, we are particularly interested in careers aimed at improving the government response to, and engagement with, advanced AI systems, as we think this is likely to include the highest-impact opportunities within the field of AI public policy. This is because more powerful systems are likely to have more societal impact, and the worst-case scenarios are more likely to be catastrophic with advanced AI systems.

While advanced AI systems will likely have outsized impact, decisions made today could be key to shaping the environment in which advanced AI systems are developed and deployed. There are likely to be path dependencies and feedback loops in how people think about and respond to current AI issues that will impact the potential development and deployment of advanced AI systems.7

Relatedly, working on today’s issues lets us learn and build trust and collaboration before dealing with the trickier policy challenges associated with advanced AI systems. And current issues in AI provide opportunities to build expertise, influence, relationships, and career capital.

As we discuss later, one way of thinking about contributing to AI policy is that the most valuable contributions may be in spotting policy traps, path dependencies, or feedback loops before they become overt,8 because when they are overt it is often too late.

When thinking about where within AI policy to work, you can probably develop career capital and insights in a variety of areas, and so we think you should primarily be asking, ‘Which area is most likely to contain path dependencies that might be important to the development and societal integration of advanced AI systems?’

Based on our conversations with experts, here are some of the areas of AI policy that we think are likely to be particularly impactful to work on:

  • Issues related to AI, global cooperation, and diplomacy, such as developing research collaborations and reducing risks from races for AI capabilities
  • Issues relevant to AI race dynamics, such as foreign investment controls on AI startups, visa restrictions on AI talent, and government spending on AI R&D
  • The US government’s approach to AI safety, such as a standards and evaluation agenda, a potential licensing regime, funding of AI safety research, and the use of AI in critical systems, particularly in the military and intelligence communities

Policy careers involve moving between different types of roles and institutions

US AI public policy careers could involve moving between a number of different types of institutions, from industry labs to think tanks to advocacy organisations to academia. They may also involve working for the US government directly, perhaps in the Department of Defense (DoD), the intelligence community, White House Office of Science and Technology Policy (OSTP), Department of State, or elsewhere. We go into more depth about which roles to aim for later.

These careers don’t necessarily involve working on AI in the near-term, but could instead involve developing skills and expertise that would be relevant to AI roles in the future. For example, one might do graduate study or work at DoD in non-AI roles with the aim of building experience required to excel in AI roles later in one’s career.

However, as interest in and attention on AI have increased, it may be advantageous to build career capital that directly touches on AI, or on emerging technology more broadly.

The aim of these roles is to build up expertise relevant to AI public policy and then contribute to developing risk-reducing and broadly beneficial AI policies. This will likely involve activities such as developing and implementing coordination mechanisms, setting up new agencies, writing and implementing new laws and regulations, funding critical research, and brokering verifiable agreements between key players that may develop advanced AI systems.

Why pursue this path?

At 80,000 Hours, we currently rank pursuing a career in AI governance and coordination as one of our few top-recommended priority paths.

Why do we rank US AI public policy careers so highly? In brief, it’s because we think helping the US government navigate the transition to a world with advanced AI systems is both important and neglected. Below we give four reasons why US AI public policy roles may be more impactful than many people think, and we discuss each in more detail in the sections that follow:

  1. The US government is a powerful actor that may play a key role in the development and deployment of advanced AI systems.
  2. Given the government’s huge influence, your chances of reaching a role in which you can have a large positive impact on the world are probably high enough that the expected value of your work is likely to be substantial.
  3. There are very few people with relevant expertise in the US government working on issues related to advanced AI systems, and the government is somewhat keen to hire, so these roles are unusually high impact and likely to offer plenty of opportunities to advance.
  4. Working for the US government in an influential policy role gives you many options for what to do afterwards.

1. The US government is a powerful actor that may play a key role in the development and deployment of advanced AI systems

The US government is likely to be involved with the development and deployment of advanced AI systems

AI and society

Whenever technology causes rapid societal change, it is likely that the government will get involved in developing or governing the use of that technology.

AI is a general purpose technology with wide-ranging applications. In the coming decades we are likely to see AI revolutionising sectors across society, from transport to healthcare and from education to manufacturing.

Advanced AI systems could have myriad societal impacts: from substantially increasing economic growth and finding cures for many illnesses, to causing mass unemployment and extreme wealth inequality if poorly managed.9 And we believe advanced AI systems could potentially cause an existential catastrophe. As AI systems become more advanced and more issues and risks arise, governments may become increasingly involved. Many political leaders have already expressed an interest in regulating and mitigating risks from AI, and governments have begun moving forward on policies in this area.

Currently AI development is primarily within private companies, and the ultimate involvement — direct or indirect — of governments remains uncertain. We expect that in time there will be new government agencies, both nationally and internationally, created to manage and respond to the development of AI.

AI and national security

AI is also likely to transform intelligence gathering, warfighting, and the domain of national security more broadly. The US military may get involved in the development and deployment of powerful AI systems for this reason.

The US military has kept its superpower status over the past half-century in large part through maintaining technological superiority over its adversaries. The US is now looking to AI to help it maintain a technological edge.

In the national security domain, AI systems are already being used in applications ranging from cybersecurity and military logistics to image processing. AI systems are being tested for use in unmanned aerial vehicles and undersea warfare.10 An AI system can already beat a retired professional fighter pilot in simulated dogfights.

We suspect that the US national security community is likely to get involved in the development and deployment of advanced AI systems because AI is likely to be a powerful technology with wide-ranging applications. Below we argue that the US could end up influencing the long-term trajectory of AI simply because it is an involved powerful actor.

The US government is a powerful actor

The US government is well resourced

The US government’s federal budget is approximately $6.4 trillion per year, roughly equal to the combined annual revenue of the world’s 19 largest companies.

The US’s military expenditure is larger than that of the next 7–10 countries combined, depending on how you count. To give a sense of scale, the Department of Defense (DoD) employs almost 3 million people, half a million of whom are on active duty overseas at any time, and has a budget of over $800 billion per year. For fiscal year 2024, DoD requested $17.8 billion to invest in science and technology R&D, including AI.

Despite these large budgets, the level of technical AI expertise at leading AI labs is substantially higher than that within the US military.

The US government has unique abilities

In addition to being well resourced, the US government is able to wield unique forms of influence. It has many tools at its disposal, including taxation and tax breaks, regulation, antitrust action, and the power to require companies to prioritise government contracts, among many other levers.

The use of these levers of influence is meant to be kept in check by the three branches of government, and in turn by public opinion and the many other forces that influence political decisions.

Nonetheless, because of its resources, abilities, and strong incentives to engage with the issue, we expect the US government to be highly relevant, whether directly or indirectly, to the development and deployment of advanced AI systems.

2. Individuals can have a large positive impact in government

Relevant experts on positively shaping AI have told us that they think US AI public policy careers are likely to have high expected value. People who understand government careers have argued that ambitious people pursuing this path can move into roles with a large positive impact on the world substantially faster than is typical for climbing the government career ladder. However, it has also been suggested to us that advancing up the career ladder generally takes longer in government than in other industries, because government is often less meritocratic than industry or academia.11

Can individual federal employees influence the government?

Most mid-level and senior federal employees that we spoke with were able to give us an example of how they had a large positive impact through their role. Some of their examples involved starting new impactful programs worth tens of millions of dollars, saving Americans hundreds of millions of dollars, or moving billions of dollars to something potentially more impactful. We have not been able to vet these stories, but they persuaded us that, at the very least, mid-level and senior federal employees feel as though they can sometimes have a large positive influence on the government.

Interestingly, federal employees told us that it was relatively rare for them to be responsible for remarkably good things happening; more often, they thought they had positively influenced government by preventing unintentionally harmful policies from being adopted.12 We expect this pattern to repeat itself for people working in the domain of AI policy. It makes it easier to have a positive influence than it would be if you could only have impact by actively making positive things happen (which is what people often imagine when they think of having an impact in government).

A different type of argument that federal employees can influence the government comes from looking at elected officials. One might naively assume that elected officials make the decisions and federal employees simply carry out the wishes of the elected officials. We do not think this is how the government works in practice. Elected officials are necessarily generalists so they rely heavily on specialists within government not just to implement things, but also to heavily inform their priorities, strategies, and overall policy goals.

Yet another line of argument that individual federal employees can impact government comes from looking at the budgets controlled by different types of people within government.

How large are the budgets that government officials oversee?

This is only a very rough heuristic, but by dividing the $1.7 trillion discretionary federal budget13 by the number of people at different levels of seniority, we can estimate the average budget that different subsets of people in government oversee.

Subset of people                                           Approximate number   Budget per person per year
All federal employees (except US Postal Service workers)   2.3M                 $700,000
Federal employees working in Washington DC                 370,000              $4.6M
Senior Executive Service and political appointees          12,000               $142M
Political appointees                                       4,000                $425M

Note that this method is more of an upper bound on the average influence that people in different groups have over budgets. There is substantial double-counting going on: the political appointee, Senior Executive Service (SES) employee, and federal employee can’t all have complete control over their entire budgets. In practice each person will be heavily constrained by their superiors, and managers will often need to delegate control of parts of their budgets to their teams. We do not know how budgetary control is divided between the different levels of seniority within government, but we would guess that this method greatly overestimates the influence of junior people and underestimates it for senior people.
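
As a quick check of the heuristic, here is the arithmetic in a short Python sketch, assuming a discretionary budget of roughly $1.7 trillion; it reproduces the rounded figures in the table above:

```python
# Rough heuristic from the table above: divide the discretionary federal
# budget evenly across everyone in a given subset of government personnel.
DISCRETIONARY_BUDGET = 1.7e12  # dollars per year (approximate)

headcounts = {
    "All federal employees (excl. USPS)": 2_300_000,
    "Federal employees in Washington DC": 370_000,
    "SES and political appointees": 12_000,
    "Political appointees": 4_000,
}

for group, n in headcounts.items():
    # e.g. 1.7e12 / 370,000 is roughly $4.6M per person per year
    print(f"{group}: ${DISCRETIONARY_BUDGET / n:,.0f} per person per year")
```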

Nevertheless, these figures are high enough to make it plausible that some people have influence at a significant scale, especially those promoted into the SES or picked as political appointees.

Additionally, much of the influence that you will have as a federal employee is probably not best thought of as influencing budgets. Governments have all sorts of abilities that do not involve spending money, some of which are described above. In the case of AI, we can imagine many important actions that would not involve much government expenditure, such as negotiating agreements to avoid a race-to-the-bottom on AI safety.

We have argued that individual federal employees can have considerable influence over government, especially in more senior roles. But what are your chances of moving into one of these senior roles?

What are your chances of moving into senior roles in government?

A very simplistic model in which only Washington, DC employees are promoted into Senior Executive Service roles suggests that in expectation you will spend 6% of your career in the SES. Of course this is only an average and most employees will not be promoted into the SES, while some will spend much of their career in it. Because no one spends their entire career in the SES, and based on our understanding of the demographics of our readership, we would guess that the average 80,000 Hours reader has a >6% chance of being promoted into the SES if they become a federal employee in Washington, DC.
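
The article doesn’t spell out the assumptions behind the 6% figure, but a steady-state occupancy model is one way to reconstruct a number of this magnitude. The eligible-pool size below is our own illustrative assumption, not a figure from the article:

```python
# Simplistic steady-state model: if careers are of equal length and senior
# roles are filled only from an eligible pool of DC federal employees, the
# expected share of a career spent at senior levels is slots / pool size.
SENIOR_SLOTS = 12_000  # SES plus political appointees (see table above)

# Assumption (ours): only ~200,000 of the ~370,000 DC federal employees are
# on career tracks that feed these roles. Using all 370,000 gives ~3%.
ELIGIBLE_POOL = 200_000

expected_share = SENIOR_SLOTS / ELIGIBLE_POOL
print(f"Expected share of career spent at senior levels: {expected_share:.0%}")
# -> 6%, the same order of magnitude as the article's figure
```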

If you have been or can go to a top law or policy school, then your chances of making it into the SES or becoming a political appointee are particularly good. We have not collected data to support this view, but it is received wisdom in DC that Yale and Harvard Law grads have higher chances of being promoted to the SES and/or becoming political appointees. To illustrate this point, one person we spoke with said that if you go to Yale Law, then when people in government find out about your educational background they sometimes awkwardly joke that you will probably be their boss one day.

This fits with our impression of other political domains. In the past, we have estimated the chances that people with the most predictive academic credentials get elected to Congress and the UK Parliament, and found that they are surprisingly high.14

Comparing the competitiveness of government roles against those in industry

Our perception is that government roles are generally less competitive than similarly influential roles in industry, in part because many people value the freedom and pay of industry more than the influence of government roles.

To illustrate this point with an example, we can make a direct comparison between working in government and earning-to-give. When someone is earning money in order to give it away, this is a competitive endeavour because everyone would like more money. In contrast there are far weaker personal incentives to want to influence government budgets, and so we should expect the influence per role as measured in dollars to be higher in government than in earning to give. Of course, government money is less flexible than personal donor dollars, but even once we factor this in we think the effect still holds.

Despite this positive outlook, elections and other political factors make career progress somewhat unpredictable, which can be tough to bear even if the expected value is high. We discuss this and various other downsides of US AI public policy careers below.

How meritocratic is career progression within government?

Civil servants we spoke with thought that while government careers are perhaps less meritocratic than in industry or academia, they are still fairly meritocratic and the most well-suited people have better odds of rising to the top.15 Senior officials in government are often very impressive individuals, and have on occasion had very successful careers in academia and the corporate world beforehand. As mentioned elsewhere, particularly talented and hardworking individuals may be promoted faster by rising up through think tanks or academia and then moving into government.

3. There is an AI skills shortage in the US government

AI skills are in demand in government

Senior staff within governments and top AI labs have told us that they struggle to find experienced and qualified AI talent to employ. At every national security and technology conference that 80,000 Hours has attended, speakers have lamented the government’s lack of expertise on AI and noted the substantial demand for such expertise within government.

While much of this demand is probably for machine learning engineers and other technical specialists, policy practitioners with technical AI expertise also seem to be in demand.

This demand for AI expertise is not limited to the Department of Defense. Because AI has so many uses and impacts, many parts of government are likely to be involved in the development of policy related to AI. For example, to date we have seen activity on AI from dozens of different agencies, offices, and committees within Congress and the executive branch.

In addition to roles in government, there are likely to be dozens of US AI public policy roles in industry, think tanks, non-governmental organisations, inter-governmental organisations, intelligence and defence contractors, academia, and elsewhere in the US AI policy community.

Not enough people in government are thinking about catastrophic risks from advanced AI systems

There are perhaps three levels on which US AI public policy relevant to advanced AI systems is currently being neglected. We outline them below:

  1. Safety and policy issues related to advanced AI systems are very neglected relative to the scale of the issue. We argue for the scale of the problem in more detail in our problem profile on this issue. And we think we likely need more AI policy practitioners with a technical AI background than are currently working in the area.
  2. Within issues related to advanced AI systems, there has historically been less focus on policy and governance approaches than on technical problems and solutions.16
  3. Within the field of AI policy, we think that working within key governments, such as those of the US and China, is more neglected than working in top AI labs.

There are many more roles available for talented people

Eventually, we think it would be valuable for hundreds of people to build expertise and career capital in policy areas relevant to the development and deployment of advanced AI systems.

We can motivate this claim with an estimate looking at senior positions. We think it would be valuable to have at least 20 senior people with a background in and familiarity with issues related to the development and societal integration of advanced AI systems, as we think at least this many senior offices are likely to be involved. If a typical entrant has about a 10% chance of making it into a senior role, then it would be worth perhaps 200 people developing expertise and pursuing these careers.

Additionally we would like to see an academic field of dozens or perhaps hundreds of people studying AI policy, similar to the fields that have emerged developing policy related to nuclear weapons, biosecurity, or cybersecurity, with a focus on catastrophic risks and long-term effects.

It is worth highlighting that thinking about the ideal numbers of people in an area ignores opportunity cost, and that there are diminishing returns to the number of people with expertise on these issues. If enough people enter this broad career path then additional people would become less impactful, so it might cease to be a top option on the margin, despite still being valuable. But we think we are currently a long way away from that point.

4. Government careers were already a good default option

We expect many of the positions in this career path to open up opportunities to have a large positive impact even in important policy areas besides AI policy:

  • There are many pressing global problems you could work on in these roles. For example, from within the national security community, you could help reduce biological global catastrophic risks, or you could work to reduce the chances of a great power war.
  • As discussed above, governments usually control large quantities of resources per decision-maker, and so if you can become one of those decision-makers then you can have outsized impact if you make good decisions.
  • Government careers have reasonable exit options into other social impact jobs. In government, you learn a range of widely applicable skills, such as management, stakeholder engagement, and quantitative analysis of real-world problems. Working in the US government in influential policy roles is generally seen as relatively prestigious when applying for future jobs, though it depends on the audience and your exact role (e.g. entry-level top-end management consulting jobs are often seen as more prestigious than entry-level policy jobs).

This means that even if a career in AI policy doesn’t work out, you’ll likely still have plenty of opportunities to work on other key global problems within government or to exit into other relevant social impact jobs.

As mentioned above, US AI public policy roles are not just in government. Similar arguments on career capital and impact to those given above apply for think tanks and academia. Read more about other options in our broader career review on AI governance and coordination paths.

Arguments against

Your social impact in this career path is high risk, high reward

Your chances of having a large social impact by working in AI policy are highly uncertain. You would be intentionally taking a big, low-probability bet, not just with regard to your own contribution (which is generally uncertain in any cause), but also with regard to the overall case that this area is promising to work in. While in expectation we think this is one of the highest-impact career paths, it may be a bad fit if you’d be demotivated by the high probability that your impact will end up very small.

In order for you to have outsized impact you would need to:

  • Be in the right place at the right time
  • Have the foresight to take the right actions in government
  • Have the right expertise (in addition to being thoughtful with good judgement), which can be somewhat difficult to predict in advance

Additionally, the following conditions would need to be met:

  • The development and societal integration of AI would need to have major potential downsides and/or upsides
  • Coordination problems would need to be solvable, but not too solvable
  • There would need to be critical junctures, path dependencies, policy traps, or strong feedback loops in the development and governance of AI systems

As we discuss in the rest of this section, all of these things happening in concert seems unlikely, and so we think the median outcome is that individuals working on AI policy will have little social impact. However in the scenarios in which all of the above and other caveats are satisfied, your impact could be very large indeed, and so we still recommend this career because of its large expected impact.

You need to be in the right place at the right time

It is difficult to predict exactly when and where key AI policy decisions might be made. To be in the relevant part of government to participate in these decisions, you need to either get lucky, or predict how jurisdiction disputes between parts of government will play out and position yourself well in advance. You can mitigate this risk somewhat by becoming someone whose advice people seek out, so that you will be consulted regardless of where AI policy is made within the government.

Relative to other industries, it can take a long time to climb the career ladder in government and get into positions of influence. You need to be in a position of influence at a time when AI technologies have advanced enough that key decisions on AI policy need to be made. You may be able to shortcut this climb by rising up through academia or think tanks and then transitioning into politically appointed roles; however, this route is extremely competitive.

Governments can be difficult to predict

Governments can be difficult to predict, in large part because there are so many moving parts in any decision. As a consequence it can be difficult to predict whether a given action is on balance positive or negative. We hope that individuals will get better at this over time as their political instincts improve. Nonetheless, it may be relatively unclear how to use any influence you gain to effect positive change in the world, with the possible exception of helping to prevent obviously bad things from happening. Additionally, we don’t know to what extent having additional concern for the long-term outcomes of AI will allow you to make better decisions than a typical talented government employee or political appointee.

International coordination problems often haven’t been solved historically

Earlier we argued that advanced AI systems pose an international coordination problem. We argued that helping solve this coordination problem could be one of the impactful things that you could work on. Unfortunately, humanity doesn’t have a good track record of solving hard international coordination problems, though this argument cuts both ways as described below.17

Climate change presents a classic example of the challenges associated with solving international coordination problems. Nations have discussed agreements to limit carbon emissions for around three decades. Despite this effort and modest wins in the form of the Kyoto Protocol and Paris Accords, only a handful of countries have legally binding goals to reduce their emissions, and global carbon emissions continue to increase.

On the other hand, we can point at successful efforts to limit emissions of ozone-depleting gases through the Montreal Protocol as a positive example of international coordination. The agreement took effect in 1989, causing dramatic reductions in the emissions of many ozone-destroying CFC gases, and consequently the ozone hole over the Antarctic is slowly shrinking.18 However, it is arguably much easier to reduce emissions of CFCs than CO2, the greenhouse gas contributing most to global warming.19

The agreements to limit emissions of CFCs and CO2 described above involved almost every nation in the world coordinating through the UN. Additionally, within nations, almost everyone emits CO2. In general, it is harder to reach agreement between all parties when there are more parties involved. Coordinating on advanced AI systems might be easier if only a few actors have the capabilities to develop or deploy the technology.20

The development of advanced AI systems could be difficult to coordinate on, because the benefits of being ahead in its development could be substantial. On the other hand AI might be easier to coordinate on than climate change if the consequences of failing to coordinate on AI looked likely to occur sooner than the worst impacts of climate change,21 or if the worst-case scenarios from developing advanced AI systems could be even worse than climate change.22

If we are unable to make progress on the international coordination issues around advanced AI systems, then this career path becomes less tractable and less appealing. However, note that this argument cuts both ways, and working on this problem is also lower impact if the problem is likely to solve itself. Additionally, if a coordinated solution is necessary to avoid the worst potential risks from advanced AI, then working on the coordination problem becomes more important. From a consequentialist perspective, your contribution will be most impactful if the problem is precisely hard enough that your contribution makes a difference to whether it is solved.

Will you be able to influence any critical junctures associated with the development of AI?

In political science, a critical juncture is the point at which a path dependency is created. Critical junctures could occur in the development and deployment of advanced AI systems for a variety of reasons detailed in this footnote.23

If we are looking for opportunities to have a large positive impact on the development of AI over the long term, then critical junctures look particularly relevant.24 Critical junctures seem important because they are moments when, for example, we might lock ourselves out of development trajectories that are particularly good or into trajectories that are particularly bad. However, there is no guarantee that critical junctures will exist in the development of advanced AI systems,25 and even if there are, we may not be able to affect them.26

Critical junctures seem most plausible to us when advanced AI systems of interest constitute AGI, and the development of AGI is discontinuous or unexpected. This suggests that you might want to focus your efforts more on discontinuous AGI scenarios because these are the scenarios in which you might be able to have the most positive impact.27

In addition to the risk of having little impact, there are other impact-based reasons against pursuing a US AI policy career now.

It is probably better to initially work in AI policy at a top AI lab if you can

We would generally recommend taking an AI policy role at Google DeepMind, Anthropic, Microsoft, Meta, Intel, or OpenAI over a similarly senior role in DC for the following reasons:

  • In the immediate term, leading AI labs are likely to be more relevant to the development of AI than the US government.
  • There are fewer people in senior roles at leading AI labs, so a given role likely has more impact over the development of AI.
  • We think it will likely be relatively easy to move into the US government having worked on AI policy within a top AI lab.
  • We think there is a reduced risk of needing to work on less important problems to advance your career in AI labs.
  • You are likely to develop more domain-specific knowledge about AI at a top AI lab than at almost any other organisation.

Note that these arguments point against working in government today, but they do not rule it out as an impactful place to go later in your career after working in a top AI lab, for example if the US government decides to take a more active role in the development and deployment of AI.

If you receive an offer for a policy role from a top AI lab then we recommend strongly considering it. Nonetheless, we think that roles focused more on US AI public policy are also likely to be highly impactful.

Other arguments against

  • Senior decision-makers in the US government need to be experts on a range of issues beyond AI. In order to gain the skills required to be promoted into more senior positions you will likely spend most of your time on issues that are not directly related to AI. Some of these skills will not be transferable to new contexts if you leave, meaning this route provides less flexible career capital. However, many of these things will likely be more relevant than one might think, e.g. helping you learn how to craft policy in different contexts, improving your political instincts, developing stakeholder engagement skills, learning how to make things happen in large bureaucracies, etc. Nonetheless, this may be an argument in favour of working in, for example, a top AI lab where you can focus your attention more on learning about AI and then later transition into a government role where you can also focus full time on AI.
  • Becoming a political appointee for one administration can hurt your ability to work for another administration of the opposing political party. As a result, it is worth thinking through your political allegiances relatively carefully before declaring them. You can also work as an apolitical civil servant — you only need to take on a political role when you have the opportunity to be appointed to the most senior positions.
  • There are also some downsides to this career that will affect your personal life, such as low salaries, which we cover below.

How to pursue a career in US AI public policy

We hope to discuss how to pursue AI public policy careers in more detail in future content. Here we briefly outline where to aim your career long-term, before listing seven initial routes into US AI public policy careers.

Where to aim long-term

Political appointees and the Senior Executive Service

Ultimately, we think that the roles that are most likely to have influence over US policy relevant to the development and societal integration of advanced AI systems are likely to be senior positions within the federal government and Congress.

Senior roles in the federal government broadly fall into two categories: political appointees and the Senior Executive Service (SES).

Political appointees are selected by the president, vice president, or agency heads. There are approximately 4,000 political appointees in an administration.28

The SES forms the level just below political appointees and serves as a link between the political appointees and the rest of the Federal (civil service) workforce. There are approximately 7,000 SES positions in an administration. Additionally, up to 10% of these SES roles may be occupied by political appointees.

Political appointees have a variety of backgrounds and are frequently drawn from industry, academia, think tanks, retired military officers, and former federal employees.

On the other hand, the SES is more typically made up of federal employees who have worked their way up within government.

Ultimately we think that political appointees are likely to have the most influence over US AI policy, though we think that the SES can also wield significant influence while also acting as a valuable stepping stone to political appointments.

Which parts of government are most relevant?

After talking with several people with knowledge of DC and AI, we put together this list of the parts of government that we guess are likely to be most relevant to US policy on the development and deployment of advanced AI systems:

Within Congress, you can either work directly for lawmakers themselves or as staff on a legislative committee. Staff roles on the committees are generally more influential on legislation and more prestigious, but for that reason, they’re more competitive. If you don’t have that much experience, you could start out in an entry-level job staffing a lawmaker and then later try to transition to staffing a committee.

Some people we’ve spoken to expect the following committees — and some of their subcommittees — in the House and Senate to be most impactful in the field of AI. You might aim to work on these committees or for lawmakers who have significant influence on these committees.

House of Representatives

  • House Committee on Energy and Commerce
  • House Judiciary Committee
  • House Committee on Science, Space, and Technology
  • House Committee on Appropriations
  • House Armed Services Committee
  • House Committee on Foreign Affairs
  • House Permanent Select Committee on Intelligence

Senate

  • Senate Committee on Commerce, Science, and Transportation
  • Senate Judiciary Committee
  • Senate Committee on Foreign Relations
  • Senate Committee on Homeland Security and Governmental Affairs
  • Senate Committee on Appropriations
  • Senate Committee on Armed Services
  • Senate Select Committee on Intelligence
  • Senate Committee on Energy and Natural Resources
  • Senate Committee on Banking, Housing, and Urban Affairs

The Congressional Research Service, a nonpartisan legislative agency, also offers opportunities to conduct research that can impact policy design across all subjects.

In general, we don’t recommend taking entry-level jobs within the executive branch because it’s very difficult to progress your career through the bureaucracy at this level. It’s better to get a law degree or relevant master’s degree, which will give you the opportunity to start with more seniority.

The influence of different agencies over AI regulation may shift over time, and there may even be an entirely new agency set up to regulate AI at some point, which could become highly influential. Whichever agency may be most influential in the future, it will be useful to have accrued career capital working effectively in government, creating a professional network, learning about day-to-day policy work, and deepening your knowledge of all things AI.

Here are some of the agencies that may have significant influence on at least one key dimension of AI policy as of this writing:

  • Executive Office of the President (EOP)
    • Office of Management and Budget (OMB)
    • National Security Council (NSC)
    • Office of Science and Technology Policy (OSTP)
  • Department of State
    • Office of the Special Envoy for Critical and Emerging Technology (S/TECH)
    • Bureau of Cyberspace and Digital Policy (CDP)
    • Bureau of Arms Control, Verification and Compliance (AVC)
    • Office of Emerging Security Challenges (ESC)
  • Department of Defense (DOD)
    • Chief Digital and Artificial Intelligence Office (CDAO)
    • Emerging Capabilities Policy Office
    • Defense Advanced Research Projects Agency (DARPA)
    • Defense Technology Security Administration (DTSA)
  • Intelligence Community (IC)
    • Intelligence Advanced Research Projects Activity (IARPA)
    • National Security Agency (NSA)
    • Science advisor roles within the various agencies that make up the intelligence community
  • Department of Commerce (DOC)
    • The Bureau of Industry and Security (BIS)
    • The National Institute of Standards and Technology (NIST)
    • CHIPS office
  • Department of Energy (DOE)
    • Artificial Intelligence and Technology Office (AITO)
    • Advanced Scientific Computing Research (ASCR) Program Office
  • National Science Foundation (NSF)
    • Directorate for Computer and Information Science and Engineering (CISE)
    • Directorate for Technology, Innovation and Partnerships (TIP)
  • Cybersecurity and Infrastructure Security Agency (CISA)

We do not currently recommend attempting to join the US government via the military if you are aiming for a career in AI policy. There are many levels of seniority to rise through and many people competing for places, and initially you would have to spend all of your time on work unrelated to AI. However, existing military experience can be valuable career capital for other important roles in government, particularly in national security. We would consider candidates who have been to an elite military academy, such as West Point, or who are commissioned officers at rank O-3 or above, to be more competitive for this route.

Policy fellowships are among the best entryways into policy work. They offer many benefits like first-hand policy experience, funding, training, mentoring, and networking. While many require an advanced degree, some are open to college graduates.

US think tanks

  • Center for Security and Emerging Technology (CSET)
  • Center for a New American Security
  • RAND Corporation
  • The MITRE Corporation
  • Brookings Institution
  • Carnegie Endowment for International Peace
  • Center for Strategic and International Studies (CSIS)
  • Federation of American Scientists (FAS)

Research nonprofits

  • Alignment Research Center
  • Open Philanthropy6
  • Institute for AI Policy and Strategy
  • Epoch AI
  • Centre for the Governance of AI (GovAI)
  • Center for AI Safety (CAIS)
  • AI Impacts
  • Johns Hopkins Applied Physics Lab

Key routes in

Here we rank the routes for establishing your career early on, in the order we currently think is most promising:

  1. If you can get a position working on policy or ML research at a top AI lab, such as Anthropic, OpenAI or Google DeepMind, then this could be a valuable opportunity to skill up before moving into government in the future.

  2. We recommend applying for a prestigious fellowship that comes with a job in the US government, such as the Presidential Management Fellows for recent graduates of advanced degrees, the Horizon Fellowship, the AAAS fellowship for people with science PhDs or engineering master’s, or the TechCongress fellowship for mid-career tech professionals. If you are a STEM graduate then also consider the NSIN Technology and National Security Fellowship.

  3. A postgraduate course where you can work primarily on policy questions relevant to advanced AI systems. This will give you a good grounding in many of the concepts relevant to AI policy in the context of advanced AI systems, as well as experience doing novel research and useful career capital.

  4. Work directly on AI policy through a DC think tank such as the Wilson Center, the Brookings Institution, the Center for a New American Security, or the Center for Security and Emerging Technology. These roles are usually very competitive. The Belfer Center at Harvard University, and CISAC and The Hoover Institution at Stanford University are also worth considering for their focus on the intersection of science, technology and international affairs, despite being outside DC.

  5. We also recommend interning and working on the Hill as a congressional staffer, with the aim of working your way onto relevant committee staff. We have a detailed career profile on this path. We particularly recommend working on the Hill as an intern, after or during a master’s degree, or after an undergrad degree if you have reasons for not wanting to do a master’s. We generally recommend working as a congressional staffer over a graduate-entry federal employee role, because we think you’ll learn more and get better connections.

  6. For many people we think one of the best routes into the US government is to do a prestigious master’s and then enter government as a federal employee. Master’s programs that we currently recommend doing include:

    • A JD from Yale or Harvard Law School, or possibly another top 6 law school.
    • Security studies, public policy, or international relations at one of the top 6 policy schools.
    • A master’s in machine learning at a top 10 CS department.29
    • Or possibly a master’s in another relevant subject, such as S&T policy, war studies, computer science, economics, or cybersecurity, though we often recommend law, security studies, public policy, international relations, and machine learning over these options.
  7. Become a Systems Engineering and Technical Assistance (SETA) contractor for an Advanced Research Projects Activity (ARPA). Becoming a SETA is a common stepping stone to becoming a programme manager (PM) at IARPA or DARPA, where PMs are frequently responsible for $10M+/yr emerging technology research projects.

The specifics of which option ranks where in this list are highly uncertain and will depend heavily on your background and personal fit, as well as specifics of the available role. We have highlighted a selection of promising roles that you can apply for now on our job board.

One piece of advice we’ve heard regularly from people with experience in government is to:

  1. Identify someone who is rising up the ranks quickly and has a decent chance of becoming the most senior government official in your domain of interest
  2. Work closely with them, win their trust, and become their mentee
  3. Make yourself indispensable to them

Government officials we interviewed thought that pursuing the above strategy well, especially if you get lucky with the person you pick as a mentor, can lead to having a larger positive impact faster than pretty much any other route.

Find opportunities on our job board

Our job board features opportunities in US policy:

    View all opportunities

Be careful to avoid doing harm

We think that working on US AI policy offers many opportunities for substantial impact; however, for the same reasons, there are also many opportunities to do substantial harm. We have a much longer article on this topic which we recommend reading.

One of the most robust ways of avoiding doing harm is to be thoughtful and honest. Scheming of any sort can be harmful, especially in more uncertain environments such as AI policy. This relates to a more general point that you shouldn’t be motivated by identity-based loyalties, but rather by a genuine desire to improve the long-run future.

There are more obvious ways of doing harm, such as pushing for bad policies, but below we address three other specific risks that are particularly salient in the context of US AI policy.

If you are associated with a niche community, such as the effective altruism community, then any negative actions that you take in government can end up reflecting poorly on the broader community and even harming others from that community who might want to work with government. For this reason we think it is particularly important to be trustworthy, to have integrity, and to be very careful about inadvertently or intentionally burning community capital. This article discusses this issue in more detail.

Information security is important, and holding a security clearance and being trusted by stakeholders, such as top AI labs, requires being extremely careful with confidential information. You will need to track where each piece of confidential information you receive came from, and not reveal it to people outside its intended audience. This skill is important enough that it is worth practising now.

Finally, there is a risk that some types of government involvement in AI could exacerbate a race to the bottom on AI safety, as we’ve discussed. If this were true, then you would want to be careful to avoid inadvertently increasing the chances of such a race.

Fit: Who is especially well placed to pursue this career?

In short, we recommend that US citizens interested in building experience and technical expertise in AI policy consider this option. If you can get a policy job at a top AI lab, then we recommend doing that instead and potentially transitioning into US policymaking in the future. If you have (or can get) a top 10 computer science PhD, then we recommend considering working on a safety team at a top lab.

What predicts fit?

Below, we outline some of the key features of this work and explain what they suggest about personal fit.

Have expertise in an area relevant to AI policy

People who are a good fit for US AI policy careers will have developed — or be developing — expertise in a topic relevant to AI policy, so that they can make valuable contributions to the field. At this stage we are not sure which areas will be most valuable to develop expertise in, but some plausible options include international relations, security studies, public policy, machine learning, AI safety, economics, S&T policy, game theory, law, diplomacy, industrial integrated circuit design and manufacturing, and cybersecurity.

Many roles in US AI public policy careers require excelling at research-related activities. Research activities generally require a combination of intelligence, conscientiousness, and curiosity, among other traits. For a longer discussion of personal fit in research roles, see our career profile on academic research.

    The importance of relationships

As Ed Feulner, founder and former president of the Heritage Foundation, says, 'people are policy.' He argues that your ability to make things happen is determined at least as much by who you know as by what you know. Successful policy practitioners tend to have extensive networks of friends and acquaintances in the DC community whom they can call on when a relevant issue comes up.

    One US government employee told us that the ability to appear extroverted when first meeting someone was key to building a strong network. An academic mentioned how people in DC are constantly looking for ways to help each other and add value, much more so than in other communities. This person argued that policymaking requires buy-in from so many stakeholders that cooperation is key to getting things done, probably even more than in other industries.

    We have some practical advice on building and maintaining a strong network of relationships with others.

    Conventional credentials matter

Perhaps because the US government and policymaking world is so large, people in the policy community frequently have to make snap judgements about you. As a result, the DC world tends to rely heavily on conventional credentials, potentially more than other industries do. This is especially true when policymakers have to make judgements about people their connections are working with but whom they have never met. People from a few courses at a handful of elite universities are heavily overrepresented in the upper echelons of the US government.30

    As a result, if you have already graduated from a prestigious school, then you may be at a natural advantage. If you haven’t, then doing a top law, policy, international relations, or machine learning master’s can be a useful first step before entering government. Even if you don’t have the right credentials now, it is quite possible to get them either before or while working within the US government.

    For similar reasons, federal employees tend to stick to relatively conventional styles of dress and behaviour. If you are not willing to dress in a suit every day, then this might not be a good career option for you (and likewise with other unconventional lifestyle choices).

    Governments are not nimble

    You should be comfortable working in large organisations that have inflexible bureaucratic processes.

It can be frustrating to work within a system as politically sensitive and bureaucratic as the US policymaking process. This is particularly noticeable in comparison with, say, a start-up.

In US policymaking, you can occasionally have substantial influence over major decisions. However, there are so many stakeholders involved in any decision that, most of the time, your work will not have much effect on the outcome. This is one of the ways in which US AI public policy is a high-risk, high-reward career path.

Former President Barack Obama compares the US government to a supertanker: any steering you do changes its direction only slightly, but sustained steering can eventually leave the tanker in a completely different location.31

    Good judgement and calibration

    Good judgement is important for pursuing all of our priority paths, but we are including it here to emphasise its importance to this career path as well.

    One of the key ways of contributing to AI policy is likely to be using foresight to spot possible policy traps and path dependencies before they get locked in, which is often before they become overt. Doing this well requires judgement, good calibration, and extensive domain knowledge.

    Other traits

• You will often need to work closely with people who don't share your views. You will sometimes need to be willing to work on policies you don't care about, or even actively disagree with. You may occasionally find yourself responsible for executing a plan, created by someone more senior, that you disagree with.
    • You often need to meet strict deadlines.
    • A couple of people we’ve spoken with emphasised the importance of communication, especially being able to translate between technical and non-technical language. Very few decision makers in the US government have technical backgrounds, and so being able to explain technical topics in layman’s terms is a valuable skill.
    • In conversations on this topic, multiple people mentioned how government employees frequently have to work on a wide range of issues, most of which will be outside their area of expertise. This is especially true in small teams that have to cover a wide range of issues, such as a congressperson’s staff. We were told stories about people with science or technology backgrounds becoming responsible for important issues outside their area of expertise, simply because others frequently assumed that they were the most knowledgeable person in the room on any science or technology issue.
• Additionally, you need to be comfortable taking a risk on the impact you will have over your career. There may not be any critical junctures relevant to AI during your career. Even if there are, there may not be opportunities to impact them. As a result, this path carries a significant risk of having little impact. Nonetheless, in expected value terms, we think work to help make the deployment and societal integration of advanced AI systems go smoothly is still extremely high impact.32 The sketch just below illustrates this style of reasoning.
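To make the expected value framing concrete, here is a minimal sketch in Python. All of the probabilities and impact scores are invented placeholders for illustration, not estimates from this article.

```python
# A toy expected-value comparison. The numbers are hypothetical placeholders,
# not real estimates of any career's impact.

def expected_value(outcomes):
    """Sum of probability-weighted impacts over (probability, impact) pairs."""
    return sum(p * impact for p, impact in outcomes)

# A reliable path: almost always a modest impact.
safe_path = [(0.9, 10), (0.1, 0)]

# A high-risk policy path: usually little effect, with a small chance of
# influencing a critical decision with very large stakes.
policy_path = [(0.90, 0), (0.09, 50), (0.01, 5000)]

print(expected_value(safe_path))    # 9.0
print(expected_value(policy_path))  # 54.5
```

The point is simply that a small probability of a very large impact can dominate the expected value, even when the most likely outcome is having little effect.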

    Other personal fit considerations

    • US government jobs have substantially lower pay than corporate options.33
    • More senior roles often have very long hours, and people who want to advance quickly often work up to 80 hours per week.
• Most jobs are located in Washington, DC, so if you don't already live there, you'll probably need to move to get a good job.
• If you are male, you will need to have signed up for the Selective Service System when you were 18 to be eligible for some roles, such as the AAAS and PMF fellowships.
• For many national security roles, you will need to be able to obtain a security clearance. Recreational drug use in the five years before you apply, dual citizenship, and a large number of close friendships with foreign citizens (especially citizens of countries considered hostile to the US) can make this more difficult. To make the process easier, keep records of your foreign travel dates and your addresses for the past 10 years, along with the name of at least one person who can confirm that you lived at each of those addresses. More information is available.

    Which paths could be even better?

US AI policy roles have the potential to be highly impactful and span a wide range of activities, from research to operations to stakeholder engagement. As such, there are roles within this domain for people with many different skill sets. Nonetheless, for some people we think there are likely to be even better options.

    • As we have already mentioned, if you can get a job at a top AI lab working on AI policy, then that could potentially be better than working more directly on US AI public policy at this point.
• If you can get into a top 10 computer science PhD programme to work on machine learning, then you should consider AI safety engineering or AI safety research. That said, people with both technical expertise and the skills to navigate a government role could be very valuable and effective in the policy world.
• If you are not a US citizen, then you will want to consider a wider range of options. It is probably still worth considering US AI policy, especially if you have a chance of becoming a naturalised US citizen in the future. It is also worth considering other AI policy options, such as working on AI in academia, in a think tank, within the UK civil service, or as a specialist at the intersection of China and AI, as well as our other priority paths.

    Conclusion

We've argued that the US government is likely to be a key actor in the development and deployment of advanced AI systems. One of the main mechanisms that could lead to negative outcomes from deploying advanced AI systems is a race to the bottom on AI safety, and governments could either contribute to an environment leading to such a race or actively prevent one. As a result, we argued that US AI public policy could be one of the most important issues this century.

We gave some arguments for and against working on US AI policy. Overall, we argued that if you have the following traits, then you may have a decent shot at becoming a senior AI policymaker, and a career in US AI policy may be one of the most impactful things you could do:

    • Are thoughtful and have good judgement
• Are an American citizen (though similar roles can be good options for other nationalities)
    • Have or are interested in developing technical expertise in a field relevant to AI policy
    • Are good at building and maintaining relationships
    • Are comfortable interfacing with a large bureaucracy
• Have or can get a JD or a master's in policy, international relations, or machine learning from a top school

    If you are interested in pursuing a US AI public policy career path, or are interested in learning more, then we would like to hear from you below.

    American interested in working on AI policy?

    We’ve helped many people transition into policy careers. We can offer introductions to people and potential funding opportunities, and we can help answer specific questions you might have.

    If you are a US citizen interested in building expertise to work on US AI policy, apply for our free 1-1 advising.

    Apply for advising

    Learn more

    Getting started in this career

    More about AI policy

    Notes and references

    1. Thanks to Ben Buchanan, Jack Clark, Allan Dafoe, Carrick Flynn, Ben Garfinkel, Keiran Harris, Roxanne Heston, Howie Lempel, Terah Lyons, Brenton Mayer, Luke Muehlhauser, Michael Page, Nicole Ross, Benjamin Todd, Helen Toner, Rob Wiblin, Claire Zabel, Remco Zwetsloot, and Cody Fenwick for useful conversations and comments on this article. Thanks also to Richard Batty, Nick Beckstead, Miles Brundage, Owen Cotton-Barratt, Tom Kalil, Jade Leung, Alex Mann, Carl Shulman, and many current and former US government employees and researchers at AI labs for conversations that helped shape this article. This article does not represent any of these people’s views and all mistakes are our own.

    2. This framing is taken from Page, M, et al. ‘Advanced AI as a global collective action problem,’ unpublished.

    3. See this paper by Amanda Askell for more detailed discussion of this issue.

    4. As Ben Buchanan points out, it is not enough to commit to not racing. One must actually convince the other side that it is not racing, and one’s actions must not be misperceived. In international security this is difficult to do, and history is rife with examples of misperception.

    5. This definition is taken from the OpenAI Charter.

    6. Open Philanthropy is 80,000 Hours’s largest funder.

7. Thinking of AI policy as a coordination problem, we can imagine that outcomes might be bimodal, with two distinct equilibria (or basins of attraction) depending on actions taken over the coming decades. If this is the case, then we should be very concerned about the location of tipping points between good and bad outcomes.
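As a hedged illustration of what two basins of attraction could look like, here is a toy two-player 'safety coordination' game in Python. The game and its payoffs are our own stylised assumptions rather than something from the literature cited here: each actor chooses to invest in safety or to race, and each does best by matching the other's choice. The sketch enumerates the pure-strategy Nash equilibria.

```python
from itertools import product

ACTIONS = ["safe", "race"]

# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff).
# Illustrative numbers only: mutual safety beats mutual racing, but racing
# is the best reply to an opponent who races.
PAYOFFS = {
    ("safe", "safe"): (10, 10),
    ("safe", "race"): (0, 6),
    ("race", "safe"): (6, 0),
    ("race", "race"): (3, 3),
}

def row_payoff(row, col):
    return PAYOFFS[(row, col)][0]

def col_payoff(row, col):
    return PAYOFFS[(row, col)][1]

def is_equilibrium(row, col):
    """Neither player can gain by unilaterally switching actions."""
    row_ok = all(row_payoff(alt, col) <= row_payoff(row, col) for alt in ACTIONS)
    col_ok = all(col_payoff(row, alt) <= col_payoff(row, col) for alt in ACTIONS)
    return row_ok and col_ok

print([cell for cell in product(ACTIONS, ACTIONS) if is_equilibrium(*cell)])
# [('safe', 'safe'), ('race', 'race')]: two stable outcomes, one good, one bad
```

In a game like this, which equilibrium the world settles into depends on expectations and early moves, which is why the location of tipping points matters so much.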

    8. See, for example, the Security Dilemma or more generally vicious and virtuous cycles.

    9. See Garfinkel, B. “The impact of artificial intelligence.”

10. Scharre, P., 'Army of None,' Tantor (2018)

    11. This would actually be an argument in favour if you were worse than the typical federal employee; however, if you are likely to be worse than the typical federal employee then we would recommend against entering government in most roles!

12. A couple of examples of this phenomenon are given here: 'The story of a lucky economist

13. Since this article was first published, this number has increased substantially, to $1.7 trillion in 2022.

14. For instance, we estimated that graduates of Harvard and Yale law schools in the US had between a 1 in 8 and a 1 in 12 chance of successfully being elected to Congress if they tried. Other elite law schools, such as those at Georgetown and the University of Texas, had odds that were not much worse. Similarly, we estimated that congressional staffers have perhaps a 1 in 40 chance of being elected to Congress if they tried.

We estimated similarly strong odds for Oxford University PPE students who wanted to be elected to the British Parliament, and even better odds for former presidents of Oxford University's two 'unions'. While the British estimates obviously don't apply to the US context, we see them as additional evidence that there are pockets of society with strong chances of making it into influential positions in government. If you are in one of these pockets, or if you can get into them, then we think you have a decent chance of making it into an influential role.

    15. Different industries select for different traits, and we think that promotion in government may select more for social intelligence than other industries.

    16. A few articles that have been published on the coordination problem caused by advanced AI systems include:

* Armstrong, S., Bostrom, N., and Shulman, C., 'Racing to the precipice: a model of artificial intelligence development,' AI & Society (2016)
* Bostrom, N., Dafoe, A., and Flynn, C., 'Public policy and superintelligent AI: a vector field approach,' in Ethics of Artificial Intelligence, Oxford University Press (2020)
* Cave, S. and Ó hÉigeartaigh, S., 'An AI Race for Strategic Advantage: Rhetoric and Risks,' AAAI (2018)
* Dafoe, A., 'AI Governance: A Research Agenda' (2018)
* Danzig, R., 'Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority' (2018)
* Meserole, C., 'Artificial intelligence and the security dilemma,' Lawfare (2018)

    17. Ben Garfinkel points out that if you define collective action problems broadly as cases where parties fail to achieve Pareto optimal outcomes, then nearly all international security issues are examples. The world wars are particularly salient examples from the past century, where the (in principle unnecessary) harms were nearly existential for the participating countries.

    18. And other ozone-depleting substances controlled by the protocol.

    19. Processes producing CFCs as a waste gas contributed less to global GDP than the industrial sectors producing CO2 as a waste gas. Additionally, the cost of switching over from CFCs to alternative gases that were less polluting was far cheaper than the cost of phasing out carbon emissions.

    20. It is difficult to know how many actors are likely to be involved in the development of advanced AI systems. Costs of training AI systems have been increasing rapidly as they require more compute. If this trend continues, then this line of argument gestures at a possible future in which only the best-resourced governments and companies can afford to train the most advanced and largest deep learning systems.

    21. This statement assumes that we will get good leading indicators on advanced AI systems and possible coordination failures, which seems plausible but not certain.

    22. Global temperatures are projected (see ch. 5) to peak late this century at the earliest and most likely in a future century. Experts expect the full automation of all human jobs to occur sometime later this century or beyond, with a median estimate of this occurring in 122 years.

    23. Ben Garfinkel gives some ways in which critical junctures could occur during the development and adoption of advanced AI systems, including:

      * The risk of irreversible harm to civilization from wars partly caused by tensions around AI development
      * The possibility that there will be a fleeting and non-repeated opportunity to establish a more unitary or generally well-coordinated international order to deal with or pre-empt various issues caused by AI
      * The possibility that certain developments around AI will enable the ‘lock-in’ of particular political institutions, maybe most saliently robust totalitarianism, without it being inevitable which ones will lock in
      * The possibility that there will be a big ‘winner’ of the AI race, who will have disproportionate influence over other critical juncture-ish choices (like how to handle the development/deployment of other important technologies, how to initiate space colonisation, etc.)

      More generally, we could see critical junctures as a result of lock-in, positive feedback, increasing returns (the more a choice is made, the bigger its benefits), self-reinforcement (which creates forces sustaining the decision), and reactive sequences (a series of events triggering one another in a rapid near-unstoppable sequence like dominoes falling).
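One standard toy model of lock-in through increasing returns is the Pólya urn, in which each random draw reinforces itself. The sketch below is our own illustration of that textbook model, not a model proposed in the sources above.

```python
import random

def polya_urn(steps=10_000, seed=None):
    """Start with one red and one blue ball; each draw adds another ball of
    the drawn colour. Early chance events get amplified, and the red share
    settles near a run-specific value instead of washing out."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Five runs, five different long-run shares: early events "lock in."
print([round(polya_urn(seed=s), 2) for s in range(5)])
```

Increasing returns in policy or technology adoption behave analogously: the more a choice is made, the stronger the forces sustaining it, so early decisions can cast long shadows.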

    24. For a more detailed discussion of this topic, see pg. 6 of Beckstead, N. ‘On the overwhelming importance of shaping the far future‘, PhD thesis, Rutgers (2013)

    25. There could be no critical junctures because development and adoption of AI could be fairly gradual without any substantive discontinuities. Paul Christiano discusses a number of topics relevant to the likelihood of this scenario on the 80,000 Hours podcast.

Critical junctures could also fail to materialise if the long-run impacts of the development of AI are not particularly path dependent. To motivate this point, consider the analogy of the industrial revolution, which gave some countries in the West an economic and military advantage that has lasted for a couple of centuries. At first glance this looks like a path dependency; however, the West's economic and military lead is now eroding, and it seems plausible that major economies in the East will catch up in the future, at least in the absence of advanced AI systems. This example illustrates how the advantage that technological progress confers on one actor or group of actors can erode over time, and how, with the benefit of hindsight, apparent path dependencies or critical junctures may matter less than originally thought.

26. If critical junctures exist in history, in which a person could take path-dependent actions that steer humanity onto a better or worse trajectory, then the industrial revolution is a plausible candidate. And yet, without the benefit of hindsight, it seems plausible that a smart, motivated person would have struggled to make the industrial revolution go better or worse from a long-term perspective. This motivates the intuition that it may be difficult to affect critical junctures even when they do occur. Nonetheless, critical junctures seem like particularly important opportunities to improve the long-run future, so they are worth paying attention to in case you can affect them.

    27. Toby Ord and Owen Cotton-Barratt discuss this phenomenon in more detail in the general case here and here. One note of caution, though, is that if very few people are working on an issue, then people should coordinate their efforts and shouldn’t all focus on discontinuous scenarios.

    28. Of course, the US government is vast, and very few of these roles would be at all relevant to AI.

    29. Ideally at a school with strong links to government, such as CMU, Harvard, or MIT. We are currently uncertain about how to rank this option relative to the others in this list. It also has a wider range of options of things to do afterwards. So if you are not confident that you want to work on AI policy, then this looks like a particularly good option.

    30. Rothkopf, D., ch. 1 of ‘Running the World‘, Public Affairs, NY

31. Skidmore, D., 'The Obama Presidency and US Foreign Policy: Where's the Multilateralism?' International Studies Perspectives (2012), doi:10.1111/j.1528-3585.2011

    32. See chapter 6 of 80,000 Hours co-founder William MacAskill’s book Doing Good Better for more discussion of the importance of expected value when assessing social impact.

    33. We have a discussion of congressional staffer salaries.