Table of Contents
- 1 Why we wrote this
- 2 What is an AI policy career?
- 3 Concrete questions in AI policy we need to answer
- 4 What are the roles you want to aim for?
- 5 How do you position yourself to get into those roles?
- 6 Resources
- 7 Acknowledgments
- 8 Appendix I
- 9 Appendix II
Why we wrote this
80,000 Hours’ research suggests that one of the highest-impact opportunities to improve the world may be by positively shaping the development of artificial intelligence. The issue is large in scale and neglected by most people. Recent experience suggests it’s possible to make steady progress in reducing the risks. See our profile of the problem for further explanation of why we believe this.
The last few years have seen dramatic growth in the number of people doing technical research to figure out how we can safely program an artificial general intelligence, and we have a guide for people who are considering doing this kind of work.
There is another topic that is just as important and has become relatively more neglected: improving AI policy and strategy. This includes questions like: how can we avoid a dangerous arms race to develop powerful AI systems? How can the benefits of advanced AI systems be widely distributed? And how open should AI research be? If we handle these issues badly, it could lead to disaster, even if we can solve the technical challenges associated with controlling a machine intelligence.
We need answers to AI policy and strategy questions urgently because i) implementing solutions could take a long time, ii) some questions are better addressed while AI is less advanced and fewer views/interests on the topic are locked-in, and iii) we don’t know when particular AI capabilities will be developed, and can’t rule out the possibility of surprisingly sudden advances.
As a result, for the right person, work on these issues is among the most promising ways to make a contribution to the world today.
If technical or policy AI work is on your shortlist of possible career options, you should let us know.
We can provide personalised advice and connect you with world class opportunities, including vacancies in AI policy that aren’t yet being publicly advertised.
To complement this article we also have a number of in-depth interviews with people actively working on AI policy and strategy on what the work is like, and how you can join them:
- The world desperately needs AI strategists. Here’s how to become one. (Discusses this article.)
- Prof Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics.
- The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.
This 30 minute video by Yale Professor Allan Dafoe also covers some introductory material in this guide.
What is an AI policy career?
AI policy is the analysis and practice of societal decision-making about AI (note that we say ‘societal’ rather than governmental, since many decision-makers are involved in making ‘policy,’ broadly construed). The term AI strategy is sometimes used to refer to the study of big picture AI policy questions, such as whether we should want AI to be narrowly or widely distributed and which research problems ought to be prioritized. Below, we mostly include such strategic questions under the umbrella of “long-term AI policy.” References at the end of the document provide good introductions to the range of issues falling under the AI policy and AI strategy umbrellas.
Short-term vs. long-term issues
When discussing AI policy it is sometimes useful to distinguish between short-term and long-term AI policy.
Short-term AI policy refers to issues society is grappling with today. These include liability issues with driverless cars, relatively small-scale worker displacement, algorithmic bias, and increasingly automated surveillance.
Long-term AI policy refers to issues that either only arise at all, or arise to a much greater extent, when AI is much more advanced than it is today, and in particular if advances turn out to be rapid. These issues, such as AI safety risks or geopolitical instability related to human-level or superintelligent AI, could have very long-lasting consequences.
This distinction is not very solid: it may turn out, for example, that issues referred to here as long-term issues will arise sooner than many expect, if we’re overestimating how long it will take to get to advanced AI, a question about which AI experts differ greatly in their expectations. And perhaps the most severe economic issues associated with AI will take longer to materialize than the safety issues.
But short-/long-term is a useful preliminary distinction reflecting the common view that human-level or superintelligent AI is unlikely in the very near term, but that if it occurs it will have transformative, long-lasting societal implications.
Most policy decisions related to near-term narrow AI systems are unlikely to have extremely long-lasting implications in and of themselves, but working on such issues may be valuable for gaining experience to subsequently work on policy issues related to more powerful AI systems, and some of the same policy frameworks and tools may be applicable to both. One can also be more sure that a problem is real, and that one is having a beneficial impact, if one works on already existing problems.
On the other hand, long-term policy and strategy as defined above are probably more important over the long term, perhaps especially if progress in developing AI systems occurs quickly. This area is less well developed conceptually and practically at present, and the stakes are much larger, so we expect significantly higher benefits from making contributions there.
We’d estimate that less than 10% of work on AI policy is specifically concerned with issues related to highly capable AI systems that might be developed in the future, whereas there is a larger and more rapidly growing body of work focused on more near-term issues such as driverless car policy and drone policy. This makes the former significantly more neglected.
Concrete questions in AI policy we need to answer
To give you a flavour of the kinds of issues you could deal with while working on AI policy, consider some of the following questions.
What capabilities might AIs develop?
What capabilities might AI systems one day have, and what would be some possible social consequences? For example, what would happen if an AI system could:
- Analyze physics papers and propose new experiments?
- Do the same for other fields, including AI development?
- Analyze social science papers, Internet content, and/or surveillance data and make fine-grained predictions about human behavior in various situations?
- Generate language and images designed to persuade particular sets of people of particular propositions?
- Analyze literature from a variety of fields and sources in order to generate designs for novel materials, technologies, or software?
- Form plans that involve a combination of the above?
What are the implications for computer security?
- How likely is it that AI systems will make it possible to cheaply, efficiently and reliably find vulnerabilities in computer systems with more skill than humans? What kinds of indicators might provide updates on this front?
- What measures could state or non-state actors take to prevent this coming about, and/or mitigate potential negative effects?
- Is it possible that AI systems might prove useful in improving defenses against hacking, on a similar timeline to proving useful for carrying out hacking? To the extent that there is a “computing power arms race” between hackers and institutions with sensitive data, are there cases where the latter ought to be subsidized – or regulated – to increase the computing resources they have available?
What impacts will autonomous weapons have?
- What types of autonomous weapons might be developed in the near-, medium-, and long-term? What kinds of indicators might provide updates on this front?
- What are the likely applications of autonomous weapons of different types? For example, what kinds of autonomous weapons are most likely to be applied as air support, as part of house-to-house combat, in reconnaissance, in surveillance of occupied territory, in assassinations or other covert operations, etc.?
- Taking into account likely types and applications of autonomous weapons, what are the likely effects on global peace and security?
- How do the potential upsides of more capable autonomous weapons (higher precision, fewer civilian deaths, reduced need for human soldiers) compare with potential downsides (lower bar for entering combat, potential geopolitical destabilization e.g. via arms races, risk of accidents or misuse)?
- How likely is the development of relatively cheap autonomous weapons that are accessible and damaging in the hands of private citizens? What sorts of regulatory actions might be desirable to head off this sort of possibility, and when would such actions be appropriate? What indicators of AI progress might indicate the appropriateness of such actions?
- What relevant lessons can be learned from the development of other new weapons technologies (e.g. nuclear weapons, chemical weapons, landmines), and the international governance responses to them?
- What plausible paths exist towards limiting or halting the development and/or deployment of autonomous weapons? Is limiting development desirable on the whole? Does it carry too much risk of pushing development underground or toward less socially-responsible parties?
Campaign to Stop Killer Robots, April 2013
How can safe deployment of broad-scope AI systems be ensured?
An AI system with very broad scope, capable of creative long-term reasoning, could take actions that are very difficult to predict. The value alignment problem is one example – a poorly specified “goal” implemented by a powerful AI system could constitute a global catastrophic risk.
Even if values are “aligned” between an AI system and its user, it could be the case that running an AI system with a particular goal could lead to illegal actions, in ways that are not easy to anticipate for people who haven’t carefully inspected the system – or even for those who have. For example, an AI system with a goal of “making money” might turn out to break the law in attempting to do so, even though it wouldn’t be obvious that this was going to happen while the AI system was still in development and testing.
With these considerations in mind, important questions arise about the cooperation and conflict in the development and deployment of such potentially dangerous systems, such as:
- How can a cooperative spirit be encouraged between relevant actors in the development and deployment of safe AI?
- How can international competition and cooperation over AI be formulated and analyzed from a game theory perspective?
- Would it be useful to have prior agreement between parties about what constitutes a potentially risky “deployment” of an AI system, and under what conditions such deployment should be considered legitimate?
- What sorts of agreements between parties would be enforceable?
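One way to make the game-theory question above concrete is to model a two-lab deployment race as a one-shot game and search for pure-strategy Nash equilibria. This is only an illustrative sketch: the strategy labels and payoff numbers below are assumptions chosen to exhibit a prisoner's-dilemma structure, not estimates of real incentives.

```python
from itertools import product

# Hypothetical two-lab "AI deployment race". Each lab chooses to deploy
# cautiously (investing in safety) or to rush. Payoff tuples are
# (row_lab_utility, col_lab_utility); the numbers are illustrative only.
PAYOFFS = {
    ("cautious", "cautious"): (3, 3),  # both invest in safety: best joint outcome
    ("cautious", "rush"):     (0, 4),  # the rushing lab wins the race
    ("rush",     "cautious"): (4, 0),
    ("rush",     "rush"):     (1, 1),  # race dynamics erode safety for both
}

def pure_nash_equilibria(payoffs):
    """Return profiles where neither lab gains by deviating unilaterally."""
    strategies = ["cautious", "rush"]
    equilibria = []
    for a, b in product(strategies, repeat=2):
        u_a, u_b = payoffs[(a, b)]
        # A profile is stable if no unilateral deviation improves a lab's payoff.
        a_stable = all(payoffs[(alt, b)][0] <= u_a for alt in strategies)
        b_stable = all(payoffs[(a, alt)][1] <= u_b for alt in strategies)
        if a_stable and b_stable:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # → [('rush', 'rush')]
```

Under these assumed payoffs, the only equilibrium is mutual rushing even though mutual caution is better for both labs – the classic coordination failure that motivates the questions above about agreements and enforcement.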
Some further questions along these lines are listed in an appendix.
What are the roles you want to aim for?
A key question to ask yourself is where you think you will likely have the largest impact on the most important AI policy issues. The four main roles could be roughly summarized as direct research, working within governments & industries, advocacy, and helping with recruitment.
You might be interested in doing direct research on AI policy: analyzing the strategic considerations involved in AI over the short and long term, evaluating proposed solutions, and suggesting new ones. You might instead be interested in working in government – gathering input from various stakeholders, negotiating what’s feasible with your colleagues in the rest of government, and directly implementing solutions. You might want to do some of the things we just mentioned, but in industry instead of government. You could also want to work in advocacy – helping amplify the voices advocating good solutions, and building broader awareness of the importance of particular problems. Or you might want to work in recruitment – helping walk people through these options, connecting job seekers and employers, and convincing people about the importance of AI policy or particular issues within it.
There is probably more long-term value to be created at the moment from direct work on strategic analysis and the development of feasible solutions to implement, rather than advocacy, given the limited set of good proposals currently available to promote. However, over time advocacy may become critical if many stakeholders need to be involved in the preferred solution. Different skills are applicable to these different ways of problem-solving, though there are some cross-cutting skills like knowledge of AI, writing abilities, and comfort with networking. For now, we recommend that those with research experience strongly consider doing direct work on the sorts of research questions discussed above, but there are many potential areas for people with different skills to contribute.
The section below provides some examples of what one could do in AI policy. It particularly focuses on two broad areas, research and practice. Advocacy and helping with recruitment are indirectly discussed in 80,000 Hours’ profile on promoting effective altruism.
AI policy researcher
Broadly, this career path involves being an expert in and pushing the frontier of AI policy considerations and interventions. Before giving examples of specific research areas and employers one might be interested in, we note that one can do this in government, industry, academia, or non-profits. Governments are typically not on the cutting edge of developing fundamentally new policy ideas, at least for long-term AI policy, but this may change as the security and economic relevance of AI is increasingly appreciated. Note that one person can straddle the boundary between research and practice – contributing some ideas while also implementing some solutions.
Short-term Policy Research Options.
If one is interested in doing short-term policy research, for example, to have a direct impact or to build up expertise and credibility in the space, one could work in a variety of institutions. We think short-term focussed work is less likely to be highly impactful since it has received more attention to date, but for many people it will be a necessary or useful step in order to build the skills and credibility necessary to work on more pressing questions. One could also leverage one’s position to build bridges between groups working on short and long term concerns, and exploit the synergies that exist.
There isn’t one clear frontrunner in terms of institutions or sectors where you could have the biggest impact, since the issues you’re interested in and knowledgeable about, the details of the position, your nation of citizenship (if looking for a government job), and other factors also play a role. One aiming to work in AI policy should thus stay attuned to the changing landscape of AI, on which we provide some tips below. For example, OpenAI didn’t exist two years ago, but is now an important player in AI, and China’s national profile in AI is also rising quickly.
Examples of relevant institutions where such research is conducted or will soon be conducted, and where one might pursue opportunities, are:
- The University of Oxford, where a lot of analysis of the future of work is happening;
- Various think tanks such as the Brookings Institution, the Center for a New American Security, RAND, Data & Society, Amnesty International, and the Alan Turing Institute;
- Consulting firms such as McKinsey;
- Companies and non-profits like Google, DeepMind, Microsoft, OpenAI, Baidu, Tencent, and Amazon;
- The Federal Aviation Administration for drone issues;
- Trade associations such as IEEE (especially their Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems), AAAI, and the Partnership on AI.
One could also do academic research at any university, though it helps to be somewhere with enough people working on related issues to form a critical mass. Examples of universities with this sort of critical mass include the University of Oxford, University of Cambridge, UC Berkeley, MIT, the University of Washington, and Stanford. If you think you could get a job at more than one of these places, we recommend that you aim for an opening where you will be working in close proximity to technical experts in order to calibrate your research, and in which you will have some flexibility to define your research aims and influence the direction of the relevant organization (vs. slotting into a narrowly defined role).
Long-Term Policy and Strategy Research Options.
For longer term policy and strategy research, there are not as many places where serious work is ongoing or might happen in the near future. The following is a nearly comprehensive list:
- The Future of Humanity Institute at Oxford University was founded by Prof Nick Bostrom, author of Superintelligence. The Global Politics of AI Research Group at FHI/Yale is interested in hiring Researchers, Research Assistants, and Interns in the area of AI Strategy and Policy, with an emphasis on long-term strategic issues. See current vacancies and subscribe to get notified of new job openings. You can also send a CV and short statement of interest to [email protected] at any time.
- The Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, both at Cambridge University, house academics studying both technical and strategic questions related to AI safety. See current vacancies and subscribe to get notified of new job openings.
- Alphabet’s DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically. See current vacancies and subscribe to get notified of new job openings. Google Brain is another deep learning research project at Google. See current vacancies and subscribe to get notified of new job openings.
- OpenAI was founded in 2015, and aims to “build safe AGI [artificial general intelligence], and ensure AGI’s benefits are as widely and evenly distributed as possible.” It has received $1 billion in funding commitments from the technology community. See current vacancies and subscribe to get notified of new job openings.
- The Global Catastrophic Risk Institute (GCRI) is a nonpartisan think tank that aims to reduce the risk of events large enough to significantly harm or even destroy human civilization at the global scale. See current vacancies and subscribe to get notified of new job openings.
- The Center for a New American Security is a think tank in Washington D.C. that has a program called the Artificial Intelligence and Global Security Initiative. Their research agenda is largely focused on long-term issues.
- The Belfer Center is a think tank at the Kennedy School of Government at Harvard University. As part of their Cyber Security Project they recently started an initiative on Artificial Intelligence and Machine Learning. They’ve written a report on Artificial Intelligence and National Security, which includes a section on mitigating catastrophic risk.
It’s not clear where in government would be the best place to do such work, but examples might be the defense and intelligence communities (e.g. IARPA, DARPA, Office of Net Assessment) and the White House Office of Science and Technology Policy (OSTP), the latter of which is responsible for the report “Preparing for the Future of Artificial Intelligence”. These are each influential in the area of AI, and have funded and conducted relevant work. The National Science Foundation, Homeland Security Advanced Research Projects Agency, the Office of Naval Research, and the US Digital Service are other agencies where one could have an influence on what research projects are funded and/or implemented directly by the US government.
The Partnership on AI is likely to play some role in fostering discussion and consensus-building on these issues. Whether it will do much in-house research is not yet clear.
While there are typically positions being advertised at at least one of the above organizations, you should also cast a wide net and inquire about positions that may open up in the future or which are not listed publicly. Similar considerations to those listed above apply when prioritizing job opportunities, e.g. proximity to technical experts: even on long-term issues, it is important to remain in close contact with technical experts to ensure that your work is grounded in plausible scenarios for AI’s development. However, as previously mentioned, expert opinions differ greatly, so aim also to be a critical observer of AI trends in your own right, and focus more on understanding the range of plausible scenarios than on any specific forecast.
AI policy practitioner
In addition to developing and evaluating policy ideas, someone needs to actually implement them. These positions are hugely influential, so it’s important they’re filled by people who understand the issues. They also typically require stronger social skills than policy research does.
It’s not clear yet where the best place to end up would be to implement AI policies over the long term. This means it’s important to build your career capital and be capable of taking opportunities as they arise. Ultimately, you want to get into the most influential and relevant position possible. All things being equal, the higher your level in a government or major AI company, the more likely you will be in a position to determine policy as AI matures.
To do this, gradually aim for higher and more relevant positions in either the executive or legislative branches and be opportunistic if, for example, changing political climates favour different agencies.
Here are some concrete steps to consider:
Short-term Policy Practice Options.
Organizations that are likely to have an important influence on autonomous weapon system policies include the National Security Council and the Office of the Under Secretary of Defense for Policy. Also consider local and state governments, international agencies (e.g. UNIDIR or UNODA for autonomous weapons work), non-profits like the Campaign to Stop Killer Robots, the Department of Labor, the Domestic Policy Council in the US (and their equivalents elsewhere), and national legislatures in key countries.
Local and state government jobs, and jobs working at small companies in industry, should primarily be seen as experience-gaining opportunities rather than ends in themselves if your goal is maximizing impact. Exceptional cases include especially prominent jurisdictions (e.g. California) that may set examples for other states or federal governments, and especially promising startups.
Additionally, private companies and non-profits are currently key players and will probably remain key drivers of AI progress for at least the near future, so places like Google, DeepMind, Microsoft, Facebook, and OpenAI should be high on your list of desired jobs if you are interested in putting policy ideas into action.
Long-Term Policy Practice Options.
It’s not clear yet where the best place is to work if your goal is to implement AI policies down the road, since that will depend on what sorts of policies are developed and proposed in the intervening years. But the legislative and executive branches of prominent countries (e.g. cabinets, the National Security Council in the US, OSTP, the US Congress), the United Nations (which is currently launching an AI-related center under UNICRI) and the European Commission (which was recently tasked by the European Parliament with exploring policies for AI and robotics), and key companies should be considered. Within the US Congress, it might be particularly useful to be an analyst in a relevant committee such as the Committee on Science, Space, and Technology.
When applying for or considering accepting jobs, be sure to seek out opinions about industry and policy trends so as to work for an organization that is likely to remain relevant. For those without clear credentials in long-term AI issues, it may make sense to spend the next few years gaining knowledge and experience instead of aiming to work at the best possible organization right away. It will likely be years before long-term AI policy practice, as opposed to research, becomes especially critical, and you will have more information in the future about which organizations are most relevant.
Long-term policy is a difficult topic because the stakes are high, and it’s possible to do more harm than good by mistake. For example, you could make an arms race more likely by promoting the idea that machine intelligence can give you a strategic advantage over rivals, without sufficiently discussing the importance of cooperation. Alternatively, you could discredit the field by producing low-quality analysis or framing the problem in a non-credible or sensationalist way. So it’s important to be cautious in your efforts and base them on detailed discussion with experts.
Positions you can apply for now
We keep an up-to-date database of high-impact vacancies, including for AI policy careers, on our Job Board:
Remember: if technical or policy AI work is on your shortlist of possible career options, you should let us know.
We can provide personalised advice and connect you with world class opportunities, including vacancies in AI policy that aren’t yet being publicly advertised.
How do you position yourself to get into those roles?
What fields to learn about
AI policy is not a very mature domain and it draws on many other disciplines. Ideally you will develop familiarity with a lot of disciplines, and deep expertise in at least one. Some examples include:
AI science and technology
Not surprisingly, to do AI policy, it’s good to know a fair amount about the AI science and technology landscape. This will help you separate hype from reality, have a sense for the state of the art, and think sensibly about how the field might evolve in the future.
Political science and public policy
These are useful disciplines for helping you understand the roles and constraints of different institutions (e.g. domestic and comparative politics are relevant sub-disciplines) and international issues/dynamics (international relations is a relevant sub-discipline) in dealing with AI. The sub-field of science and technology policy is especially relevant to AI policy, and provides a lot of cases to learn from as well as general conceptual frameworks. There are a lot of analogous issues that have been well-studied in political science, such as nuclear arms races and international cyber-conflict, that are relevant to long-term AI policy, and undergraduate/graduate programs in both of these disciplines tend to emphasize tools such as statistical analysis and game theory that may be useful for doing AI policy analysis.
Law
This is the background of a lot of people who work on AI policy. For example, there is a We Robot conference series, organized primarily by legal scholars, that looks at (mostly short-term) legal and policy issues in AI such as liability, accountability, and military applications. Law is useful for thinking about the fine-grained issues that organizations will need to deal with when it comes to AI, whereas political science and public policy tend to operate at a higher level of abstraction. Understanding law also allows one to identify various tools and levers for influencing AI development and deployment. Finally, international law complements international relations in providing some insight into the relevant international norms which may feed into, and constrain, any large-scale governmental decision processes on advanced AI.
Economics
When thinking carefully about the future of work, economics is essential. But there are also sources of insight in the broad economics literature that may bear on other AI policy issues, such as in industrial organization and game theory. In particular, game theory can be a useful lens for thinking about some of the coordination issues that may come into play with highly advanced AI.
Other social sciences
The above are just some examples of relevant disciplines, but other relevant ones include science and technology studies, an empirical/conceptual field which seeks to foster disciplined thinking about the role of science and technology in society and the decision-making processes of scientists and engineers; sociology and anthropology, which are especially relevant when thinking about the social and economic impacts of AI; psychology, which is useful for thinking about AI impacts at the small scale, and in understanding how humans interact with AI and robots, which in turn may inform appropriate policies and framings of the topic; and media and communications studies, which are relevant to thinking about public attitudes toward AI, as well as the potential impact of more powerful persuasive AI systems.
Philosophy and ethics
Philosophers are well-positioned to frame AI policy questions in rigorous ways, and ethics has direct relevance to analyzing what sorts of futures are preferable. Several researchers in AI policy such as Nick Bostrom and Toby Ord have backgrounds in philosophy among other areas.
Security and intelligence studies/national security
A particularly critical area of research, now and in the future, concerns the security implications of artificial intelligence. AI is likely to be used directly in these domains to a greater extent in the future than today, and to be a greater focus of security decision-making. Experts in security and intelligence studies and national security practice are currently underrepresented in the field of AI policy.
How to generally build your career capital
If you’re not able to enter a top policy position directly, below are some general steps you can take to build your career capital in the area.
Our primary recommendation to anyone interested in this area is to dive in headfirst: learn as much as you can, connect with others interested in AI policy, develop your skills, ideas, and relevant work experience, and try to articulate and solve the problems. Nick Beckstead offers some advice on how to do that in an appendix.
In addition to this:
AI policy is new and rapidly changing, and if you’re looking for a job in this area, you should get to know the right people in order to find out about job opportunities or get recommended.
Careers in policy and strategy require being opportunistic and making good use of connections. You will want to position yourself to take a role in (or move away from) government during big shifts in power. There is a chance of advancing in an organization very quickly given the currently small talent pool, so it’s worth applying for key roles as soon as they become available (so long as you have a back-up plan).
Because of this, to the extent financially possible, go to conferences on AI and attend sessions on policy-related issues; talk to AI researchers; participate in Facebook groups on related topics like AI safety; get in touch with people in the field to talk about your and their work; and express interest in working at places even if there is no position listed (yet), if you think you could add value there in the future.
Talking with those on the “front lines” of research and practice in AI policy is invaluable because much of the analysis on the topic has not been published yet, so consider internships at places like the Future of Humanity Institute (or at least consider signing up for their vacancies newsletter which lists jobs, internships, and volunteer opportunities at their organization and elsewhere). It is also valuable to attend conferences and workshops in your “home” discipline or industry to find others interested in AI, inform those working in other domains about the latest developments, and to identify possible collaborators.
Stay abreast of AI developments
As discussed previously, you’re going to need to focus to some extent, and knowing the state of the art and plausible trends is critical for picking the right issues to focus on. The low-hanging fruit for keeping abreast of AI developments is to subscribe to Jack Clark’s Import AI newsletter and follow some of the leads there, if you don’t already know what’s ‘hot’ (for resources to build a general foundation in AI, see below). Also check out the arXiv Sanity Preserver, which filters recent papers uploaded to the pre-print website arXiv.
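As a light illustration of how one might automate this kind of monitoring, here is a short Python sketch that queries arXiv’s public Atom API for the newest papers and prints their titles. The category list and result count are arbitrary assumptions you would adjust to your own interests; the helper names are ours, not part of any library.

```python
# Sketch: list recent AI paper titles via the public arXiv Atom API.
# The category choices ("cs.AI", "cs.LG") and result count are adjustable
# assumptions, not recommendations.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"


def build_query_url(categories=("cs.AI", "cs.LG"), max_results=5):
    """Build an arXiv API URL for the newest papers in the given categories."""
    search = " OR ".join(f"cat:{c}" for c in categories)
    params = urllib.parse.urlencode({
        "search_query": search,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    return f"http://export.arxiv.org/api/query?{params}"


def titles_from_feed(atom_xml):
    """Extract paper titles from an Atom feed returned by the arXiv API."""
    root = ET.fromstring(atom_xml)
    return [
        " ".join(entry.find(ATOM_NS + "title").text.split())
        for entry in root.iter(ATOM_NS + "entry")
    ]


if __name__ == "__main__":
    with urllib.request.urlopen(build_query_url()) as resp:
        for title in titles_from_feed(resp.read()):
            print(title)
```

A script like this could run weekly via cron, giving you a steady queue of candidate papers for the “three papers a week” habit suggested below.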
For those with little background knowledge of the terms that show up in these sources, taking an online course such as Peter Norvig and Sebastian Thrun’s or Andrew Ng’s may be a good idea, and those with access to university courses may want to take an in-person course in AI.
A rough rule of thumb is to read three or so AI papers a week to get a sense of what’s happening in the field and the terminology people use, and to be able to discriminate between real and fake AI news. Regarding AI jargon, you should aim for at least interactional expertise – essentially, the ability to pass the AI researcher Turing Test in casual conversations at conferences, even if you couldn’t write a novel research paper yourself.
See the list of reading material below. Unfortunately, there isn’t one canonical textbook on AI policy that you can read to learn the essentials, and it may be that crucial insights lie in disciplines not mentioned above. So read broadly in areas you’re unfamiliar with, while also leveraging your existing expertise. If you’re just starting out (e.g. as an undergrad), you could consider double majoring in computer science along with political science, public policy, or economics, while also reading broadly outside those domains. But which courses to take and what to major in depends on your particular strengths and the options you have (e.g. what the programs are like at your school). Our AI safety syllabus offers some other thoughts on what and where to study.
For learning what policy-making is like, there’s no substitute for working either in or around governments or corporations. Much of the relevant knowledge about e.g. which ‘levers to pull’, who the key people are, what issues are on/off the political table, and so on does not reside in books and articles but in the tacit knowledge of those working on the issues. If you can intern in the White House, Congress, a federal agency, a local/state government, an advocacy group that interacts with government, the policy/legal division of a company, etc., then go for it.
In the US, AAAS Science and Technology Fellowships are a well-established vehicle for recent PhDs to gain policy experience, and the Presidential Management Fellowship, Presidential Innovation Fellows, and White House Fellow programs are also worth considering for those who are qualified. Public relations is also an area closely related to policy, and one in which gaining experience could be valuable – indeed, groups oriented toward one or the other often work together in corporations and governments.
In the UK, experience in party politics (e.g. being a parliamentary assistant to an MP) and/or civil service (e.g., working at the Department for Business, Energy, and Industrial Strategy, or entering the Civil Service Fast Stream) might be helpful. Similarly, experience bringing together stakeholders to discuss issues is valuable, and that experience can come from the private sector or think tanks. Another source of experience is working on a political campaign, which can be a great source of needed skills in communications, policy research, speech writing, coalition building, event management, team building, and so forth.
Likewise, participation in local Effective Altruism groups can provide some of these skills, and lead to identifying like-minded collaborators interested in the future of AI.
Finally, holding a strategy position at a major tech firm, like Google, would be very useful for moving into AI policy, even if you aren’t working on AI directly in your initial role. There are trade-offs between spending time working directly on solving research problems, and building up relevant experience, but those interested in AI policy should strive to get at least some experience working in/around practical policy issues if possible.
Common career transitions
This section, generalizing a bit, describes two typical routes into AI policy: moving from being an AI researcher to being an AI policy researcher/practitioner, and moving from being a policy researcher/practitioner in another area to focusing on AI policy.
From AI to policy
Having a technical background in AI is an important and relatively rare asset in policy work. However, it needs to be coupled with policy expertise. One strategy is to find a collaborator or team with policy expertise, so that you can contribute while learning from one another. In general we recommend that you immerse yourself in learning about politics and policy. It is also useful to first understand the landscape of policy issues, so that you can evaluate your fit with different areas and their relative importance, before zooming in on a particular focus. The resources we recommend at the end of this document should be useful in that regard.
We consider this to be a particularly exciting career trajectory, given the rarity of people with strong technical AI backgrounds working in policy, and the value of going “meta” – one might in some cases have more of an influence on the field of AI by developing, advocating for, and implementing solutions at the organizational, national, or international level than by working on discrete technical projects.
From policy to AI
Those moving into AI policy from a different policy background should aim to get up to speed on the broad contours of AI policy issues (see the list of resources below), and find a focus within that space. AI is an exciting area where the policy landscape is being rapidly transformed by new technical achievements. Accordingly, it is valuable to establish solid technical foundations and to follow developments closely. Some possible ways to get up to speed in AI (in addition to the recommendations above) that are particularly useful for those with a policy background are MOOCs on AI, in-person classes (available at universities and also sometimes available within large corporations), and a master’s program in AI. Additionally, it would be valuable for such people to attend conferences such as We Robot and Governance of Emerging Technologies to get a better sense of how AI policy issues are similar to and different from those they are familiar with.
The resources listed below are not all required reading (that depends on your background), nor is the list comprehensive. But it is representative of the sort of material one should probably find interesting if one intends to work in AI policy. If these materials are all boring to you, that’s probably a bad sign. On the other hand, if you start reading and find this topic important – or see gaps in the literature that you could fill with your expertise/skills – that’s a great sign!
An important caveat to the rough “test” above is that some of the heavier stuff on the list is more relevant if you plan to do research (vs. practice/advocacy/recruiting) in AI policy—e.g. the AI textbooks are on the heavier end of the spectrum, while “Preparing for the Future of Artificial Intelligence” is on the lighter, more introductory end. Within each section below, we roughly order the items from introductory to advanced.
We suggest checking out a few to test your interest and, if you decide to enter the field, working through many of them.
After listing and describing some lectures, books, and papers, we give examples of syllabi in science and technology policy courses which provide additional pointers to a broader literature.
- Videos from the Beneficial AI 2017 conference, various speakers.
- Ethics of AI @ NYU, Opening and General Issues, featuring talks from Nick Bostrom, Virginia Dignum, and Yann LeCun, and a panel discussion.
- Long-Term AI Policy, Miles Brundage.
- The AI Revolution and International Politics, Allan Dafoe.
- Bostrom, N. Superintelligence: Paths, Dangers, Strategies. 2014. (Essential reading if you’re interested in long-term AI policy.)
- Erik Brynjolfsson and Andrew McAfee. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. 2014.
- Lin, P., Abney, K., and Bekey, G. (eds). Robot Ethics: The Social and Political Implications of Robotics.
- This is a good introduction to some of the ethical and (mostly shorter-term) political issues related to AI and robotics, including autonomous weapons and social robots.
- Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, third edition. 2010; Goodfellow, I., Courville, A., and Bengio, Y. Deep Learning. 2016.
- These are two of the best books to read in order to better understand the technology behind AI. Russell and Norvig’s textbook is widely used in AI courses, and offers a good introduction to many important concepts and terms. However, it’s less strong in covering deep learning as it was written before the current surge of enthusiasm surrounding that sub-area of AI. Goodfellow et al.’s textbook is the best single resource on deep learning, and is also available for free online. Those with less strong technical backgrounds may want to first watch some online videos on AI, such as Norvig and Thrun’s online course and Andrew Ng’s online course.
- Neal, H., Smith, T. and McCormick, J. Beyond Sputnik: U.S. Science Policy in the 21st Century; Howlett, M. et al. Studying Public Policy: Policy Cycles and Policy Subsystems, 3rd edition.
- The first book gives a helpful overview of U.S. science policy in general (its history, institutions, and issue areas), and the second gives a more general introduction to the theory of public policy, including what it is and how and why it changes.
- The White House, “Preparing for the Future of Artificial Intelligence.” 2016.
- This document, reflecting input to the White House from four workshops throughout the US, summarizes much of the best current thought on short-term AI policy. Note that there are other such documents produced by e.g. UK and EU governments, but the White House report is arguably the best written. It makes some claims about the relationships between short and long-term policy (e.g. that we should do the same thing today regardless of our time horizon) that are worth critically evaluating.
- Other policy reports:
- The National Artificial Intelligence R&D Strategic Plan discusses safety issues, among other things.
- Bostrom, N. “Strategic Implications of Openness in AI Development.”
- Openness is one of the biggest issues in short- and long-term AI policy, where there may be tensions between different goals, such as fostering distributed control over AI and its benefits versus mitigating safety risks. This paper is the best introduction to such strategic questions.
- Bostrom, N., Dafoe, A., and Flynn, C. “Policy Desiderata for the Development of Machine Superintelligence.”
- This paper provides a general framework for thinking about long-term AI policy evaluation, and highlights several core research issues that may shape work in the next few years.
- Brundage, M. and Bryson, J. “Smart Policies for Artificial Intelligence,” 2016.
- This paper provides an overview of short-term “de facto” AI policy in the U.S. as it is today (that is, policies not necessarily labeled “AI policies” but which in fact affect it), and offers some recommendations such as strengthening expertise in government. It also has pointers to other related literature on e.g. driverless car policy.
- Papers from the We Robot conference series, mostly focused on short-term AI policy:
- AI Governance: A Research Agenda by Allan Dafoe
- This Future of Humanity Institute report aims to introduce researchers to the space of plausibly important problems in AI governance. It offers a framing of the overall problem, poses questions that could be pivotal, and references published articles relevant to these questions.
Law, policy, and ethics of AI
- Burton, Emanuelle, Judy Goldsmith, Sven Koenig, Benjamin Kuipers, Nicholas Mattei, and Toby Walsh. 2017. “Ethical Considerations in Artificial Intelligence Courses.” arXiv:1701.07769 [Cs].
- Vanderbilt Law School Program on Law & Innovation Teaching resources and case studies
Other reading lists
- Global Politics of AI Reading List
- 80,000 Hours technical AI safety syllabus
- Large and evolving compilation of works relevant to AI strategy and policy.
- Science and technology policy syllabi
- Fairness, accountability, and transparency in machine learning (FATML) – syllabi and resources
- Syllabus on Fairness, Accountability, and Transparency in Machine Learning from scholars at Columbia University
- List of scholarship relevant to FATML
This guide was written by Miles Brundage, and the following individuals provided helpful feedback on earlier versions: Jan Leike, Jelena Luketina, Jack Clark, Matthijs Maas, Ben Todd, Sebastian Farquhar, Nick Beckstead, Helen Toner, Richard Batty, David Krueger, Jonathan Yan, Brittany Smith, Robert Wiblin, Carrick Flynn, Allan Dafoe, Niel Bowerman, and Michael Page. These individuals do not necessarily agree with all of the contents of the final document.
We put some career advice questions to Nick Beckstead, a researcher and funder of AI research at the Open Philanthropy Project.
Which 3 sub questions in AI strategy and policy would you most like to see answered ASAP?
- Broad-scope deployment problem.
- Describe several of the most plausible/important scenarios in which transformative AI might be developed. Outline how different actors (companies, governments, researchers) could realistically and beneficially respond to each scenario.
- What is the best set of milestones for tracking progress toward transformative AI? What probabilities should we assign to each of these milestones occurring in the next 5/10/15 years?
Do you think a smart person can plausibly make progress on these issues working independently or do they have to be part of a team?
I think they plausibly could because this field is not very developed yet. They would need to be highly self-directed. I believe putting 50-500 hours into this in order to test one’s ability to make progress is a reasonable activity for smart people who are very interested in this set of questions and are very self-directed.
What kind of independent research project would make you most enthusiastic about seeing someone work on these issues professionally?
Someone writes a paper on one of the topics listed above and either gives a helpful broad overview of the issues (explaining them well and comprehensively discussing different approaches and their strengths and weaknesses at a high level) or bites off a chunk that they can explain particularly well and/or have a new insight about.
What’s the biggest breakthrough in AI policy so far, in your view?
I think there’s an interweaving set of considerations that are important. I suppose the single most important strategic consideration is that a fast takeoff might result in someone/something gaining a decisive strategic advantage (where the “something” could be an AI system optimizing for some goal). Another plausible candidate is, “There is an alignment problem, it might be hard, and we need to solve it before we have a superintelligent AI system.”
Who are 3 people who could potentially supervise PhDs in AI policy that you think might be good options?
I don’t know very many people who are taking students in these areas, especially among people interested in transformative AI. Allan Dafoe would be the best bet. One might also consider getting a PhD at Oxford and seeking opportunities to collaborate with FHI, even without an advisor with significant background and interest in this set of issues.
Some suggestions from Helen Toner: the fellowship at UCLA’s PULSE (mentioned above), CISAC at Stanford, Center for Long Term Cybersecurity at Berkeley, and Perry World House at Penn are other places that someone self-directed could conceivably work on this (probably as a postdoc, not a PhD).
What’s a piece of advice you’d give to someone starting a career in AI policy you think most other people wouldn’t give?
Try working on a question in this vicinity independently for a few months. Apply for funding through an EA Grant or try to raise funds to spend time on this if lack of funding is a bottleneck.
Working independently doesn’t mean refraining from seeking advice, though – it’s probably best to talk to as many people who think deeply about these issues as possible.
How important is it to be good at (office) politics if you want to work on AI strategy?
This is likely to be very important for what you’ve called a “practitioner” and is less essential for what you’ve called a “researcher.”
Additional questions in AI strategy
What are the costs and benefits of various versions of “openness” from organizations working on AI?
Should an organization working on AI publish papers, publish its source code, collaborate with academics and other organizations, etc.? Does this depend on how advanced its work is and how likely it is to be close to important types of AI (see above)?
Openness could lead to faster advancement of science (via public goods) and reduced risks of authoritarianism and AI-induced “unipolarity,” but it could also increase the risks of AI misuse by malicious actors and of technology development races.
Having well-thought-through principles for what sorts of openness are desirable, and when/under what conditions, could make it easier for people to commit to beneficial forms of openness today without implicitly committing to problematic forms of openness in the future.
We’re interested in analysis of different ways in which an organization might be “open” and the potential costs and benefits. One example.
The sections above point to a variety of potential risks from advanced AI systems. Below we list three additional broad classes of risk:
Loss of control of very powerful AI systems: AI systems will likely become much more capable across a broad range of environments, making them much more effective at achieving their specified goals in creative ways. However, designing AI systems that can be meaningfully controlled by humans and that reliably avoid negative side-effects may be challenging, especially if AI systems’ capabilities and the range of possible side-effects outpace humans’ ability to foresee them. Incautiously designed systems might reasonably model human attempts to constrain them as obstacles to be overcome, and very capable AI systems pursuing problematic goals may become very difficult for humans to control; AI systems with this issue could be reasonably expected to create difficulties at least comparable to those created by today’s computer worms or cybercrime organizations (and plausibly much greater difficulties). If AI systems with problematic goals become sufficiently capable, their pursuit of those goals could significantly harm humanity’s long-term future, and it seems to us that there is a non-negligible chance of outcomes as bad as human extinction.
Gradual loss of meaningful control of society’s direction: As more and more control is gradually ceded to complex and hard-to-understand AI systems that pursue imperfect proxies of our goals (e.g. maximizing profit), humans may lose the ability to make meaningful collective choices about what direction society should take. Emerging concerns about AI systems that discriminate based on e.g. race or gender could be an early example of such a dynamic, as could concerns about filter bubbles created by increased tailoring of news stories to individuals’ predicted tastes. If our technical ability to design systems that faithfully and transparently reflect our values lags far behind our ability to profitably automate decision-making, this could pose long-term problems for the trajectory of civilization.
Moral relevance of AI systems: At present, there exists very little consensus on what kind of non-human beings count as “morally relevant” (chimpanzees? pigs? ants?). We find it plausible, though by no means definite, that some AI systems may at some point become morally relevant in some meaningful sense. If this were to come about, it could have very significant implications for how AI systems should be designed and used, potentially requiring some protections for certain AI agents. For example, it might be possible to duplicate AI “workers” cheaply; if so, this would drive down the cost of labor that these workers could perform (since new workers could always be generated), potentially creating a situation in which AI “workers” are only able to earn subsistence wages (e.g., enough to cover hardware and power costs). The extent to which situations like this are worth avoiding is tightly connected to the extent to which AI systems have moral worth and/or are entitled to some equivalent of human rights.
Want to work on AI policy? We want to help.
We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. If you want to work on AI safety, apply for our free coaching service.