Summary

Artificial intelligence will have transformative effects on society over the coming decades, and understanding how to navigate the risks that come with it is an incredibly important and neglected area of research. An increasing amount of funding is available, but there’s a shortage of sufficiently qualified people. If you have strong technical abilities and a genuine interest in and motivation for this work, now could be a particularly good time to go into this area.

Pros

  • Opportunity to make a significant, foundational contribution to a very important, growing area of research
  • Incredibly intellectually challenging and interesting work for the right person
  • The cause is more talent constrained than funding constrained
  • Good backup options in industry software engineering, academia, and tech startups

Cons

  • Professional risk—an area of research that is not yet well-integrated with or respected by the broader academic community
  • Significant upfront cost if you don't already have the necessary technical skills
  • Difficult to know in advance if you're a good fit
  • Potential for huge impact, but also a large chance of your individual contribution making little or no difference

Ratings

  • Career capital
  • Direct impact
  • Earnings
  • Advocacy potential
  • Ease of competition
  • Job satisfaction

Our reasoning for these ratings is explained below. You might also like to read about our approach to rating careers.

Key facts on fit  

Strong technical abilities (at the level of a top 20 CS or math PhD program); a strong interest in the topic; the ability to work self-directedly, as the field has little structure.

Next steps

  1. Learn about the field (start with our problem profile, then Superintelligence, this list of papers, and this syllabus; also consider attending a MIRI workshop).
  2. Apply to the main organisations in the field, or pursue the research independently as an academic.
  3. Fill out this form and we’ll see if we can help you with information and introductions.
  4. Consider getting a PhD in Computer Science.
    (More detail in the full profile)

Recommended

If you are well suited to this career, it may be the best way for you to have a social impact.

Review status

Medium-depth career profile 

Review author

Jess Whittlestone

What is AI safety research, and why might this be a high impact career path?

We don’t argue for the large potential impact from positively shaping the development of artificial intelligence here – we cover that in our problem profile on the topic. You can also learn more by reading Superintelligence and/or this popular introduction to concerns about AI (with a few caveats/corrections here).

Briefly, many experts believe that there’s a significant chance we’ll create superintelligence – artificially intelligent machines with abilities surpassing those of humans across the board – sometime during this century. AI safety research is a growing area within the field of AI that focuses on increasing the chance that, if and when superintelligence arrives, it will make decisions that align with what humans value. AI safety research is a broad, interdisciplinary field – covering technical aspects of how to actually create safe AI systems, as well as broader strategic, ethical and policy issues (more on the different types of AI safety research below.)

There’s increasing concern from AI experts and notable figures – including Elon Musk, Stuart Russell, Stephen Hawking, and Bill Gates – that AI may pose one of the most serious threats to humanity. 2015 saw a clear increase in expert support for work on the problem, shown by this open letter that’s now signed by hundreds of experts. However, at present, there are very few researchers working explicitly on AI safety, compared to tens of thousands of computer science researchers working on making machines more powerful and autonomous. This means that an additional researcher with the right skills and motivation right now has the potential to make a huge difference.

Why get involved now, and why talent constraints are more pressing than funding constraints

Now seems like a particularly good time to get involved, as there’s growing concern about and support for AI safety research – partly owing to the success of Nick Bostrom’s book Superintelligence. In particular, funding for research is growing rapidly. Institutes like the Future of Humanity Institute (FHI) and the Centre for the Study of Existential Risk (CSER) have acquired over $10m in funding over the past year for AI safety, strategy and existential risk research. Elon Musk recently donated $10m to the cause, and the Open Philanthropy Project added another $1m. More recently, OpenAI – a new effort to develop positive AI, backed by $1bn in committed funding – was announced by a group including Elon Musk and Y Combinator’s Sam Altman. There are other large companies and billionaires interested in funding the cause. (Though bear in mind the total spending each year devoted to AI risk research is still far smaller than what’s spent on speeding up AI’s abilities.)

Given this influx of money, the key bottleneck is finding and training talented AI risk researchers. There are currently at least 10-20 open positions, and many of the organisations are concerned they won’t be able to fill them with sufficiently talented and risk-concerned researchers. Many of the existing funders would provide more money if they were convinced they could fund sufficiently capable researchers. For instance, Open Phil added $1m to Elon Musk’s $10m grant, because that was their assessment of the remaining room for funding. They’d provide more funding if there were more compelling opportunities. We’ve spoken to other large funders who feel the same way. Due to this new interest, in Dec ’15 Luke Muehlhauser, the former Executive Director of MIRI, issued a “call to arms” for potential AI risk researchers.

Other funders are mainly held back by concerns that the cause is not tractable, and the best way to demonstrate tractability is for some researchers to make progress in the field.

Note that there are also other talent constraints in the cause. For instance, many of the organisations are short of capable managers and administrators. If you think you might be a good fit for these positions, we’re also happy to talk.

How we’ve researched this field

For this career review, we spoke to: one of the largest donors in the space, Jaan Tallinn, the co-founder of Skype; the author of Superintelligence, Nick Bostrom; a leading professor of computer science; Daniel Dewey, who works full-time on finding researchers for the field; Nate Soares, the Executive Director of the Machine Intelligence Research Institute; and several other researchers in the field.

Read our research notes in our wiki.

What different kinds of research and careers in AI safety are there?

What types of research are there?

People in the field often mean different things by “AI safety research” but here are some of the key areas.

1. Strategic research: Includes figuring out what questions other AI safety researchers should be focusing on, learning from historical precedents such as threats from nuclear weapons and other dangerous technologies, and thinking about possible policy responses. This is likely to be the most accessible area for someone with a less technical background, but you still need a decent understanding of the technical issues involved. Strategy research might be well-suited to someone with expertise in policy or economics who can also understand the technical issues. We cover this path in detail in our guide to AI policy and strategy careers.
2. Forecasting work: Understanding how superintelligence might come about, what kinds of scenarios are possible, and what we should expect when superintelligence arrives. This path is also discussed in our guide to AI policy and strategy careers.
3. Technical work: Asking the question, “If we were going to build an AI, how would we make it safe?”, and then figuring out how we might implement different solutions. Work here is highly technical, involving computer science, machine learning, logic, and philosophy. Technical work can be further subdivided into two categories:

  • (A) “Class 1” technical work: finding ways to practically implement things we know how to do at least in principle (e.g. building tools that help us inspect what’s going on inside a neural net – see the sketch after this list.)
  • (B) “Class 2” technical work: trying to figure out how to do, even in principle, things we don’t yet know how to do (e.g. how to design a goal function that doesn’t result in perverse instantiation – see ch. 8 of Superintelligence, “Malignant failure modes”, for more on this.)
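
To make “Class 1” work more concrete, here is a minimal illustrative sketch – not taken from any of the organisations discussed here – of the kind of inspection tooling mentioned in (A). It registers forward hooks on a small PyTorch network so that intermediate activations can be examined after a forward pass; the model, layer choices, and printed statistics are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Toy network standing in for a real model; purely illustrative.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

activations = {}

def make_hook(name):
    # Record this layer's output so it can be inspected after the forward pass.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach a hook to every layer we want to look inside.
for name, layer in model.named_modules():
    if isinstance(layer, (nn.Linear, nn.ReLU)):
        layer.register_forward_hook(make_hook(name))

# Run one forward pass on random input, then summarise what each layer produced.
model(torch.randn(1, 10))
for name, act in activations.items():
    print(name, tuple(act.shape), act.mean().item())
```

Real interpretability tooling is far more sophisticated, but the basic pattern – instrumenting a model so its internals can be observed and checked – is the same.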

Different experts we spoke to had different views on the kind of expertise that’s most important for working on AI safety issues. Some people believe that expertise in general AI research and machine learning is what’s most important, others think specific kinds of math/philosophy expertise are most important, and yet others think that strategic analysis and forecasting ability may be most important. However, there was a broad consensus that we ultimately need people doing all of these kinds of research – as well as looking for new avenues we might not have even thought about yet. So it’s probably best to choose between these areas based on pragmatic considerations like your personal fit, interest, and what seems like an available option for you.

What career paths are there?

There are three broad types of career path:

1) Working in an AI lab in academia

  • Good for keeping other options open and for general career capital – allows you to develop prestige and a good network
  • The main downside is that you may be more limited in what you can work on, unless you’re able to find a lab with a lot of flexibility and interest in AI safety. That said, as funding for AI safety research increases, more opportunities to do AI safety research in academic settings may arise.
  • Whether academia is a good fit may depend a lot on your career goals and preferred working style – if you like making incremental progress on tractable problems rather than trying to approach huge issues from the ‘top down’, then academia might be a good fit.

2) Working for various academic or independent organisations focused on AI safety, including the Machine Intelligence Research Institute (MIRI), the Future of Humanity Institute at Oxford (FHI), or the Centre for the Study of Existential Risk at Cambridge (CSER)

  • Gives you a lot more flexibility on what you work on, and the ability to work on problems from the ‘top down’ – i.e. trying to generate solutions to the largest, most pressing problems.
  • A number of people also believe that this is where the most pressing talent bottleneck is.
  • The main downside of these options is that they may provide less flexible career capital, due to being less widely recognised. Having said that, former employees of these organisations have gone into jobs in academia, directorships at new institutes, foundations, and the US government.

3) Working in industry, for example for Google’s DeepMind

  • Since industry is where a lot of the AI developments will come from, it seems especially important that people working on AI safety have an understanding of the work that is being done here.
  • It also seems valuable to have strong connections and lines of communication between those working on increasing the capabilities of AI and those working on safety outside of industry.

Again, the general consensus seems to be that these options are on a fairly level playing field and so you are probably best off choosing based on what you’re personally best suited for and most excited about.

Who should consider AI safety research?

There’s a common belief we’ve come across that you need to be some kind of super-genius to even consider doing AI safety research. Our recent conversations suggest this is misleading – you may need exceptional technical ability for some parts of AI safety research, but the range of people for whom AI safety research is worth exploring is much broader.

It’s worth at least considering AI safety research if you fit most of the following:

  • You’re highly interested in and motivated by the issues. Even if you’re a math prodigy, if you can’t bring yourself to read Superintelligence, it’s unlikely to be a good fit. A good way to test this is simply to try reading some of the relevant books and papers (more on this below.) It’s also worth being aware that this kind of research has less clear feedback than more applied work, and less of an established community to judge your progress than other academic work. This means you’re likely to face more uncertainty about whether you’re making progress, and you may face scepticism from people outside the community about the value of your work. It’s therefore worth bearing this in mind when thinking about whether this is something you’ll be able to work productively on for an extended time period. Of course, this also means that if you are the kind of person who can work well under these conditions, your expected impact could be especially high, since such people are relatively rare.
  • You have completed, or think you could realistically do well in, a top 20 PhD or Masters program in computer science or mathematics. For some of the less technical kinds of research (strategy etc.), you might not need such strong technical ability, but you certainly need to be comfortable and familiar with the relevant technical issues.
  • You enjoy thinking about philosophical issues. A lot of AI safety work also requires the ability to think philosophically, especially given there are complex ethical issues involved.
  • You enjoy doing research in general. This might sound obvious, but sometimes it can be tempting to go into a field because it sounds interesting without thinking about whether you’ll actually enjoy the day-to-day grind of research. It therefore helps if you’ve done some research before – especially in something related, like computer science – to get a better sense of whether research is for you.

One message we got from a number of people is that if you’re very interested in AI safety research and not sure whether you’d be able to contribute, the best thing to do is just to dive in and explore the area. The best way to find out whether you can contribute is to try. More on precisely how to do this below.

Do you need to have a PhD to go into AI safety research?

Most people we spoke to said that getting a PhD in a relevant field is generally a good idea, but it isn’t strictly necessary – so if you have the chance to enter the field without one, it’s worth trying.

The most directly relevant field to get a PhD in is computer science, though this isn’t the only possibility – statistics, applied mathematics, and cognitive science could all also provide a good background if you’re able to study topics relevant to artificial intelligence.

Getting a PhD has a lot of benefits, including allowing you to develop an academic network, learn generally useful skills in computer science, and get experience doing research. If you’re not totally sure whether you want to do AI safety research or something else, a PhD also allows you to keep other options open in academia and industry. See our career profile on computer science PhDs for more.

The main downside of doing a PhD is that it can take a long time (3-4 years in the UK, 5-7 in the US.)

Probably don’t get a PhD if:

  • You’re already in a position to contribute directly to AI safety research – especially if an organisation working on AI safety is interested in hiring you. There seems to be a talent bottleneck in AI safety right now, and since it’s such a pressing issue, early efforts could be disproportionately valuable.
  • You’re not particularly intrinsically motivated by the idea of doing a computer science PhD (though this might mean you should check whether you’re going to be motivated by AI safety research, too!)
  • You can’t find an advisor who will support you in either developing a general understanding of CS, AI and machine learning, or working directly on something relevant to AI safety.
  • You think you’re in a particularly good position to learn and do research that’s directly relevant to AI safety on your own. We’d be very wary of this: only do it if you have a community you can stay connected with, collaborate with, and get feedback from, and if you know you can stay self-motivated.

What are some good backup options if it doesn’t work out?

See our career profiles for more information on backup options such as software engineering in industry, academia, and tech startups.

What are some good first steps if you’re interested?

If you’re interested but not sure how you can contribute, the best way to start is just to begin exploring.

1) Read lots – our problem profile, Superintelligence, and the list of papers and syllabus mentioned above are good starting points

2) Start discussing ideas

  • Email the authors of papers with questions – most researchers are very willing to engage with well thought-out questions!
  • Comment on blogs online and engage in online discussions
  • Reach out to anyone in your network/community who you might be able to discuss these ideas with
  • Consider going to a MIRI workshop if there’s one nearby

3) Look for areas that interest you where you might be able to contribute

  • Look for questions that capture you, things you disagree with, or places you think something is missing
  • Pick an open problem and see if you can make any progress on it

4) Consider getting a computer science PhD, especially if you’re concerned about keeping your options open, and the idea of computer science research is appealing to you. Other options include mathematics, machine learning if it’s available, analytic philosophy or statistics. If you don’t have many technical skills, a data science boot camp is a way to start. Read more about degree and PhD selection in our syllabus.

5) If you’re already in a relevant PhD program, look for relevant internships and work experience. Google’s DeepMind sometimes offers internships. Organisations like MIRI and FHI tend not to offer internships yet, but are often happy to have talented researchers interested in AI safety visit their offices and spend time talking to them.

6) Get a job in a relevant organisation or get academic funding. Some organisations to consider asking for advice and applying to include those discussed above, such as MIRI, FHI, and CSER.

As of Dec ’15, all of these groups are hiring AI risk researchers.

Want to work on AI safety? We want to help.

We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. **If you want to work on AI safety, apply for our free coaching service.**
