The possibility of human-level artificial intelligence poses significant risks to society, but understanding how to navigate these risks is an incredibly neglected area of research. An increasing amount of funding is available, but there’s a shortage of sufficiently qualified people. If you have strong technical abilities and genuine interest in and motivation for this work, now could be a particularly good time to go into this area.
- Learn about the field (start with Superintelligence, this list of papers, and this syllabus, and attend a MIRI workshop).
- Make applications to the main organizations in the field, or pursue the research independently as an academic.
- Fill out this form and we’ll see if we can help you with information and introductions.
- Consider getting a PhD in Computer Science.
(More detail in the full profile)
Key facts on fit
Strong technical abilities (at the level of a top 20 CS or math PhD program); strong interest in the topic; self-directed, as the field has little structure.
What is AI safety research, and why might this be a high impact career path?
Many experts believe that there’s a significant chance we’ll create superintelligence – artificially intelligent machines with abilities surpassing those of humans across the board – sometime during this century. AI safety research is a growing area within the field of AI that focuses on increasing the chance that, if and when superintelligence arrives, it will make decisions that align with what humans value. AI safety research is a broad, interdisciplinary field – covering technical aspects of how to actually create safe AI systems, as well as broader strategic, ethical and policy issues (more on the different types of AI safety research below.)
There’s increasing concern from AI experts and notable figures – including Elon Musk, Stuart Russell, Stephen Hawking, and Bill Gates – that AI may pose one of the most serious threats to humanity. 2015 saw a clear increase in expert support for work on the problem, shown by this open letter that’s now signed by hundreds of experts. However at present, there are very few researchers working explicitly on AI safety, compared to tens of thousands of computer science researchers working on making machines more powerful and autonomous. This means that an additional researcher with the right skills and motivation right now has the potential to make a huge difference.
We won’t argue in depth for the importance of the AI cause here, so this profile will be most relevant to people who are already convinced of the importance of this cause but aren’t sure whether they can contribute, or how to do so. If you want to find out more about the risks of artificial intelligence, we recommend reading Superintelligence and/or this popular introduction to concerns about AI (with a few caveats/corrections here).
Why to get involved now, and why talent constraints are more pressing than funding constraints
Now seems like a particularly good time to get involved, as there’s growing concern about and support for AI safety research – partly owing to the success of Nick Bostrom’s book Superintelligence. In particular, funding for research is growing rapidly. Institutes like the Future of Humanity Institute (FHI) and the Centre for the Study of Existential Risk (CSER) have acquired over $10m in funding over the past year for AI safety, strategy and existential risk research. Elon Musk recently donated $10m to the cause, and the Open Philanthropy Project added another $1m. More recently, Y Combinator announced OpenAI, a new $1bn effort to develop positive AI. There are other large companies and billionaires interested in funding the cause. (Though bear in mind that the total spending each year devoted to AI risk research is still far smaller than what’s spent on advancing AI’s capabilities.)
Given this influx of money, the key bottleneck is finding and training talented AI risk researchers. There are currently at least 10-20 open positions, and many of the organisations are concerned they won’t be able to fill them with sufficiently talented and risk-concerned researchers. Many of the existing funders would provide more money if they were convinced they could fund sufficiently capable researchers. For instance, Open Phil added $1m to Elon Musk’s $10m grant, because that was their assessment of the remaining room for funding. They’d provide more funding if there were more compelling opportunities. We’ve spoken to other large funders who feel the same way. Due to this new interest, in Dec ’15 Luke Muehlhauser, the former Executive Director of MIRI, issued a “call to arms” for potential AI risk researchers.
Other funders are mainly held back by concerns that the cause is not tractable, and the best way to demonstrate tractability is for some researchers to make progress in the field.
Note that there are also other talent constraints in the cause. For instance, many of the organisations are constrained by a shortage of managers and administrators. If you think you might be a good fit for these positions, we’re also happy to talk.
How we’ve researched this field
For this career review, we spoke to: one of the largest donors in the space, Jaan Tallinn, the co-founder of Skype; the author of Superintelligence, Nick Bostrom; a leading professor of computer science; Daniel Dewey, who works full-time on finding researchers for the field; Nate Soares, the Executive Director of the Machine Intelligence Research Institute; and several other researchers in the field.
Read our research notes in our wiki.
What different kinds of research and careers in AI safety are there?
What types of research are there?
People in the field often mean different things by “AI safety research” but here are some of the key areas.
1. Strategic research: Includes figuring out what questions other AI safety researchers should be focusing on, learning from historical precedents such as threats from nuclear and other dangerous tech, and thinking about possible policy responses. This is likely to be the most accessible area for someone with a less technical background, but you still need to have a decent understanding of the technical issues involved. Strategy research might be well-suited to someone with expertise in policy or economics who can also understand the technical issues.
2. Forecasting work: Understanding how superintelligence might come about, what kinds of scenarios are possible, and what we should expect when superintelligence arrives.
3. Technical work: Asking the question, “If we were going to build an AI, how would we make it safe?”, and then figuring out how we might implement different solutions. Work here is highly technical, involving computer science, machine learning, logic, and philosophy. Technical work can be further subdivided into two categories:
- (A) “Class 1” technical work: finding ways to practically implement things we know how to do at least in principle (e.g. building tools that help us inspect what’s going on inside a neural net.)
- (B) “Class 2” technical work: trying to figure out how to do, even in principle, things we don’t yet understand how to do (e.g. how to design a goal function that doesn’t result in perverse instantiation – see ch. 8 of Superintelligence, “Malignant failure modes”, for more on this.)
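To make the “Class 1” idea more concrete, here’s a minimal sketch of the kind of inspection tooling mentioned above: running a forward pass through a tiny feedforward network while recording every layer’s activations, so they can be examined afterwards. The network, its weights, and the layer names are all illustrative toy examples, not from any real system or library.

```python
# Minimal sketch: record each layer's activations during a forward
# pass so the network's internal state can be inspected afterwards.
# The network, weights, and layer names are purely illustrative.

def relu(xs):
    # Elementwise rectified linear unit.
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    # One fully connected layer: weights has one row per output unit.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward_with_trace(x, layers):
    """Run a forward pass, recording every layer's activations."""
    trace = {"input": x}
    for name, (weights, biases) in layers:
        x = relu(dense(x, weights, biases))
        trace[name] = x
    return x, trace

# A toy two-layer network with hand-picked weights.
layers = [
    ("hidden", ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.5])),
    ("output", ([[1.0, 1.0]], [0.0])),
]

output, trace = forward_with_trace([2.0, 1.0], layers)
for name, activations in trace.items():
    print(name, activations)  # e.g. hidden [1.0, 2.0]
```

Real interpretability tools work on vastly larger networks, but the underlying move is the same: expose intermediate state that a plain forward pass would normally discard.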
Different experts we spoke to had different views on the kind of expertise that’s most important for working on AI safety issues. Some people believe that expertise in general AI research and machine learning is what’s most important, others think specific kinds of math/philosophy expertise are most important, and yet others think that strategic analysis and forecasting ability may be most important. However, there was a broad consensus that we ultimately need people doing all of these kinds of research – as well as looking for new avenues we might not have even thought about yet. So it’s probably best to choose between these areas based on pragmatic considerations like your personal fit, interest, and what seems like an available option for you.
What career paths are there?
There are three broad types of career path:
1) Working in an AI lab in academia
- Good for keeping other options open and for general career capital – allows you to develop prestige and a good network
- The main downside is that you may be more limited in what you can work on, unless you’re able to find a lab with a lot of flexibility and interest in AI safety. That said, as funding for AI safety research increases, more opportunities to do AI safety research in academic settings may arise.
- Whether academia is a good fit may depend a lot on your career goals and preferred working style – if you like making incremental progress on tractable problems rather than trying to approach huge issues from the ‘top down’, then academia might be a good fit.
2) Working for various academic or independent organisations focused on AI safety, including the Machine Intelligence Research Institute (MIRI), the Future of Humanity Institute at Oxford (FHI), or the Centre for the Study of Existential Risk at Cambridge (CSER)
- Gives you a lot more flexibility on what you work on, and the ability to work on problems from the ‘top down’ – i.e. trying to generate solutions to the largest, most pressing problems.
- A number of people also believe that this is where the most pressing talent bottleneck is.
- The main downside of these options is that they may provide less flexible career capital, due to being less widely recognised. Having said that, former employees of these organisations have gone into jobs in academia, directorships at new institutes, foundations, and the US government.
3) Working in industry, for example at Google’s DeepMind
- Since industry is where a lot of the AI developments will come from, it seems especially important that people working on AI safety have an understanding of the work that is being done here.
- It also seems valuable to have strong connections and lines of communication between those working on increasing the capabilities of AI and those working on safety outside of industry.
Again, the general consensus seems to be that these options are on a fairly level playing field and so you are probably best off choosing based on what you’re personally best suited for and most excited about.
Who should consider AI safety research?
There’s a common belief we’ve come across that you need to be some kind of super-genius to even consider doing AI safety research. Our recent conversations suggest this is misleading – you may need to have exceptional technical ability for some parts of AI safety research, but the number of people for whom AI safety research is worth exploring in general is much broader.
It’s worth at least considering AI safety research if you fit most of the following:
- You’re highly interested in and motivated by the issues. Even if you’re a math prodigy, if you can’t bring yourself to read Superintelligence, it’s unlikely to be a good fit. A good way to test this is simply to try reading some of the relevant books and papers (more on this below.) It’s also worth being aware that this kind of research has less clear feedback than more applied work, and less of an established community to judge your progress than other academic work. This means you’re likely to face more uncertainty about whether you’re making progress, and you may face scepticism from people outside the community about the value of your work. It’s therefore worth bearing this in mind when thinking about whether this is something you’ll be able to work productively on for an extended time period. Of course, this also means that if you are the kind of person who can work well under these conditions, your expected impact could be especially high, since such people are relatively rare.
- You have, or think you could realistically do well in, a top 20 PhD or Masters program in computer science or mathematics. For some of the less technical kinds of research (strategy etc.), you might not need to have such strong technical ability, but you certainly need to be comfortable and familiar with the relevant technical issues.
- You enjoy thinking about philosophical issues. A lot of AI safety work also requires the ability to think philosophically, especially given there are complex ethical issues involved.
- You enjoy doing research in general. This might sound obvious, but sometimes it can be tempting to go into a field because it sounds interesting without thinking about whether you’ll actually enjoy the day-to-day grind of research. It therefore helps if you’ve done some research before – especially in something related, like computer science – to get a better sense of whether research is for you.
One message we got from a number of people is that if you’re very interested in AI safety research and not sure if you’d be able to contribute, the best thing to do is to dive in and explore the area. The best way to find out whether you can contribute is simply to try. More detail on how to do this below.
Do you need to have a PhD to go into AI safety research?
Most people we spoke to said that getting a PhD in a relevant field is generally a good idea, but it’s not strictly necessary – so if you’re in a position to enter without one, it’s worth trying.
The most directly relevant field to get a PhD in is computer science, though this isn’t the only possibility – statistics, applied mathematics, and cognitive science could all also provide a good background if you’re able to study topics relevant to artificial intelligence.
Getting a PhD has a lot of benefits, including allowing you to develop an academic network, learn generally useful skills in computer science, and get experience doing research. If you’re not totally sure whether you want to do AI safety research or something else, a PhD also allows you to keep other options open in academia and industry. See our career profile on computer science PhDs for more.
The main downside of doing a PhD is that it can take a long time (3-4 years in the UK, 5-7 in the US.)
Probably don’t get a PhD if:
- You’re already in a position to contribute directly to AI safety research – especially if an organisation working on AI safety is interested in hiring you. There seems to be a talent bottleneck in AI safety right now, and since it’s such a pressing issue, early efforts could be disproportionately valuable.
- You’re not particularly intrinsically motivated by the idea of doing a computer science PhD (though this might mean you should check whether you’re going to be motivated by AI safety research, too!)
- You can’t find an advisor who will support you in either developing a general understanding of CS, AI and machine learning, or working directly on something relevant to AI safety.
- You think you’re in a particularly good position to learn and do research that’s directly relevant to AI safety on your own. We’d be very wary of this option: only pursue it if you have a community you can stay connected with, collaborate with, and get feedback from, and you know you can be self-motivated.
What are some good backup options if it doesn’t work out?
See our career profiles for more information on these options.
- Academia (if you get a PhD)
- Software engineering
- Working in tech startups as a founder or early employee
- Quantitative finance for earning to give (if you’ve developed relevant technical skills)
What are some good first steps if you’re interested?
If you’re interested but not sure how you can contribute, the best way to start is just to begin exploring.
1) Read lots
- Nick Bostrom’s Superintelligence
- Academic papers and textbooks in AI and machine learning: having a good understanding of AI research more broadly is very valuable for AI safety researchers. Some suggestions: Machine Learning: A Probabilistic Perspective by Kevin Murphy, Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, and Reinforcement Learning: An Introduction by Richard Sutton and Andrew Barto. Check out our AI safety syllabus for more.
- A number of useful research guides: from UC Berkeley’s Center for Human-Compatible AI, MIRI, the Future of Life Institute, Jacob Steinhardt, and Daniel Dewey’s guide
- Blogs and websites: MIRI’s Intelligent Agent Foundations Forum, Paul Christiano’s Medium posts, AI Impacts
- Papers and books in broader fields that are likely to be relevant: especially philosophy, cognitive science and economics
2) Start discussing ideas
- Email the authors of papers with questions – most researchers are very willing to engage with well thought-out questions!
- Comment on blogs online and engage in online discussions
- Reach out to anyone in your network/community who you might be able to discuss these ideas with
- Consider going to a MIRI workshop if there’s one nearby
3) Look for areas that interest you where you might be able to contribute
- Look for questions that capture you, things you disagree with, or places you think something is missing
- Pick an open problem and see if you can make any progress on it
4) Consider getting a computer science PhD, especially if you’re concerned about keeping your options open, and the idea of computer science research is appealing to you.
5) If you’re already in a relevant PhD program, look for relevant internships and work experience. Google’s DeepMind sometimes offers internships. Organisations like MIRI and FHI tend not to offer internships yet, but are often happy to have talented researchers interested in AI safety visit their offices and spend time talking to them.
6) Get a job in a relevant organisation or get academic funding. Some organisations to consider asking for advice and applying to include:
- The Future of Humanity Institute – an academic institute at Oxford devoted to research into existential risks and other long-term issues, vacancies
- Cambridge Centre for the Study of Existential Risk – an academic institute at Cambridge devoted to research into and advocacy concerning existential risks
- Future of Life Institute – a volunteer-run research and outreach organisation working to mitigate existential risks
- Machine Intelligence Research Institute – a non-profit research institute doing technical research into how to align AI with human values, vacancies.
- Google DeepMind – the leading for-profit company developing AGI
- OpenAI – a non-profit research institute that’s part of Y Combinator dedicated to developing AGI that benefits humanity
As of Dec ’15, all of these groups are hiring AI risk researchers.
7) Get in touch with us! We’d be happy to try and help you with your personal situation, and make introductions to people who might be able to help. Fill out this form and we’ll be in touch.