Technical AI safety is a multifaceted area of research, with many sub-questions in areas such as reward learning, robustness, and interpretability. All of these questions will need to be answered to make sure that AI development goes well for humanity as systems become increasingly powerful.

Not all of these questions are best tackled with abstract mathematics research; some can be approached with concrete coding experiments and machine learning (ML) prototypes. As a result, some AI safety research teams are looking to hire a growing number of Software Engineers and ML Research Engineers.

Additionally, some research teams that may not think of themselves as focused on ‘AI Safety’ per se nonetheless work on related problems like verification of neural nets or learning from human feedback, and are often hiring engineers.

Note that this guide was written in November 2018 to complement an in-depth conversation on the 80,000 Hours Podcast with Catherine Olsson and Daniel Ziegler on how to transition from computer science and software engineering in general into ML engineering, with a focus on alignment and safety. If you like this guide, we’d strongly encourage you to check out the podcast episode, where we discuss some of the advice in this guide, as well as other relevant tips.

Update Feb 2022: The need for software engineers in AI safety seems even greater today than when this post was written (e.g. see this post by Andy Jones). You also don’t need as much knowledge of AI safety to enter the field as this guide implies.

What are the necessary qualifications for these positions?

Software Engineering: Some engineering roles on AI safety teams do not require ML experience. You might already be prepared to apply to these positions if you have the following qualifications:

  • BSc/BEng degree in computer science or another technical field (or comparable experience)
  • Strong knowledge of software engineering (as a benchmark: could pass a Google software engineering interview)
  • Interest in working on AI safety
  • (usually) Willingness to move to London or the San Francisco Bay Area

If you’re a software engineer with an interest in these roles, you may not need any additional preparation, and may be ready to apply right away.

ML Engineering and/or Research Engineering: Some roles require experience implementing and debugging machine learning algorithms. If you don’t yet have ML implementation experience, you may be able to learn the necessary skills quickly, so long as you’re willing to spend a few months studying. Before deciding to do this, you should check that you meet all the following criteria:

  • BSc/BEng degree in computer science or another technical field (or comparable experience)
  • Strong knowledge of software engineering (as a benchmark: could pass a Google software engineering interview)
  • Interest in working on AI safety
  • (usually) Willingness to move to London or the San Francisco Bay Area

How can I best learn Machine Learning engineering skills if I don’t yet have the necessary experience?

Initial investigation

Implementing and debugging ML algorithms is different from traditional software engineering. The following can help you determine whether you’ll like the day-to-day work:

ML basics

If you don’t have any experience in machine learning, start by familiarizing yourself with the basics. If you have some experience, but haven’t done a hands-on machine learning project recently, it’s also probably a good idea to brush up on the latest tools (writing TensorFlow, starting a virtual machine with a GPU, etc).
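If you want a concrete starting point, here is a minimal warm-up sketch of the sort of hands-on exercise this means in practice: training a small image classifier with TensorFlow’s Keras API. (The dataset, model size, and training settings here are our illustrative choices, not recommendations from the teams mentioned in this guide.)

```python
# Minimal supervised-learning warm-up: a small MNIST classifier in
# TensorFlow/Keras. Assumes TensorFlow 2.x is installed.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # one logit per digit class
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=3)
model.evaluate(x_test, y_test)
```

If you can write, train, and evaluate something like this comfortably, you have the 101-level tooling fluency the rest of this guide builds on.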

Although it can be difficult to find time for self-study if you’re already employed full-time or have other responsibilities, it’s far from impossible. Here are some ideas of how you might get started:

  • Consider spending a few hours a week on a well-regarded introductory online ML course.
  • If you’re employed full-time in a software engineering role, you might be able to learn ML basics without leaving your current job:
    • If you’re at a large tech company, take advantage of internal training, including full-time ML rotation programs.
    • Ask your manager if you can incorporate machine learning into your current role: for example, spending 20% of your time learning ML to see whether it could improve one of the projects you work on.

For simple ML problems, you can get pretty far using just the CPU on your laptop, but for larger problems it’s useful to buy a GPU and/or rent some cloud GPUs. You can often get cloud computing credits through a free trial, through educational credits for students, or by asking a friend with a startup.
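Before paying for hardware, it’s worth checking what your current setup can actually see. A small sketch (again assuming TensorFlow 2.x):

```python
# Check whether TensorFlow can see a GPU; an empty list means you're
# running CPU-only.
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))
```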

Learn ML implementation and debugging, and speak with the team you want to join

Once you know the 101-level basics of ML, the next thing to learn is how to implement and debug ML algorithms. (Based on the experiences of others in the community who have taken this path, we expect this to take at least 200 hours of focused work, and likely more if you’re starting out with less experience.)

Breadth of experience is not important here: you don’t need to read all the latest papers, or master an extensive reading list. You also don’t need to do novel research or come up with new algorithms. Nor do you need to focus on safety at this stage; in fact, focusing on well-known and established ML algorithms is probably better for your learning.

What you do need is to get your hands dirty implementing and debugging ML algorithms, and to build evidence for job interviews that you have some experience doing this.

You should strongly consider contacting the teams you’re interested in at this stage. Send them an email describing exactly how you plan to spend your study time, and ask for feedback. The team’s manager may suggest specific resources to use, and can help you avoid wasting time on skills you don’t need for the role.

The most straightforward way to gain this experience is to choose a subfield of ML relevant to a lab you’re interested in. Then read a few dozen of the subfield’s key papers, and reimplement a few of the foundational algorithms that those papers build on or reference most frequently. Potential sub-fields include the following (a minimal reimplementation sketch follows the list):

  • Deep reinforcement learning
  • Defenses against adversarial examples
  • Verification and robustness proofs for neural nets
  • Interpretability & visualization

If it isn’t clear how to get started – for example, if you don’t have access to a GPU, or don’t know how to write TensorFlow – many of the resources in the “basics” section above have useful tips.

If you need to quit your job to make time for learning in this phase, but don’t have enough runway to self-fund your studies, consider applying for an EA grant when it next opens – they are open to funding career transitions such as this one.

Case study: Daniel Ziegler’s ML self-study experience

In January 2018, Daniel had strong software engineering skills but only basic ML knowledge. He decided that he wanted to work on an AI safety team as a research engineer, so he talked to Dario Amodei (the OpenAI Safety team lead). Based on Dario’s advice, Daniel spent around six full-time weeks diving into deep reinforcement learning together with a housemate. He also spent a little time reviewing basic ML and doing supervised learning on images and text. Daniel then interviewed and became an ML engineer on the safety team.

Daniel and his housemate used Josh Achiam’s Key Papers in Deep RL list to guide their efforts. They got through about 20-30 of those papers, spending maybe 1.5 hours reading each paper independently and half an hour discussing it.

More importantly, they implemented a handful of the key algorithms in TensorFlow:

  • Q-learning: DQN and some of its extensions, including prioritized replay and double DQN
  • Policy gradients: A2C, PPO, DDPG

They applied these algorithms to try to solve various OpenAI Gym environments, from the simple ‘CartPole-v0’ to Atari games like ‘Breakout-v4’.
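For readers who haven’t used Gym before, the interaction loop that all of these algorithms plug into is very small. A sketch with a random agent (again assuming the classic pre-0.26 Gym API):

```python
# The basic Gym episode loop; replace the random action with your
# agent's policy.
import gym

env = gym.make("CartPole-v0")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()  # random action, for illustration
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```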

They spent 2-10 days on each algorithm (in parallel as experiments ran), depending on how in-depth they wanted to go. For some, they only got far enough to have a more-or-less-working implementation. For one (PPO), they tried to fix bugs and tune things for long enough to come close to the performance of the OpenAI Baselines implementation.

For each algorithm, they would first test on very easy environments, and then move on to more difficult ones. Note that an environment that is easy for one algorithm may not be easy for another: for example, despite its simplicity, CartPole has a long time horizon, which can be challenging for some algorithms.

Once an algorithm was partially working, they pushed for higher performance by looking for remaining bugs rather than just tuning hyperparameters: reviewing the code carefully, and collecting metrics such as average policy entropy as sanity checks. Finally, when they wanted to match the performance of Baselines, they scrutinized the Baselines implementations for small but important details, such as exactly how observations are preprocessed and normalized.
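As one concrete example of such a sanity-check metric (our illustration, not necessarily the exact check Daniel used): average policy entropy, computed from the policy’s action logits. An entropy that collapses to zero in the first few updates, or that never decreases at all, is often a symptom of a bug.

```python
# Average entropy (in nats) of a batch of categorical policies, given
# their action logits. Useful as a training sanity check.
import tensorflow as tf

def average_policy_entropy(logits):
    probs = tf.nn.softmax(logits)
    log_probs = tf.nn.log_softmax(logits)
    entropy = -tf.reduce_sum(probs * log_probs, axis=-1)
    return tf.reduce_mean(entropy)

# Uniform logits over 4 actions give the maximum entropy, ln(4) ≈ 1.386.
logits = tf.zeros([64, 4])
print(float(average_policy_entropy(logits)))  # ≈ 1.386
```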

By the end of the six weeks, Daniel could talk fluently about the key ideas in RL and the tradeoffs between different algorithms. Most importantly, he was able to implement and debug ML algorithms, going from the math in a paper to running code. In retrospect, Daniel reports wishing he had spent a little more time on the conceptual and mathematical fundamentals of ML, but says that overall this process prepared him well for the interview and the role, and was particularly well-suited to OpenAI’s focus on reinforcement learning.

Now apply for jobs

The specific positions below will eventually be filled, but you can find a continuously updated list of some of the most promising openings on the 80,000 Hours job board.

The following example job postings for software engineers on AI safety research teams specify that machine learning experience is not required:

  • OpenAI’s safety team is currently hiring a software engineer for a range of projects, including interfaces for human-in-the-loop AI training and collecting data for larger language models. (Update: this job posting is now closed.)
  • MIRI is hiring software engineers.
  • Ought is hiring research engineers with a focus on candidates who are excited by functional programming, compilers, program analysis, and related topics.

The following example job postings do expect experience with machine learning implementation:

  • DeepMind is hiring research engineers for their Technical AGI Safety team, Safe and Robust AI team – which works on neural net verification and robustness – and potentially others as well.
  • Google AI is hiring research software engineers in locations worldwide. Although Google AI does not have an “AI Safety” team, there are research efforts focused on robustness, security, interpretability, and learning from human feedback.
  • OpenAI’s safety team is hiring machine learning engineers to work on alignment and interpretability.
  • The Center for Human Compatible AI at Berkeley is hiring machine learning research engineers for 1-2 year visiting scholar positions to test alignment ideas for deep reinforcement learning systems.

When you apply to a larger organization with multiple areas of research, specify in your application which of them you are most interested in working on. Investigate the organization’s research areas in advance to make sure the areas you list are ones it actually works on. For example, don’t specify “value alignment” on an application to a company that has no researchers working on value alignment.

If you find that you cannot get a role contributing to safety research right now, you might look for a role in which you can gain relevant experience, and transition to a safety position later.

Non-safety-related research engineering positions are also available at other industry AI labs, though these are likely to be more competitive than roles on AGI safety teams.

Finally, you could consider applying to a 1-year fellowship/residency program at Google, OpenAI, Facebook, Uber, or Microsoft.

Learn more