How to pursue a career in research to lower the risks from superintelligent machines: a new career review.


This is a summary of our full career review on artificial intelligence risk research.

Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we’ll see if we can help.

Many people we coach are interested in doing research into artificial intelligence (AI), in particular how to lower the risk that superintelligent machines do harmful things not intended by their creators – a field usually referred to as ‘AI risk research’. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:

Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.

We spoke to leaders in the field, including top academics, the head of MIRI, and managers at AI companies. The key findings are:

  • Some organisations working on this problem, particularly those with strong academic affiliations, appear to be ‘talent constrained’, in that they find it harder to identify researchers they’d like to hire than to raise funding. However, this is a recent development, and the cause area could become ‘funding constrained’ again in future.
  • There are a variety of paths into this career, and not just for computer scientists. You can work in industry, academia or non-profits focused on these issues. A PhD is a natural next step, but can sometimes be skipped. There are three main classes of research in this field: ‘strategic research’, ‘forecasting work’ and ‘technical research’. While all are highly intellectually demanding, they each draw on different skills and fields of expertise. As a result, there is room for economists, philosophers and historians to contribute, and not just computer scientists and mathematicians.
  • The key aspects of personal fit are that you’re highly interested in and motivated by the issues; you enjoy thinking about philosophical questions; you enjoy doing research in general; and, for technical work, you have completed, or believe you could realistically do well in, a top 20 PhD or Master’s program in computer science or mathematics.
  • Working on these questions carries some professional risk, because the field is not yet well integrated with, or widely respected by, the broader academic community.
  • Nonetheless, there are quite good fallback options for people who decide to later leave the field, especially within the technology industry.

Continue reading our full review of artificial intelligence risk research.

Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we’ll see if we can help.