How to pursue a career in research to lower the risks from superintelligent machines: a new career review.

80,000 Hours is a non-profit that gives you the information you need to find a fulfilling, high-impact career. Our advice is all free, tailored for talented graduates, and based on five years of research alongside academics at Oxford. Start with our career guide.


This is a summary of our full career review on artificial intelligence risk research.

Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we’ll see if we can help.

Many people we coach are interested in doing research into artificial intelligence (AI), in particular how to lower the risk that superintelligent machines do harmful things not intended by their creators – a field usually referred to as ‘AI risk research’. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:

Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.

We spoke to the leaders in the field, including top academics, the head of MIRI and managers in AI companies, and the key findings are:

  • Some organisations working on this problem, particularly those with strong academic affiliations, appear to be ‘talent constrained’, in that they find it harder to identify researchers they’d like to hire than to raise funding. However, this is a recent development, and the cause area could become ‘funding constrained’ again in future.
  • There are a variety of paths into this career, and not just for computer scientists. You can work in industry, academia, or non-profits focused on these issues. A PhD is a natural next step, but can sometimes be skipped. There are three main classes of research in this field: ‘strategic research’, ‘forecasting work’ and ‘technical research’. While all are highly intellectually demanding, they each require people with different skills and fields of expertise. As a result there is room for economists, philosophers and historians to contribute, and not just computer scientists and mathematicians.
  • The key aspects of personal fit are: you’re highly interested in and motivated by the problem; you enjoy thinking about philosophical questions; you enjoy doing research in general; and, for technical work, you have, or think you could realistically do well in, a top 20 PhD or Masters program in computer science or mathematics.
  • Working on this research question creates some professional risk because it is an area of research that is not yet well-integrated with or respected by the broader academic community.
  • Nonetheless, there are quite good fallback options for people who decide to later leave the field, especially within the technology industry.

Continue reading our full review of artificial intelligence risk research.


Author: Robert Wiblin

Rob studied both genetics and economics at the Australian National University (ANU), graduating top of his class and being named Young Alumnus of the Year in 2015.

He worked as a research economist in various Australian Government agencies, and then moved to the UK to work at the Centre for Effective Altruism, first as Research Director, then Executive Director, then Research Director for 80,000 Hours.

He was founding board Secretary for Animal Charity Evaluators and is a member of the World Economic Forum’s Global Shapers Community.