How to pursue a career in research to lower the risks from superintelligent machines: a new career review.


This is a summary of our full career review on artificial intelligence risk research.

Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we’ll see if we can help.

Many people we coach are interested in doing research into artificial intelligence (AI), in particular how to lower the risk that superintelligent machines do harmful things not intended by their creators – a field usually referred to as ‘AI risk research’. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:

Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.

We spoke to leaders in the field, including top academics, the head of MIRI, and research managers at AI companies. The key findings are:

  • Some organisations working on this problem,


High impact interview 1: Existential risk research at SIAI

The plan: to conduct a series of interviews with people who have been successful in careers that are key candidates for high impact.

The first person to agree to an interview is Luke Muehlhauser (aka lukeprog of Less Wrong), executive director of the Singularity Institute for Artificial Intelligence. The Institute's mission is to influence the development of greater-than-human intelligence so as to ensure that it's a force for human flourishing rather than extinction.
