This is a summary of our full career review on artificial intelligence risk research.
Have you read the profile and think you want to contribute to artificial intelligence risk research? Fill out this form and we'll see if we can help.
Many people we coach are interested in doing research into artificial intelligence (AI), in particular into how to lower the risk that superintelligent machines will do harmful things not intended by their creators – a field usually referred to as 'AI risk research'. The reasons people believe this is a particularly pressing area of research are outlined in sources such as:
- The book Superintelligence: Paths, Dangers, Strategies by University of Oxford philosopher Prof Nick Bostrom.
- This more accessible introduction from Wait But Why.
- This GiveWell cause investigation.
- This cause description and associated papers from the Global Priorities Project.
Our goal with this career review was not to assess the cause area of AI risk research – on that we defer to the authors above. Rather we wanted to present some concrete guidance for the growing number of people who want to work on the problem.
We spoke to leaders in the field, including top academics, the head of MIRI, and managers at AI companies. The key findings are:
- Some organisations working on this problem,