The next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including “existential” risks. We want to find people to do foundational research into what governments and society should do to manage the long-term risks from advanced AI. This research could involve economics, politics, political science, security studies, international relations, history, and many other disciplines, as well as an understanding of AI, and mostly takes place within academia or leading AI companies like Google DeepMind. We can help you figure out what to study to enter this area, work out whether it’s for you, and then find jobs and funding. We’re in touch with many of the key researchers and funders in this area, such as the Future of Humanity Institute in Oxford.
Governments need to hire experts in AI and its social impact to manage this transition safely. We can help you develop relevant expertise, and find relevant jobs in the civil service, political parties, think tanks, scientific funding bodies, technology journalism, industry, and other areas – though we’re mostly focused on government positions. We can introduce you to mentors with backgrounds in the US and UK governments, and to other people on this path. We’re most able to help people who are interested in working on reducing the long-term risks, rather than short-term challenges like automation or lethal autonomous weapons, and who have engaged with the arguments as outlined in Superintelligence by Nick Bostrom.