If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from deploying such a technology prematurely out of fear of being beaten to it by a competitor? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?
Questions like these are the domain of AI policy experts.
We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.
It complements our article outlining the importance of positively shaping artificial intelligence, and our podcast with Dr Dario Amodei of OpenAI on more technical artificial intelligence safety work, which builds on this guide. If you are considering a career in artificial intelligence safety, they’re all essential reading.
I interviewed Miles to ask the questions I still had after he finished his career guide. We discuss the main career paths; what to study; where to apply; how to get started; which topics are most in need of research; and what progress has been made in the field so far.