If a smarter-than-human AI system were developed, who would decide when it was safe to deploy? How can we discourage organisations from deploying such a technology prematurely in a race to beat their competitors? Should we expect the world’s top militaries to try to use AI systems for strategic advantage – and if so, do we need an international treaty to prevent an arms race?
Questions like these are the domain of AI policy experts.
We recently launched a detailed guide to pursuing careers in AI policy and strategy, put together by Miles Brundage at the University of Oxford’s Future of Humanity Institute.
It complements our article on the importance of positively shaping the development of artificial intelligence. If you are considering a career in AI safety, both are essential reading.
I interviewed Miles to dig deeper into his advice. We discuss the main career paths; what to study; where to apply; how to get started; which topics most need research; and what progress the field has made so far.
The audio, summary and full transcript are below.
Robert Wiblin: Hi,