The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ – a general intellect that is much smarter than the best humans, in practically every field.
But according to Allan Dafoe – Senior Research Fellow in the International Politics of AI at Oxford University – even if we stopped at today’s AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:
- Mass labor displacement, unemployment, and inequality;
- The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
- Imagery intelligence and other mechanisms for locating most of the ballistic-missile submarines that countries rely on for their ability to retaliate after a nuclear attack;
- Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
- Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.
Allan is Director of the Center for the Governance of AI, at the Future of Humanity Institute within Oxford University. His goals have been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation. His current focus is helping humanity safely navigate the invention of advanced artificial intelligence.
I ask Allan:
- What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
- Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?
- How can AI be well-governed?
- How should we think about the idea of arms races between companies or countries?
- What would he say to people who are skeptical about the importance of this topic?
- How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
- What are the most urgent questions to deal with in this field?
- What can people do if they want to get into the field?
- Is there anything unusual that people can look for in themselves to tell if they’re a good fit to do this kind of research?
The 80,000 Hours podcast is produced by Keiran Harris.