#31 – Allan Dafoe on trying to prepare the world for the possibility that AI will destabilise global politics

The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ – a general intellect that is much smarter than the best humans, in practically every field.

But according to Allan Dafoe – Senior Research Fellow in the International Politics of AI at Oxford University – even if we stopped at today’s AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:

  • Mass labour displacement, unemployment, and inequality;
  • The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;
  • Imagery intelligence and other mechanisms that could reveal most of the ballistic missile-carrying submarines that countries rely on to respond to a nuclear attack;
  • Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;
  • Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.

Allan is Director of the Center for the Governance of AI at the Future of Humanity Institute, Oxford University. His goal has been to understand the causes of world peace and stability, which in the past meant studying why war has declined, the role of reputation and honour as drivers of war, and the motivations behind provocation in crisis escalation. His current focus is helping humanity safely navigate the invention of advanced artificial intelligence.

I ask Allan:

  • What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?
  • Is his work just a continuation of previous research on transformative technologies, like nuclear weapons?
  • How can AI be well-governed?
  • How should we think about the idea of arms races between companies or countries?
  • What would he say to people sceptical about the importance of this topic?
  • How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?
  • What are the most urgent questions to address in this field?
  • What can people do if they want to get into the field?
  • Is there anything unusual that people can look for in themselves to tell if they’re a good fit to do this kind of research?

The 80,000 Hours podcast is produced by Keiran Harris.

Highlights

Nuclear energy is useful, but it wasn’t crucially useful, whereas AI seems to be on track to be the new electricity or the new industrial revolution, in the sense that it’s a general-purpose technology that will completely transform and invigorate every sector of the economy. One difference, then, is that its economic bounty and the gradient of incentives to develop it are so much more substantial than for most other dual-use technologies we’re used to thinking about governing.

There’s another discussion of an AI arms race that was most prominently stimulated by Putin’s quote, “Whoever leads in AI will rule the world.” This quote garnered a huge amount of attention from media around the world and from serious national security thinkers. One property of an arms race is that, in many ways, all it takes is the perception that the other side believes an arms race is taking place to generate the possibility of one.

If I think that you believe this technology is strategic, even if I personally don’t believe it to be strategic, then I need to worry about how your beliefs might shape your behaviours and your willingness to take risks. The quote “Whoever leads in AI will rule the world” was not an official statement of Russian foreign policy, nor a summary of a report the Russian military produced. Rather, it seems to have been an extemporaneous comment that Putin made while giving children feedback on their science projects.

Some problems are more important than others. However, we are sufficiently uncertain about which core problems are precise and modular enough to be really focused on that I would recommend a different approach. Rather than trying to find the single highest-leverage, most neglected problem, I would advise people interested in working in this space to get a feel for the research landscape.

They can look at some of the talks at EAG and then ask themselves: What are their comparative advantages? What’s their driving interest or passion? Do they believe they have a neglected insight or idea? What’s their background? What community of scholars and policymakers would they feel most comfortable in?
