Why experts and forecasters disagree about AI risk

The idea this week: even some sceptics of AI risk think there’s a real chance of a catastrophe in the next 1,000 years.

That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk.

It’s hard to reach consensus on the level of risk we face from AI. So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.

The interview covers a lot of issues, but here are some key details that stood out on the topic of AI risk:

  • Domain experts in AI estimated, on average, a 3% chance of AI-caused human extinction by 2100, while superforecasters put it at just 0.38%.
  • Both groups agreed on a high likelihood of “powerful AI” being developed by 2100 (around 90%).
  • Even AI risk sceptics saw a 30% chance of catastrophic AI outcomes over a 1,000-year timeframe.
  • But the groups showed little convergence after extensive debate, suggesting some deep-rooted disagreements.

Ezra’s research found some key differences in how these groups view the world:

  • Sceptics tend to see change as gradual, while concerned experts anticipate more abrupt shifts.
  • There were divergent views on humanity’s ability to coordinate and regulate AI development.
  • Sceptics generally view the world as more resilient to catastrophic risks.

You can check out the section of the interview that discusses Ezra’s hypotheses about the reasons behind these stark differences in views of AI risk.

There were a lot of important takeaways from the interview for me. First, I’m excited to see this kind of novel work applying interesting techniques to try to resolve important disagreements. I think there’s probably a lot more to be learned by iterating on these methods and applying them in other domains.

Second, I think it demonstrates an admirable willingness to learn from disagreements, rather than just engaging in unproductive public fights.

And third, I think it highlights that even though we can improve our understanding of these issues, some of the most important issues are going to remain highly uncertain. And we’re going to need to figure out how to proceed — in our careers, in our lives, and as a civilisation — in the face of that uncertainty.

You can find the full interview on our website, Apple Podcasts, YouTube, Spotify, and elsewhere. Feel free to share it with anyone who might find it useful.

This blog post was first released to our newsletter subscribers.

Join over 450,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also mail you a free book!

Learn more: