#40 – How well can we actually predict the future? Katja Grace on why expert opinion isn’t a great guide to AI’s impact and how to do better

Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?

Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.

Note: Katja’s organisation AI Impacts is currently hiring part- and full-time researchers.

There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.

But there are also many things we’re now able to predict confidently — like the climate of Oxford in five years — that we no longer give ourselves much credit for.

Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.

One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability?

And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?

A significant historical example was the development of nuclear weapons. Over thousands of years, the energy density of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.

Katja likes to compare our efforts to predict AI with those to predict climate change. While both are major problems (though Katja and 80,000 Hours have argued that we should prioritise AI safety), only climate change has prompted hundreds of millions of dollars of prediction research.

That neglect creates a high-impact opportunity, and Katja believes that talented researchers should strongly consider following her path.

Some promising research questions include:

  • What’s the relationship between brain size and intelligence?
  • How frequently, and when, do technological trends undergo discontinuous progress?
  • What’s the explanation for humans’ radical success over other apes?
  • What are the best arguments for a local, fast takeoff?

In today’s interview we also discuss:

  • Why is AI Impacts one of the most important projects in the world?
  • How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
  • How does writing an academic paper differ from posting a summary online?
  • When will unaided machines be able to produce better and cheaper work than humans for every possible task?
  • What’s one of the jobs most likely to be automated soon?
  • Are people always just predicting the same timelines for new technologies?
  • How do AGI researchers differ from other AI researchers in their predictions?
  • What are attitudes to safety research like within ML? Are there regional differences?
  • Are there any other types of experts we ought to talk to on this topic?
  • How much should we believe experts generally?
  • How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
  • How quickly has the processing capacity for machine learning problems been increasing?
  • What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
  • What are the best arguments for and against discontinuous development of AI?
  • Comparing our predictions of climate change and AI development
  • How should we measure human capacity to predict generally?
  • How have things changed in the AI landscape over the last 5 years?
  • How likely is an AI explosion?
  • What should we expect from an economy dominated by AI?
  • Should people focus specifically on the early timeline scenarios even if they consider them unlikely?
  • How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Highlights

There are basically two kinds of things people talk about, where one of them is something like human level AI and one of them is something like vastly superhuman AI. I think when people talk about human level AI, they are often vague on the details that might matter a lot. For instance, there’s AI that can do what a human can do but for like a billion dollars an hour. It’s different from AI that can do what a human does at the price of a human, and often people are ambiguous about which one they’re talking about. Also, like, are we thinking about physical tasks as well? Like does the robotics have to be ready?

Directly asking [machine learning researchers] when high level AI is gonna appear is probably pretty uninformative. I still think if it was very close, I would expect to get a bunch more close answers. But yeah, I think we should heavily discount the possibility … Like I don’t think we should ask them when they think it is and then take that as our main guess. I think it’s a small amount of evidence. But I think there might be good ways to use AI experts in combination with other things to come up with good estimates – there might be better forecasting methods and that sort of thing.

Often when people think that there will be a discontinuity in AI progress, they implicitly have some theory about it. Because it’s sort of an algorithm and maybe it’s likely to be a very simple one or something. So we can ask: okay, are things that are algorithms more likely to undergo fast progress? We usually measure these things in terms of how long it would have taken to make this amount of progress at the usual rates. Nuclear weapons were six thousand years of previous rates in like one go, so that’s big.

The next biggest one we could find was high-temperature superconductors, where they underwent maybe like a hundred years of previous progress. I think this was people discovering different materials that could be superconductors. They hadn’t really realised that there’s a whole different class of things that could be superconductors. And I think they might have sort of had some theory that ruled it out, then they came across this class and suddenly things went very fast.

So, I think it’s interesting that both of these are sort of like discovering a new thing in nature.
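To make that ‘years of progress at previous rates’ metric concrete, here’s a minimal sketch of the calculation (purely illustrative, with made-up numbers, and not AI Impacts’ actual code or methodology), assuming roughly exponential progress before the jump:

```python
# Illustrative sketch only -- not AI Impacts' methodology; all numbers are made up.
# Express a sudden jump in a metric as "years of progress at previous rates",
# assuming the metric grew roughly exponentially before the jump.
import math

def years_of_progress(old_years, old_values, new_value):
    """Years the jump to new_value represents at the pre-jump growth rate."""
    # Average log-growth per year over the pre-jump period.
    log_rate = (math.log(old_values[-1]) - math.log(old_values[0])) / (
        old_years[-1] - old_years[0]
    )
    # Size of the jump, in log space, relative to the last pre-jump value.
    jump = math.log(new_value) - math.log(old_values[-1])
    return jump / log_rate

# Hypothetical example: a metric that doubled every decade suddenly becomes
# 1,000 times better -- that's roughly 100 years of progress at the old rate.
print(years_of_progress([1900, 1910, 1920], [1.0, 2.0, 4.0], 4000.0))
```

On a metric like the energy density of explosives, this is the kind of calculation that makes nuclear weapons look like thousands of years of progress arriving in one step.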

Suppose you’re a company and maybe you’re mining coal and you make some AI that cares about mining coal. Maybe it sort of knows about human values enough to, for like the next ten years or something, not do anything terrible, but overall let’s say it’s like a bunch of agents who are smarter than humans and better than humans in every way, but they just care a lot about mining coal. I expect in the long run for them to basically accrue resources and decision-making and control over things and so on, ’cause they’re basically better than us in every way. And in the long run I’d expect the world to move toward just, like, trying to mine a lot of coal and not do anything that humans would have cared about, which, you know, might be fine if they’re the right kind of creatures who really get a lot of pleasure from the coal mine or something.

But you also might imagine that they’re not even conscious or anything; the consciousness thing doesn’t really matter for what will happen in the world. Like, they might still be very good at taking control of things. I guess it seems similar to what happened with, say, pre-human, chimp-like species and so on. If they’d had a choice about letting humans come into existence, it seems like it was probably a bad idea for them, even if they could maybe kill a particular human or something. They quickly lost control of the situation ’cause we were just better at everything.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

Subscribe here, or anywhere you get podcasts:

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.