I think there’s a notion that predicting things is quite hard in general, which it is in some sense. But there are a bunch of things that we can predict well, that we have predicted, and we don’t give ourselves credit for them because they’re ‘easy’ to predict.
Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?
Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.
Note: Katja’s organisation AI Impacts is currently hiring part- and full-time researchers.
There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.
But there are also many things we’re now able to predict confidently — like the climate of Oxford in five years — that we no longer give ourselves much credit for.
Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.
One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability?
And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?
A significant historical example was the development of nuclear weapons. Over thousands of years, the energy density of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.
Katja likes to compare our efforts to predict AI with those to predict climate change. While both are major problems (though Katja and 80,000 Hours have argued that we should prioritise AI safety), only climate change has prompted hundreds of millions of dollars of prediction research.
That neglect creates a high-impact opportunity, and Katja believes that talented researchers should strongly consider following her path.
Some promising research questions include:
- What’s the relationship between brain size and intelligence?
- How frequently, and when, do technological trends undergo discontinuous progress?
- What’s the explanation for humans’ radical success over other apes?
- What are the best arguments for a local, fast takeoff?
In today’s interview we also discuss:
- Why is AI Impacts one of the most important projects in the world?
- How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
- How does writing an academic paper differ from posting a summary online?
- When will unguided machines be able to produce better and cheaper work than humans for every possible task?
- What’s one of the most likely jobs to be automated soon?
- Are people always just predicting the same timelines for new technologies?
- How do AGI researchers differ from other AI researchers in their predictions?
- What are attitudes to safety research like within ML? Are there regional differences?
- Are there any other types of experts we ought to talk to on this topic?
- How much should we believe experts generally?
- How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
- How quickly has the processing capacity for machine learning problems been increasing?
- What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
- What are the best arguments for and against discontinuous development of AI?
- How do our predictions of climate change compare with our predictions of AI development?
- How should we measure human capacity to predict generally?
- How have things changed in the AI landscape over the last 5 years?
- How likely is an intelligence explosion?
- What should we expect from an economy dominated by AI?
- Should people focus specifically on the early timeline scenarios even if they consider them unlikely?
- How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours podcast is produced by Keiran Harris.