Is transformative AI coming sooner than we thought?
It seems like it probably is, which would mean that work to ensure this transformation goes well (rather than disastrously) is even more urgent than we thought.
In the last six months, there have been some shocking AI advances.
This caused the live forecast on Metaculus for when “artificial general intelligence” will arrive to plunge — the median declined 15 years, from 2055 to 2040.
You might think this was due to random people on the internet over-updating on salient evidence, but if you put greater weight on the forecasters who have made the most accurate forecasts in the past, the decline was still 11 years.
Last year, Jacob Steinhardt commissioned professional forecasters to make a five-year forecast on three AI capabilities benchmarks. His initial impression was that the forecasts were aggressive, but one year in, actual progress was ahead of predictions on all three benchmarks.
Particularly shocking were the results on a benchmark of difficult high school maths problems. The state-of-the-art model leapt from a score of 7% to 50% in just one year — more than five years of predicted progress. (And these questions were hard — e.g. a Berkeley PhD student scored ~75%.)
I recently ran an informal poll of people I trust who work in or around the AI safety field.
- Several reported no change to their estimates on when transformative AI would arrive.
- Several reported a small decrease.
- Two reported significant decreases (here is one public example).
So people’s timelines are clearly shortening overall. My sense is that the median estimate among these “insiders” has declined by 2–5 years. That’s less than the 11-year decline on Metaculus, partly because the broader Metaculus crowd has been moving towards the already more aggressive views of the “insider” group.
Where does all this leave us?
I think the best overall review of how to forecast AI is this article by Holden Karnofsky from September 2021. At that point, he estimated a ~50% chance we’ll see transformative AI within 40 years (by 2060).
But since all the updates above happened after September 2021, I would now personally guess the timeline is shorter than that.
If I were forced to name a figure (mainly based on combining the views of people I trust), I’d say there’s a 50% chance that transformative AI has arrived by somewhere between 2040 and 2050.
What really matters is not the exact year or probability, but that there’s a significant chance that AI totally changes the world in our lifetimes. And by “totally changes,” I mean makes things much weirder than almost all sci-fi.
Perhaps more importantly, it’s hard to rule out, say, a 10% chance of transformative AI arriving within the next 10 years.
If handled well, this transformation could bring about abundance and prosperity for everyone. But in the worst case, we could lose control of the AI systems themselves. Unable to govern beings with capabilities far greater than our own, we might find ourselves with as little control over our future as chimpanzees have over theirs.
This is a strange and overwhelming situation, and one that I don’t feel like I’ve fully internalised myself.
But if you’re interested in doing something about it, this “call to vigilance” is a good place to start. You can also check out a list of all our relevant articles and podcasts.
Our one-on-one advising team has also helped many people find jobs working on AI safety, and not only in technical roles. For instance, Jonas Schuett used his background as a lawyer to help found the Legal Priorities Project and advise the UK government on AI.
Want to pursue a career in AI safety?
If you think that helping the transition to transformative AI go well might be a career option for you, we’d be excited to advise you on next steps, one-on-one. We can help you consider your options, make connections with others working in the same field, and possibly even help you find jobs or funding opportunities — all for free.
Apply to speak with our team