When do experts expect AGI to arrive?
I’ve argued elsewhere that it’s plausible AGI arrives before 2030. That’s a big claim.
As a non-expert, I'd love for there to be experts who could tell us what to think.
Unfortunately, there aren’t.
There are only different groups with different weaknesses. Here’s an overview.
1. Leaders of AI companies
The leaders of AI companies are saying that AGI arrives in 2–5 years.
This is easy to dismiss. The group is obviously selected to be bullish on AI, and its members have strong incentives to hype their own work and raise funding.
However, I don’t think their views should be totally discounted. They’re the people with the most visibility into the capabilities of next-generation systems.
And they’ve also been among the most right about recent progress, even if they’ve been too optimistic.
Most likely, progress will be slower than they expect, but maybe only by a few years.
2. AI researchers in general
One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.
The survey asked for forecasts of “high-level machine intelligence,” defined as when AI can accomplish every task better or more cheaply than humans. The median respondent estimated a 25% chance of this by the early 2030s and a 50% chance by 2047 — with some giving answers in the next few years and others hundreds of years in the future.
The median estimate of the chance of an AI being able to do the job of an AI researcher by 2033 was 5%.1
They were also asked when they expected AI to be able to perform a list of specific tasks. Between the 2022 and 2023 surveys, most of their answers moved much earlier, showing they were just as surprised as everyone else by the success of ChatGPT and LLMs.

Finally, they were asked about when we should expect to be able to “automate all occupations,” and they responded with much longer estimates (20% chance by 2079).
It’s not clear to me why ‘all occupations’ should be so much further in the future than ‘all tasks’ – occupations are just bundles of tasks. (In addition, the researchers think once we reach ‘all tasks,’ there’s about a 50% chance of an intelligence explosion.)
Perhaps respondents envision a world where AI is better than humans at every task, but humans continue to work in a limited range of jobs (e.g. priests).2
Moreover, some predictions already appear outdated. For instance, they anticipated AI wouldn't be able to write simple Python code until 2025, yet this was possible by 2024, if not earlier.
Finally, forecasting AI progress requires a different skillset than conducting AI research.
For all these reasons, I’m skeptical about their specific numbers.
My main takeaway is that, by 2023, a significant fraction of researchers in the field believed that something like AGI is a realistic near-term possibility, even if many remain skeptical.
If 30% of experts say your airplane is going to explode, and 70% say it won’t, you shouldn’t conclude ‘there’s no expert consensus, so I won’t do anything.’
The reasonable course of action is to act as if there’s a significant explosion risk. Confidence that it won’t happen seems difficult to justify.
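To make the arithmetic behind this concrete, here's a minimal sketch using linear opinion pooling. The per-camp risk estimates are illustrative assumptions for the analogy, not numbers from any survey:

```python
# Linear opinion pooling: weight each camp's probability estimate by the
# share of experts in that camp. All numbers are illustrative assumptions.
views = [
    (0.30, 0.50),  # 30% of experts put a ~50% chance on an explosion
    (0.70, 0.01),  # 70% of experts put a ~1% chance on it
]

pooled = sum(share * prob for share, prob in views)
print(f"Pooled explosion risk: {pooled:.1%}")  # -> 15.7%
```

Even with the skeptical majority getting most of the weight, the pooled risk stays far above any level at which you'd board the plane.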
3. Expert forecasters
Instead of seeking AI expertise, we might consider forecasting expertise.
Metaculus aggregates hundreds of forecasts, which collectively have proven effective at predicting near-term political and economic events.
It has a forecast about AGI, which is defined with four conditions (detailed on the site).
As of December 2024, the aggregated forecast gives a 25% chance of AGI by 2027 and a 50% chance by 2031.
The forecast has moved dramatically earlier over time.

One problem is that this definition is overly stringent, because it includes general robotic capabilities. Robotics is currently lagging, so satisfying this definition could be harder than having an AI that can do remote work jobs or help with scientific research.
But the definition is also not stringent enough because it doesn’t include anything about long-horizon agency or the ability to have novel scientific insights.
Metaculus also suffers from selection effects: its forecasts seem to be drawn from people who are unusually interested in AI.
Superforecasters in 2022
Another survey, the Existential Risk Persuasion Tournament (XPT), asked 33 people who qualified as superforecasters of political events to forecast AGI.
Their median estimate was a 25% chance of AGI (using the same definition as Metaculus) by 2048 — much further away.
However, these forecasts were made in 2022, before ChatGPT caused many people to shorten their estimates.
The superforecasters also lack expertise in AI, and they made predictions about growth in training compute that have already been falsified.
In 2023, another group of especially successful superforecasters, Samotsvety, which has engaged much more deeply with AI, made much shorter estimates: ~28% chance of AGI by 2030. But they’re again selected for interest in AI.
All of the forecasters have been selected for being good at forecasting near-term current events, which could fail to generalise to forecasting long-term, radically novel events.
Summary of expert views
In sum, it’s a confusing situation. Personally, I put some weight on all the groups, which averages me out at ‘experts think AGI before 2030 is a realistic possibility, but many think it’ll be much longer.’ Mostly I prefer to think about the question bottom up, as I’ve done here.
| Group | 25% chance of AGI by | Strengths | Weaknesses |
|---|---|---|---|
| AI company leaders (January 2025) | 2026 (unclear definition) | Best visibility into next generation of AI; most right recently | Selection bias; incentives to hype; no forecasting expertise |
| Published AI researchers (2023) | ~2032 (defined as 'can do all tasks better than humans') | Understand the technology; less selection bias | No forecasting expertise; gave inconsistent and already-falsified answers; would probably give earlier answers in 2025 |
| Metaculus forecasters (January 2025) | 2027 (four-part definition incl. robotic manipulation) | Expertise in near-term forecasting; interested in AI | Appear to be selected for interest in AI; near-term forecasting expertise may not generalise |
| Superforecasters via XPT (2022) | 2047 (same definition as above) | Expertise in near-term forecasting | Know less about AI; some forecasts already falsified; predated the 2023 AI boom; near-term forecasting expertise may not generalise |
| Samotsvety superforecasters (2022) | ~2029 ('transformative AI') | Extremely good forecasting track record; more knowledgeable about AI | Same as above, though more knowledgeable about AI; also more selected to think AI is a big deal |
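As a toy illustration of how one might 'put some weight on all the groups,' here's a sketch that pools the '25% chance by' dates from the table. The weights are my own illustrative assumptions, not anything these groups reported:

```python
# Weighted pooling of the "25% chance of AGI by" dates from the table above.
# The weights are illustrative assumptions about how much to trust each group;
# averaging percentile dates is a crude proxy for mixing full distributions.
forecasts = {
    "AI company leaders":       (2026, 0.15),
    "Published AI researchers": (2032, 0.30),
    "Metaculus forecasters":    (2027, 0.25),
    "XPT superforecasters":     (2047, 0.15),
    "Samotsvety":               (2029, 0.15),
}

pooled = sum(year * weight for year, weight in forecasts.values())
pooled /= sum(weight for _, weight in forecasts.values())
print(f"Pooled 25% date: ~{pooled:.0f}")  # -> ~2032 with these weights
```

Under these assumed weights, the pooled 25% date lands in the early 2030s; shifting weight toward the shorter-timeline groups pulls it before 2030, which is why 'a realistic possibility, but far from certain' seems like the right summary.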
Learn more
- Why AGI might be here by 2028
- Through a glass darkly by Scott Alexander is an exploration of what can be learned from expert forecasts on AI.
- Results of the largest survey of AI researchers from 2023, and some skeptical discussion of it.
Notes and references
- Median probability of being able to do the job of an AI researcher by 2043 was 10%. An AI that can meaningfully help speed up AI research will probably arrive sooner (which might accelerate a "full" automated researcher).↩
- I'd also argue "all tasks" is more relevant to figuring out when an acceleration of AI or scientific research might be possible.↩