3 reasons AGI might still be decades away

We recently argued that AGI could be here by 2030.
And we’re not the only ones — CEOs of leading AI labs and many AI researchers are saying similar things.
But many people disagree, and there’s a good chance that AGI won’t be here by 2030. Some think it could still be decades away.
So what are the reasons to expect a longer path to AGI?
Reason 1: The path to AGI isn’t obvious
Today’s AI systems can write excellent code, produce research reports, and do Nobel Prize-worthy work in protein folding.
But frontier systems still can’t:
- Independently carry out complicated tasks for hours, days, or weeks
- Interact with the physical world in complex and adaptive ways
- Consistently learn from past interactions to improve performance over time
In other words, we don’t yet have AGI — AI systems with general intelligence that can reliably replace humans on a wide range of tasks.
In January 2025, OpenAI’s Sam Altman declared, “we are now confident we know how to build AGI.” But how will we get there?
Some people think that we can build AGI by scaling up existing models. But others argue that scaling has only seriously improved AI performance in areas like software engineering, where the tasks are clearly defined and often quickly verifiable. It hasn’t really helped AI systems grapple with the real world, where things are much messier.
Maybe it’s these real-world skills that will make the difference between AIs that are genuinely capable of transforming society and those that are just helpful tools. Think of the difference between making an accurate medical diagnosis — which current AI systems can already do — and proactively guiding a patient through six months of treatment, responding to their changing needs along the way.
So what does it mean if scaling isn’t enough to get us to AGI?
Some researchers, like François Chollet, argue that the current AI paradigm is fatally flawed, and we need new ideas. And these might just take time to discover.
Reason 2: We might hit major bottlenecks or diminishing returns soon
The pace of AI progress has been rapid over the last few years. This has been thanks to increasing quantities of compute, consistent efficiency improvements, and billions in investment.
However, this pace might not last forever. We may run into bottlenecks:
- The investment needed to build bigger AI systems could dry up.
- AI chip production might not be able to keep up with demand.
- We could run out of high-quality data needed to train models.
- There could be diminishing returns to algorithmic improvements as we pick the low-hanging fruit.
We’ve argued elsewhere that these bottlenecks are likely to emerge by 2030, and possibly even sooner — and if we haven’t already achieved AGI by this point, progress could seriously slow down.
These potential bottlenecks also cast doubt on the idea that automating AI research would rapidly result in the development of AGI — even with automation, there might be limits on how fast progress can realistically go.
Other major world events could also stall progress: a global recession, another pandemic, or a great power war could all cause us to hit these bottlenecks sooner.
Reason 3: The world doesn’t seem to believe it yet
If AGI were just around the corner, wouldn’t we be seeing clearer signs?
Markets don’t seem to be pricing in a massive, near-term disruption to the economy. Governments aren’t scrambling to redesign institutions to navigate a fundamentally new world. And while policymakers are starting to act, it’s not with the urgency or scale you’d expect if they actually believed a civilisation-defining technology was around the corner.
Even within the AI community, the idea of AGI by 2030 is controversial. Some experts are sounding the alarm, but others think we’re decades away from very advanced AI systems.
None of this proves AGI won’t arrive very soon. But if your model of the future is out of sync with what most of the world — including markets, policymakers, and even some domain experts — is doing, it’s worth asking why.
So where does this leave us?
Despite all of this, we still think that AGI could be here very soon. And we think this idea is worth taking very seriously, even if you think slower progress is more likely.
Importantly, many people who dismiss the idea that we’re on the brink of AGI still think it might be only a couple of decades away, or less. For example:
- Meta’s Yann LeCun says human-level AI “will take several years if not a decade.”
- Cognitive scientist Gary Marcus says AGI could “perhaps [come] 10 or 20 years from now.”
- François Chollet (formerly at Google) says AGI is “likely in the next 10–15 years.”
- AI researchers Ege Erdil and Tamay Besiroglu have recently made a case for AGI being 20–30 years away.
Not long ago, the debate was over whether AGI was even possible this century. Now it’s about whether it’s a few years or 30 years away.
As Helen Toner noted, “even the skeptical view implies we’re in for a wild decade or two.”
And even if AGI is a few decades away, we still need people working to steer humanity away from catastrophic AI risks now. The problems might take a lot of work to address — and the stakes are extremely high.
Learn more:
- Forecaster reacts: METR’s bombshell paper about AI acceleration by Peter Wildeford
- Why I have slightly longer timelines than some of my guests by Dwarkesh Patel
- “Long” timelines to advanced AI have gotten crazy short by Helen Toner
- The case for multi-decade AI timelines by Ege Erdil
- AI stocks could crash by Benjamin Todd