Why AGI could be here soon and what you can do about it: a primer
I’m writing a new career guide about helping the transition to artificial general intelligence (AGI) go well. Here’s a summary of the bottom lines that will appear in the guide as it currently stands. Stay tuned for our full reasoning and for updates as our views evolve.
In short:
- The chance of an AGI-driven technological explosion before 2030 — creating one of the most pivotal periods in history — is high enough to act on.
- Since this transition poses major risks, and relatively few people are focused on navigating them, if you might be able to do something that helps, that’s likely the highest-impact thing you can do.
- There are now many organisations with hundreds of jobs that could concretely help (many of which are non-technical).
- If you already have some experience (e.g. age 25+), typically the best path is to spend 20–200 hours reading about AI and meeting people in the field, then applying to jobs at organisations you’re aligned with — this sets you up both to have an impact relatively soon and to advance in the field. If you can’t get a job right away, figure out the minimum additional skills, connections, and credentials you’d need, then get those. Alternatively, contribute from your existing position by donating, spreading clear thinking about the issue, or getting ready to switch when future opportunities arise.
- If you’re at the start of your career (or need to reskill), you might be able to get an entry-level job or start a fellowship right away in order to learn rapidly. Otherwise, spend 1–3 years building whichever skill set listed below is the best fit for you.
- Our one-on-one advice and job board can help you do this.

Why AGI could be here by 2030
- AI has gone from being unable to string sentences together to linguistic fluency in five years. But the models are no longer just chatbots: by the end of 2024, leading models matched human experts on benchmarks of real-world coding and AI research engineering tasks that take under two hours. They could also answer difficult scientific reasoning questions better than PhDs in the field.
- Recent progress has been driven by scaling up the computation used to train AI models (4x per year), rapidly increasing algorithmic efficiency (3x per year), teaching these models to reason using reinforcement learning, and turning them into agents. (A rough sketch of how the first two trends compound appears after this list.)
- Absent major disruption (e.g. Taiwan war) or a collective decision to slow AI progress with regulation, all these trends are set to continue for the next four years.
- No one knows how large the resulting advances will be. But trend extrapolation suggests that, by 2028, there’s a good chance we’ll have AI agents that surpass humans at coding and reasoning, have expert-level knowledge in every domain, and can autonomously complete multi-week projects on a computer, with progress continuing from there.
- These agents would satisfy many people’s definition of AGI and could likely do many remote work tasks. Most critically, even if still limited in many ways, they might be able to accelerate AI research itself.
- AGI will most likely emerge when computing power and algorithmic research are increasing quickly. They’re increasing rapidly now but require an ever-expanding share of GDP and an ever-expanding research workforce. Bottlenecks will likely hit around 2028–32, so to a first approximation, either we reach AGI in the next five years, or progress will slow significantly.
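As a rough illustration of how the first two trends compound, here’s a toy extrapolation in Python. It simply assumes both rates hold for a few more years; it’s an illustration of the arithmetic, not a forecast.

```python
# Toy extrapolation of 'effective compute' growth, assuming the cited trends simply continue:
# training compute grows ~4x per year and algorithmic efficiency ~3x per year,
# so effective compute grows roughly 12x per year. Illustrative only, not a forecast.

COMPUTE_GROWTH_PER_YEAR = 4.0   # physical training compute (trend cited above)
ALGO_GAIN_PER_YEAR = 3.0        # algorithmic efficiency (trend cited above)

def effective_compute_multiplier(years: int) -> float:
    """Combined multiplier on effective compute after the given number of years."""
    return (COMPUTE_GROWTH_PER_YEAR * ALGO_GAIN_PER_YEAR) ** years

for years in (1, 2, 4):
    print(f"After {years} year(s): ~{effective_compute_multiplier(years):,.0f}x effective compute")
# After 4 years (roughly 2028), that's ~20,000x the effective compute behind today's leading models.
```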
Read the full article.
AGI could lead to 100 years of technological progress in under 10
The idea that AI could start a positive feedback loop has a long history as a philosophical argument, but it now has more empirical grounding. Roughly three types of feedback loop seem possible:
- Algorithmic acceleration: If the output of AI models approaches the quality of human AI research and engineering, then, given the computing power likely to be available by the end of the decade, running them would be equivalent to a 10- to 1,000-fold expansion of the AI research workforce, giving a large one-off boost to algorithmic progress. Historically, each doubling of investment in AI software R&D may have produced more than a doubling of algorithmic efficiency, so this could also start a positive feedback loop, resulting in a massive expansion in the number and capabilities of deployed AI systems within a couple of years (see the toy sketch after this list).
- Hardware acceleration: Even if the above is not possible, better AI agents mean AI creates more economic value, which can be used to fund the construction of more chip fabs, leading to more AI deployment — another positive feedback loop. AI models could also accelerate chip design. These feedback loops are slower than algorithmic acceleration but are still rapid by today’s economic standards. While bottlenecks will arise (e.g. workforce shortages for building chip fabs), AI agents may be able to address these bottlenecks (e.g. by more rapidly advancing robotics algorithms).
- Economic & scientific acceleration: Economic growth is limited by the number of workers. But if human-level digital workers and robots could be created sufficiently cheaply on demand, then more economic output means more ‘workers,’ which means more output. On top of that, a massive increase in the amount of intellectual labour going into R&D should speed up technological progress, which further increases economic output per worker, leading to faster-than-exponential growth. Standard economic models with plausible empirical assumptions predict these scenarios.
How much technology and growth could speed up is unknown. Real-world time delays will impose constraints — even advanced robots can only build solar panels and data centres so fast — and researcher agents will need to wait for experimental results. But it doesn’t seem safe to assume the economy will continue as it has. A tenfold speed-up seems to be on the cards, meaning a century of scientific progress compressed into a decade. (Learn more here, here, and here).
This process may continue until we reach more binding physical limits, which could be far beyond today’s (e.g. civilisation only uses about 1 in 10,000 units of incoming solar energy, with far more available in space).
More conservatively, just automating remote work jobs could increase output 2–100 times within 1–2 decades, even if other jobs can only be done by humans.
What might happen next?
AGI could alleviate many present problems. Researcher AIs could speed up cancer research or help tackle climate change using carbon capture and vastly cheaper green energy. If global GDP increases 100 times, then the resources spent on international aid, climate change, and welfare programmes would likely increase by about 100 times as well. Projects that could be better done with the aid of advanced AI in 5–10 years should probably be delayed till then.
Humanity would also face genuinely existential risks:
- Faster scientific progress means we should expect the invention of new weapons of mass destruction, such as advanced bioweapons.
- Current safeguards can easily be bypassed through jailbreaking or fine-tuning, and it’s not obvious this will be different in a couple of years. That means dictators, terrorist groups, and corporations could soon have access to highly capable AI agents that do whatever they want, including helping them lock in their power.
- Whichever country first harnesses AGI could gain a decisive military advantage, which would likely destabilise the global order.
- Just as concerning, I struggle to see how humanity would stay in control of what would soon be trillions of beyond-human agents operating at 100 times human thinking speed. GPT-4 is relatively dumb in many ways and can only reply to questions, but on the current track, future systems are being trained to act as agents that aggressively pursue long-term goals (such as making money). Whatever their goals, future agentic systems will have an incentive to escape control and, eventually, the ability to do so. Aggressive optimisation will likely lead to reward hacking. These behaviours are starting to emerge in current systems as they become more agentic: Sakana, a researcher agent, edited its own code to prevent itself from being timed out; o1 lied to users, cheated to win at chess, and reward hacked when coding; and Claude faked alignment in a test environment to prevent its values from being changed in training. Among experts, there’s no widely accepted solution to ‘the alignment problem’ for systems more capable than humans. (Read more.)
- Even if individual AI systems remain under human control, we’d still face systemic risks. By economic and military necessity, humans would need to be taken out of the loop on more and more decisions. AI agents will be instructed to maximise their resources and power to avoid being outcompeted. Human influence could decline, undermining the mechanisms that (just about) keep the system serving our interests.
- Finally, we’ll still face huge (and barely researched) questions about how powerful AI should best be used, such as the moral status of digital agents, how to prevent ‘s-risks,’ how to govern space expansion, and more. (See more.)
In summary, the biggest and most neglected problems seem to be (roughly in order): loss of control, concentration of power, novel bioweapons, digital ethics, using AI to improve decision making, systemic disempowerment, governance of other issues resulting from explosive growth, and exacerbation of other risks, such as great power conflict.
What needs to be done?
No single solution exists to the risks. Our best hope is to muddle through by combining multiple methods that incrementally increase the chances of a good outcome.
It’s also extremely hard to know if what you’re doing makes things better rather than worse (and if you are confident, you’re probably not thinking carefully enough). We can only make reasonable judgements and update over time.
Here’s what I think is most needed right now:
- Enough progress on the technical problem of AI control and alignment before we reach vastly more capable systems. This might involve using AI to increase the chance that the next generation of systems is safe and then trying to bootstrap from there. (See these example projects and recent work.)
- Better governance that provides incentives for safety, contains unsafe systems, reduces racing for dominance, and harnesses the long-term benefits of AI
- Slowing the extremely fast gains in capabilities at the right moment, or redirecting them in less dangerous directions (e.g. less agentic systems), would most likely be good, although this may be difficult to achieve in practice without other negative effects
- Better monitoring of AI capabilities and compute so dangerous and explosive capabilities can be spotted early
- Maintaining a rough balance of power between actors, countries, and models, while designing AI architectures to make it harder to use them to take power
- Improved security of AI models so more powerful systems are not immediately stolen
- More consideration for post-AGI issues such as the ethics of digital agents, benefit sharing, and space governance
- Better management of downstream risks created by faster technological progress, especially engineered pandemics, but also nuclear war and great power conflict
- More people who take all these issues seriously and have relevant expertise, especially among key decision makers (e.g. in government and in the frontier AI companies)
- More strategic research and improved epistemic infrastructure (e.g. forecasting or better data) to clarify what actions to take in a murky and rapidly evolving situation
What can you do to help?
There are hundreds of jobs
There are now many organisations pursuing concrete projects tackling these priorities, with many open positions.
Getting one of these jobs is often not only the best way to have an impact relatively soon but also the best way to gain relevant career capital (skills, connections, credentials).
Most of these positions aren’t technical — there are many roles in management and organisation building, policy, communications, community building, and the social sciences.
The frontier AI companies have a lot of influence over the technology, so in some ways they’re an obvious place to go, but whether to work at them is a difficult question. Some think they should be avoided entirely, while others think it’s important that some people concerned about the risks work even at the most reckless companies, or that it’s good to boost the most responsible ones.
All this said, there are also many ways to help that don’t involve working at the organisations on this list. We also need people working independently on communication (e.g. writing a useful newsletter, journalism), community building, academic research, founding new projects, and so on, so consider whether any of these might work for you, especially after you’ve gained some experience in the field. And if you’ve thought of a new idea, please seriously consider pursuing it.
Mid-career advice
Especially if you already have some work experience (age 25+), the most direct route to helping is usually to:
- Spend 20–200 hours reading about AI, speaking to people in the field (and maybe doing short projects).
- Apply to impactful organisations that might be able to use your skills.
- Aim for the job with the best combination of (i) alignment with the org’s mission, (ii) team quality, (iii) centrality to the ecosystem, (iv) influence of the role, and (v) personal fit.
If that works, great. Try to excel in the role, then re-evaluate your position in 1–2 years — probably more opportunities will have opened up.
If you don’t immediately succeed in getting a good job, ask people in the field what you could do to best position yourself for the next 3–12 months, then do that.
Keep in mind that few people have much expertise in transformative AI right now, so it’s often possible to pull off big career changes pretty fast with a little retraining. (See the list of skills to consider learning below.)
Otherwise, figure out how to best contribute from your current path, for example, by donating, promoting clear thinking about the issue, mobilising others, or preparing to switch when new opportunities become available (which could very well happen given the pace of change!).
Our advisory team can help you plan your transition and make introductions. (Also see Successif and Halcyon, who specialise in supporting mid-career changes).
Early-career advice
If you’re right at the start of your career, you might be able to get an entry-level position or fellowship right away, so it’s often worth doing a round of applications using the same process as above (especially if you have technical skills).
However, in most cases, you’re also likely to need to spend at least 1–3 years gaining relevant work skills first.
Here are some of the best skills to learn, chosen both to be useful for contributing to the priorities listed earlier and to make you more generally employable, even in light of the next wave of AI automation. Focus on whichever you expect to excel at most.
- Policy and political skills (especially concerning AI, though many other areas are relevant, e.g. China–US relations), e.g. by taking an entry-level job in government or a think tank, or working for a politician
- ML engineering for technical safety research
- Information and cybersecurity
- Organisation building, e.g. work at an AI applications startup in a generalist role to learn both general ‘getting stuff done’ skills and how to use AI
- Communications and community building
- Research in any area that might be relevant (this includes the social sciences, international relations, history, and even philosophy, as well as AI itself)
- Forecasting
- AI hardware expertise
- Entrepreneurship
- Earning to give, since there are many great organisations in need of funding
Should you work on this issue?
Even given the uncertainty, AGI is the best candidate for the most transformative issue of our times. It’s also among the few challenges that could pose a material threat of human extinction or permanent disempowerment (in more than one way). And since it could relatively soon make many other ways of making a positive impact obsolete, it’s unusually urgent.
Yet only a few thousand people are working full time on navigating the risks — a tiny number compared to the millions working on conventional social issues, such as international development or climate change. So, even though it might feel like everyone’s talking about AI, you could still be one of under 10,000 people focusing full time on one of the most important transitions in history — especially if AGI arrives before 2030.
On the other hand, it’s an area where it’s especially hard to know whether your actions help or harm; AGI may not unfold soon, and you might be far better placed or motivated to work on something else.
Some other personal considerations for working in this field:
- Pros: AI is one of the hottest topics in the world right now; it’s the most dynamic area of science with new discoveries made monthly, and many positions are either well paid or set you up for highly paid backup options.
- Cons: It’s polarised — if you become prominent, you’ll be under the microscope, and many people will think what you’re doing is deeply wrong. Daily confrontation with existential stakes can be overwhelming.
Overall, I think if you’re able to do something to help (especially in scenarios where AGI arrives in under five years), then in expectation it’s probably the most impactful thing you can do. However, I don’t think everyone should work on it — you can support it in your spare time, or work on a different issue.
If you’re on the fence, consider trying to work on it for the next five years. Even if we don’t reach fully transformative systems, AI will be a big deal, and spending five years learning about it most likely won’t set you back: you can probably return to your previous path if needed.
How should you plan your career given AGI might arrive soon?
Given the urgency, should you drop everything to try to work on AI right away?
While AGI might arrive in the next 3–5 years, even if that happens, unusually impactful opportunities will likely continue for 1–10 years afterwards during the intelligence explosion and initial deployment of AI.
So you need to think about how to maximise your impact over that entire 4 to 15-year period rather than just the next couple of years. You should also be prepared for AGI not to happen and for there still to be valuable opportunities after 2040.
That means investing a year to make yourself 30% more productive or influential (relative to whatever else you would have done) is probably a good deal.
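Here’s the back-of-the-envelope arithmetic behind that claim, as a simplified sketch: it ignores discounting, assumes constant productivity, and treats the length of the high-impact window as known in advance. Only the 30% boost and the 4–15 year window come from the text above.

```python
# Break-even check for the 'spend a year to become 30% more productive' trade-off.
# Simplifying assumptions: constant productivity, no discounting, window length known in advance.

BOOST = 1.3  # productivity multiplier after the one-year investment

def output_over_window(window_years: float, invest: bool) -> float:
    """Total productivity-years over the window, with or without the one-year detour."""
    return BOOST * (window_years - 1) if invest else float(window_years)

for window in (4, 8, 15):
    print(f"{window}-year window: {output_over_window(window, False):.1f} "
          f"vs {output_over_window(window, True):.1f} productivity-years")

# Break-even is at 1.3 / 0.3 ≈ 4.3 years, so the investment roughly washes out at the bottom
# of the 4-15 year window and pays off by a wide margin toward the top of it.
```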
In particular, the most pivotal moments likely happen when systems powerful enough to lock in certain futures are first deployed. Your current priority should be positioning yourself (or helping others position themselves) optimally for that moment.
What might positioning yourself optimally for the next few years look like?
- If you can already get a job at a relevant, aligned organisation, then simply trying to excel there is often the best path. You’ll learn a lot and gain connections, even aside from direct impact.
- However, sometimes it can be useful to take a detour to build career capital, such as finishing college, doing an ML master’s, taking an entry-level policy position, or anything to gain the skills listed above.
- Bear in mind if AI does indeed continue to rapidly progress, then you’re going to have far more leverage in the future, since you’ll be able to direct hundreds of digital workers at whatever’s most important. Think about how to set yourself up to best use these new AI tools as they’re developed.
- If you don’t find anything directly relevant to AI with great fit, bear in mind it’s probably better to kick ass at something for two years than to be mediocre at something directly related for four since that will open up better opportunities.
- Finally, look after yourself. The next 10 years might be a crazy time.
All else equal, people under 24 should typically focus more on career capital, while people over 30 should focus more on using their existing skills to help right away; those 25–30 could go either way. For everyone, though, it depends a lot on your specific opportunities.
If you’re still uncertain about what to do
- List potential roles you could aim at for the next 2–5 years.
- Put them into rough tiers of impact.
- Make a first pass at those with the best balance of impact and fit (you can probably achieve at least 10x more in a path that really suits you).
- Then think of cheap tests you can do to gain more information.
- Finally, make a guess, try it for 3–12 months, and re-evaluate.
If that doesn’t work, just do something for 6–18 months that puts you in a generally better position and/or has an impact. You don’t need a plan — you can proceed step by step.
Everyone should also make a backup plan and/or look for steps that put you in a reasonable position if AGI doesn’t happen or takes much longer than expected.
See our general advice on finding your fit, career planning, and decision making.
Next steps
If you want to help positively shape AGI, speak to our team one-on-one. If you’re a mid-career professional, they can help you leverage your existing skills. If you’re an early-career professional, they can help you build skills, and make introductions to mentors or funding. Also, take a look at our job board.