AGI could be here by 2030, and poses extreme risks

How can you use your career to help?

AI that is more capable than humans could lead to explosive technological change and make the next decade among the most pivotal in history.

But it also poses huge — even existential — risks, and as a society, we’re not ready.

You might be able to use your skills to help.

We’ve supported over 1,000 people from many different backgrounds in shifting their careers to tackle this problem.

AI is progressing fast

[Figure: AI progress graph]

Companies are trying to build artificial general intelligence (AGI), and trends suggest they might succeed by 2030.

Key drivers of progress:

  • Compute scaling by 4x / year
  • Algorithmic efficiency improving by 3x / year
  • Advanced reinforcement learning, which teaches AI systems to reason
  • The development of autonomous AI agents
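
As a rough, back-of-the-envelope illustration (an assumption-laden sketch, not a figure from this page): if the compute and algorithmic trends above are taken to multiply together and to continue at current rates, the “effective compute” available for training AI grows by roughly

$$4 \times 3 = 12\times \text{ per year}, \qquad 12^{5} \approx 250{,}000\times \text{ over five years.}$$

Either trend could slow, and treating the two as multiplicative is itself an assumption, but it illustrates how quickly capabilities could advance if current trends hold.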

In five years, we could have AI that is better than humans at most tasks and accelerates its own development via a powerful feedback loop.

This possibility is worth taking seriously.

Another way to get a sense of AI progress is to explore its capabilities for yourself:
See what AI can do

Read more: the case for AGI by 2030

AGI could bring huge benefits, but poses incredibly serious risks

If things go well, AGI could lead to unprecedented growth and innovation. But if things go badly, it could be disastrous.

The top problems could be existential — causing a moral catastrophe, a permanently worse trajectory for humanity, or even our extinction.

Other risks:
  • Loss of control
  • AI-enabled power grabs
  • Understanding the moral status of digital minds
  • Gradual disempowerment
  • The possibility of a stable totalitarian regime
  • Great power conflict

Loss of control

AI's capabilities may outpace our ability to control it. If systems are misaligned with human values, and have gathered (or been given) substantial power in society, this could lead to an existential catastrophe.

Read more · Podcast on this issue

AI-enabled power grabs

AI could turbocharge efforts to seize and hold power. Advanced surveillance, persuasion, and control technologies could let small groups or authoritarian leaders entrench their rule — with little chance of recovery.

Read more · Podcast on this issue

Understanding the moral status of digital minds

Advanced AI systems might one day be conscious. If we fail to recognize — or properly account for — the moral status of digital beings, we risk making serious ethical mistakes, which could result in enormous suffering or injustice.

Read more · Podcast on this issue

Gradual disempowerment

Over time, humanity's ability to shape the future could slowly erode. If AI systems take over more key decisions — and if their goals aren't aligned with human flourishing — we could find ourselves increasingly sidelined, even without a singular catastrophic moment.

Read more · Podcast on this issue

The possibility of a stable totalitarian regime

New technologies could lock in oppressive governments permanently. If AI tools enable near-total surveillance and control, it could create regimes more stable — and more brutal — than anything we've seen in history.

Read more · Podcast on this issue

Great power conflict

Competition over AI could trigger catastrophic wars. Rival nations racing to develop or control powerful AI systems could miscalculate, sparking major conflict — potentially involving nuclear weapons or other devastating technologies.

Read more · Podcast on this issue

Read about how and why to prioritise issues

Interviews with experts on the risks

Use your career to help make this go well

Your career is the single largest driver of your impact. We aim to give you the resources to make the biggest contribution you can.

Read our career reviews

Write-ups of some key paths for tackling top AGI risks and how to enter them.

  • Improve organisational, political, and societal decision-making about AI to reduce catastrophic risks.
  • The development of AI could transform society. Help increase the chance it’s positive by doing technical research to find ways to prevent AI systems from carrying out dangerous behaviour.
  • Help secure important organisations from attacks and prevent emerging technologies like AI and biotech from being misused, stolen, or tampered with.
  • Use knowledge of a key input into advanced AI systems to reduce risks and improve governance decisions.
  • Help Chinese companies and stakeholders involved in building AI make the technology safe and good for society.

Job board

We curate job listings to help you work on pressing world problems, including helping AGI go well.

View the job board

Get free 1-1 career support

Our team of career advisors can help you figure out how best to contribute.

We can:

  • Review your thinking
  • Suggest opportunities that match your background
  • Introduce you to people in the field

We’re most helpful for people who want an analytical and ambitious approach to impact. If in doubt, apply.

I was introduced to some experts in the field and others who were interested in following a similar path. These introductions have played a big part in me obtaining an internship at MILA. 80,000 Hours also helped me to be awarded an EA grant for pursuing research related to AI safety.

Dr Zac Kenton
Research Scientist
DeepMind

The advising team is incredibly well-researched and connected in AI safety. Their advice is far more insightful, personalized, and impact-focused than most of what I got from Google, self-reflection, or the peers or mentors I would typically go to.

Ethan Perez
Research Scientist
Anthropic

As a direct result of advising, I found a role as Assistant Director of the Center for Human-Compatible AI at UC Berkeley, where I began contributing to shaping provably beneficial AI.

Rosie Campbell
Director of Special Projects
Eleos AI

What if you need to skill up?

If you’re not ready to jump into working on this issue full time, make your next step learning, meeting people, and building skills.

Stay up to date

AI moves fast, and there’s a lot to understand. Use these resources to keep learning.

More podcasts we recommend

Top newsletters

Also join our newsletter to get future research updates and jobs in your inbox!

FAQs

Isn’t this all just hype?

There is a lot of exaggerated hype about AI out there, and it’s good to be skeptical of people making extreme claims about a new technology — they often turn out to be misleading or even scams.

But the progress we’ve seen in AI over the last few years is undoubtedly real, even if some people claim AI can do things it can’t yet. The technology is moving fast, and the AI capabilities now on display every day would astound people working in the field five years ago. We’ve documented some of the impressive progress here and here, and explored whether it might continue here.

It’s important to keep in mind at least three key points about AI progress:

  1. AI has made incredible progress.
  2. Current AI systems still have many flaws and limitations.
  3. Future AI systems will likely be much, much better.

We think sometimes people focus too much on either (1) or (2) without acknowledging the other. And not nearly enough people are focused on (3) — and what it might mean for civilisation.

Is AI really an existential risk?

Unfortunately, we think so.

Over the past 10 years we’ve investigated many candidate existential risks — meaning disasters that could result in outcomes as bad as human extinction, or worse — because we think these risks are worth prioritising. Most candidate existential risks don’t make the cut. But AI does: because the technology could become enormously powerful and agentic, developing it is like creating a new, smarter-than-us species and hoping it treats us well forever.

We’re not alone in being concerned. Top scientists, AI business leaders, and politicians have also warned that risks downstream of the creation of powerful AI could be existential. This view is still controversial, though. Some serious people who have examined the question of whether AI poses an existential risk have come away unconvinced. So we’re not completely certain.

But we don’t need certainty to act on existential risks. If there’s a meaningful chance that a technology could pose an existential risk, then it’s worth taking steps to mitigate that. We think in this case, the risks are high enough that they demand much more attention than they have received.

For (much) more on this question, see our article on the risk of an AI-related catastrophe.

What are the best arguments against these views?

We think some of the best arguments that AGI won’t come soon are:

  1. The path to AGI is not obvious: Today’s AI systems, while impressive, remain narrow specialists rather than general problem-solvers. They excel at specific tasks (chess, image recognition, coding, etc.) but lack the broad adaptability and understanding of a human mind. For example, even advanced language models often stumble on basic common-sense reasoning or logical consistency. Some AI scientists think we’re missing crucial insights and innovations that will be necessary to create truly general and broadly useful intelligent systems.
  2. There are many potential bottlenecks in AI development that might choke off progress at a certain point, and it’s possible that some we haven’t foreseen will appear on the horizon.
  3. Most of the world does not seem to be acting like we’re about to experience a radically transformative new technology in the coming years. The idea is even highly controversial among people working on AI. Believing it goes against the grain of the world as a whole, and might reflect placing too much credence in the predictions of a small group of AI experts.

We think the best arguments against the idea that AGI poses a significant existential risk are:

  1. We don’t understand the nature of AI goals and motivations. It’s possible that AI systems will be easy to steer by default and unlikely to develop power-seeking or dishonest tendencies, which would reduce several of the risks (though not all).
  2. If development is slow and gradual, then it is much more likely to be safe. If advances in AI are incremental and closely monitored, safety research may be able to head off any issues.
  3. In general, AI systems will be designed by humans, and humans have strong incentives to ensure that AI systems are safe and aligned to human purposes at each step in technological development.
  4. Multiple powerful AI systems could be deployed by different actors and act as “checks and balances” on one another, reducing several of the risks we’re most worried about.
  5. Many other concerns about new technology and catastrophic risks have been unfounded or exaggerated — or the problems have been solved relatively easily — so we should expect something similar to play out with AI.
  6. “Existentially risky” is an extremely high bar — meaning we should start by assuming that issues created by the development of AGI will be more moderate — and only change our minds based on strong arguments or evidence.

We think there are good responses to each of these points, responses that support the view that AGI could come soon and pose existential risks. But the question isn’t so clear-cut that reasonable people can’t disagree.

How can I learn more and meet others working on this?

Absorbing the information, podcasts, and newsletters on this page is a good place to start. We also have a reading list for other blog posts and research papers we recommend.

Those sources will link to other resources, and we’d encourage you to explore them for as long as they hold your interest.

Attending conferences, such as Effective Altruism Global or NeurIPS, can be a useful way to network with other people in the field. There are also meet-ups, courses, and other events to grow your network and skills.

Finally, our 1-1 advisors can introduce you to experts or potential future colleagues based on your specific situation.

What if I already have specialised skills or experience?

If you’ve got specialised skills (or experience), there are many organisations that could be excited to hire you.

In fact, many organisations are bottlenecked on more experienced talent — people who can lead teams or bring years of experience in an area that’s particularly hard for generalists, like scientific disciplines or law.

You can filter our job board for different skills and seniority, as well as for organisations working on AI. The job board allows you to set up alerts for specific filters, so you can get an email if a role matching your needs is posted.

In general, we know employers working on AI risk are interested in people with the following skills:

Other skills might also be relevant, even if it’s not obvious at first how, or if they’re too niche for us to cover here.

Getting to know more people in the field might help you better figure out where you can contribute. So go to conferences, meet-ups, lectures, and other events with people who are also working on AI risks to expand your network.

It’s also possible you’ll need to learn new skills or get new experience. The good news is that you may be able to get up to speed relatively quickly, because AI risk is a young field. For ways to skill up, see our section above.

You can also apply to speak to one of our 1-1 advisors, who can help put you in touch with potential collaborators or employers.

What should I do if I’m early in my career?

If you’re at the start of your career, you might be able to get an entry-level position or fellowship right away. So it’s often worth doing a round of applications immediately, especially if you have technical skills.

You can try the following steps:

  1. Spend 20-200 hours reading about AI, speaking to people in the field (and maybe doing short projects).
  2. Apply to impactful organisations that might be able to use your help. Check out our job board for some opportunities.
  3. Aim for the job with the best combination of (i) strong org mission, (ii) team quality, (iii) centrality to the ecosystem, (iv) influence of the role, and (v) personal fit.

However, in most cases, early career people will need to spend at least 1–3 years gaining relevant skills before they should focus on how to apply them.

The skills listed above seem most helpful for working on this issue. Focus on whichever you expect to most excel at. But if you are right at the start of your career, don’t forget general skills like project management and personal productivity. You might want to check out the chapter of our career guide on how to be successful.

Should I work at a frontier AI company?

If you want to help reduce catastrophic risks from AI, working at a frontier AI company is an important option to consider, but the impact varies enormously from role to role. These roles often come with great potential for career growth, and by placing you right in the centre of AI development, some could be (or lead to) highly impactful ways of reducing the chances of an AI-related catastrophe.

However, there’s also a risk of doing a lot of harm, and there are particular roles you should probably avoid.

For more on this question, check out our full article.

What if I can’t change jobs right now, or don’t know what to do?

If you don’t immediately want to change jobs or aren’t sure what you could do, you could ask people in the field how best to position yourself over the next 3–12 months, and then do that. Or see our advice for early career people above.

Keep in mind that few people have much expertise in transformative AI right now, so it’s often possible to pull off big career changes pretty fast with a little retraining.

Otherwise, we recommend that you educate yourself more about AI and follow along with ongoing developments. That will help you figure out how to best contribute from your current path, for example, by donating, promoting clear thinking about the issue, mobilising others, or preparing to switch when new opportunities become available (which could very well happen given the pace of change!).

Does everyone need to work on this problem?

No. There are reasons not to work on this issue:

  1. You might not have the flexibility to make a large career change right now.
  2. There are other important problems, and you might be far better suited for a job focused on another issue.
  3. You might be put off by the (admittedly huge) uncertainties about how best to help, or be less convinced by the arguments that the issue is pressing.

However, we think most people reading this should seriously consider it. The issue is very neglected, important, and urgent, and many people can contribute. So we think it will be many people’s best option.