AGI could be here by 2030, and poses extreme risks
How can you use your career to help?
AI that is more capable than humans could lead to explosive technological change and make the next decade among the most pivotal in history.
But it also poses huge — even existential — risks, and as a society, we’re not ready.
People with a wide range of skills and experience are urgently needed to help mitigate these risks.
We’ve supported over 1,000 people with many different backgrounds in shifting their careers to tackle this problem.
AI is progressing fast

Companies are trying hard to build artificial general intelligence (AGI), and trends suggest they might succeed by 2030.
Every few months, these systems improve at:
- Reasoning and planning
- Solving physics, biology, and math problems
- Taking autonomous actions
This is driven by 3-4x increases in compute and algorithmic efficiency each year, plus new training techniques. By 2030, we could have AI that is better than humans at most tasks, even driving AI development itself.
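To get a rough feel for what that rate of compounding implies, here's a quick back-of-the-envelope sketch in Python. It simply compounds the 3-4x annual growth figures quoted above over the five years to 2030; treating the growth rate as fixed is a simplifying assumption for illustration, not a forecast.

```python
# Illustrative only: compounds the "3-4x per year" growth figures from the
# paragraph above over five years (to 2030). Assumes a constant annual rate,
# which is a simplification, not a prediction.

def total_multiplier(annual_growth: float, years: int) -> float:
    """Overall multiplier after compounding a fixed annual growth rate."""
    return annual_growth ** years

for rate in (3, 4):
    print(f"{rate}x per year for 5 years ≈ {total_multiplier(rate, 5):,.0f}x overall")

# Output:
# 3x per year for 5 years ≈ 243x overall
# 4x per year for 5 years ≈ 1,024x overall
```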
There’s also a chance – perhaps 1 in 4 – that AGI won’t be developed for decades. But the possibility of AGI coming soon is worth taking very seriously.
AGI could bring huge benefits, but poses incredibly serious risks
If things go well, AGI could lead to unprecedented growth and innovation. But if things go badly, it could be disastrous.
The top problems could be existential — causing a moral catastrophe, a permanently worse trajectory for humanity, or even our extinction.
Interviews with experts on the risks
Use your career to help
Career reviews
- Improve organisational, political, and societal decision-making about AI to reduce catastrophic risks.
- The development of AI could transform society. Help increase the chance it’s positive by doing technical research to find ways to prevent AI systems from carrying out dangerous behaviour.
- Help secure important organisations from attacks and prevent emerging technologies like AI and biotech from being misused, stolen, or tampered with.
- Use knowledge of a key input into advanced AI systems to reduce risks and improve governance decisions.
- Help Chinese companies and stakeholders involved in building AI make the technology safe and good for society.
Job board
We curate job listings to help you work on pressing world problems, including helping AGI go well.
Free 1-1 career support
Our career advisors can help you figure out how best to contribute.
We can:
- Review your thinking
- Suggest opportunities that match your background
- Introduce you to people in the field
We’re most helpful for people who want an analytical and ambitious approach to impact. If in doubt, apply.

I was introduced to some experts in the field and others who were interested in following a similar path. These introductions have played a big part in me obtaining an internship at MILA. 80,000 Hours also helped me to be awarded an EA grant for pursuing research related to AI safety.
Dr Zac Kenton, Research Scientist, DeepMind
The advising team is incredibly well-researched and connected in AI safety. Their advice is far more insightful, personalized, and impact-focused than most of what I got from Google, self-reflection, or the peers or mentors I would typically go to.
Ethan Perez, Research Scientist, Anthropic
As a direct result of advising, I found a role as Assistant Director of the Center for Human-Compatible AI at UC Berkeley, where I began contributing to shaping provably beneficial AI.
Rosie Campbell, Director of Special Projects, Eleos AI
What if you need to skill up?
Stay up to date


Top newsletters
FAQs
There is a lot of exaggeration and hype about AI out there, and it’s good to be skeptical of extreme claims about a new technology — they often turn out to be wrong or even scams.
But the progress we’ve seen in AI over the last few years is undoubtedly real, even if some people do claim AI can do things it can’t yet. The technology is moving fast, and the AI capabilities now on display would astound people working in the field five years ago. We’ve documented some of the impressive progress here and here, and explored whether it might continue here.
It’s important to keep three things in mind:
1. AI has made incredible progress.
2. Current AI systems still have many flaws and limitations.
3. Future AI systems will likely be much, much better.

We think people often focus too much on either (1) or (2) without acknowledging the other. And not nearly enough people are focused on (3) — and what it might mean for the future of civilisation.
A lot, and more all the time.
Our top two recommendations for staying up to date on what AI can do already are:
- The AI Digest, for interactive demos and explainers on AI’s new capabilities
- AI Explained, for a podcast covering the most pressing issues facing AI development and machine learning
Also, if you use AI yourself, that will help you understand the technology at a more visceral level.
See our guide to using AI at work and in life to get more done.

Unfortunately, we think so.
Over the past 10 years we’ve investigated many candidate existential risks — meaning disasters that could result in outcomes as bad as human extinction, or worse — because we think these risks are worth prioritising. Most candidate existential risks don’t make the cut. But because AI technology could become enormously powerful and agentic, building it is like creating a new, smarter-than-us species we can’t control and hoping it treats us well forever. We think that means the risks it poses are on another level.
We’re not alone in being this concerned. Top scientists, AI business leaders, and some politicians have also warned that risks downstream of the creation of powerful AI could be existential.
This view is still controversial, though. Some serious people who have examined the question of whether AI poses an existential risk have come away unconvinced. So we’re not completely certain.
But we don’t need certainty: if there’s a meaningful chance that a technology could pose an existential risk, then it’s worth a lot of effort to mitigate that. We think in this case, the risks are high enough that they demand much more attention than they have received.
For (much) more on this question, see our article on the risk of an AI-related catastrophe.
We think some of the best arguments for thinking that AGI won’t come soon are:
- The path to AGI is not obvious: Today’s AI systems, while impressive, remain narrow specialists rather than general problem-solvers. They excel at specific tasks (chess, image recognition, coding, etc.) but lack the broad adaptability and understanding of a human mind. For example, even advanced language models often stumble on basic common-sense reasoning or logical consistency. Some AI scientists think we’re missing crucial insights and innovations that will be necessary to create truly general and broadly useful intelligent systems.
- There are many potential bottlenecks on AI development that might choke off progress at a certain point. And it’s possible some will appear on the horizon that we haven’t foreseen.
- Most of the world does not seem to be acting like we’re about to experience a radically transformative new technology in the coming years. It’s even highly controversial among people working on AI. So thinking AI may move much, much faster is going against the grain of the world as a whole, and might be due to placing too much credence in the predictions of a small group of AI experts.
- Even if we develop AGI fairly soon, society may adopt it slowly, making the technology’s impact more gradual. Indeed, we’re very likely to see some resistance to adoption. Our view is that whatever sectors are slow to adopt will be quickly outcompeted, and that even a few crucial sectors using the technology could be enough for transformative effects. But if adoption is slow across the board, things could unfold more gradually.
One of the best resources we know of that makes the case for more gradual AI progress is this podcast from Dwarkesh with employees from Epoch arguing that AGI is still 30 years away.
We think the best arguments against the idea that AGI poses significant existential risks are:
- We don’t understand the nature of AI goals and motivations. It’s possible that advanced AI will be easy to steer by default and unlikely to develop power-seeking or dishonest tendencies, which would reduce several of the risks we’re most worried about (though not all).
- If development is slow and gradual, then advances in AI are more likely to be closely monitored, and safety researchers may be able to head off any issues. So if you think AI progress will be relatively slow (see above) you will probably be less worried.
- In general, AI systems will be designed by humans, and humans have strong incentives to ensure that AI systems are safe and aligned to human purposes at each step in technological development.
- Multiple powerful AI systems could be deployed by different actors and may be able to act as “checks and balances” on one another.
- Many other concerns about new technology and catastrophic risks have been unfounded or exaggerated — or the problems have been solved relatively easily — so we might expect something similar to play out with AI.
- ‘Existentially risky’ is an extremely high bar. We should start by assuming that issues created by the development of AGI will be more moderate, and only change our minds based on strong arguments or evidence.
We think there are strong arguments to be highly concerned — see our section on risks above. But these questions are hard, and reasonable people can disagree.
The environmental impacts of AI are indeed serious. The International Energy Agency projects that data centre electricity consumption — significantly driven by AI workloads — could reach 3% of global electricity demand by 2030, with associated water usage and CO2 emissions comparable to those of mid-sized countries. And there are other concerns too, like habitat disruption from expanding data centre facilities.
Nonetheless, we don’t think environmental impacts are among the very biggest concerns arising from AGI.
- The key reason, which we think is underappreciated by most of society, is that AGI could lead to problems that are even more serious — such as power-seeking, deception, and potentially takeover by extremely powerful AI systems, or the ability of power-seeking humans to use powerful AI systems to take over legitimate institutions in an AI-assisted coup. We think these issues — and maybe others — have the potential to affect everyone alive today, plus all future generations.
- On the other hand, we don’t think climate change poses an existential risk on its own. Even considering AI’s accelerating energy demands, we’d guess that there’s less than a 1 in 1,000,000 chance of temperature changes severe enough to cause human extinction, and we don’t see a path to permanent human disempowerment from climate change.
- Climate change does seem like a risk factor for some of these even more severe impacts — e.g. because it increases conflict between nations, which can lead to the development or use of new AI-enabled destructive technologies. For this reason, as well as because of the direct impacts, we are extremely supportive of more efforts to rein in rising temperatures and their impacts.
- Environmental impacts from AI are relatively visible and measurable. Major tech companies routinely track and report environmental footprints. And there are some established pathways for improvement: renewable energy adoption, advanced cooling systems, and energy-efficient hardware. Hundreds of billions of dollars go toward combating climate change each year. In contrast, risks from misuse or loss of control of powerful AI systems have many, many fewer people and resources working on them and lack public awareness. This is a large reason why we’re focused on helping people work on these highly neglected issues.
- AI itself could be part of the solution for climate change. AI systems can optimize energy usage in data centers, improve grid efficiency, greatly accelerate clean energy research, and help design more efficient hardware — if we handle it well. So if we can avoid the risks and capture the upsides of AGI, we may be able to use it to address climate change and other environmental problems.
Absorbing the information, podcasts, and newsletters on this page is a good place to start. We also have a reading list for other blog posts and research papers we recommend.
Those sources will link to other resources, and we’d encourage you to explore them for as long as they hold your interest.
Attending conferences, such as Effective Altruism Global or NeurIPS, is a useful way to meet and network with other people in the field. There are also meet-ups, courses, and other events to grow your network and skills.
Finally, our 1-1 advisors can sometimes introduce you to experts, local networks, or potential future colleagues based on your specific situation.
No, not everyone should do that. There are reasons not to work on this issue:
- You might not have the flexibility to make a large career change right now.
- There are other important problems, and you might be far better suited for a job focused on another issue. Personal fit is very important both for enjoying your job and having an impact.
- You might be deterred by the (definitely huge) uncertainties about how best to help, or you might be less convinced by the arguments that risks from AI are particularly pressing. For example, perhaps you are moved by some of the arguments here.
However, we think most people reading this should seriously consider dropping what they’re doing to work on helping AGI go well. The issue is very neglected, important, and urgent, and many people can contribute. So we think it will be a lot of our audience’s best option.
If you’ve got specialised skills (or experience), there are many organisations that could be excited to hire you.
In fact, many projects are bottlenecked on more experienced talent — people who can lead teams or bring years of experience in an area that’s particularly hard for generalists, like scientific disciplines or law.
You can filter our job board for different skills and seniority, as well as for organisations working on AI. The job board also allows you to set up email alerts for jobs matching specific criteria.
In general, we know employers in the AI risk field are interested in people with the following skills:
- Policy and political skills (especially concerning AI but also other areas, e.g. China-US relations). These skills allow you to work in government, think tanks, or politics.
- ML engineering for technical safety research
- Information and cybersecurity, to safeguard powerful AI technology from cyberthreats and theft
- Organisation building, e.g. to do general ‘getting stuff done’ work
- Communications and community building
- Research in any area that might be relevant, including social sciences, international relations, history, and even philosophy, as well as AI itself
- Forecasting
- AI hardware expertise
- Entrepreneurship
Other skills might also be relevant, even if it’s not obvious at first how, or if they’re too niche for us to cover here.
Getting to know more people in the field might help you better figure out where you can contribute. So go to conferences, meet-ups, lectures, and other events with people who are also working on AI risks to expand your network. (You can get alerted to upcoming events by joining our newsletter.)
It’s also possible you’ll need to learn new skills or get new experience. The good news is that you may be able to get up to speed relatively quickly, because AI risk is a young field. For ways to skill up, see our section above.
You can also apply to speak to one of our 1-1 advisors, who can help put you in touch with potential collaborators or employers.
Even if you’re at the start of your career, you might be able to get an entry-level position or fellowship right away. So it can be worth doing a round of applications immediately, especially if you have technical skills.
You can try the following steps:
- Spend 20–200 hours learning about AI, speaking to people in the field (and maybe doing short projects).
- Apply to impactful organisations that might be able to use your help. Check out our job board for opportunities.
- Aim for the job with the best combination of (i) strong org mission, (ii) team quality, (iii) centrality to the ecosystem, (iv) influence of the role, and (v) personal fit.
However, in most cases, early career people will need to spend at least 1–3 years gaining relevant skills before they should focus on how to apply them.
The skills listed above seem most helpful for working on this issue. Focus on whichever you expect to most excel at.
If you are right at the start of your career, also don’t forget general skills like project management and personal productivity. You might want to check out the chapter of our career guide on how to be successful.
If you want to help reduce catastrophic risks from AI, we think working at a frontier AI company is an important option to consider — but the impact is extremely varied and sometimes negative. These roles often come with great potential for career growth, and by placing you right in the center of AI development, some could provide (or later lead to) highly impactful opportunities for reducing the chances of an AI-related catastrophe.
However, there’s also a risk of doing a lot of harm, and there are particular roles you should probably avoid.
For much more on this question, check out our full article.
If you just don’t want to immediately change jobs or aren’t sure what you could do, you could try to get involved with the AI safety community and ask people what to do in order to best position yourself for the next 3–12 months — and then do that. Or see our advice for early career people above, or the section above on skilling up.
Otherwise, we recommend that you educate yourself more about AI and follow along with ongoing developments. That will help you figure out how to best contribute from your current path, for example, by donating, promoting clear thinking about the issue, mobilising others, or preparing to switch when new opportunities do become available.