Updates to our list of the world’s most pressing problems

80,000 Hours’ aim is to help people find careers that tackle the world’s most pressing problems. To do this, one thing we do is maintain a public list of what we see as the issues where additional people can have the greatest positive impact.

We’ve just made significant updates to our list. Here are the biggest changes:

  • We’ve broadened our coverage of particularly pressing issues downstream of the possibility that artificial general intelligence (AGI) might be here soon. In particular, we added a profile on AI-enabled power grabs near the top of our list and are adding several writeups of new emerging challenges that advanced AI could create or worsen.
  • We’ve removed ‘meta’ problems for simplicity and clarity. Our problem profiles list used to feature articles on building effective altruism, broadly improving institutional decision making, and global priorities research — which are all approaches to improving our ability to solve the world’s most pressing problems. Grouping these ‘meta problems’ with object-level problems sometimes causes confusion and makes it hard to compare across cause areas, so we’ve now taken them off the list. But we still think these topics are very important, so the articles are still live on our site, and related articles appear on our list of impactful career paths.
  • We’ve streamlined the presentation by consolidating related issues and restructuring the page as a more unified ranking rather than separate categories.

We made this change in line with our new strategic approach, which involves expanding and deepening our content about how to help the transition to a world with AGI go well.

Expanded coverage of AI-related challenges

Since 2016, we’ve ranked “risks from artificial intelligence” as our top pressing problem, but our profile on it primarily concerned the possibility that powerful AI might develop goals that are at odds with human interests. We still think this issue of power-seeking AI that is ‘misaligned’ with humanity’s interests is very pressing — but as AI has advanced, more issues have come into view. We have updated our overall problems list to reflect this broader landscape.

In particular, we’ve highlighted AI-enabled power grabs: the growing concern that advanced AI could enable unprecedented concentration of power among small groups of humans or authoritarian leaders.

AI technology could allow its creators — or others who control it, such as national governments or militaries — to gain and potentially keep extraordinary power. The possibility of ‘secret loyalties,’ where AI systems that appear to serve the public interest are actually trained to advance the interests of their true controllers, means it may not be easy to tell whether this problem has emerged. As these systems are deployed throughout the economy, government, and military, they could covertly seek opportunities to consolidate power for those who control them.

We previously listed “risks of stable totalitarianism” as an emerging challenge, which covered similar ground. However, we wrote this new, differently framed article to focus on the specific kinds of scenarios we think are most likely to be severe and urgent: those that involve using AI to concentrate power. We’re also concerned about a fairly wide range of scenarios where small groups of human actors could accumulate dangerous amounts of control, even if they don’t amount to ‘totalitarianism’ in the traditional sense.

We’re not sure how likely or severe AI-enabled power grabs will be. But given how neglected the area is, it might be even more important for people to consider working on it than on risks of AI takeover. However, it’s also possible that, as more people dive into the issue, it will turn out to be less pressing than we think, or that it wouldn’t represent a big additional problem if the other issues we list were adequately addressed.

New emerging challenges from AI advancement

We’ve also changed our list of “emerging challenges” to focus more on issues that, while currently more speculative, could become critical as AI develops. This includes some problems we’d already written about — like the moral status of digital minds, gradual disempowerment, and ‘s-risks’. We don’t consider this list complete, and we hope to add more profiles as we investigate further issues.

These changes reflect our view that AI development presents both the biggest risks and some of the biggest opportunities facing humanity — and that we are far from understanding the landscape in its entirety.

Removing ‘meta’ problems from the list

We’ve removed our profiles on building effective altruism, improving institutional decision making, and global priorities research from the main problem list.

The main reasons for this change are clarity and simplicity. It’s always been difficult to know how to rank these ‘meta problems’ — which are about improving our ability to understand and work on object-level problems — against the object-level problems they are designed to help with. They also created repetition and an unclear ontology on our site, since we cover the same material elsewhere. So we’ve decided to focus the list on object-level problems and discuss broader approaches to working on them elsewhere.

This change does not mean we think ‘meta’ work is unimportant — indeed, 80,000 Hours’ own strategy for solving pressing problems is ‘meta.’ We still think these kinds of strategies are very valuable, and we cover them on our career reviews page.

Streamlining the presentation

Beyond substantive changes, we’ve made several adjustments aimed at making the page cleaner:

Consolidated related issues: We’re now considering the problem of nuclear weapons to be part of the problem of great power conflict, since nuclear risks are a major component of the most concerning great power war scenarios.

Unified structure: Rather than having separate categories that felt like distinct lists of issues, we’ve restructured the page to flow more naturally from what we think are top priorities to issues that we regard as likely to be less pressing.

(Though it’s worth noting that we regard even the issues near the bottom of our page as hugely more pressing than the vast majority of problems people normally work on in their careers. For example, in global health, additional people can typically do much more good than they could in more saturated areas like education or healthcare in high-income countries.)

Note also that we don’t claim our list is comprehensive — there are likely many significant gaps and omissions that we’d ideally cover if we had more time and resources.

What this means for readers

If you’re interested in AI-related careers: We’ve added more about the different kinds of risks posed by advanced AI, and pointed to some emerging challenges that we think could be top options for people who are particularly well placed to make progress on them. We don’t have well-developed career advice on all these areas yet, but the first step if you want to get involved is to understand the frontiers of knowledge on these issues — read the profiles on the page, or check out our podcast. If you’re ready to shift your career, apply to speak with us 1-1 for free.

If you were considering meta work: Our career reviews have guidance on how to contribute to community building, global priorities research, and institutional improvement. We also still do advising in these areas.

If you disagree with our AI focus: We recognise that our view that AI may be transformative enough to affect everything is controversial. And our decision to focus on helping people work on making that transformation go well represents a significant bet on our part. AI development might plateau, its risks might be lower than we think, or other issues might simply prove more pressing. Other problems on our list, like factory farming or global health, might be more promising in that case.

You might also just have different values and views about what problems matter most. If so, we still think it’s worth thinking hard about where to focus if you want to do a lot of good, and encourage you to interrogate the arguments we present in our articles on global issues, as well as others’ viewpoints, and, if you want, develop your own problem prioritisation.

Looking ahead

These changes reflect our best current guess about where additional people can have the most impact. With the possibility that AGI could arrive by 2030, we think a wide range of challenges downstream of this development deserve urgent attention.

We expect this list to keep evolving rapidly as AI capabilities advance and the world changes ever faster. We already have some updates planned, the details of the issues we list will change, and the next few years seem likely to bring entirely new challenges we haven’t yet imagined. We expect to make mistakes and change our minds — possibly quite a lot — and we hope that sharing our views, and being explicit about the reasons behind them, will help us make important updates more quickly and contribute positively to the conversation about where additional people should work if they want to make the biggest positive difference they can.

You can see our full updated ranking here. As always, we encourage you to click through to each profile, to understand our reasoning and get ideas for how you might contribute to tackling these challenges in your career.

Thank you for wanting to use your career to tackle the world’s most pressing problems. Your willingness to engage with these ideas — and to act despite (considerable) uncertainty about what approaches are best — is what makes 80,000 Hours’ work worthwhile. There’s a lot to figure out and a lot to do. We appreciate you for being part of this with us.