If you want to maximise your chance of having a big positive impact with your career, we think it’s usually best to work on a global issue that’s large in scale, solvable, and neglected. These are not always the biggest problems in the world — rather they are the issues that receive little attention compared to how important they are and how much can be done about them.

This page presents our current list of which world problems we think best fit this profile, and therefore seem most promising for more people to work on right now.

What is this page based on?

Our views draw on work by the University of Oxford’s Global Priorities Institute, the Open Philanthropy Project, and our own research. Read about a framework we use for comparing issues, and the moral and methodological assumptions behind our views.

The most distinctive aspect of our approach is probably ‘longtermism’. Longtermism is the idea that because such huge numbers of individuals might live in the long-run future, and because we think everyone’s interests matter equally, approaches to improving the world should be evaluated mainly in terms of their potential for long-term impact — over thousands, millions, or even billions of years.

See our FAQ below for more information.

We begin this page with some categories of especially pressing world problems we’ve identified so far, which we then put in a roughly prioritised list.

We then give a longer list of global issues that also seem promising to work on (and which could well be more promising than some of our priority problems), but which we haven’t investigated much yet.

Finally, we talk about what, in practice, these categories might mean for your career.

The kinds of issues we currently prioritise most highly

Emerging technologies and global catastrophic risks

In the 1950s, large scale production of nuclear weapons meant that a few world leaders gained, for the first time, the ability to kill hundreds of millions of people. This was a striking milestone in a robust trend: as technology improves and the world economy grows, it gets easier to cause destruction on an ever larger scale.

In the 21st century, we expect this trend to continue. New transformative technologies may promise a radically better future, but also pose catastrophic risks. Mitigating these risks, while increasing the chance these technologies allow future generations to flourish, may be the crucial challenge of this century.

There is a growing movement working to address these issues, including new research institutes at Cambridge, MIT, and Oxford. Nonetheless, work on mitigating many risks remains remarkably neglected — in some cases receiving attention from only a handful of researchers. If you can find an effective way to work on these issues, we think it may be the most valuable thing you can do.

Building capacity to explore and solve problems

Comparing global problems involves lots of uncertainty and difficult judgement calls, and there have been surprisingly few serious attempts to make such big picture comparisons. And there are many global issues we haven’t yet seen investigated much at all.

For these reasons, we’re also strongly in favour of work that might help resolve some of this uncertainty, as well as work that seems robustly useful on many different assumptions yielding different conclusions about what’s best to work on.

One top priority in this category is to build the new field of ‘global priorities’ research, to try to work out which global problems are most pressing and make progress on foundational questions about how best to address them.

Another strategy is to help major existing institutions improve their capacities to make complex decisions, and therefore navigate global challenges.

A third strategy is to build communities of people who want to do good effectively, with the hope that they can deal with future challenges as they come. We’re especially keen to build the effective altruism community, because it explicitly aims to work on whichever global challenges will be most pressing in the future. We count ourselves as part of this community because we share this aim.

Finally, we want to encourage people to explore problem areas where the case for impact is more speculative but which might be very pressing — e.g. promoting civilisational resilience, mitigating great power conflict, or laying the foundations for the governance of outer space. Making progress on these issues, which are less explored (especially from a longtermist perspective), can help establish and build these nascent fields, or help us discover they’re less promising, meaning others can work more productively and efficiently.

Especially pressing global issues — our current overall list

If we had to rank the problem areas we’ve investigated so far in terms of the overall effectiveness of additional work on them (assuming someone had the same level of personal fit for each), our ranking would be as stated below.

Note that this is not a ranking of which world problems we think are the most important full stop, but rather a ranking of which problems we think are most pressing for people who broadly share our values and might follow our advice to work on at the current margin.

Partly for this reason, we expect this list to change to some extent year-to-year (perhaps seeing the addition or subtraction of one or two issues a year) as circumstances change — e.g. as some problems become less neglected or new issues arise.

Click through each of the links below to see our full writeups for each area — what the issue is and why we prioritise it as highly as we do.

Highest priority areas

There are also many global issues we haven’t looked into as much, but which seem like they could be as pressing as the issues above. We’d be excited to see a substantial minority of readers explore these issues in order to learn more about them, especially if they have unusually good fit for working on them. For instance:

There are probably still other issues we haven’t thought of, or perhaps an issue that stands above all the others we know of in terms of how pressing it is to work on — we call this ‘cause X’, and it is another reason to do more exploration.

Second-highest priority areas

These are global problems we’re confident are among the most pressing for more people to work on, but for which we’d guess an additional person working on them achieves somewhat less impact than work on our highest priorities, all else equal. Still, they could easily be someone’s top choice depending on the circumstance.

Together, all of the above categories make up what we call our priority problem areas or priority problems.

Other important global issues we’ve looked into

We’d love to see more people working on these issues, but given our general worldview they seem less pressing than our priority problems:

You can also see global issues we haven’t looked into, but which also seem important to us (though likely less pressing than our priority problem areas) below.

Interested in contributing to solving any of these problems?

Join our newsletter to hear about podcasts with experts in these fields, job opportunities, and updates on our research on how you can have a big impact with your career.

You’ll receive about two emails per month, and you can unsubscribe with one click.

Long lists of potentially pressing global issues beyond our current priorities

There are many global problems we have not yet looked into at length, but which, upon further investigation, might turn out to be very promising for people to work on. Below we list some issues we’ve at least briefly considered.

We’d be keen to see more of our readers gain expertise and test out projects in these areas than currently do so, especially within the first set of issues below. This is both because we think the work might be directly valuable (especially for people with an unusually good personal fit for one of these areas or access to an unusually good opportunity), and because it presents a chance to discover new highly pressing issues.

For these reasons, we expect it makes sense for a significant minority of our readers (say 10–20%) to explore new areas like those listed below rather than focusing on our current priority problem areas. This would be the 10–20% who are relatively best suited to these areas, which probably means those with some kind of pre-existing interest. We discuss this more below.

Right now, we know very few people (who share our priorities) who are working on these areas, so if you find yourself choosing between one of the issues below and one of our highest-priority issues, and you have equally good opportunities and fit for each, we currently think working on one of these less explored but possibly very pressing issues could be your best bet.

We came up with this list by surveying seven advisors and combining their views with our own judgement. We’ve added some interesting sources with arguments about the importance of each area as leads to learn more, though we don’t always agree with everything these sources say.

Note that the issues within each list are not in any particular order, and that there may be some overlap between them.

Potential highest priorities

The following are some issues that seem like they might be especially pressing from the perspective of improving the long-term future. We think these have a chance of being as important for people to work on as our priority problems listed above, but we haven’t investigated them enough to know.

A large violent conflict between major powers such as the US, Russia, or China could be the most devastating event to occur in human history, and could result in billions of deaths. In addition, mistrust between major powers makes it harder for them to coordinate on arms control or ensure the safe use of new technologies. In general, it seems plausible that existential risks are heightened in war between powerful nations.

Researcher Stephen Clare estimates that the chance of great power conflict this century is around 45%, and that the chance of an extinction-level war is around 1%. We think that this 1% figure overstates the risk because it doesn’t consider that wars would be unlikely to continue once 90% or more of the population has been killed – but the 45% figure seems more plausible and very far from reassuring.

Though there is considerable existing work in this area, peacebuilding measures aren’t always aimed at reducing the chance of the worst outcomes. We’d like to see more research into how to reduce the chance of the most dangerous conflicts breaking out and the damage they would cause, as well as implementation of the most effective mitigation strategies.

Great power conflict is the subject of a large body of literature spanning political science, international relations, military studies, and history. Get started with accessible materials on contemporary great power dynamics — this blog post for a brief and simple explanation, this report from Brookings on the changing role of the US on the world stage, this podcast series on current military and strategic dynamics from the International Institute for Strategic Studies, and this talk on the risks from great power conflict using the scale, solvability, and neglectedness framework.

Useful books in this area include After Tamerlane: The Rise and Fall of Global Empires, 1400-2000, and Only the Dead: The Persistence of War in the Modern Age. You could also listen to our podcast with Chris Blattman on the five reasons wars happen.

International governing institutions might play a crucial role in our ability to navigate global challenges, so improving them has the potential to reduce risks of global catastrophes. Moreover, in the future we may see the creation of new global institutions that could be very long-lasting, especially if the international community trends toward more cohesive governing bodies — and getting these right could be very important.

The Biological Weapons Convention is an example of one way institutions like the UN can help coordinate states to reduce global risks — but it also demonstrates current weaknesses of this approach, like underfunding and weak enforcement mechanisms.

There doesn’t seem to be as much work on improving global governance as you might expect — especially with an eye toward reducing global catastrophic risks. Here are a few pieces we know of:

We’d be keen to see more research on what governance reforms might be best for improving the long-run future.

We often elect our leaders with ‘first-past-the-post’-style voting, but this can easily lead to perverse outcomes. Better voting methods could lead to better institutional decision-making, better governance in general, and better international coordination.
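
To make the vote-splitting failure mode concrete, here is a toy illustration (the candidate names and vote shares are invented for the example): under first-past-the-post, a candidate can win even though a clear majority of voters prefer one of their rivals head-to-head.

```python
from collections import Counter

# Hypothetical ranked ballots: each tuple is (number of voters,
# that bloc's preference order over candidates A, B, and C).
ballots = [
    (40, ["A", "B", "C"]),
    (35, ["B", "C", "A"]),
    (25, ["C", "B", "A"]),
]

# First-past-the-post counts only each voter's top choice.
fptp = Counter()
for n, ranking in ballots:
    fptp[ranking[0]] += n
fptp_winner = fptp.most_common(1)[0][0]

# Head-to-head comparison of A vs B across all ballots.
b_over_a = sum(n for n, r in ballots if r.index("B") < r.index("A"))
a_over_b = sum(n for n, r in ballots if r.index("A") < r.index("B"))

print(fptp_winner)         # A — wins the plurality count with 40 of 100 votes
print(b_over_a, a_over_b)  # 60 40 — yet a majority prefers B to A
```

Here the B and C blocs split the anti-A vote, so A wins under plurality rules even though B would beat A 60–40 in a two-way race — exactly the kind of outcome alternative voting methods aim to avoid.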

Despite these potential benefits, ideas in this space often get little attention. One reason might be that current political leaders — those with the most power to institute reforms — have little incentive to change the systems that brought them to power. This might make this area particularly difficult to make progress in, though we still think additional effort in this area may be promising.

To learn more check out resources from the Center for Election Science and our podcast episode with Aaron Hamlin.

A related issue is the systematic lack of representation of future generations’ interests in policy making. To learn more about this issue and about potential solutions, see this new paper by Will MacAskill and Tyler John. One group trying to address this issue in practice in the UK is the All Party Parliamentary Group for Future Generations.

There is also the issue of voting security, which helps prevent contested elections — discussed in our interview with Bruce Schneier.

AI systems in the future may be moral patients — that is, they could deserve moral consideration for their own sake. Why? The biggest reason we’re concerned about this is that they could become sentient — and so feel conscious pleasure, suffering, or other good and bad feelings.

If so, then we will need to ensure that the future goes well not only for humans and animals, but for AI systems themselves.

Mistreating sentient systems or allowing them to suffer — whether intentionally or accidentally, perhaps because we don’t know that they are sentient — could be a moral catastrophe, analogous to factory farming, but on a potentially much larger scale.

Whereas AI alignment and AI governance work seeks to ensure that the development of AI benefits humanity, work on artificial sentience seeks to ensure that the development of AI benefits AI systems themselves, or at least does not harm them.

It might sound a bit outlandish to think that AI systems could be sentient, and it’s true that we don’t have a great understanding of sentience/consciousness. However, many philosophers and consciousness researchers think there’s no reason in principle that an artificial system made from silicon couldn’t be sentient.

One way AI systems could be sentient is if they emulate the computational structure of the human brain. If we are conscious because of the computational structure of our brains (as is plausible), then digital people with the same computational structure would also be sentient. But AI systems that are very different from us might also have their own forms of sentience, in the same way that nonhuman animals like octopuses might.

We’re far from fully understanding this domain. Understanding when/how artificial systems could be conscious is even more difficult than understanding which nonhuman animals are sentient, because artificial systems can be even more architecturally different to us than animals, do not share our biological substrate, and do not share our evolutionary history.

Unlike with nonhuman animals, we are actively engaged in the process of designing artificial systems. And it seems very important to get right — imagine if we mistakenly think that some huge number of systems we create are non-sentient, or feeling pleasure, when really they are suffering. As AI systems continue to grow in both scale and capability, this issue will grow more and more pressing.

Work on artificial sentience can take the form of:

  • Increasing our understanding of consciousness and related issues — either via direct research or by field-building to encourage better work on these topics. Research topics span neuroscience and other sciences of biological minds, artificial intelligence, philosophy of mind, and ethics.
  • Thinking about the appropriate institutions and norms for making sure that the development of digital minds, if it happens, is managed well. Navigating these issues is especially important if the majority of future beings will be digital rather than biological.

Despite longstanding interest in the question of whether AI systems could be conscious — dating back to the very beginning of the field of AI — rigorous work on artificial sentience is surprisingly neglected, in part because it falls at the intersection of several fields of inquiry. The study of consciousness is also beset with not just empirical uncertainty, but conceptual uncertainty as well. A small group of researchers, however, are doing work focused on the question of artificial consciousness. Institutions where this work happens include the Digital Minds Research Group at the Future of Humanity Institute and the Sentience Institute. Dedicated journals include The Journal of Artificial Intelligence and Consciousness.

To learn more, check out:

It seems possible that humanity will at some point settle outer space. If it does, the sheer scale of the accessible universe makes what it does there enormously important.

Currently there is no agreement on how to decide what happens in space, should settlement become possible. The Outer Space Treaty of 1967 prohibits countries from claiming sovereignty over anything in space, but attempts to agree on more than that have failed to achieve consensus.

Who ends up in control of resources in space will naturally shift how they are used, and might influence vast numbers of lives. Furthermore, having agreements on how space is divided between groups might avoid a major conflict or a harmful rush to claim resources, and instead foster cooperation or compromise between different parties.

To make one possible failure mode more concrete: a superpower may be alarmed to see a rival on the verge of claiming and settling Mars, anticipating that it would eventually be eclipsed economically and militarily.

Despite the huge stakes, governance of space is an extremely niche area of study and advocacy. As a result, major progress could probably be made by a research community focused on this issue, even just by applying familiar lessons from related fields of law and social science.

Arguably it is premature to work on this problem because actual space settlement appears so far off. While this is an important point, we don’t think it is decisive, for four reasons.

First, legal arrangements like constitutions and international treaties are often ‘sticky’ because they are difficult to renegotiate. Second, it may be easier to agree on fair processes for splitting resources in space while settlement remains far in the future, as it will be harder for interest groups to foresee what peculiar rules would benefit them in particular. Third, humanity may experience another ‘industrial revolution’ in the next century driven by AI or atomic scale manufacturing, which would allow space settlement to begin sooner than seems likely today. Fourth, once settlement becomes possible there will likely be a rush to agree on how to manage the process, and the more preparation has been completed ahead of that moment the better the outcome is likely to be.

See our full problem profile for more on the case for and against working on space governance and how you might be able to help.

The case here is similar to the case for improving institutional decision-making: better reasoning and cognitive capacities usually make for better outcomes, especially when problems are subtle or complex. And as with institutions, work on improving individual decision-making is likely to be helpful no matter what challenges the future throws up.

Strategies for improving reasoning might include producing tools, training programmes, or research into how to make better forecasts or decisions, or come to sensible views on complex topics.

Strategies for improving cognition might take a variety of forms, e.g. researching safe and potentially beneficial nootropics like creatine. This cause profile on research into pharmacological cognitive enhancement (including but not limited to nootropics) argues this research could rival global health work in its potential for helping people today, and perhaps have longer-run benefits too.

Although focusing on individuals seems to us like it will usually be less effective for tackling global problems than taking a more institutional approach, it may be more promising if interventions can influence large segments of the population or be targeted toward the most influential people. See the Update Project for an example of the latter kind of strategy.

Many of the biggest challenges we face have the character of global ‘public goods’ problems — meaning everyone is worse off because no particular actors are properly incentivised to tackle the problem, and they instead prefer to ‘free-ride’ on the efforts of others.

If we could make society better at providing public goods in general, we might be able to make progress on many challenges at once. One idea we’ve discussed that both has promise and faces many challenges is quadratic funding, but the space for possible interventions here seems enormous.
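
As a rough sketch of how quadratic funding works (the contribution amounts below are made up for illustration): a project’s total funding is set to the square of the sum of the square roots of its individual contributions, with a matching pool paying the difference. This rewards broad support — many small contributions attract a much larger subsidy than one large contribution of the same total.

```python
import math

def quadratic_funding(contributions):
    """Total funding under the quadratic funding rule:
    (sum of square roots of individual contributions) squared.
    The matching subsidy is the total minus what was contributed."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    subsidy = total - sum(contributions)
    return total, subsidy

# A broad base of small donors vs. one large donor, both raising $100:
broad_total, broad_match = quadratic_funding([1] * 100)  # 100 donors of $1
single_total, single_match = quadratic_funding([100])    # 1 donor of $100

print(broad_total, broad_match)    # 10000.0 9900.0 — heavily matched
print(single_total, single_match)  # 100.0 0.0 — no match at all
```

The steep difference between the two cases is also the source of the mechanism’s main challenge: it is highly vulnerable to collusion and to one donor splitting their contribution across many fake identities.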

Another potential approach here is improving political processes. Governments have enormous power and are the bodies we most often rely on to tackle public goods problems. Shifting how this power is used even a little can have substantial and potentially long-lasting effects. Check out our podcast episode with Glen Weyl to learn about current and fairly radical ideas in this space.

If you’re interested in tackling these issues, gaining experience in advocacy or politics, studying economics, or learning product design may all be useful first steps.

We’d be keen to see more research into balancing the risks and benefits of surveillance by states and other actors, especially as technological progress makes surveillance on a mass scale easy and affordable.

Some have argued that sophisticated surveillance techniques might be necessary to protect civilization from risks posed by advancing technology with destructive capabilities (for example see Nick Bostrom’s article ‘The Vulnerable World Hypothesis’); at the same time, many warn of the dangers widespread surveillance poses not only to privacy but to valuable forms of political freedom (example).

Because of these conflicts, it may be especially useful to develop ways of making surveillance more compatible with privacy and public oversight.

Both the risks and benefits of advances in atomically precise manufacturing seem like they might be significant, and there is currently little effort to shape the trajectory of this technology. However, there is also relatively little investment going into developing atomically precise manufacturing, which reduces the urgency of the issue.

To learn more, read our problem profile on risks from atomically precise manufacturing.

Throughout human history, dominant groups have many times given less consideration to the interests of others — often minorities in their society, or those with less power.

But over the last 300 years, campaigns for equal consideration for people of different genders and sexualities, people of different races, ethnicities, and faiths, and people with disabilities have made significant progress.

Though these campaigns are still works in progress, these examples of ‘moral circle expansion’ suggest that positively shaping people’s values is possible, and that there could be promising opportunities to continue this progress.

If positive values like altruism and concern for those whose interests are often under-considered — including members of the groups mentioned above, as well as people from different countries, future generations, nonhuman animals, or potential machine intelligences — were more widespread, this seems likely to help with a range of issues, including future problems that haven’t come up yet.

There could also be ways that the values held by society today or in the near future get ‘locked in’ for a long time, for example in constitutions, making it more important that positive values are widespread before such a point.

We’re unsure about the range of things an impactful career aimed at promoting positive values could involve, but one strategy would be to pursue a position that gives you a platform for advocacy (e.g. journalist, blogger, podcaster, academic, or public intellectual) and then use that position to speak and write about these ideas.

The Sentience Institute offers a number of resources on moral circle expansion, with a focus on spreading concern to nonhuman animals.

In the context of cause prioritization within the effective altruism community, some have argued for the importance of spreading positive values through working to improve the welfare of farmed animals (comparing it to AI safety research), while others have pushed back against this view. Others have argued against moral advocacy being desirable in general.

We might be able to significantly increase the chance that, if a catastrophe does happen, civilization survives or gets rebuilt. However, measures in this space receive very little attention today.

To learn more, see our podcast episode on the development of alternative food sources, this paper on refuges and our podcast episode with Paul Christiano.

An ‘s-risk’ is a risk of an outcome much worse than extinction. Research to work out how to mitigate these risks is a subset of global priorities research that might be particularly neglected and important. Read more.

This is a strategy for creating artificial intelligence by replicating the functionality of the brain in software. If successful, whole brain emulation could enable dramatic new forms of intelligence — in which case steering the development of this technique could be crucial. Read a tentative outline of the risks associated with whole brain emulation.

Bryan Caplan has written about the worry that ‘stable totalitarianism’ could arise in the future, especially if we move toward a more unified world government (perhaps in order to solve other global problems) or if certain technologies — like radical life extension or better surveillance technologies — make it possible for totalitarian leaders to rule for longer.

We think more research in this area would be valuable. For instance, we’d be excited to see further analysis and testing of Caplan’s argument, as well as people working on how to limit the potential risks from these technologies and political changes if they do come about. Listen to our podcast with Caplan for some discussion.

A blog post by David Althaus and Tobias Baumann argues that when people with some or all of the so-called ‘dark tetrad’ traits — narcissism, psychopathy, Machiavellianism, and sadism — are in positions of power or influence, this plausibly increases the risk of catastrophes that could influence the long-term future.

Developing better measures of these traits, they suggest — as well as good tests of these measures — could help us make our institutions less liable to be influenced by such actors. We could, for instance, make ‘non-malevolence’ a condition of holding political office or having sway over powerful new technologies.

While it’s not clear how large a problem malevolent individuals in society are compared to other issues, there is historical precedent for malevolent actors coming to power — Hitler, Stalin, and Mao plausibly had strong dark tetrad traits — and perhaps this wouldn’t have happened if there had been better precautions in place. If so, this suggests that careful measures could prevent future bad events of a similar scale (or worse) from taking place.

Liberal democracies seem more conducive to intellectual progress and economic growth than other forms of governance that have been tried so far, and perhaps also to peace and cooperation (at least with other democracies). Political developments that threaten to shift liberal democracies toward authoritarianism therefore may be risk factors for a variety of disasters (like great power conflicts), as well as for society generally going in a more negative direction.

A great deal of effort — from political scientists, policymakers and politicians, historians, and others — already goes into understanding this situation and protecting and promoting liberal democracies, and we’re not sure how to improve upon this.

However, there are likely to be some promising interventions in this area that are currently relatively neglected, such as voting reform (discussed above) or improving election security in order to increase the efficacy and stability of democratic processes. A variety of other work, like good journalism or broadly promoting positive values, also likely indirectly contributes to this area.

Listen to our podcast episode with Mike Berkowitz, executive director of the Democracy Funders Network, to learn more.

The technology involved in recommender systems — such as those used by Facebook or Google — may turn out to be important for positively shaping progress in AI safety, as argued here.

Improving recommender systems may also help provide people with more accurate information and potentially improve the quality of political discourse.

It may be that the best opportunities for doing good from a longtermist perspective lie far in the future — especially if resources can be successfully invested now to yield greater leverage later. However, right now we have no way of effectively and securely investing resources long-term.

In particular, there are few if any financial vehicles that can be reasonably expected to persist for more than 100 years while also earning good investment returns and remaining secure. We’re unsure in general how much people should be investing vs. spending now on the most pressing causes. But it seems at least worthwhile to look more into how such philanthropic vehicles might be set up.

Founders Pledge — an organisation that encourages effective giving for entrepreneurs — is currently exploring this idea and is actively seeking input.

Learn more about this topic by listening to our podcast episode with Philip Trammell.

We’d be excited to see more discussion and exploration of many of these areas from the perspective of trying to improve the long-term future, and hope to help facilitate such exploration going forward.

Want to work on one of these problems?

If you’re interested in pursuing a career focusing on any of the problems mentioned so far in this article, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on the same issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Other longtermist issues

We’re also interested in the following issues, but at this point think that work on them is likely somewhat less effective for substantially improving the long-term future than work on the issues listed above.

Speeding up economic growth doesn’t seem as useful as more targeted ways to improve the future, and in general we favour differential development. However, speeding up growth might still have large benefits, both for improving long-term welfare, and perhaps also for reducing existential threats. For debate on the long-term value of economic growth check out our podcast episode with Tyler Cowen.

The causes of growth already see considerable research within economics, though this area is still more neglected than many topics. Potential strategies for increasing growth include trade reform (which also has the potential to reduce conflict), land use reform, and increasing aid spending and effectiveness.

A related field that might be similarly or perhaps more promising to work in is progress studies, which investigates the causes of economic, technological, scientific, cultural, and organisational advancement. (See also this list of resources.) Progress studies is a relatively new discipline, so there might be more opportunities for additional people to make valuable contributions than in traditional economics.

Scientific research has been an enormous driver of human welfare. However, science policy and infrastructure are not always well-designed to incentivize research that most benefits society in the long-term.

For example, we’ve argued that some scientific and technological developments can increase risks of catastrophe, which better institutional checks might be able to help reduce.

More prosaically, scientific progress is often driven more by what is commercially valuable, interesting, or prestigious than by considerations of long-run positive impact. In general, we favour differential development in science and technology over indiscriminate progress, which better science policies or institutional design may help enable.

This suggests that there is room for improving systems shaping scientific research and increasing their benefits going forward. We’re particularly keen on people creating structures or incentives to push scientific research in more positive and less risky directions. Read more about this problem area, or check out example research questions on improving science policy and infrastructure.

Reducing harmful restrictions on migration has the potential to greatly increase economic growth, intercultural understanding, and cosmopolitanism — as well as help migrants directly. However, it also faces strong opposition and so carries political risk.

Read more from the Open Philanthropy Project, OpenBorders.info, or see the book Open Borders: The Science and Ethics of Immigration.

Recent advances in the science of ageing have made it seem more feasible than was previously thought to radically slow the ageing process and perhaps allow people to live much longer. If these efforts are successful, some have argued there would be positive long-run effects on society, as people would be led to think in more long-term ways and could keep working productively past retirement age, which could be beneficial for intellectual and economic growth.

That said, the case for long-term impact here is highly speculative and many people think more anti-ageing research could be totally ineffective (or perhaps even negative). Anti-ageing research also might soon be able to draw substantial private investment, meaning it will be less neglected. But some have also argued that’s a reason to work on it now, because it may need some early successes before it can become a self-sustaining field. Read more about this cause area, or see this interview with anti-ageing researcher and advocate, Aubrey de Grey.

Institutional quality seems to play a large role in development, so if there were a way to make improvements to institutions in developing countries, this could be an effective way to improve many people’s lives.

For instance, legal and political changes in China seem to have been key to its economic development from the 1980s onwards. For a discussion of the importance of governing institutions for economic growth, see our interview with a group trying to found cities with improved legal infrastructure in the developing world.

Keep in mind, however, these efforts are often best pursued by citizens of the relevant countries. There is also substantial disagreement about which institutions are best, and the answers will vary depending on a country’s circumstances and culture.

Expanding to other planets could end up being one of the most consequential things humanity ever does. It could greatly increase the number of beings in the universe and might reduce the chance that we go extinct by allowing humans to survive deadly catastrophes on earth. It may also have dramatic negative consequences, for instance if we fail to take into account the welfare of beings we cause to exist in the process, or if settlement turns out to increase the risk of eventual catastrophic conflict. (Read more.)

However, independent space colonies are likely centuries away, and there are more urgent challenges in the meantime. As a result, we think that right now resources are generally better used elsewhere. Still, there does seem to be a chance that in the long run research on the question of whether space settlement is likely to be good or bad — and how good or bad — could have significant impacts.

Lie detection technology may soon see large improvements due to advances in machine learning or brain imaging. If so, this might have significant and hard-to-predict effects on many areas of society, from criminal justice to international diplomacy.

Better lie detection technology could improve cooperation and trust between groups by allowing people to prove they are being honest in high-stakes scenarios. On the other hand, it might increase the stability of non-democratic regimes by helping them avoid hiring anyone who isn’t a ‘true believer’ in their ideology, or remove those who aren’t.

Wild animals are very numerous, and they often suffer due to starvation, heat, parasitism, and other issues. Almost nobody is working to figure out what, if anything, can be done to help them, or even which animals are likely to be suffering most. Research on invertebrates might be especially important, as there is such an enormous number of them.

It’s also possible that because this issue is unintuitive and challenges the idea that what’s natural is, by default, innocuous, advocating for wild animal welfare could help us make moral progress. In particular, it may help set precedents for work on digital sentience, which may become a pressing issue in the future.

Learn more in our interview with Persis Eskander and read some early research from the Foundational Research Institute here.

It seems plausible that political discourse is significantly affected by the way many people now receive their information — through algorithms owned and run by social media companies.

This could have a number of harmful effects, including:

  • People’s views could become increasingly far from reality. If voters form highly inaccurate and difficult-to-change views about the world, this could hurt policy over a long period of time.
  • Governments could make information readily available — for example, about an ongoing pandemic — but if people cannot tell whether the information they are receiving is reliable, they won’t act on this information. This is an example of a problem caused by a lack of epistemic security.

We aren’t sure how important this is, or how to best go about addressing this problem.

It seems likely that more research would be valuable to:

  • Assess how strong the evidence is for social media (or the internet in general) driving inaccurate views.
  • Identify the extent to which this could impact the long-term future.

However, the issue does not seem particularly neglected, so it’s likely hard to make a marginal difference. For example, Facebook, YouTube, Twitter, and other platforms all have policies for tackling misinformation (though of course we might expect conflicts of interest to reduce the effectiveness of these efforts). The issue has also been addressed by prominent academics, is expressly political, and is regularly in the news. This reduces the chances that this is one of the world’s most pressing problems for additional people to work on. That said, we haven’t looked into this problem that much, and there may be innovative ways people can make a difference here that we’re not aware of.

Listen to our podcast with Tristan Harris, cofounder of the Center for Humane Technology, to learn more.

Other global issues

We think the following issues are quite important from a short- or medium-term perspective, and that work on them might well be as impactful as additional work focused on reducing the suffering of animals from factory farming or improving global health.

However, we don’t prioritise them as highly as those listed above because they seem somewhat less neglected, and because work on them seems less likely to substantially impact the very long-run future.

Improving mental health seems like one of the most direct ways of making people better off, and there appear to be many promising areas for research and reform that have not yet been adequately explored — especially with regard to new drug therapies and improving mental health in the developing world. See the Happier Lives Institute for more.

There is also some chance that like economic growth, better mental health in a population could have positive indirect effects that accumulate over time. Read a preliminary review of this cause area and check out our podcast episode with Spencer Greenberg to learn more.

Basic scientific research in general has had a large positive effect on welfare historically. Major breakthroughs in biomedical research specifically could lead to people living much longer, healthier lives. You might also be able to use training in biomedical research to work on other promising areas discussed above, like biosecurity or anti-ageing research. Read more.

Most people lack access to adequate pain relief, which leads to widespread suffering due to injuries, chronic health conditions, and disease. One natural approach is increasing access to cheap pain relief medications that are common in developed countries, but often not available in the developing world. One group working in this area is the Organization for the Prevention of Intense Suffering. Read more.

We discuss extreme risks of climate change — such as severe warming and geopolitical risks — in our writeup of the area.

Climate change also threatens to create many smaller problems and make other global problems worse, for example by creating frictions between countries due to the movement of refugees. While climate change is not as neglected as other areas we cover, we are highly supportive of reducing carbon emissions through research, better technology, and policy interventions. Read more.

Smoking takes an enormous toll on human health — accounting for about 6% of all ill-health globally according to the best estimates. This is more than HIV and malaria combined. Despite this, smoking is on the rise in many developing countries as people become richer and can afford to buy cigarettes.

Possible approaches include advocating for cigarette taxes or campaigns to discourage smoking, and development of e-cigarette technology. Read more.

There are also some global problems that — while important — seem to us to be particularly crowded relative to how impactful work on them is likely to be. What we’ve learned about these problems suggests to us that resources are usually better spent elsewhere. In particular, we think these problems are less promising to work on than global health:

  • Education. Improving education can have positive impacts, and in some exceptional cases might be a great option. However, it seems to us that resources are usually better spent elsewhere. In the context of poor countries, work in education seems likely to be considerably less impactful than improving health. And in rich countries, education receives a comparatively large amount of resources, and so seems to us to be crowded and hard to affect.
  • Resource scarcity. It’s unclear how serious resource scarcity really is, and substantial effort already goes into avoiding resource shortages by profit-motivated actors. This work is likely dominated by work to mitigate climate change or other environmental problems. Read more here and here.

If you’d like to read more about other near-termist problem areas, you might be interested in checking out these cause reports from Open Philanthropy.

Which issue should you focus on?

Determining for yourself which issues are most pressing

We gave our best guesses above about which global problems are most pressing for more people to work on. You might disagree, or want to do some of your own investigation before you make any decisions.

See this brief guide to investigating for yourself which problems are most pressing, which also links to a few other resources that can help you think things through.

Considering your personal fit

We encourage our readers to weigh how pressing a problem area is in general along with their personal fit for the area — how successful they are likely to be compared to the average person working on the problem, based on their skills and experience. You can think about your expected long-term impact in an area roughly as the product of how pressing the problem is that you’re working on and how much you in particular will be able to contribute to solving it.
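As a rough illustration, this product model can be sketched in a few lines of code. All the numbers below are entirely hypothetical, chosen only to show how a large difference in personal fit can outweigh a difference in how pressing two problems are:

```python
def expected_impact(pressingness: float, personal_fit: float) -> float:
    """Relative expected impact, modelled as the product of how pressing
    a problem is and your ability to contribute to solving it."""
    return pressingness * personal_fit

# Hypothetical example: problem A is rated 10x as pressing as problem B,
# but your fit for B is 20x better than your fit for A.
impact_a = expected_impact(pressingness=10.0, personal_fit=1.0)
impact_b = expected_impact(pressingness=1.0, personal_fit=20.0)

print(impact_a)  # 10.0
print(impact_b)  # 20.0 -- under this toy model, B is the better choice
```

Of course, real comparisons involve deep uncertainty on both factors, so this is only a way of organising your thinking, not a formula to compute literally.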

If you’re coordinating as a part of a community, considering your comparative advantage — your fit for different areas compared to the community as a whole — may also be important.

For more on how to assess your personal fit and related topics, see our article Personal fit: why being good at your job is even more important than people think.

Advantages of spreading out over different issues

We do not think all our readers (let alone everyone) should work on our top-ranked problems. Differing personal fit alone would mean that our readers should spread out over different problem areas, even if they were to all agree with our ranking of the general pressingness of the issues.

Moreover, as we gain more readers, there will be additional reasons for them to spread out. Two of the most important reasons are:

  • As more people work on an issue, there are diminishing returns to additional work. This means that a group of people that’s large compared to the capacity of an issue to absorb people will start to run out of fruitful opportunities to make progress on that issue, making it better for them to be spread out into other areas.
  • If you work with others, there is value of information in exploring new world problems — if you explore an area and find out that it’s promising, other people can enter it as well.

We cover this subject in more detail in our article on community coordination.

Among people who follow our advice, we aim to help a majority shoot for one of the highest-priority problem areas we listed above, but we’d also like to see a significant fraction aim for opportunities in the second and third groups. As we said above, we also think that some of our readers — perhaps 10–20% — will likely have the most impact by working on other global issues, especially those we list as potentially as pressing as our highest priority areas.

We think the reasons to spread out over different problem areas also apply to people aiming to take an ‘effective altruism’ approach to doing good — perhaps even more so. For instance, if the effective altruism community becomes associated with a single issue, that could reduce its potential to grow and adapt in the future, which is an additional reason for people who take this approach to work on a variety of problem areas.

All that said, we think our highest-priority areas are currently neglected relative to how important they are, even within the effective altruism community, so we plan to continue to focus most of our efforts on them for now.

How do our organisational priorities overlap with these lists?

As a team, we also have limited capacity to provide advice and research. So, we try to focus a majority of our effort on our highest-priority areas. We then aim to put a smaller amount of effort into the second-highest-priority areas, and the remainder into other issues. For instance, most of our advising is focused on our highest-priority areas, but we also cover many other global issues on our podcast.

As our staff and readership grows, or if in the future our highest-priority problems become less neglected or we learn more about other pressing problems, we may prioritise a wider range of issues.

Frequently asked questions

Holding all else equal, we think that additional work on the most pressing global problems can be 100 to 1000 times more valuable in expectation than additional work on many more familiar social causes, where your impact is typically limited by the smaller scale of the problem or the best opportunities for improving the situation already being taken by others.

For this reason, our most important advice for people who want to make a big positive difference with their careers is to choose a very pressing problem to work on. This page is meant to help readers with that choice. Read more about the importance of choosing the right problem.

(Figure: global problems log graph)

This list is based on input from our advisors (especially at the Global Priorities Institute and the Open Philanthropy Project), judgement calls informed by our moral and methodological assumptions, and our own research:

This key ideas series is designed to help you think this through, especially the later sections on decision making and career planning.

As discussed in this article:

Your impact = pressingness of the problem × your ability to contribute

Different problems need different skills and expertise, so people’s ability to contribute to solving them varies dramatically. To learn more about what’s most needed to address different problems, click through to read the profiles above.

To explore your own skills and other aspects of your personal fit (especially early in your career) and find your comparative advantage, we encourage you to create a plan and test out options.

If you already have experience in an area, see our article about how you might best be able to apply it.

Read next: The best solutions are far more effective than others

Many social interventions don’t have much impact; but the best are enormously effective. How taking a ‘hits based’ approach to finding solutions can enable you to have far more impact.
