Your career can help solve the world’s most pressing problems

At 80,000 Hours, we do research into how people can most effectively use their careers to help solve the world’s most pressing problems. We use this research to provide one-on-one support to help people pursue career paths with a greater impact.

This page is a summary of what we’ve learned so far. It links to more detailed explanations of our views and reasoning, and other useful resources.

We start with the big picture and end with practical next steps. We cover: (i) the ethical views that inform our advice, (ii) the global problems we currently think are most pressing to work on, (iii) the careers we think most effectively address these problems, (iv) some advice on long-term career strategy that’s useful for whatever problems you focus on, and (v) a process for planning your career in light of your strengths and priorities.

Our advice is tailored for graduates aged 20-35 who want to have a large-scale positive impact with their careers, though much of it is relevant to a broader audience. You can find out more about us and how we were started here.

This series is a work in progress. We are currently working on this page, and drafting new articles. Join our newsletter to get notified when we release updates.

In our view, ‘having a positive impact’ is about promoting long-term welfare. However, we’re highly uncertain about this definition, so in practice we aim to consider other perspectives.

We’ve started by trying to identify the most pressing global problems to work on based on this definition. These are not necessarily the world’s biggest and most well-known problems, but rather those where additional effort can make the biggest long-term difference at the margin. Right now, we think these involve the risk of global catastrophes that could have permanent negative consequences — i.e., ‘existential risk’. Nuclear war and runaway climate change are the two most well-known catastrophic risks, and both may contribute to existential risk. However, we think that, all else equal, additional people can have even more impact by working to reduce the risk of large scale pandemics and to positively shape the development of advanced artificial intelligence, mainly because these areas are so much more neglected, which has left many of the most promising interventions untried.

Because we are very uncertain about these priorities, we also provide support to people building the new field of global priorities research, as well as people working in other high-impact areas.

We currently think some of the most promising career paths involve addressing these problems through work in carefully chosen areas of research, government policy and nonprofits. For those with the flexibility to pursue a new career path, we especially recommend considering whether one of our ‘priority paths’ might be a good fit in the long-term, perhaps after spending several years gaining skills. Those who already have specialised skills or are further along in their careers should also consider applying those skills to the most pressing global problems — this will often involve taking approaches that differ from our general suggestions. People in any field can also contribute to whichever problem they think is highest priority by financially supporting effective organisations in that area. Our research is ongoing and there are many excellent paths we’ve not written about — we discuss other promising options later.

Once you have ideas about which career paths seem most promising to you, aim to identify the option where you have the best chance of excelling in the long term, i.e., where you have the best ‘personal fit’. It’s hard to predict your personal fit, so look for cheap ways to test different paths. If still in doubt, one good approach is to enter the path that would be highest-impact if you perform towards the top end of your expectations. If it works out, you can continue for many years. If it doesn’t, you can limit the downside by switching to something else (ideally a backup plan you’ve identified ahead of time). This strategy only works, however, if you’ve also tried to avoid risky options that might do more harm than good.
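To make the ‘aim high, limit the downside’ logic concrete, here is a toy expected-value sketch. Every probability and impact figure below is a made-up assumption for illustration, not an estimate of any real career:

```python
# Toy expected-value comparison of an ambitious path (with a backup plan)
# versus a safer, more predictable path. All numbers are illustrative
# assumptions, not estimates of real careers.
p_excel = 0.2          # assumed chance you perform at the top of your expectations
impact_if_excel = 100  # arbitrary impact units if the ambitious path works out
impact_backup = 10     # impact after switching to a pre-identified backup plan

expected_ambitious = p_excel * impact_if_excel + (1 - p_excel) * impact_backup
expected_safe = 15     # assumed impact of a lower-variance path

print(expected_ambitious, expected_safe)  # 28.0 vs 15
```

With these made-up numbers the ambitious path wins in expectation, but only because the backup plan caps the downside. If failure instead caused harm (a negative impact), the comparison could easily flip, which is why avoiding options that might do more harm than good matters.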

If you’re unsure which path to pursue, another option is to build career capital: skills, connections and credentials that are likely to be relevant to many high-impact paths in the future. You can also sometimes progress faster in your top option by first focusing on career capital. We cover this and other strategic considerations below.

To integrate all of these considerations, and work out your best options in light of your strengths and priorities, you can use our step-by-step process. Once you have a best-guess option, go ahead and pursue it. Bear in mind that career decisions are not fixed — we recommend that you review your career every 1-2 years. Between reviews, focus on excelling in your current path.

Our ethical views: What does it mean to ‘make a difference’?

At 80,000 Hours, we help people find careers that more effectively ‘make a difference’, ‘do good’, or ‘have a positive impact’ on a large scale.

In this section, we lay out what we mean by these phrases. In a nutshell, we think ‘making a difference’ is about promoting long-term welfare. However, we’re highly uncertain about this definition, so in practice we aim to consider other perspectives.

Our advice doesn’t entirely depend on the philosophical views in this section, but we think it’s important to be transparent about them. Alternatively, skip ahead to our practical suggestions.

When it comes to making a difference, we aim to be impartial in the sense that we give equal weight to everyone’s interests. This means we strive to avoid privileging the interests of others based on arbitrary factors such as their race, gender, or nationality, as well as where or even when they live. In addition, we think that the interests of many non-human animals should be given significant weight, although we’re unsure of the exact amount. Because we aim to avoid privileging anyone’s interests based on when they were born, we always consider the impact our actions have on future generations, in addition to their effects on people alive today.

From this perspective, we aim to increase the expected welfare of others by as much as possible, i.e., enable more individuals to live flourishing lives that are long, healthy, fulfilled, and free from avoidable suffering.

This means that, roughly, how much difference an action makes depends on how much it improves others’ welfare and how many others are helped.

As individuals, we all have goals besides making a difference in this way. We care about our friends, personal projects, other moral aims, and so on. But we’d like to see more people approach their careers as an opportunity to do good from an impartial perspective, and this perspective is the focus of our research and advice.

The average species lasts for 1-10 million years. Homo sapiens has been around for only 200,000. With the benefit of technology and foresight, civilisation could, in principle, survive for at least as long as the earth is habitable — probably hundreds of millions of years.

The possibility of a long future means there will, in expectation, be far more people in the future than there are alive today. We think future generations clearly matter, and impartial concern most likely implies their interests matter as much as anyone’s.

If our actions can predictably affect future generations in non-trivial ways, then because the welfare of so many people is at stake, these effects are likely what matter most about our actions from a moral perspective.

If this reasoning is correct, it would imply that approaches to improving the world should be evaluated mainly in terms of their potential long-term impact, over thousands, millions, or even billions of years. In other words, the question ‘how can I have a positive impact?’ should mostly be replaced with ‘how can I best make the very long-term future go well?’. These arguments and their implications are studied as part of an emerging school of thought called longtermism.

It’s difficult to predict the long-term effects of our actions, but we think it’s clear that the interests of future generations are neglected by most people and institutions today, suggesting that outstanding opportunities to help may still be untaken. We also think that actions we take today can have effects that last for a very long time, and that we can make educated guesses about what these effects are likely to be. For example, as we cover more in the next section, we can take steps to make it less likely that civilisation ends, which would irreversibly deprive future generations of the chance to flourish. We think there may be other ways to have a foreseeably positive impact on the long-term future as well.

We remain unsure about many of these arguments, but overall we’re persuaded that focusing more on the very long-term effects of our actions is one of the most important ways we can do more good. Such a radical claim requires much more argument, and we outline the considerations for and against it, as well as list further reading, in our full article on this topic.

As covered, we think that the most important thing for us to focus on from an impartial perspective is increasing long-term welfare. However, we are not sure that this is the only thing that matters morally.

Some moral views that were widely held in the past are regarded as flawed or even abhorrent today. This suggests we should expect our own moral views to be flawed in ways that are difficult for us to recognise. What’s more, there is still significant moral disagreement within society, among contemporary moral philosophers, and, indeed, within the 80,000 Hours team. It’s also extremely difficult to know all the ethical implications of our actions, and grand projects to advance abstract ethical aims often go badly.

As a result, we think it’s important to be modest about our moral views, and in the rare cases where there’s a conflict, try very hard to avoid actions that seem seriously wrong from a common-sense perspective. This is both because such actions might be wrong in themselves, and because they seem likely to lead to worse long-term consequences.

More generally, we aim to uphold cooperative norms and to factor ‘moral uncertainty’ into our views. We do the latter by taking into account a variety of reasonable ethical perspectives, rather than simply acting in line with a single point of view.

For these reasons, we don’t exclusively seek to promote long-term welfare. Rather, we do what we can to make everyone better off in ways that are as consistent as possible with common sense ethics, and without sacrificing anything that might be of comparable moral importance.

Global priorities: What are the most pressing problems to work on?

Now that we have a sense of what ‘making a difference’ means, we can ask which career options make a difference most effectively.

We think that the most important single factor that determines the expected impact of your work is probably the issue you choose to focus on — whether that’s climate change, education, technological development or something else.

It’s harder to have a big impact on commonly supported causes because work in most areas has diminishing marginal returns. In other words, if an area already receives plenty of attention, then there will usually already be people working on the most promising interventions.

Although we’d like to see more people working on many global problems, we think additional people can have the most impact by focusing on the issues that are most neglected relative to how high the stakes are and the number of promising opportunities to make progress.

So, what are the most neglected and tractable issues that have the biggest stakes for long-term welfare?

Our current view of the world’s most pressing problems

In the 1950s, the large-scale production of nuclear weapons meant that, for the first time, a few world leaders gained the ability to kill hundreds of millions of people — and possibly many more if they triggered a nuclear winter, which would make it nearly impossible to grow crops for several years. Since then, the possibility of runaway climate change has joined the list of catastrophic risks facing humanity.

During the next century we may develop new transformative technologies, such as advanced artificial intelligence and bioengineering, that could bring about a radically better future — but which may also pose grave risks.

Previously, we focused on improving near-term global health, and we still think it’s an important cause. However, over the past eight years, we’ve come to realise that the present generation is capable of putting the entire future of civilisation at stake if it doesn’t wisely navigate the development of these technologies.

In combination with our growing confidence in longtermism, this has persuaded us that the most important challenge of the next century is likely to be to reduce ‘existential risk’ — the risk from events that would drastically damage the long-term future of humanity.

There are several types of existential risk. Currently, we’re most concerned by the risk of global catastrophes that might lead to billions of deaths and threaten to permanently end civilisation. There are several reasons we think it’s so urgent to address these risks.

First, because of the power of the new technologies noted above, we think that the probability of this kind of catastrophe occurring in our lifetime is too big to ignore.

Second, it seems like such an event would be among the worst things that could happen. This is especially true if one takes a longtermist perspective, because extinction would also mean the loss of the potential welfare of all future generations.

Third, some of these risks are highly neglected. For instance, less than $50 million per year is devoted to the field of AI safety or work specifically targeting global catastrophic biorisks. By comparison, billions or trillions of dollars go into more familiar priorities, such as international development, poverty relief in rich countries, education, and technological development. This makes the former fields perhaps more than a factor of 1000 more neglected.
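As a rough sanity check on that neglectedness claim, here is the arithmetic with round numbers. The $50 million figure is from the text above; the $100 billion comparison figure for familiar priorities is an assumption chosen for illustration:

```python
# Back-of-the-envelope neglectedness comparison. The comparison figure for
# familiar priorities is an assumed round number, not a sourced estimate.
ai_and_biorisk_funding = 50e6        # < $50M/year (figure from the text)
familiar_priorities_funding = 100e9  # assumed ~$100B/year on familiar priorities

neglectedness_factor = familiar_priorities_funding / ai_and_biorisk_funding
print(f"{neglectedness_factor:,.0f}x more funding")  # 2,000x
```

Even with a conservative $100 billion comparison point the gap is a factor of 2,000, and with trillions on the other side the ‘more than a factor of 1000’ claim follows easily.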

This neglect suggests that a comparatively small number of additional people working on these risks could significantly reduce them. We suggest specific ways to help in the next section.

This said, we remain uncertain about this picture. Many of the ‘crucial considerations’ that led us to our current priorities were only recently identified and written about. We may yet learn of other ways to increase the probability of a positive long-term future and reduce the chance of widespread future suffering, that seem more promising to address than the existential risks we currently focus on.

For these reasons, we also work to support those creating the new academic field of global priorities research, which draws on economics, philosophy and other disciplines to work out what’s most crucial for the long-term future.

In addition, we encourage people to work on ‘capacity-building’ measures that will help humanity manage future challenges, whatever those turn out to be. These measures could involve improving institutional decision making and building the ‘effective altruism’ community.

Some other issues we’ve focused on in the past include ending factory farming and improving health in poor countries. They seem especially promising if you don’t think people can or should focus on the long-term effects of their actions.

There are many issues we haven’t been able to look into yet, so we expect there are other high-impact areas we haven’t listed. We have a list of candidates on our problem profile page, and we’d be excited for people to explore some of these as well as other areas that could have a large effect on the long-term future. These areas can be particularly worth pursuing if you’re especially motivated by one of them. We cover this more in the section on ‘personal fit’ below.

Best opportunities: Which careers effectively contribute to solving these problems?

The most effective careers are those that address the most pressing bottlenecks to progress on the most pressing global problems.

For the same reasons we think it’s an advantage to work on neglected problems, we also think it’s an advantage to take neglected approaches to those problems. We discuss some of these approaches in this section.

Five categories of career to consider

Given our take on the world’s most pressing problems and the most pressing bottlenecks these issues face, we think the following five broad categories of career are a good place to start generating ideas if you have the flexibility to consider a new career path.

Many of the top problem areas we focus on are mainly constrained by a need for additional research, and we’ve argued that research seems like a high-impact path in general.

Following this path usually means pursuing graduate study in a relevant area where you have good personal fit, then aiming to do research relevant to a top problem area, or else supporting other researchers who are doing this.

Research is the most difficult to enter of the five categories, but it has big potential upsides, and in some disciplines, going to graduate school gives you useful career capital for the other four categories. This is one reason why, if you might be a good fit for a research career, it’s often a good path to start with (though we still usually recommend exploring other options for 1-2 years before starting a PhD, unless you’re highly confident you want to spend your career doing research in a particular area).

After your PhD, it’s hard to re-enter academia if you leave, so at this stage, if you’re still in doubt, it’s often best to continue within academia (although this is less true in certain disciplines, like machine learning, where much of the most cutting-edge research is done in industry). Eventually, however, it may well be best to do research in non-profits, corporations, governments, and think tanks instead of academia, since this can sometimes let you focus more on the most practically relevant issues and might suit you better.

You can also support the work of other researchers in a complementary role, such as project manager, executive assistant, fundraiser, or operations staff. We’ve argued these roles are often neglected, and therefore especially high-impact. It’s often useful to have graduate training in the relevant area before taking these roles.

Some especially relevant areas to study include (not in order and not an exhaustive list): machine learning, neuroscience, statistics, economics / international relations / security studies / political science / public policy, synthetic biology / bioengineering / genetic engineering, China studies, and decision psychology. (See more on the question of what to study.)

Read more about research careers.

Government is often the most important force in addressing pressing global problems, and there are many positions that seem to offer a good network and a great deal of influence relative to how competitive they are.

In this category, we usually recommend that people aim to develop expertise in an area relevant to one of our priority problems and then take any government or policy job where you can help to improve policy relevant to that problem (or another pressing global problem). Another option is to first develop policy relevant career capital (perhaps by working in a generalist policy job) and then use the skills and experience you’ve developed to work on a high-priority problem later in your career.

If you’re a U.S. citizen, working on U.S. federal policy can be particularly valuable because the U.S. federal government is so large and has so much influence over many of our priority problems. People whose career goal is to influence the U.S. federal government often switch between many different types of roles as they advance. In the U.S., many types of roles that can lead to a big impact on our priority problems fit into one of the following four categories. (We focus on the U.S. here because of its influence. We think working in policy can also be quite valuable in other countries, although the potential career paths look slightly different.)

  1. Working in the executive branch, such as the Defense Department, the State Department, intelligence agencies, or the White House. We don’t yet have a review of executive branch careers, but our article on U.S. AI policy careers also makes a more general case for the promise of working in the U.S. federal government. (See also our profile on the UK civil service.) Note, though, that in the U.S., top executive branch officials are often hired from outside the traditional career civil service. So even if your goal is to eventually be a top executive branch official, the best path might include spending much of your career in other types of roles, including those we describe next (but also including other roles, such as some in the private sector).

  2. Working as a Congressional staffer. Congressional staffers can have a lot of influence over legislation, especially if they work on a committee relevant to one of our priority problems. It’s possible to achieve seniority and influence as a Congressional staffer surprisingly quickly. Our impression, though, is that the very top staffers often have graduate degrees, sometimes from top law schools. From this path it’s also common to move into the executive branch, or to seek elected office.

  3. Working for a political campaign. We doubt that political campaign work is the highest-impact option in the long run, but if the candidate you work for wins, this can be a great way to get a high-impact staff position. For example, some of the top people who work on a winning presidential campaign eventually get high-impact positions in the White House or elsewhere in the executive branch. This is a high-risk strategy because it only pays off if your candidate wins and, even then, not everybody on the campaign staff will get influential jobs or jobs in the areas they care about.

  4. Influencer positions outside of government, covering policy research and advocacy. For example, you might work at a think tank or a company interested in a relevant policy area. In this kind of job, you can develop original proposals for policy improvements and/or help to set the agenda around a specific area of policy. You can also often build expertise and connections that let you switch into the executive branch, a campaign, or other policy positions. For many areas of technical policy, especially AI policy, we’d particularly like to emphasise jobs in industry. Working at a top company in an industry can sometimes be the best career capital for policy positions relevant to that industry. In machine learning in particular, some of the best policy research is being done at industry labs, like OpenAI’s and DeepMind’s. Journalists can also be very influential, but our impression is that there is not as clear a path from working as a journalist to getting other policy jobs.

In the UK, the options are similar. One difference is that there is more separation between political careers and careers in the civil service (which is the equivalent of the executive branch). A second difference is that the U.K. Ministry of Defence has less power in government than the U.S. Defense Department does. This means that roles outside of national security are comparatively more influential in the U.K. than in the U.S. Read more in our profiles on UK civil service careers and UK party political careers. (Both are unfortunately somewhat out of date but still provide useful information).

People also often start policy careers by doing graduate studies in an area that’s relevant to the type of policy they want to work on. In the US, it’s also common to enter from law school, a master of public policy, or a career in business.

Some especially relevant areas of policy expertise to gain and work within include: technology policy; security studies; international relations, especially China-West relations; and public health with a focus on pandemics and bioterrorism.

There are many government positions that require a wide range of skill types, so there should be some options in this category for nearly everyone. For instance, think tank roles involve more analytical skills (though more applied than the pure research pathway), while more political positions require relatively good social skills. Some positions are very intense and competitive, while many government positions offer reasonable work-life balance and some don’t have very tough entry conditions.

Although we suspect many non-profits don’t have much impact, there are still many great non-profits addressing pressing global issues, and they’re sometimes constrained by a lack of talent, which can make them a high-impact option.

One major advantage of non-profits is that they can tackle the issues that get most neglected by other actors, such as addressing market failures, carrying out research that doesn’t earn academic prestige, or doing political advocacy on behalf of disempowered groups such as animals or future generations.

To focus on this category, start by making a list of non-profits that address the top problem areas, have a large-scale solution to that problem, and are well run. Then, consider any job where you might have great personal fit.

The top non-profits in an area are often very difficult to enter, but you can always expand your search to consider a wider range of organisations. These roles also cover a wide variety of skills, including outreach, management, operations, research, and others.

We list some organisations to consider on our job board, which includes some top picks as well as an expanded list at the bottom. Read more about working at effective non-profits in our full career review (which is unfortunately somewhat out of date).

If you already have a strong existing skill set, is there a way to apply that to one of the key problems?

If there’s any option in which you might excel, it’s usually worth considering, both for the potential impact and especially for the career capital; excellence in one field can often give you opportunities in others.

This is even more likely if you’re part of a community that’s coordinating or working in a small field. Communities tend to need a small number of experts covering each of their main bases.

For instance, anthropology isn’t the field we’d most often recommend someone learn, but it turned out that during the Ebola crisis, anthropologists played a vital role, since they understood how burial practices might affect transmission and how to change them. So, the biorisk community needs at least a few people with anthropology expertise.

This means that if you have an existing skill set that covers a base for a community within a top area, it can be a promising option, even if it’s obscure.

However, there are limits to what can be made relevant. We struggle to think of a way to apply a PhD in medieval portraiture directly to the top problem areas, so sometimes it will be better to retrain rather than apply an existing skill.

If you have an unusual skill set, it’s hard for us to give general advice online about how best to use it. Ideally, you can speak to experts in the problem areas you want to work on about how it might be applied. For the problems we focus on, we have some rough ideas about how particular skill sets can be applied here.

We think many of our readers can excel in roles in the four areas mentioned above, and we encourage you not to rule out these categories prematurely.

If you’re able to take a job where you earn more than you need, and you think none of the categories above are a great fit for you, we’d encourage you to consider earning to give. It’s also worth considering this option if you have an unusually good fit for a very high-earning career.

By donating to the most effective organisations in an area, just about anyone in a well paid job can have a substantial impact.

You may be able to take this a step further and ‘earn to give’ by aiming to earn more than you would have done otherwise and to donate some of this surplus effectively.

Not everyone wants to make a dramatic career change, or is well-suited to the narrow range of jobs that have the most impact on the most pressing global problems. However, by donating, anyone can support these top priorities, ‘convert’ their labour into labour working on the most pressing issues, and have a much bigger impact.

This can allow you to pursue your preferred career, while still contributing to pressing areas that require a specialised skill set like biosecurity or global priorities research.

For those who are an especially good fit with a higher-earning career (compared to the other paths), earning to give can be their highest-impact option. For instance, people who were earning to give provided early funding for many organisations we now think are high-impact, and some of those organisations could not have existed without this funding (including us!).

We list some of the highest-earning jobs available in a separate article, and for those with quantitative skills, we especially highlight quantitative trading. However, you can earn to give in any job that pays you more than you need to live comfortably.

When earning to give, it’s also important to pick a job with good personal fit, that doesn’t cause significant harm, and that builds career capital, particularly if you might want to transition into other high-impact options later on.

Considering both income and career capital leads us to favour jobs in high-performing organisations where you can develop skills that are useful in one of the other four categories, such as management or operations. Tech startups with 20-100 employees are often a good place to consider. Management consulting is another option.

Read more about earning to give.

Note that we think these categories are a good place to start, but they’re certainly not the right fit for everyone, especially if you have lots of experience or well-developed skills in another area.

See an outline of the reasoning behind these categories.

Our priority paths

Below are some more specific options that are among the most promising paths we know of right now. Many of them are difficult to enter; you may need to start by investing in your skills for several years, there may be relatively few positions available, and some require difficult-to-obtain credentials, such as a PhD from a top school. However, if you have the potential to excel in any of these paths, we encourage you to seriously consider it, as it may be one of your highest-impact options.

As we’ve argued, the next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including the possibility of catastrophic risks.

To manage these risks, one need is technical research into the design of safe AI systems (including the “alignment problem”), which we cover later. But in addition to the technical problems, there are many other important questions to address. These can be roughly categorised into the three key challenges of transformative AI strategy:

  • Ensuring broad sharing of the benefits from developing powerful AI systems, as opposed to letting AI’s development harm humanity or unduly concentrate power.
  • Avoiding exacerbating military competition or conflict caused by increasingly powerful AI systems.
  • Ensuring that the groups that develop AI are working together to develop and implement safety features.

To overcome these challenges, we need a community of experts who understand the intersection of modern AI systems and policy, and work together to mitigate long-term risks and ensure humanity reaps the benefits of advanced AI. These experts would broadly carry out two overlapping activities: (i) research – to develop strategy and policy proposals, and (ii) implementation – working together to put policy into practice.

Ultimately, we see these issues as just as important as the technical ones, but they are currently more neglected. Many of the top academic centres and AI companies have started to hire researchers working on technical AI safety, and there’s perhaps a community of 20-50 full-time researchers focused on the issue. However, only a handful of researchers are focused on strategic issues or working in AI policy with a long-term perspective.

Note that there is already a significant amount of work being done on nearer-term issues in AI policy, such as the regulation of self-driving cars. What’s neglected is work on issues that are likely to arise as AI systems become substantially more powerful than those in existence today — so-called “transformative AI” — such as the three non-technical challenges outlined above.

Some examples of top jobs to work towards long-term in this path include the following, which fit a variety of skill types:

How to enter

In the first few years of this path, you’d focus on learning more about the issues and how government works, as well as meeting key people in the field, and doing research, rather than pushing for a specific proposal. AI policy and strategy is a deeply complicated area, and it’s easy to make things worse by accident (e.g. see the Unilateralist’s Curse).

Some common early career steps include:

  1. Relevant graduate study. Some especially useful fields include international relations, strategic studies, machine learning, economics, law, public policy, and political science. Our top recommendation right now is machine learning if you can get into a top 10 school in computer science. Otherwise, our top recommendation tends to be: (a) law school if you can get into Yale or Harvard, (b) international relations if you want to focus on research, and (c) strategic studies if you want to focus on implementation. However, the best choice for you will also depend heavily on your personal fit and the particular schools you get into.

  2. Working at a top AI company, especially DeepMind and OpenAI.

  3. Any general entry-level government and policy positions (as listed earlier), which let you gain expertise and connections, such as think tank internships, being a researcher or staffer for a politician, joining a campaign, and government leadership schemes.

This field is at a very early stage of development, which creates multiple challenges. For one, the key questions have not been formalised, which creates a need for “disentanglement research” to enable other researchers to get traction. For another, there is a lack of mentors and positions, which can make it hard for people to break into the area.

Until recently, it’s been very hard to enter this path as a researcher unless you’re able to become one of the top approximately 30 people in the field relatively quickly. While mentors and open positions are still scarce, some top organisations have recently recruited junior and mid-career staff to serve as research assistants, analysts, and fellows. Our guess is that obtaining a research position will remain very competitive but positions will continue to gradually open up. On the other hand, the field is still small enough for top researchers to make an especially big contribution by doing field-founding research.

If you’re not able to land a research position now, then you can either (i) continue to build up expertise and contribute to research when the field is more developed, or (ii) focus more on the policy positions, which could absorb hundreds of people.

Most of the first steps on this path also offer widely useful career capital. For instance, depending on the subarea you start in, you could often switch into other areas of policy, the application of AI to other social problems, operations, or earning to give. So, the risks of starting down this path if you may want to switch later are not too high.

Since this is one of our top priority paths, we have a specialist advisor, Niel Bowerman, who focuses on finding and helping people who want to enter it. He is especially focused on roles aimed at improving US AI public policy. If you would like advice, get in touch here.

Could this be a good fit for you?

One key question is whether you have a reasonable chance of getting some of the top jobs listed earlier.

The government and political positions require people with a well-rounded skill set, the ability to meet lots of people and maintain relationships, and the patience to work with a slow-moving bureaucracy. It’s also ideal if you’re a US citizen who might be able to get security clearance, and if you don’t have an unconventional past that could create problems should you choose to work in politically sensitive roles.

The more research-focused positions typically require the ability to get into a top 10 grad school in a relevant area and a deep interest in the issues. For instance, when you read about the issues, do you get ideas for new approaches to them? Read more about predicting fit in research.

Turning to other factors, you should only enter this path if you’re convinced of the importance of long-term AI safety. This path also requires making controversial decisions under huge uncertainty, so it’s important to have excellent judgement, caution, and a willingness to work with others; otherwise, it would be easy to have an unintended negative impact. This is hard to judge, but you can get some information early on by seeing how well you’re able to work with others in the field.

However, if you can succeed in this area, then you have the opportunity to make a significant contribution to what might well be the most important issue of the next century.

Key further reading

Other reading

As we’ve argued, the next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including the possibility of existential risks.

Besides strategy and policy work discussed above, another key way to limit these risks is research into the technical challenges raised by powerful AI systems, such as the alignment problem. In short, how do we design powerful AI systems so they’ll do what we want, and not have unintended consequences?

This field of research has started to take off, and there are now major academic centres and AI labs where you can work on these issues, such as MILA in Montreal, FHI at Oxford, CHAI at Berkeley, DeepMind in London and OpenAI in San Francisco. We’ve advised over 100 people on this path, with several already working at the above institutions. The Machine Intelligence Research Institute, in Berkeley, has been working in this area for a long time and has an unconventional perspective and research agenda relative to the other labs.

There is plenty of funding available for talented researchers, including academic grants, and philanthropic donations from major grantmakers like the Open Philanthropy Project. It’s also possible to get funding for your PhD programme. The main need of the field is more people capable of using this funding to carry out the research.

In this path, the aim is to get a position at one of the top AI safety research centres, either in industry, nonprofits or academia, and then try to work on the most pressing questions, with the eventual aim of becoming a research lead overseeing safety research.

Broadly, AI safety technical positions can be divided into (i) research and (ii) engineering. Researchers direct the research programme. Engineers create the systems and do the analysis needed to carry out the research. Although engineers have less influence over the high-level research goals, it can still be important for them to be concerned about safety: they’ll better understand the ultimate goals of the research (and so prioritise better), be more motivated, shift the culture towards safety, and use the career capital they gain to benefit other safety projects in the future. This means that engineering can be a good alternative for those who don’t want to be a research scientist.

It can also be useful to have people who understand and are concerned by AI safety in AI research teams that aren’t directly focused on AI safety to help promote concern for safety in general, so this is another backup option. This is especially true if you can end up in a management position with some influence over the organisation’s priorities.

How to enter

The first step on this path is usually to pursue a PhD in machine learning at a good school. It’s possible to enter without a PhD, but it’s close to a requirement in research roles at the academic centres and DeepMind, which represent a large fraction of the best positions. A PhD in machine learning also opens up options in AI policy, applied AI and earning to give, so this path has good backup options.

However, if you want to pursue engineering over research, then the PhD is not necessary. Instead, you can do a masters programme or train up in industry.

It’s also possible to enter this path from neuroscience, especially computational neuroscience, so if you already have a background in that area you may not have to return to study. Recently, opportunities have also opened up for social scientists to contribute to AI safety (we plan to cover this in future work).

Could this be a good fit for you?

  • Might you have a shot of getting into a top 5 graduate school in machine learning? This is a reasonable proxy for whether you can get a job at a top AI research centre, though it’s not a requirement. Needless to say, these places are very academically demanding.
  • Are you convinced of the importance of long-term AI safety?
  • Are you a software or machine learning engineer who’s been able to get jobs at FAANG and other competitive companies? You may be able to train to enter a research position, or otherwise take an engineering position.
  • Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, sometimes have ideas for questions to look into, and can’t resist pursuing them? Read more about how to tell if you’re a good fit for working in research.

Further reading

The Open Philanthropy Project takes an effective altruism approach to advising philanthropists on where to give. It likely has over $10bn of committed funds from Dustin Moskovitz and Cari Tuna, and is aiming to advise other philanthropists. There are other “angel” donors in the community who could give in the $1-$10m range per year, but aren’t at their maximum level of giving. And we know a number of other billionaires who are interested in effective altruism and might want to start new foundations.

One reason why these donors don’t give more is a lack of concrete “shovel-ready” opportunities. This is partly due to a lack of qualified leaders able to run projects in the top problem areas (especially to found nonprofits working on research, policy and community building). But another reason is a lack of grantmakers able to vet these opportunities or generate new projects themselves. A randomly chosen new project in this area likely has little expected impact — since there’s some chance it helps and some chance it makes the situation worse — so it’s vital to have grantmakers able to distinguish good projects from the bad.

The skill of grantmaking involves being able to survey the opportunities available in an area, and come to reasonable judgements about their likelihood of success, and probable impact if they do succeed. Grantmakers also need to build a good network, both so they can identify opportunities early, and identify groups with good judgement and the right intentions.

In addition, grantmakers need to get into a position where they’re trusted by the major funders, and this requires having some kind of relevant track record.

All of this makes it incredibly difficult to become a grantmaker, especially early in your career. The Open Philanthropy Project’s last hiring round for research analysts had hundreds of applicants, only twelve of whom got in-person trials, and only five of whom received job offers.

However, the high stakes involved mean that if you are able to get into one of these positions, then you can have a huge impact. A small scale grantmaker might advise on where several million dollars of donations are given each year. Meanwhile, a grantmaker at a large foundation — typically called a “programme officer” or “programme director” — might oversee $5-$40m of grants per year.

Given the current situation, it’s likely that a significant fraction of the money a grantmaker oversees wouldn’t have been donated otherwise for at least several years, so they get good projects started sooner and may increase the total amount of giving by creating capacity before potential donors lose interest.

What’s more, by having more talented grantmakers, the money can be donated more effectively. If you can improve the effectiveness of $10m per year to a top problem area by 10%, that’s equivalent to donating about $1m yourself. This often seems achievable because the grantmakers have significant influence over where the funds go and there’s a lot of potential to do more detailed research than what currently exists.

Overall, we think top grantmakers working in effective altruism can create value equal to millions or even tens of millions of dollars per year in donations to top problem areas, making it one of the highest-impact positions right now.

Finally, these positions offer good career capital because you’ll make lots of important connections within the top problem areas. This creates opportunities to exit into direct work. Another exit option is government and policy. Or you could switch into operations or management, and have an impact by enabling other grantmakers to be more effective.

Related paths

One related path is to work as a grantmaker in a foundation that doesn’t explicitly identify with effective altruism, in order to help bring in an effective altruism perspective. The advantage of this path is that it might be easier to add value. However, the downside is that most foundations are not willing to change their focus areas, and we think choice of focus area is the most important decision. Existing foundations also often require significant experience in the area, and sometimes it’s not possible to work from junior positions up to programme officer.

Another related path is philanthropic advising. One advantage of this path is that you can pursue it part-time to build a track record. This also means you could combine it with earning to give and donating your own money, or with advocacy positions that might let you meet potential philanthropists. We’ve seen several people give informal advice to philanthropists, or be given regranting funds to donate on their behalf.

A third related path is to work at a government agency that funds relevant research, such as IARPA, DARPA and NIH. Grantmakers in these agencies often oversee even larger pools of funding, but you’ll face more restrictions on where it can go. They also often require a PhD.

How to enter

One entry route is to take a junior position at one of these foundations (e.g. research analyst), then work your way up to being a grantmaker. We think the best place to work if you’re taking this path is the Open Philanthropy Project (disclaimer: we’ve received grants from them). Founders Pledge also has a philanthropic advising team, though it has less of a track record and is less focused on long-term focused problem areas. You could also consider research positions at other effective altruism organisations — wherever will let you build a track record of this kind of research (e.g. the Future of Humanity Institute).

Another key step is to build up a track record of grantmaking. You could start by writing up your thoughts about where to give your own money on the effective altruism forum. From there, it might be possible to start doing part-time philanthropic advising, and then work up to joining a foundation or having a regranting pool (funds given to you by another donor to allocate).

A third option is to pursue work in the problem area where you want to make grants, perhaps in nonprofits, policy or research, in order to build up expertise and connections in the area. This is the usual route into grantmaking roles. For instance, the Open Philanthropy Project hired Lewis Bollard to work on factory farming grants after he was Policy Advisor & International Liaison to the CEO at The Humane Society of the United States, one of the leading organisations in the area.

Could this be a good fit for you?

  • This position requires a well-rounded skill set. You need to be analytical, but also able to meet and build relationships with lots of people in your problem area of focus.
  • Like AI policy, it requires excellent judgement.
  • Some indications of potential: Do you sometimes have ideas for grants others haven’t thought of, or only came to support later? Do you think you could persuade a major funder of a new donation opportunity? Can you clearly explain the reasons you hold particular views, and their biggest weaknesses? Could you develop expertise and strong relationships with the most important actors in a top problem area? Could you go to graduate school in a relevant area at a top 20 school? (This isn’t needed, but is an indication of analytical ability.)
  • Note that working as support or research staff for an effective grantmaker is also high-impact, so that’s a good backup option.

Further reading

We think building the effective altruism community is a promising way to build capacity to address pressing global problems in the future. The community seems able to grow a great deal, and its members are willing to switch to whichever issues turn out to be most urgent, so this approach is robust to changes in priorities.

We realise this seems self-promotional, since we ourselves run an effective altruism organisation. However, if we didn’t recommend what we ourselves do, then we’d be contradicting ourselves. We also wouldn’t want everyone to work on this area, since then we’d only build a community and never do anything. But we think recommending it as one path among about ten makes sense.

A key way to contribute to building the effective altruism community is to take a job at one of the organisations in the community — see a list of organisations. Many of these organisations have a solid track record, are growing and have significant funding, so a big bottleneck is finding staff who are a good fit. However, many are also management constrained, which raises the bar for getting hired — your application may need to demonstrate a high likelihood of excelling with relatively little supervision almost immediately as well as meeting the other requirements.

The level of competition for jobs at these organisations varies substantially depending on the position and organisation, but it’s extremely difficult to get the most competitive roles, especially if you’re early in your career and don’t have much experience after college.

Because it’s so hard for organisations to identify people with the characteristics needed to fill these positions, we think you won’t be significantly replaceable if you get one, and these roles can be very high-impact.

An additional staff member can often grow the community by several additional people each year, achieving a multiplier on their effort. These organisations don’t only grow the community, they also do other useful work, such as research and fundraising.

These roles let you develop expertise in effective altruism, top global problem areas and running startup nonprofits. They put you at the heart of the effective altruism movement and long-term future communities, letting you build a great network there. Many of the organisations also put a lot of emphasis on personal development.

Role types

There are a variety of roles available, broadly categorised into the following:

  • Management, operations and administration – e.g. hiring staff, setting strategy, creating internal processes, setting budgets.
  • Research and advice – e.g. developing the ideas of effective altruism, writing and talking about them.
  • Outreach, marketing and community – e.g. running social media accounts, content marketing, running promotional accounts, visual design, moderating forums, market research, responding to the media, helping people in the community.
  • Systems and engineering – e.g. web engineering, data capture and analysis, web design, creating internal tools.

How to enter

To enter these roles, you can apply directly to the organisations. Organisations often hire people who are already involved in the community, both because commitment to and knowledge of the community are a requirement for many jobs, and because it’s easier to evaluate a candidate if you already know their work. This means that if you want to aim towards these positions, the most important step is to start meeting people in the community and doing small projects to build your reputation (e.g. writing on the forum, volunteering at EA Global, starting a local group, or doing freelance consulting for an organisation).

Because these positions are scarce, almost nobody can count on getting one. You should therefore make sure you’re acquiring career capital that would be relevant to other paths (e.g. a full-time job or graduate school) at the same time as you’re building your reputation within effective altruism. It’s usually not a good idea to commit to this path, or build plans that depend on getting one of these jobs, before you’ve received an offer. We list more advice in our full profile.

If you want to get a job that puts you in a better position to enter these roles in the future, then do something that lets you develop a concrete skill that’s relevant to one of the role types listed above. Well-run tech startups with 10-100 people are often a good place to learn these skills in a similar context. Alternatively, some effective altruism organisations frequently hire people from our other priority paths. Excelling in any of those paths is a great way to better position yourself for a job at an effective altruism organisation and could be equally or more impactful on its own.

Could this be a good fit for you?

Whether you might be a good fit in part depends on the type of role you’re going for. However, there are some common characteristics the organisations typically look for:

  • A track record that demonstrates intelligence and an ability to work hard.
  • Evidence of deep interest in effective altruism — for some roles you need to be happy to talk about it much of the day. This breaks down into a focus on social impact and a scientific mindset, as well as knowledge of the community.
  • Flexibility and independence – the organisations are relatively small, so staff need to be happy to work on lots of different projects with less structure.
  • It’s not a requirement, but it’s becoming difficult to get most of these jobs without several years of experience in a relevant skill.

Further reading

Within working at effective altruism organisations, we’d like to especially highlight roles in operations.

Operations staff enable everyone else in the organisation to focus on their core tasks and maximise their productivity. They’re especially crucial for enabling an organisation to increase in scale. The best operations staff do this by setting up scalable systems, rather than addressing individual tasks. This could involve creating a financial system to make budgets and track expenses; creating an HR system to hire staff, onboard them and give feedback; or being an executive assistant to a project founder. Some operations staff manage a significant part of their organisation, and several have worked their way up to CEO or COO. Organisations need operations staff as they grow. It’s hard to rapidly hire and retain staff without an excellent operations team.

Core effective altruism organisations and the broader global catastrophic risk community have often struggled to find qualified candidates with experience in operations and several job searches stayed open for long periods of time. Recently, several organisations have had successful hiring rounds, so there are now fewer open positions, especially senior positions, than a year ago. That said, several organisations are currently hiring operations staff (see our job board) and we expect additional positions to open up over time.

Many expect that it must be easy to hire operations staff from outside of the community, but we don’t think that’s true, especially while the relevant organisations are small. Since operations staff are so central to an organisation, they need to understand the mission deeply and be embedded in the team.

Instead, we think these roles likely get unfairly neglected, because they are less glamorous and their impact is more indirect.

Within academic settings, it’s also because university regulations often prevent the institutes from paying as much as these roles merit.

This can make operations roles an especially good way to contribute to these top problem areas.

Operations roles also let you gain a skill set that’s needed in basically every organisation — especially by managers — and so is highly transferable. In addition, in this path you can gain the other career capital benefits from working at effective altruism organisations we mentioned earlier.

How to enter

To get these roles, it’s often possible to enter directly and work your way up.

If you’re not able to land a full-time job right away, one common way people have entered these roles before is by volunteering to help run an EA Global conference.

Otherwise, you could seek operations experience in top organisations in the private sector, such as growing a well-run tech startup, while also meeting people in the relevant communities.

Could this be a good fit for you?

In addition to the traits we list for working at effective altruism organisations, operations staff are especially good at optimising systems, anticipating problems, making plans, paying attention to detail, staying calm in the face of urgent tasks, prioritising among a large number of tasks, and communicating with the rest of the team.

We often find that people who are good at operations don’t realise that they are. Some clues you might be a good fit include:

  • Have you gone above and beyond in a previous job? For instance, did you notice a problem and take steps to fix it without anyone telling you what to do?
  • Have you been able to quickly learn new skills, and apply that knowledge? For instance, did you succeed in your studies or quickly pick up hobbies in your spare time?
  • Can you work independently? For example, have you pushed ahead with side projects that you weren’t required to do for work or school?
  • Have you run a significant event, such as a conference, exhibition or concert? This involves some similar skills, and people have often had the opportunity during university.

Further reading

We’ve argued that one of the most important priorities is working out what the priorities should be. There’s a huge amount that’s not known about how to do the most good, and although this is one of the most important questions someone could ask, it has received little systematic study.

The study of which actions do the most good is especially neglected if you take a long-term perspective, in which what matters most is the effects of our actions on future generations. This position has only recently been explored, and we know little about its practical implications. Given this, we could easily see our current perspective on global priorities shifting with more research, so these questions have practical significance.

The study of how to help others is also especially neglected from a high-level perspective. People have done significant work on questions like “how can we reduce climate change?”, but much less on questions like “how pressing is climate change compared to health?” and “what methods should we use to make that comparison?”. It’s these high-level questions we especially want to see addressed.

We call the study of high-level questions about how best to help others “global priorities research”. It’s primarily a combination of moral philosophy and economics, but it also draws on decision theory, decision-making psychology, moral psychology, and a wide variety of other disciplines, especially those concerning technology and public policy. You can see a research agenda produced by the Global Priorities Institute at Oxford.

We’d like to see global priorities research turned into a flourishing field, both within and outside of academia.

To make this happen, perhaps the biggest need right now is to find more researchers able to make progress on the key questions of the field. There is already enough funding available to hire more people who can demonstrate potential in the area (though funding is a greater constraint here than in AI safety). Demonstrating potential is hard, especially because the field is even more nascent than AI safety, resulting in a lack of mentorship. However, if you are able to enter, then it’s extremely high-impact — you might help define a whole new discipline.

Another bottleneck to progress on global priorities research might be operations staff, as discussed earlier, so that’s another option to consider if you want to work on this issue.

Role types

You can broadly pursue this path either in academia or nonprofits.

We think building this field within academia is a vital goal, because if it becomes accepted there, then it will attract the attention of hundreds of other researchers.

The only major academic centre currently focused on this research is the Global Priorities Institute at Oxford (GPI), so if you want to pursue this path as an academic, that’s one of the top places to work. One problem is that GPI has only a couple of open positions, and you’d usually need a top academic background in philosophy or economics to get one (e.g. doing well in a PhD from a top 10 school in your subject is a good sign). Positions are especially competitive in philosophy.

A new organisation called the Forethought Foundation for Global Priorities Research offers scholarships and fellowships to students in global priorities research as well as research grants for established scholars. We expect you’ll need a top background in philosophy or economics to get one of these, too (e.g. an undergrad who could get into a top 10 philosophy PhD programme or a top 10-20 economics PhD programme, a grad student attending one of those programmes, or a post-doc or academic who graduated from – or teaches at – one of those programmes).

That said, we expect that other centres will be established over the coming years. In the meantime, you could try to build expertise. For instance, doing an economics PhD (and postdoc) opens up lots of other options, so is a reasonable path to pursue even if you’re not sure that global priorities research is a good fit for you. It’s also important to have academics doing global priorities research (and potentially collaborating with GPI) at other universities.

One downside of academia, however, is that you need to work on topics that are publishable, and these are often not those that are most relevant to real decisions. This means it’s also important to have researchers working elsewhere on more practical questions.

We think the leading applied centre working on this research is the Open Philanthropy Project. One other advantage of working there is that your findings will directly feed into how billions of dollars are spent (disclaimer: we have received grants from them). However, you can also pursue this research at other effective altruism organisations. 80,000 Hours, for instance, does a form of applied global priorities research focused on career strategy.

How to enter?

The best entry route to the academic end of the field is to do a PhD in economics or philosophy. This is both because PhDs provide useful training and because they’re required for most academic positions. Currently, economists are in shorter supply than philosophers, and economics also gives you better back-up options, so it’s preferable if you have the choice.

It’s also possible to enter from other disciplines. A number of people in the field have backgrounds in maths, computer science and physics. Psychology is perhaps the next most relevant subject, especially the areas around decision-making psychology and moral psychology. The field also crosses into AI and emerging technology strategy, so the options we listed in the earlier sections are also relevant, as well as knowledge of relevant areas of science. Finally, as the field develops there will be more demand for people with a policy focus, who might have studied political science, international relations, or security studies. In general, this is a position where wide general knowledge is more useful than most.

With the non-academic positions, a PhD isn’t necessary, but you do ideally need to find a way to demonstrate potential in this kind of research. It’s useful to develop skills in clear writing and basic quantitative analysis. Sometimes people enter the non-academic roles directly from undergrad if they’re sufficiently talented.

Could this be a good fit for you?

  • Might you be able to get into a PhD in economics or philosophy at a top 10 school? (This isn’t to say the qualification is required; it’s just that if you’d be able to succeed on such a path, that’s an indicator of ability.)
  • Do you have excellent judgement? By this we mean: can you take on messy, ill-defined questions, and come up with reasonable assessments about them? This isn’t required in all roles, but it’s especially useful right now given the nascent state of the field and the nature of the questions ultimately being addressed.
  • Do you have general knowledge or an interest in a wide range of academic disciplines?
  • Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, and sometimes have ideas for questions to look into? Are you able to work independently for many days at a time? Are you able to stick with or lead a research project over many years? Read more about predicting success in research.


We’ve argued that pandemics pose a global catastrophic risk, and this risk could increase as advances in bioengineering make it possible to create engineered pandemics that are more deadly than naturally occurring ones.

There is already a significant community working on pandemic prevention, and there are many ways to contribute to this field. However, most of the existing work (though this is starting to change) focuses on conventional, naturally occurring pandemics, whereas we think what matters most are catastrophic risks, especially those that might end civilisation. These are much more likely to be deliberately caused, so they involve issues more naturally covered by the defence and bioterrorism communities than by public health, and they call for a different set of interventions. What’s more, we’ve estimated that most of the past funding for work on bioterrorism has focused on agents like anthrax, which can’t spread from person to person and so doesn’t pose a catastrophic risk.

This means that despite significant existing work on pandemic prevention, “global catastrophic biological risks” remain highly neglected.

We rate biorisk as a less pressing issue than AI safety, mainly because we think biorisks are less likely to be existential, and AI seems more likely to play a key role in shaping the long-term future in other ways. However, it can easily be your top option if you have a comparative advantage in this path (e.g. a background in medicine).

To mitigate these risks, what’s most needed at a high-level is people able to develop strategy and policy proposals, and work with governments to aid their implementation. We call this path “biorisk strategy and policy research”. We can further divide this into an academic and a government path, as follows.

The main line of defence against these risks is government, but there are currently few people concerned with existential risks working there. So, we think it’s valuable to build up a community of experts in relevant areas of national government and intergovernmental organisations, such as the US Centers for Disease Control, European Centre for Disease Prevention and Control, FBI’s Weapons of Mass Destruction Directorate and the World Health Organization. You could also work in relevant think tanks, such as the Center for Health Security or nonprofits like the Nuclear Threat Initiative. These experts can help to implement better policies when they’re known, and they can also help to improve policy proposals in the meantime. We list key organisations in our profile (which is unfortunately out of date).

Another option is to work in academia. This involves developing a relevant area of expertise, such as synthetic biology, genetics, public health, epidemiology, international relations, security studies and political science. Note that it’s possible, and at times beneficial, to start by studying a quantitative subject (sometimes even to graduate level), and then switch into biology later. Quantitative skills are in-demand in biology and give you better back-up options.

Once you’ve completed training, you could: do research on directly useful technical questions, such as how to create broad-spectrum diagnostics or rapidly deploy vaccines; do research on strategic questions, such as how dangerous technologies should be controlled; or you could advise policy-makers and other groups on the issues. A top research centre to aim to work at is the Center for International Security and Cooperation at Stanford.

As with AI strategy, global catastrophic biological risk is still a nascent field. This again can make it hard to contribute, since we don’t yet know which research questions are most important, and there is often a shortage of mentorship.

This means that there’s an especially pressing need for more “field building” or “disentanglement” research, with the aim of defining the field. If you might be able to do this kind of work, then your contribution is especially valuable since you can unlock the efforts of other researchers. The main home for most of this kind of research with a long-term focus right now is the Future of Humanity Institute in Oxford. There’s also a significant need for mentors who can help the next generation enter.

If you’re not able to contribute to the strategic research right now, then you can (i) try to identify more straightforward research questions that are relevant, (ii) work in more conventional biorisk organisations to build up expertise, (iii) focus on policy positions with the aim of building a community and expertise, (iv) become an expert on a relevant area of biology.

One advantage of biorisk is that many of the top positions seem somewhat less competitive than in AI technical safety work, because they don’t require world-class quantitative skills.

Besides pandemic risks, we’re also interested in how to safely manage the introduction of other potentially transformative discoveries in biology, such as genetic engineering, which could be used to fundamentally alter human characteristics and values, or anti-ageing research. We see these issues as somewhat less pressing and neglected than engineered pandemics, but they provide another reason to develop expertise in these kinds of areas.

Often the way to enter this path is to pursue relevant graduate studies (such as in the subjects listed above) because this takes you along the academic path, and is also helpful in the policy path, where many positions require graduate study. Alternatively, you can try to directly enter relevant jobs in government, international organisations and nonprofits, and build expertise on the job.

The main backup option from this path depends on what expertise you have, but one direction is other options in policy — it’s usually possible to switch your focus within a policy career. You could also work on adjacent research questions, such as those relevant to global health.

Unfortunately, overall the backup options often seem a little worse than AI safety, because qualifications in biology don’t open up as many options (many more people get biology PhDs than there are academic positions, leading to a high rate of drop out). This is another reason why we list this path lower. That said, you could still exit into the biotechnology or health industries to earn to give, and ultimately a wide range of other paths, such as nonprofit careers.

Could this be a good fit for you?

  • Are you deeply convinced of the importance of reducing extinction risks?
  • You don’t need quantitative skills as strong as those required for similar roles in AI safety.
  • In research, might you be capable of getting a PhD from a top 30 school in one of these areas? This isn’t required but is a good indicator. Read more about predicting success in research.
  • If focused on field building research, can you take on messy, ill-defined questions, and come up with reasonable assessments about them?
  • Are you able to be discreet about sensitive information concerning biodefence?
  • If focused on policy, might you be capable of getting and being satisfied in a relevant position in government? In policy, it’s useful to have relatively stronger social skills, such as being happy to speak to people all day, and being able to maintain a network. Policy careers also require patience to work with large bureaucracies and sometimes public scrutiny.
  • Do you already have experience in a relevant area of biology?

Key reading

  • Our profile on biorisk (unfortunately out of date, with an update in progress), which includes a list of recommended organisations and research centres, plus links to relevant podcasts.

China will play a major role in solving many of the biggest challenges of the next century, including the governance of emerging technologies and global catastrophic risks; it’s already one of the most influential countries in AI development and deployment.

However, it often seems like there’s a lack of understanding and coordination between China and the West. For instance, even today, 3 times as many Americans study French as study Chinese. For this reason, we’d like to see more people (both Chinese and non-Chinese) develop a deep understanding of the intersection of effective altruism and China, and help to coordinate between the two countries.

In particular, we want people to learn about the aspects of China most relevant to our top problem areas, which means topics like artificial intelligence, international relations, pandemic response, bioengineering, political science, and so on. China is also crucial in improving farm animal welfare, though we currently rate this as a lower priority.

More concretely, this could mean options like:

  • Graduate study in a relevant area, such as machine learning or synthetic biology; economics, international relations, or security studies with a focus on China or emerging technology; or Chinese language, history and politics.
  • If possible, a prestigious China-based fellowship, such as the Schwarzman Scholarship programme or Yenching Scholars, is a great option.
  • Research at a think tank or academic institute focused on these topics.
  • Work at a Chinese technology company.
  • If you are a foreigner, learn Chinese in China, or find another option that lets you live there.
  • If you are Chinese, work at an international effective altruism organisation.
  • Work at an influential philanthropic foundation.

Once you have this expertise, you could aim to contribute to AI and biorisk strategy & policy questions that involve China. You could also advise and assist international organisations that want to coordinate with Chinese organisations. You might also directly work for Chinese organisations that are concerned with these problem areas.

Note that we’re not in favour of promoting effective altruism in China, working in the government, or attempting to fundraise from Chinese philanthropists. This could easily backfire if the message was poorly framed or if its intent was misperceived in China.

Rather, the aim of this path is to learn more about China, and then aim to improve cooperation between international and Chinese groups. If you are considering doing outreach in China, get in touch and we can introduce you to people who can help you navigate the downsides.

To help with this priority path, we work with a part-time advisor who’s a specialist in China. You can get help from them here.

Could this be a good fit for you?

  • Do you already have knowledge of China? If not, could you see yourself becoming interested in Chinese politics, economy, culture, and so on, and also being involved in the effective altruism community?
  • Compared to other options on the list, this path requires more of a humanities skill set (e.g. understanding international relations and cross-cultural differences) rather than a scientific one.
  • Otherwise the skill set required is fairly similar to the AI strategy and policy path earlier.

If you can find a job with good fit, it usually seems possible to make a larger contribution to the problem areas we highlight by working directly in them rather than earning to give — so in this sense they’re more “talent constrained” than “funding constrained”.

However, additional funding is still useful, so earning to give is still a high-impact option. Earning to give can also be your top option if you have fit with an unusually well-paid career. One path we’ve seen work especially well is quant trading in hedge funds and proprietary firms.

Quant trading means using algorithms to trade the financial markets for profit. We think, for the most part, that the direct impact of the work is likely neutral. But it might be the highest-paid career path out there.

Compensation starts around $100k-$300k per year, and can reach $1m per year within a couple of years at the best firms, and eventually over $10m per year if you make partner. We estimate that if you can enter the path at a good firm, the expected earnings are around $1m per year over a career. This is similar to being a tech startup founder, except that for startup founders who make it into a top accelerator, the deal is more like a 10% chance of getting $30m in 3-10 years, so the startup option involves much more risk and a major delay.
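As a rough sketch, this comparison can be put in expected-value terms, using the ballpark figures from the paragraph above (these are illustrative estimates, not precise data):

```python
# Rough expected-value comparison of quant trading vs founding a startup,
# using the ballpark figures quoted above (illustrative, not precise data).

# Quant trading at a good firm: roughly $1m per year in expectation.
quant_per_year = 1_000_000

# Startup founder who makes it into a top accelerator:
# ~10% chance of ~$30m, arriving after roughly 3-10 years.
p_success = 0.10
payoff = 30_000_000
startup_expected_total = p_success * payoff  # about $3m in expectation

# Spread over the 3-10 year timeline, that's roughly $300k-$1m per year:
# similar to quant trading in expectation, but with far more variance and delay.
startup_per_year = [startup_expected_total / years for years in (3, 10)]
print(startup_per_year)
```

This is why the text describes the two paths as similar in expectation, with the startup route carrying much more risk and a major delay.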

This means that the earnings in quant trading can enable huge donations to effective charities relatively quickly. We know several people in this path, such as Sam, who donated six figures within their first couple of years of work. This is enough to set you up as an “angel donor” in the community — you can try to identify promising new projects that larger donors can later scale up.

Many people also find the work surprisingly engaging. Creating a winning strategy is an intellectual challenge, but unlike academia, you get to work closely with a team and receive rapid feedback on how you’re performing. These firms often have “geeky” cultures, making them quite unlike the finance stereotype. The hours are often a reasonable 45-60h per week, rather than the 60-80h reported in entry-level investment banking.

These jobs are prestigious due to their earnings, and you can learn useful technical skills, as well as transferable skills like teamwork. The main downside on career capital is that these positions do not especially help you make connections, since you’ll mainly only work with others in your firm.

The other main downside of these jobs is that they’re highly competitive. There are more comments on fit below.

You may also find it surprisingly hard to set aside the time to figure out where it’s best to give, especially if you’re hoping to fund early-stage projects working in unusual areas.

Role types and top firms

There are two broad pathways:

  • Traders – who develop and oversee the strategies.
  • Engineers – who create the systems to collect data, implement trades, and track performance.

It varies from firm to firm, but typically engineers are paid less, though their earnings are more stable.

It’s also important to know that salaries vary significantly by firm. There are perhaps only a handful of firms where it’s possible to progress to seven figures relatively quickly without a graduate degree, including Jane Street, Hudson River Trading, DE Shaw, and maybe several others. Some other firms offer earnings on this level but require a PhD, and pay elsewhere is often significantly lower, though still often several hundred thousand dollars per year.

In addition, note that quant trading positions are very different from “quant” jobs at other financial companies, such as investment banks or non-quantitative investment firms. Usually “quants” are “middle office” staff who provide analysis to the “front office” staff who oversee the key decisions. This makes them more stable but significantly lower paid, and sometimes less prestigious. Such firms also typically have a less geek-friendly culture.

Could this be a good fit for you?

  • One indication of potential is that you’d be capable of finishing in the top half of the class in mathematics, theoretical physics or computer science at a top 30 school at the undergraduate level.
  • Another option is to enter based on your programming skills as an engineer. This might be possible if you’re someone who would be able to get a top software engineering job at a tech company such as Google.
  • Besides intelligence, the firms also look for good judgement and rapid decision-making skills. One indication of these is that you like playing strategy games or poker.
  • Would you be capable of reliably giving a large fraction of your income to charity?
  • Compared to academia, you need relatively better communication and teamwork skills, since you’ll work closely with your colleagues hour-by-hour in potentially stressful situations.

Governments and other important institutions frequently have to make complex, high-stakes decisions based on the judgement calls of just a handful of people. There’s reason to believe that human judgements can be flawed in a number of ways, but can be substantially improved using more systematic processes and techniques. Improving the quality of decision-making in important institutions could improve our ability to solve almost all other problems.

We’d like to help form a new community of researchers and practitioners who develop and implement these techniques. We’re especially keen to help people who want to work on the areas of policy most relevant to global catastrophic risks, such as nuclear security, AI, and biorisk. Note that we’re not talking about the popular “nudge” work in behavioural sciences, which is focused on making small improvements to personal behaviours. Rather, we’re interested in neglected work relevant to high-stakes decisions like whether to go to war, such as Tetlock’s research into forecasting.

This path divides into two main options: (i) developing better decision-making techniques, and (ii) getting them applied in important organisations, especially those relevant to catastrophic risks.

To enter, the first step is to gain relevant expertise. This is most naturally done by getting a PhD in behavioural or decision science. However, you could also take a more practical route by starting your career in government and policy, and learning about the science on the side.

Once you have the expertise, you can either try to make progress on key research questions in the field, or work with an important organisation to improve their processes. We can introduce you to people working on this.

As with global priorities research, this is a nascent field that could become much bigger, and now is an exciting time to enter.

Could this be a good fit for you?

  • Might you be able to get a job in a relevant area of government? Do you know how to influence choices within a bureaucracy?
  • On the research path, might you be able to get into a psychology PhD at a top 30 school?
  • Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, and sometimes have ideas for questions to look into? Are you able to work independently for many days at a time? Are you able to stick with or lead a research project over many years? Read more about predicting success in research.

What to do if you already have well-developed skills?

If you already have well-developed skills, your best bet is probably to work out how best to apply those skills to the most pressing issues, which may involve doing something different from the priority paths above. It’s difficult for us to give sufficiently specialised advice through online content, and you likely need to speak to experts in these areas. We have a few ideas for some common categories of skill.

Coming up with more options

There are many ways to have a big impact beyond the paths listed above. We cover some other ways to help in our profiles on individual problems. We also cover some tips for generating more options here and in our material on decision making. If you have an idea for making a major contribution that doesn’t fit into the categories above, we highly encourage you to investigate the opportunity, talk to relevant experts, and determine if it’s a good personal fit. We need people coming up with creative ways to contribute.

If you are unable to work on key bottlenecks for a top problem directly, your best option may be to invest in yourself – build ‘career capital’ – so that you can work on or donate to these areas in the future, which we cover later. You could also financially support the best organisations in the area.

Career strategy: other important career priorities

Personal fit: how to increase your chances of finding a career path where you’ll excel

Once you’ve identified some promising options, the next most important step is to find the option where you have the best chance of excelling over the course of your career — what we call your ‘personal fit’.

The productivity of different people within a field varies greatly, sometimes by orders of magnitude. This means that, to the extent that it’s predictable, ‘personal fit’ is one of the most important factors in determining the expected impact of your career. Excelling at your work also boosts your career capital, giving you more options in the future.

Because personal fit is so important, we would almost never encourage you to pursue a career you dislike — you’d be unlikely to persist and therefore to excel in the long-term.

You can think of your degree of personal fit with an option as a multiplier on how promising that option is in general, such that total impact = (impact of option) x (personal fit). This means that if you’re an especially good fit for a path, it can be worth taking over another option that’s higher-impact on average. Likewise, it can also be worth pursuing a path that’s only an average fit if it’s unusually high-impact. The ideal, though, is to have both.
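As a toy illustration of this multiplier model (all the numbers here are hypothetical, chosen only to show the trade-off):

```python
# Toy illustration of: total impact = (impact of option) x (personal fit).
# The numbers are hypothetical and only illustrate the trade-off.

def total_impact(option_impact: float, personal_fit: float) -> float:
    """Personal fit acts as a multiplier on an option's average impact."""
    return option_impact * personal_fit

# Path A: higher-impact on average, but only an average fit for you.
path_a = total_impact(option_impact=10, personal_fit=1.0)  # 10.0

# Path B: lower-impact on average, but an unusually strong fit.
path_b = total_impact(option_impact=6, personal_fit=2.0)  # 12.0

# The better-fitting path wins despite its lower average impact.
assert path_b > path_a
```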

Academic studies and common sense both suggest that while it’s possible to predict people’s performance in a path to some degree, it’s a difficult endeavour, and your gut assessment is usually not accurate. We think the best way to get an accurate read is probably to try small experiments (like part-time work, night study or an internship), look objectively at your track record, and ask experts to assess your chances.

Exploration: focus on the option with the most long-term upside, but have a backup plan

If you remain uncertain about what to do after exhausting the lowest cost ways to get more information (and you probably will), then one good approach is to go down the path that would be highest-impact if you were to perform toward the top end of your expectations (as long as there’s no significant risk that you’ll unintentionally do harm). This is because the upside and downside from being optimistic are usually somewhat asymmetrical: in the good case, you have a big impact over many years; if it doesn’t work out, you can switch to something else. This is one reason it’s important not to be underconfident. The costs of spending a few months trying out a path are low relative to the huge benefits if it turns out well.

If you take this strategy, then it’s important to make a back-up plan. This could include another nearby career to switch into – your ‘Plan B’ – and should definitely include a way to support yourself if your top options all go wrong – your ‘Plan Z’. We have more detail on how to do this here. If you’re not able to pursue your highest upside option right now because it’s too risky, that’s completely fine: choose one of the less risky paths on your shortlist, or focus on building your career capital instead.

Accidental harm: the risk of doing more harm than good and how to reduce it

To have a big impact, we encourage people to work on neglected, high-stakes issues. Unfortunately these fields are often ‘fragile’ — it’s easy to accidentally make the situation worse rather than better.

For example, it’s hard to overcome first impressions, so if you run a publicity campaign about a new topic, you can make it harder to run a different campaign in the future. This is an example of ‘lock in’. If you later discover a more effective way to frame the problem, then the first campaign may have had a negative impact.

These risks mean that before you focus on entering the option with the most upside, it’s important to try to mitigate the chance of having a major negative impact. Building a deep understanding of your field, finding good mentors, and coordinating well with others can help — we cover more strategies here.

Career capital: deciding how much to prioritise investing in yourself relative to having an impact right away

Another strategic consideration is ‘career capital’ — the skills, connections, credentials, and finances that can help you have a bigger impact in the future. People can often take steps early in their career to become far more productive, and making these investments can let you achieve much more over a 40-year period.

However, if you decide to focus on gaining career capital, it’s crucial to gain career capital that will best help you address the most pressing global problems in the future. Prestigious corporate jobs — such as consulting — will help you develop transferable career capital, but given our view of global priorities, it seems even more promising to take a focused approach. For instance, you could do relevant graduate study, work in government or policy, or work with any great team (in any sector) that lets you learn the skills that are most needed in pressing areas (e.g. management and operations in small organisations; research; working independently).

Moreover, often the best way to get the most relevant career capital is just to start working in an important area with a great mentor or to try out some low-risk means of having an impact and learn as you go. We currently think you should only take a detour to gain career capital if it’ll put you significantly ahead of where you would be if you worked on a top problem directly.

These more focused approaches involve specialising. We think this is often the right call because many of the highest-impact options require it. However, specialising also involves taking on some risk: new research or changing facts on the ground could make your specialisation less promising after you’ve invested in training. This is another reason why we always recommend that you have a Plan B. Fortunately, most of the opportunities we recommend to gain specialist career capital also open up good backup options in nearby fields.

However, if you are very uncertain about which option is best long-term, and can’t see a way to reduce that uncertainty significantly in the next couple of years, then another strategy is to try to gain transferable career capital that’s useful in many different paths, and decide where to focus later. For instance, if you can learn how to be a good manager, that skill is needed by most organisations in most areas, so it will likely open up good options even if your views about what’s best change dramatically.

Coordination: how to work with a community to have a greater impact

You’re likely to have much more impact if you work as part of a community, since communities enable their members to specialise, work together and achieve economies of scale, thereby having a greater collective impact. Being part of a community can also help you stay motivated and find support when the going gets tough.

Within a community, you should consider which priorities are most neglected by the community, and where you have the greatest comparative advantage. This can mean focusing on issues that seem less pressing, or that you’re less well-suited to, if they’re especially neglected by the rest of the community.

You may find it useful to get involved with the effective altruism community, which we helped to set up. There are also many other great communities to get involved with, such as those focused on particular global problems like the biosecurity and climate change communities.

Personal wellbeing: How to handle conflicts between your own happiness and making a difference

We think there’s less tension between the two than is often supposed. Finding work you excel at and that helps others is fulfilling, and many of our readers say they’ve become happier by doing this. Moreover, you’ll have a greater impact if you find work you enjoy and that fits with your personal life, because you’ll have a greater chance of excelling in the long term and of getting other people on board. So enjoying your work and having an impact are mutually supportive goals.

This said, sometimes conflicts do arise, and how to handle them is a difficult issue.

We may live in a uniquely important time in history with the opportunity to influence the development of new technologies that could impact the long-term future and reduce existential risks. We also have many other opportunities to help others a great deal with comparatively little cost to ourselves. This motivates some of our readers to make impartially doing good the main focus of their careers. Some philosophers, such as Peter Singer, have argued that we have a moral obligation to do so.

However, most of our readers see ‘making a difference’ in the way we’ve outlined as just one of several important career goals, which may include other moral aims, supporting a family, or fulfilling their personal projects.

Whatever your views on this topic, we think it’s important to take seriously the risk of burning out if one engages in too much self-sacrifice. Even if your only career goal was to make a difference, you should aim to contribute over your entire 40-year career, and to inspire others to do more good too. This means it’s important to cultivate self-compassion and take a path in which you’ll be motivated for the long term. You can read some more advice on taking care of yourself here.

Take action: How to write your career plan and choose your next step

This section describes a process for making use of everything we’ve covered and deciding what concrete steps to take next.

We advise people to use a systematic approach to important career decisions and to take the time to investigate many options. This is partly because career decisions are high stakes, so they deserve the time investment: if you can make your 80,000-hour career just 1% better, it would in theory be worth spending up to about 800 hours working out how to do that.
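The arithmetic behind that figure is simple. As a rough sketch, assuming the standard working pattern that gives the guide its name (40 hours per week, 50 weeks per year, over a 40-year career):

```python
# Back-of-the-envelope calculation from the guide:
# a career is roughly 80,000 working hours, so a 1% improvement
# is worth up to ~800 hours spent working out how to achieve it.
career_hours = 40 * 50 * 40  # hours/week * weeks/year * years = 80,000
improvement = 0.01           # making your career just 1% better

worthwhile_research_hours = career_hours * improvement
print(worthwhile_research_hours)  # 800.0
```

This is an upper bound in expectation, of course; the point is only that even small improvements to a decision this large justify substantial investigation time.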

It’s also because gut decisions work best in stable domains where you have lots of opportunity to practice and get clear feedback to train your intuition. Having a positive impact in the modern world is not like this, and so while our gut intuitions are a useful input into certain aspects of our decisions, we shouldn’t just choose the option these intuitions say is best overall.

In a nutshell, the process we propose for systematically thinking through your career plan, summing up everything covered so far, is as follows:

  1. Identify some of the world’s most pressing problems.
  2. Identify the key bottlenecks within those problems to come up with long-term career paths that might most help (such as our priority paths).
  3. Create a shortlist of the paths that are plausibly a good fit for you.
  4. Work out whether you’ll advance faster in any of these paths by gaining career capital first. If you’re very unsure what’s best long term, then also consider focusing on career capital that will be useful in many different potentially pressing problem areas in the future. Add to or modify your shortlist of options appropriately.
  5. Investigate and test out the options on your shortlist to find those with the best personal fit.
  6. Take the next step in the path that seems highest-impact in the long term if things go very (but not unrealistically) well. But have a backup plan. Your Plan B consists of good alternatives that are relatively easy to switch to from your top option if it doesn’t work out. Plan Z is how you’ll support yourself if it all goes wrong.
  7. Review your career about once a year, but otherwise focus on succeeding in your current position.

You can find a step-by-step process for comparing your options in our full article on making career decisions.

When it comes to taking action, we have collected some advice on how to get a job and how to be more successful within your path.

How else can we help?

80,000 Hours is an independent nonprofit that is here to help you have a larger impact with your career. We’re building a community of people who focus their careers on addressing some of the world’s greatest challenges. We hope you’ll join us.

Enter a high-impact career

If you’re interested in working in one of our ‘priority paths’, or have other ideas about how to have a big impact on one of our top problem areas, our advising team might be able to speak with you one-on-one. They can help you consider your options, find job and funding opportunities, and make connections with others working on these issues.

Speak to our team

If you’re ready to apply for jobs, or just want more ideas, see our job board. We currently list over 200 positions and update the list twice a month.

View job listings

Find ways to meet people interested in applying these ideas on our community page.

Learn more

Our podcast features unusually in-depth conversations about the world’s most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths – from academics and activists to entrepreneurs and policymakers – to analyse the case for working on these issues and suggest concrete ways to help.

Listen to the podcast

Get monthly updates featuring our latest research, high-impact career opportunities, and new articles in this series.

See also: