80,000 Hours is a non-profit that provides research and support to help people switch into careers that effectively tackle the world’s most pressing problems.
This page is a summary of the most useful things we’ve learned so far. We start with the big picture and end with practical next steps, covering:
The ethical and big picture views that inform our advice.
Some neglected global problems we think are especially pressing to work on.
Some ideas for career paths that especially help to address those problems.
A list of career strategy considerations that are useful regardless of what problem you focus on.
A process for planning your career in light of your strengths and personal priorities.
You’ll probably need at least an hour to read this page in full. If you want to start applying these ideas to your own career, you may find it useful to set aside a weekend to reflect and explore some of the materials we link to.
We know this is a lot of time to invest, but if just one idea we cover helps you significantly increase the impact of your career, it’ll be worth it. A typical career lasts for 80,000 hours, so if you can make your career just 1% better, then in theory it would be worth spending up to 800 hours working out how to do that. Hopefully, we’ll be a lot faster.
In our view, ‘having a positive impact’ is about promoting welfare over the long term. However, we’re highly uncertain about this definition, so in practice we aim to consider other perspectives.
We started by trying to identify the most pressing global problems to work on based on this definition. These are not necessarily the world’s biggest or most well-known problems, but rather those where an additional person can make the biggest long-term difference on the margin. Right now, we think these involve shaping the development of emerging technologies and reducing the risk of global catastrophes that could have permanent negative consequences — i.e. ‘existential risks’. Nuclear war and runaway climate change are the two most well-known; however, we think that, all else equal, additional people can have even more impact by working to reduce the risk of large-scale pandemics and to positively shape the development of advanced artificial intelligence (which you can read more about here), mainly because these areas are so much more neglected, which has left many of the most promising interventions untried.
We currently think some of the most promising career paths involve addressing these problems through work in carefully chosen areas of research, government policy and non-profits. For those with the flexibility to pursue a new career path, we especially recommend considering whether one of our ‘priority paths’ might be a good fit in the long term, perhaps after spending several years gaining relevant skills.
Those who already have specialised skills or are further along in their careers can also consider applying those skills to the most pressing global problems — this will often involve taking approaches that differ from our general suggestions. People in almost any job can also contribute to whichever problem they think is highest priority by financially supporting effective organisations in that area. Our research is ongoing and there are many excellent paths we’ve not written about — we discuss other promising options below.
Once you have ideas about which career paths might be best for you, aim to identify the option where you have the best chance of excelling in the long term, i.e. where you have the best ‘personal fit’. It’s hard to predict your personal fit, so look for low-cost ways to test different paths, such as speaking to people and doing side projects.
Once you’ve done these tests, look for a next step to pursue for a couple of years. The ideal next step is one that has a good balance of:
Specialist career capital — how much does it advance you towards your top long-term options?
Transferable career capital and back-up options — does it open up other promising options?
Information value — does it let you test out a potentially excellent but uncertain long-term option?
Personal fit — where do you have the highest chances of excelling? (& relative fit if coordinating with a community)
Immediate impact — will it let you contribute to a pressing problem right away?
Personal priorities — does it fit with the rest of your life and risk-tolerance?
If you’re in your first couple of jobs and/or very uncertain about your long-term options, focus more on testing out paths and building career capital; later in your career, focus more on immediate impact. To combine all of these considerations, you can use our step-by-step process.
When you take whatever next step you choose, remember that career decisions are not fixed — we recommend that you review your career every 1-2 years. Between reviews, focus on excelling in your current path.
These are a lot of claims to digest (and indeed, they might be wrong). Read on to see where they come from and how to apply them.
Career decisions are highly individual, so there are many questions we can’t easily help with. We aim to focus on career questions that are more widely relevant. To answer the questions we tackle, we draw on:
Expert interviews — you can listen to over 60 examples of these interviews on our podcast, and also see the results of some anonymous interviews, and our annual survey. Our first pass on many questions involves synthesising what several experts say on the question.
Academic literature — we aim to draw on academic literature where it’s available, such as the literature on existential risks, the distribution of productivity in different fields, and how to make good decisions.
Advising our readers — we’ve given one-on-one advice to over 1,000 people since 2011, many of whom we’re still in touch with. This gives us a sense of what mistakes are common, as well as some indication of how decisions play out over time.
It’s not usually possible to confidently answer the kinds of questions we tackle. However, we do our best to synthesise the sources of evidence we draw on, using our research principles. We also aim to highlight the key aspects of our reasoning so that readers can make their own assessments.
The topics we tackle are complex, and in the past we’ve noticed people interpreting our advice in ways we didn’t intend. Here are some points to bear in mind before diving into our advice.
We want our writing to inform people’s views, but only in proportion to the likelihood that we are correct. Given that, it’s important to keep in mind that we’ve been wrong before and we’ll be wrong again. We’ve spent a lot of time thinking about these issues, but we still have a lot to learn. Our positions often change every couple of years, and due to the nature of the questions we take on we’re rarely more than about 70% confident in our answers. You should try to strike a balance between what we think and your previous position, depending on the strength of the arguments and how much you already knew about the topic.
It’s extremely difficult to give universally applicable career advice. The most important issue here is that which option is best for you depends a huge amount on your skills and circumstances, and the specific details of the opportunity. So, while we might highlight path A more than path B, the best opportunities in path B will often be better than the typical opportunities in path A. Moreover, your personal circumstances could easily mean the best option for you is in path B. So, treat the specific options we mention as an aid for compiling your personal list of career ideas. Also keep in mind that many issues in career choice are a matter of balancing opposing considerations — for instance, if we say people put too much emphasis on X, there will usually be some readers who put too little emphasis on X, and need to hear the opposite advice.
Our advice is aimed at a particular audience: namely, people with college degrees who want to make having a positive impact (from an impartial perspective) the main focus of their careers, especially in the problem areas we most recommend; who live in rich, (for the most part) English-speaking countries; and who want to take an analytical approach to their career. At any given moment many people need to focus on taking care of their own lives, and we don’t think anyone should feel guilty if that’s the case. Certain parts of our advice, such as our list of priority paths, are especially aimed at people who are unusually high-achieving. In general, the more similar you are to our core audience, the more useful the advice will be, although much of what we write is useful to anyone who wants to make a difference.
Treat increasing your impact as just one long-term goal. Working on the world’s most pressing problems is among the most worthwhile challenges we can imagine, though it can also be overwhelming. Bear in mind, 80,000 Hours is about how to maximise your impact, and this can make it sound like we don’t care about other goals. However, the team sees increasing our impact as just one important goal among several in our lives, which means we often do things that aren’t optimal from the perspective of doing good. Indeed, even if your only goal was to have an impact, to do that it’s vital to do something you can stick with for years — and this means taking care of your personal priorities as well.
Aim for steady progress rather than perfection. It can take a long time to work out how to factor the ideas we cover into your own plans and find the right opportunity. Along the way, because there’s always more that could be done, it can be easy to become overly perfectionist, get caught up with comparisons, and never be satisfied. When using our advice, the aim is not to find the (unknowable & unattainable) perfect option, or have more impact than other people. Rather, focus on making steady progress towards the best career that’s practical for you given your constraints.
Older articles on the site are less likely to reflect our current views, so check their publication date. We also aim to keep this key ideas page up-to-date as the canonical source of advice, and to flag older articles when our views have changed, though we have hundreds of pages of content, so we don’t catch everything.
The big picture
What does it mean to ‘make a difference’?
At 80,000 Hours, we help people find careers that more effectively ‘make a difference’, ‘do good’, or ‘have a positive impact’ on a large scale.
In this section, we lay out what we mean by these phrases. In brief, we think ‘making a difference’ is about promoting welfare in the long term. However, we’re highly uncertain about this definition, so in practice aim to consider other perspectives.
Our advice doesn’t entirely depend on the philosophical views we gesture at in this section, but we think it’s important to be transparent about them, as these broad ideas have informed our advice since we started in 2011. Alternatively, skip ahead to our practical suggestions.
When it comes to making a difference, we aim to be impartial in the sense that we give equal weight to everyone’s interests. This means we strive to avoid privileging anyone’s interests based on arbitrary factors such as their race, gender, or nationality, as well as where or even when they live. In addition, we think that the interests of many non-human animals should be given significant weight, although we’re unsure of the exact amount.
Striving toward impartiality can lead to more radical conclusions than it might at first seem. One is that, because we aim to avoid privileging anyone’s interests based on when they were born, we always consider the impact our actions have on all future generations, not just their effects on people alive today or people who will be born soon.
What does it mean to further someone’s interests from this perspective? Our tentative hypothesis is that we should aim to increase the expected welfare of others by as much as possible, i.e. to enable more individuals to live flourishing lives that are healthy, happy, fulfilled, and free from avoidable suffering.1
Putting impartiality and welfarism together means that, roughly, how much positive difference an action makes depends on how much it increases people’s welfare, and how many people are helped, no matter when or where they live.
We think most would agree that it’s important to have some concern for the welfare of others impartially considered. As individuals, however, we all have further goals besides making a difference in this way: we care about our friends, personal projects, other moral aims, and so on. These goals are important too, but we’d like to see more people approach their careers as an opportunity to do good from an impartial perspective, and this is the focus of our research and advice.
It sounds obvious that if you can help two people rather than one, and the cost to you is the same, it’s better to help the two people. However, when applied to the world today, this obvious-sounding idea leads to surprising conclusions.
Modern levels of wealth and technology have given some members of the present generation potentially enormous abilities to help others, while our common-sense views of what it means to be a good person have not caught up with this change. This means that some actions that are widely considered to ‘do good’ have dramatically greater positive consequences than others.
For instance, the UK’s National Health Service and many US government agencies are willing to spend over $30,000 to give someone an extra year of healthy life.2 This is a fantastic use of resources by ordinary standards.
However, research by GiveWell has found that it’s possible to give an infant a year of healthy life by donating around $100 to one of the most cost-effective global health charities, such as Against Malaria Foundation. This is about 0.33% as much.3 This suggests that at least in terms of improving health, one career working somewhere like AMF might achieve as much as 300 careers focused on one typical way of doing good in a rich country.
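The arithmetic behind that comparison is straightforward. As a sketch (the dollar figures are the approximate ones quoted above, not precise estimates):

```python
# Rough cost-effectiveness comparison, using the approximate figures above.
cost_per_healthy_year_typical = 30_000   # USD: what e.g. the NHS will spend
cost_per_healthy_year_top_charity = 100  # USD: GiveWell's rough estimate

ratio = cost_per_healthy_year_typical / cost_per_healthy_year_top_charity
print(f"top charity: ~{ratio:.0f}x as cost-effective per healthy year")
print(f"i.e. about {100 / ratio:.2f}% of the typical cost")
```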
These kinds of health programmes offer such a good opportunity to do good that even the most prominent aid sceptics have offered few arguments against them.
When we’ve looked at other ways of doing good, we’ve found this pattern replicated: the most effective ways to help usually seem much better than what’s typical. We’ll give more examples later.
This wide spread of outcomes is also probably what we should expect to find.
A trait like height follows a ‘normal’ distribution: the tallest people are only about 50% taller than the average. A trait like income, however, follows a ‘fat tail’ distribution: the highest-earning people earn thousands of times more than average. This concept has also been popularised as the ‘80/20 principle’, or as the idea that outcomes are dominated by ‘black swan events’.
We expect that the distribution of the expected impact of different actions is more likely to be like income than height.
One reason for this is that if the outcomes of different actions are caused by the multiplication of several factors — as they often are — then the value of different actions will end up as a fat tailed distribution (technically, a log-normal distribution).
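A toy simulation illustrates this point (the factor ranges below are made up for illustration, not an empirical model): multiplying several independent factors produces a heavy-tailed, roughly log-normal distribution, in which the best outcome dwarfs the median.

```python
import random
import statistics

random.seed(0)

def multiplicative_outcome(n_factors=10):
    # Model the value of an action as the product of several independent
    # factors, each varying by a modest amount around 1.
    value = 1.0
    for _ in range(n_factors):
        value *= random.uniform(0.5, 2.0)
    return value

outcomes = sorted(multiplicative_outcome() for _ in range(100_000))
median = statistics.median(outcomes)
best = outcomes[-1]
# The best outcome is typically hundreds of times the median:
# the signature of a fat-tailed distribution.
print(f"median: {median:.1f}, best: {best:.1f}, ratio: {best / median:.0f}x")
```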
This means that if your aim is to impartially help others, your key concern shouldn’t just be to ‘make a difference’ — it should be to identify the very best ways to help among the options open to you. This insight is the key idea behind the ‘effective altruism’ movement, which we helped to found in 2012 (see an academic introduction and a popular introduction).
This idea might sound obvious, but when we surveyed people on how much more effective they think the best charities are compared to the median, a typical response was that the best charities are about 66% more effective; in fact, the difference seems to be more like 10,000%. So the gap between the best and typical ways of helping is much larger than ordinarily supposed.
This means the top priority in doing good is to get the big picture right, and not to sweat the details. If you can do better on the big decisions, then you can have hundreds of times more impact than what’s typical, which is an amazing feat. That’s what the rest of this series is about.
If someone offered you a free beer, but told you there’s a 1% chance it contains poison, you wouldn’t drink it. This is because the badness of drinking poison far outweighs the goodness of getting a free beer, so even though you’re very unlikely to end up with poison, it’s not worth drinking.
We all make decisions about risk and uncertainty like this in our daily lives, but when trying to do good we face even greater uncertainty about the ultimate effects of our actions, especially if we consider all their long-term effects.
The best we can do is to consider all of the good and bad things that could result from our actions, and weigh them by the probability that they will actually happen. So the possibility of dying in a car crash will be regarded as twice as bad if it’s twice as likely.
The technical term for adding up all the good and bad potential consequences of an action, weighted by their probability, is the ‘expected value’ of the action. We aim to seek the actions with the highest expected value, according to the values listed above.
This doesn’t mean that in practice we should attempt to make explicit estimates of the probabilities and values of different outcomes. This is sometimes helpful, but it’s often better to look for useful heuristics, find robust arguments, use gut intuitions, or even make a snap decision to save time. Practical decision-making should use whatever methods work. Expected value theory instead describes the ideal we’re trying to approximate.
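As a concrete illustration, expected value is just a probability-weighted sum. The beer example above might look like this, with made-up numbers for the payoffs:

```python
def expected_value(outcomes):
    """Sum of probability × value over all possible outcomes.

    `outcomes` is a list of (probability, value) pairs whose
    probabilities sum to 1; values may be negative.
    """
    return sum(p * v for p, v in outcomes)

# 99% chance of enjoying a free beer (+1), 1% chance of poison (-10,000).
beer = [(0.99, 1), (0.01, -10_000)]
print(expected_value(beer))  # about -99: strongly negative, so decline
```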
Whether the expected value approach is the ideal way to make all decisions is debated, but these debates mainly focus on highly unusual circumstances, such as when dealing with tiny probabilities of extreme amounts of value, as in Pascal’s Wager. It’s widely accepted as a description of how an ideal agent would weigh outcomes in most circumstances. (The biggest challenge to this view is perhaps the ‘complex’ problem of cluelessness, which we discuss in the further reading.)
We also believe that the consequences of an action should be evaluated relative to what would have happened if the action were not taken — the counterfactual. For instance, if you rush to give first aid to someone injured on the street, then your ‘tangible’ impact is whatever help you deliver to the injured person. However, your counterfactual impact depends on what would have happened if you hadn’t acted. For instance, if there was someone else in the crowd better qualified to give first aid, then by stepping in, you might have made the situation worse rather than better. So, it’s possible to have a negative counterfactual impact while having a positive tangible impact.
This means that thoroughly considering counterfactuals can have a significant effect on which actions seem best. For instance, considering counterfactuals shows that it’s easier to set back a field than it first seems: if you start a new project, you also need to consider whether you might thereby prevent someone else from setting up an even better version of it. It also makes it look more important to work in neglected areas where someone else won’t do what you would have done anyway.
The average species lasts for 1-10 million years. Homo sapiens has been around for only 200,000 years. With the benefit of technology and foresight, civilisation could, in principle, survive for at least as long as the earth is habitable — probably around a billion years more.
Given that we can’t rule out this possibility, this means that there will, in expectation, be a huge number of future generations. There could also be a much larger number of people in each future generation, and their lives could be much better than ours.
We think future generations clearly matter, and impartial concern most likely implies their interests matter as much as anyone’s.4
If we care about all the consequences of our actions, then what’s most important about our actions from an impartial perspective is their potential effects on these future generations.
If this reasoning is correct, it would imply that approaches to improving the world should be evaluated mainly in terms of their potential long-term impact, over thousands, millions, or even billions of years.
In other words, the question ‘how can I have a positive impact?’ should mostly be replaced with ‘how can I best make the very long-term future go well?’. These arguments and their implications are studied as part of an emerging school of thought called longtermism.
We feel relatively confident about this idea, but we’re not confident about what it implies in practice.
An obvious response to the above is that it’s so difficult to predict the very long-term effects of our actions that although these effects might be very important, we don’t know what they are. Instead, this response goes, we should focus on helping people in the short-term where we can have more confidence in their positive effects.
We agree it’s very hard to know the long-term effects of our actions; however, as discussed above, we think we should aim to use whatever evidence and theory is available to make the best possible estimates of the expected value of different actions. Moreover, because the expected number of future generations is so great, our actions only need to have non-negligible effects on them for these effects to dominate their expected value.
In practice, we think there are some actions that potentially have very long-term positive effects. For example, we can take steps to make it less likely that civilisation ends through a disaster like nuclear war, which would irreversibly deprive future generations of the chance to flourish. We cover other examples in the next section.
Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5,000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save around 28 future generations. If each generation contains ten billion people, that would be around 280 billion lives saved. If there’s a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.
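The calculation above can be checked directly. Note the implicit assumption of 100 years per generation, which is needed to get 5,000 expected generations from the stated figures:

```python
# All figures are the hypothetical ones from the text above.
p_survives = 0.05                 # chance civilisation lasts ten million years
years = 10_000_000
years_per_generation = 100        # implied by the text's figures
p_effort_succeeds = 0.55          # chance the concerted effort works
risk_reduction = 0.01             # extinction risk cut by 1 percentage point
people_per_generation = 10 * 10**9

expected_generations = p_survives * years / years_per_generation
generations_saved = expected_generations * p_effort_succeeds * risk_reduction
lives_saved = generations_saved * people_per_generation

print(f"{expected_generations:.0f} expected future generations")    # 5000
print(f"{generations_saved:.1f} generations saved in expectation")  # 27.5, i.e. roughly 28
print(f"{lives_saved / 1e9:.0f} billion lives saved")               # 275, roughly the 280 billion cited
```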
This aside, even if we’re not sure what actions would help today, longtermism would likely imply that our key focus should be on carrying out research to identify these actions, or otherwise making society better able to tackle long-term challenges.
In contrast, we don’t see much reason to expect that actions with good short-term effects will also be those that will be best from a long-term perspective.
One major reason for this is that future generations lack any economic or political power, which means we should expect their interests to be neglected by our current institutions. Within philanthropy, too, very little attention is paid to the interests of those who might live more than 100 years in the future. This all suggests that outstanding opportunities to help may remain untaken, and that it would be reasonable for society to allocate significantly more attention to these issues.
We remain unsure about many of these arguments, but overall we’re persuaded that focusing more on the very long-term effects of our actions is our best bet for now.
As we said, we think that the most important thing for us to focus on from an impartial perspective is increasing long-term welfare. However, we are not sure that this is the only thing that matters morally.
Many moral views that were widely held in the past are regarded as flawed or even abhorrent today. This suggests we should expect our own moral views to be flawed in ways that are difficult for us to recognise. What’s more, there is still significant moral disagreement within society, among contemporary moral philosophers, and, indeed, within the 80,000 Hours team. It’s also extremely difficult to know all the relevant effects of our actions, and grand projects to advance abstract ethical aims often go badly.
As a result, we think it’s important to be modest about our moral views, and in the rare cases where there’s a conflict, try very hard to avoid actions that seem seriously wrong from a common-sense perspective. This is both because such actions might be wrong in themselves, and because they seem likely to lead to worse long-term consequences.
More generally, we aim to uphold cooperative norms and to factor ‘moral uncertainty’ into our views. We do the latter by taking into account a variety of reasonable ethical perspectives, rather than simply acting in line with a single point of view.
For these reasons, we don’t exclusively seek to promote long-term welfare or take a purely ‘utilitarian’ approach to ethics. Rather, we do what we can to make everyone better off in ways that are as consistent as possible with common-sense ethics, and without sacrificing anything that might be of comparable moral importance.
Global priorities
What are the most pressing problems to work on?
Perhaps the most important decision you face
Now that we have a sense of what ‘making a difference’ means, we can ask how you can make a difference with your career most effectively.
We think the most important single factor determining the expected impact of your work is probably the issue you choose to focus on.
For example, you might choose to focus on climate change, education, technological development, or something else. We think it’s of paramount importance to choose carefully.
Although it’s very challenging to compare different global issues, roughly we think that the issues where your work will make the biggest difference are those that have the best overall combination of being comparatively
i. Neglected, ii. Important, and iii. Tractable.
Why work on issues that are comparatively neglected? The basic reason is that, at least among issues that are roughly similar in importance and tractability, it’s usually harder to have a big impact working on more established or popular issues, because there are probably already people working on the most promising interventions. For this reason, if you’re the 100th person working on a problem, your contribution is likely to make a much larger difference than if you’re the 10,000th.
How much larger? In our view, returns to more work diminish relatively quickly, and most likely ‘logarithmically’ — meaning that it matters a lot how neglected an area is.
From what we’ve seen, some global issues appear to be thousands of times more neglected than others of similar importance — i.e. they receive only a tiny fraction of the resources. This implies that if all else is held constant, work in some areas is thousands of times more effective than work in others.
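To make the ‘logarithmic returns’ claim concrete: if the total value produced by n people working on a problem grows like log(n), then the marginal contribution of the n-th person is roughly proportional to 1/n, so each doubling of the crowd halves what the next person adds. A toy calculation (the logarithmic form is a modelling assumption, as noted above):

```python
import math

def marginal_value(n):
    # With total value ~ log(n), the n-th person adds
    # log(n) - log(n - 1), which is approximately 1 / n.
    return math.log(n) - math.log(n - 1)

# The earlier comparison: the 100th vs the 10,000th person on a problem.
ratio = marginal_value(100) / marginal_value(10_000)
print(f"the 100th person adds ~{ratio:.0f}x as much as the 10,000th")  # ~100x
```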
Of course, all else isn’t always equal — sometimes issues are neglected because they’re not important or not tractable. But in fact we think there are a number of problems that are highly neglected despite being very important and reasonably tractable. We argue for this in our profiles of individual global problems.
If this is roughly right, then working on some issues is much higher impact than working on others — making choosing the right issue to focus on one of the most important decisions you face.
Putting a number on the scale of these differences is very challenging, but our best estimate is that an additional person working on one of the most pressing issues will (in expectation) have over 100 times as much impact as an additional person working on a typical issue.5
Just to be clear — in an ideal world there would be far more people working on every important social issue. However, as individuals, each of us only has one career, and we’ll all have far more impact if we focus on the issues that are the most pressing for us to work on right now.
Orienting your career around a new problem area is a big decision, and you don’t need to do it right away, especially if you’re early in your career when it’s important to focus on exploring and building skills. Generally, it’s worth doing some serious analysis before you make a big commitment to a particular path. In fact, if we were to draw only one lesson from all our research on high-impact careers, it’s that what issue you should work on in your career deserves a lot of thought.
Although we present our view of the world’s most pressing problems below, we know some readers won’t share the assumptions that went into the analysis (or might think we’re making some other kind of mistake). Thus we also encourage you to compare issues you might work on according to your own estimates — using our framework as a guide insofar as you find it useful.
Our current view of the world’s most pressing problems
Enormous complexity as well as evolving circumstances mean any prioritisation of global issues will be highly uncertain and subject to change. Nonetheless, our work over the years has led us to think that two broad categories of global issues are particularly pressing: successfully navigating emerging technologies, and research and capacity building for future work.
We discuss which specific issues within these categories we prioritise most highly below.
Navigating emerging technologies
In the 1950s, the large-scale production of nuclear weapons meant that, for the first time, a few world leaders gained the ability to kill hundreds of millions of people — and possibly many more if they triggered a nuclear winter, which would make it very difficult to grow crops for several years. Since then, the possibility of runaway climate change has joined the list of catastrophic risks facing humanity.
During the next century we may develop new transformative technologies, such as advanced artificial intelligence and synthetic biology, which could bring about a radically better future — but which may also pose grave risks.
We’ve become increasingly convinced that one of the most crucial issues facing the present generation is how to wisely navigate the development of new technologies in order to increase the chance of a positive future for everyone, and to help reduce the risk of global catastrophes — events that, if they happened, could lead to billions of deaths and may even threaten to permanently end human civilisation — one form of ‘existential risk’.
We think that right now, trying to reduce these risks is one of the most important things we can do.
This is because we think the rise of powerful new technologies means that the probability of this kind of catastrophe occurring in our lifetimes is too big to ignore, and that a catastrophe like this would clearly be among the worst things that could possibly happen. This is especially true if one takes the longtermist perspective we covered above, because extinction would also mean the loss of the potential welfare of all future generations.
Moreover, some of these risks are highly neglected. For instance, less than $50 million per year is devoted to the field of AI safety or work specifically targeting global catastrophic biorisks. By comparison, billions or trillions of dollars go into more familiar priorities, such as international development, preventing terrorism, poverty relief in rich countries, education, and technological development. This makes the former fields perhaps 1,000 times more neglected.6
This neglect suggests that a comparatively small number of additional people working on these risks could significantly reduce them. We suggest specific ways to help in the next section.
Research and capacity building
This said, we remain uncertain about this picture. Many of the ‘crucial considerations’ that led us to these priorities were only recently identified and written about. What’s more, there are significant arguments against the idea that reducing existential risk is the top priority from a longtermist perspective (see here and here). It also seems likely that we will learn of other ways to increase the probability of a positive long-term future and reduce the chance of widespread future suffering, some of which may seem more promising to address than the existential risks we currently focus on.
For these reasons, we also work to support those creating the new academic field of global priorities research, which draws on economics, philosophy and other disciplines to work out what’s most crucial for the long-term future.
In addition, we encourage people to work on ‘capacity-building’ measures that will help humanity manage future challenges. For instance, these measures could involve improving institutional decision making and building the ‘effective altruism’ community — a community of people aiming to work on the problems that turn out to be most pressing in the future, whatever those might be (this flexibility is why we helped to set up the effective altruism community in the first place).
Our overall prioritised list of issues
Given how little research there has been into prioritising global issues, we would not be surprised if our views changed significantly in the future.
That said, our rough guess at which specific areas are most pressing for more people to work on right now is as follows.
Click the links to see our full discussion of each area and why we prioritise it as highly as we do.
Together, these categories make up what we call our “priority problem areas” or “priority problems.”
Other potentially promising issues to work on
The following are some more issues that seem like they might be very promising to work on, but which we haven't investigated very much, so we aren't as confident in them as we are in the areas listed above. Focusing on one of these might be especially promising if you already have relevant experience or you are particularly motivated by the issue.
What if you want to focus more on concrete, measurable impact in the near term? In that case, we think some of the best issues to work on are improving people's health in poorer countries and reducing suffering from factory farming, as these issues affect many lives and receive less attention than they deserve.
We also take seriously the chance of discovering ‘Cause X’ — an issue we don’t yet prioritise (or even know about) but which will turn out to be as or even more pressing than the issues we discuss here.
Read our key articles on comparing global priorities (and picking between them):
Best opportunities
Which careers effectively contribute to solving these problems?
The most effective careers are those that address the most pressing bottlenecks to progress on the most pressing global problems.
For the same reasons we think it’s an advantage to work on neglected problems, we also think it’s an advantage to take neglected approaches to addressing those problems. We discuss some of these approaches in this section.
Our aim is to help you get new ideas for long-term career paths to work towards. There are many great options we don’t cover, and the best option for you will depend on your strengths and circumstances. What’s more, there’s significant variety between opportunities within each path. For all these reasons, all we can do is help you expand your personal list of long-term options and next steps. You’ll need to synthesise all the considerations to work out the best option for you. We give some tips on doing that in the later sections.
Five career categories for generating options
Given our take on the world’s most pressing problems and the most pressing bottlenecks these issues face, we think the following five broad categories of career are a good place to start generating ideas if you have the flexibility to consider a new career path.
Many of the top problem areas we focus on are mainly constrained by a need for additional research, and we’ve argued that research seems like a high-impact path in general.
Following this path usually means pursuing graduate study in a relevant area where you have good personal fit, then aiming to do research relevant to a top problem area, or else supporting other researchers who are doing this.
Research is the most difficult to enter of the five categories, but it has big potential upsides, and in some disciplines, going to graduate school gives you useful career capital for the other four categories. This is one reason why if you might be a good fit for a research career, it’s often a good path to start with (though we still usually recommend exploring other options for 1-2 years before starting a PhD unless you’re highly confident you want to spend your career doing research in a particular area).
After your PhD, it’s hard to re-enter academia if you leave, so at this stage if you’re still in doubt it’s often best to continue within academia (although this is less true in certain disciplines, like machine learning, where much of the most cutting-edge research is done in industry). Eventually, however, it may well be best to do research in non-profits, corporations, governments and think tanks instead of academia, since this can sometimes let you focus more on the most practically relevant issues and might suit you better.
You can also support the work of other researchers in a complementary role, such as project manager, executive assistant, fundraiser, or operations staff. We've argued these roles are often neglected, and therefore especially high-impact. It's often useful to have graduate training in the relevant area before taking these roles.
Some especially relevant areas to study include (not in order and not an exhaustive list): machine learning, neuroscience, statistics, economics / international relations / security studies / political science / public policy, synthetic biology / bioengineering / genetic engineering, China studies, and decision psychology. (See more on the question of what to study.)
Government is often the most important force in addressing pressing global problems, and there are many positions that seem to offer a good network and a great deal of influence relative to how competitive they are.
In this category, we usually recommend that people aim to develop expertise in an area relevant to one of our priority problems and then take any government or policy job where they can help to improve policy relevant to that problem. Another option is to first develop policy-relevant career capital (perhaps by working in a generalist policy job) and then use the skills and experience you've developed to work on a high-priority problem later in your career.
If you’re a U.S. citizen, working on U.S. federal policy can be particularly valuable because the U.S. federal government is so large and has so much influence over many of our priority problems. People whose career goal is to influence the U.S. federal government often switch between many different types of roles as they advance. In the U.S., many types of roles that can lead to a big impact on our priority problems fit into one of the following four categories. (We focus on the U.S. here because of its influence. We think working in policy can also be quite valuable in other countries, although the potential career paths look slightly different.)
Working in the executive branch, such as at the Defense Department, the State Department, intelligence agencies, or the White House. We don't yet have a review of executive branch careers, but our article on U.S. AI policy careers also makes a more general case for the promise of working in the U.S. federal government. (See also our profile on the UK civil service.) Note, though, that in the U.S. top executive branch officials are often hired from outside the traditional career civil service. So even if your goal is to eventually be a top executive branch official, the best path might include spending much of your career in other types of roles, including those we describe next (but also other roles, such as some in the private sector).
Working as a Congressional staffer. Congressional staffers can have a lot of influence over legislation, especially if they work on a committee relevant to one of our priority problems. It’s possible to achieve seniority and influence as a Congressional staffer surprisingly quickly. Our impression, though, is that the very top staffers often have graduate degrees, sometimes including degrees from top law schools. From this path it’s also common to move into the executive branch, or to seek elected office.
Working for a political campaign. We doubt that political campaign work is the highest-impact option in the long run, but if the candidate you work for wins, this can be a great way to get a high-impact staff position. For example, some of the top people who work on a winning presidential campaign eventually get high-impact positions in the White House or elsewhere in the executive branch. This is a high-risk strategy because it only pays off if your candidate wins and, even then, not everybody on the campaign staff will get influential jobs or jobs in the areas they care about. Running for office yourself involves a similar high-risk, high-reward dynamic.
Influencer positions outside of government, covering policy research and advocacy. For example, you might work at a think tank or a company interested in a relevant policy area. In a job like this, you might be able to: develop original proposals for policy improvements, lobby for specific policies, generally influence the conversation about a policy area, bring an area to the attention of policymakers, etc. You can also often build expertise and connections to let you switch into the executive branch, a campaign, or other policy positions. For many areas of technical policy, especially AI policy, we'd particularly like to emphasise jobs in industry. Working at a top company in an industry can sometimes be the best career capital for policy positions relevant to that industry. In machine learning in particular, some of the best policy research is being done at industry labs, like OpenAI's and DeepMind's. Journalists can also be very influential, but our impression is that there is not as clear a path from working as a journalist to getting other policy jobs.
In the UK, the options are similar. One difference is that there is more separation between political careers and careers in the civil service (which is the equivalent of the executive branch). A second difference is that the U.K. Ministry of Defence has less power in government than the U.S. Defense Department does. This means that roles outside of national security are comparatively more influential in the U.K. than in the U.S. Read more in our profiles on UK civil service careers and UK party political careers. (Both are unfortunately somewhat out of date but still provide useful information).
People also often start policy careers by doing graduate studies in an area relevant to the type of policy they want to work on. In the U.S., it's also common to enter from law school, a master of public policy, or a career in business.
Some especially relevant areas of policy expertise to gain and work within include: technology policy; security studies; international relations, especially China-West relations; and public health with a focus on pandemics and bioterrorism.
There are many government positions that require a wide range of skill types, so there should be some options in this category for nearly everyone. For instance, think tank roles involve more analytical skills (though more applied than the pure research pathway), while more political positions require relatively good social skills. Some positions are very intense and competitive, while many government positions offer reasonable work-life balance and some don’t have very tough entry conditions.
Although we suspect many non-profits don’t have much impact, there are still many great non-profits addressing pressing global issues, and they’re sometimes constrained by a lack of talent, which can make them a high-impact option.
One major advantage of non-profits is that they can tackle the issues that get most neglected by other actors, such as addressing market failures, carrying out research that doesn’t earn academic prestige, or doing political advocacy on behalf of disempowered groups such as animals or future generations.
To focus on this category, start by making a list of non-profits that address the top problem areas, have a large-scale solution to that problem, and are well run. Then, consider any job where you might have great personal fit.
The top non-profits in an area are often very difficult to enter, but you can always expand your search to consider a wider range of organisations. These roles also cover a wide variety of skills, including outreach, management, operations, research, and others.
If you already have a strong existing skill set, is there a way to apply that to one of the key problems?
If there’s any option in which you might excel, it’s usually worth considering, both for the potential impact and especially for the career capital; excellence in one field can often give you opportunities in others.
This is even more likely if you’re part of a community that’s coordinating or working in a small field. Communities tend to need a small number of experts covering each of their main bases.
For instance, anthropology isn’t the field we’d most often recommend someone learn, but it turned out that during the Ebola crisis, anthropologists played a vital role, since they understood how burial practices might affect transmission and how to change them. So, the biorisk community needs at least a few people with anthropology expertise.
This means that if you have an existing skill set that covers a base for a community within a top area, it can be a promising option, even if it’s obscure.
However, there are limits to what can be made relevant. We struggle to think of a way to connect some subjects directly to the top problem areas, so sometimes it will be better to retrain rather than apply an existing skill.
If you have an unusual skill set, it's hard for us to give general advice online about how best to use it. Ideally, you can speak to experts in the problem areas you want to work on about how it might be applied. For the problems we focus on, we have some rough ideas about how particular skill sets can be applied here.
We think many of our readers can excel in roles in the four areas mentioned above, and we encourage you not to rule out these categories prematurely.
If you’re able to take a job where you earn more than you need, and you think none of the categories above are a great fit for you, we’d encourage you to consider earning to give. It’s also worth considering this option if you have an unusually good fit for a very high-earning career.
By donating to the most effective organisations in an area, just about anyone can have an impact on the world’s most pressing problems.
You may be able to take this a step further and ‘earn to give’ by aiming to earn more than you would have done otherwise and to donate some of this surplus effectively.
Not everyone wants to make a dramatic career change, or is well-suited to the narrow range of jobs that have the most impact on the most pressing global problems. However, by donating, anyone can support these top priorities, ‘convert’ their labour into labour working on the most pressing issues, and have a much bigger impact.
This can allow you to pursue your preferred career, while still contributing to pressing areas that require a specialised skill set like biosecurity or global priorities research.
For those who are an especially good fit with a higher-earning career (compared to the other paths), earning to give can be their highest-impact option. For instance, people who were earning to give provided early funding for many organisations we now think are high-impact, and some of those organisations could not have existed without this funding (including us!).
We list some of the highest-earning jobs available in a separate article, and for those with quantitative skills, we especially highlight quantitative trading. However, you can earn to give in any job that pays you more than you need to live comfortably.
When earning to give, it’s also important to pick a job with good personal fit, that doesn’t cause significant harm, and that builds career capital, particularly if you might want to transition into other high-impact options later on.
Considering both income and career capital leads us to favour jobs in high-performing organisations where you can develop skills that are useful in one of the other four categories, such as management or operations. Tech startups with 20-100 employees are often a good place to consider. Management consulting is another option.
Note that we think these categories are a good place to start, but they’re certainly not the right fit for everyone, especially if you have lots of experience or well-developed skills in another area.
We’ve promoted these categories since the very start of 80,000 Hours, though our views about which are best have changed over the years as we learn more and the needs in the top problem areas change. See an outline of the reasoning behind these categories.
Our priority paths
Below are some more specific options that are among the most promising paths we know of right now. Many of them are difficult to enter: you may need to start by investing in your skills for several years; they're focused on people who can work in English-speaking countries; there may be relatively few positions available; and some require difficult-to-obtain credentials, such as a PhD from a top school. For this reason, we encourage you to invest some time coming up with good back-up options. That said, if you have the potential to excel in one of the following paths, it's worth seriously considering, because it may be one of your highest-impact options.
Note that we tend to add or remove about one option from the list each year as we learn more and the needs in our top problem areas change.
As we’ve argued, the next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including the possibility of catastrophic risks.
To manage these risks, one need is technical research into the design of safe AI systems (including the “alignment problem”), which we cover later. But in addition to the technical problems, there are many other important questions to address. These can be roughly categorised into the three key challenges of transformative AI strategy:
Ensuring broad sharing of the benefits from developing powerful AI systems, as opposed to letting AI’s development harm humanity or unduly concentrate power.
Avoiding exacerbating military competition or conflict caused by increasingly powerful AI systems.
Ensuring that the groups that develop AI are working together to develop and implement safety features.
To overcome these challenges, we need a community of experts who understand the intersection of modern AI systems and policy, and work together to mitigate long-term risks and ensure humanity reaps the benefits of advanced AI. These experts would broadly carry out two overlapping activities: (i) research – to develop strategy and policy proposals, and (ii) implementation – working together to put policy into practice.
Ultimately, we see these issues as just as important as the technical ones, but currently they are more neglected. Many of the top academic centres and AI companies have started to hire researchers working on technical AI safety, and there's perhaps a community of 20-50 full-time researchers focused on the issue. However, there are only a handful of researchers focused on strategic issues or working in AI policy with a long-term perspective.
Note that there is already a significant amount of work being done on nearer-term issues in AI policy, such as the regulation of self-driving cars. What’s neglected is work on issues that are likely to arise as AI systems become substantially more powerful than those in existence today — so-called “transformative AI” — such as the three non-technical challenges outlined above.
Some examples of top jobs to work towards long-term in this path include the following, which fit a variety of skill types:
Work at top AI labs, such as DeepMind or OpenAI, especially in relevant policy team positions or other influential roles.
In academia, become a researcher at one of the institutes focused on long-term AI policy, especially the Future of Humanity Institute at Oxford, which already has several researchers working on these issues at the Center for the Governance of AI.
In party politics, aim to get an influential position, especially as an advisor with a focus on emerging technology policy (e.g. start as a staffer in Congress).
How to enter
In the first few years of this path, you’d focus on learning more about the issues and how government works, as well as meeting key people in the field, and doing research, rather than pushing for a specific proposal. AI policy and strategy is a deeply complicated area, and it’s easy to make things worse by accident (e.g. see the Unilateralist’s Curse).
Some common early career steps include:
Relevant graduate study. Some especially useful fields include international relations, strategic studies, machine learning, economics, law, public policy, and political science. Our top recommendation right now is machine learning if you can get into a top 10 school in computer science. Otherwise, our top recommendation tends to be: (i) law school if you can get into Yale or Harvard; (ii) international relations if you want to focus on research; and (iii) strategic studies if you want to focus on implementation. However, the best choice for you will also depend heavily on your personal fit and the particular schools you get into.
Working at a top AI company, especially DeepMind and OpenAI.
Any general entry-level government and policy positions (as listed earlier), which let you gain expertise and connections, such as think tank internships, being a researcher or staffer for a politician, joining a campaign, and government leadership schemes.
This field is at a very early stage of development, which creates multiple challenges. For one, the key questions have not been formalised, which creates a need for “disentanglement research” to enable other researchers to get traction. For another, there is a lack of mentors and positions, which can make it hard for people to break into the area.
Until recently, it’s been very hard to enter this path as a researcher unless you’re able to become one of the top approximately 30 people in the field relatively quickly. While mentors and open positions are still scarce, some top organisations have recently recruited junior and mid-career staff to serve as research assistants, analysts, and fellows. Our guess is that obtaining a research position will remain very competitive but positions will continue to gradually open up. On the other hand, the field is still small enough for top researchers to make an especially big contribution by doing field-founding research.
If you’re not able to land a research position now, then you can either (i) continue to build up expertise and contribute to research when the field is more developed, or (ii) focus more on the policy positions, which could absorb hundreds of people.
Most of the first steps on this path also offer widely useful career capital. For instance, depending on the subarea you start in, you could often switch into other areas of policy, the application of AI to other social problems, operations, or earning to give. So, the risks of starting down this path if you may want to switch later are not too high.
Since this is one of our top priority paths, we have a specialist advisor, Niel Bowerman, who focuses on finding and helping people who want to enter it. He is especially focused on roles aimed at improving US AI public policy. If you would like advice, get in touch here.
Could this be a good fit for you?
One key question is whether you have a reasonable chance of getting some of the top jobs listed earlier.
The government and political positions require people with a well-rounded skill set, the ability to meet lots of people and maintain relationships, and the patience to work with a slow-moving bureaucracy. It's also ideal if you're a U.S. citizen who might be able to get a security clearance, and don't have an unconventional past that could create problems if you choose to work in politically sensitive roles.
The more research-focused positions would typically require the ability to get into a top 10 grad school in a relevant area and a deep interest in the issues. For instance, when you read about the issues, do you get ideas for new approaches to them? Read more about predicting fit in research.
Turning to other factors, you should only enter this path if you’re convinced of the importance of long-term AI safety. This path also requires making controversial decisions under huge uncertainty, so it’s important to have excellent judgement, caution and a willingness to work with others, or it would be easy to have an unintended negative impact. This is hard to judge, but you can get some information early on by seeing how well you’re able to work with others in the field.
However, if you can succeed in this area, then you have the opportunity to make a significant contribution to what might well be the most important issue of the next century.
As we’ve argued, the next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including the possibility of existential risks.
Besides strategy and policy work discussed above, another key way to limit these risks is research into the technical challenges raised by powerful AI systems, such as the alignment problem. In short, how do we design powerful AI systems so they’ll do what we want, and not have unintended consequences?
This field of research has started to take off, and there are now major academic centres and AI labs where you can work on these issues, such as MILA in Montreal, FHI at Oxford, CHAI at Berkeley, DeepMind in London and OpenAI in San Francisco. We’ve advised over 100 people on this path, with several already working at the above institutions. The Machine Intelligence Research Institute, in Berkeley, has been working in this area for a long time and has an unconventional perspective and research agenda relative to the other labs.
There is plenty of funding available for talented researchers, including academic grants, and philanthropic donations from major grantmakers like the Open Philanthropy Project. It’s also possible to get funding for your PhD programme. The main need of the field is more people capable of using this funding to carry out the research.
In this path, the aim is to get a position at one of the top AI safety research centres, either in industry, nonprofits or academia, and then try to work on the most pressing questions, with the eventual aim of becoming a research lead overseeing safety research.
Broadly, AI safety technical positions can be divided into (i) research and (ii) engineering. Researchers direct the research programme. Engineers create the systems and do the analysis needed to carry out the research. Although engineers have less influence over the high-level research goals, it can still be important that engineers are concerned about safety. This concern means they’ll better understand the ultimate goals of the research (and so prioritise better), be more motivated, shift the culture towards safety, and use the career capital they gain to benefit other safety projects in the future. This means that engineering can be a good alternative for those who don’t want to be a research scientist.
It can also be useful to have people who understand and are concerned by AI safety in AI research teams that aren’t directly focused on AI safety to help promote concern for safety in general, so this is another backup option. This is especially true if you can end up in a management position with some influence over the organisation’s priorities.
How to enter
The first step on this path is usually to pursue a PhD in machine learning at a good school. It’s possible to enter without a PhD, but it’s close to a requirement in research roles at the academic centres and DeepMind, which represent a large fraction of the best positions. A PhD in machine learning also opens up options in AI policy, applied AI and earning to give, so this path has good backup options.
However, if you want to pursue engineering over research, then the PhD is not necessary. Instead, you can do a master's programme or train up in industry.
It’s also possible to enter this path from neuroscience, especially computational neuroscience, so if you already have a background in that area you may not have to return to study. Recently, opportunities have also opened up for social scientists to contribute to AI safety (we plan to cover this in future work).
Could this be a good fit for you?
Might you have a shot at getting into a top 5 graduate school in machine learning? This is a reasonable proxy for whether you can get a job at a top AI research centre, though it's not a requirement. Needless to say, these places are very academically demanding.
Are you convinced of the importance of long-term AI safety?
Are you a software or machine learning engineer who’s been able to get jobs at FAANG and other competitive companies? You may be able to train to enter a research position, or otherwise take an engineering position.
Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, sometimes have ideas for questions to look into, and can’t resist pursuing them? Read more about how to tell if you’re a good fit for working in research.
The Open Philanthropy Project takes an effective altruism approach to advising philanthropists on where to give. It likely has over $10bn of committed funds from Dustin Moskovitz and Cari Tuna, and is aiming to advise other philanthropists. There are other “angel” donors in the community who could give in the $1-$10m range per year, but aren’t at their maximum level of giving. And we know a number of other billionaires who are interested in effective altruism and might want to start new foundations.
One reason why these donors don’t give more is a lack of concrete “shovel-ready” opportunities. This is partly due to a lack of qualified leaders able to run projects in the top problem areas (especially to found nonprofits working on research, policy and community building). But another reason is a lack of grantmakers able to vet these opportunities or generate new projects themselves. A randomly chosen new project in this area likely has little expected impact — since there’s some chance it helps and some chance it makes the situation worse — so it’s vital to have grantmakers able to distinguish good projects from the bad.
The skill of grantmaking involves being able to survey the opportunities available in an area, and come to reasonable judgements about their likelihood of success, and probable impact if they do succeed. Grantmakers also need to build a good network, both so they can identify opportunities early, and identify groups with good judgement and the right intentions.
In addition, grantmakers need to get into a position where they’re trusted by the major funders, and this requires having some kind of relevant track record.
All of this makes it incredibly difficult to become a grantmaker, especially early in your career. The Open Philanthropy Project's last hiring round for research analysts had hundreds of applicants, only 12 of whom got in-person trials, and only 5 of whom received job offers.
However, the high stakes involved mean that if you are able to get into one of these positions, then you can have a huge impact. A small-scale grantmaker might advise on where several million dollars of donations are given each year. Meanwhile, a grantmaker at a large foundation — typically called a "programme officer" or "programme director" — might oversee $5–$40m of grants per year.
Given the current situation, it’s likely that a significant fraction of the money a grantmaker oversees wouldn’t have been donated otherwise for at least several years, so they get good projects started sooner and may increase the total amount of giving by creating capacity before potential donors lose interest.
What’s more, by having more talented grantmakers, the money can be donated more effectively. If you can improve the effectiveness of $10m per year to a top problem area by 10%, that’s equivalent to donating about $1m yourself. This often seems achievable because the grantmakers have significant influence over where the funds go and there’s a lot of potential to do more detailed research than what currently exists.
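The back-of-the-envelope arithmetic here can be made explicit. The figures below are the illustrative numbers from the paragraph above, not real grant data:

```python
# Illustrative arithmetic: improving how effectively a pool of grant
# funding is spent is comparable to donating the gained value yourself.
funds_overseen = 10_000_000  # $10m of grants influenced per year (example figure)
improvement = 0.10           # a 10% boost in effectiveness (example figure)

equivalent_donation = funds_overseen * improvement
print(f"Equivalent annual donation: ${equivalent_donation:,.0f}")
# Equivalent annual donation: $1,000,000
```

The same logic scales linearly: on these assumptions, a grantmaker influencing $40m with the same 10% improvement would be adding value comparable to $4m per year in donations.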
Overall, we think top grantmakers working in effective altruism can create value equal to millions or even tens of millions of dollars per year in donations to top problem areas, making it one of the highest-impact positions right now.
Finally, these positions offer good career capital because you’ll make lots of important connections within the top problem areas. This creates opportunities to exit into direct work. Another exit option is government and policy. Or you could switch into operations or management, and have an impact by enabling other grantmakers to be more effective.
One related path is to work as a grantmaker in a foundation that doesn’t explicitly identify with effective altruism, in order to help bring in an effective altruism perspective. The advantage of this path is that it might be easier to add value. However, the downside is that most foundations are not willing to change their focus areas, and we think choice of focus area is the most important decision. Existing foundations also often require significant experience in the area, and sometimes it’s not possible to work from junior positions up to programme officer.
Another related path is philanthropic advising. One advantage of this path is that you can pursue it part-time to build a track record. This also means you could combine it with earning to give and donating your own money, or with advocacy positions that might let you meet potential philanthropists. We’ve seen several people give informal advice to philanthropists, or be given regranting funds to donate on their behalf.
A third related path is to work at a government agency that funds relevant research, such as IARPA, DARPA and NIH. Grantmakers in these agencies often oversee even larger pools of funding, but you’ll face more restrictions on where it can go. They also often require a PhD.
How to enter
One entry route is to take a junior position at one of these foundations (e.g. research analyst), then work your way up to being a grantmaker. We think the best place to work if you're taking this path is the Open Philanthropy Project (disclaimer: we've received grants from them). Founders Pledge also has a philanthropic advising team, though it has less of a track record and is less focused on problem areas relevant to the long-term future. You could also consider research positions at other effective altruism organisations — wherever will let you build a track record of this kind of research (e.g. the Future of Humanity Institute).
Another key step is to build up a track record of grantmaking. You could start by writing up your thoughts about where to give your own money on the effective altruism forum. From there, it might be possible to start doing part-time philanthropic advising, and then work up to joining a foundation or having a regranting pool (funds given to you by another donor to allocate).
A third option is to pursue work in the problem area where you want to make grants, perhaps in nonprofits, policy or research, in order to build up expertise and connections in the area. This is the usual route into grantmaking roles. For instance, the Open Philanthropy Project hired Lewis Bollard to work on factory farming grants after he was Policy Advisor & International Liaison to the CEO at The Humane Society of the United States, one of the leading organisations in the area.
Could this be a good fit for you?
This position requires a well-rounded skill set. You need to be analytical, but also able to meet and build relationships with lots of people in your problem area of focus.
Like AI policy, it requires excellent judgement.
Some indications of potential: Do you sometimes have ideas for grants others haven’t thought of, or only came to support later? Do you think you could persuade a major funder of a new donation opportunity? Can you clearly explain the reasons you hold particular views, and their biggest weaknesses? Could you develop expertise and strong relationships with the most important actors in a top problem area? Could you go to graduate school in a relevant area at a top 20 school? (This isn’t needed, but is an indication of analytical ability.)
Note that working as support or research staff for an effective grantmaker is also high-impact, so that’s a good backup option.
We think building the effective altruism community is a promising way to build capacity to address pressing global problems in the future. This is because the community seems able to grow a great deal, and it contains people who are willing to switch areas to work on whichever issues turn out to be most urgent, making it robust to changes in priorities.
We realise this seems self-promotional, since we ourselves run an effective altruism organisation. However, if we didn’t recommend what we ourselves do, then we’d be contradicting ourselves. We also wouldn’t want everyone to work on this area, since then we’d only build a community and never do anything. But we think recommending it as one path among about ten makes sense.
A key way to contribute to building the effective altruism community is to take a job at one of the organisations in the community — see a list of organisations. Many of these organisations have a solid track record, are growing and have significant funding, so a big bottleneck is finding staff who are a good fit.
An additional staff member can often grow the community by several additional people each year, achieving a multiplier on their effort. And these organisations also do other useful work, such as research, fundraising and providing community infrastructure.
These roles let you develop expertise in effective altruism, top global problem areas and running startup nonprofits. They put you at the heart of the effective altruism movement and long-term future communities, letting you build a great network there. Many of the organisations also put a lot of emphasis on personal development.
However, it's also important to bear in mind that these roles require a specific type of person: someone who has strong skills that are needed by the organisations (which often require a style of research and reasoning which isn't common elsewhere), a good fit with the specific team, and deep engagement in effective altruism. Many of the organisations are also management constrained, which raises the bar for getting hired — your application may need to demonstrate a high likelihood of excelling with relatively little supervision almost immediately.
This means if you are a good fit for one of these roles, then you probably won’t be significantly replaceable, and taking the role can be very high-impact.
However, it also means that most people are not a good fit for most of these roles. This means that unless you have strong evidence otherwise, you shouldn’t expect to have more than a couple of percent chance of landing a specific job. Given that there are usually not many jobs available within these organisations at a given time, you shouldn’t have ‘work at EA orgs’ as the only category of jobs you pursue. You should also apply to another category, such as policy positions or something to build career capital.
There are a variety of roles available, broadly categorised into the following:
Management, operations and administration – e.g. hiring staff, setting strategy, creating internal processes, setting budgets.
Research and advice – e.g. developing the ideas of effective altruism, writing and talking about them.
Outreach, marketing and community – e.g. running social media accounts, content marketing, visual design, moderating forums, market research, responding to the media, helping people in the community.
Systems and engineering – e.g. web engineering, data capture and analysis, web design, creating internal tools.
We'd like to especially highlight roles in operations management, since organisations in the community have a significant need for them, but we often find that these roles get neglected, perhaps because they're seen as less glamorous. Another common assumption is that these roles are easy to enter, which would make the people filling them more replaceable. Our view, however, is that operations management jobs are both essential and difficult, and require people to make them the main focus of their career. Read more in our full article about operations management.
How to enter
To enter these roles, you can apply directly to the organisations. Organisations often hire people who are already involved in the community, because commitment to and knowledge of the community are a requirement for many jobs and because it’s easier to evaluate a candidate if you already know their work. This means that if you want to aim towards these positions, the most important step is to start meeting people in the community, and doing small projects to build your reputation (e.g. writing on the forum, volunteering at EA Global, starting a local group, or doing freelance consulting for an organisation). We list more advice in our full profile.
As mentioned, because these positions are scarce, almost nobody can count on getting one. This means you should make sure you’re acquiring career capital that would be relevant to other paths (e.g. a full-time job or graduate school) at the same time as you’re building your reputation within effective altruism. It’s usually not a good idea to commit to this path or build plans that depend on getting one of these jobs before you’ve gotten an offer.
If you want to get a job that puts you in a better position to enter these roles in the future, then do something that lets you develop a concrete skill that’s relevant to one of the role types listed above. Well-run tech startups with 10-100 people are often a good place to learn these skills in a similar context. Alternatively, some effective altruism organisations frequently hire people from our other priority paths. Excelling in any of those paths is a great way to better position yourself for a job at an effective altruism organisation and could be equally or more impactful on its own.
Could this be a good fit for you?
Whether you might be a good fit in part depends on the type of role you’re going for. However, there are some common characteristics the organisations typically look for:
A track record that demonstrates intelligence and an ability to work hard.
Evidence of deep interest in effective altruism — for some roles you need to be happy to talk about it much of the day. This breaks down into a focus on social impact and a scientific mindset, as well as knowledge of the community.
Flexibility and independence – the organisations are relatively small, so staff need to be happy to work on lots of different projects with less structure.
It’s not a requirement, but it seems to be becoming difficult to get most of these jobs without several years of experience in a relevant skill.
We've argued that one of the most important priorities is working out what the priorities should be. There's a huge amount that's not known about how to do the most good, and although this is one of the most important questions someone could ask, it has received little systematic study.
The study of which actions do the most good is especially neglected if you take a long-term perspective, in which what matters most is the effects of our actions on future generations. This position has only recently been explored, and we know little about its practical implications. Given this, we could easily see our current perspective on global priorities shifting with more research, so these questions have practical significance.
The study of how to help others is also especially neglected from a high-level perspective. People have done significant work on questions like "how can we reduce climate change?", but much less on questions like "how pressing is climate change compared to health?" and "what methods should we use to make that comparison?". It's these high-level questions we especially want to see addressed.
We call the study of high-level questions about how best to help others “global priorities research”. It’s primarily a combination of moral philosophy and economics, but it also draws on decision theory, decision-making psychology, moral psychology, and a wide variety of other disciplines, especially those concerning technology and public policy. You can see a research agenda produced by the Global Priorities Institute at Oxford.
We’d like to see global priorities research turned into a flourishing field, both within and outside of academia.
To make this happen, perhaps the biggest need right now is to find more researchers able to make progress on the key questions of the field. There is already enough funding available to hire more people if they could demonstrate potential in the area (though there’s a greater need for funding than with AI safety). Demonstrating potential is hard, especially because the field is even more nascent than AI safety, resulting in a lack of mentorship. However, if you are able to enter, then it’s extremely high-impact — you might help define a whole new discipline.
Another bottleneck to progress on global priorities research might be operations staff, as discussed earlier, so that’s another option to consider if you want to work on this issue.
You can broadly pursue this path either in academia or nonprofits.
We think building this field within academia is a vital goal, because if it becomes accepted there, then it will attract the attention of hundreds of other researchers.
The only major academic centre currently focused on this research is the Global Priorities Institute at Oxford (GPI), so if you want to pursue this path as an academic, that’s one of the top places to work. One problem is that GPI only has a couple of open positions, and you’d usually need to have a top academic background in philosophy or economics to get one of them (e.g. if you did well in a PhD from a top 10 school in your subject that’s a good sign). Positions are especially competitive in philosophy.
A new organisation called the Forethought Foundation for Global Priorities Research offers scholarships and fellowships to students in global priorities research as well as research grants for established scholars. We expect you’ll need a top background in philosophy or economics to get one of these (e.g. an undergrad who could get into a top 10 philosophy PhD programme or a top 10-20 economics PhD programme; a grad student attending one of those programmes; or a postdoc or academic who graduated from – or teaches at – one of those programmes).
That said, we expect that other centres will be established over the coming years. In the meantime, you could try to build expertise. For instance, doing an economics PhD (and postdoc) opens up lots of other options, so is a reasonable path to pursue even if you’re not sure that global priorities research is a good fit for you. It’s also important to have academics doing global priorities research (and potentially collaborating with GPI) at other universities.
One downside of academia, however, is that you need to work on topics that are publishable, and these are often not those that are most relevant to real decisions. This means it’s also important to have researchers working elsewhere on more practical questions.
We think the leading applied centre working on this research is the Open Philanthropy Project. One other advantage of working there is that your findings will directly feed into how billions of dollars are spent (disclaimer: we have received grants from them). However, you can also pursue this research at other effective altruism organisations. 80,000 Hours, for instance, does a form of applied global priorities research focused on career strategy.
How to enter
The best entry route to the academic end of the field is a PhD in economics or philosophy. This is both because PhDs provide useful training and because they are required for most academic positions. Currently, economists are in shorter supply than philosophers, and economics also gives you better back-up options, so it is preferable if you have the choice.
It’s also possible to enter from other disciplines. A number of people in the field have backgrounds in maths, computer science and physics. Psychology is perhaps the next most relevant subject, especially the areas around decision-making psychology and moral psychology. The field also crosses into AI and emerging technology strategy, so the options we listed in the earlier sections are also relevant, as well as knowledge of relevant areas of science. Finally, as the field develops there will be more demand for people with a policy focus, who might have studied political science, international relations, or security studies. In general, this is a position where wide general knowledge is more useful than most.
With the non-academic positions, a PhD isn’t necessary, but you do ideally need to find a way to demonstrate potential in this kind of research. It’s useful to develop skills in clear writing and basic quantitative analysis. Sometimes people enter the non-academic roles directly from undergrad if they’re sufficiently talented.
Could this be a good fit for you?
Might you be able to get into a PhD in economics or philosophy at a top 10 school? (This isn’t to say this qualification is required, it’s just that if you would be able to succeed in such a path, it’s an indicator of ability.)
Do you have excellent judgement? By this we mean, can you take on messy, ill-defined questions, and come up with reasonable assessments about them? This is not required in all roles, but it is especially useful right now given the nascent nature of the field and nature of the questions that are ultimately being addressed.
Do you have general knowledge or an interest in a wide range of academic disciplines?
Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, and sometimes have ideas for questions to look into? Are you able to work independently for many days at a time? Are you able to stick with or lead a research project over many years? Read more about predicting success in research.
There is already a significant community working on pandemic prevention, and there are many ways to contribute to this field. However, most of the existing work is focused on naturally caused pandemics like those we've seen in the past and COVID-19 (though this is starting to change a bit). While these are very important to mitigate, we think it's even more important to prevent pandemics that pose catastrophic risks, especially those that might totally end human civilisation. There is substantial overlap between work that mitigates these known pandemic risks and work on more extreme risks, so each is also helpful for the other; still, work that is particularly focused on the extreme risks seems somewhat neglected in the field right now.
For reasons our profile explains, catastrophic pandemics seem more likely to be human-caused, and perhaps even deliberately caused. So they may be more well-targeted by security and biodefence interventions than conventional public health ones. Moreover, much past funding for work on bioterrorism seems to have focused on more well-known risks such as anthrax, which doesn’t pose a catastrophic risk.
This means that despite significant existing work on pandemic prevention, global catastrophic biological risks seem neglected.
We rate biorisk as a less pressing issue than AI safety, mainly because we think biorisks are less likely to be truly existential, and AI seems more likely to play a key role in shaping the long-term future in other ways. However, working to prevent catastrophic pandemics seems very high value to us, and can easily be your top option if you have a comparative advantage in this path (e.g., a background in medicine).
We can roughly divide this path into working in government and related organizations on the one hand, and working in research on the other.
The main line of defence against these risks is government, so it’s valuable to build up a community of experts in relevant areas of national government and intergovernmental organisations. These include:
The US Centers for Disease Control
The World Health Organization
The European Centre for Disease Prevention and Control
Another option is to work in academia. This involves developing a relevant area of expertise, such as synthetic biology, genetics, public health, epidemiology, international relations, security studies, or political science. Note that it’s possible—and at times beneficial—to start by studying a quantitative subject (sometimes even to graduate level), and then switch into biology later. Quantitative skills are in demand in biology and give you better back-up options.
Once you’ve completed training, you could do a number of things—including but not limited to: research on directly useful technical questions (such as how to create broad-spectrum diagnostics or rapidly deploy vaccines), research on strategic questions (such as how dangerous technologies should be controlled), or advising for policy-makers and other groups on the relevant issues. One top research centre you could aim to work at is the Center for International Security and Cooperation at Stanford.
As with AI strategy, the study of global catastrophic biological risk is still a nascent field. This again can make it hard to contribute, since—although progress is being made—we don’t yet know which research questions are most important, and there is often a shortage of mentorship.
This means that there’s an especially pressing need for more “field building” or “disentanglement” research, with the aim of defining the field. If you might be able to do this kind of work, then your contribution is especially valuable since you can unlock the efforts of other researchers. The main home for most of this kind of research with a long-term focus right now is the Future of Humanity Institute in Oxford.
If you’re not able to contribute to disentanglement research right now, there are several other things you can do, including: (i) tackle more straightforward relevant research questions, (ii) work in more mainstream biorisk organisations to build up expertise, (iii) focus on policy positions with the aim of building a community and expertise, or (iv) become an expert on a relevant area of biology, international relations, or a related field.
One advantage of working on biorisk is that many of the top positions seem somewhat less competitive than in AI technical safety work, because they don’t require world-class quantitative skills.
Besides pandemic risks, we're also interested in how to safely manage the introduction of other potentially transformative discoveries in biology, such as genetic engineering (which could be used to fundamentally alter human characteristics and values) and anti-ageing research. We see these issues as somewhat less pressing than the possibility of engineered pandemics, but they provide another reason to develop expertise in these areas.
Often the way to enter this path is to pursue relevant graduate studies (such as in the subjects listed above) because this takes you along the academic path, and is also helpful in the policy path, where many positions require graduate study. Alternatively, you can try to directly enter relevant jobs in government, international organisations, or nonprofits, and build expertise on the job.
The backup options for this path depend on what expertise you have, but they include other options in the policy realm—it’s usually possible to switch your focus within a policy career. You could also work on adjacent research questions that also have the potential to make a positive difference, such as in global health, ageing, or genetics. These backup options seem generally attractive, though somewhat less promising and more competitive than the ones made available by pursuing AI safety policy or technical research (which is one reason we rank this path a bit lower).
Could this be a good fit for you?
Are you deeply concerned with reducing catastrophic risks, and especially extinction risks?
Do you have reasonably strong quantitative skills? (They don't need to be as strong as for the AI fields.)
Do you already have experience in a research area relevant to biology (such as those listed above)?
Might you be capable of getting a PhD from a top 30 school in one of these areas? This isn’t required but is a good indicator. Read more about predicting success in research.
If focused on field-building research, can you take on messy, ill-defined questions, and come up with reasonable assessments about them?
If focused on policy, might you be capable of getting and being satisfied in a relevant position in government? In policy, it’s useful to have relatively stronger social skills, such as being happy to speak to people all day, and being able to maintain a robust professional network. Policy careers also require patience in working with large bureaucracies, and sometimes also involve facing public scrutiny.
China will play a role in solving many of the biggest challenges of the next century, including emerging technologies and global catastrophic risks, and China is one of the most influential countries in AI development and deployment.
However, it often seems like there’s a lack of understanding and coordination between China and the West. For instance, even today, 3 times as many Americans study French as study Chinese. For this reason, we’d like to see more people (both Chinese and non-Chinese) develop a deep understanding of the intersection of effective altruism and China, and help to coordinate between the two countries.
In particular, we want people to learn about the aspects of China most relevant to our top problem areas, which means topics like artificial intelligence, international relations, pandemic response, bioengineering, political science, and so on. China is also crucial in improving farm animal welfare, though we currently rate this as a lower priority.
More concretely, this could mean options like:
Graduate study in a relevant area, such as machine learning or synthetic biology; economics, international relations, or security studies with a focus on China or emerging technology; or Chinese language, history and politics.
If possible, a prestigious China-based fellowship, such as the Schwarzman Scholarship programme or Yenching Scholars, is a great option.
Research at a think tank or academic institute focused on these topics.
Work at a Chinese technology company.
If you are a foreigner, learn Chinese in China, or find another option that lets you live there.
If you are Chinese, work at an international effective altruism organisation.
Work at an influential philanthropic foundation.
Once you have this expertise, you could aim to contribute to AI and biorisk strategy & policy questions that involve China. You could also advise and assist international organisations that want to coordinate with Chinese organisations. You might also directly work for Chinese organisations that are concerned with these problem areas.
Note that we’re not in favour of promoting effective altruism in China, working in the government, or attempting to fundraise from Chinese philanthropists. This could easily backfire if the message was poorly framed or if its intent was misperceived in China.
Rather, the aim of this path is to learn more about China, and then aim to improve cooperation between international and Chinese groups. If you are considering doing outreach in China, get in touch and we can introduce you to people who can help you navigate the downsides.
To help with this priority path, we work with a part-time advisor who’s a specialist in China. You can get help from them here.
Could this be a good fit for you?
Do you already have knowledge of China? If not, could you see yourself becoming interested in Chinese politics, economy, culture, and so on, and also being involved in the effective altruism community?
Compared to other options on the list, this path relies more on humanities skills (e.g. understanding international relations and cross-cultural differences) than on a scientific skill set.
Otherwise the skill set required is fairly similar to the AI strategy and policy path earlier.
If you can find a job you have a good fit with, it seems like it’s usually possible to make a larger contribution to the problem areas we highlight by working directly on them rather than earning to give. We generally think that these problem areas are more “talent constrained” than “funding constrained”.
However, additional funding is still useful, so earning to give is still a high-impact option. Earning to give can also be your top option if your best fit is with an unusually well-paid career. One kind of job we’ve seen work especially well for this is quantitative trading in hedge funds and proprietary firms.
Quantitative trading means using algorithms to trade the financial markets for profit. We think that, for the most part, the direct impact of the work is likely neutral. But it might be the highest-paid career path out there.
Compensation starts around $100k–$300k per year, and can reach $1m per year within a couple of years at the best firms. Eventually, it’s possible to earn over $10m per year if you make partner. We estimate that if you can work as a quantitative trader at a good firm, the expected earnings average around $1m per year over a career. This is similar to being a tech startup founder, except that for startup founders who make it into a top accelerator, the deal is more like a 10% chance of getting $30m in 3–10 years, so the startup option involves much more risk and a major delay.
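As a rough sketch of the expected-value comparison in that paragraph (all figures are the illustrative ones from the text; real outcomes vary widely, and the startup case ignores smaller exits for simplicity):

```python
# Quant trading: roughly $1m/year expected earnings over a career
# (the text's estimate for a good firm).
quant_expected_per_year = 1_000_000

# Startup founder in a top accelerator: ~10% chance of a ~$30m outcome
# after 3-10 years; other outcomes treated as $0 for simplicity.
p_success = 0.10
payoff = 30_000_000
startup_expected_total = p_success * payoff  # $3m in expectation

# Annualised over the midpoint of the 3-10 year window (6.5 years):
startup_expected_per_year = startup_expected_total / 6.5
print(f"Startup, expected per year: ${startup_expected_per_year:,.0f}")
# Startup, expected per year: $461,538
```

On these very rough numbers the two paths have expected values in the same ballpark, but the startup path concentrates its payoff in a low-probability, delayed outcome, which is the extra risk and delay the comparison refers to.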
Given the expected earnings in quantitative trading, this work can enable you to make large donations to effective charities relatively quickly. We know several people in this career path, such as Sam, who donated six figures within their first couple of years of work. This is enough to set you up as an “angel donor” in the community, meaning you could fund promising new projects that larger donors could later scale up if they’re successful enough in the early stages.
Many people also find the work surprisingly engaging. Creating a winning trading strategy is an intellectual challenge, and you get to work closely with a team and receive rapid feedback on how you’re performing. These firms often have “geeky” cultures, making them quite unlike the stereotype about financial workplaces. The hours are often a reasonable 45–60h per week, rather than the 60–80h reported in entry-level investment banking.
These jobs are prestigious due to their earnings, and you can learn useful technical skills, as well as transferable skills like teamwork.
The main downsides of these positions are that they may not help you make that many good connections—since you’ll mainly only work with others in your firm—and they don’t help you learn about global problems on the job. They’re also highly competitive. There are more comments on fit below.
Role types and top firms
There are two broad pathways:
Traders: develop and oversee the strategies
Engineers: create the systems to collect data, implement trades, and track performance
It varies from firm to firm, but typically engineers are paid less, though their earnings are more stable.
It’s also important to know that salaries can vary significantly by firm. There are perhaps only a couple of firms where it’s possible to progress to seven figures relatively quickly without a graduate degree. These include Jane Street, Hudson River Trading, D. E. Shaw, and possibly some others. The pay is often significantly lower at other firms, though still often several hundred thousand dollars per year. Some other firms offer earnings on the level of those above, but require a PhD.
In addition, note that quantitative trading positions are very different from “quant” jobs at other financial companies, such as investment banks or non-quantitative investment firms. Usually “quants” are “middle office” staff who provide analysis to the “front office” staff who oversee the key decisions. This makes them more stable but significantly lower paid, and sometimes less prestigious. Such firms also typically have a less geek-friendly culture.
Could this be a good fit for you?
One indication of potential fit is that you’d be capable of finishing in the top half of the class at a top 30 school in mathematics, theoretical physics, or computer science at the undergraduate level.
One option is to enter this path based on your programming skills as an engineer. This might be possible if you’re someone who would be able to get a top software engineering job at a tech company such as Google.
Besides intelligence, top firms also look for good judgement and rapid decision-making skills. One indication of these is that you like playing strategy games or poker.
Compared to academia, you need to have relatively better communication and teamwork skills to pursue this path, since you’ll work closely with your colleagues hour-by-hour in potentially stressful situations.
Would you be capable of reliably giving a large fraction of your income to charity? Finding support for your giving through community or public commitments can help.
What about other options for earning to give outside quantitative trading? Although we don’t know of a career path that has as high and as secure earning potential, if you can find another very high paying career—such as in other areas of finance or in some cases (particularly in the US) law—donating part of your income from such a career could be your best option for making a positive difference. Entrepreneurship may also be a promising option—though it’s risky, we know several successful founders donating substantial and in some cases very large sums to supporting effective organisations.
Whether earning to give is worth it in a particular case depends on how much the potential role earns, and how good your fit is for other options. If you would be able to do useful AI policy research or pursue one of the other ‘priority paths’ discussed above, then your potential earnings would have to be much higher in order for earning to give to be your best option. But if you aren’t a good fit for one of those options or something else that seems similarly high-impact, it’s more likely that earning to give could be your top option even if you pursue a path that pays less than quantitative trading. Unfortunately, because of this variability it’s hard to give one-size-fits-all advice in this area.
It’s also challenging to figure out where it’s best to give, especially if you’re hoping to fund early-stage projects in unusual areas. We provide some giving advice here. If you are a large donor (say, giving over $100k/year), it might also be worth seeking professional giving advice. Regardless, you can learn from the work of professional philanthropic advisors like Effective Giving or Open Philanthropy by reading about their grants and reasoning online.
Governments and other important institutions frequently have to make complex, high-stakes decisions based on judgement calls, often from just a handful of people. There’s reason to believe that human judgements can be flawed in a number of ways, but can be substantially improved using more systematic processes and techniques. One of the most promising areas we’ve seen is the potential to use more rigorous forecasting methods to make better predictions about important future events. Improving the quality of foresight and decision-making in important institutions could improve our ability to solve almost all other problems.
We’d like to help form a new community of researchers and practitioners who develop and implement these techniques. We’re especially keen to help people who want to work on the areas of policy most relevant to global catastrophic risks, such as nuclear security, AI, and biorisk. Note that we’re not talking about the popular “nudge” work in behavioural sciences, which is focused on making small improvements to personal behaviours. Rather, we’re interested in neglected work relevant to high-stakes decisions like whether to go to war, such as Tetlock’s research into forecasting.
This path divides into two main options: (i) developing better forecasting and decision-making techniques, and (ii) getting them applied in important organisations, especially those relevant to catastrophic risks.
To enter, the first step is to gain relevant expertise. This is most naturally done by working on relevant techniques in a lab like Tetlock’s or studying other important decision-making processes in a graduate programme. However, you could also take a more practical route by starting your career in government and policy, and learning about the science on the side.
Once you have the expertise, you can either try to make progress on key research questions in the field, or work with an important organisation to improve their processes. We can introduce you to people working on this.
As with global priorities research, this is a nascent field that could become much bigger, and now is an exciting time to enter.
Could this be a good fit for you?
Might you be able to get a job in a relevant area of government? Do you know how to influence choices within a bureaucracy?
On the research path, might you be able to get into a relevant PhD at a top 30 school?
Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, and sometimes have ideas for questions to look into? Are you able to work independently for many days at a time? Are you able to stick with or lead a research project over many years? Read more about predicting success in research.
Other paths that may turn out to be very promising
Below we list some more career options. Some are included in this list rather than above because, while we think they could be top options for some of our readers, we expect they’ll typically be less impactful than our priority paths for people who could succeed in either. Others seem very promising but only have room for a few people. Others are likely to be written up as priority paths, or folded into existing ones, but we haven’t yet written full profiles for them. Still others seem like they could be as promising as our priority paths, but we haven’t investigated them enough to be sure.
Our impression is that although many of these topics have received attention from historians and other academics (examples: 1, 2, 3, 4, 5), some are comparatively neglected, especially from a more quantitative or impact-focused perspective.
In general, there seem to be a number of gaps that skilled historians, anthropologists, or economic historians could help fill. Revealingly, the Open Philanthropy Project commissioned their own studies of the history and successes of philanthropy because they couldn’t find much existing literature that met their needs. Most existing research is not aimed at deriving action-relevant lessons.
However, this is a highly competitive path, which is not able to absorb many people. Although there may be some opportunities to do this kind of historical work in foundations, or to get it funded through private grants, pursuing this path would in most cases mean seeking an academic career. Academia generally has a shortage of positions, and especially in the humanities often doesn’t provide many backup options. It seems less risky to pursue historical research as an economist, since an economics PhD also gives you other good options.
How can you estimate your chance of success as a history academic? We haven’t looked into the fields relevant to history in particular, but some of our discussion of parallel questions for philosophy academia or academia in general may be useful.
Although we think technical AI safety research and AI policy are particularly impactful, we think having very talented people focused on safety and social impact at top AI labs may also be very valuable, even when they aren’t in technical or policy roles.
For example, you might be able to shift the culture around AI more toward safety and positive social impact by talking publicly about what your organization is doing to build safe and beneficial AI (example from DeepMind), helping recruit safety-minded researchers, designing internal processes to consider social impact issues more systematically in research, or helping different teams coordinate around safety-relevant projects.
We’re not sure which roles are best, but in general ones involved in strategy, ethics, or communications seem promising. Or you can pursue a role that makes an AI lab’s safety team more effective — like in operations or project management.
That said, it seems possible that some such roles could have a veneer of contributing to AI safety without doing much to head off bad outcomes. For this reason it seems particularly important here to continue to think critically and creatively about what kinds of work in this area are useful.
Some roles in this space may also provide strong career capital for working in AI policy by putting you in a position to learn about the work these labs are doing, as well as the strategic landscape in AI.
There is likely a lot of policy work with the potential to positively affect the long run future that doesn’t fit into either of our priority paths of AI policy or biorisk policy.
We aren’t sure what it might be best to ultimately aim for in policy outside these areas. But working in an area that is plausibly important for safeguarding the long-term future seems like a promising way of building knowledge and career capital so that you can judge later what policy interventions might be most promising for you to pursue.
Other ‘broad interventions’ for making governments generally better at navigating global challenges, e.g. promoting ‘approval voting’ (a form of voting reform).
Interventions aimed at giving the interests of future generations greater representation in governments, for example requiring ‘posterity impact statements’ for relevant legislation or creating specialized legislative committees whose purpose is to consider the effect of policies on future generations’ interests. Read more.
See our problem profiles page for more issues, some of which you might be able to help address through a policy-oriented career.
There is a spectrum of options for making progress on policy, ranging from research to work out which proposals make sense, to advocacy for specific proposals, to implementation. (See our write-up on government and policy careers for more on this topic.)
It seems likely to us that many lines of work within this broad area could be as impactful as our priority paths, but we haven’t investigated enough to be confident about the most promising options or the best routes in. We hope to be able to provide more specific guidance in this area in the future.
Some people may be extraordinarily productive compared to the average. (Read about this phenomenon in research careers.) But these people often have to spend much of their time on work that doesn’t take full advantage of their skills, such as bureaucratic and administrative tasks. This may be especially true for people who work in university settings, as many researchers do, but it is also often true of entrepreneurs, politicians, writers, and public intellectuals.
Acting as a personal assistant can dramatically increase these people’s impact. By supporting their day-to-day activities and freeing up more of their time for work that other people can’t do, you act as a ‘multiplier’ on their productivity. We think a highly talented personal assistant can make someone 10% more productive, or perhaps more, which is a bit like having a tenth (or more) of that person’s impact yourself. If you’re working for someone doing really valuable work, that’s a lot. In general, we want to emphasize that helping others have a greater positive impact than they would have had otherwise is an important and valid way to do good (for instance, that’s our strategy here at 80,000 Hours).
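The multiplier arithmetic here is simple, but worth making explicit. A toy calculation (the 10% figure is the rough estimate above, not a measured value):

```python
# Toy model of the 'multiplier' effect of a skilled personal assistant.
researcher_impact = 100          # impact per year, in arbitrary units
productivity_boost_pct = 10      # the assistant makes them ~10% more productive

# The extra output the assistant enables is attributable to the assistant.
assistant_impact = researcher_impact * productivity_boost_pct // 100
print(assistant_impact)  # 10 -> a tenth of the researcher's annual impact
```

The point of the model is that the assistant’s impact scales with the impact of the person they support, so the same work matters far more in service of someone doing unusually valuable work.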
Another related path is working in research management. Research managers help prioritize research projects within an institution and help coordinate research, fundraising, and communications to make the institution more impactful. Read more here. In general, being a PA or a research manager seems valuable for many of the same reasons working in operations management does — these coordinating and supporting roles are crucial for enabling researchers and others to have the biggest positive impact possible.
We’ve argued that because of China’s political, military, economic, and technological importance on the world stage, helping western organizations better understand and cooperate with Chinese actors might be highly impactful.
We think working with China represents a particularly promising path to impact. But a similar argument could be made for gaining expertise in other powerful nations, for example Russia or India. If you’re at the beginning of your career, it may even be valuable to think about which countries are most likely to be particularly influential in a few decades, and focus on gaining expertise there.
This is likely to be a better option for you if you are from or have spent a substantial amount of time in one of these countries. The best paths to impact here likely require deep understanding of the relevant cultures and institutions, as well as language fluency (e.g. at the level where you might be able to write a newspaper article about longtermism in the language).
If you are not from one of these countries, one way to get started might be to pursue area or language studies (one source of support available for US students is the Foreign Language and Area Studies scholarship programme), perhaps alongside economics or international relations. You could also start by working in policy in your home country and slowly concentrate more and more on issues related to the country you want to focus on, or try to work in philanthropy or directly on a top problem there.
There are likely many different promising options in this area, both for long-term career plans and useful next steps. Though they would of course have to be adapted to the local context, some of the options laid out in our article on becoming a specialist in China could have promising parallels in other national contexts as well.
There is a commonsense argument that if AI is an especially important technology, and hardware is an important input in the development and deployment of AI, specialists who understand AI hardware will have opportunities for impact — even if we can’t foresee exactly the form they will take.
Some ways hardware experts may be able to help positively shape the development of AI include:
More accurately forecasting progress in the capabilities of AI systems, for which hardware is a key and relatively quantifiable input.
Helping AI projects make credible commitments by allowing them to verifiably demonstrate the computational resources they’re using.
Helping advise and fulfill the hardware needs for safety-oriented AI labs.
These ideas are just examples of ways hardware specialists might be helpful. We haven’t looked into this area very much (though we do talk a bit about AI hardware as a path to impact at the end of our podcast episode with Danny Hernandez). So, we are pretty unsure about the merits of different approaches, which is why we’ve listed working in AI hardware here instead of as a part of the AI technical safety and policy priority paths. (See an example of how one person has explored this area.)
We also haven’t come across research laying out specific strategies in this area, so pursuing this path would likely mean both developing skills and experience in hardware and thinking creatively about opportunities to have an impact in the area. If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI.
Researchers at the Open Philanthropy Project have argued that better information security is likely to become increasingly important in the coming years. As powerful technologies like bioengineering and machine learning advance, improved security will likely be needed to protect these technologies from misuse, theft, or tampering. Moreover, the authors have found few security experts already in the field who focus on reducing catastrophic risks, and predict there will be high demand for them over the next 10 years.
In a recent podcast episode, Bruce Schneier also argued that applications of information security will become increasingly crucial, although he pushed back on the special importance of security for AI and biorisk in particular.
We would like to see more people investigating these issues and pursuing information security careers as a path to social impact. One option would be to try to work on security issues at a top AI lab, in which case the preparation might be similar to the preparation for AI safety work in general, but with a special focus on security. Another option would be to pursue a security career in government or a large tech company with the goal of eventually working on a project relevant to a particularly pressing area. In some cases we’ve heard it’s possible for people who start as engineers to train in information security at large tech companies that have significant security needs.
Compensation is usually higher in the private sector. But if you want to work eventually on classified projects, it may be better to pursue a public sector career as it may better prepare you to eventually earn a high level of security clearance.
There are certifications for information security, but it may be better to get started by investigating on your own the details of the systems you want to protect, and/or participating in public ‘capture the flag’ cybersecurity competitions. At the undergraduate level, it seems particularly helpful for many careers in this area to study CS and statistics.
Information security isn’t listed as a priority path because we haven’t spent much time investigating how people working in the area can best succeed and have a big positive impact. Still, we think there are likely to be exciting opportunities in the area, and if you’re interested in pursuing this career path, or already have experience in information security, we’d be interested to talk to you. Fill out this form, and we will get in touch if we come across opportunities that seem like a good fit for you.
Some people seem to have a very large positive impact by becoming public intellectuals and popularizing important ideas — often through writing books, giving talks or interviews, or writing blogs, columns, or open letters.
However, it’s probably even harder to become a successful and impactful public intellectual than a successful academic, since becoming a public intellectual often requires a degree of success within academia while also having excellent communication skills and spending significant time building a public profile. Thus this path seems to us to be especially competitive and a good fit for only a small number of people.
That said, this path seems like it could be extremely impactful for the right person. We think building awareness of certain global catastrophic risks, of the potential effects of our actions on the long-term future, or of effective altruism might be especially high value, as well as spreading positive values like concern for foreigners, nonhuman animals, future people, or others.
There are public intellectuals who are not academics, such as prominent bloggers, journalists, and authors. However, academia seems unusually well suited for producing public intellectuals: it requires you to become an expert in something, it trains you to write (a lot), and its high standards lend credibility to your opinions and work. For these reasons, if you are interested in pursuing this path, going into academia may be a good place to start.
Public intellectuals can come from a variety of disciplines — what they have in common is that they find ways to apply insights from their fields to issues that affect many people, and they communicate these insights effectively.
If you are an academic, experiment with spreading important ideas on a small scale through a blog, magazine, or podcast. If you share our priorities and are having some success with these experiments, we’d be especially interested in talking to you about your plans.
For the right person, becoming a journalist seems like it could be highly valuable for many of the same reasons being a public intellectual might be.
Good journalists keep the public informed and help positively shape public discourse by spreading accurate information on important topics. And although the news media tend to focus more on current events, journalists also often provide a platform for people and ideas that the public might not otherwise hear about.
However, this path is also very competitive, especially when it comes to the kinds of work that seem best for communicating important ideas (which are often complex), i.e., writing long-form articles or books, podcasts, and documentaries. And like being a public intellectual, it seems relatively easy to make things worse as a journalist by directing people’s attention in the wrong way, so this path may require especially good judgement about which projects to pursue and with what strategy. We therefore think journalism is likely to be a good fit for only a small number of people.
‘Proof assistants’ are programs used to construct machine-checked formal proofs: for example, that computer systems have various properties, such as being secure against certain cyberattacks. They can also help develop programs that are formally verifiable in this way.
Currently, proof assistants are not very highly developed, but the ability to create programs that can be formally verified to have important properties seems like it could be helpful for addressing a variety of issues, perhaps including AI safety and cybersecurity. So improving proof assistants seems like it could be very high-value.
For example, we might eventually be able to use proof assistants to generate programs for solving some sub-parts of the AI ‘alignment problem’. This would require that we be able to formally specify those sub-problems correctly, for which training in formal verification is plausibly useful.
We haven’t yet looked into formal verification much, but both further research in this area and applying existing techniques to important issues seem potentially promising to us. You can enter this path by studying formal verification at the undergraduate or graduate level, or by learning about it independently if you have a background in computer science. Jobs in this area exist both in industry and in academia.
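To give a flavour of what proof assistants do, here is a minimal sketch in Lean 4 (our own toy example, not drawn from any safety project): we define a trivial function and obtain a machine-checked guarantee that a property holds for every possible input.

```lean
-- A trivial function: double n = n + n.
def double (n : Nat) : Nat := n + n

-- A machine-checked proof that the output is always even.
-- Unlike testing, this covers *all* natural numbers at once.
theorem double_is_even (n : Nat) : double n % 2 = 0 := by
  unfold double
  omega
```

Real verification targets, such as operating system kernels or cryptographic protocols, are enormously more complex, but the kind of guarantee is the same: a proof that holds universally, rather than evidence from finitely many test cases.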
The effective altruism community seeks to support people trying to have a large positive impact. As a part of this community, we may have some bias here, but we think helping to build the community and make it more effective might be one way to do a lot of good. Moreover, unlike other paths on this list, it might be possible to do this part time while you also learn about other areas.
There are many ways of helping build and maintain the effective altruism community that don’t involve working within an effective altruism organisation, such as consulting for one of these organisations, providing legal advice, or helping effective altruist authors with book promotion.
We think these roles are particularly worth pursuing if you are very familiar with the effective altruism community, already have the relevant skills, and are keen to bring them to bear in a more impactful way.
If you can find a way to address a key bottleneck to progress in a pressing problem area which hasn’t been tried or isn’t being covered by an effective organisation, starting one of your own can be extremely valuable.
That said, this path seems to us to be particularly high-risk, which is why we don’t list it as a priority path. Most new organizations struggle, and non-profit entrepreneurship can often be even more difficult than for-profit entrepreneurship. Setting up a new organisation will also likely involve diverting resources from other organisations, which means it’s easier than it seems to set the area back. The risks are greater if you’re one of the first organizations in an area, as you could put off others from working on the issue, especially if you make poor progress (although this has to be balanced against the greater information value of exploring an uncharted area).
In general, we wouldn’t recommend starting off by aiming to set up a new organisation. Rather, we’d recommend first learning about and working within a pressing problem area; then, if through the course of that work you come across a gap that can’t be filled by an existing organisation, consider founding a new one. Organisations that develop more organically like this, driven by the needs of a specific problem area, usually seem much more promising.
There is far more to say about the question of whether to start a new organisation, and how to compare different non-profit ideas and other alternatives. A great deal depends on the details of your situation, making it hard for us to give general advice on the topic.
We suspect that the effectiveness of different approaches to mitigating climate change varies greatly, which means taking an effective altruist approach to climate change — and trying hard to focus on the most effective ways of working on the problem — could make a big difference.
We don’t have well-developed career advice in this area. But here are some rules of thumb for choosing approaches we think can help maximise your impact:
Focus on the most extreme risks where possible. As we argue in our problem profile on extreme climate change, the worse a potential effect of climate change would be, the more pressing it generally is to reduce its likelihood. This is especially clear from a longtermist perspective, because more extreme outcomes are disproportionately likely to contribute to existential risk. That said, many of the best interventions for reducing extreme climate change risks also reduce more widely anticipated risks, and may even be the best from that perspective as well, since reductions in greenhouse gas emissions are key regardless.
Pay attention to the best evidence on what kinds of interventions are the most cost effective in the long term.
This is not easy, as many people have strong opinions on what kinds of projects are important, and it can be difficult to sift through the variety of views. In doing so, here are some things to consider:
The majority of future energy demand will come from non-OECD countries, so solutions that aren’t geared toward those countries are unlikely to be most effective.
What’s most cost effective in the long term could well differ from what seems like the best deal now. For example, if some method for decarbonisation is cheap and in everyone’s interest, you might expect it to happen without your intervention, meaning it could be better to focus on something else.
Check out this talk to learn about more factors that shape what types of interventions are most cost effective.
Focus on more neglected strategies. If an approach or a research area has not yet been explored — like a new zero-emission technology, for example — you have a chance of enabling work that others have missed, and you’ll also gain valuable information about what works and what doesn’t, which you can share with others.
Look for leverage. Even a relatively small improvement in how others direct their resources at climate change will likely dwarf anything you could do entirely on your own, because those resources are so massive (government spending alone is in the hundreds of billions per year). This means it will probably be most effective to leverage these other resources. For example, if you help organise a grassroots movement, everyone who joins multiplies your effort. If your advocacy efforts succeed in influencing policy, they can affect billion-dollar budgets, which in turn affect the behaviour of private actors. Or, if you can improve the way the entire scientific community thinks about e.g. feedback loops or extreme risks, others can build on your work.
As we said above, we’re not sure which career paths are best in this area, but here are a few ideas:
Help build the field of research on extreme climate change risks — e.g. on the nature and likelihood of extreme feedback mechanisms, which are not currently included in the most influential climate models, or on any ways climate change might increase existential risks from other sources (a particularly understudied area). This might mean becoming a researcher yourself and working with an eye toward helping shift the scientific community’s attention toward the most important and neglected topics.
Another idea is to create a long-lasting philanthropic fund: investing resources now so they can be spent on top problems in the future. However, right now we have no way of effectively and securely investing resources over such long time periods. In particular, there are few if any financial vehicles that can be reliably expected to persist for more than 100 years and stay committed to their intended use, while also earning good investment returns. Figuring out how to set up and manage such a fund seems to us like it might be very worthwhile.
Founders Pledge — an organization that encourages effective giving for entrepreneurs — is currently exploring this idea and is actively seeking input. It seems likely that only a few people will be able to be involved in a project like this, as it’s not clear there will be room for multiple funds or a large staff. But for the right person we think this could be a great opportunity. Especially if you have a background in finance or relevant areas of law, this might be a promising path for you to explore.
Another option is to investigate a problem area that seems like it might be pressing but hasn’t received much attention. If the area still seems potentially promising once you’ve built up a background, you could take on a project or try to build up the relevant fields, for instance by setting up a conference or newsletter to help people working in the area coordinate better.
If, after investigating, working on the issue doesn’t seem particularly high impact, then you’ve helped to eliminate an option, saving others time.
If you have an idea for a novel approach to addressing one of our highest priority problems, it could also be high impact to explore that. But because our highest priority problems have been more researched, the value of information of exploring more within them is likely to be lower.
We can’t really recommend exploration of new issues as a priority path because it’s so amorphous and uncertain. It also generally requires unusual degrees of entrepreneurialism and creativity, since you may get less support in your work, especially early on, and it’s challenging to think of new projects and research ideas that provide useful information about the promise of a less explored area.
However, if you fit this profile (and especially if you have existing interest in and knowledge of the problem you want to explore), this path could be an excellent option for you. If you think it is, we’d like to hear from you. We may be able to help you decide whether this is a good option for you, and how to go about it.
What to do if you already have well-developed skills?
If you already have well-developed skills, your best bet is probably to work out how best to apply those skills to the most pressing issues, which may involve doing something different from the paths listed above.
It’s difficult for us to give sufficiently specialised advice through online content, and if you wanted to enter a different path, you’d likely need to speak to experts in the area. We have a few ideas for career paths we think make sense given some common categories of skills.
Tips for coming up with even more options
There are many ways to have a big impact beyond the paths listed above, and we need people coming up with creative ways to contribute. It’s likely we haven’t even thought of many of the best ways to have an impact.
We cover some other ways to help in our profiles on individual problems. We also cover some advice on how to analyse the bottlenecks to progress on different issues in order to generate options here. In our article on decision making we cover some prompts to help you think of more possibilities.
If you have an idea for making a major contribution that doesn’t fit into the categories above, we highly encourage you to investigate the opportunity, talk to relevant experts, and determine if it’s a good personal fit (covered below).
If you’re still unsure which long-term paths to focus on, then your best next step may be to invest in yourself so that you can directly work on these areas in the future. We cover ‘career capital’ later. Otherwise, you might want to switch to a different problem area that better matches your skills.
Make a difference in any career
While it’s not our focus at 80,000 Hours, you can contribute to solving pressing problems no matter what career you’re in.
Another option is to spread important ideas. If you can introduce one friend to a pressing global problem and they end up working on that issue, then you may have had as much impact as switching yourself. We talk more about the potential impact of social advocates in general here and in our podcast with Cass Sunstein. In doing advocacy, it’s important not to be pushy; the best way to introduce your friends to an issue is often just to be interested yourself, or talk about how you’ve changed your own behaviour.
We also think that political action targeted at key longtermist issues is likely to be a good use of time in some cases. Indeed, our analysis suggests that in some US states simply voting in presidential elections can have high return per hour.
Read our most important articles on which careers are high-impact:
We hope the earlier sections have given you some new ideas about which long-term career paths to aim towards. In this section, we cover a number of strategic considerations to think about when figuring out how to get into and eventually thrive in one of them.
We’ve added to this list of strategic considerations over time, and in some cases have shifted our views about their key implications. Note also that taking account of each of these strategic considerations is usually a matter of balance — for instance, some people put too much weight on personal fit, while others put too little, so you need to bear in mind where your bias likely lies.
Once you’ve identified some promising options, the next most important step is to find the option where you have the best chance of excelling over the course of your career — where you have your greatest ‘personal fit’.7
The productivity of different people within a field varies greatly, sometimes by ten or even 100-fold. This means that, to the extent that it’s predictable, personal fit is likely one of the most important factors in determining the expected impact of your career.8 Excelling at your work also boosts your career capital, giving you more options in the future.
Because personal fit is so important, we would almost never encourage you to pursue a career you dislike — you'd be unlikely to persist and therefore to excel in the long term. You can think of your degree of personal fit with an option as a multiplier on how promising that option is in general, such that total impact = (average impact of option) x (personal fit). This means that if you're an especially good fit for a path, it can be worth taking it over another option that's higher-impact on average. Likewise, it can also be worth pursuing a path that's only an average fit if it's unusually high-impact. The ideal, of course, is to have both.
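To make the multiplier concrete, here is a minimal sketch with entirely hypothetical numbers, showing how a strong personal fit can flip the ranking between a path that's higher-impact on average and one that suits you better:

```python
# Hypothetical illustration of: total impact = average impact x personal fit.
# The paths and numbers below are made up for the example.
options = {
    # path: (average_impact, personal_fit)
    "Path A (higher-impact on average)": (10, 0.5),
    "Path B (better personal fit)": (6, 1.5),
}

def expected_impact(avg, fit):
    # Personal fit acts as a multiplier on the option's average impact.
    return avg * fit

scores = {name: expected_impact(a, f) for name, (a, f) in options.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # Path B's strong fit outweighs Path A's higher average
```

With these illustrative numbers, Path B scores 9.0 against Path A's 5.0, despite Path A being higher-impact on average — which is the point of treating fit as a multiplier rather than a tiebreaker.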
Academic studies and common sense both suggest that while it’s possible to predict people’s performance in a path to some degree, it’s a difficult endeavour.9 What’s more, there’s not much reason to trust intuitive assessments, or career tests either.10
Instead, we think the best way to get an accurate read is to look objectively at your track record in similar paths and ask experts to candidly assess your chances. If you want to go more in-depth, sometimes you can find evidence-backed predictors and use those too (for instance, we discuss some predictors of performance in research here).
Beyond that, try to carry out small experiments to gain more information, such as a side-project, night study, internship, shadowing or work trial.11
Career decisions involve a huge amount of uncertainty. This means that rather than try to have an impact right away, it can be even better to try to learn about which options will be best for you in the long-term so you can make better decisions in the future — to gain ‘information value’.
To start, list your key uncertainties about your top long-term options, and then try to carry out low-cost tests to resolve these uncertainties. Start with the least costly ways to learn more, such as speaking to someone in the path, and then work towards more involved tests, such as making job applications, doing a side project, trying out the work for several months, or the other steps listed here.
If you remain uncertain about what to do after exhausting these tests (and you probably will), then you should see the next job you pursue as an experiment to gather more information.
The paths that have the most information value to try out are those that might be very high-impact, but where you’re very uncertain how they’ll work out.
More concretely, you can rank the long-term paths open to you in terms of how much impact you would have in them if you performed unusually well compared to your expectations (as opposed to your best guess at how you'll perform). We call your top-ranked path here your 'upside path', because it's the option that has the best upside scenario.
One reasonable strategy is to first test out your top upside path. If you find you’re indeed performing ahead of your expectations, then continue; otherwise, switch into the next best upside path. This strategy is attractive because there’s an asymmetry: in the good case, you have a big impact and continue doing so for many years; if it doesn’t work out, you can switch to something else. This is one reason it’s important not to be underconfident. The costs of spending a few months trying out a path are often low relative to the huge benefits if it turns out well.
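The heuristic described above can be sketched as follows — a toy illustration with hypothetical paths and numbers, showing that ranking by the upside scenario can give a different ordering than ranking by your best guess:

```python
# Hypothetical sketch of the 'upside path' heuristic: rank paths by their
# impact in the upside scenario, not by your best-guess impact.
# All names and numbers are invented for illustration.
paths = [
    # (name, best_guess_impact, upside_scenario_impact)
    ("Policy research", 5, 40),
    ("Earning to give", 8, 15),
    ("Nonprofit operations", 6, 20),
]

by_best_guess = sorted(paths, key=lambda p: p[1], reverse=True)
by_upside = sorted(paths, key=lambda p: p[2], reverse=True)

# The two rankings disagree: the best-guess leader isn't the upside leader.
top_upside = by_upside[0][0]
print("Try first:", top_upside)
```

In this example, "Earning to give" has the highest best-guess impact, but "Policy research" has the best upside scenario, so the heuristic says to test it first and switch if you underperform your expectations.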
However, there are two important caveats to keep in mind. Firstly, you have to make sure that you're in a sufficiently robust situation — personally, professionally and financially — that an experiment not working out won't do you considerable long-term harm. That means making sure you have a back-up plan and remain able to switch into something else. We discuss how to manage career risks below.
Secondly, you should be cautious about doing experiments that could significantly set back the field you’re working on. We cover the risk of accidentally causing harm next. You can learn more about the ‘upside path’ strategy in our podcast with Brian Christian.
Another difficulty is that your top upside path might well be something you haven’t even thought of yet. This could suggest exploring and testing out options well outside your normal experience.
If you’re very uncertain about which path to pursue longer-term, there are two other strategies to consider. One is to plan to try out several options, and then decide later which is best. This is generally easiest early in your career. For instance, a common pattern is to try internships while at college, then do something more unusual for 1-2 years after graduating, and then to go to graduate school in an area you might pursue long-term. Another strategy is to accumulate generally useful ‘career capital’, and decide what to do with it later. We come back to career capital below.
The earlier you are in your career, the more it’s worth focusing on exploration and testing. This is because at the start of your career, you have the most time remaining to take advantage of new options you discover, while also having the least information about what’s best. Additionally, the cost of exploring is lower earlier on: people expect young people to change jobs more often, and there are opportunities to try things out, like internships, which aren’t open to you when you’re older.
To have a big impact, we encourage people to work on neglected, high-stakes issues. Unfortunately these fields are often ‘fragile’ — it’s easy to accidentally make the situation worse rather than better.
For example, it’s hard to overcome first impressions, so if you run a publicity campaign about a new topic, you can make it harder to run a different campaign in the future. This is an example of ‘lock-in’. If you later discover a more effective way to frame the problem, then the first campaign may have had a negative impact.
These risks mean that before you focus on taking potentially more risky upside paths, it’s important to try to mitigate the chance of having a major downside. Building a deep understanding of your field, finding good mentors, and coordinating well with others can help — we cover more strategies in the full article.
Another strategic consideration is ‘career capital’ — the skills, connections, credentials, and financial resources that can help you have a bigger impact in the future.
Career capital is potentially a vital consideration, because people seem to become dramatically more productive over their career. Our impression is that most people have little impact in their first couple of jobs, while productivity in most fields seems to peak around ages 40-50.12 This suggests that by building the right career capital, you can greatly increase your impact, and that career capital should likely be one of your top considerations early in your career.
This leaves the difficult question of which options help you gain the best career capital, i.e. put you in the best position to take the high-impact roles addressing the world’s most pressing problems.
We’ve noticed that people often think the best way to gain career capital is doing prestigious jobs, such as consulting. We think consulting is a good option for career capital, but it’s rarely the most direct route into our priority paths.
You can see our write ups of individual priority paths for our thoughts on the best next steps to gain career capital within those paths. Some options that stand out as good for a variety of paths and also have reasonable back-up options are as follows:
Go to graduate school in a subject that provides a good balance of personal fit, relevance, and backup options, with the aim of working in policy or doing relevant research. We’d especially highlight graduate study in economics and machine learning, since they provide good career capital and have great backup options, but some other useful subjects include: security studies, international relations, public policy, relevant subfields in biology, and more. Start with a masters, and only add a PhD if planning to focus on research.
Work as a research assistant at a top think tank, aiming to specialise in a relevant area of policy, such as technology policy or security — especially if you’d be working under a good mentor.
Take other entry routes into policy careers, such as (in the US) certain congressional staffer positions, joining a congressional campaign, or working directly in certain executive branch positions. In the UK, the equivalent would be working in the civil service, or working for a politician.
Work at a top AI lab, including in certain non-technical roles. This potentially gives you similar career capital benefits to consulting, but with more relevance to AI. Certain jobs in ‘big tech’ can also be more attractive than consulting, especially if you can work in a relevant area or develop a useful skill-set, such as AI or information security.
Join a particularly promising startup, especially in the for-profit tech sector, though small and rapidly growing organisations in any sector are worth considering. Look for an organisation with impressive people, and ideally look for a role that lets you develop concrete skills that are needed (especially management, operations, entrepreneurial skills, general productivity, or generalist research). It’s also ideal to join an organisation working within a relevant area, such as AI or bioengineering. These roles potentially give you relevant skills and connections, and also give you the opportunity to advance quickly. If you can build a network of people working in startups, then you can also try to identify an organisation that’s unusually credible (e.g. backed by a range of impressive funders), and which might be on a breakout trajectory, which would give you the chance of further upside (e.g. reputation, money).
Do something that lets you learn about China, such as taking the steps listed here.
Work at a top non-profit or research institute in a top problem area, such as some of those we list in our recommended organisations.
Take any option where you might be able to have unusually impressive achievements. For instance, we came across someone who had a significant chance of landing a national-level TV show in India as a magician and was deciding between that and… consulting. It seemed to us that the magician path was more exciting, since the skills and connections within media would be more unusual and valuable for work on pressing problems than those of another consultant.
If you might be able to do something with significant positive impact in the next five years (such as founding a new non-profit), that can often be better because it’s not only impressive, it also gives you connections and skills that are highly relevant to solving the problem you’re working on.
One common theme in the above is that surprisingly often there is little tradeoff between getting career capital and taking the natural first step towards top long-term roles.
However, there can be some tradeoff between gaining ‘specialist’ career capital versus ‘transferable’ career capital. Transferable career capital is relevant in lots of different options, e.g. management skills which are needed by almost every organisation, or achievements which are widely recognised as impressive. Specialist career capital, like knowledge of and connections within a specific global problem, prepares you for a narrow range of paths, but is often necessary to enter the highest-impact options.
Our overall view is that the rewards to specialisation are often high enough to make the costs worth it, especially if you also maintain some back-up options while specialising.
However, specialising does increase the risk that if the situation changes, your skills could become less useful. If you’re early in your career and/or very uncertain about which long-term options have the best fit for you, it can be better to focus on transferable career capital and plan on specialising later. Indeed, there’s a chance the best option for you is something you haven’t even thought of yet, and gaining transferable career capital is the best way to prepare for that.
Later in your career, you’re more likely to have the option to take a job with immediate impact. At this point, how to trade career capital against immediate impact becomes a much harder issue to settle. For instance, if you could work as an AI safety engineer today, should you still do a PhD to try to open up research positions you think might be higher-impact?
If you do the PhD, not only do you give up the impact you would have had early on, you’re also delaying your impact further into the future. Most researchers on this topic agree that all else equal, it’s better to put resources towards fixing the world’s most pressing problems sooner rather than later. You might also give up on trying to have an impact in the meantime, and informal polls suggest the annual risk of this might be quite high. Finally, you gain some career capital from almost every reasonable option, so what matters is the extra career capital you get from the PhD compared to working as an AI safety engineer.
On the other hand, by making the right investments, it’s possible to increase your impact a great deal, so the tradeoff can go either way. Overall, we’re excited to see people take unusually good opportunities to gain career capital, especially those that open up specific paths that seem much higher-impact.
Career capital is also on average more important earlier in your career, since you’ll have more time to make use of your investments.
When thinking about career capital, most people focus on concrete credentials like the brand name of their employer or specific qualifications. However, career capital is anything that puts you in a better position to make a difference in the future, including general efforts at personal development.
The aim is to maximise your impact over an entire 40-year career, and many of the highest-impact positions take decades to reach. This means it’s vital to be on a path you can stick with. And that probably means doing something you enjoy, which lets you get enough sleep and exercise, build up enough savings, maintain rewarding relationships, and fulfil other important personal priorities. It also means realising you’re not perfectly moral and treating yourself with some self-compassion. Neglecting to take care of yourself seems to be one of the more common ways for talented people to fail to live up to their potential.
One area we’d especially like to highlight is mental health. Depending on the definition you use, between 10% and 50% of people in their twenties are dealing with some kind of mental health problem. If you’re suffering from one – be it anxiety, bipolar disorder, ADHD, depression or something else – then probably make dealing with or learning to work around it one of your top priorities. It’s one of the best investments you can ever make, both for your own sake and your ability to help others.
We know many people who took the time to make mental health their top priority and who, having found treatments and techniques that reduced or even eliminated their symptoms, have gone on to perform at the highest levels.
You’re likely to have much more impact if you work as part of a community, since communities enable their members to specialise, share knowledge, achieve economies of scale, and thereby have a greater collective impact. Being part of a community can also help you stay motivated and find support when the going gets tough.
You may find it useful to get involved with the effective altruism community, which we helped to set up. There are also many other great communities to get involved with, such as those focused on particular global problems like biosecurity and climate change.
When you’re working with a community, you need to use different rules of thumb to identify which options are best. Rather than trying to identify the single best option at the margin, it becomes more important to try to estimate the ideal allocation of available people within the community across opportunities, and how you can nudge the community towards that ideal allocation. We call this the ‘portfolio approach’. It also becomes important to think about your relative fit compared to other community members.
This can mean focusing on issues that seem less pressing, or that you’re less well-suited to, if they’re especially neglected by the rest of the community or you’re unusually well placed to take them on compared to others.
From a personal perspective, it makes sense to be risk-averse about most goals. Having ten times as much money won’t make you ten times happier, so it doesn’t make sense to bet everything on a 10% chance of increasing your income ten-fold. When it comes to helping others, however, this logic is much weaker: helping ten times as many people really is roughly ten times better, which suggests being closer to risk-neutral with your altruistic goals.
This is especially true if you have a small amount of resources compared to the needs of the problem area you’re working on, which means that ‘diminishing marginal returns’ won’t be a significant issue within what you allocate.
This reasoning doesn’t apply when you face the risk of significantly setting back your field (as opposed to failing to have an impact). We believe that it makes sense to be more cautious about taking on large risks of this kind, and we cover some advice on how to do that in the article on accidental harm.
We often find people who are keen to make sure they have some impact, and thus don’t pursue high-risk options even when they have higher expected value. Unfortunately, if the reasoning above is correct, this will often mean giving up the best opportunities to contribute.
We recommend clearly separating your personal goals from your altruistic goals. With your personal goals, it makes sense to try to reduce the risk you face.
However, once you’ve reduced your personal risk to an acceptable level, then you can pursue your impact-focused goals in a risk-neutral way, which means being open to high-risk high-reward options, and perhaps even seeking them out.
Here are some ways to manage career risks:
Analyse the specific downside scenarios you face. It’s easy to have a vague sense that an option is risky, but when you spell out a realistic worst-case scenario, it often doesn’t seem so bad. In doing this you might also realise there are straightforward things you can do to reduce the risks.
Create a ‘Plan Z’ — an option you can definitely pursue if all your other options don’t work out.
Consider eliminating paths that might cause you to burn out or become very dissatisfied (even if you take the steps above).
If you’re not in a good position to take risks right now, consider focusing on building transferable career capital and financial runway until you feel more comfortable pursuing higher-risk options.
We think there’s less tension between the two than is often supposed. Finding work you excel at and that helps others is fulfilling, and many of our readers say they’ve become happier in the process. Moreover, you’ll have a greater impact if you find work you enjoy and that fits with your personal life, because you’ll have a greater chance of excelling in the long term. So enjoying your work and having an impact are often mutually supportive goals.
This said, sometimes conflicts do arise. For instance, the higher-impact path may involve working harder than would be ideal for your happiness, or it can involve taking the risk of trying out several paths that don’t go anywhere. How to handle these conflicts is a difficult issue.
We may live in a uniquely important time in history, with the opportunity to influence the development of new technologies that could impact the long-term future and reduce existential risks. We also have many other opportunities to help others a great deal with comparatively little cost to ourselves. This motivates some of our readers to make impartially doing good the main focus of their careers. Some philosophers, such as Peter Singer, have argued that we have a moral obligation to do so.
However, most of our readers see ‘making a difference’ in the way we’ve outlined as one among several important career goals, which may include other moral aims, supporting a family, or furthering other personal projects.
Whatever your views on this topic, we think it’s important to take seriously the risk of burning out through too much self-sacrifice. Even if your only career goal were to make a difference, you should most likely aim to contribute over your entire 40-year career. This means it’s important to cultivate self-compassion and take a path in which you’ll be motivated for the long term, as we discussed earlier.
What’s more, one of the biggest ways to have more impact is to inspire others to contribute, and this is much easier when you’re enjoying your life and career.
One technique that can be helpful is setting a target for how much energy you want to invest in personal vs. altruistic goals. For instance, our co-founder Ben sees making a difference as the top goal for his career and forgoes 10% of his income. However, with the remaining 90% of his income, and most of his remaining non-work time, he does whatever makes him most personally happy. It’s not obvious this is the best tradeoff, but having an explicit decision means he doesn’t have to waste attention and emotional energy reassessing this choice every day, and can focus on the big picture.
But how do you actually plan your career so that you can help solve the world’s most pressing problems?
We’ve written a thorough step-by-step career planning process and template you can use to consider all the most important questions when planning out your career, and write an actionable plan.
At the end, you’ll also find advice and resources for successfully putting your plan into action.
The process is designed so anyone can use it, regardless of their career stage or which problems they think are the most pressing. But if you’re later in your career, you might end up wanting to go through some of the sections more quickly.
80,000 Hours is an independent nonprofit that is here to help you have a larger impact with your career. We’re building a community of people who focus their careers on addressing some of the world’s greatest challenges, and we hope you might join.
Enter a high-impact career
If you’re interested in working in one of our ‘priority paths’, or have other ideas about how to have a big impact on one of our top problem areas, our advising team might be able to speak with you one-on-one. They can help you consider your options, make connections with others working on these issues, and possibly even help you find jobs or funding opportunities.
Find ways to meet people interested in applying these ideas on our community page.
Our podcast features unusually in-depth conversations about the world’s most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for working on these issues and suggest concrete ways to help. We also have an episode discussing most of the ideas on this page.