In which career can you have the biggest positive impact on the world?
Initially, we were most associated with “earning to give” — taking a higher-paying career in order to donate more to charity, rather than working directly in a “social impact” job. But even at the time, we didn’t think it was typically the best option, and today we rarely think it’s where people should focus first.
Today, some of the paths we’re most often excited about include:
AI safety research in academia or industry.
Working in “effective altruist” and existential risk research non-profits.
AI and biorisk policy and government careers.
Global priorities research in academia.
Being a “China specialist”.
How have we arrived at these ideas?
In brief: we start with the list of global problems we think are most pressing, ask experts in each field what the key bottlenecks to progress are, and then try to find career paths that help address those bottlenecks.
However, there are many other high-impact options out there, and what’s best for you depends greatly on your personal circumstances, so in this article we:
Start by outlining a method anyone can use to generate a short-list of high-impact career options.
Apply this method, given our own view of global priorities and current knowledge, to produce a list of five broad categories of high-impact careers.
List some especially promising (but also highly competitive) “priority paths”, as well as a longer list of paths we’re less confident in, or are only a fit for a small number of people.
To generate a short-list of high-impact career options given your personal situation, you can use the following process:
Choose the 2-4 global problems you think are most pressing.
Identify the ‘key bottlenecks’ to progress within each of these.
Identify career paths that help address those bottlenecks.
Focus on the options where you have the best personal fit.
Make quantitative estimates of the impact of your top options.
Start to narrow down, or if there are no suitable options, go back to the start.
When it comes to specific options, right now we often recommend the following five key categories, which should produce at least one good option for almost all graduates:
Research in relevant areas
Government and policy relevant to top problem areas
Work at effective non-profits
Apply an unusual strength to a needed niche
Otherwise, earn to give
We’re especially excited about the following “priority paths”. They’re extremely competitive, but great options to test out if there’s a chance they’re a good fit. Use the table of contents to skip ahead and read a description of each path.
AI policy and strategy
AI safety technical research
Grantmaker focused on top problem areas
Work in effective altruist organisations
Global priorities researcher
Biorisk strategy and research
Earning to give in quantitative trading
Forecasting and related research and implementation
Six steps to generate high-impact career options
The best career path for you will depend on your values, strengths and situation, so the ideal approach is to generate your own list of promising options, rather than use a generic list.
In the first section, we’ll outline a process you can use to do this, focusing on the factors we think are most important. Later in the article we’ll apply the process (based on our list of problems and current knowledge) to come up with our five key categories and list of priority paths.
This process can be applied no matter your career stage – whether you’re an undergraduate or nearing retirement. If you already have significant experience in a skill, then also take a look at our advice by skill type.
The aim of the process is to give you ideas for options to aim for over the medium term. Elsewhere we cover how to further narrow down your shortlist in terms of your specific situation and personal fit. We wouldn’t usually encourage anyone to take a path where they have lower than average chances of success, or are likely to become unhappy.
1. Decide which global problems are most pressing
Should you focus on reducing climate change, improving education, lowering the chance of nuclear conflict, or another area entirely? The first stage is to come up with a short-list of 1-5 areas that you think are especially effective to work on.
This is perhaps the most important step, because we’ve argued some problem areas are 100 or even 1,000 times more pressing than those that people often focus on. By this we mean that we expect each unit of resources will lead to 100 times as much impact.1 So, if you start out focused on the wrong area, you might forgo over 99% of your impact.
Personally, we think the most important issues relate to improving the long-term future and reducing catastrophic risks, and include AI safety, biorisk, building the effective altruism community and global priorities research.
If you’d like to see more of our views on this question, see:
2. Identify the key bottlenecks within these problems
Each problem has different needs. For instance, we’d argue that developing a cure to cancer is not mainly held back by a lack of awareness, since everyone is already aware that cancer is a problem. Rather, it’s held back by a lack of progress on biomedical research, and that mainly requires talented biologists and funding.
We define the key bottleneck facing an area as the input it most needs to make progress.
More precisely, the biggest bottleneck is the input where, on average, one additional person working to supply it would yield the most progress towards solving the problem.
Again, we think there are large differences in the extent to which different inputs are bottlenecks, and so if you focus on the wrong one, you might give up most of your impact.
Some inputs we often consider include:
Funding – additional financial resources from donations or fundraising.
Insights – new ideas about how to solve the problem.
Awareness & support – how many people know and care about the issue, and how influential they are.
Political capital – the amount of political power that’s available for the issue.
Coordination – the extent to which existing resources effectively work together.
Community building – finding other people who want to work on the issue.
Logistics and operations – the extent to which programmes can be delivered at scale.
Leadership and management – the extent to which concrete plans can be formed and executed on using the resources already available.
One complication is that what’s most relevant is the bottleneck in the future period while you’re working, so you need to try to predict how the area will unfold.
It’s useful to try to be as specific as possible about the bottleneck. For instance, within AI safety, we think that a key bottleneck right now is more talented researchers. We think this is a key bottleneck because:
In the long-term, to address major risks from AI, we’ll need a flourishing, credible field of AI safety research that can come up with solutions to the alignment problem.
Right now, the field isn’t held back by funding, since several large funders and institutions are already willing to cover almost any good opportunities in the field (e.g. MILA, OpenAI, DeepMind, Open Philanthropy, BERI / CHAI).
We think the field is also not mainly held back by a lack of general awareness, since recent press coverage of AI risk has reached many people, and there’s a significant group of people who are concerned by the issues. (Though further high-quality outreach could be useful, since much of the existing coverage misrepresents the issue.) It’s also not held back by a lack of political capital, since we don’t yet know what policy changes would help.
Instead, we expect that what would most benefit the field is more talented researchers working on the issue, especially those who are able to publish well received papers on the topic.
More researchers would not only directly feed into progress on key research questions, but could also start a positive feedback loop: if more great academic papers on the topic were published, it would demonstrate that it’s a credible, tractable field, and attract even more researchers.
This is borne out when we talk to experts in the area. These experts often estimate the value of an additional promising technical AI safety researcher is equivalent to $1-$10m of extra funding per year — much more than most people could donate.
Here are some general guidelines on where the bottlenecks are likely to be depending on the stage:
Defining the field – Early on, what’s often needed is insight. First, insight is needed to work out that it’s an issue worth working on, and then it’s needed again to define the key issues in the field (a form of “disentanglement research”). Insight is needed a third time to work out what the best solutions to the problem are. This is the stage we’re at in AI strategy and policy.
Community and field building – After it’s clearer what the solutions are, you can start to build a community focused on the issue. Building a community is often more effective than trying to solve the issue directly because it lets you mobilise others, and achieve a “multiplier” on your efforts. This is usually best achieved through targeted advocacy early on — it’s best to avoid broad advocacy until you’ve better worked out the message, because it’s easy to get wrong and hard to unwind. You may also need some work to make concrete progress on the problem to show that progress is possible, helping to mobilise others. This is closer to the stage that AI technical safety is in.
Scaling up the best solutions – Once the low-hanging fruit in community building has been taken, it’s time to solve the problem through whichever means seem most effective. This might mean launching a research programme, an advocacy campaign, or the scale-up of a promising intervention. At this point the bottleneck will depend on what the best solution is. If the best solution is research, the bottleneck will eventually be insight; if it’s policy change, it will be political capital; if it’s rolling out an intervention, it might be logistics, funding or entrepreneurship. Global health seems to have a logistics bottleneck: one major challenge is to scale up existing evidence-backed treatments, such as malaria nets, which mainly requires money, logistics and management. An area can also get held up at an earlier stage. For instance, a field might ultimately need insight, but if it’s unable to attract researchers due to a lack of funding, then the funding bottleneck has to be overcome first.
Besides looking at the stage, you can also try to determine the key bottleneck by directly estimating which of the listed inputs are most needed at the margin. For this, it’s often useful to investigate which are most relatively neglected.
Before moving to the next step, aim to identify the 1-3 key bottlenecks in each of the top global problems you want to focus on. You can find some of our own assessments within our problem profiles.
3. Identify the career paths that best address these bottlenecks
In this stage, the aim is to generate specific career paths that help to resolve these bottlenecks.
Below, we list some general categories to consider. Try to think of at least one interesting concrete career option within each.
Direct work — find the best organisations addressing the problem, and work there. These are usually non-profits, but could also be for-profit organisations with a social mission. This could mean working in management, operations, outreach and using many other skills.
Entrepreneurship — help found a new organisation addressing a key bottleneck in the area.
Research — try to make progress on whichever research questions are most important in the area. This usually means seeking a PhD and aiming to work in academia, but there are also research options in non-profits, think tanks and companies.
Mass advocacy — make people more aware of the issue and how it can be solved, or encourage them to take action. This often means working in the media or a campaigning non-profit.
Targeted advocacy — as above, but focused on a niche audience. This could be done alongside another job, especially one that gives you access to influential people.
Government and policy — take a job in a relevant area of government / think tanks / party politics, then try to improve policy, or otherwise enable government to better manage the issue.
Don’t forget you can address bottlenecks either directly or indirectly. For instance, if insights are the key bottleneck, you could try to contribute directly by becoming a researcher, but you could also earn to give and fund researchers or do targeted advocacy to recruit more researchers.
Rather than contribute right away, another option is to invest in yourself (or a community) and explore in order to have a greater impact in the future. We will discuss whether or not to do this in a future article (for now, see our old article).
We’d roughly estimate that by focusing on the career paths that effectively address the key bottlenecks in an area, you can increase your impact about 2-5 times (compared to choosing randomly).2 This makes this stage significantly less important than your choice of area, but still important.
4. Focus on the career options with the best personal fit
As we argue in our main career guide, the most productive contributors in a complex job have 10 or even 100 times as much impact as the median. Not all of this difference is predictable, but even if it only partially is, then “personal fit” is still a key consideration. Among options that you might seriously consider, we’d roughly estimate that the best in terms of personal fit are 2-10 times better than the median.2
This means that although your choice of problem area is likely overall a more important factor than personal fit, once you have a list of options that are plausibly high-impact, then personal fit becomes the key consideration.
More technically, we define personal fit as:
Personal fit: how productive you expect to be in the job in the long-term compared to the average of others who typically take that job.
With this definition, then the total expected immediate impact of a role is:
Expected impact = (average impact of role) x (personal fit)
The “impact of role” depends on the other two factors — how pressing the problem is and how large a contribution the role makes to it on average. If we think of these as “effectiveness factors”, then we can write:
Expected impact = (avg pressingness of problem) x (avg effectiveness of method) x (personal fit)
This formula is the first element in our career framework. However, note that to make an all-considered comparison of options, you also need to consider career capital, job satisfaction, coordination with a community, and other personal factors. We’ll cover these factors in later articles.
We can now see that because the factors roughly multiply, balance is key. If an option is terrible on any dimension then you should probably eliminate it.
We can also see that if we can achieve an increase on each factor, the increases will multiply producing a very large total increase. For instance, if you can find a problem that’s 100-times more pressing, a method that’s 5-times more effective and an option that’s a 10-times better personal fit, then the total increase in impact would be 5000-times. However, in practice the factors will conflict. For instance, your best option for personal fit might not be in the best problem area. Still, we often think it’s possible to achieve a 10 to 100 times increase in impact depending on where you start.
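As a rough illustration, the multiplicative framework above can be sketched in a few lines of Python. All figures here are the hypothetical multipliers from the example, not real estimates:

```python
# Sketch of the multiplicative career-impact framework described above.
# All numbers are illustrative assumptions, not real estimates.

def expected_impact(problem_pressingness: float,
                    method_effectiveness: float,
                    personal_fit: float) -> float:
    """Expected impact = pressingness x method effectiveness x personal fit."""
    return problem_pressingness * method_effectiveness * personal_fit

# Baseline option: average on every factor.
baseline = expected_impact(1, 1, 1)

# Improved option: a 100x more pressing problem, a 5x more effective method,
# and a 10x better personal fit.
improved = expected_impact(100, 5, 10)

print(improved / baseline)  # 5000.0 — the factors multiply
```

The point of writing it this way is to see that a severe weakness on any single factor (a near-zero multiplier) drags the whole product towards zero, which is why balance across the factors matters.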
5. Consider making quantitative estimates of the value of different career options
The process we’ve covered so far is mainly based on a broad qualitative analysis, but it’s also useful to compare your outputs to some quantitative estimates, and this section will cover one way of doing that.
The first point is that we can think of different career paths as contributing different resources to global problems. We can then try to compare the value of these different contributions in a standard unit.
One unit we’ve used is dollars of donations to the problem.
You can measure the value of different contributions in dollars of donations by considering tradeoffs like the following:
Which makes a bigger contribution?
An additional researcher willing to work on the area (at a specific level of personal fit).
$X of donations per year.
The value of “X” at which you’re indifferent between the two is an estimate of the dollar value of this researcher’s contribution.
If you do this for each career option, then you’ll have a quantitative estimate of the value of your contribution to each area.
If you ask the question for a specific person (e.g. consider Jane working as a researcher), then the estimate should consider all of the factors we’ve listed above (key bottleneck, appropriate career path, personal fit).
Note as well that if you phrase the question appropriately, then it should also consider the “replaceability” of different staff, since we’re asking about the value of an additional researcher considering who else the organisation could hire instead.3
That said, the difficulty of taking account of all of these factors means that people’s estimates will be highly uncertain, so should be used with caution.
For instance, we did this for the effective altruism community as a whole in our 2017 talent survey, which found results like the following (averages weighted by org size): $12.8m ($4.1m excluding an outlier) and $7.6m ($3.6m excluding an outlier).
Unfortunately, these estimates will involve a huge amount of uncertainty and disagreement. But the presence of uncertainty doesn’t mean we shouldn’t at least try to make estimates. Rather, we should make the best estimates we can, while also bearing in mind that they could easily change, and combine quantitative analysis with (hopefully) more robust qualitative arguments. The process of making a quantitative estimate also helps to clarify and improve our reasoning.
Once you have the estimates of the value of your contribution to different areas, you need to combine them with your estimates of the relative effectiveness of working on different problem areas (or at different organisations). The product of the two gives you the expected value of each career path, i.e.:
Expected impact = (effectiveness of area) x (dollar value of contribution to area)
For instance, suppose you’re comparing two options:
Earning to give supporting factory farming charities.
Working in a non-profit focused on global health.
You determine the following (these are entirely hypothetical figures):
Each dollar contributed to opposing factory farming produces 3 units of value.
Each dollar contributed to global health produces 1 unit of value.
If you earn to give, you can donate $10,000 per year.
If you work at a global health non-profit, they’d value having you there, rather than their next best hire, as much as an additional $20,000 of donations each year.
Then, the expected value of the earning to give option is:
3 * 10,000 = 30,000 units
The expected value of the global health non-profit option is:
1 * 20,000 = 20,000 units
So, with these figures, earning to give to support factory farming charities comes out ahead.
However, given the huge uncertainties in these kinds of estimates, the factory farming option is only narrowly ahead, so this is close to a draw. Usually, we’d look for one option to be several times better before putting much weight on the estimate. When the differences are narrower, we’d mainly focus on more qualitative analysis (e.g. where you’ll have the best personal fit), or other factors that we will cover in later articles (e.g. career capital, value of information).
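The comparison above can be sketched in Python, using the same entirely hypothetical figures from the example:

```python
# Sketch of the expected-impact comparison above.
# All figures are the hypothetical ones from the example, not real estimates.

def option_value(units_per_dollar: float, dollar_contribution: float) -> float:
    """Expected impact = (effectiveness of area) x (dollar value of contribution)."""
    return units_per_dollar * dollar_contribution

# Earning to give: $10,000/year donated to factory farming charities,
# at an assumed 3 units of value per dollar.
etg = option_value(3, 10_000)

# Direct work: valued at the equivalent of $20,000/year of donations to
# global health, at an assumed 1 unit of value per dollar.
direct = option_value(1, 20_000)

print(etg, direct)  # 30000 20000
```

The gap here is only 1.5x, which is well within the noise of such estimates, so as the text notes we’d treat this as roughly a draw rather than a decisive result.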
6. Start to narrow down, or go back to the start
Summing up what we’ve covered, the highest-impact career option for you is the one that does best based on a combination of whether it:
Problem: Contributes to a pressing global problem (perhaps 100-fold increase in impact).
Method: Makes a large contribution to a key bottleneck in one of these problems (perhaps 2-5 fold increase).
Fit: Is an excellent personal fit (2-10 fold increase among reasonable options, maybe 100-fold increase among a wide sample of options).
You can analyse these factors both qualitatively and quantitatively.
Having applied the process, if you don’t have any plausible options at this point, then go back to the start and widen your search. In general, you can either focus on the same problem area, but consider less pressing bottlenecks (e.g. earn to give); or you can consider a wider range of problem areas. We don’t recommend aiming for something with below average personal fit, but you could aim for “good” rather than “excellent”.
If you have a reasonable short-list, then you could start to narrow down and make your plan.
In brief, this will involve the following steps, which we cover in more depth in our career guide:
Decide how highly to weigh gaining career capital and flexibility compared to immediate impact (read more).
Assess your long-term options in terms of impact, career capital, personal fit and comparative advantage, to work out which is best (read more about our framework and assessing your options).
Decide whether to commit to entering the option that seems best, or to do more to learn about which option is best. It may even be worth spending several years trying out different paths before revisiting your list (read more about how to try out your options).
If you decide to commit, work out the most effective next step to enter your top option.
Now we’ll cover what options we think are best if we apply the steps we’ve just covered.
Which career paths do we think are often highest-impact? Five key categories
If you take an effective altruism style approach to social impact, and roughly share our long-term focused view of global priorities, then we think the following five broad categories currently stand out as long-term options to aim at.
Try to think of at least one specific option that might be a good personal fit within each category. We’ve chosen these categories to be broad enough that there’s usually something for everyone.
We’ll start by listing the categories, and then we give more explanation of why we chose them.
Later, we list some narrower “priority paths” that cover the very highest-impact options we know.
Research in relevant areas
Many of the top problem areas we focus on are mainly constrained by a need for additional research, and we’ve argued that research seems like a high-impact path in general.
Following this path usually means pursuing graduate study in a relevant area where you have good personal fit, then aiming to do research relevant to a top problem area, or else supporting other researchers who are doing this.
Research is the most difficult to enter of the five categories, but it has big potential upsides, and in some disciplines, going to graduate school gives you useful career capital for the other four categories. This is one reason why if you might be a good fit for a research career, it’s often a good path to start with (though we still usually recommend exploring other options for 1-2 years before starting a PhD unless you’re highly confident you want to spend your career doing research in a particular area).
After your PhD, it’s hard to re-enter academia if you leave, so at this stage if you’re still in doubt it’s often best to continue within academia (although this is less true in certain disciplines, like machine learning, where much of the most cutting-edge research is done in industry). Eventually, however, it may well be best to do research in non-profits, corporations, governments and think tanks instead of academia, since this can sometimes let you focus more on the most practically relevant issues and might suit you better.
You can also support the work of other researchers in a complementary role, such as project manager, executive assistant, fundraiser, or operations staff. We’ve argued these roles are often neglected, and therefore especially high-impact. It’s often useful to have graduate training in the relevant area before taking these roles.
Some especially relevant areas to study include (not in order and not an exhaustive list): machine learning, neuroscience, statistics, economics / international relations / security studies / political science / public policy, synthetic biology / bioengineering / genetic engineering, China studies, and decision psychology. (See more on the question of what to study.)
Government and policy relevant to top problem areas
Government is often the most important force in addressing pressing global problems, and there are many positions that seem to offer a good network and a great deal of influence relative to how competitive they are.
In this category, we usually recommend that people aim to develop expertise in an area relevant to one of our priority problems and then take any government or policy job where you can help to improve policy relevant to that problem. Another option is to first develop policy relevant career capital (perhaps by working in a generalist policy job) and then use the skills and experience you’ve developed to work on a high-priority problem later in your career.
If you’re a U.S. citizen, working on U.S. federal policy can be particularly valuable because the U.S. federal government is so large and has so much influence over many of our priority problems. People whose career goal is to influence the U.S. federal government often switch between many different types of roles as they advance. In the U.S., many types of roles that can lead to a big impact on our priority problems fit into one of the following four categories. (We focus on the U.S. here because of its influence. We think working in policy can also be quite valuable in other countries, although the potential career paths look slightly different.)
Working in the executive branch, such as the Defense Department, the State Department, intelligence agencies, or the White House. We don’t yet have a review of executive branch careers, but our article on U.S. AI policy careers also makes a more general case for the promise of working in the U.S. federal government. (See also our profile on the UK civil service.) Note, though, that in the U.S., top executive branch officials are often hired from outside the traditional career civil service. So even if your goal is to eventually be a top executive branch official, the best path might include spending much of your career in other types of roles, including those we describe next (but also other roles, such as some in the private sector).
Working as a Congressional staffer. Congressional staffers can have a lot of influence over legislation, especially if they work on a committee relevant to one of our priority problems. It’s possible to achieve seniority and influence as a Congressional staffer surprisingly quickly. Our impression, though, is that the very top staffers often have graduate degrees, sometimes including degrees from top law schools. From this path it’s also common to move into the executive branch, or to seek elected office.
Working for a political campaign. We doubt that political campaign work is the highest-impact option in the long run, but if the candidate you work for wins, this can be a great way to get a high-impact staff position. For example, some of the top people who work on a winning presidential campaign eventually get high-impact positions in the White House or elsewhere in the executive branch. This is a high-risk strategy because it only pays off if your candidate wins and, even then, not everybody on the campaign staff will get influential jobs or jobs in the areas they care about. Running for office yourself involves a similar high-risk, high-reward dynamic.
Influencer positions outside of government, covering policy research and advocacy. For example, you might work at a think tank or a company interested in a relevant policy area. In a job like this, you might be able to develop original proposals for policy improvements, lobby for specific policies, influence the general conversation about a policy area, bring an area to the attention of policymakers, and so on. You can also often build expertise and connections that let you switch into the executive branch, a campaign, or other policy positions. For many areas of technical policy, especially AI policy, we’d particularly like to emphasise jobs in industry. Working at a top company in an industry can sometimes be the best career capital for policy positions relevant to that industry. In machine learning in particular, some of the best policy research is being done at industry labs, like OpenAI’s and DeepMind’s. Journalists can also be very influential, but our impression is that there is not as clear a path from working as a journalist to other policy jobs.
In the UK, the options are similar. One difference is that there is more separation between political careers and careers in the civil service (which is the equivalent of the executive branch). A second difference is that the U.K. Ministry of Defence has less power in government than the U.S. Defense Department does. This means that roles outside of national security are comparatively more influential in the U.K. than in the U.S. Read more in our profiles on UK civil service careers and UK party political careers. (Both are unfortunately somewhat out of date but still provide useful information).
People also often start policy careers by doing graduate studies in an area relevant to the type of policy they want to work on. In the US, it’s also common to enter from law school, a master of public policy, or a career in business.
Some especially relevant areas of policy expertise to gain and work within include: technology policy; security studies; international relations, especially China-West relations; and public health with a focus on pandemics and bioterrorism.
There are many government positions that require a wide range of skill types, so there should be some options in this category for nearly everyone. For instance, think tank roles involve more analytical skills (though more applied than the pure research pathway), while more political positions require relatively good social skills. Some positions are very intense and competitive, while many government positions offer reasonable work-life balance and some don’t have very tough entry conditions.
Work at effective non-profits
Although we suspect many non-profits don’t have much impact, there are still many great non-profits addressing pressing global issues, and they’re sometimes constrained by a lack of talent, which can make them a high-impact option.
One major advantage of non-profits is that they can tackle the issues that get most neglected by other actors, such as addressing market failures, carrying out research that doesn’t earn academic prestige, or doing political advocacy on behalf of disempowered groups such as animals or future generations.
To focus on this category, start by making a list of non-profits that address the top problem areas, have a large scale solution to that problem, and are well run. Then, consider any job where you might have great personal fit.
The top non-profits in an area are often very difficult to enter, but you can always expand your search to consider a wider range of organisations. These roles also cover a wide variety of skills, including outreach, management, operations, research, and others.
Apply an unusual strength to a needed niche
If you already have a strong existing skill set, is there a way to apply that to one of the key problems?
If there’s any option in which you might excel, it’s usually worth considering, both for the potential impact and especially for the career capital; excellence in one field can often give you opportunities in others.
This is even more likely if you’re part of a community that’s coordinating or working in a small field. Communities tend to need a small number of experts covering each of their main bases.
For instance, anthropology isn’t the field we’d most often recommend someone learn, but it turned out that during the Ebola crisis, anthropologists played a vital role, since they understood how burial practices might affect transmission and how to change them. So, the biorisk community needs at least a few people with anthropology expertise.
This means that if you have an existing skill set that covers a base for a community within a top area, it can be a promising option, even if it’s obscure.
However, there are limits to what can be made relevant. We struggle to think of a way to connect some subjects directly to the top problem areas, so sometimes it will be better to retrain rather than apply an existing skill.
If you have an unusual skill set, it’s hard for us to give general advice online about how best to use it. Ideally, speak to experts in the problem areas you want to work on about how it might be applied. For the problems we focus on, we have some rough ideas about how particular skill sets can be applied.
We think many of our readers can excel in roles in the four areas mentioned above, and we encourage you not to rule out these categories prematurely.
If you’re able to take a job where you earn more than you need, and you think none of the categories above are a great fit for you, we’d encourage you to consider earning to give. It’s also worth considering this option if you have an unusually good fit for a very high-earning career.
By donating to the most effective organisations in an area, just about anyone can have an impact on the world’s most pressing problems.
You may be able to take this a step further and ‘earn to give’ by aiming to earn more than you would have done otherwise and to donate some of this surplus effectively.
Not everyone wants to make a dramatic career change, or is well-suited to the narrow range of jobs that have the most impact on the most pressing global problems. However, by donating, anyone can support these top priorities, ‘convert’ their labour into labour working on the most pressing issues, and have a much bigger impact.
This can allow you to pursue your preferred career, while still contributing to pressing areas that require a specialised skill set like biosecurity or global priorities research.
For those who are an especially good fit with a higher-earning career (compared to the other paths), earning to give can be their highest-impact option. For instance, people who were earning to give provided early funding for many organisations we now think are high-impact, and some of those organisations could not have existed without this funding (including us!).
We list some of the highest-earning jobs available in a separate article, and for those with quantitative skills, we especially highlight quantitative trading. However, you can earn to give in any job that pays you more than you need to live comfortably.
When earning to give, it’s also important to pick a job with good personal fit, that doesn’t cause significant harm, and that builds career capital, particularly if you might want to transition into other high-impact options later on.
Considering both income and career capital leads us to favour jobs in high-performing organisations where you can develop skills that are useful in one of the other four categories, such as management or operations. Tech startups with 20-100 employees are often a good place to consider. Management consulting is another option.
Why do we especially highlight research, policy and non-profit jobs; deprioritise earning to give; and omit advocacy and entrepreneurship?
In brief, we think the problems on our list of top priorities (AI safety, biorisk, EA, GPR, nuclear security, institutional decision-making) are mainly constrained by research insights: either insights that directly solve the issue, or insights about policy solutions.
Our top problem areas are also complex, and it’s easy to make the situation worse rather than better through poorly informed actions or poor coordination.
This means that what’s most needed is people who can develop deep expertise in the area, generate relevant insights, and coordinate effectively. We call people able to do this “dedicated talent”. In contrast, it’s hard to contribute to these areas part-time or indirectly.
People have the best chances of contributing to the insights bottleneck by taking roles in academia, corporate research labs, government & policy positions, and relevant non-profits. They can also take complementary positions in the same sectors, such as in operations and management.
On the other hand, there is currently more funding available than is being spent in these areas, so earning to give doesn’t seem like the key bottleneck. It also often seems like you can raise more money by working as a fundraiser in a relevant organisation than you can by earning to give.
Advocacy can be useful if well targeted, but doesn’t typically seem like the top priority bottleneck because many of the key people are already aware of the issues (they’re instead constrained by a lack of solutions), we don’t have simple messages to promote, and advocacy often misfires.
Likewise, non-profit entrepreneurship can also easily have unintended consequences, so we prefer people to build up expertise first, and it’s difficult to see for-profit ways to contribute to many of these issues.5
These positions are both our own assessment and backed up by results of our surveys of community leaders about talent constraints, skill needs and key bottlenecks.
If you’re more focused on global health and factory farming, how do these recommendations change?
If you’re focused on global health or factory farming, then earning to give, for-profit work and advocacy become much more attractive; though, research, non-profit and government positions remain great choices.
Earning to give is more attractive because there are bigger funding gaps in these fields. GiveWell estimates their top charities will have a funding gap of about $200m in 2018, even after taking account of the donations they expect to cause, and this amount would be much higher if GiveDirectly were included.
Likewise, factory farming also seems to face greater funding constraints than our top problem areas, though Lewis Bollard of Open Philanthropy reports that these constraints have decreased significantly in the last few years, which again makes it relatively more important to work directly in the key non-profit organisations.
This said, we think that people focused on global health often underestimate options beyond earning to give, perhaps because their impact is less tangible.
Besides earning to give, mass advocacy is also relatively more attractive within these areas because you can promote a simple message like “donate here to end malaria”, “vote for the candidate that will increase foreign aid for health”, or “don’t eat factory farmed meat”. These messages are more robust, and allow many people to take useful action. This means that if you’re focused on these areas, then options like working in the media are more attractive.
In contrast, it’s much less clear how to run a successful mass outreach campaign about AI safety, or even about the idea of effective altruism. Past efforts to spread effective altruism have often led to it being simplified down to “individuals should donate to evidence-backed health charities”, which put off many of the people we’d most like to appeal to, because it seems oversimplified and leaves little role for researchers and policy experts (some of the people who are most needed).
Finally, building new for-profit companies is more attractive in these areas. For instance, developing the meat substitute industry seems like one of the best ways to end factory farming, because if meat substitutes can compete with factory farmed meat on both taste and cost, then they could drive factory farms out of business. This area is taking off right now, is being driven by for-profit companies, and is in large part constrained by people with business skill-sets able to lead and scale up companies (though also by research). To learn more, see our profile on factory farming and podcasts with Bruce Friedrich and Marie Gibbons.
Likewise, in international development, for-profit companies can play a role by creating products for the world’s poorest people, such as with SendWave or Segovia Technology.
In contrast, it’s not possible to set up companies that directly help people who don’t yet exist, and therefore make progress on long-term issues.
Our list of priority paths
In this section, we list some narrower career options that we think are among the highest-impact options we know. They’re mostly sub-options from the five key categories above that seem especially promising — focusing both on a top problem and a key bottleneck — and also offer reasonable career capital.
These options are also extremely competitive, and few jobs exist within each. So, they’re good options to strive for, but it’s important to have a back-up plan.
They’re also high-stakes, by which we mean it’s easy to accidentally cause harm within them. This is a further reason to check that you have good personal fit and the right expertise before going ahead.
The list is tentative. We’ve likely missed lots of good options, and the order could easily change, but it’s a starting point.
Note that we just sketch out the main arguments in favour of the option; counter-arguments are usually listed in the further reading.
If you’re aligned with our prioritisation, and want to pursue these options, then here’s a simplified version of a process to take.
Start by working through the following list (roughly in order) to try to determine which might be a reasonable personal fit.
Do more in-depth investigations over several months to work out which is best for you.
Then, try to pursue whichever option seems to have the best personal fit and comparative advantage. Try it for several years, and if you’re on track to succeed, continue, and otherwise switch to another path.
While doing this, make sure you have 1-3 solid back-up options to fall back on if your initial choice doesn’t work out.
Remember that we think personal fit is really important and we don’t encourage people to take options that would make them unhappy. But if one of these paths might be a good fit, it will usually be the top recommendation we’d make to you.
As we’ve argued, the next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including the possibility of catastrophic risks.
To manage these risks, one need is technical research into the design of safe AI systems (including the “alignment problem”), which we cover later. But in addition to the technical problems, there are many other important questions to address. These can be roughly categorised into the three key challenges of transformative AI strategy:
Ensuring broad sharing of the benefits from developing powerful AI systems, as opposed to letting AI’s development harm humanity or unduly concentrate power.
Avoiding exacerbating military competition or conflict caused by increasingly powerful AI systems.
Ensuring that the groups that develop AI are working together to develop and implement safety features.
To overcome these challenges, we need a community of experts who understand the intersection of modern AI systems and policy, and work together to mitigate long-term risks and ensure humanity reaps the benefits of advanced AI. These experts would broadly carry out two overlapping activities: (i) research – to develop strategy and policy proposals, and (ii) implementation – working together to put policy into practice.
Ultimately, we see these issues as no less important than the technical ones, but they are currently more neglected. Many of the top academic centres and AI companies have started to hire researchers working on technical AI safety, and there’s perhaps a community of 20-50 full-time researchers focused on the issue. However, there are only a handful of researchers focused on strategic issues or working in AI policy with a long-term perspective.
Note that there is already a significant amount of work being done on nearer-term issues in AI policy, such as the regulation of self-driving cars. What’s neglected is work on issues that are likely to arise as AI systems become substantially more powerful than those in existence today — so-called “transformative AI” — such as the three non-technical challenges outlined above.
Some examples of top jobs to work towards long-term in this path include the following, which fit a variety of skill types:
Work at top AI labs, such as DeepMind or OpenAI, especially in relevant policy team positions or other influential roles.
In academia, become a researcher at one of the institutes focused on long-term AI policy, especially the Future of Humanity Institute at Oxford, which already has several researchers working on these issues at the Center for the Governance of AI.
In party politics, aim to get an influential position, especially as an advisor with a focus on emerging technology policy (e.g. start as a staffer in Congress).
How to enter
In the first few years of this path, you’d focus on learning more about the issues and how government works, as well as meeting key people in the field, and doing research, rather than pushing for a specific proposal. AI policy and strategy is a deeply complicated area, and it’s easy to make things worse by accident (e.g. see the Unilateralist’s Curse).
Some common early career steps include:
Relevant graduate study. Some especially useful fields include international relations, strategic studies, machine learning, economics, law, public policy, and political science. Our top recommendation right now is machine learning if you can get into a top 10 school in computer science. Otherwise, our top recommendation tends to be: (i) law school if you can get into Yale or Harvard; (ii) international relations if you want to focus on research; and (iii) strategic studies if you want to focus on implementation. However, the best choice for you will also depend heavily on your personal fit and the particular schools you get into.
Working at a top AI company, especially DeepMind and OpenAI.
Any general entry-level government and policy positions (as listed earlier), which let you gain expertise and connections, such as think tank internships, being a researcher or staffer for a politician, joining a campaign, and government leadership schemes.
This field is at a very early stage of development, which creates multiple challenges. For one, the key questions have not been formalised, which creates a need for “disentanglement research” to enable other researchers to get traction. For another, there is a lack of mentors and positions, which can make it hard for people to break into the area.
Until recently, it’s been very hard to enter this path as a researcher unless you were able to become one of the top 30 or so people in the field relatively quickly. While mentors and open positions are still scarce, some top organisations have recently recruited junior and mid-career staff to serve as research assistants, analysts, and fellows. Our guess is that obtaining a research position will remain very competitive, but positions will continue to gradually open up. On the other hand, the field is still small enough for top researchers to make an especially big contribution by doing field-founding research.
If you’re not able to land a research position now, then you can either (i) continue to build up expertise and contribute to research when the field is more developed, or (ii) focus more on the policy positions, which could absorb hundreds of people.
Most of the first steps on this path also offer widely useful career capital. For instance, depending on the subarea you start in, you could often switch into other areas of policy, the application of AI to other social problems, operations, or earning to give. So, the risks of starting down this path if you may want to switch later are not too high.
Since this is one of our top priority paths, we have a specialist advisor, Niel Bowerman, who focuses on finding and helping people who want to enter it. He is especially focused on roles aimed at improving US AI public policy. If you would like advice, get in touch here.
Could this be a good fit for you?
One key question is whether you have a reasonable chance of getting some of the top jobs listed earlier.
The government and political positions require people with a well-rounded skill set, the ability to meet lots of people and maintain relationships, and the patience to work with a slow-moving bureaucracy. It’s also ideal if you’re a US citizen who might be able to get security clearance, and if you don’t have an unconventional past that could create problems should you choose to work in politically sensitive roles.
The more research-focused positions would typically require the ability to get into a top 10 grad school in a relevant area and deep interest in the issues. For instance, when you read about the issues, do you get ideas for new approaches to them? Read more about predicting fit in research.
Turning to other factors, you should only enter this path if you’re convinced of the importance of long-term AI safety. This path also requires making controversial decisions under huge uncertainty, so it’s important to have excellent judgement, caution and a willingness to work with others, or it would be easy to have an unintended negative impact. This is hard to judge, but you can get some information early on by seeing how well you’re able to work with others in the field.
However, if you can succeed in this area, then you have the opportunity to make a significant contribution to what might well be the most important issue of the next century.
As we’ve argued, the next few decades might see the development of powerful machine learning algorithms with the potential to transform society. This could have both huge upsides and downsides, including the possibility of existential risks.
Besides strategy and policy work discussed above, another key way to limit these risks is research into the technical challenges raised by powerful AI systems, such as the alignment problem. In short, how do we design powerful AI systems so they’ll do what we want, and not have unintended consequences?
This field of research has started to take off, and there are now major academic centres and AI labs where you can work on these issues, such as MILA in Montreal, FHI at Oxford, CHAI at Berkeley, DeepMind in London and OpenAI in San Francisco. We’ve advised over 100 people on this path, with several already working at the above institutions. The Machine Intelligence Research Institute, in Berkeley, has been working in this area for a long time and has an unconventional perspective and research agenda relative to the other labs.
There is plenty of funding available for talented researchers, including academic grants, and philanthropic donations from major grantmakers like Open Philanthropy. It’s also possible to get funding for your PhD programme. The main need of the field is more people capable of using this funding to carry out the research.
In this path, the aim is to get a position at one of the top AI safety research centres, either in industry, nonprofits or academia, and then try to work on the most pressing questions, with the eventual aim of becoming a research lead overseeing safety research.
Broadly, AI safety technical positions can be divided into (i) research and (ii) engineering. Researchers direct the research programme. Engineers create the systems and do the analysis needed to carry out the research. Although engineers have less influence over the high-level research goals, it can still be important that engineers are concerned about safety. This concern means they’ll better understand the ultimate goals of the research (and so prioritise better), be more motivated, shift the culture towards safety, and use the career capital they gain to benefit other safety projects in the future. This means that engineering can be a good alternative for those who don’t want to be a research scientist.
It can also be useful to have people who understand and are concerned by AI safety in AI research teams that aren’t directly focused on AI safety to help promote concern for safety in general, so this is another backup option. This is especially true if you can end up in a management position with some influence over the organisation’s priorities.
How to enter
The first step on this path is usually to pursue a PhD in machine learning at a good school. It’s possible to enter without a PhD, but it’s close to a requirement in research roles at the academic centres and DeepMind, which represent a large fraction of the best positions. A PhD in machine learning also opens up options in AI policy, applied AI and earning to give, so this path has good backup options.
However, if you want to pursue engineering over research, then the PhD is not necessary. Instead, you can do a masters programme or train up in industry.
It’s also possible to enter this path from neuroscience, especially computational neuroscience, so if you already have a background in that area you may not have to return to study. Recently, opportunities have also opened up for social scientists to contribute to AI safety (we plan to cover this in future work).
Could this be a good fit for you?
Might you have a shot at getting into a top 5 graduate school in machine learning? This is a reasonable proxy for whether you can get a job at a top AI research centre, though it’s not a requirement. Needless to say, these places are very academically demanding.
Are you convinced of the importance of long-term AI safety?
Are you a software or machine learning engineer who’s been able to get jobs at FAANG and other competitive companies? You may be able to train to enter a research position, or otherwise take an engineering position.
Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, sometimes have ideas for questions to look into, and can’t resist pursuing them? Read more about how to tell if you’re a good fit for working in research.
Open Philanthropy takes an effective altruism approach to advising philanthropists on where to give. It likely has over $10bn of committed funds from Dustin Moskovitz and Cari Tuna, and is aiming to advise other philanthropists. There are other “angel” donors in the community who could give in the $1-$10m range per year, but aren’t at their maximum level of giving. And we know a number of other billionaires who are interested in effective altruism and might want to start new foundations.
One reason why these donors don’t give more is a lack of concrete “shovel-ready” opportunities. This is partly due to a lack of qualified leaders able to run projects in the top problem areas (especially to found nonprofits working on research, policy and community building). But another reason is a lack of grantmakers able to vet these opportunities or generate new projects themselves. A randomly chosen new project in this area likely has little expected impact — since there’s some chance it helps and some chance it makes the situation worse — so it’s vital to have grantmakers able to distinguish good projects from the bad.
The skill of grantmaking involves being able to survey the opportunities available in an area, and come to reasonable judgements about their likelihood of success, and probable impact if they do succeed. Grantmakers also need to build a good network, both so they can identify opportunities early, and identify groups with good judgement and the right intentions.
In addition, grantmakers need to get into a position where they’re trusted by the major funders, and this requires having some kind of relevant track record.
All of this makes it incredibly difficult to become a grantmaker, especially early in your career. Open Philanthropy’s last hiring round for research analysts had hundreds of applicants, only 12 of whom got in-person trials, and five of whom received job offers.
However, the high stakes involved mean that if you are able to get into one of these positions, then you can have a huge impact. A small scale grantmaker might advise on where several million dollars of donations are given each year. Meanwhile, a grantmaker at a large foundation — typically called a “programme officer” or “programme director” — might oversee $5-$40m of grants per year.
Given the current situation, it’s likely that a significant fraction of the money a grantmaker oversees wouldn’t have been donated otherwise for at least several years, so they get good projects started sooner and may increase the total amount of giving by creating capacity before potential donors lose interest.
What’s more, by having more talented grantmakers, the money can be donated more effectively. If you can improve the effectiveness of $10m per year to a top problem area by 10%, that’s equivalent to donating about $1m yourself. This often seems achievable because the grantmakers have significant influence over where the funds go and there’s a lot of potential to do more detailed research than what currently exists.
Overall, we think top grantmakers working in effective altruism can create value equal to millions or even tens of millions of dollars per year in donations to top problem areas, making it one of the highest-impact positions right now.
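The back-of-the-envelope arithmetic above can be sketched as a quick calculation. The dollar figures and the 10% effectiveness gain are the article’s illustrative assumptions, not real grant data:

```python
# Rough sketch of the grantmaker value-added estimate in the text.
# All figures are illustrative assumptions, not real grant data.

def equivalent_donation(grants_overseen: float, effectiveness_gain: float) -> float:
    """Value a grantmaker adds per year, expressed as an equivalent direct donation.

    grants_overseen: total grants influenced per year, in dollars
    effectiveness_gain: fractional improvement in how well that money is spent
    """
    return grants_overseen * effectiveness_gain

# Improving the effectiveness of $10m/year of grants by 10%
# is roughly like donating $1m yourself.
print(f"${equivalent_donation(10_000_000, 0.10):,.0f}")
```

The same reasoning scales linearly: a programme officer overseeing $40m per year at the same 10% improvement would be adding value equivalent to roughly $4m in direct donations.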
Finally, these positions offer good career capital because you’ll make lots of important connections within the top problem areas. This creates opportunities to exit into direct work. Another exit option is government and policy. Or you could switch into operations or management, and have an impact by enabling other grantmakers to be more effective.
One related path is to work as a grantmaker in a foundation that doesn’t explicitly identify with effective altruism, in order to help bring in an effective altruism perspective. The advantage of this path is that it might be easier to add value. However, the downside is that most foundations are not willing to change their focus areas, and we think choice of focus area is the most important decision. Existing foundations also often require significant experience in the area, and sometimes it’s not possible to work from junior positions up to programme officer.
Another related path is philanthropic advising. One advantage of this path is that you can pursue it part-time to build a track record. This also means you could combine it with earning to give and donating your own money, or with advocacy positions that might let you meet potential philanthropists. We’ve seen several people give informal advice to philanthropists, or be given regranting funds to donate on their behalf.
A third related path is to work at a government agency that funds relevant research, such as IARPA, DARPA and NIH. Grantmakers in these agencies often oversee even larger pools of funding, but you’ll face more restrictions on where it can go. They also often require a PhD.
How to enter
One entry route is to take a junior position at one of these foundations (e.g. research analyst), then work your way up to being a grantmaker. We think the best place to work if you’re taking this path is Open Philanthropy (disclaimer: we’ve received grants from them). Founders Pledge also has a philanthropic advising team, though it has less of a track record and is less focused on long-term focused problem areas. You could also consider research positions at other effective altruism organisations — wherever will let you build a track record of this kind of research (e.g. the Future of Humanity Institute).
Another key step is to build up a track record of grant making. You could start by writing up your thoughts about where to give your own money on the effective altruism forum. From there, it might be possible to start doing part-time philanthropic advice, and then work up to joining a foundation or having a regranting pool (funds given to you by another donor to allocate).
A third option is to pursue work in the problem area where you want to make grants, perhaps in nonprofits, policy or research, in order to build up expertise and connections in the area. This is the usual route into grantmaking roles. For instance, Open Philanthropy hired Lewis Bollard to work on factory farming grants after he was Policy Advisor & International Liaison to the CEO at The Humane Society of the United States, one of the leading organisations in the area.
Could this be a good fit for you?
This position requires a well-rounded skill set. You need to be analytical, but also able to meet and build relationships with lots of people in your problem area of focus.
Like AI policy, it requires excellent judgement.
Some indications of potential: Do you sometimes have ideas for grants others haven’t thought of, or only came to support later? Do you think you could persuade a major funder of a new donation opportunity? Can you clearly explain the reasons you hold particular views, and their biggest weaknesses? Could you develop expertise and strong relationships with the most important actors in a top problem area? Could you go to graduate school in a relevant area at a top 20 school? (This isn’t needed, but is an indication of analytical ability.)
Note that working as support or research staff for an effective grantmaker is also high-impact, so that’s a good backup option.
We think building the effective altruism community is a promising way to build capacity to address pressing global problems in the future. The community seems able to grow a great deal, and it contains people who are willing to switch areas to work on whichever issues turn out to be most urgent, so it is robust to changes in priorities.
We realise this seems self-promotional, since we ourselves run an effective altruism organisation. However, if we didn’t recommend what we ourselves do, then we’d be contradicting ourselves. We also wouldn’t want everyone to work on this area, since then we’d only build a community and never do anything. But we think recommending it as one path among about ten makes sense.
A key way to contribute to building the effective altruism community is to take a job at one of the organisations in the community — see a list of organisations. Many of these organisations have a solid track record, are growing and have significant funding, so a big bottleneck is finding staff who are a good fit.
An additional staff member can often grow the community by several additional people each year, achieving a multiplier on their effort. And these organisations also do other useful work, such as research, fundraising and providing community infrastructure.
These roles let you develop expertise in effective altruism, top global problem areas and running startup nonprofits. They put you at the heart of the effective altruism movement and long-term future communities, letting you build a great network there. Many of the organisations also put a lot of emphasis on personal development.
However, it’s also important to bear in mind that these roles require a specific type of person: someone who has strong skills that are needed by the organisations (which often require a style of research and reasoning that isn’t common elsewhere), a good fit with the specific team, and deep engagement in effective altruism. Many of the organisations are also management constrained, which raises the bar for getting hired — your application may need to demonstrate a high likelihood of excelling with relatively little supervision almost immediately.
This means if you are a good fit for one of these roles, then you probably won’t be significantly replaceable, and taking the role can be very high-impact.
However, it also means that most people are not a good fit for most of these roles. This means that unless you have strong evidence otherwise, you shouldn’t expect to have more than a couple of percent chance of landing a specific job. Given that there are usually not many jobs available within these organisations at a given time, you shouldn’t have ‘work at EA orgs’ as the only category of jobs you pursue. You should also apply to another category, such as policy positions or something to build career capital.
There are a variety of roles available, broadly categorised into the following:
Management, operations and administration – e.g. hiring staff, setting strategy, creating internal processes, setting budgets.
Research and advice – e.g. developing the ideas of effective altruism, writing and talking about them.
Outreach, marketing and community – e.g. running social media accounts, content marketing, visual design, moderating forums, market research, responding to the media, helping people in the community.
Systems and engineering – e.g. web engineering, data capture and analysis, web design, creating internal tools.
We’d especially like to highlight roles in operations management, since organisations in the community have a significant need for them, yet these roles often get neglected, perhaps because they’re seen as less glamorous. Another common assumption is that these roles are easy to enter, which would make the people in them more replaceable. Our view, however, is that operations management jobs are both essential and difficult, and require people who make them the main focus of their career. Read more in our full article about operations management.
How to enter
To enter these roles, you can apply directly to the organisations. Organisations often hire people who are already involved in the community, because commitment to and knowledge of the community are a requirement for many jobs and because it’s easier to evaluate a candidate if you already know their work. This means that if you want to aim towards these positions, the most important step is to start meeting people in the community, and doing small projects to build your reputation (e.g. writing on the forum, volunteering at EA Global, starting a local group, or doing freelance consulting for an organisation). We list more advice in our full profile.
As mentioned, because these positions are scarce, almost nobody can count on getting one. This means you should make sure you’re acquiring career capital that would be relevant to other paths (e.g. a full-time job or graduate school) at the same time as you’re building your reputation within effective altruism. It’s usually not a good idea to commit to this path or build plans that depend on getting one of these jobs before you’ve gotten an offer.
If you want to get a job that puts you in a better position to enter these roles in the future, then do something that lets you develop a concrete skill that’s relevant to one of the role types listed above. Well-run tech startups with 10-100 people are often a good place to learn these skills in a similar context. Alternatively, some effective altruism organisations frequently hire people from our other priority paths. Excelling in any of those paths is a great way to better position yourself for a job at an effective altruism organisation and could be equally or more impactful on its own.
Could this be a good fit for you?
Whether you might be a good fit in part depends on the type of role you’re going for. However, there are some common characteristics the organisations typically look for:
A track record that demonstrates intelligence and an ability to work hard.
Evidence of deep interest in effective altruism — for some roles you need to be happy to talk about it much of the day. This breaks down into a focus on social impact and a scientific mindset, as well as knowledge of the community.
Flexibility and independence – the organisations are relatively small, so staff need to be happy to work on lots of different projects with less structure.
It’s not a requirement, but it seems to be becoming difficult to get most of these jobs without several years of experience in a relevant skill.
We’ve argued that one of the most important priorities is working out what the priorities should be. There’s a huge amount that’s not known about how to do the most good, and although this is one of the most important questions someone could ask, it has received little systematic study.
The study of which actions do the most good is especially neglected if you take a long-term perspective, in which what matters most is the effects of our actions on future generations. This position has only recently been explored, and we know little about its practical implications. Given this, we could easily see our current perspective on global priorities shifting with more research, so these questions have practical significance.
The study of how to help others is also especially neglected from a high-level perspective. People have done significant work on questions like “how can we reduce climate change”, but much less on the question “how pressing is climate change compared to health?” and “what methods should we use to make that comparison?”. It’s these high-level questions we especially want to see addressed.
We call the study of high-level questions about how best to help others “global priorities research”. It’s primarily a combination of moral philosophy and economics, but it also draws on decision theory, decision-making psychology, moral psychology, and a wide variety of other disciplines, especially those concerning technology and public policy. You can see a research agenda produced by the Global Priorities Institute at Oxford.
We’d like to see global priorities research turned into a flourishing field, both within and outside of academia.
To make this happen, perhaps the biggest need right now is to find more researchers able to make progress on the key questions of the field. There is already enough funding available to hire more people if they could demonstrate potential in the area (though there’s a greater need for funding than with AI safety). Demonstrating potential is hard, especially because the field is even more nascent than AI safety, resulting in a lack of mentorship. However, if you are able to enter, then it’s extremely high-impact — you might help define a whole new discipline.
Another bottleneck to progress on global priorities research might be operations staff, as discussed earlier, so that’s another option to consider if you want to work on this issue.
You can broadly pursue this path either in academia or nonprofits.
We think building this field within academia is a vital goal, because if it becomes accepted there, then it will attract the attention of hundreds of other researchers.
The only major academic centre currently focused on this research is the Global Priorities Institute at Oxford (GPI), so if you want to pursue this path as an academic, that’s one of the top places to work. One problem is that GPI only has a couple of open positions, and you’d usually need a top academic background in philosophy or economics to get one (e.g. doing well in a PhD from a top 10 school in your subject is a good sign). Positions are especially competitive in philosophy.
A new organisation called the Forethought Foundation for Global Priorities Research offers scholarships and fellowships to students in global priorities research as well as research grants for established scholars. We expect you’ll need a top background in philosophy or economics to get one of these (e.g. an undergrad who could get into a top 10 philosophy PhD programme or a top 10-20 economics PhD programme; a grad student attending one of those programmes; or a postdoc or academic who graduated from – or teaches at – one of those programmes).
That said, we expect that other centres will be established over the coming years. In the meantime, you could try to build expertise. For instance, doing an economics PhD (and postdoc) opens up lots of other options, so is a reasonable path to pursue even if you’re not sure that global priorities research is a good fit for you. It’s also important to have academics doing global priorities research (and potentially collaborating with GPI) at other universities.
One downside of academia, however, is that you need to work on topics that are publishable, and these are often not those that are most relevant to real decisions. This means it’s also important to have researchers working elsewhere on more practical questions.
We think the leading applied centre working on this research is Open Philanthropy. One advantage of working there is that your findings will directly feed into how billions of dollars are spent (disclaimer: we have received grants from them). However, you can also pursue this research at other effective altruism organisations. 80,000 Hours, for instance, does a form of applied global priorities research focused on career strategy.
How to enter?
The best entry route to the academic end of the field is a PhD in economics or philosophy. This is both because PhDs provide useful training and because they’re required for most academic positions. Currently, economists are in shorter supply than philosophers, and economics also gives you better back-up options, so it’s preferable if you have the choice.
It’s also possible to enter from other disciplines. A number of people in the field have backgrounds in maths, computer science and physics. Psychology is perhaps the next most relevant subject, especially the areas around decision-making psychology and moral psychology. The field also crosses into AI and emerging technology strategy, so the options we listed in the earlier sections are also relevant, as well as knowledge of relevant areas of science. Finally, as the field develops there will be more demand for people with a policy focus, who might have studied political science, international relations, or security studies. In general, this is a position where wide general knowledge is more useful than most.
With the non-academic positions, a PhD isn’t necessary, but you do ideally need to find a way to demonstrate potential in this kind of research. It’s useful to develop skills in clear writing and basic quantitative analysis. Sometimes people enter the non-academic roles directly from undergrad if they’re sufficiently talented.
Could this be a good fit for you?
Might you be able to get into a PhD in economics or philosophy at a top 10 school? (This isn’t to say this qualification is required, it’s just that if you would be able to succeed in such a path, it’s an indicator of ability.)
Do you have excellent judgement? By this we mean, can you take on messy, ill-defined questions, and come up with reasonable assessments about them? This is not required in all roles, but it is especially useful right now given the nascent nature of the field and nature of the questions that are ultimately being addressed.
Do you have general knowledge or an interest in a wide range of academic disciplines?
Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, and sometimes have ideas for questions to look into? Are you able to work independently for many days at a time? Are you able to stick with or lead a research project over many years? Read more about predicting success in research.
There is already a significant community working on pandemic prevention, and there are many ways to contribute to this field. However, most of the existing work is focused on naturally occurring pandemics like those we’ve seen in the past, including COVID-19 (though this is starting to change a bit). While these are very important to mitigate, we think it’s even more important to prevent pandemics that pose catastrophic risks, especially those that might totally end human civilisation. There is substantial overlap between work that mitigates these known pandemic risks and work on more extreme risks, so work on one is also helpful for the other; still, work particularly focused on the extreme risks seems somewhat neglected in the field right now.
For reasons our profile explains, catastrophic pandemics seem more likely to be human-caused, and perhaps even deliberately caused, so they may be better targeted by security and biodefence interventions than by conventional public health ones. Moreover, much past funding for work on bioterrorism seems to have focused on better-known risks such as anthrax, which doesn’t pose a catastrophic risk.
This means that despite significant existing work on pandemic prevention, global catastrophic biological risks seem neglected.
We rate biorisk as a less pressing issue than AI safety, mainly because we think biorisks are less likely to be truly existential, and AI seems more likely to play a key role in shaping the long-term future in other ways. However, working to prevent catastrophic pandemics seems very high value to us, and can easily be your top option if you have a comparative advantage in this path (e.g., a background in medicine).
We can roughly divide this path into working in government and related organizations on the one hand, and working in research on the other.
The main line of defence against these risks is government, so it’s valuable to build up a community of experts in relevant areas of national government and intergovernmental organisations. These include:
The US Centers for Disease Control
The World Health Organization
The European Centre for Disease Prevention and Control
Another option is to work in academia. This involves developing a relevant area of expertise, such as synthetic biology, genetics, public health, epidemiology, international relations, security studies, or political science. Note that it’s possible—and at times beneficial—to start by studying a quantitative subject (sometimes even to graduate level), and then switch into biology later. Quantitative skills are in demand in biology and give you better back-up options.
Once you’ve completed training, you could do a number of things—including but not limited to: research on directly useful technical questions (such as how to create broad-spectrum diagnostics or rapidly deploy vaccines), research on strategic questions (such as how dangerous technologies should be controlled), or advising for policy-makers and other groups on the relevant issues. One top research centre you could aim to work at is the Center for International Security and Cooperation at Stanford.
As with AI strategy, the study of global catastrophic biological risk is still a nascent field. This again can make it hard to contribute, since—although progress is being made—we don’t yet know which research questions are most important, and there is often a shortage of mentorship.
This means that there’s an especially pressing need for more “field building” or “disentanglement” research, with the aim of defining the field. If you might be able to do this kind of work, then your contribution is especially valuable since you can unlock the efforts of other researchers. The main home for most of this kind of research with a long-term focus right now is the Future of Humanity Institute in Oxford.
If you’re not able to contribute to disentanglement research right now, there are several other things you can do, including: (i) tackle more straightforward relevant research questions, (ii) work in more mainstream biorisk organisations to build up expertise, (iii) focus on policy positions with the aim of building a community and expertise, or (iv) become an expert on a relevant area of biology, international relations, or a related field.
One advantage of working on biorisk is that many of the top positions seem somewhat less competitive than in AI technical safety work, because they don’t require world-class quantitative skills.
Besides pandemic risks, we’re also interested in how to safely manage the introduction of other potentially transformative discoveries in biology which could be used to fundamentally alter human characteristics and values (such as genetic engineering) and anti-ageing research. We see these issues as somewhat less pressing than the possibility of engineered pandemics, but they provide another reason to develop expertise in these areas.
Often the way to enter this path is to pursue relevant graduate studies (such as in the subjects listed above) because this takes you along the academic path, and is also helpful in the policy path, where many positions require graduate study. Alternatively, you can try to directly enter relevant jobs in government, international organisations, or nonprofits, and build expertise on the job.
The backup options for this path depend on what expertise you have, but they include other options in the policy realm—it’s usually possible to switch your focus within a policy career. You could also work on adjacent research questions that also have the potential to make a positive difference, such as in global health, ageing, or genetics. These backup options seem generally attractive, though somewhat less promising and more competitive than the ones made available by pursuing AI safety policy or technical research (which is one reason we rank this path a bit lower).
Could this be a good fit for you?
Are you deeply concerned with reducing catastrophic risks, and especially extinction risks?
Do you have reasonably strong quantitative skills? (They don’t need to be as strong as those required for AI fields.)
Do you already have experience in a research area relevant to biology (such as those listed above)?
Might you be capable of getting a PhD from a top 30 school in one of these areas? This isn’t required but is a good indicator. Read more about predicting success in research.
If focused on field-building research, can you take on messy, ill-defined questions, and come up with reasonable assessments about them?
If focused on policy, might you be capable of getting and being satisfied in a relevant position in government? In policy, it’s useful to have relatively stronger social skills, such as being happy to speak to people all day, and being able to maintain a robust professional network. Policy careers also require patience in working with large bureaucracies, and sometimes also involve facing public scrutiny.
China will play a role in solving many of the biggest challenges of the next century, including those posed by emerging technologies and global catastrophic risks, and it is one of the most influential countries in AI development and deployment.
However, it often seems like there’s a lack of understanding and coordination between China and the West. For instance, even today, 3 times as many Americans study French as study Chinese. For this reason, we’d like to see more people (both Chinese and non-Chinese) develop a deep understanding of the intersection of effective altruism and China, and help to coordinate between the two countries.
In particular, we want people to learn about the aspects of China most relevant to our top problem areas, which means topics like artificial intelligence, international relations, pandemic response, bioengineering, political science, and so on. China is also crucial in improving farm animal welfare, though we currently rate this as a lower priority.
More concretely, this could mean options like:
Graduate study in a relevant area, such as machine learning or synthetic biology; economics, international relations, or security studies with a focus on China or emerging technology; or Chinese language, history and politics.
If possible, a prestigious China-based fellowship, such as the Schwarzman Scholarship programme or Yenching Scholars, is a great option.
Research at a think tank or academic institute focused on these topics.
Work at a Chinese technology company.
If you are a foreigner, learn Chinese in China, or find another option that lets you live there.
If you are Chinese, work at an international effective altruism organisation.
Work at an influential philanthropic foundation.
Once you have this expertise, you could aim to contribute to AI and biorisk strategy & policy questions that involve China. You could also advise and assist international organisations that want to coordinate with Chinese organisations. You might also directly work for Chinese organisations that are concerned with these problem areas.
Note that we’re not in favour of promoting effective altruism in China, working in the government, or attempting to fundraise from Chinese philanthropists. This could easily backfire if the message was poorly framed or if its intent was misperceived in China.
Rather, the aim of this path is to learn more about China, and then aim to improve cooperation between international and Chinese groups. If you are considering doing outreach in China, get in touch and we can introduce you to people who can help you navigate the downsides.
To help with this priority path, we work with a part-time advisor who’s a specialist in China. You can get help from them here.
Could this be a good fit for you?
Do you already have knowledge of China? If not, could you see yourself becoming interested in Chinese politics, economy, culture, and so on, and also being involved in the effective altruism community?
Compared to other options on the list, this path requires a more humanities-oriented skill set (e.g. understanding international relations and cross-cultural differences) rather than a scientific one.
Otherwise the skill set required is fairly similar to the AI strategy and policy path earlier.
If you can find a job you have a good fit with, it seems like it’s usually possible to make a larger contribution to the problem areas we highlight by working directly on them rather than earning to give. We generally think that these problem areas are more “talent constrained” than “funding constrained”.
However, additional funding is still useful, so earning to give is still a high-impact option. Earning to give can also be your top option if your best fit is with an unusually well-paid career. One kind of job we’ve seen work especially well for this is quantitative trading in hedge funds and proprietary firms.
Quantitative trading means using algorithms to trade the financial markets for profit. We think that, for the most part, the direct impact of the work is likely neutral. But it might be the highest-paid career path out there.
Compensation starts around $100k–$300k per year, and can reach $1m per year within a couple of years at the best firms. Eventually, it’s possible to earn over $10m per year if you make partner. We estimate that if you can work as a quantitative trader at a good firm, the expected earnings average around $1m per year over a career. This is similar to being a tech startup founder, except that for startup founders who make it into a top accelerator, the deal is more like a 10% chance of getting $30m in 3–10 years, so the startup option involves much more risk and a major delay.
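The comparison above can be sketched as a rough expected-value calculation. This is an illustration only: the dollar figures come from the text, while the 7-year startup horizon (the midpoint of the 3–10 year range) is our own assumption.

```python
# Rough expected-value comparison of the two paths described above.
# Figures are illustrative estimates from the text, not firm data.

trading_expected_per_year = 1_000_000  # ~$1m/year average at a good firm

startup_payout = 30_000_000   # exit value for a successful founder
startup_p_success = 0.10      # ~10% chance for top-accelerator founders
startup_years = 7             # assumed midpoint of the 3-10 year range

# Expected value of the startup path, spread over the assumed horizon.
startup_expected_total = startup_p_success * startup_payout
startup_expected_per_year = startup_expected_total / startup_years

print(f"Trading: ${trading_expected_per_year:,.0f}/year (expected)")
print(f"Startup: ${startup_expected_per_year:,.0f}/year (expected)")
```

On these assumptions the two paths are within the same order of magnitude in expected terms, but the startup path concentrates all the value in one low-probability outcome arriving years later, which is the extra risk and delay the text refers to.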
Given the expected earnings in quantitative trading, this work can enable you to make large donations to effective charities relatively quickly. We know several people in this career path, such as Sam, who donated six figures within their first couple of years of work. This is enough to set you up as an “angel donor” in the community, meaning you could fund promising new projects that larger donors could later scale up if they’re successful enough in the early stages.
Many people also find the work surprisingly engaging. Creating a winning trading strategy is an intellectual challenge, and you get to work closely with a team and receive rapid feedback on how you’re performing. These firms often have “geeky” cultures, making them quite unlike the stereotype about financial workplaces. The hours are often a reasonable 45–60h per week, rather than the 60–80h reported in entry-level investment banking.
These jobs are prestigious due to their earnings, and you can learn useful technical skills, as well as transferable skills like teamwork.
The main downsides of these positions are that they may not help you make that many good connections—since you’ll mainly only work with others in your firm—and they don’t help you learn about global problems on the job. They’re also highly competitive. There are more comments on fit below.
Role types and top firms
There are two broad pathways:
Traders: develop and oversee the strategies
Engineers: create the systems to collect data, implement trades, and track performance
It varies from firm to firm, but typically the engineers get paid less, but have more stability in their earnings.
It’s also important to know that the salaries can vary significantly by firm. There are perhaps only a couple of firms where it’s possible to progress to seven figures relatively quickly without a graduate degree. These include Jane Street, Hudson River Trading, DE Shaw, and possibly some others. The pay is often significantly less in other firms, though still often several hundred thousand dollars per year. Other firms offer earnings on the level of the aforementioned ones, but require a PhD.
In addition, note that quantitative trading positions are very different from “quant” jobs at other financial companies, such as investment banks or non-quantitative investment firms. Usually “quants” are “middle office” staff who provide analysis to the “front office” staff who oversee the key decisions. This makes them more stable but significantly lower paid, and sometimes less prestigious. Such firms also typically have a less geek-friendly culture.
Could this be a good fit for you?
One indication of potential fit is that you’d be capable of finishing in the top half of the class at a top 30 school in mathematics, theoretical physics, or computer science at the undergraduate level.
One option is to enter this path based on your programming skills as an engineer. This might be possible if you’re someone who would be able to get a top software engineering job at a tech company such as Google.
Besides intelligence, top firms also look for good judgement and rapid decision-making skills. One indication of these is that you like playing strategy games or poker.
Compared to academia, you need to have relatively better communication and teamwork skills to pursue this path, since you’ll work closely with your colleagues hour-by-hour in potentially stressful situations.
Would you be capable of reliably giving a large fraction of your income to charity? Finding support for your giving through community or public commitments can help.
What about other options for earning to give outside quantitative trading? Although we don’t know of a career path that has as high and as secure earning potential, if you can find another very high paying career—such as in other areas of finance or in some cases (particularly in the US) law—donating part of your income from such a career could be your best option for making a positive difference. Entrepreneurship may also be a promising option—though it’s risky, we know several successful founders donating substantial and in some cases very large sums to supporting effective organisations.
Whether earning to give is worth it in a particular case depends on how much the potential role earns, and how good your fit is for other options. If you would be able to do useful AI policy research or pursue one of the other ‘priority paths’ discussed above, then your potential earnings would have to be much higher in order for earning to give to be your best option. But if you aren’t a good fit for one of those options or something else that seems similarly high-impact, it’s more likely that earning to give could be your top option even if you pursue a path that pays less than quantitative trading. Unfortunately, because of this variability it’s hard to give one-size-fits-all advice in this area.
It’s also challenging to figure out where it is best to give, especially if you’re hoping to provide funding to early projects in unusual areas. We provide some giving advice here. If you are a large donor (say, giving over $100k per year), it might also be worth seeking professional giving advice. Regardless, you can learn from the work of professional philanthropic advisors like Effective Giving or Open Philanthropy by reading about their grants and reasoning online.
Governments and other important institutions frequently have to make complex, high-stakes decisions based on judgement calls, often from just a handful of people. There’s reason to believe that human judgements can be flawed in a number of ways, but can be substantially improved using more systematic processes and techniques. One of the most promising areas we’ve seen is the potential to use more rigorous forecasting methods to make better predictions about important future events. Improving the quality of foresight and decision-making in important institutions could improve our ability to solve almost all other problems.
We’d like to help form a new community of researchers and practitioners who develop and implement these techniques. We’re especially keen to help people who want to work on the areas of policy most relevant to global catastrophic risks, such as nuclear security, AI, and biorisk. Note that we’re not talking about the popular “nudge” work in behavioural sciences, which is focused on making small improvements to personal behaviours. Rather, we’re interested in neglected work relevant to high-stakes decisions like whether to go to war, such as Tetlock’s research into forecasting.
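To make the contrast with "nudge" work concrete: rigorous forecasting research of the kind mentioned above typically scores forecasters on how well their stated probabilities match what actually happens. A minimal sketch of the standard metric, the Brier score (lower is better):

```python
# Minimal sketch of how forecasting accuracy is scored in research like
# Tetlock's: the Brier score is the mean squared error between stated
# probabilities and binary outcomes (0 = didn't happen, 1 = happened).
# The example numbers below are invented for illustration.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0 or 1."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Over the same three events, a well-calibrated forecaster can beat an
# overconfident one who asserts certainty and is wrong once.
calibrated = brier_score([0.7, 0.8, 0.3], [1, 0, 0])      # ~0.273
overconfident = brier_score([1.0, 1.0, 0.0], [1, 0, 0])   # ~0.333
```

Scoring rules like this are what make it possible to identify and train unusually accurate forecasters, which is the kind of systematic improvement to judgement this path aims to bring into important institutions.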
This path divides into two main options: (i) developing better forecasting and decision-making techniques, and (ii) getting them applied in important organisations, especially those relevant to catastrophic risks.
To enter, the first step is to gain relevant expertise. This is most naturally done by working on relevant techniques in a lab like Tetlock’s or studying other important decision-making processes in a graduate programme. However, you could also take a more practical route by starting your career in government and policy, and learning about the science on the side.
Once you have the expertise, you can either try to make progress on key research questions in the field, or work with an important organisation to improve their processes. We can introduce you to people working on this.
As with global priorities research, this is a nascent field that could become much bigger, and now is an exciting time to enter.
Could this be a good fit for you?
Might you be able to get a job in a relevant area of government? Do you know how to influence choices within a bureaucracy?
On the research path, might you be able to get into a relevant PhD at a top 30 school?
Might you have a shot at making a contribution to one of the relevant research questions? For instance, are you highly interested in the topic, and sometimes have ideas for questions to look into? Are you able to work independently for many days at a time? Are you able to stick with or lead a research project over many years? Read more about predicting success in research.
Below we list some more career options. We think that some of these options have a chance of being as promising for many people as our priority paths. However, we’ve spent much less time investigating them, so we’re very unsure.
Our impression is that although many of these topics have received attention from historians and other academics (examples: 1, 2, 3, 4, 5), some are comparatively neglected, especially from a more quantitative or impact-focused perspective.
In general, there seem to be a number of gaps that skilled historians, anthropologists, or economic historians could help fill. Revealingly, Open Philanthropy commissioned their own studies of the history and successes of philanthropy because they couldn’t find much existing literature that met their needs. Most existing research is not aimed at deriving action-relevant lessons.
However, this is a highly competitive path, which is not able to absorb many people. Although there may be some opportunities to do this kind of historical work in foundations, or to get it funded through private grants, pursuing this path would in most cases involve seeking an academic career. Academia generally has a shortage of positions, and especially in the humanities often doesn’t provide many backup options. It seems less risky to pursue historical research as an economist, since an economics PhD also gives you other good options.
How can you estimate your chance of success as a history academic? We haven’t looked into the fields relevant to history in particular, but some of our discussion of parallel questions for philosophy academia or academia in general may be useful.
Although we think technical AI safety research and AI policy are particularly impactful, we think having very talented people focused on safety and social impact at top AI labs may also be very valuable, even when they aren’t in technical or policy roles.
For example, you might be able to shift the culture around AI more toward safety and positive social impact by talking publicly about what your organization is doing to build safe and beneficial AI (example from DeepMind), helping recruit safety-minded researchers, designing internal processes to consider social impact issues more systematically in research, or helping different teams coordinate around safety-relevant projects.
We’re not sure which roles are best, but in general ones involved in strategy, ethics, or communications seem promising. Or you can pursue a role that makes an AI lab’s safety team more effective — like in operations or project management.
That said, it seems possible that some such roles could have a veneer of contributing to AI safety without doing much to head off bad outcomes. For this reason it seems particularly important here to continue to think critically and creatively about what kinds of work in this area are useful.
Some roles in this space may also provide strong career capital for working in AI policy by putting you in a position to learn about the work these labs are doing, as well as the strategic landscape in AI.
There is likely a lot of policy work with the potential to positively affect the long run future that doesn’t fit into either of our priority paths of AI policy or biorisk policy.
We aren’t sure what it might be best to ultimately aim for in policy outside these areas. But working in an area that is plausibly important for safeguarding the long-term future seems like a promising way of building knowledge and career capital so that you can judge later what policy interventions might be most promising for you to pursue.
Other ‘broad interventions’ for making governments generally better at navigating global challenges, e.g. promoting ‘approval voting’ (a form of voting reform).
Interventions aimed at giving the interests of future generations greater representation in governments, for example requiring ‘posterity impact statements’ for relevant legislation or creating specialized legislative committees whose purpose is to consider the effect of policies on future generations’ interests. Read more.
See our problem profiles page for more issues, some of which you might be able to help address through a policy-oriented career.
There is a spectrum of options for making progress on policy, ranging from research to work out which proposals make sense, to advocacy for specific proposals, to implementation. (See our write-up on government and policy careers for more on this topic.)
It seems likely to us that many lines of work within this broad area could be as impactful as our priority paths, but we haven’t investigated enough to be confident about the most promising options or the best routes in. We hope to be able to provide more specific guidance in this area in the future.
Some people may be extraordinarily productive compared to the average. (Read about this phenomenon in research careers.) But these people often have to use much of their time on work that doesn’t take the best advantage of their skills, such as bureaucratic and administrative tasks. This may be especially true for people who work in university settings — as many researchers do — but it is also often true of entrepreneurs, politicians, writers, and public intellectuals.
Acting as a personal assistant for one of these people can dramatically increase their impact. By supporting their day-to-day activities and freeing up more of their time for work that other people can’t do, you can act as a ‘multiplier’ on their productivity. We think that having a highly talented personal assistant can make someone 10% more productive, or perhaps more, which is like having one-tenth (or more) as much impact as they have. If you’re working for someone who is doing really valuable work, that’s a lot. In general, we think that helping others to have a greater positive impact than they would have had otherwise is sometimes underappreciated, and that it’s an important and valid way to do good. Indeed, that’s our strategy here at 80,000 Hours.
Another way of enhancing the impact of others’ work is research management. Research managers help prioritise research projects within an institution and help coordinate research, fundraising, and communications to make the institution more impactful. In some cases research managers also help set strategy for an organisation, though this is usually in cases where they have previously been researchers themselves. In general, being a research manager seems valuable for many of the same reasons working in operations management does — these coordinating roles are crucial for enabling researchers and others to have the biggest positive impact possible. Read more about research management.
We’ve argued that because of China’s political, military, economic, and technological importance on the world stage, helping western organizations better understand and cooperate with Chinese actors might be highly impactful.
We think working with China represents a particularly promising path to impact. But a similar argument could be made for gaining expertise in other powerful nations, for example Russia or India. If you’re at the beginning of your career, it may even be valuable to think about which countries are most likely to be particularly influential in a few decades, and focus on gaining expertise there.
This is likely to be a better option for you if you are from or have spent a substantial amount of time in one of these countries. The best paths to impact here likely require deep understanding of the relevant cultures and institutions, as well as language fluency (e.g. at the level where you might be able to write a newspaper article about longtermism in the language).
If you are not from one of these countries, one way to get started might be to pursue area or language studies (one source of support available for US students is the Foreign Language and Area Studies scholarship programme), perhaps alongside economics or international relations. You could also start by working in policy in your home country and slowly concentrate more and more on issues related to the country you want to focus on, or try to work in philanthropy or directly on a top problem there.
There are likely many different promising options in this area, both for long-term career plans and useful next steps. Though they would of course have to be adapted to the local context, some of the options laid out in our article on becoming a specialist in China could have promising parallels in other national contexts as well.
There is a commonsense argument that if AI is an especially important technology, and hardware is an important input in the development and deployment of AI, specialists who understand AI hardware will have opportunities for impact — even if we can’t foresee exactly the form they will take.
Some ways hardware experts may be able to help positively shape the development of AI include:
More accurately forecasting progress in the capabilities of AI systems, for which hardware is a key and relatively quantifiable input.
Helping AI projects make credible commitments by allowing them to verifiably demonstrate the computational resources they’re using.
Helping advise and fulfill the hardware needs for safety-oriented AI labs.
These ideas are just examples of ways hardware specialists might be helpful. We haven’t looked into this area very much (though we do talk a bit about AI hardware as a path to impact at the end of our podcast episode with Danny Hernandez). So, we are pretty unsure about the merits of different approaches, which is why we’ve listed working in AI hardware here instead of as a part of the AI technical safety and policy priority paths. (See an example of how one person has explored this area.)
We also haven’t come across research laying out specific strategies in this area, so pursuing this path would likely mean both developing skills and experience in hardware and thinking creatively about opportunities to have an impact in the area. If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI.
Researchers at Open Philanthropy have argued that better information security is likely to become increasingly important in the coming years. As powerful technologies like bioengineering and machine learning advance, improved security will likely be needed to protect these technologies from misuse, theft, or tampering. Moreover, the authors have found few security experts already in the field who focus on reducing catastrophic risks, and predict there will be high demand for them over the next 10 years.
In a recent podcast episode, Bruce Schneier also argued that applications of information security will become increasingly crucial, although he pushed back on the special importance of security for AI and biorisk in particular.
We would like to see more people investigating these issues and pursuing information security careers as a path to social impact. One option would be to try to work on security issues at a top AI lab, in which case the preparation might be similar to the preparation for AI safety work in general, but with a special focus on security. Another option would be to pursue a security career in government or a large tech company with the goal of eventually working on a project relevant to a particularly pressing area. In some cases we’ve heard it’s possible for people who start as engineers to train in information security at large tech companies that have significant security needs.
Compensation is usually higher in the private sector. But if you want to work eventually on classified projects, it may be better to pursue a public sector career as it may better prepare you to eventually earn a high level of security clearance.
There are certifications for information security, but it may be better to get started by investigating on your own the details of the systems you want to protect, and/or participating in public ‘capture the flag’ cybersecurity competitions. At the undergraduate level, it seems particularly helpful for many careers in this area to study CS and statistics.
Information security isn’t listed as a priority path because we haven’t spent much time investigating how people working in the area can best succeed and have a big positive impact. Still, we think there are likely to be exciting opportunities in the area, and if you’re interested in pursuing this career path, or already have experience in information security, we’d be interested to talk to you. Fill out this form, and we will get in touch if we come across opportunities that seem like a good fit for you.
Some people seem to have a very large positive impact by becoming public intellectuals and popularizing important ideas — often through writing books, giving talks or interviews, or writing blogs, columns, or open letters.
However, it’s probably even harder to become a successful and impactful public intellectual than a successful academic, since becoming a public intellectual often requires a degree of success within academia as well as excellent communication skills and significant time spent building a public profile. Thus this path seems to us to be especially competitive and a good fit for only a small number of people.
That said, this path seems like it could be extremely impactful for the right person. We think building awareness of certain global catastrophic risks, of the potential effects of our actions on the long-term future, or of effective altruism might be especially high value, as well as spreading positive values like concern for foreigners, nonhuman animals, future people, or others.
There are public intellectuals who are not academics — such as prominent bloggers, journalists, podcasters, youtubers, and authors. However, academia seems unusually well-suited for becoming a public intellectual because academia requires you to become an expert in something and trains you to write (a lot), and the high standards of academia provide credibility for your opinions and work. For these reasons, if you are interested in pursuing this path, going into academia may be a good place to start.
Public intellectuals can come from a variety of disciplines — what they have in common is that they find ways to apply insights from their fields to issues that affect many people, and they communicate these insights effectively.
If you are an academic, experiment with spreading important ideas on a small scale through a blog, magazine, youtube channel, or podcast. If you share our priorities and are having some success with these experiments, we’d be especially interested in talking to you about your plans.
For the right person, becoming a journalist seems like it could be highly valuable for many of the same reasons being a public intellectual might be.
Good journalists keep the public informed and help positively shape public discourse by spreading accurate information on important topics. And although the news media tends to focus more on current events, journalists also often provide a platform for people and ideas that the public might not otherwise hear about.
One subpath we’re especially excited about is science journalism, due to the important role of science and technology in many of the problems we highlight as most pressing. There is also a lot of oversimplification and confusion in much current public discussion of science and technology, perhaps because few people have the right combination of skills: communication, interest in improving public understanding, and a sufficient understanding of science and technology. (Based on our audience research, if you’re reading this, you might be more likely than average to be able to bridge that gap.)
Journalists could also write about ideas in effective altruism and apply them to current events, which we think can be very valuable. See Vox’s Future Perfect for an example of this work.
All that said, this path is also very competitive, especially when it comes to the kinds of work that seem best for communicating important ideas (which are often complex), i.e. writing longform articles or books, podcasts, and documentaries. And like being a public intellectual, it seems relatively easy to make things worse as a journalist by directing people’s attention the wrong way — so this path may require especially good judgement about which projects to pursue and with what strategy. We therefore think journalism is likely to be a good fit for only a small number of people.
‘Proof assistants’ are programs used to formally verify that computer systems have various properties — for example that they are secure against certain cyberattacks — and to help develop programs that are formally verifiable in this way.
Currently, proof assistants are not very highly developed, but the ability to create programs that can be formally verified to have important properties seems like it could be helpful for addressing a variety of issues, perhaps including AI safety and cybersecurity. So improving proof assistants seems like it could be very high-value.
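To give a concrete sense of what ‘formally verified’ means here, the toy sketch below uses the Lean proof assistant. (This is our own hypothetical illustration; real verification targets, such as security properties of deployed systems, are far more complex than this.)

```lean
-- A toy Lean 4 example: the proof assistant machine-checks that the
-- stated property holds for *every* natural number, rather than
-- relying on testing a handful of cases.
def double (n : Nat) : Nat := n + n

-- `rfl` asks Lean to verify the claim by unfolding the definition of
-- `double`; the file only compiles if the proof is accepted.
theorem double_adds (n : Nat) : double n = n + n := rfl
```

The value proposition of the field is scaling this kind of machine-checked guarantee from trivial arithmetic facts up to properties that matter, like resistance to particular classes of cyberattack.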
For example, we might eventually be able to use proof assistants to generate programs for solving some sub-parts of the AI ‘alignment problem’. This would require that we be able to correctly specify the sub-problems in formal terms, for which training in formal verification is plausibly useful.
We haven’t looked into formal verification much, but both further research in this area and applying existing techniques to important issues seem potentially promising to us. For some pushback on the importance of these tools for solving pressing problems, you might check out this thread.
You can enter this path by studying formal verification at the undergraduate or graduate level, or learning about it independently if you have a background in computer science. Jobs in this area exist both in industry and in academia.
The effective altruism community seeks to support people trying to have a large positive impact. As a part of this community, we may have some bias here, but we think helping to build the community and make it more effective might be one way to do a lot of good. Moreover, unlike other paths on this list, it might be possible to do this part time while you also learn about other areas.
There are many ways of helping build and maintain the effective altruism community that don’t involve working within an effective altruism organisation, such as consulting for one of these organisations, providing legal advice, or helping effective altruist authors with book promotion.
We think few of these kinds of roles make promising longer-term career paths, though there are niche exceptions. Still, they can be a great way to meet people while having a positive impact right away.
These roles seem good to pursue in particular if you are very familiar with the effective altruism community and you already have relevant skills and are keen to bring them to bear in a more impactful way.
If you can find a way to address a key bottleneck to progress in a pressing problem area which hasn’t been tried or isn’t being covered by an effective organisation, starting one of your own can be extremely valuable.
There seems to be an especially pressing need for nonprofit entrepreneurs in the effective altruism community right now. Large funders like Open Philanthropy could spend more than they currently do if there were more great funding opportunities. There is also a significant number of ideas for new projects (see some ideas here). But there is an apparent shortage of people able to run projects really successfully, especially on a large scale.
That said, we don’t list nonprofit entrepreneurship as a priority path because it’s a difficult route to take. First, you need good enough leadership skills to manage an organisation of significant size; second, you’ll need a good network in the community; and third, you’ll need to have great judgement about strategy and what activities really have the most positive impact in order to keep the organisation focused on what actually matters.
The latter is one way in which nonprofit entrepreneurship is even harder than running a for-profit startup. The lack of good feedback mechanisms — revenue, in the for-profit case — means that nonprofits have to rely more on leaders’ judgement about what to prioritise, and it’s easy to drift off mission and achieve comparatively little. Indeed, founding a new organisation can easily risk setting an emerging area back by damaging its reputation (although this has to be balanced against the greater information value of exploring an uncharted area).
Most nonprofit entrepreneurs also struggle to raise enough funding — while it’s common to raise a moderate amount of money, most organisations quickly plateau.
However, if you do have the skills and are able to find a funder who is willing to scale you up, it’s possible to make very rapid progress.
An especially exciting recent example of nonprofit entrepreneurship is the Center for Security and Emerging Technology (CSET) at Georgetown, which was founded by Jason Matheny, Helen Toner, and others (though most projects will start at a much smaller scale). When we interviewed Helen on our podcast, she was in her 20s. CSET received a $55m grant from Open Philanthropy and now has a good case to be the leading think tank working on the intersection of AI and national security.
Another successful example outside a university and on a somewhat smaller scale is AI Impacts, started by Katja Grace in 2014. AI Impacts seeks to improve our understanding of the likely impacts of advanced artificial intelligence and to communicate these insights to policymakers and other actors. AI Impacts has received support from Open Philanthropy, The Centre for Effective Altruism, and others.
If you think there’s a chance you might be able to pursue this path, we’d strongly encourage you to consider it as a potential long-term option, and to build career capital that’s relevant to it.
It generally seems best to first work within the problem area in which you want to found an organisation and get involved in the effective altruism community, so that you can build up connections, learn about the area, and build enough of a track record to get funding.
Doing this also gives you the opportunity to develop your own ideas. In our experience, it’s usually best for entrepreneurs to develop their own vision (though there’s some evidence that it’s less important in the nonprofit space). This also makes it more likely that you’ve identified a genuine gap that needs filling rather than just an idea that sounds good.
As a first step, you could seek almost any relevant job in the area, such as working at an effective altruism organisation, working in policy if you want to start a think tank, working in academia if you want to start a research institute, working at a tech startup, and so on.
If you’re already more senior, you could start by advising organisations in the area in which you want to found the nonprofit.
There is far more to say about the question of whether to start a new organisation and how to compare different nonprofit ideas. A great deal depends on the details of your situation, making it hard to give general advice on the topic.
We suspect that the effectiveness of different approaches to mitigating climate change varies greatly, which means taking an effective altruist approach to climate change — and trying hard to focus on the most effective ways of working on the problem — could make a big difference.
We don’t have well-developed career advice in this area. But here are some rules of thumb for choosing approaches we think can help maximise your impact:
Focus on the most extreme risks where possible. As we argue in our problem profile on extreme climate change, the worse a potential effect of climate change, the more pressing it generally is to reduce its likelihood. This is especially clear from a longtermist perspective, because more extreme outcomes are disproportionately likely to contribute to existential risk. That said, many of the best interventions for reducing extreme climate change risks also reduce more widely anticipated risks, and may even be the best from that perspective as well, since reductions in greenhouse gas emissions are key regardless.
Pay attention to the best evidence on what kinds of interventions are the most cost effective in the long term.
This is not easy, as many people have strong opinions on what kinds of projects are important, and it can be difficult to sift through the variety of views. In doing so, here are some things to consider:
The majority of future energy demand will come from non-OECD countries, so solutions that aren’t geared toward those countries are unlikely to be most effective.
What’s most cost effective in the long term could well differ from what seems like the best deal now. For example, if some method for decarbonisation is cheap and in everyone’s interest, you might expect it to happen without your intervention, meaning it could be better to focus on something else.
Check out this talk to learn about more factors that shape what types of interventions are most cost effective.
Focus on more neglected strategies. If an approach or a research area has not yet been explored — like a new zero-emission technology, for example — you have a chance of enabling work that others have missed, and you’ll also gain valuable information about what works and what doesn’t, which you can share with others.
Look for leverage. Causing even a relatively small improvement to the use of others’ resources that might go toward climate change likely dwarfs anything you could do entirely on your own, because these other resources are so massive (government spending alone is in the hundreds of billions per year). This means it will probably be most effective to leverage these other resources. For example, if you help organise a grassroots movement, that means everyone who joins multiplies your effort. If your advocacy efforts are successful at influencing policy, then they can affect billion-dollar budgets — which in turn affect the behavior of private actors. Or, if you can improve the way the entire scientific community thinks about e.g. feedback loops or extreme risks, others can build on your work.
As we said above, we’re not sure which career paths are best in this area, but here are a few ideas:
Help build the field of research on extreme climate change risks — e.g. on the nature and likelihood of extreme feedback mechanisms, which are not currently included in the most influential climate models, or on any ways climate change might increase existential risks from other sources (a particularly understudied area). This might mean becoming a researcher yourself and working with an eye toward helping shift the scientific community’s attention toward the most important and neglected topics.
However, right now we have no way of effectively and securely investing resources over such long time periods. In particular, there are few if any financial vehicles that can be reliably expected to persist for more than 100 years and stay committed to their intended use, while also earning good investment returns. Figuring out how to set up and manage such a fund seems to us like it might be very worthwhile.
Founders Pledge — an organization that encourages effective giving for entrepreneurs — is currently exploring this idea and is actively seeking input. It seems likely that only a few people will be able to be involved in a project like this, as it’s not clear there will be room for multiple funds or a large staff. But for the right person we think this could be a great opportunity. Especially if you have a background in finance or relevant areas of law, this might be a promising path for you to explore.
If the problem area still seems potentially promising once you’ve built up a background, you could take on a project or try to build up the relevant fields, for instance by setting up a conference or newsletter to help people working in the area coordinate better.
If, after investigating, working on the issue doesn’t seem particularly high impact, then you’ve helped to eliminate an option, saving others time.
If you have an idea for a novel approach to addressing one of our highest priority problems, it could also be high impact to explore that. But because our highest priority problems have been more researched, the value of information of exploring more within them is likely to be lower.
We can’t really recommend exploration of new issues as a priority path because it’s so amorphous and uncertain. It also generally requires unusual degrees of entrepreneurialism and creativity, since you may get less support in your work, especially early on, and it’s challenging to think of new projects and research ideas that provide useful information about the promise of a less explored area.
However, if you fit this profile (and especially if you have existing interest in and knowledge of the problem you want to explore), this path could be an excellent option for you. If you think it is, we’d like to hear from you. We may be able to help you decide whether this is a good option for you, and how to go about it.
Go back to the general 6-step process and five key categories we outlined earlier. In particular, go back to your list of problem areas, and consider whether there might be any relevant jobs in non-profits, government & policy, research, or using a particular skill you have. There should be options within at least one of these categories for any graduate.
In general, one option is to consider a wider range of problem areas. Another option is to stick with the same problem areas but contribute to a less pressing bottleneck. We don’t recommend compromising too much on personal fit.
One option that’s always there is to take any job you find satisfying and donate, or you can focus on building career capital that might let you take one of these jobs later.
If you already have skills, where to focus?
We’ve provided some advice on how to quickly narrow down the options we’ve considered based on your existing skills and experience. See this separate article.
Want to enter one of our priority paths?
If you’d like to try to have a big impact in one of these paths, our one-on-one advice might be able to help you choose an option and enter. As well as advice, we often know mentors, job opportunities and sources of funding in each path, and all our help is free. So, if you think you have a shot at entering one of them, let us know.
They do more to resolve the key bottlenecks facing our top recommended problem areas.
When leaders in the community are forced to trade funding against additional people in these roles, they prefer the people to the amount they could likely donate in earning to give.
These options let you learn the skills that are most needed in the top problems.
The additional flexibility of earning to give isn’t enough to offset the other downsides.
The main reason we highlight these paths is that the key bottlenecks to the problems we prioritise most often relate to research or policy change. In particular, the bottlenecks either involve technical fixes or more research to work out what the best solutions are, especially from a policy perspective. Otherwise, the key bottlenecks relate to community building. Either way, this means three broad types of positions are needed: those working in research or supporting it; those in relevant areas of government; and those in non-profits that work on research, policy, or community building.
When we survey organisations working in our top problem areas, they tend to report that their fields are most constrained by people who are talented and aligned – those able to develop relevant expertise and work directly on solving the problem. This is especially true because the solutions to these problems are uncertain and require innovation.
In contrast, experts don’t usually say that additional funding — although it’s still useful — is the key bottleneck. This is because these areas are small, and there are already major donors willing to donate more than they currently do (read more). What most often holds back more funding is a lack of aligned and trusted people who are able to deploy it, and this means we need more people in non-profit or academic positions.
What’s more, even if funding were a key bottleneck, it often seems like you can raise more money by working as a fundraiser in a relevant organisation, doing advocacy for the issue rather than earning to give, or by simply making progress in the area, which has historically attracted more donors.
This prioritisation is reflected when we ask area experts to make quantitative tradeoffs. In our 2017 talent survey of the effective altruism community, we asked leaders at 17 key organisations to compare donations with typical recent hires. (We also found similar results in our not-yet-published 2018 survey.)
In particular, we asked how organisations would have to be compensated in donations for their last ‘junior hire’ or ‘senior hire’ to disappear and not do valuable work for a 3 year period, yielding the following results:
Averages, weighted by organisation size:
$12.8m ($4.1m excluding an outlier)
$7.6m ($3.6m excluding an outlier)
It’s unclear exactly how to interpret these estimates, and we discuss some of their weaknesses in the full article about the survey. However, it certainly suggests that a talented person working directly on the issues can often have an impact equivalent to hundreds of thousands of dollars per year. This is more than most people could donate, which suggests that if you have good fit with one of these positions, they’re often higher-impact than earning to give.
The survey was mainly about jobs at the relevant non-profits rather than academic or government positions, but we expected the figures would be similar for these positions if someone was a good personal fit, and perhaps higher. (And this was borne out by our not-yet-published 2018 survey.)
We think the trade-off would hold even for less competitive positions. For instance, faced with the choice between someone working as a software engineer and donating 10-20% of their salary per year (perhaps $6-50k) or working in a relevant area of government, we think most leaders in our survey would prefer the person in government.
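To make the arithmetic behind these comparisons concrete, here is a minimal sketch in Python. The `per_year_value` helper and all the figures are our own illustrative assumptions, not numbers taken from the survey:

```python
# Illustrative sketch (hypothetical numbers): convert a survey-style
# "value of a hire over 3 years" figure into a per-year equivalent and
# compare it with what the same person might donate by earning to give.

def per_year_value(three_year_valuation: float, years: float = 3.0) -> float:
    """Rough per-year impact implied by an org's willingness-to-pay figure."""
    return three_year_valuation / years

# Hypothetical: an org says it would need $1.5m in compensating donations
# to give up a junior hire for 3 years, while the same person might
# donate $50k per year by earning to give.
direct_work = per_year_value(1_500_000)  # $500k per year
donations = 50_000

print(direct_work / donations)  # direct work worth ~10x the donations here
```

On these made-up numbers, direct work beats earning to give by roughly an order of magnitude; the real comparison of course depends on the person's fit for each path and on which valuation figures you trust.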
Another reason we highlight these three paths is that they address the key skill gaps reported in the survey. When we asked leaders in the community which skills are most needed in the community as a whole, their answers fell into the following broad categories:
Government and policy experts.
Management & operations & administration.
Experts in machine learning, the life sciences, and economics.
The most direct way to gain these skills and fill the gaps is probably to take the three paths we highlight: do relevant graduate study and research; work in government & policy; or work in a non-profit where you can learn about these issues and learn these skills.
The other highly rated skill in the survey was “good judgement” (phrased as “good calibration, wide knowledge, and ability to work out what’s important”). Again, if you want to develop good judgement about issues concerning the world’s most pressing problems, usually the best option is simply to start working on those problems. You can improve your judgement as you go by tracking your predictions and following the other steps listed here. (Though there might be some exceptions, such as jobs in financial trading, which require making lots of judgement calls and give you feedback on them.)
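As a concrete illustration of tracking your predictions, here is a minimal Python sketch that scores a forecast log with the Brier score, a standard calibration measure where lower is better. The function name and the example data are hypothetical, not taken from any particular forecasting tool:

```python
# Minimal sketch of tracking predictions to improve judgement.
# We score forecasts with the Brier score: the mean squared error
# between stated probabilities and actual outcomes (lower is better).

def brier_score(forecasts):
    """forecasts: list of (probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 if not."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical forecast log: (probability you assigned, what happened).
log = [(0.9, 1), (0.7, 1), (0.3, 0), (0.8, 0)]
print(brier_score(log))
```

Keeping a log like this over months and re-scoring it periodically gives you the kind of feedback on judgement calls that the text notes is otherwise rare outside fields like financial trading.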
The main exception on skill building is that in the right corporate sector job you could also develop relevant skills in management, operations, administration, some types of research, machine learning, marketing, and so on. So, if you were to earn to give, we’d encourage you to do it in a job that simultaneously lets you develop one of these skills, to give you the option of filling a skill gap later. This might mean something like working at a tech startup on operations, or doing ML research at Google.
The main counterpoint in favour of earning to give is likely its flexibility over time: it’s easier to redirect your donations to a different problem area than to change where you work. This remains a good reason to earn to give over the other paths, especially for people who are especially uncertain about which problem area to focus on.
However, we think we might have overstated the differences in flexibility in the past. One key point is that some of the other options we highlight also often provide significant flexibility about problem areas. For instance, in government and policy careers, it’s relatively easy to switch which area of policy-making you work on. Likewise, if you gain transferable skills like management and outreach, then you can apply them to different problem areas over time. It’s also possible to design organisations that can flexibly support different areas, such as 80,000 Hours or Founders Pledge.
These paths also give you more flexibility over which jobs to pursue within an area, because you’ll gain expertise and connections relevant to the problem, which will mean you have plenty of opportunities in the future. People who earn to give, however, often end up with skills that aren’t especially relevant to the top problem areas.
We also value flexibility a little less than we did in the past. In part, this is because we think our top problems are urgent, so it’s important to prioritise immediate impact compared to putting yourself in a good position decades in the future. We also feel more confident in our choice of problem areas, since our ranking has stabilised in the last three years. Finally, when you’re working with a community, it seems like there are greater reasons to specialise. We’ll cover these factors in upcoming articles.
Although we discuss becoming a journalist or a public intellectual as promising options for a small number of people, we don’t usually recommend paths focused primarily on advocacy, especially mass advocacy focused on raising awareness.
One reason for this is that often the most relevant people are already aware of the issues we highlight. For instance, AI safety has had plenty of press coverage recently, so almost all AI scientists and many relevant policy makers are aware of it. The bigger bottleneck is having the right people take the ideas seriously, and this requires in-depth engagement rather than mass advocacy.
The problem areas we’re focused on are also usually complex, uncertain, and mainly in need of innovation — so there often aren’t simple actions to promote. This makes it difficult for people who are superficially involved to contribute, which makes mass advocacy less useful. Instead, we need to develop better proposals for action, and this often means working directly in research, non-profits or policy.
All that said, we think that targeted advocacy around top problem areas can be very useful if you’re in a position to do it unusually well, and “movement building” has been listed as a key skill bottleneck. If you’re interested in doing targeted advocacy or movement building, we suggest first working in one of the other paths — especially in relevant non-profits or academia — in order to build up expertise and learn what kinds of outreach or engagement are needed within your field.
We’re open to for-profit work as a strategy for social impact, especially due to its scalability.
However, in most of our top problem areas it is difficult to find for-profit ways to directly contribute. As mentioned, addressing many of the key bottlenecks in these areas requires theoretical research or policy change, and it’s often hard for for-profits to contribute to this.
This is not surprising. We think the most pressing global issues relate to improving the long-term future, and issues that affect the long-term future face severe market failures (because the people who are affected don’t exist yet), making it hard to use for-profit methods to make progress.
That said, there are important exceptions. For example, Google DeepMind conducts AI strategy and safety research alongside AI development. It’s also possible to create a for-profit mechanism for funding research. This is what Elon Musk has tried to do with SpaceX, which sells satellite launches in order to fund research relevant to space colonisation in general. And there may be opportunities for profitable work in biosecurity, like developing cheap diagnostics to enable early detection of new diseases, which would also be helpful for increasing pandemic preparedness.
However, in all these cases, pursuing a for-profit strategy seems likely to come at the significant cost of being less focused on the most crucial issues within an area, since at the end of the day a company’s bottom line is profit rather than impact.
A different idea is to try to set up a for-profit company that has some positive impact on a key problem, but where the primary strategy is earning to give. This has the advantage over traditional earning-to-give strategies of helping you build a network and credibility that’s relevant to the problem.
In general, working at or starting for-profit companies might be most promising for building career capital, especially if you can do work that allows you to build up knowledge and connections relevant to a top problem area.
For those focused on problem areas outside those we prioritise most highly, there can be a wider range of relevant businesses. For example, many people are working on clean meat businesses in order to shrink demand for factory-farmed meat. And any business that grows in the developing world can help to raise people’s incomes.
It’s helpful to focus your career on a broad ‘area’ rather than a narrow intervention as a way to deal with uncertainty. Over time, which intervention seems best is likely to change, so if you aim at an overly narrow intervention that turns out to not be promising, then you could waste a lot of effort. Instead, it’s useful to focus your efforts on a “cluster” of promising but similar interventions. This means that if one intervention doesn’t work out, there are plenty of others you can switch into. It also means that the expertise you accumulate will be relevant to many of these interventions, giving you better options in the future.
We’ve become convinced this approach is useful through our work with people over the years, where narrowing down in terms of “areas” often seems like one of the most productive steps of our advice. We’ve also seen it productively applied elsewhere in the effective altruism community, especially at Open Philanthropy.
This means that the definition of a “problem area” is a pragmatic one. We define problem areas as whatever clusters of promising interventions are useful for career planning. Similarly, Open Phil defines them as the clusters that are useful to treat as a group for the purpose of grantmaking.
Though if we consider your options more broadly, in the worst case you’ll have near-zero impact, so the ratio could be arbitrarily large. It’s even possible to have negative impact, which would make the ratio negative.↩
Though, this tradeoff doesn’t usually include the opportunity cost of the donations, i.e. the value of what else could have been done with the money.
So, if the organisation says a new hire is worth $100,000, it doesn’t follow that a donor should be willing to pay this much for a new hire, since they might have an even better use of the money.↩
Ideally, these would be experts who are sympathetic to the idea of comparing paths in terms of impact, have some sense of the distribution of opportunities and resources in the area, and are well calibrated.
Some large for-profits, such as Google, can fund relevant research, and these can often be good places to work, but this is more an example of earning to give than a direct contribution by a for-profit.↩