Never miss a high impact job

Get our curated list of job openings sent to your inbox once a month.

The most exciting jobs we know about

These are important positions at our recommended organisations, working on the problems we consider most urgent. They’re all very competitive, but if you’re a good fit for one, it could be your highest-impact option.

Last updated: September 2017

Advise tech founders on where to donate $10M+

Deployment Coordinator
Founders Pledge
London

Founders Pledge is a non-profit that encourages entrepreneurs to make a commitment to donate at least 2% of their personal proceeds to charity when they sell their business. In just a few years, they have raised $200 million in legally-binding pledges of equity, and are growing fast.

The Deployment Coordinator advises founders, when they exit their companies, on where to donate and on the mechanics of making the donation. This position provides an opportunity to present effective altruist giving opportunities, encourage wealthy individuals to give more at a crucial moment, and discuss their next career steps after exit. We estimate that founders who have taken the pledge will donate at least $10m next year, increasing to over $30m per annum within the coming years, so a small improvement in how well the funds are spent could be hugely impactful.

You’ll also have the opportunity to learn how to promote effective giving from a team with a great track record, and to learn about charity evaluation while managing a team of researchers.

Founders Pledge is also hiring for their growth and community teams, as well as generalist researchers.

With the exception of the role below, these jobs are not yet advertised, but if you think you would be a good fit, apply here.
Apply

Communicate about in-depth research on global health and development

Research Analyst, Outreach Focus
GiveWell
San Francisco

GiveWell has conducted some of the highest-quality research into which programmes and charities within international development will most help people for each dollar donated.

Research Analysts with an outreach focus work closely with GiveWell’s research team and use their in-depth understanding of the research to communicate it to donors, as well as others who rely on GiveWell’s work, such as researchers or the media.

This role has the potential to raise millions of dollars for GiveWell’s top recommended charities, and finding someone who is an especially good fit could substantially increase this figure. GiveWell has consistently reported being talent-constrained rather than funding-constrained. It’s also a great opportunity to learn about the cutting edge of charity evaluation and how to promote effective philanthropy.

See details
Disclaimer: 80,000 Hours is being considered for a grant by the Open Philanthropy Project, which works closely with GiveWell.

Allocate $10M+/year to promote safe emerging technology research

IARPA is a U.S. government agency that funds research to improve national security and the capabilities of the U.S. intelligence community. IARPA has supported research into cybersecurity; how to improve biosecurity and pandemic preparedness; and some of the earliest experiments into how to improve geopolitical forecasts (as covered in Philip Tetlock’s Superforecasting). The director of IARPA has shown interest in reducing the risk of human extinction, an area we consider among the most neglected and important problems facing the world.

Program Managers create, run and advocate for IARPA’s funding programmes. A typical programme involves tens of millions of dollars of funding. You’ll also be able to establish a career in the intelligence community, one of the major players in forecasting and mitigating global catastrophic risks.

Historically, successful Program Manager candidates have had either a PhD or substantial expertise in a relevant area of scientific research. To be accepted, you’ll need to develop an example funding proposal.

See details

Work on neglected and cutting-edge research in machine learning

Postdoctoral researchers, software engineers, and internships
Multiple positions (see below)

We think solving the AI control problem is one of the world’s most important and neglected research questions. If you could make a technical contribution to this research, it’s likely to be one of the highest-impact things you could do with your life.

This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained. Beyond this technical impact, it can also be effective to work alongside technical teams to influence the culture towards greater concern for safety and positive social impact.

What follows are some of the best job openings within the leading organisations that undertake technical research relevant to the control problem.

To get these research positions, you will probably need a PhD in a relevant quantitative subject, such as computer science, maths or statistics, as well as familiarity with the latest research into AI safety. The engineering positions usually require at least a few years’ experience (except internships), and a varying degree of knowledge of machine learning. If you’re not at that stage yet, read our profile on how to enter AI safety research.


Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.

Research Scientist
Research Scientists at DeepMind set the research agenda, exploring cutting-edge machine learning and other AI techniques to solve real-world problems. On the safety team, you would work closely with the DeepMind founders to explore how to ensure that, as AI systems become increasingly powerful, they work for the benefit of all.

See details

Program Manager
Program Managers at DeepMind do what’s necessary to facilitate novel research, using their exceptional organisational skills to coordinate projects and manage people and deadlines. We see this position as important because i) DeepMind is an important organisation in AI safety, and ii) the role has great potential for building your skills and putting them to use working alongside safety researchers.

See details

Research Engineer
Being a Research Engineer would be similar to being a Program Manager in the mechanisms for potential positive impact, but rather than providing organisational support for research, you would primarily be developing the algorithms, applications, and software used by the research team.

See details

See all jobs at DeepMind.


UC Berkeley is one of the top schools for computer science, and the goal of its Center for Human-Compatible AI (CHAI) is to ensure that as the capabilities of AI systems increase, they continue to operate in a way that is beneficial to humanity.

Postdoctoral Researcher
Postdoctoral Researchers will have considerable freedom to explore topics within AI safety, allowing them to work on the control problem.

Successful candidates will work with the CHAI Director, Stuart Russell, or with one of the Berkeley co-Principal Investigators: Pieter Abbeel, Anca Dragan, or Tom Griffiths.

See details


The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from AI in the early 2000s, and has published a number of papers on safety issues and how to resolve them.

Research Fellow
Research Fellows work to make progress on the alignment problem, which involves novel research on a number of open problems in computer science, decision theory, mathematical logic, and other fields. MIRI are looking for Research Fellows for both their traditional and machine learning agendas. MIRI selects candidates primarily on mathematical talent and weighs traditional academic credentials less heavily than other research positions in AI safety do. So if you think you’d enjoy research in a non-traditional academic environment, and would get along with the team, this is an excellent way to contribute to solving the control problem.

See details


OpenAI
San Francisco, CA

OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe and ensuring the benefits are as widely distributed as possible. It has received $1 billion in funding commitments from the technology community, and is one of the leading organisations working on general AI development.

Safety Researcher
As a safety researcher you’ll be working out how to ensure that AI systems are beneficial to humanity. You’ll need a strong research background in deep learning, although you don’t need to have worked on safety before. Listen to our interview with the head of AI safety research at OpenAI to find out more about what it’s like and how to get in.
See details

Machine Learning (Researcher)
This position is a research role with a broad remit, so it’s an opportunity to do vital work on the ‘control problem’ of ensuring AI safety. To get the role, you need to have shown exceptional achievements in a quantitative field (not necessarily ML/AI), and have a shot at being a world expert. For this reason, the role is incredibly competitive, but if you can get it, it’s likely to be one of your highest-impact options.

See details

Special Projects
The Special Projects role involves working on one of the projects listed here, which are “problem areas likely to be important both for advancing AI and for its long-run impact on society”. They were formulated by experts who we believe are trying to improve the long-run safety and consequences of AI/ML systems, and we see this as a concrete way for those with the relevant background to contribute to that effort.

See details

See all jobs at OpenAI.


Fathom Computing
Palo Alto, CA

Machine Learning Safety Researcher
Fathom Computing is building an optical computer that replaces electricity in processors with light, to allow better neural network performance. They’re looking for machine learning safety researchers to develop algorithms for their hardware and carry out novel research in AI safety. This could be a good transitional position for people looking to get into a top-tier research centre focussed on algorithmic advances in machine learning.

See details


Berkeley Existential Risk Initiative
San Francisco Bay Area, but remote work possible

The Berkeley Existential Risk Initiative is a non-profit that aims to accelerate the work of existential risk researchers. It gives them support that academic funding often doesn’t cover, such as administrative and technical help.

Machine Learning Engineer
You will develop machine learning software for the Center for Human Compatible AI at the University of California, Berkeley. Possible projects include open sourcing existing packages and implementing machine learning algorithms.

See details


AI Fellows (funding for PhD students)
Open to AI and machine learning PhD students, the fellowship will provide funding for your PhD programme as well as connect you with other AI Fellows.

See details


The Future of Humanity Institute is a world-leading research centre at the University of Oxford which works on big-picture questions for human civilization. It was founded by Professor Nick Bostrom, author of the New York Times bestseller Superintelligence, and it does a lot of work on AI safety.

AI safety and reinforcement learning internship
Interns on the technical research team have the opportunity to contribute to work on a specific project in reinforcement learning (RL). Previous interns have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. We see this as an outstanding opportunity to work with an excellent team, improve your technical skills, and explore ideas with some of the key players in AI safety, while strengthening your application to graduate school.

See details


The Montreal Institute for Learning Algorithms (MILA) is one of the leading academic AI labs focussed on deep learning, and they have received funding from the Open Philanthropy Project to support work on AI safety.

Internship
Machine learning research internships are available to students at undergraduate, masters, and PhD levels.

See details


Google
Mountain View, California

AI Residency Program
This is a 12-month, highly competitive programme designed to accelerate you into a career in state-of-the-art machine learning research. You’ll work with some of the leading AI researchers in industry, and position yourself for entry into the top AI labs. You can join after a bachelor’s, master’s, or PhD (or equivalent industry experience).

See details


AI alignment research funding
Remote

Research funding is available for projects working on building AI systems that robustly advance human interests. You don’t have to be part of an academic institution to receive this funding.

See details


Make your own machine learning research internship
Remote

It’s possible to get research internships by pitching research ideas to potential supervisors. We see this as one of the best ways to test your fit for AI research and improve your chances of getting into graduate school. It can be done at almost any stage of your career, and these positions are often not advertised. How to do this is described on page 13 of this document.

Work on AI policy, ethics, and governance

Postdoctoral and non-academic researchers, policymakers, and legal scholars
Multiple positions (see below)

The development of powerful AI doesn’t only pose the technical challenge of the control problem, but also major political and social challenges. AI policy is fast becoming an important area, but policy-makers are overwhelmingly focused on short-term issues like how to regulate self-driving cars, rather than the key long-term issues (i.e. the future of civilization).

The following is a list of positions focused on policy and strategy research relevant to AI. If you could make a contribution to this area, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained.


Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.

Policy Researcher
This position would allow you to play a key role in researching the societal impacts of AI and advocating for actions that ensure its use benefits people as much as possible. An example of this kind of research could be looking at what effects automation will have on the economy and what governments should do to prepare.

This role will become more important over time as AI systems have an increasingly large impact on society, and it will allow you to conduct novel research and define strategy and responses to the immediate and long-run impacts of AI developments.

You’d be a good fit for this role if you have a strong understanding of the impact of emerging technology on society, experience in a policy setting, and an ability to synthesise information from a range of fields. As this is predominantly a stakeholder engagement position, you’ll also need outstanding communication and interpersonal skills.

DeepMind’s reputation, connections, opportunities to conduct novel and important research, focus on learning and professional development, and talented employees will also give you outstanding career capital.

See details


The Future of Humanity Institute is a world-leading research centre at the University of Oxford which works on big-picture questions for human civilization. It was founded by Professor Nick Bostrom, author of the New York Times bestseller Superintelligence, and it does a lot of work on AI safety.

AI Policy and Governance Internship
You will contribute to the Institute’s work on AI policy, AI governance, and AI strategy. Previous interns have worked on issues such as technology race modelling, the bridge between short-term and long-term AI policy and the development of AI, and AI policy in China. Skills in political science, public policy, economics, law, computer science, or Chinese language and politics would be particularly helpful.

See details

Research Fellow in Macrostrategy
You will be evaluating strategies to reduce existential risk, particularly with respect to the long-term outcomes of technologies such as artificial intelligence.

See details

Senior Research Fellow in Macrostrategy
You will be working on identifying crucial considerations for improving humanity’s long-run potential. The work will include existential risk research and AI strategy.

They are looking for a polymath with an academic background related to economics, mathematics, physical sciences, computer science, philosophy, political science, or international governance, who has both outstanding analytical ability and a working understanding of many basic tools used in these fields.

See details

See all jobs at the Future of Humanity Institute


American Association for the Advancement of Science
Various, but particularly Washington, DC

Science & Technology Policy Fellowships
AAAS Science & Technology Policy Fellowships are one year placements for scientists and engineers in the executive, legislative, and judicial branches of the U.S. federal government in Washington DC. Fellows learn first-hand about policymaking while contributing their knowledge and analytical skills to the federal policymaking process.

You’ll need a PhD in a scientific field, or a master’s in engineering plus three years of professional engineering experience. If you have a background in AI, there’s a good chance you’ll be placed in an AI-relevant organisation. We have heard that these fellowships are a good way to develop career capital, experience, and an understanding of how the political system works.

See details


Legal or Policy Intern
Internships at the White House Office of Science and Technology Policy (OSTP) provide a coveted opportunity to work with senior White House officials, policy analysts, or the legal team on how science and technology intersect with the economy, national security, homeland security, health, foreign relations, and the environment. OSTP is likely to be one of the most important offices in determining the regulatory response to AI, and the internship is an excellent launch pad into other science policy positions.

See details


Researcher
The Foundational Research Institute is one of the few organisations working on how to minimise the risks of large-scale accidents or misuse of AI systems. As a researcher you’ll contribute to studying the scenarios in which the development of advanced machine intelligence could negatively impact people and animals, and how to mitigate those risks. Remote working is available.

See details


Affiliate
The Global Catastrophic Risk Institute (GCRI) is a nonpartisan think tank that aims to reduce the risk of events large enough to significantly harm or even destroy human civilization at the global scale. They’re seeking Junior Affiliates and paid senior “Associates” (at the doctoral level or equivalent) to collaborate on reducing these risks in their focus areas (global warming, nuclear war, pandemics, and artificial intelligence), as well as determining how to assess and compare them (their ‘Integrated Assessment’).

See details


AI Policy Lead
The Future of Life Institute does a combination of communications and grant-making to organisations in the AI safety space, in addition to work on the risks from nuclear war and pandemics. They’re looking for an AI Policy Lead to drive the development and implementation of AI policy and advocacy initiatives.

See details


Research Associate, Technology & National Security Program
The Center for a New American Security is a bipartisan think tank working on national security and defence policy. Their Technology and National Security Program looks at how new technologies such as artificial intelligence affect national security. One of their adjunct fellows co-authored this report on AI and security for the US Intelligence Advanced Research Projects Activity. They’re looking for a research associate.

See details


Mozilla
Remote

Tech Policy Fellowship
A fellowship for people with extensive experience in tech policy. Fellows spend a year working independently on policy issues they’re interested in, collaborating with Mozilla’s policy teams and wider network. It could be a good place to do independent AI policy work.

See details


University of Agder
Kristiansand and Grimstad, Norway

PhD research fellow in ethics and artificial intelligence
A PhD programme working on the ethics of artificial intelligence. The ideal background would include both philosophy and artificial intelligence.

See details


The AI Now Institute does interdisciplinary research on the social implications of artificial intelligence. They are focused mainly on short-term AI policy issues such as AI and civil liberties, work and automation, and the potential for bias.

Postdoctoral Fellowships
You’ll work on the social and economic implications of AI. Examples of possible research areas include the geopolitical implications of AI, how AI will shape work, and bias and fairness in algorithmic systems. It’s one of the best places to work on these kinds of issues.

See details


Google
Multiple locations

Public Policy Manager (Artificial Intelligence and Emerging Technology)
Mountain View or San Francisco
You’ll work in Google’s Public Policy team advising on policy issues and developing Google’s policy strategies. Although you’ll be working on short-term AI policy, you’ll get valuable practical policy experience for later work on more long-term issues.

See details


Future Advocacy helps organisations that are working on humanity’s biggest challenges improve their advocacy. One of their projects is advocating for policies to manage the future of AI.

Research, Advocacy, and Communications Coordinator
You’ll work on a variety of tasks including research, social media, and organising events. This would be a good fit for a recent graduate.

See details


Cybersecurity is an important area of study for AI safety, and the Center for Long-Term Cybersecurity is one of the best places in the world to work on it. It’s a particularly good place to research these issues, as it works closely with the Center for Human-Compatible AI, a leading AI safety research lab.

Researcher
You’ll study emerging issues in cybersecurity, making connections between technical developments and their implications for policy and society. They are particularly interested in hiring people in four priority areas: the security implications of artificial intelligence and machine learning; the cyber talent pipeline; new governance and regulatory regimes in cybersecurity; and helping vulnerable people online. They’re looking for both junior and senior researchers.

See details

Are you an employer?
Get in touch to let us know about a high impact job.

Organisations we recommend

Some of the best jobs are never advertised and are created for the right applicants, so here is our list of some of the best organisations within each of our recommended problem areas. These are all potentially very high-impact places to work (in any role), and many can also help you to develop great career capital. To see why we picked these organisations, read the full problem profile.

  • GiveWell conducts in-depth research to find the best charities that help people in the developing world. See current vacancies. Its partner the Open Philanthropy Project researches giving opportunities in fields other than global health and poverty. See current vacancies. Disclaimer of conflict of interest: we are being considered for a grant by the Open Philanthropy Project.
  • 80,000 Hours – yes, that’s us. We do research into the careers which do the most good and help people pursue them. If you’d like to express interest in working with us, fill out this short form.
  • The Centre for Effective Altruism conducts research into fundamental questions on how to do the most good, and encourages donations to the best available charities working on priority problems. It includes the project Giving What We Can, which encourages people to pledge 10% of their income to the most effective organisations for helping others. See current vacancies. If you’d like to express interest in working at the Centre for Effective Altruism, fill out this short form. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism.
  • Effective Altruism Foundation promotes effective altruist ideas across the German-speaking world. See current vacancies.
  • Founders Pledge encourages entrepreneurs to make a legally binding commitment to donate at least 2% of their personal proceeds to charity when they sell their business.



Didn’t find anything?
Let us know what you were looking for, or see the full list of organisations we sometimes recommend.

See our methodology here.
