Never miss a high impact job
Get our curated list of job openings sent to your inbox once a month.
The most exciting jobs we know about
These are important positions, at our recommended organisations, within our list of the most urgent problems. They’re all very competitive, but if you’re a good fit for one, it could be your highest-impact option.
Help a top-rated charity prevent thousands of cases of malaria

AMF has been able to scale up to $50m per year with only two full-time staff, protecting millions of people from malaria. AMF could scale even further using funding from GiveWell, but is constrained by the time of its two current staff.
This role is an opportunity to add capacity to the team and help them scale up even faster, while also learning from an extremely productive team. You’ll have a wide range of responsibilities, from talking to AMF’s distribution partners in Africa, to preparing financial reports.
You might be a good fit for this role if you have exceptional organisational skills, are highly analytical, and are highly motivated to help AMF succeed.
AMF is also hiring an IT developer, a role we consider similarly high-impact.
Lead a new institute to conduct important AI research

The Assistant Director will play a crucial role in the early stages of establishing this institute, autonomously managing its day-to-day operations. You’ll need to be highly organised to coordinate with the university bureaucracy, manage finances, run events, draft grants, hire new staff, and manage collaborations with other organisations. Basically, you’ll handle anything that needs to be done.
We think this role will also provide you with excellent career capital: intelligent coworkers and contacts within an important field, a high degree of autonomy, and the opportunity to manage a large, fast-growing organisation.
Advise tech founders on where to donate $10M+

The Deployment Coordinator advises founders on where to donate when they exit their companies, as well as on the mechanics of the donation. This position provides an opportunity to present effective altruist giving opportunities, encourage wealthy individuals to give more at a crucial moment, and discuss their next career steps after exit. We estimate that founders who have taken the pledge will donate at least $10m next year, increasing to over $30m per year within the coming years, so even a small improvement in how well the funds are spent could be hugely impactful.
You’ll also have the opportunity to learn how to promote effective giving from a team with a great track record, and to learn about charity evaluation while managing a team of researchers.
Other roles
Founders Pledge is also hiring for its growth and community teams, as well as for generalist researcher roles.
With the exception of the role below, these jobs are not yet advertised, but if you think you would be a good fit and want to find out more, email peter.mcintyre@80000hours.org explaining why, attaching your CV or a link to your LinkedIn profile.
Country Manager for Germany
Berlin
Founders Pledge is expanding to Germany, an important European startup hub. You’ll need strong interpersonal skills and grit, as your impact in this role will come through marketing and sales.
Raise money for top-rated global health charities

Research Analysts with an outreach focus work closely with GiveWell’s research team and use their in-depth understanding of the research to communicate it to donors, as well as others who rely on GiveWell’s work, such as researchers or the media.
This role has the potential to raise millions of dollars for GiveWell’s top recommended charities, and someone with an especially good fit could substantially increase this figure. GiveWell has also consistently reported being talent-constrained rather than funding-constrained. It’s also a great opportunity to learn about the cutting edge of charity evaluation and how to promote effective philanthropy.
See details
Disclaimer: 80,000 Hours is being considered by the Open Philanthropy Project, which works closely with GiveWell, to receive a grant.
Help GiveWell run more efficiently to fund evidence-based global health programmes

The Operations and Legal Program Manager will contribute to the smooth running of the organisation by recommending and implementing improvements and managing projects in many domains for which the operations team is responsible. We see this as a high-impact position because around $100m (and growing) is allocated on the basis of GiveWell’s recommendations each year, so even modest improvements to the efficiency of the organisation could lead to millions of dollars of extra donations. GiveWell also consistently reports being highly talent-constrained rather than funding-constrained.
See details
Disclaimer: 80,000 Hours is being considered by the Open Philanthropy Project, which works closely with GiveWell, to receive a grant.
Allocate $10M+/year to promote safe emerging technology research

Program Managers create, run and advocate for IARPA’s funding programmes. A typical programme involves tens of millions of dollars of funding. You’ll also be able to establish a career in the intelligence community, one of the major players in forecasting and mitigating global catastrophic risks.
Historically, successful Program Manager candidates have had either a PhD or substantial expertise in a relevant area of scientific research. To be accepted, you’ll need to develop an example funding proposal.
Help a large, innovative foundation run more efficiently

The Grants Associate will help process philanthropic grants totalling several tens of millions of dollars per year. They will coordinate between Open Philanthropy staff and grantee organisations to collect all necessary information, ensure legal compliance, and free up time for the research staff.
As a new organisation with plenty of funding, Open Philanthropy is mainly constrained by how fast it can identify and train people who fit the culture and mission. They report being highly talent-constrained rather than funding-constrained. By taking this role you will be helping them scale up more rapidly, so if you’re a good fit for the role, it may be one of the highest-impact things you can do. You’ll also be able to learn from the world-class GiveWell team and Open Philanthropy’s expertise on effective altruism.
See details
Disclaimer: 80,000 Hours is being considered by the Open Philanthropy Project to receive a grant.
Work on neglected and cutting-edge research in machine learning

Multiple positions (see below)
If you could contribute to this research, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained. Beyond this technical impact, it can also be effective to work alongside technical teams to influence the culture towards greater concern for safety and positive social impact.
What follows are some of the best job openings within the leading organisations that undertake technical research relevant to the control problem.
To get these research positions, you will probably need a PhD in a relevant quantitative subject, such as computer science, maths or statistics, as well as familiarity with the latest research into AI safety. The engineering positions usually require at least a few years’ experience (except internships), and a varying degree of knowledge of machine learning. If you’re not at that stage yet, read our profile on how to enter AI safety research.

Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.
Research Scientist
Research Scientists at DeepMind set the research agenda, exploring cutting-edge machine learning and other AI techniques to solve real-world problems. On the safety team, you would work closely with the DeepMind founders to explore how to ensure that as AI systems become increasingly powerful, they work for the benefit of all.
Program Manager
Program Managers at DeepMind do what’s necessary to facilitate novel research, using their exceptional organisational skills to coordinate projects and manage people and deadlines. We see this position as important because i) DeepMind is an important organisation in AI safety, and ii) it has great potential for building your skills and using them to work with safety researchers.
Research Engineer
A Research Engineer’s potential for positive impact comes through similar mechanisms to a Program Manager’s, but rather than providing organisational support, you would primarily develop the algorithms, applications, and software used by the research team.

UC Berkeley is one of the top schools for Computer Science, and the goal of CHAI is to ensure that as the capabilities of AI systems increase, they continue to operate in a way which is beneficial to humanity.
Postdoctoral Researcher
Postdoctoral Researchers will have considerable freedom to explore topics within this area, allowing them to work on the control problem.
Successful candidates will work with the CHAI Director, Stuart Russell, or with one of the co-Principal Investigators at Berkeley: Pieter Abbeel, Anca Dragan, or Tom Griffiths.

The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from AI in the early 2000s, and has published a number of papers on safety issues and how to resolve them.
Research Fellow
Research Fellows work to make progress on the alignment problem, which involves novel research on a number of open problems in computer science, decision theory, mathematical logic, and other fields. MIRI is looking for Research Fellows for both its traditional and machine learning agendas. MIRI selects candidates primarily on mathematical talent and weighs traditional academic backgrounds less heavily than other research positions in AI safety do. So if you think you’d enjoy research in a less traditional academic environment, and would get along with the team, this is an excellent path to contributing to a solution to the control problem.

OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe and ensuring the benefits are as widely distributed as possible. It has received $1 billion in funding commitments from the technology community, and is one of the leading organisations working on general AI development.
Machine Learning (Researcher)
This position is a research role with a broad remit, so it’s an opportunity to do vital work on the ‘control problem’ of ensuring AI safety. To get this role, you need to have shown exceptional achievements in a quantitative field (not necessarily ML/AI), and to have a shot at becoming a world expert. As a result, this role is extremely competitive, but if you can get it, it’s likely to be one of your highest-impact options.
Special Projects
The Special Projects role involves working on one of the projects listed here, which are “problem areas likely to be important both for advancing AI and for its long-run impact on society”. They were formulated by experts we believe are trying to improve the long-run safety and consequences of AI/ML systems, and we see this as a concrete area in which those with the relevant background can contribute to their improvement.

The Future of Humanity Institute at Oxford University was founded by Professor Nick Bostrom, author of Superintelligence. It has a number of academic staff conducting both technical and strategic research. AI safety is one of its focus areas.
Reinforcement Learning Intern
Interns on the technical research team have the opportunity to contribute to work on a specific project in reinforcement learning (RL). Previous interns have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. We see this as an outstanding opportunity to work with an excellent team, improve your technical skills, and explore ideas with some of the key players in AI safety, while strengthening your application to graduate school.
Ensure regulatory responses to emerging technologies benefit everyone

Multiple positions (see below)
The following is a list of positions focused on policy and strategy research relevant to AI. If you could make a contribution to this area, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained.

Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.
Policy Researcher
In this role you would play a key part in researching the societal impacts of AI and advocating for actions that ensure its use benefits everyone as much as possible. An example of this kind of research is examining what effects automation will have on the economy and what governments should do to prepare.
The role will become more important over time as AI systems have an increasingly large impact on society, and will allow you to conduct novel research into the immediate and long-run impacts of AI developments and to define strategic responses to them.
You’d be a good fit for this role if you have a strong understanding of the impact of emerging technology on society, experience in a policy setting, and an ability to synthesise information from a range of fields. As this is predominantly a stakeholder engagement position, you’ll also need outstanding communication and interpersonal skills.
DeepMind’s reputation, connections, opportunities to conduct novel and important research, focus on learning and professional development, and talented employees will also give you outstanding career capital.

Legal or Policy Intern
The OSTP internships provide a coveted opportunity to work with senior White House officials, policy analysts, or the legal team on how science and technology intersect with the economy, national security, homeland security, health, foreign relations, and the environment. It’s likely to be one of the most important departments in determining the regulatory response to AI. It’s also an excellent launch pad into other science policy positions.

CFI is a multidisciplinary research institute founded at the University of Cambridge with a £10 million grant from the Leverhulme Trust. As well as collaborating with researchers at the Centre’s sister organisation, The Centre for the Study of Existential Risk, CFI works with centres at Oxford, Berkeley, and Imperial College London to examine the impacts of AI on humanity over the coming decades.
Research Assistant/Associate, Impact of AI
Research Assistants / Associates will investigate areas related to the impact, ethics, or nature of AI. This is a new position that will enable you to pursue your own research objectives while developing new projects and programmes as CFI’s ‘Research Programme Coordinator’. Given the freedom to set a research agenda for yourself and for future researchers, the right candidate could steer this research towards higher-impact areas.

Researcher
IVADO, the parent organisation of the Montreal Institute for Learning Algorithms and one of the top AI labs, is searching for a researcher to lead its investigation into the ethics of AI. As an early hire in this area and organisation, you would be one of the few researchers exploring the ethics of AI, with the opportunity to create a new research agenda and culture that focuses on all of the implications of advanced AI, not just those that will occur in the short term.

Researcher
The Foundational Research Institute is one of the few organisations working on how to minimise the risks of large scale accidents or misuse of AI systems. As a researcher you’ll contribute to studying the scenarios in which development of advanced machine intelligence could negatively impact people and animals, and how to mitigate those scenarios. Remote working is available.

Affiliate
The Global Catastrophic Risk Institute (GCRI) is a nonpartisan think tank that aims to reduce the risk of events large enough to significantly harm or even destroy human civilization at the global scale. They’re seeking Junior Affiliates and paid senior “Associates” (at the doctoral level or equivalent) to collaborate on reducing these risks in their focus areas (global warming, nuclear war, pandemics, and artificial intelligence), as well as determining how to assess and compare them (their ‘Integrated Assessment’).

Project Manager
The AI Index, an offshoot of the AI100 project, is a new effort to measure AI progress over time in a factual, objective fashion. The committee (comprising leading scientists like Eric Horvitz and Erik Brynjolfsson) is seeking a project manager for the first stage of defining the index. This is an important role for improving AI safety because tracking AI progress over time is likely to improve forecasting.
From the announcement:
“The tasks involved are to assist the committee in assembling relevant data sets, through both primary research online and special arrangements with specific dataset owners. The position calls for being comfortable with datasets, strong interpersonal and communication skills, and an entrepreneurial spirit. The person would be hired by Stanford University and report to Professor emeritus Yoav Shoham. The position is for an initial period of six months, at full time employment, though a slightly lower time commitment is also possible. Interested candidates are invited to send their resumés to Ray Perrault at ray.perrault@sri.com.”
See details
(Search for AI Index)
Are you an employer?
Get in touch to let us know about a high impact job.
Organisations we recommend
Some of the best jobs are never advertised and are created for the right applicants, so here is our list of some of the best organisations within each of our recommended problem areas. These are all potentially very high-impact places to work (in any role), and many can also help you to develop great career capital. To see why we picked these organisations, read the full problem profile.
Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically. See current vacancies. Google Brain is another deep learning research project at Google. See current vacancies.
The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from machine intelligence in the early 2000s, and has published a number of papers on safety issues and how to resolve them. See current vacancies.
The Future of Humanity Institute at Oxford University was founded by Professor Nick Bostrom, author of Superintelligence. It has a number of academic staff conducting both technical and strategic research. See current vacancies.
OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe and freely sharing the information. It has received $1 billion in funding commitments from the technology community. See current vacancies.
The Future of Life Institute does a combination of communications and grant-making to organisations in the AI safety space, in addition to work on the risks from nuclear war and pandemics. See current vacancies.
The Cambridge Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence at Cambridge University house academics studying both technical and strategic questions related to AI safety. See current vacancies.
The Berkeley Center for Human-Compatible Artificial Intelligence is very new, but intends to conduct primarily technical research, with a budget of several million dollars a year. See current vacancies.
Allan Dafoe at Yale University (Research Associate with FHI, Oxford University) is conducting research on the ‘global politics of AI’, including its effects on international conflict. PhD or research assistant positions may be available – contact global.politics.ai@gmail.com for more information.
AI Impacts works on forecasting progress in machine intelligence and predicting its likely impacts.
- GiveWell conducts in-depth research to find the best charities that help people in the developing world. See current vacancies. Its partner the Open Philanthropy Project researches giving opportunities in fields other than global health and poverty. See current vacancies. Disclaimer of conflict of interest: we are being considered for a grant by the Open Philanthropy Project.
- 80,000 Hours – yes, that’s us. We do research into the careers which do the most good and help people pursue them. If you’d like to express interest in working with us, fill out this short form.
- The Centre for Effective Altruism conducts research into fundamental questions on how to do the most good, and encourages donations to the best charities available working on priority problems. It includes the project Giving What We Can, which encourages people to pledge 10% of their income to the most effective organisations for helping others. See current vacancies. If you’d like to express interest in working at the Centre for Effective Altruism, fill out this short form. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism.
- Effective Altruism Foundation promotes effective altruist ideas across the German-speaking world. See current vacancies.
- Founders Pledge encourages entrepreneurs to make a legally binding commitment to donate at least 2% of their personal proceeds to charity when they sell their business.
- Open Philanthropy Project, which advises Good Ventures, a several-billion-dollar foundation, on its philanthropy. See current vacancies. Disclaimer of conflict of interest: we are being considered for a grant by the Open Philanthropy Project.
- Centre for Effective Altruism (our parent organisation), which is developing quantitative models for prioritising global problems. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism. See current vacancies.
- Future of Humanity Institute, which does macrostrategy research to analyse the long-term outcomes of present day actions. See current vacancies.
- Copenhagen Consensus Center, which brings together top economists to assess which solutions to global problems are most effective in helping those in developing countries. See current vacancies.
Advocacy for animals on factory farms
- Animal Charity Evaluators conducts research to find the highest impact ways to help non-human animals. See current vacancies.
- The Humane League runs programmes, such as corporate campaigns and grassroots outreach, that aim to persuade individuals and organisations to adopt behaviours that reduce farmed animal suffering. See current vacancies.
- Sentience Politics is an anti-speciesist political think tank that advocates for more humane treatment of all sentient beings. See current vacancies.
- Animal Equality is a farmed animal advocacy organisation that conducts undercover investigations and promotes them online and through media outlets, as well as doing grassroots outreach. See current vacancies.
- Mercy for Animals engages in a variety of farmed animal advocacy programmes through undercover investigations of factory farms, legal advocacy and corporate outreach campaigns. See current vacancies.
Development of meat substitutes
- The Good Food Institute seeks out entrepreneurs and scientists to join or form start-ups focused on producing plant-based and cultured meat, and provides advice and lobbying to help them succeed. See current vacancies.
- Impossible Foods is creating plant-based meat and dairy alternatives, and has already created the widely-acclaimed Impossible Burger. See current vacancies.
- Beyond Meat creates plant-based meat alternatives that are sold in Whole Foods stores in the US. See current vacancies.
- New Harvest supports, funds, and promotes the development of animal products made without animals, such as cultured meat, milk and egg whites.
- Hampton Creek develops plant based animal product alternatives, such as vegan mayo, cookies and salad dressing. See current vacancies.
- The Center for Health Security (CHS) received a $16 million grant from the Open Philanthropy Project, who see CHS “as the preeminent U.S. think tank doing policy research and development in the biosecurity and pandemic preparedness (BPP) space”. See current vacancies.
- Blue Ribbon Study Panel on Biodefense is a panel that analyses the United States’ defense capabilities against biological threats, and recommends and lobbies for improvements.
- The Cambridge Centre for the Study of Existential Risk at Cambridge University houses academics studying both technical and strategic questions related to biosecurity. See current vacancies.
- The Future of Humanity Institute (FHI) at Oxford University conducts multidisciplinary research on how to ensure a positive long-run future. With the recent hire of Piers Millett, FHI is looking to expand its research and policy functions to reduce catastrophic risks from biotechnology. See current vacancies.
- Global Catastrophic Risk Institute is an independent research institute that investigates how to minimise the risks of large scale catastrophes. See current vacancies.
- The Nuclear Threat Initiative is a U.S. non-partisan think tank that works to prevent catastrophic attacks and accidents with nuclear, biological, radiological, chemical and cyber weapons of mass destruction and disruption. See current vacancies.
- The Skoll Global Threats Fund provides funding for work tackling climate change, pandemics, water scarcity, nuclear proliferation and conflicts in the Middle East. See current vacancies.
- Intelligence Advanced Research Projects Activity (IARPA) is a government agency that funds research relevant to the U.S. intelligence community. It has sponsored research on how to improve biosecurity and pandemic preparedness. See current vacancies.
Note: Our investigation of this area is only shallow, so we are not confident in our analysis and recommendations. See the Open Philanthropy Project’s overview of this area for more detail and a longer list of organisations.
- The Nuclear Threat Initiative is a U.S. non-partisan think tank that works to prevent catastrophic attacks and accidents with nuclear, biological, radiological, chemical, and cyber weapons of mass destruction and disruption. See current vacancies.
- Ploughshares Fund is the largest U.S. philanthropic foundation focused exclusively on peace and security grantmaking. It supports initiatives to reduce current nuclear arsenals and to limit the likelihood of nuclear war (and, to a lesser extent, risks from chemical and biological weapons). See current vacancies.
- The Future of Life Institute works to reduce the risks from a number of areas, in particular nuclear war, pandemics, and advanced AI. See current vacancies.
- Global Catastrophic Risk Institute is an independent research institute that investigates how to minimise the risks of large scale catastrophes, such as from nuclear war. See current vacancies.
- GiveWell conducts thorough research to find the best charities available to help people in the developing world. See current vacancies.
- The Center for Global Development is a U.S. nonprofit think tank that focuses on international development. See current vacancies.
- Against Malaria Foundation is one of charity evaluator GiveWell’s top charities and provides funding for antimalarial bed net distributions.
- Schistosomiasis Control Initiative is one of charity evaluator GiveWell’s top charities and works with governments across Sub-Saharan Africa and Yemen to develop national schistosomiasis control programmes.
- Evidence Action scales proven interventions to improve life for the global poor. Their Deworm the World Initiative is one of GiveWell’s top-rated charities. See current vacancies.
- Innovations for Poverty Action is a non-profit research and policy organisation which, since its inception in 2002, has conducted over 600 randomised controlled trials and other evaluations. See current vacancies.
- GiveDirectly, one of GiveWell’s top-rated charities, distributes unconditional cash transfers to people living in East Africa. See current vacancies.
- Wave allows immigrants to send money from North America to East Africa with much lower fees than if they used Western Union (saving their customers $2.33 for each dollar of revenue) and is run by a member of our community. Read more about Wave.
- The Life You Can Save, based on Peter Singer’s namesake book, raises awareness of extreme poverty and encourages effective giving.
Didn’t find anything?
Let us know what you were looking for, or see the full list of organisations we sometimes recommend.
See our methodology here.