The most exciting jobs we know about
These are important positions, at our recommended organisations, within our list of the most urgent problems. They’re all very competitive, but if you’re a good fit for one, it could be your highest-impact option.
Last updated: August 2017
Help a top-rated charity prevent thousands of cases of malaria
Against Malaria Foundation
AMF has been able to scale up to $50m per year with only two full-time staff, protecting millions of people from malaria. AMF could scale even further using funding from GiveWell, but is constrained by the time of its current two staff.
This role is an opportunity to add capacity to the team and help them scale up even faster, while also learning from an extremely productive team. You’ll have a wide range of responsibilities, from talking to AMF’s distribution partners in Africa, to preparing financial reports.
You might be a good fit for this role if you have exceptional organisational skills, are highly analytical, and are highly motivated to help AMF succeed.
AMF is also hiring an IT developer, a role we also consider to be very high impact.
Lead a new institute to conduct important AI research
UC Berkeley Center for Human-Compatible AI
The Assistant Director will play a crucial role in the early stages of establishing this institute, autonomously managing its day-to-day operations. You’ll need to be highly organised to coordinate with the university bureaucracy, manage finances, run events, draft grants, hire new staff, and manage collaborations with other organisations. Basically, you’ll handle anything that needs to be done.
We think this role will also provide you with excellent career capital: intelligent coworkers and contacts within an important field, a high degree of autonomy, and the opportunity to manage a large, fast-growing organisation.
Advise tech founders on where to donate $10M+
The Deployment Coordinator advises founders, when they exit their companies, on where to donate, as well as on the mechanics of the donation. This position provides an opportunity to present effective altruist giving opportunities, encourage wealthy individuals to give more at a crucial moment, and discuss their next career steps after exit. We estimate that founders who have taken the pledge will donate at least $10m next year, increasing to over $30m per annum within the coming years. So a small increase in how well the funds are spent could be hugely impactful.
You’ll also have the opportunity to learn how to promote effective giving from a team with a great track record, and to learn about charity evaluation while managing a team of researchers.
Founders Pledge is also hiring for their growth and community teams, as well as generalist researchers.
With the exception of the position below, these jobs are not yet advertised, but if you think you would be a good fit, apply here.
Communicate about in-depth research on global health and development
Research Analysts with an outreach focus work closely with GiveWell’s research team and use their in-depth understanding of the research to communicate it to donors, as well as others who rely on GiveWell’s work, such as researchers or the media.
This role has the potential to raise millions of dollars for GiveWell’s top recommended charities, and finding someone with especially good fit could substantially increase this figure. GiveWell has also consistently reported being talent-constrained rather than funding-constrained. It’s also a great opportunity to learn about the cutting-edge of charity evaluation and how to promote effective philanthropy.
Disclaimer: 80,000 Hours is being considered for a grant by the Open Philanthropy Project, which works closely with GiveWell.
Allocate $10M+/year to promote safe emerging technology research
Intelligence Advanced Research Projects Agency (IARPA)
Program Managers create, run and advocate for IARPA’s funding programmes. A typical programme involves tens of millions of dollars of funding. You’ll also be able to establish a career in the intelligence community, one of the major players in forecasting and mitigating global catastrophic risks.
Program Managers who have historically been accepted either have a PhD or substantial expertise in a relevant area of scientific research. To be admitted, you’ll need to develop an example funding proposal.
Work on neglected and cutting-edge research in machine learning
Multiple positions (see below)
If you could contribute to technical AI safety research, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, while the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained. Beyond this technical impact, it can also be effective to work alongside technical teams to influence their culture towards greater concern for safety and positive social impact.
What follows are some of the best job openings within the leading organisations that undertake technical research relevant to the control problem.
To get these research positions, you will probably need a PhD in a relevant quantitative subject, such as computer science, maths or statistics, as well as familiarity with the latest research into AI safety. The engineering positions usually require at least a few years’ experience (except internships), and a varying degree of knowledge of machine learning. If you’re not at that stage yet, read our profile on how to enter AI safety research.
Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.
Research Scientists at DeepMind set the research agenda, exploring cutting-edge machine learning and other AI techniques to solve real-world issues. On the safety team, you would work closely with the DeepMind founders to explore how to ensure that as AI systems become increasingly powerful, they work for the benefit of all.
Program Managers at DeepMind do what’s necessary to facilitate novel research, using their exceptional organisational skills to coordinate projects and manage people and deadlines. We see this position as important because i) DeepMind is an important organisation in AI safety, and ii) it has great potential for building your skills and using them to work with safety researchers.
Being a Research Engineer would be similar to being a Program Manager in the mechanisms for potential positive impact, but rather than providing organisational research support you would primarily be developing the algorithms, applications, and software used by the research team.
UC Berkeley is one of the top schools for Computer Science, and the goal of CHAI is to ensure that as the capabilities of AI systems increase, they continue to operate in a way which is beneficial to humanity.
Postdoctoral Researchers will have considerable freedom to explore topics within this area, allowing them to work on the control problem.
Successful candidates will work with the CHAI Director, Stuart Russell, or with one of the Berkeley co-Principal Investigators, Pieter Abbeel, Anca Dragan, and Tom Griffiths.
The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from AI in the early 2000s, and has published a number of papers on safety issues and how to resolve them.
Research Fellows work to make progress on the alignment problem, which involves novel research on a number of open problems in computer science, decision theory, mathematical logic, and other fields. MIRI is looking for Research Fellows for both its traditional and machine learning agendas. MIRI selects candidates primarily on mathematical talent and weighs traditional academic backgrounds less heavily than other research positions in AI safety do. So if you think you’d enjoy research in a non-traditional academic environment, and would get along with the team, this is an excellent path to contribute to solving the control problem.
San Francisco, CA
OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe and ensuring the benefits are as widely distributed as possible. It has received $1 billion in funding commitments from the technology community, and is one of the leading organisations working on general AI development.
Machine Learning (Researcher)
This position is a research role with a broad remit, so it’s an opportunity to do vital work on the ‘control problem’ of ensuring AI safety. To get this role, you need to have shown exceptional achievements in a quantitative field (not necessarily ML/AI), and have a shot at being a world expert. For this reason, this role is incredibly competitive, but if you can get it, it’s likely to be one of your highest-impact options.
The Special Projects role involves working on one of the projects listed here, which are “problem areas likely to be important both for advancing AI and for its long-run impact on society”. They were formulated by experts we believe are trying to improve the long-run safety and consequences of AI/ML systems, and we see this as a concrete area in which those with the relevant background can contribute to their improvement.
Ensure regulatory responses to emerging technologies benefit everyone
Multiple positions (see below)
The following is a list of positions focused on policy and strategy research relevant to AI. If you could make a contribution to this area, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained.
Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.
In this role you would play a key part in researching the societal impacts of AI and advocating for actions that ensure its use benefits others as much as possible. An example of this kind of research could be looking at what effects automation will have on the economy and what governments should do to prepare.
This role will become more important over time as AI systems have an increasingly large impact on society, allowing you to conduct novel research and define strategy and responses to the immediate and long-run impacts of AI developments.
You’d be a good fit for this role if you have a strong understanding of the impact of emerging technology on society, experience in a policy setting, and an ability to synthesise information from a range of fields. As this is predominantly a stakeholder engagement position, you’ll also need outstanding communication and interpersonal skills.
DeepMind’s reputation, connections, opportunities to conduct novel and important research, focus on learning and professional development, and talented employees will also give you outstanding career capital.
Various, but particularly Washington, DC
Science & Technology Policy Fellowships
AAAS Science & Technology Policy Fellowships are one year placements for scientists and engineers in the executive, legislative, and judicial branches of the U.S. federal government in Washington DC. Fellows learn first-hand about policymaking while contributing their knowledge and analytical skills to the federal policymaking process.
You’ll need a PhD in a scientific field, or a master’s in engineering plus three years of professional engineering experience. If you have a background in AI, there’s a good chance you’ll be placed in an AI-relevant organization. We have heard that these fellowships are a good way to develop career capital, experience, and understanding of how the political system works.
Legal or Policy Intern
The OSTP internships provide a coveted opportunity to work with senior White House officials, policy analysts, or the legal team on how science and technology intersect with the economy, national security, homeland security, health, foreign relations, and the environment. It’s likely to be one of the most important departments in determining the regulatory response to AI. It’s also an excellent launch pad into other science policy positions.
The Foundational Research Institute is one of the few organisations working on how to minimise the risks of large scale accidents or misuse of AI systems. As a researcher you’ll contribute to studying the scenarios in which development of advanced machine intelligence could negatively impact people and animals, and how to mitigate those scenarios. Remote working is available.
The Global Catastrophic Risk Institute (GCRI) is a nonpartisan think tank that aims to reduce the risk of events large enough to significantly harm or even destroy human civilization at the global scale. They’re seeking Junior Affiliates and paid senior “Associates” (at the doctoral level or equivalent) to collaborate on reducing these risks in their focus areas (global warming, nuclear war, pandemics, and artificial intelligence), as well as determining how to assess and compare them (their ‘Integrated Assessment’).
Are you an employer?
Get in touch to let us know about a high impact job.
Organisations we recommend
Some of the best jobs are never advertised and are created for the right applicants, so here is our list of some of the best organisations within each of our recommended problem areas. These are all potentially very high-impact places to work (in any role), and many can also help you to develop great career capital. To see why we picked these organisations, read the full problem profile.
Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically. See current vacancies. Google Brain is another deep learning research project at Google. See current vacancies.
The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from machine intelligence in the early 2000s, and has published a number of papers on safety issues and how to resolve them. See current vacancies.
The Future of Humanity Institute at Oxford University was founded by Professor Nick Bostrom, author of Superintelligence. It has a number of academic staff conducting both technical and strategic research. See current vacancies.
OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe and freely sharing the information. It has received $1 billion in funding commitments from the technology community. See current vacancies.
The Future of Life Institute does a combination of communications and grant-making to organisations in the AI safety space, in addition to work on the risks from nuclear war and pandemics. See current vacancies.
The Cambridge Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence at Cambridge University house academics studying both technical and strategic questions related to AI safety. See current vacancies.
The Berkeley Center for Human-Compatible Artificial Intelligence is very new, but intends to conduct primarily technical research, with a budget of several million dollars a year. See current vacancies.
Allan Dafoe at Yale University (Research Associate with FHI, Oxford University), who is conducting research on ‘global politics of AI’, including its effects on international conflict. PhD or research assistant positions may be available – contact firstname.lastname@example.org for more information.
AI Impacts works on forecasting progress in machine intelligence and predicting its likely impacts.
- GiveWell conducts in-depth research to find the best charities that help people in the developing world. See current vacancies. Its partner the Open Philanthropy Project researches giving opportunities in fields other than global health and poverty. See current vacancies. Disclaimer of conflict of interest: we are being considered for a grant by the Open Philanthropy Project.
- 80,000 Hours – yes, that’s us. We do research into the careers which do the most good and help people pursue them. If you’d like to express interest in working with us, fill out this short form.
- The Centre for Effective Altruism conducts research into fundamental questions on how to do the most good, and encourages donations to the best charities available working on priority problems. It includes the project Giving What We Can, which encourages people to pledge 10% of their income to the most effective organisations for helping others. See current vacancies. If you’d like to express interest in working at the Centre for Effective Altruism, fill out this short form. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism.
- Effective Altruism Foundation promotes effective altruist ideas across the German-speaking world. See current vacancies.
- Founders Pledge encourages entrepreneurs to make a legally binding commitment to donate at least 2% of their personal proceeds to charity when they sell their business.
- Open Philanthropy Project, which advises GoodVentures, a several billion dollar foundation, on its philanthropy. See current vacancies. Disclaimer of conflict of interest: we are being considered for a grant by the Open Philanthropy Project.
- Centre for Effective Altruism (our parent organisation), which is developing quantitative models for prioritising global problems. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism. See current vacancies.
- Future of Humanity Institute, which does macrostrategy research to analyse the long-term outcomes of present day actions. See current vacancies.
- Copenhagen Consensus Center, which brings together top economists to assess which solutions to global problems are most effective in helping those in developing countries. See current vacancies.
Advocacy for animals on factory farms
- Animal Charity Evaluators conducts research to find the highest impact ways to help non-human animals. See current vacancies.
- The Humane League runs programmes, such as corporate campaigns and grassroots outreach, that aim to persuade individuals and organisations to adopt behaviours that reduce farmed animal suffering. See current vacancies.
- Sentience Politics is an anti-speciesist political think tank that advocates for more humane treatment of all sentient beings. See current vacancies.
- Animal Equality is a farmed animal advocacy organisation that conducts undercover investigations and promotes them online and through media outlets, as well as doing grassroots outreach. See current vacancies.
- Mercy for Animals engages in a variety of farmed animal advocacy programmes through undercover investigations of factory farms, legal advocacy and corporate outreach campaigns. See current vacancies.
Development of meat substitutes
- The Good Food Institute seeks out entrepreneurs and scientists to join or form start-ups focused on producing plant-based and cultured meat, and provides advice and lobbying to help them succeed. See current vacancies.
- Impossible Foods is creating plant-based meat and dairy alternatives, and has already created the widely-acclaimed Impossible Burger. See current vacancies.
- Beyond Meat creates plant-based meat alternatives that are sold in Whole Foods stores in the US. See current vacancies.
- New Harvest supports, funds, and promotes the development of animal products made without animals, such as cultured meat, milk and egg whites.
- Hampton Creek develops plant-based animal product alternatives, such as vegan mayo, cookies, and salad dressing. See current vacancies.
- The Center for Health Security (CHS) received a $16 million grant from the Open Philanthropy Project, who see CHS “as the preeminent U.S. think tank doing policy research and development in the biosecurity and pandemic preparedness (BPP) space”. See current vacancies.
- Blue Ribbon Study Panel on Biodefense is a panel that analyses the United States’ defense capabilities against biological threats, and recommends and lobbies for improvements.
- The Cambridge Centre for the Study of Existential Risk at Cambridge University houses academics studying both technical and strategic questions related to biosecurity. See current vacancies.
- The Future of Humanity Institute (FHI) at Oxford University conducts multidisciplinary research on how to ensure a positive long-run future. With the recent hire of Piers Millett, FHI is looking to expand its research and policy functions to reduce catastrophic risks from biotechnology. See current vacancies.
- Global Catastrophic Risk Institute is an independent research institute that investigates how to minimise the risks of large scale catastrophes. See current vacancies.
- The Nuclear Threat Initiative is a U.S. non-partisan think tank that works to prevent catastrophic attacks and accidents with nuclear, biological, radiological, chemical and cyber weapons of mass destruction and disruption. See current vacancies.
- The Skoll Global Threats Fund provides funding for work tackling climate change, pandemics, water scarcity, nuclear proliferation and conflicts in the Middle East. See current vacancies.
- Intelligence Advanced Research Projects Activity (IARPA) is a government agency that funds research relevant to the U.S. intelligence community. It has sponsored research on how to improve biosecurity and pandemic preparedness. See current vacancies.
Note: Our investigation of this area is only shallow, so we are not confident in our analysis and recommendations. See the Open Philanthropy Project’s overview of this area for more detail and a longer list of organisations.
- The Nuclear Threat Initiative is a U.S. non-partisan think tank that works to prevent catastrophic attacks and accidents with nuclear, biological, radiological, chemical, and cyber weapons of mass destruction and disruption. See current vacancies.
- Ploughshares Fund is the largest U.S. philanthropic foundation focused exclusively on peace and security grantmaking. It supports initiatives to reduce current nuclear arsenals and to limit the likelihood of nuclear war (and, to a lesser extent, risks from chemical and biological weapons). See current vacancies.
- The Future of Life Institute works to reduce the risks from a number of areas, in particular nuclear war, pandemics, and advanced AI. See current vacancies.
- Global Catastrophic Risk Institute is an independent research institute that investigates how to minimise the risks of large scale catastrophes, such as from nuclear war. See current vacancies.
- GiveWell conducts thorough research to find the best charities available to help people in the developing world. See current vacancies.
- The Center for Global Development is a U.S. nonprofit think tank that focuses on international development. See current vacancies.
- Against Malaria Foundation is one of charity evaluator GiveWell’s top charities and provides funding for antimalarial bed net distributions.
- Schistosomiasis Control Initiative is one of charity evaluator GiveWell’s top charities and works with governments across Sub-Saharan Africa and Yemen to develop national schistosomiasis control programmes.
- Evidence Action scales proven interventions to improve life for the global poor. Their Deworm the World Initiative is one of GiveWell’s top-rated charities. See current vacancies.
- Innovations for Poverty Action is a non-profit research and policy organisation which, since its inception in 2002, has conducted over 600 randomised controlled trials and other evaluations. See current vacancies.
- GiveDirectly, one of GiveWell’s top-rated charities, distributes unconditional cash transfers to people living in East Africa. See current vacancies.
- Wave allows immigrants to send money from North America to East Africa with much lower fees than if they used Western Union (saving their customers $2.33 for each dollar of revenue) and is run by a member of our community. Read more about Wave.
- The Life You Can Save, based on Peter Singer’s namesake book, raises awareness of extreme poverty and encourages effective giving.
See our methodology here.