Never miss a high impact job

Get our curated list of job openings sent to your inbox once a month.

The most exciting jobs we know about

These are important positions at our recommended organisations, which work on the problems we consider most urgent. They’re all very competitive, but if you’re a good fit for one, it could be your highest-impact option.
Last updated: June 2017

Help a top-rated charity prevent thousands of cases of malaria

Operations Manager
Against Malaria Foundation
London
The Against Malaria Foundation (AMF) distributes insecticide-treated mosquito nets to prevent the spread of malaria. After a decade of extensive research, GiveWell, the leading charity evaluator focused on international development, has identified AMF as one of the most effective charities working to improve health in developing countries.

AMF has been able to scale up to $50m per year with only two full-time staff, protecting millions of people from malaria. It could scale even further using funding from GiveWell, but is constrained by the limited time of those two staff.

This role is an opportunity to add capacity to the team and help them scale up even faster, while also learning from an extremely productive team. You’ll have a wide range of responsibilities, from talking to AMF’s distribution partners in Africa, to preparing financial reports.

You might be a good fit for this role if you have exceptional organisational skills, are highly analytical, and are strongly motivated to help AMF succeed.

AMF is also hiring an IT developer, a role we also consider very high impact.

See details

Help grow the effective altruism movement

Full Stack Developer, Assistant Producer – Events Team, Growth Hacker
Centre for Effective Altruism
Oxford, UK & San Francisco

The Centre for Effective Altruism (CEA) is the leading organisation working to grow and strengthen the effective altruism movement, which we think is among the most promising areas to work in. It runs Giving What We Can, Effective Altruism Global conferences, local effective altruism groups, and effectivealtruism.org. In early 2017, CEA went through Y Combinator – the world’s most famous startup accelerator.

Full Stack Developer
CEA is looking for a web developer to help grow CEA’s online presence (e.g. effectivealtruism.org, givingwhatwecan.org, eaglobal.org, centreforeffectivealtruism.org). The majority of people who come across effective altruism discover the community online, and in this role you’ll be in a unique position to help shape that experience.

If you’ve got experience working as a full-stack developer, this is a great opportunity to apply your skills to help grow the effective altruism movement, and build infrastructure that improves the community’s effectiveness and co-ordination.

See details

Assistant Producer, Events Team
The primary focus of the Assistant Producer is helping to run Effective Altruism Global (EAG) conferences. The role includes assisting with logistical planning, managing speaker communications, and helping recruit and manage teams of volunteers.

We think this position is high impact because EAG conferences play a central role in growing the effective altruism movement and increasing its impact through improving co-ordination among members of the community. You may be a good fit if you are closely involved in the community and have excellent organisational skills.

See details

Growth Hacker

CEA is looking for an experienced growth hacker who can help grow the EA community 10-fold in the next 2 years. This is a varied role which includes developing new metrics to track the growth of the EA community, optimising website conversion rates, running social media campaigns, writing copy for online content, and recruiting and managing volunteers.

If you have experience in marketing or growth hacking, and have a deep understanding of the core ideas of effective altruism, this is a great opportunity to apply your skills to help grow the effective altruism movement.

See details

See all jobs at the Centre for Effective Altruism.
Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism.

Lead a new institute to conduct important AI research

Assistant Director
UC Berkeley Center for Human-Compatible AI
Berkeley, CA
The Center for Human-Compatible AI (CHAI) is one of only a few academic institutes in the world working on solving the AI control problem, which we see as one of the world’s most important and neglected research questions. It was established at UC Berkeley, one of the top schools for Computer Science, with a large grant from the Open Philanthropy Project, and counts many of the top researchers in AI and robotics among its members.

The Assistant Director will play a crucial role in the early stages of establishing this institute, autonomously managing its day-to-day operations. You’ll need to be highly organised to coordinate with the university bureaucracy, manage finances, run events, draft grant proposals, hire new staff, and manage collaborations with other organisations. Basically, you’ll handle anything that needs to be done.

We think this role will also provide you with excellent career capital: intelligent coworkers and contacts within an important field, a high degree of autonomy, and the opportunity to manage a large, fast-growing organisation.

See details

Advise tech founders on where to donate $10M+

Deployment Coordinator
Founders Pledge
London
Founders Pledge is a non-profit that encourages entrepreneurs to commit to donating at least 2% of their personal proceeds to charity when they sell their business. In just a few years, it has raised $200 million in legally binding pledges of equity, and is growing fast.

The Deployment Coordinator advises founders on where to donate when they exit their businesses, as well as on the mechanics of the donation. The position provides an opportunity to present effective altruist giving opportunities, to encourage wealthy individuals to give more at a crucial moment, and to discuss their next career steps after exit. We estimate that founders who have taken the pledge will donate at least $10m next year, increasing to over $30m per annum within the coming years, so even a small improvement in how well those funds are spent could be hugely impactful.

You’ll also have the opportunity to learn how to promote effective giving from a team with a great track record, and to learn about charity evaluation while managing a team of researchers.

Founders Pledge is also hiring for its growth and community teams, as well as generalist researchers.

Most of these roles are not yet advertised, but if you think you would be a good fit, apply here.
Apply

Communicate about in-depth research on global health and development

Research Analyst, Outreach Focus
GiveWell
San Francisco
GiveWell has conducted some of the highest-quality research into which programmes and charities within international development help people the most for each dollar donated.

Research Analysts with an outreach focus work closely with GiveWell’s research team and use their in-depth understanding of the research to communicate it to donors, as well as others who rely on GiveWell’s work, such as researchers or the media.

This role has the potential to raise millions of dollars for GiveWell’s top recommended charities, and someone who is an especially good fit could increase this figure substantially. GiveWell has also consistently reported being talent-constrained rather than funding-constrained. The role is also a great opportunity to learn about the cutting edge of charity evaluation and how to promote effective philanthropy.

See details
Disclaimer: 80,000 Hours is being considered for a grant by the Open Philanthropy Project, which works closely with GiveWell.

Allocate $10M+/year to promote safe emerging technology research

Program Manager
IARPA
IARPA is a U.S. government agency that funds research to improve national security and the capabilities of the U.S. intelligence community. IARPA has supported research into cybersecurity; biosecurity and pandemic preparedness; and some of the earliest experiments into how to improve geopolitical forecasts (as covered in Philip Tetlock’s Superforecasting). The director of IARPA has shown interest in reducing the risk of human extinction, an area we consider among the most neglected and important problems facing the world.

Program Managers create, run and advocate for IARPA’s funding programmes. A typical programme involves tens of millions of dollars of funding. You’ll also be able to establish a career in the intelligence community, one of the major players in forecasting and mitigating global catastrophic risks.

Historically, successful Program Managers have had either a PhD or substantial expertise in a relevant area of scientific research. As part of the application, you’ll need to develop an example funding proposal.

See details

Work on neglected and cutting-edge research in machine learning

Postdoctoral researchers, software engineers, and internships
Multiple positions (see below)
We think solving the AI control problem is one of the world’s most important and neglected research questions. If you could make a technical contribution to this research, it’s likely to be one of the highest-impact things you could do with your life.

This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained. Beyond this technical impact, it can also be effective to work alongside technical teams to influence the culture towards greater concern for safety and positive social impact.

What follows are some of the best job openings within the leading organisations that undertake technical research relevant to the control problem.

To get these research positions, you will probably need a PhD in a relevant quantitative subject, such as computer science, maths or statistics, as well as familiarity with the latest research into AI safety. The engineering positions (except internships) usually require at least a few years’ experience and a varying degree of knowledge of machine learning. If you’re not at that stage yet, read our profile on how to enter AI safety research.


Google DeepMind
London

Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.

Research Scientist
Research Scientists at DeepMind set the research agenda, exploring cutting-edge machine learning and other AI techniques to solve real-world problems. On the safety team, you would work closely with the DeepMind founders to explore how to ensure that, as AI systems become increasingly powerful, they work for the benefit of all.

See details

Program Manager
Program Managers at DeepMind do whatever is necessary to facilitate novel research, using their exceptional organisational skills to coordinate projects and manage people and deadlines. We see this position as important because (i) DeepMind is an important organisation in AI safety, and (ii) it has great potential for building your skills and applying them in work with safety researchers.

See details

Research Engineer
A Research Engineer has similar mechanisms for potential positive impact to a Program Manager, but rather than providing organisational support for research, you would primarily develop the algorithms, applications, and software used by the research team.

See details

See all jobs at DeepMind.


UC Berkeley Center for Human-Compatible AI
Berkeley, CA

UC Berkeley is one of the top schools for Computer Science, and the goal of CHAI is to ensure that as the capabilities of AI systems increase, they continue to operate in a way which is beneficial to humanity.

Postdoctoral Researcher
Postdoctoral Researchers will have considerable freedom to explore topics within CHAI’s research agenda, allowing them to work directly on the control problem.

Successful candidates will work with the CHAI Director, Stuart Russell, or with one of the Berkeley co-Principal Investigators: Pieter Abbeel, Anca Dragan, or Tom Griffiths.

See details


Machine Intelligence Research Institute
Berkeley, CA

The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from AI in the early 2000s, and has published a number of papers on safety issues and how to resolve them.

Research Fellow
Research Fellows work to make progress on the alignment problem, which involves novel research on a number of open problems in computer science, decision theory, mathematical logic, and other fields. MIRI is looking for Research Fellows for both its traditional and machine learning agendas. MIRI selects candidates largely on mathematical talent, and weighs traditional academic credentials less heavily than other research positions in AI safety do. So if you think you’d enjoy research outside a traditional academic setting, and would get along with the team, this is an excellent path to contributing to a solution to the control problem.

See details

Software Engineer

Software Engineers support MIRI’s research on the alignment problem by prototyping, implementing, and testing AI alignment ideas. MIRI’s goal is to hire full-time engineers, but it is initially looking for paid interns; successful internships may then transition into staff positions. If you have strong programming skills, this is a great way to get up to speed on the AI safety field and test your fit for working on the problem, without committing to a permanent position.

See details


OpenAI
San Francisco, CA

OpenAI was founded in 2015 with the goals of conducting research into how to make AI safe and ensuring that its benefits are as widely distributed as possible. It has received $1 billion in funding commitments from the technology community, and is one of the leading organisations working on general AI development.

Machine Learning (Researcher)
This position is a research role with a broad remit, so it’s an opportunity to do vital work on the ‘control problem’ of ensuring AI safety. To get this role, you need to have shown exceptional achievement in a quantitative field (not necessarily ML/AI), and to have a shot at becoming a world expert. For this reason, the role is incredibly competitive, but if you can get it, it’s likely to be one of your highest-impact options.

See details

Special Projects
The Special Projects role involves working on one of the projects listed here, which are “problem areas likely to be important both for advancing AI and for its long-run impact on society”. They were formulated by experts who we believe are trying to improve the long-run safety and consequences of AI/ML systems, and we see this as a concrete way for those with the relevant background to contribute to that effort.

See details

See all jobs at OpenAI.


Leverhulme Centre for the Future of Intelligence
Cambridge, UK

The Leverhulme Centre for the Future of Intelligence (CFI) is a new research centre at the University of Cambridge, funded by a £10 million grant from the Leverhulme Trust. CFI does interdisciplinary research into the opportunities and challenges to humanity from the development of artificial intelligence.

Research Fellow in Machine Learning
This is a joint position between the Machine Learning Group in the Department of Engineering at Cambridge and the Leverhulme Centre for the Future of Intelligence. The main focus of the role is developing theory and methods to ensure that machine learning and artificial intelligence systems are scalable, reliable and interpretable. To get this role you’ll need to have completed, or be close to completing, a PhD in computer science, statistics or a related area, to have publications in machine learning, and to have outstanding mathematical and programming skills.

See details

Ensure regulatory responses to emerging technologies benefit everyone

Postdoctoral and non-academic researchers, policymakers, and legal scholars
Multiple positions (see below)
The development of powerful AI doesn’t only pose the technical challenge of the control problem; it also poses major political and social challenges. AI policy is fast becoming an important area, but policy-makers are overwhelmingly focused on short-term issues like how to regulate self-driving cars, rather than on the key long-term issues (i.e. the future of civilization).

The following is a list of positions focused on policy and strategy research relevant to AI. If you could make a contribution to this area, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained.


Google DeepMind
London

Google DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.

Policy Researcher
This position would allow you to play a key role in researching the societal impacts of AI, and in advocating for actions that ensure its use benefits everyone as much as possible. An example of this kind of research could be looking at what effects automation will have on the economy, and what governments should do to prepare.

This role will become more important over time, as AI systems have an increasingly large impact on society. It will allow you to conduct novel research into the immediate and long-run impacts of AI developments, and to help define strategy and responses.

You’d be a good fit for this role if you have a strong understanding of the impact of emerging technology on society, experience in a policy setting, and an ability to synthesise information from a range of fields. As this is predominantly a stakeholder engagement position, you’ll also need outstanding communication and interpersonal skills.

DeepMind’s reputation, connections, opportunities to conduct novel and important research, focus on learning and professional development, and talented employees will also give you outstanding career capital.

See details


UCLA
Los Angeles, CA

Fellowship in Artificial Intelligence, Law, and Policy
UCLA’s Program on Understanding Law, Science, and Evidence (PULSE) does interdisciplinary research into the connections between law, policymaking, science, and technology. The fellowship primarily consists of research into the social, economic, and legal implications of artificial intelligence and machine learning. The role is flexible, allowing you to focus on longer-term issues in AI policy. You’ll need a J.D. or other advanced degree, and a demonstrated interest or background in the fields of law and science, artificial intelligence, or social risk assessment.

See details


American Association for the Advancement of Science
Various, but particularly Washington, DC

Science & Technology Policy Fellowships
AAAS Science & Technology Policy Fellowships are one year placements for scientists and engineers in the executive, legislative, and judicial branches of the U.S. federal government in Washington DC. Fellows learn first-hand about policymaking while contributing their knowledge and analytical skills to the federal policymaking process.

You’ll need a PhD in a scientific field, or a master’s in engineering plus three years of professional engineering experience. If you have a background in AI, there’s a good chance you’ll be placed in an AI-relevant organisation. We have heard that these fellowships are a good way to develop career capital and to gain experience and an understanding of how the political system works.

See details


White House Office of Science and Technology Policy
Washington, DC

Legal or Policy Intern
OSTP internships provide a coveted opportunity to work with senior White House officials, policy analysts, or the legal team on how science and technology intersect with the economy, national security, homeland security, health, foreign relations, and the environment. OSTP is likely to be one of the most important offices in determining the regulatory response to AI, and the internship is an excellent launch pad into other science policy positions.

See details


Foundational Research Institute

Researcher
The Foundational Research Institute is one of the few organisations working on how to minimise the risks of large-scale accidents or misuse of AI systems. As a researcher, you’ll contribute to studying the scenarios in which the development of advanced machine intelligence could negatively impact people and animals, and how to mitigate those scenarios. Remote working is available.

See details


Global Catastrophic Risk Institute

Affiliate
The Global Catastrophic Risk Institute (GCRI) is a nonpartisan think tank that aims to reduce the risk of events large enough to significantly harm or even destroy human civilization at the global scale. It is seeking Junior Affiliates and paid senior Associates (at the doctoral level or equivalent) to collaborate on reducing these risks in its focus areas (global warming, nuclear war, pandemics, and artificial intelligence), as well as on determining how to assess and compare them (its ‘Integrated Assessment’).

See details

Are you an employer?
Get in touch to let us know about a high impact job.

Organisations we recommend

Some of the best jobs are never advertised and are created for the right applicants, so here is our list of some of the best organisations within each of our recommended problem areas. These are all potentially very high-impact places to work (in any role), and many can also help you to develop great career capital. To see why we picked these organisations, read the full problem profile.

  • GiveWell conducts in-depth research to find the best charities that help people in the developing world. See current vacancies. Its partner, the Open Philanthropy Project, researches giving opportunities in fields other than global health and poverty. See current vacancies. Disclaimer of conflict of interest: we are being considered for a grant by the Open Philanthropy Project.
  • 80,000 Hours – yes, that’s us. We do research into the careers which do the most good and help people pursue them. If you’d like to express interest in working with us, fill out this short form.
  • The Centre for Effective Altruism conducts research into fundamental questions on how to do the most good, and encourages donations to the best charities working on priority problems. It includes the project Giving What We Can, which encourages people to pledge 10% of their income to the most effective organisations for helping others. See current vacancies. If you’d like to express interest in working at the Centre for Effective Altruism, fill out this short form. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism.
  • Effective Altruism Foundation promotes effective altruist ideas across the German-speaking world. See current vacancies.
  • Founders Pledge encourages entrepreneurs to make a legally binding commitment to donate at least 2% of their personal proceeds to charity when they sell their business.

Didn’t find anything?
Let us know what you were looking for, or see the full list of organisations we sometimes recommend.

See our methodology here.
