Never miss a high impact job

Get our curated list of job openings sent to your inbox once a month.

The most exciting jobs we know about

These are important positions, at our recommended organisations, within our list of the most urgent problems. They’re all very competitive, but if you’re a good fit for one, it could be your highest-impact option.

Interested in one of these jobs?

Get one-on-one advice and introductions

We’ve helped hundreds of people compare their options, get introductions, and find high-impact jobs. If you have a shot at any of these jobs and want help comparing them, introductions (we’re in touch with most of the employers on this list), or to hear about jobs we can’t advertise publicly:

Get in touch

Work on neglected and cutting-edge research in machine learning

We think solving the AI control problem is one of the world’s most important and neglected research questions. If you could make a technical contribution to this research, it’s likely to be one of the highest-impact things you could do with your life.

This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained. Beyond this technical impact, it can also be effective to work alongside technical teams to influence the culture towards greater concern for safety and positive social impact.

What follows are some of the best job openings within the leading organisations that undertake technical research relevant to the control problem.

To get these research positions, you will probably need a PhD in a relevant quantitative subject, such as computer science, maths or statistics, as well as familiarity with the latest research into AI safety. The engineering positions usually require at least a few years’ experience (except internships), and a varying degree of knowledge of machine learning. If you’re not at that stage yet, read our profile on how to enter AI safety research.

Research funding is available for projects working on building AI systems which robustly advance human interests. You don’t have to be part of an academic institution to receive this funding.

See details

Berkeley Existential Risk Initiative

San Francisco Bay Area, but remote work possible

The Berkeley Existential Risk Initiative is a non-profit that aims to accelerate the work of existential risk researchers. It gives them support that academic funding often doesn’t cover, such as administrative and technical help.

Machine Learning Engineer

You will develop machine learning software for the Center for Human Compatible AI at the University of California, Berkeley. Possible projects include open sourcing existing packages and implementing machine learning algorithms.

See details

Project Manager

You’ll support BERI’s projects through a variety of activities, such as communicating with the partner organisations it helps and researching new ways the organisation can support its partners.

See details

Machine Learning Team Lead

You’ll grow and manage a team of machine learning engineers to work in collaboration with researchers at UC Berkeley, Stanford, and other machine learning research groups in the Bay Area.

See details

DeepMind

London

DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.

Research Scientist

Research Scientists at DeepMind set the research agenda, exploring cutting-edge machine learning and other AI techniques to solve real-world problems. On the safety team, you would work closely with the DeepMind founders to explore how to ensure that as AI systems become increasingly powerful, they work for the benefit of all.

See details

Program Manager

Program Managers at DeepMind do what’s necessary to facilitate novel research, using their exceptional organisational skills to coordinate projects and manage people and deadlines. We see this position as important because i) DeepMind is an important organisation in AI safety, and ii) it has great potential for building your skills and using them to work with safety researchers.

See details

Research Engineer

Being a Research Engineer would be similar to being a Program Manager in the mechanisms for potential positive impact, but rather than providing organisational support to the research, you would primarily develop the algorithms, applications, and software used by the research team.

See details

Future of Humanity Institute

Oxford, UK

The Future of Humanity Institute is a world-leading research centre at the University of Oxford which works on big-picture questions for human civilization. It was founded by Professor Nick Bostrom, author of the New York Times bestseller Superintelligence, and it does a lot of work on AI safety.

N.B. 80,000 Hours is affiliated with The Future of Humanity Institute.

AI Safety and Reinforcement Learning Internship

Interns on the technical research team have the opportunity to contribute to work on a specific project in reinforcement learning (RL). Previous interns have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. We see this as an outstanding opportunity to work with an excellent team, improve your technical skills, and explore ideas with some of the key players in AI safety, while strengthening your application to graduate school.

See details

AI Policy and Governance Internship

You will contribute to the Institute’s work on AI policy, AI governance, and AI strategy. Previous interns have worked on issues such as technology race modelling, the bridge between short-term and long-term AI policy and the development of AI, and AI policy in China. Skills in political science, public policy, economics, law, computer science, or Chinese language and politics would be particularly helpful.

See details

Machine Intelligence Research Institute

Berkeley, CA

The Machine Intelligence Research Institute (MIRI) was one of the first groups to become concerned about the risks from AI, back in the early 2000s, and has published a number of papers on safety issues and how to resolve them.

Research Fellow

Research Fellows work to make progress on the alignment problem, which involves novel research on a number of open problems in computer science, decision theory, mathematical logic, and other fields. MIRI are looking for Research Fellows for both their traditional and machine learning agendas. MIRI selects candidates primarily on mathematical talent, and weighs traditional academic backgrounds less heavily than other AI safety research organisations do. So if you think you’d enjoy research outside a traditional academic setting, and would get along with the team, this is an excellent path to contributing to solving the control problem.

See details

Montreal Institute for Learning Algorithms

Montreal, Canada

The Montreal Institute for Learning Algorithms (MILA) is one of the leading academic AI labs focused on deep learning, and it has received funding from the Open Philanthropy Project to support work on AI safety.

Internship

Machine learning research internships are available to students at undergraduate, masters, and PhD levels.

See details

OpenAI

San Francisco, CA

OpenAI was founded in 2015 with the goal of conducting research into how to make AI safe and ensuring the benefits are as widely distributed as possible. It has received $1 billion in funding commitments from the technology community, and is one of the leading organisations working on general AI development.

Machine Learning Researcher

This position is a research role with a broad remit, so it’s an opportunity to do vital work on the ‘control problem’ of ensuring AI safety. To get this role, you need to have shown exceptional achievements in a quantitative field (not necessarily ML/AI), and have a shot at being a world expert. As a result, this role is incredibly competitive, but if you can get it, it’s likely to be one of your highest-impact options.

See details

Machine Learning Fellow

The fellowship program is for people who are beginning a career in deep learning and AI. As a Fellow, you’ll have the opportunity to work on one of our teams and be mentored by a top deep learning expert. The fellowship lasts for six months, allowing for a period of background learning and exploration followed by a research project.

See details

Machine Learning Engineer

You’ll be responsible for building AI systems that can perform previously impossible tasks or achieve unprecedented levels of performance. This requires good engineering (for example: designing, implementing, and improving a massive-scale distributed machine learning system), writing bug-free machine learning code, and building the science behind the algorithms employed.

You should apply if you are a software engineer who is fairly comfortable with linear algebra, basic statistics, and the basics of stochastic gradient descent. Experience in machine learning is a bonus. You are not expected to have mastery of deep learning.

See details

Office manager

You’ll be a critical part of the team, helping cultivate a pleasant, productive work environment. You’ll work with smart, hardworking people who are passionate about our mission.

See details

Safety Researcher

As a safety researcher you’ll be working out how to ensure that AI systems are beneficial to humanity. You’ll need a strong research background in deep learning, although you don’t need to have worked on safety before. Listen to our interview with the head of AI safety research at OpenAI to find out more about what it’s like and how to get in.

See details

Ought

San Francisco, but remote work is possible

Ought is a non-profit doing research on using machine learning to support deliberation. The organization aims to build scalable tools for thought, so that future advances in machine learning translate into making us better at thinking and reflection. Amplifying human thinking using AI tools without abandoning human preferences is a core ingredient for some approaches to AI alignment. You can read more about the problem they’re trying to solve here.

Researcher

As a researcher at Ought, you’ll invent and test new schemes for distributed and automated deliberation, building on the state of the art in machine learning.

Your primary focus will be on conceptual work: you’ll synthesize the current state of the field, devise new systems that might plausibly help support people’s thinking, and come up with hypotheses worth testing.

See details

Research engineer

As a research engineer at Ought, you’ll design and implement flexible back-end architectures that allow us to quickly explore different approaches to combining machine learning and crowdsourcing.

You’ll take high-level project descriptions, work with researchers at Ought and collaborating institutions to turn them into more precise specs, and implement them. You’ll come up with appropriate abstractions, data structures, and algorithms, and build robust systems with intuitive interfaces.

See details

Full stack web developer

As a web developer at Ought, you’ll be in charge of the front-ends we use across different projects aimed at helping people think. Your primary responsibility will be to design and build everything users see when they interact with our projects.

While your day-to-day work will be centered around user interfaces, you need to have a strong grasp of back-end technologies, so that you can work with our researchers and engineers on integration with different kinds of server-side technologies, especially ones that use crowdsourcing and machine learning.

See details

Chief operating officer

As COO at Ought, you’ll help organize job interviews and internships. You’ll work with accountants, lawyers, and banks. You’ll handle HR, pay contractors, do grant-related work, organize visas and office space, and generally establish good processes and feedback loops within the organization.

You’ll have significant independent decision-making capability and corresponding impact on the future of the organization.

See details

Center for Human Compatible AI

Berkeley, CA

UC Berkeley is one of the top schools for computer science, and the goal of the Center for Human Compatible AI (CHAI) is to ensure that as the capabilities of AI systems increase, they continue to operate in a way which is beneficial to humanity.

External Relations Specialist

You’ll work closely with the Principal Investigator (PI) to provide operational support for the growth and success of CHAI. You’ll oversee the Center’s communication, outreach, and administrative functions, primarily providing direct support to the PI.

See details

Postdoctoral Researcher

Postdoctoral Researchers will have considerable freedom to explore topics within human-compatible AI, allowing them to work on the control problem.

Successful candidates will work with the CHAI Director, Stuart Russell, or with one of the Berkeley co-Principal Investigators: Pieter Abbeel, Anca Dragan, or Tom Griffiths.

See details

Internship

The internship program offers undergraduates and recent graduates the opportunity to work alongside CHAI researchers at UC Berkeley on a research project in human-compatible AI.

See details

Work on AI policy, ethics, and governance

The development of powerful AI doesn’t only pose the technical challenge of the control problem, but also major political and social challenges. AI policy is fast becoming an important area, but policy-makers are overwhelmingly focused on short-term issues like how to regulate self-driving cars, rather than the key long-term issues (i.e. the future of civilization).

The following is a list of positions focused on policy and strategy research relevant to AI. If you could make a contribution to this area, it’s likely to be one of the highest-impact things you could do with your life. This is particularly true because a significant amount of funding has emerged for this research in the last two years, but the relevant organisations struggle to find people with the right skills, meaning that the area is highly talent-constrained.

AI Impacts

Berkeley, CA

AI Impacts aims to improve our understanding of the likely impacts of human-level artificial intelligence. The goal of the project is to present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant.

Researcher

As a researcher for AI Impacts, you’ll be designing projects, conducting generalist research, and writing up your findings for an audience that may include AI researchers, policy makers, and philanthropic organizations. Depending on the project, you could be working independently, with one or more members of our small team, or with similarly aligned organizations and individuals. You may also have the opportunity to present AI Impacts research at conferences, workshops and other events.

Since we’re currently a small organization, hires have the potential to play a large role in shaping AI Impacts as it grows, as well as influencing the space of AI forecasting.

See details

American Association for the Advancement of Science

Washington, DC

The AAAS is the world’s largest scientific society and does a lot of work on science policy.

Science & Technology Policy Fellowships

AAAS Science & Technology Policy Fellowships are one-year placements for scientists and engineers in the executive, legislative, and judicial branches of the U.S. federal government in Washington, DC. Fellows learn first-hand about policymaking while contributing their knowledge and analytical skills to the federal policymaking process.

You’ll need a PhD in a scientific field, or a master’s in engineering plus three years of professional engineering experience. If you have a background in AI, there’s a good chance you’ll be placed in an AI-relevant organization. We have heard that these fellowships are a good way to develop career capital, experience, and an understanding of how the political system works.

See details

Center for a New American Security

Washington, DC

The Center for a New American Security is a bipartisan think tank working on national security and defence policy. Their technology and national security program looks at how new technologies such as artificial intelligence affect national security. One of their adjunct fellows co-authored this report on AI and security for the US Intelligence Advanced Research Projects Activity.

Technology and national security research internship

Your main responsibilities will include administrative and substantive work to support ongoing projects, ranging from event planning and background research to collaborative writing of major reports. It’s helpful but not necessary to have a technical background.

See details

Council on Foreign Relations

The Council on Foreign Relations is one of the most influential foreign policy think tanks in the United States.

International Economics and Global Governance Internship

This is a part-time, volunteer internship doing research and writing. There are three projects that you might work on: one examining international cooperation and competition in relation to transformative technologies (artificial intelligence, synthetic biology, quantum computing, etc.), one on reforming national and global governance to better combat illicit financial flows, and one informing the American public of the benefits of international trade.

See details

DeepMind

London, UK

DeepMind is probably the largest and most advanced research group developing general machine intelligence. It includes a number of staff working on safety and ethics issues specifically.

Ethics & Policy Researcher (Partnership on AI)

DeepMind’s Ethics & Society Research Unit explores the real-world impacts of AI, with the aim of helping technologists put ethics into practice, and helping society to anticipate and control the effects of AI.

See details

The Future of Humanity Institute is a world-leading research centre at the University of Oxford which works on big-picture questions for human civilization. It was founded by Professor Nick Bostrom, author of the New York Times bestseller Superintelligence, and it does a lot of work on AI safety.

N.B. 80,000 Hours is affiliated with The Future of Humanity Institute.

AI Safety and Reinforcement Learning Internship

Interns on the technical research team have the opportunity to contribute to work on a specific project in reinforcement learning (RL). Previous interns have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. We see this as an outstanding opportunity to work with an excellent team, improve your technical skills, and explore ideas with some of the key players in AI safety, while strengthening your application to graduate school.

See details

AI Policy and Governance Internship

You will contribute to the Institute’s work on AI policy, AI governance, and AI strategy. Previous interns have worked on issues such as technology race modelling, the bridge between short-term and long-term AI policy and the development of AI, and AI policy in China. Skills in political science, public policy, economics, law, computer science, or Chinese language and politics would be particularly helpful.

See details

Intelligence Advanced Research Projects Activity

IARPA is a government agency that funds research to improve security and the capabilities of the U.S. intelligence community. IARPA has supported research into cybersecurity; how to improve biosecurity and pandemic preparedness; and some of the earliest experiments into how to improve geopolitical forecasts (as covered in Philip Tetlock’s Superforecasting). The director of IARPA has shown interest in reducing the risk of human extinction, an area we consider among the most neglected and important problems facing the world.

Program Manager

Program Managers create, run and advocate for IARPA’s funding programmes. A typical programme involves tens of millions of dollars of funding. You’ll also be able to establish a career in the intelligence community, one of the major players in forecasting and mitigating global catastrophic risks.

Historically, successful Program Manager candidates have had either a PhD or substantial expertise in a relevant area of scientific research. As part of the application, you’ll need to develop an example funding proposal.

See details

Center for Long-Term Cybersecurity

Berkeley, CA

Cybersecurity is an important area of study for AI safety, and the Center for Long-Term Cybersecurity is one of the best places in the world to work on it. It’s a particularly good place to research these issues as it works closely with the Center for Human Compatible AI, a leading AI safety research lab.

Researcher

You’ll study emerging issues in cybersecurity, making connections between technical developments and their implications for policy and society. The Center is particularly interested in hiring people in four priority areas: the security implications of artificial intelligence and machine learning; the cyber talent pipeline; new governance and regulatory regimes in cybersecurity; and helping vulnerable people online. They’re looking for both junior and senior researchers.

See details

White House Office of Science and Technology Policy

Washington, DC

The White House Office of Science and Technology Policy (OSTP) provides the President and others within the Executive Office of the President with advice on the scientific, engineering, and technological aspects of the economy, national security, homeland security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics. It’s likely to be one of the most important parts of government for determining how the government responds to AI.

Legal or Policy Intern

The OSTP internships provide an excellent opportunity to work with senior White House officials on science and technology policy. It’s also an excellent launch pad into other science policy positions.

The next application deadlines are May 1st for legal interns and June 28th for policy interns.

See details

Help prioritise global efforts to do good

Every year governments, foundations and individuals spend over $500 billion on efforts to improve the world as a whole. They fund research on cures for cancer, the rebuilding of areas devastated by natural disasters, and thousands of other projects.

$500 billion is a lot of money, but it’s not enough to solve all the world’s problems. This means that organisations and individuals have to prioritise and pick which global problems they work on. For example, if a foundation wants to improve others’ lives as much as possible, should it focus on immigration policy, international development, scientific research or something else? Or if the government of India wants to spur economic development, should it focus on improving education, healthcare, microeconomic reform, or something else?

There are vast differences in the effectiveness of working on different global problems. But of the $500 billion spent each year, only a minuscule fraction (less than 0.01%) is spent on global priorities research: efforts to work out which global problems are the most pressing to work on. We think this research is one of the most pressing problem areas.

You’ll be able to make a big impact on prioritisation work in the following positions.

Interdisciplinary Ethics Research Group

The Interdisciplinary Ethics Research Group (IERG) is involved in many funded projects in which unusual moral issues arise. Many relate to the use of new technologies, especially cyber-based technologies in security and health, including public health. IERG works with academic partners in a range of fields including human rights law, engineering, computer science, forensic linguistics, business studies, economics, and other social sciences.

Research Fellow

IERG are seeking a Research Fellow to support Associate Professor Keith Hyams on his Leverhulme Project Grant, entitled Anthropogenic Global Catastrophic Risk: The Challenge of Governance, for a period of three years. This project studies human-induced risks that threaten sustained and wide-scale loss of life and damage to civilisation across the globe.

Closing date: 23rd May.

See details

Future of Humanity Institute

Oxford, UK

The Future of Humanity Institute is a world-leading research centre at the University of Oxford which works on big-picture questions for human civilization.

N.B. 80,000 Hours is affiliated with The Future of Humanity Institute.

Research Scholar

Humanity today may be in a position to affect the long-term future, but it is poorly equipped to figure out how to do so. The Future of Humanity Institute, as one of the few groups studying these topics, believes the world needs more people who can do such research well. To this end we are launching a Research Scholars Programme, an intimate, salaried two-year programme for early-career researchers aspiring to pursue this mission. Deviating from the traditional academic model, the programme offers lots of latitude for exploration across topics and disciplines, as well as significant one-on-one training and support elements.

See details

Global Priorities Institute

Oxford, UK

Although effective altruism is gaining increasing traction in the non-profit world, it has gained fairly little within academia. This is a problem because many of the world’s most talented researchers are academics, and would therefore have a lot to offer in answering the central question of effective altruism: how a given unit of resources can do the most good.

This neglect is also worrying because academics are the authorities consulted by policymakers and the media, and they teach the next generation of leaders. If society is to reach a position where major decisions are guided by an assessment of worldwide priorities in terms of importance, neglectedness, and tractability, these principles first need to be accepted within academia. GPI seeks to make this happen.

By working with GPI, you can contribute both to immediately answering foundational questions about how to do the most good and to building the groundwork for improved global decision making.

Academic visitors

GPI welcome expressions of interest from established researchers, employed by a higher education institution, who would like to visit Oxford for a short period (1 week to 2 months) to engage with GPI and its research themes.

GPI is also keen to explore possibilities for collaboration with researchers based at other institutions.

See details

Summer research visitor programme

We are looking for researchers to conduct foundational research on how to do the most good. The programme is intended for PhD students and early-career researchers in economics, philosophy, or other relevant fields. Travel and accommodation are funded by GPI.

See details

Open Philanthropy Project

San Francisco, CA, with some remote positions

The Open Philanthropy Project advises Good Ventures, a several-billion-dollar foundation, on its philanthropy. It’s committed to maximising the impact of its giving and has given grants in areas such as farmed animal welfare, biosecurity, AI safety, scientific research, and global health and development.

Disclaimer of conflict of interest: we have received a grant from the Open Philanthropy Project.

Analyst Specializing in Potential Risks from Advanced AI

The Open Philanthropy Project seeks to hire people to specialize in key analyses relevant to potential risks from advanced artificial intelligence.

See details

Grants Associate

You’ll help process philanthropic grants totalling more than 100 million dollars per year.

See details

Operations Associate

You’ll mainly work on external communications, internal coordination, grants tracking and processing, and other operations projects.

See details

Research Analyst

The Open Philanthropy Project seeks to hire several Research Analysts in 2018. They intend to invest heavily in these hires, in the hopes that over the long run they will have the potential to become core contributors to the organization. People who perform successfully in these roles may have significant influence over Open Philanthropy’s overall allocation of capital and grantmaking/investment strategies.

See details

Senior Research Analyst

The Open Philanthropy Project seeks to hire 1-3 Senior Research Analysts in 2018. They intend to invest heavily in these hires, in the hopes that over the long run they will have the potential to become core contributors to the organization. People who perform successfully in these roles may have significant influence over Open Philanthropy’s overall allocation of capital and grantmaking/investment strategies.

See details

Help promote effective altruism

Many people aren’t aware of the best ways to help others, and as a result, they miss opportunities to make a tremendous difference. Effective altruism is a growing social movement dedicated to using evidence and reason to figure out how to benefit others as much as possible. Promoting effective altruism means promoting the key ideas of effective altruism and growing the community of people who take these ideas seriously, and put them into action.

By working on promoting effective altruism you can multiply your impact several-fold, by helping other altruists avoid ineffective ways of helping others, and channelling their efforts into strategies that are many times more effective.

You can read more about this in our problem profile on promoting effective altruism.

Veddis

London, UK

Veddis Philanthropy is the foundation arm of the Veddis family office. They seek out high impact interventions and provide grants to organisations which are best placed to execute these interventions.

Philanthropic Associate (Technical or Advocate)

Veddis Philanthropy is expanding their leadership team. They are looking for either an associate to research where to give grants and build up the supporting documentation, or an advocate to support their outreach efforts. Either way, you would make up half of a team steering large amounts of funding towards high-impact interventions.

See details

Help reduce the risk of nuclear war

Nuclear weapons that are armed at all times have the potential to kill hundreds of millions of people directly, and billions due to subsequent effects on agriculture. They pose some unknown risk of human extinction through the potential for a ‘nuclear winter’ and a social collapse from which we never recover. There are many examples in history of moments in which the US or Russia came close to accidentally or deliberately using their nuclear weapons.

We go into more detail in our problem profile on nuclear security.

Ploughshares Fund

San Francisco and Washington DC

Ploughshares Fund is a US foundation focused on reducing and eliminating the dangers posed by nuclear weapons. To this end, they provide funding, produce research, advocate for policy change, and bring together experts and advocates.

Director of Communications

The director of communications will run Ploughshares Fund’s external communications, supervising a team of two in the San Francisco office.

See details

Help end factory farming

Each year, 50 billion animals are raised and slaughtered in factory farms globally. In the United States alone, over a billion animals live in factory farms at any given time. Most experience serious levels of suffering. The problem is neglected relative to its scale: less than $20 million per year is spent trying to solve it.

There are promising paths to improving the conditions of factory-farmed animals and to changing attitudes towards farm animals. We go into more detail on this in our problem profile on factory farming.

Working at the organisations below will enable you to have a big impact on this problem.

Animal Charity Evaluators

Animal Charity Evaluators researches which charities most cost-effectively help nonhuman animals.

Data Analyst

The Data Analyst is responsible for collecting, analyzing, and reporting on web data associated with website visitors, social media communities, web behavior, and advertising campaign effectiveness, with the goal of measuring performance and finding opportunities to support ACE’s mission.

See details

Research & Communications Internships

Mentored by our Director of Research Allison Smith, our research internship involves cataloging information about charities from around the world, helping develop charts and graphs to illustrate data, writing research pages for our site, and learning and implementing some basic research designs.

Mentored by our Director of Communications Erika Alonso, our communications internship involves promotion through social media, improving functionality of our site, editing, writing, media outreach, generalized web development, and tracking possible media opportunities.

See details

The Good Food Institute

Imagine a food system where the most affordable and delicious products are also good for our bodies and the planet. The Good Food Institute is a nonprofit which works with scientists, investors, and entrepreneurs to make groundbreaking good food a reality. They focus on clean meat and plant-based alternatives to animal products: foods that are more delicious, safer to eat, and better for the planet than their outdated counterparts. By bringing together the brightest innovators, forward-thinking investors, and food marketing visionaries, they aim to produce food in a new and better way.

Various internships

See details

The Humane League

The Humane League runs campaigns to persuade individuals and organisations to adopt behaviours that reduce farmed animal suffering. They mostly focus on corporate campaigns and grassroots organising.

Animal Charity Evaluators recommends them as a top charity.

Research Associate

The Research Associate will work with and report to the Research Manager at Humane League Labs. Responsibilities include: keeping up to date with the farm animal advocacy strategies used in the movement, helping develop hypotheses about their relative effectiveness, and generating ideas for testing them in a scientifically sound manner; helping design experiments, define metrics, evaluate controlled and uncontrolled biases, and assess the external validity of studies; and working with the Research Manager to communicate the methodology in proposals, pre-analysis reports, or internal documents, as necessary, with clarity and precision.

See details

Help reduce poverty and disease

Every year around ten million people in poorer countries die of illnesses that can be very cheaply prevented or managed, including malaria, HIV, tuberculosis and diarrhoea.

Around $100 per person is spent each year on the healthcare of the poorest 2 billion people (adjusted for purchasing power). As a result, there remain many opportunities to scale up treatments that are known to prevent or cure these conditions.

We go into more detail in our problem profile on health in poor countries.

We’ve selected the jobs below from the organisations that are most focused on cost effectiveness.

Evidence Action

Washington, DC

Evidence Action scales rigorously evaluated approaches to global health and development challenges. They are best known for their Deworm the World Initiative, which helps governments develop school-based deworming programs in Kenya, India, Ethiopia, and Vietnam. This initiative has been recommended by GiveWell as a top-rated charity for several years.

Associate Director, Innovation

The Associate Director will build and lead a portfolio of early-stage incubation efforts within the Beta incubator (including potentially some efforts that are already underway), with the support of Beta staff, and lead efforts to develop and/or refine Beta’s strategy and delivery approach on different aspects of the early-stage incubation work.

See details

Senior Finance & Operations Manager

The Senior Finance & Operations Manager will serve as the overall finance and operational management lead for the Beta portfolio, supported by one direct report. This role involves overseeing or directly executing on a number of finance and operations activities on behalf of the Beta portfolio that are mission-critical to our work.

See details

GiveDirectly

Various locations

GiveDirectly is among the most impact-driven and well-regarded NGOs working on global poverty. They run an effective and efficient intervention that is backed by a compelling evidence base. They have been a GiveWell recommended charity since 2012, and are also recommended by The Life You Can Save and other evaluators. A full list of their open positions can be found here.

Country Director (DRC, Malawi)

GiveDirectly are hiring Country Directors for two new offices (likely Malawi and the DRC), who will be responsible for starting up and managing all aspects of their country offices.

The Country Directors will play a crucial role in driving our successful expansion in these countries, and developing our relationship with USAID, an influential donor.

Closing date: 13th May.

See details

Growth

You will join a small & nimble team that is responsible for raising $30M in revenue and driving growth across our retail funnel. You will report to the Chief Growth Officer and own several functions: operations, analytics, and select acquisition-focused initiatives. You will both ensure that mission-critical processes are well-structured & run, and launch a number of experiments to fill the top of our donor funnel.

See details

GiveWell

San Francisco

GiveWell has conducted some of the highest-quality research into which programmes and charities within international development will most help people for each dollar donated.

Disclaimer: 80,000 Hours has received a grant from the Open Philanthropy Project, which works closely with GiveWell.

Director of Operations

The Director of Operations will lead a team of 8 people. GiveWell’s operations team is responsible for all finance and budget, donations processing, HR, legal, and tech functions at GiveWell.

See details

Head of Growth

The Head of Growth will lead GiveWell’s efforts to increase the amount of money their recommended charities receive as a result of their recommendations.

See details

Research Analyst

Research analysts work on all parts of GiveWell’s research process.

See details

Senior Research Analyst

They are looking for someone with a strong background in causal inference and quantitative research methods (either through academic coursework at the graduate level or previous work experience) to work on assessing the evidence and cost-effectiveness of interventions such as immunization campaigns, bednet distribution, breastfeeding promotion, etc.

See details

IDinsight

Many locations around the world

IDinsight designs and deploys evidence-generating tools that help people eliminate poverty worldwide. They tailor the best methodologies to their clients’ needs and constraints to integrate evidence into the realities of implementation. They serve governments, innovative foundations, organizations, and enterprises across Africa and Asia in all major program areas, including health, education, agriculture, livelihoods, finance, energy, and governance.

They have partnered with GiveWell to help potential top charities improve their monitoring and evaluation.

Administrative Lead - San Francisco

Our operational and administrative teammates are taking IDinsight through a unique stage of organizational growth, tripling the size of our current staff over the next two years (from 65 to 200+) and adding multiple new global offices.

See details

Strategic Communications Director

The strategic communications director will work on IDinsight’s communication strategy and oversee the communications team.

See details

Manager

Managers oversee all aspects of how IDinsight works with its clients.

See details

New Incentives

New Incentives gives poor mothers in rural West Africa small stipends on the condition that they vaccinate their children against deadly diseases. The stipends allow the women to afford transport to the clinic where the vaccinations are provided, along with food for their families. Each year, 1.5 million children under five lose their lives to vaccine-preventable diseases; the New Incentives program protects infants against these risks.

They have received a GiveWell incubation grant and taken part in Y Combinator’s nonprofit program.

Junior operations manager

You will help plan, direct, and coordinate New Incentives’ operations. Your responsibilities will include improving efficiency, performance, productivity, and accountability using effective strategies. You will lead a team of managers and be responsible for their training and output.

See details

Wave

Various

Wave is a tech start-up that makes it cheaper to send remittances to East Africa. In 2017, one billion migrants worldwide were expected to send over $600 billion home to family and friends, dwarfing the foreign government aid their home countries receive. These people often have to pay fees averaging over 7% for transfers that typically take 24 hours or more to reach their recipients.

Wave’s mission is to change that by making sending money anywhere in the world as easy and affordable as sending a text. Since 2014, their app has allowed Africans in the US, the UK, and Canada to send money instantly and affordably to mobile money wallets in Kenya, Uganda, & Tanzania, saving their users countless hours and tens of millions of dollars in transfer fees.

Distribution team

Distribution team members work to help Wave reach more users. They spend most of their time talking to users in the diaspora communities who send remittances to East Africa.

See details

Are you an employer?
Get in touch to let us know about a high impact job.

Organisations we recommend

Some of the best jobs are never advertised and are created for the right applicants, so here is our list of some of the best organisations within each of our recommended problem areas. These are all potentially very high-impact places to work (in any role), and many can also help you to develop great career capital. To see why we picked these organisations, read the full problem profile.

  • GiveWell conducts in-depth research to find the best charities that help people in the developing world. See current vacancies. Its partner the Open Philanthropy Project researches giving opportunities in fields other than global health and poverty. See current vacancies. Disclaimer of conflict of interest: we have received a grant from the Open Philanthropy Project.
  • 80,000 Hours – yes, that’s us. We do research into the careers which do the most good and help people pursue them. If you’d like to express interest in working with us, fill out this short form.
  • The Centre for Effective Altruism conducts research into fundamental questions on how to do the most good, and encourages donations to the best charities working on priority problems. It includes the project Giving What We Can, which encourages people to pledge 10% of their income to the most effective organisations for helping others. See current vacancies. If you’d like to express interest in working at the Centre for Effective Altruism, fill out this short form. Disclaimer of conflict of interest: we are financially sponsored by the Centre for Effective Altruism.
  • Founder’s Pledge encourages entrepreneurs to make a legally binding commitment to donate at least 2% of their personal proceeds to charity when they sell their business.

See our methodology here.

Didn’t find anything?

See the full list of organisations we sometimes recommend.

Never miss a high impact job

Get our curated list of job openings sent to your inbox once a month.