Top recommended organisations
We think getting a job at one of these organisations is a promising route to working on some of the world’s most pressing problems.
AI labs in industry that have technical safety teams, or are focused entirely on safety:
- Anthropic is an AI safety and research company working on building reliable, interpretable, and steerable AI systems. They focus on empirical AI safety research. Anthropic cofounders Daniela and Dario Amodei gave an interview about the lab on the Future of Life Institute podcast. On our podcast, we spoke to Chris Olah, who leads Anthropic’s research into interpretability, and Nova DasSarma, who works on information security at Anthropic. See current vacancies.
- The Center for AI Safety is a nonprofit aiming to reduce high-consequence risks from artificial intelligence through technical research and field-building. See current vacancies.
- DeepMind is probably the largest and most well-known research group developing general machine intelligence, and is famous for its work creating AlphaGo, AlphaZero, and AlphaFold. It is not principally focused on safety, but it has two teams dedicated to AI safety. DeepMind is owned by Alphabet (Google’s parent company). We’re only confident about recommending DeepMind roles working specifically on safety, ethics, policy, and security issues. See current vacancies.
- OpenAI is an established AI lab attempting to build safe artificial general intelligence. We’re only confident in recommending opportunities in their policy, safety, and security teams. See current vacancies.
- Ought is a machine learning lab building Elicit, an AI research assistant. Their aim is to align open-ended reasoning by learning human reasoning steps, and to direct AI progress towards helping people evaluate evidence and arguments. See current vacancies.
- Redwood Research is an AI safety research organisation, whose first big project attempted to make sure language models (like GPT-3) produce output following certain rules with very high probability, in order to address failure modes too rare to show up in standard training. See current vacancies.
Conceptual AI safety labs:
- The Alignment Research Center (ARC) is attempting to produce alignment strategies that could be adopted in industry today while also being able to scale to future systems. They focus on conceptual work, developing strategies that could work for alignment and which may be promising directions for empirical work, rather than doing empirical AI work themselves. Their first project was releasing a report on Eliciting Latent Knowledge, the problem of getting advanced AI systems to honestly tell you what they believe (or ‘believe’) about the world. On our podcast, we interviewed ARC founder Paul Christiano about his research (before he founded ARC). See current vacancies.
- The Center on Long-Term Risk works to address worst-case risks from advanced AI. They focus on conflict between AI systems. See current vacancies.
- The Machine Intelligence Research Institute was one of the first groups to become concerned about the risks from machine intelligence in the early 2000s, and its team has published a number of papers on safety issues and how to resolve them. See current vacancies.
AI safety in academia:
- The Algorithmic Alignment Group in the Computer Science and Artificial Intelligence Laboratory at MIT, led by Dylan Hadfield-Menell.
- The Center for Human-Compatible AI at UC Berkeley, led by Stuart Russell, which focuses on academic research to ensure AI is safe and beneficial to humans. (Our podcast with Stuart Russell examines his approach to provably beneficial AI.) See current vacancies.
- Jacob Steinhardt’s research group in the Department of Statistics at UC Berkeley.
- Sam Bowman’s Machine Learning for Language research group at NYU.
- David Krueger’s research group at the Computational and Biological Learning Laboratory at the University of Cambridge.
- The Foundations of Cooperative AI Lab at Carnegie Mellon University.
- The Future of Humanity Institute at the University of Oxford has an AI safety research group.
AI strategy and governance:
- AI Impacts works on forecasting progress in machine intelligence and predicting its likely impacts. See current vacancies.
- The American Association for the Advancement of Science offers Science & Technology Policy Fellowships, which provide hands-on opportunities to apply scientific knowledge and technical skills to important societal challenges. Fellows are assigned for one year to a selected area of the United States federal government, where they participate in policy development and implementation.
- The Center for Security and Emerging Technology at Georgetown University produces data-driven research at the intersection of security and technology (including AI, advanced computing, and biotechnology) and provides nonpartisan analysis to the policy community. See current vacancies.
- The Centre for the Governance of AI is focused on building a global research community that’s dedicated to helping humanity navigate the transition to a world with advanced AI. See current vacancies.
- The Centre for Long-Term Resilience facilitates access to the expertise of leading academics who work on long-term global challenges, such as AI, biosecurity, and risk management policy. It helps convert cutting-edge research into actionable recommendations that are grounded in the UK context.
- DeepMind is probably the largest research group developing general machine intelligence in the Western world. We’re only confident about recommending DeepMind roles working specifically on safety, ethics, policy, and security issues. See current vacancies.
- Epoch is a team of researchers investigating and forecasting the future development of advanced AI. See current vacancies.
- The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. Academics at FHI bring the tools of mathematics, philosophy, and social sciences to bear on big-picture questions about humanity and its prospects.
- The Legal Priorities Project is an independent, global research project founded by researchers from Harvard University. It conducts legal research that tackles the world’s most pressing problems, influenced by the principles of effective altruism and longtermism. See current vacancies.
- OpenAI is an established AI lab attempting to build safe artificial general intelligence. We’re only confident in recommending opportunities in their policy, safety, and security teams. See current vacancies.
- United States Congress (for example, as a congressional staffer).
- The United States Office of Science and Technology Policy works to maximise the benefits of science and technology to advance health, prosperity, security, environmental quality, and justice for all Americans.
Biosecurity and pandemic preparedness:
- Alvea is a biotechnology company that works on medical countermeasures against infectious diseases, starting with new vaccines against SARS-CoV-2 and its variants.
- The American Association for the Advancement of Science offers Science & Technology Policy Fellowships, which provide hands-on opportunities to apply scientific knowledge and technical skills to important societal challenges. Fellows are assigned for one year to a selected area of the United States federal government, where they participate in policy development and implementation.
- The Center for International Security and Cooperation is Stanford University’s hub for researchers tackling some of the world’s most pressing security and international cooperation issues. See current vacancies.
- The Council on Strategic Risks is dedicated to anticipating, analysing, and addressing this century’s core systemic risks to security, with special examination of the ways in which these risks intersect and exacerbate one another.
- The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. Academics at FHI bring the tools of mathematics, philosophy, and social sciences to bear on big-picture questions about humanity and its prospects.
- The Johns Hopkins Center for Health Security explores how new policy approaches, scientific advances, and technological innovations can strengthen health security and save lives. See current vacancies.
- The Nucleic Acid Observatory project aims to pioneer widespread, pathogen-agnostic, untargeted metagenomic sequencing of our environment to provide reliable early warning of all biological threats – including those we have never seen before. It is a project of Kevin Esvelt’s group at the MIT Media Lab (see below).
- The Sabeti Lab at Harvard University uses computational methods and genomics to understand mechanisms of evolutionary adaptation in humans and pathogens. See current vacancies.
- Kevin Esvelt’s Sculpting Evolution Group at the MIT Media Lab is a group of biotechnologists working to cultivate wisdom and guard against catastrophe via evolutionary and ecological engineering. Projects span from robotics and machine learning, to preventing catastrophic misuse of biotechnology, to working with wild populations and ecosystems.
- The Secure DNA project is a global team of academic life scientists, cryptographers, and policy analysts working to develop an automated system capable of secure and universal DNA synthesis screening, which will be freely available everywhere. See current vacancies.
Advocacy for animals on factory farms:
- Animal Charity Evaluators conducts research to find the highest impact ways to help non-human animals. See current vacancies.
Animal Charity Evaluators’ top charities for factory farming:
- The Humane League runs programmes, such as corporate campaigns and grassroots outreach, that aim to persuade individuals and organisations to adopt behaviours that reduce farmed animal suffering. See current vacancies.
- Faunalytics is a U.S.-based organisation working to connect animal advocates with information relevant to advocacy.
Development of meat substitutes:
- The Good Food Institute seeks out entrepreneurs and scientists to join or form start-ups focused on producing plant-based and cultured meat, and provides advice and lobbying to help them succeed. See current vacancies.
- There are many organisations developing alternative proteins and meat substitutes such as Impossible Foods and Beyond Meat. There are now so many roles in these organisations that we do not list them on this job board.
Forecasting:
- The Decision Lab is a socially conscious applied research firm dedicated to empowering the world to make better decisions. See current vacancies.
- Good Judgment, cofounded by Professor Philip Tetlock, is a for-profit company that maintains a global network of elite ‘superforecasters’ who collaborate to tackle clients’ forecasting questions with unparalleled accuracy.
- Metaculus is a community dedicated to generating accurate predictions about future real-world events by aggregating the collective wisdom, insight, and intelligence of its participants. See current vacancies.
- Open Philanthropy uses an approach inspired by effective altruism to identify high-impact giving opportunities across a wide range of problem areas, shares this research freely online, and uses it to advise top philanthropists on where to give. See current vacancies. Open Philanthropy is 80,000 Hours’ largest funder.
- The Quantified Uncertainty Research Institute aims to advance forecasting and epistemics to improve the long-term future of humanity, which it does by conducting research and making software.
Founding new organisations:
- Charity Entrepreneurship runs an incubation programme for potential founders of new organisations. They also publish charity ideas in meta effective altruism, global health, animal welfare, and mental health.
Other project ideas:
- Ideas for megaprojects on the EA Forum
- Big biosecurity projects by Andrew Snyder-Beattie and Ethan Alley on the EA Forum
- Gaps in effective altruism by Michelle Hutchinson on the EA Forum
- Our profile on founding effective international development charities
- Alternative protein project ideas by the Good Food Institute (and you may also want to listen to our podcast with the founder, Bruce Friedrich)
Global health and development:
- GiveWell conducts thorough research to find the best charities available to help people in the developing world. See current vacancies.
- Charity Entrepreneurship helps people start new charities that have the potential to become recommended by GiveWell, such as the Lead Exposure Elimination Project and Fortify Health. See current vacancies.
GiveWell’s top charities:
- Malaria Consortium’s seasonal malaria chemoprevention programme gives children antimalarial medicine during the four months of the year when malaria infection rates are highest. See current vacancies.
- Against Malaria Foundation provides funding for antimalarial bed net distributions.
- Helen Keller International’s vitamin A supplementation programme. See current vacancies and internships.
- New Incentives runs a conditional cash transfer programme in North West Nigeria that seeks to increase uptake of routine immunisations. See current vacancies.
Grantmaking and philanthropy:
- The Berkeley Existential Risk Initiative provides free services and support to university research groups working to reduce existential risk. See current vacancies. 80,000 Hours has received funding from the Berkeley Existential Risk Initiative.
- The Center on Long-term Risk addresses worst-case risks from the development and deployment of advanced AI systems. It is currently focused on conflict scenarios as well as technical and philosophical aspects of cooperation. Their work includes conducting interdisciplinary research, making and recommending grants, and building a community of professionals and other researchers around these priorities. See current vacancies.
- The Forethought Foundation for Global Priorities Research aims to promote academic work that addresses the question of how to use our scarce resources to improve the world as much as possible, with a particular focus on influencing the very long-run future. See current vacancies.
- Founders Pledge encourages entrepreneurs to make a legally binding commitment to donate at least 2% of their personal proceeds to charity when they sell their business. See current vacancies.
- Giving What We Can is a community of effective givers. It provides the support, community, and information that donors need to do the most good with their charitable giving. See current vacancies.
- Longview Philanthropy designs and executes custom giving strategies for major donors, with a focus on using evidence and reason to find the highest-impact opportunities to protect future generations. See current vacancies.
- Open Philanthropy uses an approach inspired by effective altruism to identify high-impact giving opportunities across a wide range of problem areas, and uses it to advise top philanthropists on where to give. See current vacancies. Open Philanthropy is 80,000 Hours’ largest funder.
- Rethink Priorities is a research organisation that conducts critical research to inform policymakers and major foundations about how to best help people and nonhuman animals in both the present and the long-term future — spanning everything from animal welfare to the threat of nuclear war. See current vacancies.
Wikipedia has a list of the wealthiest charitable foundations.
China-related roles:
- The Center for Security and Emerging Technology at Georgetown University produces data-driven research at the intersection of security and technology (including AI, advanced computing, and biotechnology) and provides nonpartisan analysis to the policy community. See current vacancies.
- Concordia Consulting is currently focused on promoting the safe and responsible development of artificial intelligence. To do this, it advises and connects leaders from industry and academia in China, the United States, and Europe, drawing upon its deep cross-cultural expertise and an extensive network of advisers and collaborators.
- The Council on Foreign Relations is an independent and nonpartisan membership organisation, think tank, and publisher dedicated to helping stakeholders and interested citizens better understand the world and the foreign policy choices facing the United States and other countries. See current vacancies.
- The Schwarzman Scholars programme is a one-year, fully-funded master’s programme at Tsinghua University in Beijing, designed to build a global community of future leaders who will deepen understanding between China and the rest of the world.
- The Yenching Scholars programme at Peking University aims to build bridges between China and the rest of the world through an interdisciplinary master’s programme in China Studies.
Building the effective altruism community:
- 80,000 Hours — yes, that’s us. We research the careers that do the most good and help people pursue them. See our current vacancies.
- The Berkeley Existential Risk Initiative works to improve human civilisation’s long-term prospects for survival and flourishing by providing free services and support to university research groups working to reduce existential risk. See current vacancies.
- The Centre for Effective Altruism works to coordinate the effective altruism community. They run the EA Global conference series, the EA Forum, and effectivealtruism.org. See current vacancies.
- Effective Ventures’ Operations team (EV Ops) provides operations support to all of the organisations in the Effective Ventures group. This includes 80,000 Hours, the Centre for Effective Altruism, the Forethought Foundation, GovAI, Longview Philanthropy, and other organisations. See current vacancies.
- The Forethought Foundation for Global Priorities Research aims to promote academic work that addresses the question of how to use our scarce resources to improve the world as much as possible, with a particular focus on influencing the very long-run future. See current vacancies.
- GiveWell conducts thorough research to find the best charities available to help people in the developing world. See current vacancies.
- Global Challenges Project works to inspire a new generation of students to tackle the world’s most pressing problems with their careers.
- Lightcone Infrastructure is a LessWrong spin-out that builds services and infrastructure for people working to help safeguard humanity’s long-term future.
- Longview Philanthropy designs and executes custom giving strategies for major donors, with a focus on using evidence and reason to find the highest-impact opportunities to protect future generations. See current vacancies.
- Open Philanthropy uses an approach inspired by effective altruism to identify high-impact giving opportunities across a wide range of problem areas, shares this research freely online, and uses it to advise top philanthropists on where to give. See current vacancies. Open Philanthropy is 80,000 Hours’ largest funder.
- Rethink Priorities is a research organisation that conducts critical research to inform policymakers and major foundations about how to best help people and nonhuman animals in both the present and the long-term future — spanning everything from animal welfare to the threat of nuclear war. See current vacancies.
Global priorities research:
- The Forethought Foundation for Global Priorities Research aims to promote academic work that addresses the question of how to use our scarce resources to improve the world as much as possible, with a particular focus on influencing the very long-run future. See current vacancies.
- The Global Priorities Institute is an interdisciplinary research centre at the University of Oxford. It conducts foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible, using the tools of multiple academic disciplines (especially philosophy and economics) to explore the issues at stake. See current vacancies, or submit a general expression of interest.
- Longview Philanthropy designs and executes custom giving strategies for major donors, with a focus on using evidence and reason to find the highest-impact opportunities to protect future generations. See current vacancies.
- Open Philanthropy uses an approach inspired by effective altruism to identify high-impact giving opportunities across a wide range of problem areas, shares this research freely online, and uses it to advise top philanthropists on where to give. See current vacancies. Open Philanthropy is 80,000 Hours’ largest funder.
- Our World in Data is a scientific online publication that uses research and data to make progress against the world’s largest problems. See current vacancies.
- Rethink Priorities is a research organisation that conducts critical research to inform policymakers and major foundations about how to best help people and nonhuman animals in both the present and the long-term future — spanning everything from animal welfare to the threat of nuclear war. See current vacancies.