At 80,000 Hours, we are interested in the question: “if you want to find the best way to have a positive impact with your career, what should you do on the margin?” The ‘on the margin’ qualifier is crucial. We are asking how you can have a bigger impact, given how the rest of society spends its resources.
To help our readers think this through, we publish a list of what we see as the world’s most pressing problems. We rank the top issues by our assessment of where additional work and resources will have the greatest positive impact, considered impartially and in expectation.
Every problem on our list is there because we think it’s very important and a big opportunity for doing good. We’re excited for our readers to make progress on all of them, and think all of them would ideally get more resources and attention than they currently do from society at large.
The most pressing problems are those that have the greatest combination of being:
Large in scale: solving the issue would improve a greater number of lives to a greater extent over the long run.
Neglected by others: the best interventions aren’t already being done.
Tractable: we can make progress if we try.
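To make this concrete, here’s a minimal sketch of how scores for these three factors could be combined into a single rough ranking. The problem names, factor scores, and the simple multiplicative rule are illustrative assumptions for this sketch, not our published estimates or our actual methodology.

```python
# Illustrative sketch only: the problems, scores, and multiplicative rule below
# are assumptions for illustration, not 80,000 Hours' published estimates.

def pressingness(scale: float, neglectedness: float, tractability: float) -> float:
    """Combine the three factors into a rough score for how much good
    an extra unit of work on the problem might do (higher = more pressing)."""
    return scale * neglectedness * tractability

# Hypothetical problems scored 0-10 on each factor.
problems = {
    "Problem A": (9, 7, 3),  # very large and quite neglected, but hard to make progress on
    "Problem B": (5, 4, 8),  # smaller and less neglected, but much more tractable
}

for name, scores in sorted(problems.items(),
                           key=lambda kv: pressingness(*kv[1]),
                           reverse=True):
    print(f"{name}: {pressingness(*scores):.0f}")
```

The point of the sketch is only that the ranking depends on the combination of all three factors, not on any single one of them.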
We’ve recently updated our list. Here are the biggest changes:
We now rank factory farming among the top problems in the world.
Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
How the role of culture has been crucial in enabling human technological progress.
Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
Whether we can and should avoid death by uploading human minds.
And plenty more.
Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
In the field of biosecurity, many experts are concerned with managing information hazards (or infohazards). This is information that some believe could be dangerous if it were widely known — such as the gene sequence of a deadly virus or particular threat models.
Navigating the complexities of infohazards and the potential misuse of biological knowledge is contentious, and experts often disagree about how to approach this issue.
So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the third instalment of our biosecurity anonymous answers series. Below, we present 11 responses from these experts addressing their views on managing information hazards in biosecurity, particularly as they relate to global catastrophic risks.
Some key topics and areas of disagreement that emerged include:
How to balance the need for transparency with the risks of information misuse
The extent to which discussing biological threats could inspire malicious actors
Whether current approaches to information hazards are too conservative or not cautious enough
How to share sensitive information responsibly with different audiences
The impact of information restrictions on scientific progress and problem solving
What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
Why eliminating major age-related diseases might only extend average lifespan by 15 years.
The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
And plenty more.
Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
The idea this week: even some sceptics of AI risk think there’s a real chance of a catastrophe in the next 1,000 years.
That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk.
It’s hard to get to a consensus on the level of risk we face from AI. So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.
The interview covers a lot of issues, but here are some key details that stood out on the topic of AI risk:
In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies.
They cover:
Why training robots is harder than training large language models like ChatGPT.
The biggest engineering challenges that still remain before robots can be widely useful in the real world.
The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.
Whether we should be worried about robot labour affecting human employment.
Recent breakthroughs in robotics, and what cutting-edge robots can do today.
Ken’s work as an artist, where he explores the complex relationship between humans and technology.
And plenty more.
Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Problem profile by Cody Fenwick · Published September 11th, 2024
We think understanding the moral status of digital minds is a top emerging challenge in the world. This means it’s potentially as important as our top problems, but we have a lot of uncertainty about it and the relevant field is not very developed.
The fast development of AI technology will force us to confront many important questions around the moral status of digital minds that we’re not prepared to answer. We want to see more people focusing their careers on this issue, building a field of researchers to improve our understanding of this topic and getting ready to advise key decision makers in the future. We also think people working in AI technical safety and AI governance should learn more about this problem and consider ways in which it might interact with their work.
This is Part Two of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions.
Preventing catastrophic pandemics is one of our top priorities.
But the landscape of pandemic preparedness is complex and multifaceted, and experts don’t always agree about what the most effective interventions are or how resources should be allocated.
So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the second instalment of our biosecurity anonymous answers series.
Below, we present 12 responses from these experts addressing their views on neglected interventions in pandemic preparedness and advice for capable young people entering the field, particularly as it relates to global catastrophic risks.
Some key topics and areas of disagreement that emerged include:
The relative importance of technical interventions versus policy work
The prioritisation of prevention strategies versus response capabilities
The focus on natural pandemic threats versus deliberate biological risks
The role of intelligence and national security in pandemic preparedness
The importance of behavioural science and public communication in crisis response
The potential of various technologies like improved PPE, biosurveillance, and pathogen-agnostic approaches
Here’s what the experts had to say.
Expert 1: Improving PPE and detection technologies
Expert 2: Enhancing security measures against malicious actors
Expert 3: Implementing biosecurity safeguards and behavioural science
Expert 4: Protecting field researchers and advancing vaccine platforms
Expert 5: Focusing on containment and early detection
Expert 6: Balancing policy and technical interventions
Expert 7: Understanding the bioeconomy
Expert 8: Prioritising biosurveillance and risk modelling
Expert 9: Increasing biodefense efforts
Expert 10: Integrating pathogen-agnostic sequencing
Expert 11: Bolstering intelligence and early detection
Expert 12: Promoting biosafety research
Learn more
How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
The challenges of predicting low-probability, high-impact events.
Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
Whether large language models could help or outperform human forecasters.
How people can improve their calibration and start making better forecasts personally.
Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
And plenty more.
Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
It’s become clearer that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
We’re seeing a particularly large increase in the number of roles available in AI governance and policy.
In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.
They cover:
What’s actually in SB 1047, and which AI models it would apply to.
The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
Why California is taking state-level action rather than waiting for federal regulation.
How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects.
They cover:
The scale of potential insect suffering in the wild, on farms, and in labs.
Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
How size bias might help explain why many people assume insects can’t feel pain.
Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
And much more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?
That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the 11 people who left OpenAI to launch Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.
As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.
As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.
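To make the shape of this “if the model can do X, then safeguards Y must be in place first” logic concrete, here’s a minimal sketch. The capability names, safeguard names, and gating rule are hypothetical placeholders, not taken from Anthropic’s RSP or any other company’s actual policy.

```python
# Minimal illustrative sketch of the "if capability X is found, safeguards Y must
# be in place first" logic behind responsible scaling policies. Capability and
# safeguard names are hypothetical, not any company's actual policy.

REQUIRED_SAFEGUARDS = {
    "meaningful_bioweapon_uplift": {"hardened_weight_security", "strict_usage_monitoring"},
    "autonomous_self_exfiltration": {"hardened_weight_security", "containment_controls"},
}

def deployment_allowed(eval_results: dict[str, bool], safeguards_in_place: set[str]) -> bool:
    """Return True only if every dangerous capability flagged by post-training
    evaluations is covered by the safeguards currently in place."""
    for capability, found in eval_results.items():
        if found:
            missing = REQUIRED_SAFEGUARDS.get(capability, set()) - safeguards_in_place
            if missing:
                print(f"Release blocked: {capability} found, missing {sorted(missing)}")
                return False
    return True

# Example: evaluations flag bioweapon uplift, but weight security isn't ready yet,
# so the release stays gated until that safeguard is in place.
print(deployment_allowed(
    {"meaningful_bioweapon_uplift": True, "autonomous_self_exfiltration": False},
    {"strict_usage_monitoring"},
))
```

The commercial incentive alignment Nick describes falls out of exactly this kind of gate: if evaluations find a capability before the corresponding safeguard exists, the release waits.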
Nick points out three big virtues to the RSP approach:
It allows us to set aside the question of when any of these things will be possible, and focus the conversation on what would be necessary if they are possible — something there is usually much more agreement on.
It means we don’t take costly precautions that developers will resent and resist before they are actually called for.
As the policies don’t allow models to be deployed until suitable safeguards are in place, they align a firm’s commercial incentives with safety — for example, a profitable product release could be blocked by insufficient investments in computer security or alignment research years earlier.
Rob then pushes Nick on some of the best objections to the RSP mechanisms he’s found, including:
It’s hard to trust that profit-motivated companies will stick to their scaling policies long term and not water them down to make their lives easier — particularly as commercial pressure heats up.
Even if you’re trying hard to find potential safety concerns, it’s difficult to truly measure what models can and can’t do. And if we fail to pick up a dangerous ability that’s really there under the hood, then perhaps all we’ve done is lull ourselves into a false sense of security.
Importantly, in some cases humanity simply hasn’t invented safeguards up to the task of addressing AI capabilities that could show up soon. Maybe that will change before it’s too late — but if not, we’re being written a cheque that will bounce when it comes due.
Nick explains why he thinks some of these worries are overblown, while others are legitimate but just point to the hard work we all need to put in to get a good outcome.
Nick and Rob also discuss whether it’s essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.
In addition to all of that, Nick and Rob talk about:
What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.
And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at [email protected].
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
Career review by Cody Fenwick · Last updated August 22nd, 2024 · First published June 2023
As advancing AI capabilities gained widespread attention in late 2022 and 2023, interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.
There’s still a lot of uncertainty about which AI governance strategies would be best. Many have proposed policies and strategies aimed at reducing the largest risks, which we discuss below.
But there’s no roadmap here. There’s plenty of room for debate about what’s needed, and we may not have found the best ideas yet in this space. In any case, there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and policy.
Why this could be a high-impact career path
Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.
And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.
We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems.
Blog post by Cody Fenwick · Published August 16th, 2024
The idea this week: mpox and a bird flu virus are testing our pandemic readiness.
Would we be ready for another pandemic?
It became clear in 2020 that the world hadn’t done enough to prepare for the rapid, global spread of a particularly deadly virus. Four years on, our resilience faces new tests.
H5N1 — a strain of bird flu — has been spreading among animals in the United States and elsewhere, with a small number of infections reported in humans.
Here’s what we know about each:
Mpox
Mpox drew international attention in 2022 when it started spreading globally, including in the US and the UK. During that outbreak, around 95,000 cases and about 180 deaths were reported. That wave largely subsided in much of the world, in part due to targeted vaccination campaigns, but the spread of another strain of the virus has sharply accelerated in Central Africa.
The strain driving the current outbreak may be significantly more deadly. Around 22,000 suspected mpox infections and more than 1,200 deaths have been reported in the DRC since January 2023.
Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that certainty is completely unjustified.
Chilling tales about overconfident policies that probably caused significant suffering for decades.
How policymakers can act ethically given real uncertainty.
Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds as sentient as the biological versions.
How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
Why Jonathan is so excited about citizens’ assemblies.
Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
So it’s natural to wonder whether you should try to work at one of the companies that are doing the most to build and shape these future AI systems.
As of summer 2024, OpenAI, Google DeepMind, Meta, and Anthropic seem to be the leading frontier AI companies — meaning they have produced the most capable models so far and seem likely to continue doing so. Mistral and xAI are contenders as well — and others may enter the industry from here.
Why might it be high impact to work for a frontier AI company?
Some roles at these companies might be among the best for reducing risks
We suggest working at frontier AI companies in several of our career reviews because a lot of important safety, governance, and security work is done in them.
Blog post by Stephen Clare · Published August 6th, 2024
The idea this week: totalitarian regimes killed over 100 million people in less than 100 years — and in the future they could be far worse.
That’s because advanced artificial intelligence may prove very useful for dictators. They could use it to surveil their population, secure their grip on power, and entrench their rule, perhaps indefinitely.
This is a serious risk. Many of the worst crimes in history, from the Holocaust to the Cambodian Genocide, have been perpetrated by totalitarian regimes. When megalomaniacal dictators decide massive sacrifices are justified to pursue national or personal glory, the results are often catastrophic.
However, even the most successful totalitarian regimes rarely survive more than a few decades. They tend to be brought down by internal resistance, war, or the succession problem — the possibility of sociopolitical change, including liberalisation, after a dictator’s death.
But that could all be upended if technological advancements help dictators overcome these challenges.
We’re keen to hire another advisor to talk to talented and altruistic people in order to help them find high-impact careers.
If you’ve enjoyed managing, mentoring, or teaching, that’s a great sign you’d enjoy being an 80,000 Hours advisor. We’ve found that prior coaching experience isn’t necessary — backgrounds in a range of fields, like medicine, research, management consulting, and more, have helped our advisors become strong candidates for the role.
For example, Laura González-Salmerón joined us after working as an investment manager, Abigail Hoskin completed her PhD in Psychology, and Matt Reardon was previously a corporate lawyer. But it’s also particularly useful for us to have a broad range of experience on the team, so we’re excited to hear from people with all kinds of backgrounds.
The core of this role is having one-on-one conversations with people to help them plan their careers. We have a tight-knit, fast-paced team, though, so people take on a variety of responsibilities. These include, for example, building networks and expertise in our priority paths, analysing data to improve our services, and writing posts for the 80,000 Hours website or the EA Forum.
What we’re looking for
We’re looking for someone who has:
A strong interest in effective altruism and longtermism
In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.
They cover:
Real-world examples of sophisticated security breaches, and what we can learn from them.
Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
New security measures that Sella hopes can mitigate the growing risks.
Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
And plenty more.
Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore