Ideas for high impact careers beyond our priority paths

Below we list some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future.

Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven’t written full profiles for them yet—for example policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective.

Others, like information security, we think might be as promising for many people as our priority paths, but because we haven’t investigated them much we’re still unsure.

Still others seem like they’ll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit—for example research management.

Finally some—like becoming a public intellectual—clearly have the potential for a lot of impact, but we can’t recommend them widely because they don’t have the capacity to absorb a large number of people, are particularly risky, or both.

Who is best suited to pursue these paths? The answer is different for each one. But in general, pursuing a career where less research has been done on how to have a large impact (especially if few of your colleagues will share your perspective on how to think about impact) may require you to think especially critically and creatively about how to do an unusual amount of good in that career.

Continue reading →

Global issues beyond 80,000 Hours’ current priorities

Here we list over 30 global issues beyond the ones we usually prioritize most highly in our work that you might consider focusing your career on tackling.

Although we spend the majority of our time at 80,000 Hours on our highest priority problem areas, and we recommend working on them to many of our readers, these are just the most promising issues among those we’ve spent time investigating. There are many other global issues that we haven’t properly investigated, and which might be very promising for more people to work on.

In fact, we think working on some of the issues listed below could be as high-impact for some people as working on our priority problem areas — though we haven’t looked into them enough to be confident.

See our full article on different global problems for more explanation of our prioritization and advice on choosing an area to focus on. (The full article will also be kept updated as we learn more, whereas this blog post won’t be.)

Potential highest priorities

The following are some global issues that seem like they might be especially pressing from the perspective of improving the long-term future. We think these have a chance of being as pressing for people to work on as our priority problems, but we haven’t investigated them enough to know.

Great power conflict

A large violent conflict between major powers such as the US,

Continue reading →

#83 – Jennifer Doleac on ways to prevent crime other than police and prisons

…they randomly determined when officers would take the training… so it’s a really nice natural experiment. And they found that this one-day training program pretty dramatically reduced both complaints and use of force.

Jennifer Doleac

The killing of George Floyd has prompted a great deal of debate over whether the US should shrink its police departments. The research literature suggests that the presence of police officers does reduce crime. But police aren’t cheap, and, as is increasingly recognised, they impose substantial harms on the populations they are meant to be protecting, especially communities of colour.

So maybe we ought to shift our focus to unconventional but effective approaches to crime prevention — approaches that would shrink the need for police or prisons and the human toll they bring with them.

Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three alternative ways to effectively prevent crime: better street lighting, cognitive behavioral therapy, and lead abatement.

One of Jennifer’s papers used the switch into and out of daylight saving time as a ‘natural experiment’ to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double.

The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You’re just more likely to get caught.

You might think: “Well, people will just commit crime in the morning instead”. But it looks like criminals aren’t early risers, and that doesn’t happen.

(Incidentally, a different experiment used the discontinuity in daylight saving time to quantify racial bias in police traffic stops.)

While we can’t keep the sun out all day, just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone.

On her unusually rigorous podcast Probable Causation, Jennifer interviewed Aaron Chalfin, who studied what happened when very bright streetlights were randomly added to some public housing complexes but not others. His team found the lights reduced outside night-time crime by a massive 36%, even after taking account of possible displacement to other locations.

The second approach is cognitive behavioral therapy (CBT), in which you’re taught to slow down your decision-making and think through your assumptions before acting.

One randomised controlled trial looked at schools and juvenile detention facilities in Chicago, and compared kids randomly assigned to receive CBT with those who weren’t. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%.

Jennifer says the program isn’t that expensive, and its benefits are massive. Everyone would probably benefit from being able to talk through their problems and figure out why they make the decisions they do, but it might be especially helpful for people who’ve grown up with the trauma of violence in their lives.

A somewhat similar study of one-day ‘procedural justice’ training sessions for police officers in Chicago found they reduced civilian complaints against police by 10%.

Finally, Jennifer thinks that reducing lead levels might be the best buy of all in crime prevention.

There is really compelling evidence that lead not only increases crime, but also dramatically reduces educational outcomes.

In the US and other countries, there’s been a lengthy and mysterious drop in crime since the mid-1990s; crime rates are now just 25–50% of what they were in 1993.

That drop coincided with gasoline being deleaded. Before that, exhaust from cars would spread lead all over the place. While there’s no conclusive evidence that this huge drop in crime was due to kids growing up in a less polluted environment, there is compelling evidence that lead exposure does increase crime.

While average lead levels are much lower nowadays, some places still have shockingly high levels. Famously, Flint, Michigan still has major problems with lead in its water, but it’s far from the worst.

Jennifer believes that lead affects people’s brains in such a negative way that driving exposure down even further would be extremely cost-effective for its crime-reduction benefits alone, even setting aside broader benefits to people’s health.

In today’s conversation, Rob and Jennifer also cover, among many other things:

  • Misconduct, hiring practices and accountability among US police
  • Procedural justice training
  • Overrated policy ideas
  • Policies to try to reduce racial discrimination
  • The effects of DNA databases
  • Diversity in economics
  • The quality of social science research

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#82 – James Forman Jr on reducing the cruelty of the US criminal legal system

…we have created the largest prison system in the world and some of the highest barriers to re-entry and re-employment for people when they get back out… There is no area of our criminal legal system I could point to and say, “We’re good here”.

James Forman Jr

No democracy has ever incarcerated as many people as the United States. To get its incarceration rate down to the global average, the US would have to release 3 in 4 people in its prisons today.

The effects on Black Americans have been especially severe: Black people make up 12% of the US population but 33% of its prison population. In the early 2000s, when incarceration reached its peak, the US government estimated that 32% of Black boys would go to prison at some point in their lives, 5.5 times the figure for white boys.

Contrary to popular understanding, nonviolent drug offenses account for less than a fifth of the incarcerated population. The only way to get the US incarceration rate near the global average is to shorten prison sentences for so-called ‘violent criminals’, a politically toxic idea. But could we change that?

According to today’s guest, Professor James Forman Jr — a former public defender in Washington DC, Pulitzer Prize-winning author of Locking Up Our Own: Crime and Punishment in Black America, and now a professor at Yale Law School — there are two things we have to do to make that happen.

First, he thinks we should lose the term ‘violent offender’, and maybe even ‘violent crime’. When you say ‘violent crime’, most people immediately think of murder and rape, but those are only a small fraction of the crimes the law deems violent.

In reality, the crime that puts the most people in prison in the US is robbery. And the law says that robbery is a violent crime whether a weapon is involved or not. By moving away from the catch-all category of ‘violent criminals’ we can judge the risk posed by individual people more sensibly.

Second, he thinks we should embrace the restorative justice movement. Instead of asking “What was the law? Who broke it? What should the punishment be?”, restorative justice asks “Who was harmed? Who harmed them? And what can we as a society, including the person who committed the harm, do to try to remedy that harm?”

Instead of being narrowly focused on how many years people should spend in prison for the purpose of retribution, it starts a different conversation.

You might think this apparently softer approach would be unsatisfying to victims of crime. But Forman has discovered that a lot of victims of crime find that the current system doesn’t help them in any meaningful way. What they want to know above all else is: why did this happen to me?

The best way to find that out is to actually talk to the person who harmed them, and in doing so gain a better understanding of the underlying factors behind the crime. The restorative justice approach facilitates these conversations in a way the current system doesn’t, and can include restitution, apologies, and face-to-face reconciliation.

The city of Washington DC has demonstrated another way to reduce the number of people incarcerated for violent crimes. It recently passed a law that gives anyone sentenced to more than 15 years in prison the right to return to court after those 15 years, show a judge all the positive ways they’ve changed, and petition for a new sentence.

It has also moved aggressively to bring in restorative justice, with a focus on juvenile courts.

So, although the road is hard, James does see examples of jurisdictions really trying to tackle the core of the problem of mass incarceration.

That’s just one topic of many covered in today’s episode, with much of the conversation focusing on Forman’s 2018 book Locking Up Our Own — an examination of the historical origins of contemporary criminal legal practices in the US, and his experience setting up a charter school for at-risk youth in DC.

Rob and James also discuss:

  • The biggest problems in policing and the criminal legal system today
  • How racism shaped the US criminal legal system
  • How Black America viewed policing through the 20th century
  • How class divisions fostered a ‘tough on crime’ approach
  • Important recent successes
  • How you can have a positive impact as a public prosecutor

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#81 – Ben Garfinkel on scrutinising classic AI risk arguments

I’m a bit worried that sometimes the effective altruism community sends a signal that, “Oh, AI is the most important thing. It’s the most important thing by such a large margin”. That even if you’re doing something else that seems quite good that’s pretty different, you should switch into it. I basically feel like I would really want the arguments to be much more sussed out, and much more well analysed before I really feel comfortable advocating for that.

Ben Garfinkel

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there’s very little existing writing on existential accidents. Some more recent AI risk arguments do seem plausible to Ben, but they’re fragile and difficult to evaluate, since they haven’t yet been expounded at length.

Very few sceptical experts have actually sat down and fully engaged with the classic arguments, writing down point by point where they disagree or where they think the mistakes are. As a result, Ben has probably scrutinised them as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents tend to rely on fuzzy, abstract concepts (like optimisation power, general intelligence, or goals) and on toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it’s really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them.

But Ben points out that it’s also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can’t specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things:

  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Misconceptions about effective altruism

Effective altruism is widely misunderstood, even among its supporters.

A recent paper – The Definition of Effective Altruism by Will MacAskill – lists some of the most common misconceptions. It’s aimed at academic philosophers, but works as a general summary.

In short, effective altruism is commonly viewed as being about the moral obligation to donate as much money as possible to evidence-backed global poverty charities, or other measurable ways of making a short-term impact.

In fact, effective altruism is not about any specific way of doing good.

Rather, the core idea is that some ways of contributing to the common good are far more effective than is typical. In other words, ‘best’ is far better than ‘pretty good’, and seeking out the best will let you have far more impact. (If I were writing a business book, I would say it’s the ’80/20 principle’ applied to doing good.)

Insofar as people interested in effective altruism do in practice focus on specific ways of doing good, donating to global health charities is just one. As explained below, a majority focus on different issues, such as seeking to help future generations by reducing global catastrophic risks, or reducing animal suffering by ending factory farming.

Moreover, they often do this by working on high-risk high-return projects rather than evidence-backed ones, and through research, policy-change and entrepreneurship rather than donations.

What unites people interested in effective altruism is that they pose the question: how can I best contribute with what I’m willing to give?

Continue reading →

#80 – Professor Stuart Russell on why our approach to AI is broken and how to fix it

…if no one’s allowed to talk about the problems, then no one is going to fix them. So it’s kind of like saying you’ve come across a terrible accident and you say, “Well, no one should call an ambulance because someone’s going to call an ambulance”.

Stuart Russell

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the ‘standard model’ of AI development, in which intelligence is measured as the ability to achieve some definite, completely known objective that we’ve stated explicitly. This is so obvious it almost doesn’t even seem like a design choice, but it is.

Unfortunately there’s a big problem with this approach: it’s incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we’ve asked it to. That’s true even if the goal isn’t what we really want, or the methods it’s choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube’s recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn’t something we wanted, but it helped achieve the algorithm’s objective: maximise viewing time.

Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we’ve asked for.

This ‘alignment’ problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we’re ever to hand over much of the economy to thinking machines, we can’t count on ourselves correctly saying exactly what we want the AI to do every time.

Stuart isn’t just dissatisfied with the current model, though: he has a specific solution. According to him, we need to redesign AI around three principles:

  1. The AI system’s objective is to achieve what humans want.
  2. But the system isn’t sure what we want.
  3. And it figures out what we want by observing our behaviour.

Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI.

For instance, a machine built on these principles would be happy to be turned off if that’s what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, “you can’t fetch the coffee if you’re dead.”

These principles lend themselves towards machines that are modest and cautious, and check in when they aren’t confident they’re truly achieving what we want.

We’ve made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to distinguish between options we’ve rejected because we considered them and decided they were bad ideas, and options we simply haven’t thought about at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political.

When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? How considerate of other people’s interests do we expect AIs to be? How do we avoid them being used in malicious or anti-social ways?

And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Despite all these problems, the rewards of success could be enormous. If cheap thinking machines can one day do most of the work people do now, it could dramatically raise everyone’s standard of living, like a second industrial revolution.

Without having to work just to survive, people might flourish in ways they never have before.

In today’s conversation we cover, among many other things:

  • What are the arguments against being concerned about AI?
  • Should we develop AIs to have their own ethical agenda?
  • What are the most urgent research questions in this area?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

What 80,000 Hours learned by interviewing people we respect ‘anonymously’

We recently released the fifteenth and final installment in our series of posts with anonymous answers.

These are from interviews with people whose work we respect and whose answers we offered to publish without attribution.

It features answers to 23 different questions, including “How have you seen talented people fail in their work?” and “What’s one way to be successful you don’t think people talk about enough?”

We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they spanned a wide range of opinions.

For example, one person had seen talented people fail by being too jumpy:

“It seems particularly common in effective altruism for people to be happy to jump ship onto some new project that seems higher impact at the time. And I think that this tendency systematically underestimates the costs of switching, and systematically overestimates the benefits — so you get kind of a ‘grass is greener’ effect.

In general, I think, if you’re taking a job, you should be imagining that you’re going to do that job for several years. If you’re in a job, and you’re not hating it, it’s going pretty well — and some new opportunity presents itself, I think you should be extremely reticent to jump ship.

I think there are also a lot of gains from focusing on one activity or a particular set of activities;

Continue reading →

Anonymous answers: Are there myths you feel obliged to support publicly? And five other questions.

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the fifteenth and final in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

If we see global problems as puzzles, A), it’s more motivating because puzzles have a solution. B), you cooperate instead of fighting over the solution. I don’t want to call stopping nuclear war fun, but at least I find it personally much more motivating. I think it’s a better way to frame the world.

A.J. Jacobs

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, “You know what, she’s not so bad”.

Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history.

He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His next book will ask: if we reframe global problems as puzzles, would the world be a better place?

This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at a clever blog post that changes styles each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I’m sure I found it more entertaining than almost anyone else will. (Radical Honesty.)

We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.)

Another reason to listen is for the facts:

  • The Bayer aspirin company invented heroin as a cough suppressant
  • Coriander is just the British way of saying cilantro
  • Dogs have a third eyelid to protect the eyeball from irritants
  • and A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.)

One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.)

I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; Rob and the rest of the 80,000 Hours team for their help; the thousands of people who’ll listen to this; my fiancée who let me talk about her to those thousands of people; the construction worker who told me how to get to my subway platform on the morning of the interview; Queen Jadwiga for making bagels popular in the 14th century, which kept me going during the recording; and the folks at the New York reservoir whose work allows A.J.’s coffee to be made, without which he’d never have had the energy to talk to me for more than five minutes. (Thanks a Thousand.)

We also discuss:

  • The most extreme ideas A.J.’s ever considered
  • Respecting your older self
  • Blackmailing yourself
  • The experience of having his book made into a CBS sitcom
  • Talking to friends and family about effective altruism
  • Utilitarian movie reviews
  • The value of fiction focused on the long-term future
  • Doing good as a journalist
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#78 – Danny Hernandez on forecasting and measuring some of the most important drivers of AI progress

I think there is something that’s often extremely helpful and neglected, which is to try and find a decision boundary. […] When I think about transformative science, I think about the fact that a lot of science comes out of great scientists like Einstein or Turing. What if at some point AI was making it like there were more such scientists? […] What chance would you need to give that to be interested in AI or to want to work on AI? Is that a 1% chance in 10 years? Is that like a 10% chance in 10 years? What is the threshold?

Danny Hernandez

Companies use about 300,000 times more computation to train the best AI systems today than they did in 2012, and algorithmic innovations have also made them 25 times more efficient at the same tasks.

These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today’s episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy.
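To get a feel for how fast that headline figure is, here is a rough back-of-the-envelope calculation. The dates below are my own approximations for the window the AI and Compute analysis covers (roughly AlexNet in mid-2012 to AlphaGo Zero in late 2017), not exact figures from the paper:

```python
import math

# A 300,000x increase in training compute over roughly 5.25 years
# implies a doubling time of only a few months.
factor = 300_000
months = (2017.75 - 2012.5) * 12    # approximate span in months

doublings = math.log2(factor)        # number of doublings needed
doubling_time = months / doublings   # months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
# → 18.2 doublings, one every 3.5 months
```

That is in the same ballpark as the roughly 3.4-month doubling time OpenAI reported, and helps explain why Danny thinks this trend is worth measuring carefully.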

Danny and I talk about how to understand his team’s results and what they mean (and don’t mean) for how we should think about progress in AI going forward.

Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field.

If this research sounds appealing, you might be interested in applying to join OpenAI’s Foresight team — they’re currently hiring research engineers.

In the interview, Danny and I also discuss a range of other topics, including:

  • The question of which experts to believe
  • Danny’s journey to working at OpenAI
  • The usefulness of “decision boundaries”
  • The importance of Moore’s law for people who care about the long-term future
  • What OpenAI’s Foresight Team’s findings might imply for policy
  • The question of whether progress in the performance of AI systems is linear
  • The safety teams at OpenAI and who they’re looking to hire
  • One idea for finding someone to guide your learning
  • The importance of hardware expertise for making a positive impact

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#77 – Professor Marc Lipsitch on whether we're winning or losing against COVID-19

…I think it remains to be seen whether we in the United States can do better than just letting everybody get it gradually … If it’s about 5 or 10% of the population now, I can’t envision a scenario where we have a vaccine or a really good treatment before it’s about twice that. … Clearly we need a lot of creative thinking about alternative ways to make life go on.

Marc Lipsitch

In March Professor Marc Lipsitch — director of Harvard’s Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, his social media following growing 40-fold and journalists knocking down his door, as everyone turned to him for information they could trust.

Here he lays out where the fight against COVID-19 stands today, why he’s open to deliberately giving people COVID-19 to speed up vaccine development, and how we could do better next time.

As Marc tells us, island nations like Taiwan and New Zealand are successfully suppressing SARS-CoV-2. But everyone else is struggling.

Even Singapore, with plenty of warning and one of the best test and trace systems in the world, lost control of the virus in mid-April after successfully holding back the tide for two months.

This doesn’t bode well for how the US or Europe will cope as they ease their lockdowns. It also suggests it would have been exceedingly hard for China to stop the virus before it spread overseas.

But sadly, there’s no easy way out.

The original estimates of COVID-19’s infection fatality rate, of 0.5-1%, have turned out to be basically right. And the latest serology surveys indicate only 5-10% of people in countries like the US, UK and Spain have been infected so far, leaving us far short of herd immunity. To get there, even these worst affected countries would need to endure something like ten times the number of deaths they have so far.

Marc has one good piece of news: research suggests that most of those who get infected do indeed develop immunity, for a while at least.

To escape the COVID-19 trap sooner rather than later, Marc recommends we go hard on all the familiar options — vaccines, antivirals, and mass testing — but also open our minds to creative options we’ve so far left on the shelf.

Despite the importance of his work, even now the training and grant programs that produced the community of experts Marc is a part of are shrinking. We look at a new article he’s written about how to instead build and improve the field of epidemiology, so humanity can respond faster and smarter next time we face a disease that could kill millions and cost tens of trillions of dollars.

We also cover:

  • How listeners might contribute as future contagious disease experts, or donors to current projects
  • How we can learn from cross-country comparisons
  • Modelling that has gone wrong in an instructive way
  • What governments should stop doing
  • How people can figure out who to trust, and who has been most on the mark this time
  • Why Marc supports infecting people with COVID-19 to speed up the development of a vaccine
  • How we can ensure there’s population-level surveillance early during the next pandemic
  • Whether people from other fields trying to help with COVID-19 have done more good than harm
  • Whether it’s experts in diseases, or experts in forecasting, who produce better disease forecasts

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#76 – Tara Kirk Sell on COVID-19 misinformation, who's over and under-performed, and what we can reopen first

…we all went into lockdown at incredible cost to ourselves right now, and to our kids in the future… and still six weeks go by and I don’t see huge improvements in testing capacity, in serology, in PPE, in hospital capacity. These things just haven’t happened…

Tara Kirk Sell

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we’re especially thankful for is the Johns Hopkins Center for Health Security (CHS).

CHS focuses on protecting us from major biological, chemical or nuclear disasters, through research that informs governments around the world. While this pandemic surprised many, just last October the Center ran a simulation of a ‘new coronavirus’ scenario to identify weaknesses in our ability to quickly respond. Their expertise has given them a key role in figuring out how to fight COVID-19.

Today’s guest, Dr Tara Kirk Sell, did her PhD in policy and communication during disease outbreaks, and has worked at CHS for 11 years on a range of important projects.

Last year she was a leader on Collective Intelligence for Disease Prediction, designed to sound the alarm about upcoming pandemics before others are paying attention. Incredibly, the project almost closed in December, with COVID-19 just starting to spread around the world — but received new funding that allowed the project to respond quickly to the emerging disease.

She also contributed to a recent report attempting to explain the risks of specific types of activities resuming when COVID-19 lockdowns end.

It’s not possible to reach zero risk — so differentiating activities on a spectrum is crucial. Choosing wisely can help us lead more normal lives without reviving the pandemic.

Dance clubs will have to stay closed, but hairdressers can adapt to minimise transmission, and Tara (who happens to also be an Olympic silver medalist swimmer) suggests outdoor non-contact sports could resume soon at little risk.

Her latest work deals with the challenge of misinformation during disease outbreaks.

Analysing the Ebola communication crisis of 2014, she and her colleagues found that even trained coders with public health expertise sometimes needed help to distinguish between true and misleading tweets — showing the danger of a continued lack of definitive information surrounding a virus and how it’s transmitted.

The challenge for governments is not simple. If they acknowledge how much they don’t know, people may look elsewhere for guidance. But if they pretend to know things they don’t, or actively mislead the public, the result can be a huge loss of trust.

Despite their intense focus on COVID-19, researchers at the Center for Health Security know that this is not a one-time event. Many aspects of our collective response this time around have been alarmingly poor, and it won’t be long before Tara and her colleagues need to turn their mind to next time.

You can now donate to CHS through Effective Altruism Funds. Donations made through EA Funds are tax-deductible in the US, the UK, and the Netherlands.

Tara and Rob also discuss:

  • Who has overperformed and underperformed expectations during COVID-19?
  • When are people right to mistrust authorities?
  • The media’s responsibility to be right
  • What policies should be prioritised for next time
  • Should we prepare for future pandemics while COVID-19 is still going?
  • The importance of keeping non-COVID health problems in mind
  • The psychological difference between staying home voluntarily and being forced to
  • Mistakes that we in the general public might be making
  • Emerging technologies with the potential to reduce global catastrophic biological risks

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

I typically try to get people to go from thinking, “What are a whole host of jobs that would do some good and then which seem most appealing,” to instead thinking, “What things are very most impactful and then… which of those do I think I might be personally well suited for”

Michelle Hutchinson

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on the most plausible paths for them, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths.

I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford’s Global Priorities Institute, and these days I’m 80,000 Hours’ Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers.

We all thought it would be useful to discuss some of those on the show for others to hear. Among other topics we cover:

  • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you’re already accomplishing.
  • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in.
  • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it’s wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations.
  • Our suggested process for writing a longer term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties.
  • Why many listeners aren’t spending enough time finding out about what the day-to-day work is like in paths they’re considering, or reaching out to people for advice or opportunities.

I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it.

If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people:

  1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address.
  2. Who don’t yet have close connections with people working at effective altruist organisations.
  3. Who aren’t strongly locationally constrained.

If you’re unsure, it doesn’t take long to apply and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds.

Want to talk to one of our advisors?

We speak to hundreds of people each year and can offer introductions and answer specific questions you might have. You can join the waitlist here:

Request a career advising session

Also in this episode:

  • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with.
  • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path.
  • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob.

It continues to be awe-inspiring to me how many people I talk to are donating to save lives, making dietary changes to avoid intolerable suffering, and carefully planning their lives to improve the future trajectory of the world. I hope we can continue to support each other in doing those things, and appreciate how important all this work is.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Policy and research ideas to reduce existential risk

In his book The Precipice: Existential Risk and the Future of Humanity, 80,000 Hours trustee Dr Toby Ord suggests a range of research and practical projects that governments could fund to reduce the risk of a global catastrophe that could permanently limit humanity’s prospects.

He compiles over 50 of these in an appendix, which we’ve reproduced below. You may not be convinced by all of these ideas, but they help to give a sense of the breadth of plausible longtermist projects available in policy, science, universities and business.

There are many existential risks and they can be tackled in different ways, which makes it likely that great opportunities are out there waiting to be identified.

Many of these proposals are discussed in the body of The Precipice, which you can buy here. We’ve also got a 3-hour interview with Toby, or you can get Chapter 1 for free by joining our newsletter:

Policy and research recommendations
Engineered Pandemics

  • Bring the Biological Weapons Convention into line with the Chemical Weapons Convention: taking its budget from $1.4 million up to $80 million, increasing its staff commensurately, and granting it the power to investigate suspected breaches.
  • Strengthen the WHO’s ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.

Continue reading →

Anonymous contributors answer: How should the effective altruism community think about diversity?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

But it’s the fourteenth in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

In the words of Drew Endy, most biotechnology has yet to be conceived, let alone made true. And so in this large territory of unknown unknowns, it may be optimistic to presume there are only familiar dangers. So all of these make my concern focus more on human generated events using biology, in some sense, rather than dangers arising through the natural world itself.

Greg Lewis

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise.

The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours.

Today’s guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University’s Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future.

COVID-19 is a vivid reminder that we are vulnerable to biological threats and underprepared to deal with them. We have been unable to suppress the spread of COVID-19 around the world and, tragically, global deaths will at least be in the hundreds of thousands.

How would we cope with a virus that was even more contagious and even more deadly? Greg’s work focuses on these risks — of outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity.

If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature.

There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar.

This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion.

COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources.

But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves.

Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel.

80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible.

In this episode, Howie and Greg cover:

  • Reflections on the first few months of the pandemic
  • Common confusions around COVID-19
  • How COVID-19 compares to other diseases
  • What types of interventions have been available to policymakers
  • Arguments for and against working on global catastrophic biological risks (GCBRs)
  • Why state actors would even use or develop biological weapons
  • How to know if you’re a good fit to work on GCBRs
  • The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

What programmes will 80,000 Hours provide (and not provide) within the effective altruism community?

There are many career services that would be useful to the effective altruism community, and unfortunately 80,000 Hours is not able to provide them all.

In this post, I aim to sum up what we intend to provide and what we can’t, to make it easier for other groups to fill these gaps.

80,000 Hours’ online content is also serving as one of the most common ways that people get introduced to the effective altruism community, but we’re not the ideal introduction for many types of people, which I also list in the section on online articles.

You can see our full plans in our annual review.

Target audience

Our aim is to do the most we can to fill the key skill gaps in the world’s most pressing problems. We think that is the best way we can help to improve the lives of others over the long term.

We think that the best way to do this is – given our small team – to initially specialise on a single target audience, and gradually expand the audience over time.

Given this, most of our effort (say 50%+) is on advice and support for English-speaking people aged 20–35 who might be able to enter one of our current priority paths.

We also aim to put ~30% of our effort into other ways of addressing our priority problems (AI,

Continue reading →

200+ opportunities to work on COVID-19, and 60+ places to get funding

Below is a list of opportunities to help the global response to COVID-19. The list focuses on opportunities in research, policy, technology and startups, especially in the US and UK, and includes jobs, volunteering opportunities, and opportunities to receive funding. It accompanies our article on how to volunteer to help tackle the crisis most effectively.

Continue reading →

80,000 Hours Annual Review – December 2019

We’ve released our 2019 annual review here.

It summarises our annual impact evaluation, and outlines our progress, plans, mistakes and fundraising needs.

The document was initially prepared in Nov 2019. We delayed its release until we heard back from some of our largest donors so that other stakeholders would be fully informed about our funding situation before we asked for their support. Most claims should be taken to be made “as of November 2019.”

We include:

If you would like to go into more detail, we also provide the following optional sections:

You can find our previous evaluations here.

Continue reading →