#81 – Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it’s actually quite difficult to design systems you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there’s very little existing writing on existential accidents. Some more recent AI risk arguments do seem plausible to Ben, but they’re fragile and difficult to evaluate, since they haven’t yet been expounded at length.

There have also been very few skeptical experts who have actually sat down and fully engaged with the classic arguments, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power, general intelligence, or goals, as well as on toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it’s really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them.

But Ben points out that in machine learning we can already train lots of systems to engage in behaviours that are actually quite nuanced and that we can’t specify precisely. If AI systems can recognise faces from images and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn’t think there are any slam-dunks for improving the future. So the fact that AI safety and AI governance offer at least plausible pathways for impact, and remain very neglected, puts them head and shoulders above most areas you might choose to work in.

This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things:

  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#80 – Professor Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the ‘standard model’ of AI development, in which intelligence is measured as the ability to achieve some definite, completely known objective that we’ve stated explicitly. This is so obvious it almost doesn’t even seem like a design choice, but it is.

Unfortunately there’s a big problem with this approach: it’s incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we’ve asked it to. That’s true even if the goal isn’t what we really want, or the methods it’s choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube’s recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn’t something we wanted, but it helped achieve the algorithm’s objective: maximise viewing time.

Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we’ve asked for.

This ‘alignment’ problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we’re ever to hand over much of the economy to thinking machines, we can’t count on ourselves correctly saying exactly what we want the AI to do every time.

Stuart isn’t just dissatisfied with the current model, though: he has a specific solution. According to him, we need to redesign AI around three principles:

  1. The AI system’s objective is to achieve what humans want.
  2. But the system isn’t sure what we want.
  3. And it figures out what we want by observing our behaviour.

Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI.

For instance, a machine built on these principles would be happy to be turned off if that’s what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, “you can’t fetch the coffee if you’re dead.”
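To see the logic in miniature, here’s a toy sketch in Python (our own illustration, not code from the book): a robot that’s genuinely unsure whether its plan helps its owner gets higher expected utility from deferring to a human who can switch it off.

```python
import random

# Toy version of the off-switch logic (a sketch under our own assumptions,
# not Stuart's actual formalism). The robot either acts on its plan
# immediately, or defers to a human who switches it off whenever the plan
# would be bad for them (yielding utility 0 instead).

def value_act(plan_utilities):
    """Expected utility of acting without checking in."""
    return sum(plan_utilities) / len(plan_utilities)

def value_defer(plan_utilities):
    """Expected utility of deferring: the human vetoes bad plans."""
    return sum(max(u, 0.0) for u in plan_utilities) / len(plan_utilities)

# The robot is unsure whether its plan helps (+1) or harms (-1) its owner.
plans = [random.choice([1.0, -1.0]) for _ in range(10_000)]

print(f"Act blindly: {value_act(plans):+.2f}")    # ~0.00 -- a coin flip
print(f"Defer:       {value_defer(plans):+.2f}")  # ~+0.50 -- strictly better
```

The gap exists only because the robot is uncertain: an agent that’s sure its objective is correct sees no upside in letting anyone turn it off.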

These principles lend themselves towards machines that are modest and cautious, and check in when they aren’t confident they’re truly achieving what we want.

We’ve made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we’ve rejected an option because we’ve considered it and decided it’s a bad idea, and when we simply haven’t thought about it at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political.

When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? How considerate of other people’s interests do we expect AIs to be? How do we avoid them being used in malicious or anti-social ways?

And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Despite all these problems, the rewards of success could be enormous. If cheap thinking machines can one day do most of the work people do now, it could dramatically raise everyone’s standard of living, like a second industrial revolution.

Without having to work just to survive, people might flourish in ways they never have before.

In today’s conversation we cover, among many other things:

  • What are the arguments against being concerned about AI?
  • Should we develop AIs to have their own ethical agenda?
  • What are the most urgent research questions in this area?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

What 80,000 Hours learned by anonymously interviewing people we respect

We recently released the fifteenth and final installment in our series of posts with anonymous answers.

These are from interviews with people whose work we respect and whose answers we offered to publish without attribution.

It features answers to 23 different questions, including “How have you seen talented people fail in their work?” and “What’s one way to be successful that you don’t think people talk about enough?”

We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they spanned a wide range of opinions.

For example, one person had seen talented people fail by being too jumpy:

“It seems particularly common in effective altruism for people to be happy to jump ship onto some new project that seems higher impact at the time. And I think that this tendency systematically underestimates the costs of switching, and systematically overestimates the benefits — so you get kind of a ‘grass is greener’ effect.

In general, I think, if you’re taking a job, you should be imagining that you’re going to do that job for several years. If you’re in a job, and you’re not hating it, it’s going pretty well — and some new opportunity presents itself, I think you should be extremely reticent to jump ship.

I think there are also a lot of gains from focusing on one activity or a particular set of activities;

Continue reading →

Anonymous answers: Are there myths you feel obliged to support publicly? And five other questions.

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the fifteenth and final in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, “You know what, she’s not so bad”.

Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history.

He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His next book will ask: if we reframe global problems as puzzles, would the world be a better place?

This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at a clever blog post that changes styles each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I suspect I find myself more entertaining than almost anyone else will. (Radical Honesty.)

We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.)

Another reason to listen is for the facts:

  • The Bayer aspirin company invented heroin as a cough suppressant
  • Coriander is just the British way of saying cilantro
  • Dogs have a third eyelid to protect the eyeball from irritants
  • and A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.)

One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.)

I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; Rob and the rest of the 80,000 Hours team for their help; the thousands of people who’ll listen to this; my fiancée who let me talk about her to those thousands of people; the construction worker who told me how to get to my subway platform on the morning of the interview; Queen Jadwiga for making bagels popular in the 14th century, which kept me going during the recording; and the folks at the New York reservoir whose work allows A.J.’s coffee to be made, without which he’d never have had the energy to talk to me for more than five minutes. (Thanks a Thousand.)

We also discuss:

  • The most extreme ideas A.J.’s ever considered
  • Respecting your older self
  • Blackmailing yourself
  • The experience of having his book made into a CBS sitcom
  • Talking to friends and family about effective altruism
  • Utilitarian movie reviews
  • The value of fiction focused on the long-term future
  • Doing good as a journalist
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#78 – Danny Hernandez on forecasting and measuring some of the most important drivers of AI progress

Companies use about 300,000 times more computation to train the best AI systems today than they did in 2012, and algorithmic innovations have also made them 25 times more efficient at the same tasks.

These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today’s episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy.
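To get a feel for that headline number, here’s a back-of-the-envelope calculation (our own illustration; the six-year window is an assumption, and the papers’ precisely measured doubling time may differ):

```python
import math

# Rough implication of a 300,000x increase in training compute
# (our back-of-the-envelope; the time window is an assumption).
growth_factor = 300_000
years = 6  # assumed span: roughly 2012 to when the measurements end

doublings = math.log2(growth_factor)          # ~18.2 doublings
months_per_doubling = years * 12 / doublings  # ~4 months each

print(f"{doublings:.1f} doublings -> one every {months_per_doubling:.1f} months")
```

That’s a far faster pace than Moore’s law, which historically doubled transistor counts only every two years or so.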

Danny and I talk about how to understand his team’s results and what they mean (and don’t mean) for how we should think about progress in AI going forward.

Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field.

If this research sounds appealing, you might be interested in applying to join OpenAI’s Foresight team — they’re currently hiring research engineers.

In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including:

  • The question of which experts to believe
  • Danny’s journey to working at OpenAI
  • The usefulness of “decision boundaries”
  • The importance of Moore’s law for people who care about the long-term future
  • What OpenAI’s Foresight Team’s findings might imply for policy
  • The question of whether progress in the performance of AI systems is linear
  • The safety teams at OpenAI and who they’re looking to hire
  • One idea for finding someone to guide your learning
  • The importance of hardware expertise for making a positive impact

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#77 – Marc Lipsitch on whether we're winning or losing against COVID-19

In March Professor Marc Lipsitch — director of Harvard’s Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, his social media following growing 40-fold and journalists knocking down his door, as everyone turned to him for information they could trust.

Here he lays out where the fight against COVID-19 stands today, why he’s open to deliberately giving people COVID-19 to speed up vaccine development, and how we could do better next time.

As Marc tells us, island nations like Taiwan and New Zealand are successfully suppressing SARS-CoV-2. But everyone else is struggling.

Even Singapore, with plenty of warning and one of the best test and trace systems in the world, lost control of the virus in mid-April after successfully holding back the tide for 2 months.

This doesn’t bode well for how the US or Europe will cope as they ease their lockdowns. It also suggests it would have been exceedingly hard for China to stop the virus before it spread overseas.

But sadly, there’s no easy way out.

The original estimates of COVID-19’s infection fatality rate, of 0.5-1%, have turned out to be basically right. And the latest serology surveys indicate only 5-10% of people in countries like the US, UK and Spain have been infected so far, leaving us far short of herd immunity. To get there, even these worst affected countries would need to endure something like ten times the number of deaths they have so far.
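The rough arithmetic behind that ‘ten times’ figure looks something like this (our own back-of-the-envelope; the reproduction number and serology figure below are illustrative assumptions, not Marc’s exact numbers):

```python
# Back-of-the-envelope for the "ten times the deaths" claim
# (illustrative assumptions only).
r0 = 2.5                      # assumed basic reproduction number
herd_threshold = 1 - 1 / r0   # classic herd immunity threshold: ~60%
infected_so_far = 0.06        # mid-range of the 5-10% serology surveys

multiplier = herd_threshold / infected_so_far
print(f"Herd immunity at ~{herd_threshold:.0%} infected implies roughly "
      f"{multiplier:.0f}x the infections, and hence deaths, seen so far")
```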

Marc has one good piece of news: research suggests that most of those who get infected do indeed develop immunity, for a while at least.

To escape the COVID-19 trap sooner rather than later, Marc recommends we go hard on all the familiar options — vaccines, antivirals, and mass testing — but also open our minds to creative options we’ve so far left on the shelf.

Despite the importance of his work, even now the training and grant programs that produced the community of experts Marc is part of are shrinking. We look at a new article he’s written about how to instead build and improve the field of epidemiology, so humanity can respond faster and smarter next time we face a disease that could kill millions and cost tens of trillions of dollars.

We also cover:

  • How listeners might contribute as future contagious disease experts, or donors to current projects
  • How we can learn from cross-country comparisons
  • Modelling that has gone wrong in an instructive way
  • What governments should stop doing
  • How people can figure out who to trust, and who has been most on the mark this time
  • Why Marc supports infecting people with COVID-19 to speed up the development of a vaccine
  • How we can ensure there’s population-level surveillance early during the next pandemic
  • Whether people from other fields trying to help with COVID-19 have done more good than harm
  • Whether it’s experts in diseases, or experts in forecasting, who produce better disease forecasts

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#76 – Tara Kirk Sell on COVID-19 misinformation, who’s over- and under-performed, and what we can reopen first

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we’re especially thankful for is the Johns Hopkins Center for Health Security (CHS).

CHS focuses on protecting us from major biological, chemical or nuclear disasters, through research that informs governments around the world. While this pandemic surprised many, just last October the Center ran a simulation of a ‘new coronavirus’ scenario to identify weaknesses in our ability to quickly respond. Their expertise has given them a key role in figuring out how to fight COVID-19.

Today’s guest, Dr Tara Kirk Sell, did her PhD in policy and communication during disease outbreaks, and has worked at CHS for 11 years on a range of important projects.

Last year she was a leader on Collective Intelligence for Disease Prediction, designed to sound the alarm about upcoming pandemics before others are paying attention. Incredibly, the project almost closed in December, with COVID-19 just starting to spread around the world — but received new funding that allowed it to respond quickly to the emerging disease.

She also contributed to a recent report attempting to explain the risks of specific types of activities resuming when COVID-19 lockdowns end.

It’s not possible to reach zero risk — so differentiating activities on a spectrum is crucial. Choosing wisely can help us lead more normal lives without reviving the pandemic.

Dance clubs will have to stay closed, but hairdressers can adapt to minimise transmission, and Tara (who happens to also be an Olympic silver medalist swimmer) suggests outdoor non-contact sports could resume soon at little risk.

Her latest work deals with the challenge of misinformation during disease outbreaks.

Analysing the Ebola communication crisis of 2014, she and her colleagues found that even trained coders with public health expertise sometimes needed help to distinguish between true and misleading tweets — showing the danger of a continued lack of definitive information surrounding a virus and how it’s transmitted.

The challenge for governments is not simple. If they acknowledge how much they don’t know, people may look elsewhere for guidance. But if they pretend to know things they don’t, or actively mislead the public, the result can be a huge loss of trust.

Despite their intense focus on COVID-19, researchers at the Center for Health Security know that this is not a one-time event. Many aspects of our collective response this time around have been alarmingly poor, and it won’t be long before Tara and her colleagues need to turn their mind to next time.

You can now donate to CHS through Effective Altruism Funds. Donations made through EA Funds are tax-deductible in the US, the UK, and the Netherlands.

Tara and Rob also discuss:

  • Who has over- and under-performed expectations during COVID-19?
  • When are people right to mistrust authorities?
  • The media’s responsibility to be right
  • What policies should be prioritised for next time
  • Should we prepare for future pandemics while COVID-19 is still ongoing?
  • The importance of keeping non-COVID health problems in mind
  • The psychological difference between staying home voluntarily and being forced to
  • Mistakes that we in the general public might be making
  • Emerging technologies with the potential to reduce global catastrophic biological risks

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on the most plausible paths for them, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths.

I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford’s Global Priorities Institute, and these days I’m 80,000 Hours’ Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers.

We all thought it would be useful to discuss some of those on the show for others to hear. Among other topics we cover:

  • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you’re already accomplishing.
  • Why traditional careers advice involves thinking through what types of roles you enjoy, and then which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in.
  • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it’s wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations.
  • Our suggested process for writing a longer-term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties.
  • Why many listeners aren’t spending enough time finding out about what the day-to-day work is like in paths they’re considering, or reaching out to people for advice or opportunities.

I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it.

If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people:

  1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address.
  2. Who don’t yet have close connections with people working at effective altruist organisations.
  3. Who aren’t strongly locationally constrained.

If you’re unsure, it doesn’t take long to apply and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds.

Want to talk to one of our advisors?

We speak to hundreds of people each year and can offer introductions and answer specific questions you might have. You can join the waitlist here:

Request a career advising session

Also in this episode:

  • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with.
  • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path.
  • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob.

It continues to be awe-inspiring to me how many people I talk to are donating to save lives, making dietary changes to avoid intolerable suffering, and carefully planning their lives to improve the future trajectory of the world. I hope we can continue to support each other in doing those things, and appreciate how important all this work is.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Policy and research ideas to reduce existential risk

In his book The Precipice: Existential Risk and the Future of Humanity, 80,000 Hours trustee Dr Toby Ord suggests a range of research and practical projects that governments could fund to reduce the risk of a global catastrophe that could permanently limit humanity’s prospects.

He compiles over 50 of these in an appendix, which we’ve reproduced below. You may not be convinced by all of these ideas, but they help to give a sense of the breadth of plausible longtermist projects available in policy, science, universities and business.

There are many existential risks and they can be tackled in different ways, which makes it likely that great opportunities are out there waiting to be identified.

Many of these proposals are discussed in the body of The Precipice. We’ve got a three-hour interview with Toby you could listen to, or you can get a copy of the book mailed to you for free by joining our newsletter:

Policy and research recommendations
Engineered Pandemics

  • Bring the Biological Weapons Convention into line with the Chemical Weapons Convention: taking its budget from $1.4 million up to $80 million, increasing its staff commensurately, and granting it the power to investigate suspected breaches.
  • Strengthen the WHO’s ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.

Continue reading →

Anonymous contributors answer: How should the effective altruism community think about diversity?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

But it’s the fourteenth in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise.

The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours.

Today’s guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University’s Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future.

COVID-19 is a vivid reminder that we are vulnerable to biological threats and underprepared to deal with them. We have been unable to suppress the spread of COVID-19 around the world and, tragically, global deaths will at least be in the hundreds of thousands.

How would we cope with a virus that was even more contagious and even more deadly? Greg’s work focuses on these risks — of outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity.

If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature.

There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar.

This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion.

COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources.

But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves.

Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel.

80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible.

In this episode, Howie and Greg cover:

  • Reflections on the first few months of the pandemic
  • Common confusions around COVID-19
  • How COVID-19 compares to other diseases
  • What types of interventions have been available to policymakers
  • Arguments for and against working on global catastrophic biological risks (GCBRs)
  • Why state actors would even use or develop biological weapons
  • How to know if you’re a good fit to work on GCBRs
  • The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type “80,000 Hours” into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

What programmes will 80,000 Hours provide (and not provide) within the effective altruism community?

There are many career services that would be useful to the effective altruism community, and unfortunately 80,000 Hours is not able to provide them all.

In this post, I aim to sum up what we intend to provide and what we can’t, to make it easier for other groups to fill these gaps.

80,000 Hours’ online content also serves as one of the most common ways people get introduced to the effective altruism community, but we’re not the ideal introduction for many types of people — I list these in the section on online articles.

You can see our full plans in our annual review.

Target audience

Our aim is to do the most we can to fill the key skill gaps in the world’s most pressing problems. We think that is the best way we can help to improve the lives of others over the long term.

We think that the best way to do this – given our small team – is to focus initially on a single target audience, and gradually expand that audience over time.

Given this, most of our effort (say 50%+) goes into advice and support for English-speaking people aged 20-35 who might be able to enter one of our current priority paths.

We also aim to put ~30% of our effort into other ways of addressing our priority problems (AI,

Continue reading →

200+ opportunities to work on COVID-19, and 60+ places to get funding

Below is a list of opportunities to help the global response to COVID-19. The list focuses on opportunities in research, policy, technology and startups, especially in the US and UK, and includes jobs, volunteering opportunities, and opportunities to receive funding. It accompanies our article on how to volunteer to help tackle the crisis most effectively.

Continue reading →

80,000 Hours Annual Review – December 2019


We’ve released our 2019 annual review here.

It summarises our annual impact evaluation, and outlines our progress, plans, mistakes and fundraising needs.

The document was initially prepared in November 2019. We delayed its release until we heard back from some of our largest donors, so that other stakeholders would be fully informed about our funding situation before we asked for their support. Most claims should be read as being made “as of November 2019.”

We include:

If you would like to go into more detail, we also provide the following optional sections:

You can find our previous evaluations here.

Continue reading →

Good news about COVID-19

Many of us feel depressed about the COVID-19 situation, and it is without doubt a horrible tragedy.

But millions of people are rising to the occasion, and there’s a lot of good news mixed in with the bad.

The media has a tendency to give extra coverage to bad news, because readers find negative stories more eye-catching.

So in the interest of balance, here are some positive things we’ve learned in the last week while writing articles on how to tackle the coronavirus crisis through temporary work, donations, or policy, and compiling over 250 job opportunities and 60 funding sources.

Some countries are turning COVID-19 away at the door, while others are turning the tide of the pandemic

As you can see in this chart, COVID-19 remains mostly controlled in South Korea, Taiwan and Singapore. Taiwan is barely visible down there at the bottom, while Singapore actually hasn’t had enough deaths to make it onto the figure yet.

Once they emerge from their ‘lockdowns’, other places can potentially copy the methods which these three countries have shown can work.

COVID-19 may also be controlled in Hong Kong, Japan and China, which are reporting few new cases. (Unfortunately Hong Kong and Japan aren’t testing enough people to be sure, and China doesn’t say how many tests it’s running, so we’ll have to wait and see.)

As the figure below shows,

Continue reading →

Essential Facts and Figures – COVID-19

This page aims to summarise our understanding of the current science on key questions about COVID-19 (as of 3 April 2020), as best we can given the state of the evidence and the fast-moving situation. We provide more explanation as well as sources in the footnotes.

Symptoms and severity

  • The most common reported symptoms are cough (appearing in about 80% of confirmed cases – meaning those who have been tested and found to be infected with the virus) and fever (80%-90%). Many also experience shortness of breath, usually later in the disease progression. Diarrhea and other GI symptoms have also been seen in some patients. Nasal congestion and runny nose seem uncommon (<5%). Anecdotally, loss of the sense of taste or smell has also been reported.
  • Once someone is infected, it seems to typically take ~7 days for symptoms to develop. One study with a large sample size found that for 11.5% of confirmed cases it took more than 14 days.
  • According to initial data from China, around 81% of confirmed cases are ‘mild’ (though these can still involve pneumonia), 14% are severe (requiring hospitalisation), and 5% are critical. A large proportion of people infected with the virus have mild symptoms, and around 20% may have no symptoms at all, though there is not yet reliable data on this.
  • Most current estimates of the fraction of infected people (rather than people with confirmed cases) who die from the disease (the ‘IFR’) seem to be between 0.1% and 2%.

Continue reading →

Options for donating to fight COVID-19

Many people have been asking about where they can donate to fight COVID-19, so we asked a couple of advisors for their initial thoughts on which opportunities could be especially high-leverage.

We haven’t evaluated how these compare to donation opportunities in other areas, but if you are keen to donate specifically to COVID-19-related work then read on.

1. Johns Hopkins Center for Health Security

Personally, I would donate to the Center for Health Security at Johns Hopkins (CHS), which researches biosecurity and advocates for better policy. It takes donations here, or you can donate through the Effective Altruism Funds.

  • They’ve been one of the most influential sources of information and analysis for helping inform policymakers’ response to the crisis, for instance releasing influential situation reports at least once a day since January 22nd.
  • Getting the policy response right seems like a crucial lever in navigating the crisis, and requires comparatively little funding.
  • They had a good track record of work on pandemic preparedness before the crisis, and received a large grant from Open Philanthropy in 2019.
  • My best guess is that if the CHS has urgent funding needs during the crisis, those needs will be met by other donors, especially Open Philanthropy. However, the Center’s budget is large, so in the longer term I expect it could make productive use of additional funding,

Continue reading →

If you want to help the world tackle COVID-19, what should you do?

To tackle the COVID-19 crisis, there are five main things we need to do:

  1. Research to understand the disease and to develop new treatments and a vaccine.
  2. Determine the right policies, both for public health and the economic response.
  3. Increase healthcare capacity, especially for testing, ventilators, personal protective equipment, and critical care.
  4. Slow the spread through testing & isolating cases, as well as mass advocacy to promote social distancing and other key behaviours, buying us more time to do the above.
  5. Keep society functioning through the progression of the pandemic.

Everyone can help stem the spread of COVID-19 by practising proper hygiene and staying at home whenever possible. But if you want to do more, what can you do that’s most effective?

To maximise your impact, aim to identify a high-leverage opportunity to contribute to one of these bottlenecks that’s a good fit for your skills.

In this article, we’ll discuss some opportunities to work within each of these five categories, and some rules of thumb to work out which might be highest-impact for you, drawing from the rest of our research on high-impact careers. We also provide a long list of specific projects we’ve seen proposed.

We cover where to donate in a separate article on donation opportunities to fight COVID-19.

Continue reading →

The coronavirus crisis and our new review of how to prevent the worst possible pandemics

At the time of this writing, COVID-19 — a flu-like respiratory disease causing fever and pneumonia — has killed over 11,000 people and has likely infected over 2 million. The growth in new cases is exponential, although cases are slowing substantially in places where strict containment measures have been instituted.

Cities are shutting down around the world. Although it is very hard to predict what will happen, it seems likely this outbreak will end up being among the worst economic and humanitarian disasters of the last 100 years.

Yesterday we put out a detailed interview and set of 70 links covering what both individuals and governments can do to fight the coronavirus crisis.

We will be producing plenty more on this topic and it will all be posted on our COVID-19 landing page.

COVID-19 is proof that a global pandemic can happen in the 21st century. It has also shown how underprepared we are as a world to coordinate with one another and deal with disasters like these.

Unfortunately, it’s possible for things to get much worse than COVID-19.

From the perspective of preventing threats to the long term future of humanity, preventing global catastrophic biological risks (GCBRs) is especially important. GCBRs are risks from biological agents that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy.

GCBRs seem much more likely to arise from engineered pandemics than natural ones.

Continue reading →