#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise.

The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours.

Today’s guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University’s Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future.

COVID-19 is a vivid reminder that we are vulnerable to biological threats and underprepared to deal with them. We have been unable to suppress the spread of COVID-19 around the world and, tragically, global deaths will at least be in the hundreds of thousands.

How would we cope with a virus that was even more contagious and even more deadly? Greg’s work focuses on these risks — of outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity.

If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature.

There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar.

This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion.

COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources.

But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves.

Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel.

80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible.

In this episode, Howie and Greg cover:

  • Reflections on the first few months of the pandemic
  • Common confusions around COVID-19
  • How COVID-19 compares to other diseases
  • What types of interventions have been available to policymakers
  • Arguments for and against working on global catastrophic biological risks (GCBRs)
  • Why state actors would even develop or use biological weapons
  • How to know if you’re a good fit to work on GCBRs
  • The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type “80,000 Hours” into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

Hours ago, from home isolation, Rob and Howie recorded an episode on:

  1. How many could die in the coronavirus crisis, and the risk to your health personally.
  2. What individuals might be able to do.
  3. What we suspect governments should do.
  4. The properties of the SARS-CoV-2 virus, the importance of not contributing to its spread, and how you can reduce your chance of catching it.
  5. The ways some societies have screwed up, which countries have been doing better than others, how we can avoid this happening again, and why we’re optimistic.

We’ve rushed this episode out, accepting a higher risk of errors, in order to share information as quickly as possible about a very fast-moving situation.

We’ve compiled 70 links below to projects you could get involved with, as well as academic papers and other resources to understand the situation and what’s needed to fix it.

A rough transcript is also available.

Please also see our ‘problem profile’ on global catastrophic biological risks for information on these grave risks and how you can contribute to preventing them.

For more see the COVID-19 landing page on our site. You can also keep up to date by following Rob and 80,000 Hours’ Twitter feeds.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris.

Continue reading →

#73 – Phil Trammell on patient philanthropy and waiting to do good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that’s all wrong.

If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you’d have $125,000 to give away instead. And in 200 years you’d have $17 million.
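
As a rough check of that arithmetic, here's a minimal sketch of constant-rate compounding in Python. The constant 5% rate is a simplifying assumption; real returns vary year to year, which is presumably why the quoted figures differ slightly from the straight-compounding numbers:

```python
# Patient philanthropy arithmetic: $1,000 compounding at a constant
# 5% per year (a simplifying assumption; real returns fluctuate).
principal = 1_000
rate = 0.05

for years in (100, 200):
    value = principal * (1 + rate) ** years
    print(f"After {years} years: ${value:,.0f}")

# After 100 years: $131,501
# After 200 years: $17,292,581
```

At the same rate, each additional century multiplies the pot by another factor of roughly 130.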

This astonishing fact has driven today’s guest, economics researcher Philip Trammell at Oxford’s Global Priorities Institute, to investigate the case for and against so-called ‘patient philanthropy’ in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.

He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they’ll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn’t have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn’t even know about germs, and almost nothing in medicine was justified by science.

What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways?

And there’s a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It’s possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.

Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse?

Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended?

Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarships’ initial charter, which limited them to ‘white Christian men’.

Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.

Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today’s conversation with researcher Phil Trammell and my 80,000 Hours colleague Howie Lempel, we try to answer that, and also discuss:

  • Real attempts at patient philanthropy in history and how they worked out
  • Should we have a mixed strategy, where some altruists are patient and others impatient?
  • Which causes are most likely to need money now, and which later?
  • What is the research frontier on this question of global prioritisation?
  • What does this all mean for what listeners should do differently?

COVID-19

Finally, note that we recorded this podcast before the appearance of COVID-19. And as we discuss, Phil makes the case that patient philanthropists should wait for moments in history when patient philanthropic resources can do the most good. Could the coronavirus crisis be one of those important historical episodes during which Phil would argue that even patient philanthropists should ramp up their spending?

We’ve spoken with him more recently, and he says that this strikes him as unlikely. The virus is certainly doing widespread damage, but most of this damage is expected to accrue in the next few years at most. As a result, this is the sort of crisis that governments and impatient philanthropists are happy to spend on (to the extent that spending can help at all).

On Phil’s view, therefore, patient philanthropists are still best advised to wait i) until they’re rich enough to better address, or fund more substantial preparation for, similar future crises, or ii) until we face crises with unusually long-lasting impacts, not just unusually severe ones.

If this is right, COVID-19 just serves as an example of the many temptations to spend in the present that patient philanthropists will have to resist, in order to reap the benefits that can come from waiting to do good.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#72 – Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and advisor to 80,000 Hours Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It’s about how our long-term future could be better than almost anyone believes, but also how humanity’s recklessness is putting that future at grave risk: in Toby’s reckoning, a 1 in 6 chance of extinction this century.

I loved the book and learned a great deal from it.

While preparing for this interview I copied out 87 facts that were surprising to me or seemed important. Here’s a sample of 16:

  1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined.
  2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s.
  3. In 2008 a ‘gamma ray burst’ reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren’t sure what generates gamma ray bursts but one cause may be two neutron stars colliding.
  4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped pursuing the Bomb.
  5. If we eventually burn all the fossil fuels we’re confident we can access, the leading Earth-system models suggest we’d experience 9–13°C of warming by 2300, an absolutely catastrophic increase.
  6. In 1939, the renowned nuclear scientist Enrico Fermi told colleagues that a nuclear chain reaction was but a ‘remote possibility’. Four years later Fermi himself was personally overseeing the world’s first nuclear reactor. Wilbur Wright predicted heavier-than-air flight was at least fifty years away — just two years before he himself invented it.
  7. The Japanese bioweapons programme in the Second World War — which included using bubonic plague against China — was directly inspired by an anti-bioweapons treaty. The reasoning ran that if Western powers felt the need to outlaw their use, these weapons must be especially good to have.
  8. In the early 20th century the Spanish Flu killed 3-6% of the world’s population. In the 14th century the Black Death killed 25-50% of Europeans. But that’s not the worst pandemic to date: that’s the passage of European diseases to the Americas, which may have killed as much as 90% of the local population.
  9. A recent paper estimated that even if honeybees were completely lost — and all other pollinators too — this would only create a 3 to 8 percent reduction in global crop production.
  10. In 2007, foot-and-mouth disease, a high-risk pathogen that can only be studied in labs following the top level of biosecurity, escaped from a research facility leading to an outbreak in the UK. An investigation found that the virus had escaped from a badly-maintained pipe. After repairs, the lab’s licence was renewed — only for another leak to occur two weeks later.
  11. Toby estimates that great power wars effectively pose more than a percentage point of existential risk over the next century. This makes it a much larger contributor to total existential risk than all the natural risks like asteroids and volcanoes combined.
  12. During the Cuban Missile Crisis, Kennedy and Khrushchev found it so hard to communicate, and the long delays so dangerous, that they established the ‘red telephone’ system so they could write to one another directly, and better avoid future crises coming so close to the brink.
  13. A US Airman claims that during a nuclear false alarm in 1962 that he himself witnessed, two airmen from one launch site were ordered to run through the underground tunnel to the launch site of another missile, with orders to shoot a lieutenant if he continued to refuse to abort the launch of his missile.
  14. In 2014 GlaxoSmithKline accidentally released 45 litres of concentrated polio virus into a river in Belgium. In 2004, SARS escaped from the National Institute of Virology in Beijing. In 2005 at the University of Medicine and Dentistry in New Jersey, three mice infected with bubonic plague went missing from the lab and were never found.
  15. The Soviet Union covered 22 million square kilometres, 16% of the world’s land area. At its height, during the reign of Genghis Khan’s grandson, Kublai Khan, the Mongol Empire had a population of 100 million, around 25% of the world’s population at the time.
  16. All the methods we’ve come up with for deflecting asteroids wouldn’t work on one big enough to cause human extinction.

And here are fifty-one ideas for reducing existential risk from the book.

While I’ve been studying this topic for a long time, and have known Toby for eight years, a remarkable amount of what’s in the book was new to me.

Of course the book isn’t a series of isolated amusing facts, but rather a systematic review of the many ways humanity’s future could go better or worse, how we might know about them, and what might be done to improve the odds.

And that’s how we approach this conversation, first talking about each of the main risks, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which my colleague Arden Koehler and I barely even had to work for.

For those wondering about pandemics just now, this extract about diseases like COVID-19 was the most read article in The Guardian USA on the day the book was launched.

Some topics Arden and I bring up:

  • What Toby changed his mind about while writing the book
  • Asteroids, comets, supervolcanoes, and threats from space
  • Why natural and anthropogenic risks should be treated so differently
  • Are people exaggerating when they say that climate change could actually end civilisation?
  • What can we learn from historical pandemics?
  • How to estimate likelihood of nuclear war
  • Toby’s estimate of unaligned AI causing human extinction in the next century
  • Is this century the most important time in human history, or is that a narcissistic delusion?
  • Competing visions for humanity’s ideal future
  • And more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#71 – Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible.

Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift.

All of us added something to it, but the single biggest contributor was our CEO and today’s guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012.

This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we’ve discovered since we started investigating high impact careers.

But it’s perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words.

Fortunately it’s designed to be highly modular and it’s easy to work through it over multiple sessions, scanning over the articles it links to on each topic.

Perhaps though, you’d prefer to absorb our most essential ideas in conversation form, in which case this episode is for you.

If you want to have a big impact with your career, and you say you’re only going to read one article from us, we recommend you read our key ideas page.

And likewise, if you’re only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through:

  • The most common misunderstandings of our advice
  • A high level overview of what 80,000 Hours generally recommends
  • Our key moral positions
  • What are the most pressing problems to work on and why?
  • Which careers effectively contribute to solving these problems?
  • Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration
  • As well as plenty more.

One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we’re least sure about, or didn’t yet cover within the article.

Note though that what’s in the article is more precisely stated, our advice is going to keep shifting, and we’re aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page!

Update: As of Sept 2021, you can now see this more recent introduction to the key ideas of 80,000 Hours and our story on the Superdatascience podcast, which is especially good for people with STEM backgrounds. You can also see another introduction on Clearer Thinking, which is a bit more in-depth.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Arden & Rob on demandingness, work-life balance and injustice (80k team chat #1)

Today’s bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.

Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include:

  • If you’re not going to be completely moral, should you try being a bit more moral or give up?
  • Should you feel angry if you see an injustice, and if so, why?
  • How much should we ask people to live frugally?

So far the feedback on the post-episode chats that we’ve done has been positive, so we thought we’d go ahead and try out this freestanding one. But fair warning: it’s among the more difficult episodes to follow, and probably not the best one to listen to first, as you’ll benefit from having more context!

If you’d like to listen to more of Arden, you can find her in episode 67 — David Chalmers on the nature and ethics of consciousness, or episode 66 — Peter Singer on being provocative, effective altruism & how his moral views have changed.

And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I’ve read it and very much enjoyed it. Find out where you can pre-order it here. We’ll have an interview with him coming up soon.

Continue reading →

#70 – Dr Cassidy Nelson on the twelve best ways to stop the next pandemic (and limit COVID-19)

COVID-19 (previously known as nCoV) is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places.

But bad though it is, it’s much closer to a warning shot than a worst case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both.

Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can’t do enough to reduce their spread. And we lack vaccines or drug treatments for at least a year, if they ever arrive at all.

This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like.

In today’s episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University’s Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we’re to keep the risk at acceptable levels. The ideas are:

Science

1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go.
2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes.
3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria.

Response

4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely.
5. Rigorously evaluate in what situations travel bans are warranted. (They’re more often counterproductive.)
6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible.
7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms.
8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out.

Oversight

9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens.
10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen.
11. Require full cost-benefit analysis of ‘dual-use’ research projects that can generate global risks.

12. And finally, to maintain momentum, it’s necessary to clearly assign responsibility for the above to particular individuals and organisations.

Very simply, there are multiple cutting edge technologies and policies that offer the promise of detecting new diseases right away, and delivering us effective treatments in weeks rather than years. All of them can use additional funding and talent.

At the same time, health systems around the world also need to develop pandemic response plans — something few have done — so they don’t have to figure everything out on the fly.

For example, if we don’t have good treatments for a disease, at what point do we stop telling people to come into hospital, where there’s a particularly high risk of them infecting the most medically vulnerable people? And if borders are shut down, how will we get enough antibiotics or facemasks, when they’re almost all imported?

Separately, we need some way to stop bad actors from accessing the tools necessary to weaponise a viral disease, before they cost less than $1,000 and fit on a desk.

These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem.

In the episode Rob and Cassidy also talk about:

  • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential
  • The pros, and significant cons, of travel restrictions
  • Whether the same policies work for natural and anthropogenic pandemics
  • Where we stand with nCoV as of today.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Transcriptions: Zakee Ulhaq.

Continue reading →

#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

The State Council of China’s 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard?

In his paper ‘Deciphering China’s AI Dream’ today’s guest, PhD student Jeff Ding, outlines why he believes none of these claims are true.

He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development?

Jeff emphasises that China’s AI strategy did not appear out of nowhere with the 2017 State Council AI development plan, which attracted a lot of overseas attention. Rather, that was just another step forward in a long trajectory of increasing focus on science and technology. It’s connected with a plan to develop an ‘Internet of Things’, and linked to a history of strategic planning for technology in areas like aerospace and biotechnology.

And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse.

What are the different levers that China is pulling to try to spur AI development?

Here, Jeff wanted to challenge the myth that China’s AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government.

Are China’s AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China’s progress in AI.

Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China’s AI capabilities have surpassed those of the US or made it the world’s leading AI power.

Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we’d need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016.

Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance.

He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues.

In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover:

  • The best analogies for thinking about the growing influence of AI
  • How do prominent Chinese figures think about AI?
  • Cultural cliches in the West and China
  • Coordination with China on AI
  • Private companies vs. government research
  • How are things going to play out with ‘compute’?
  • China’s social credit system
  • The relationship between China and other countries beyond AI
  • Suggestions for people who want to become professional China specialists
  • And more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Bonus episode: What we do and don't know about the 2019-nCoV coronavirus

UPDATE: Please also see our COVID-19 landing page for many more up-to-date articles about the pandemic.

Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, just recorded a discussion about the 2019-nCoV virus.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.

In the 1h15m conversation we cover:

  • What is it?
  • How many people have it?
  • How contagious is it?
  • What fraction of people who contract it die?
  • How likely is it to spread out of control?
  • What’s the range of plausible fatalities worldwide?
  • How does it compare to other epidemics?
  • What don’t we know and why?
  • What actions should listeners take, if any?
  • How should the complexities of the above be communicated by public health professionals?

Below are the categories of links we discuss in the episode, or otherwise think are informative:

  • Advice on how to avoid catching contagious diseases
  • Forecasts
  • General summaries of what’s going on
  • Our previous episodes about pandemic control
  • Thoughts on how to communicate risk to the public
  • Official updates
  • Published papers
  • General advice on disaster preparedness
  • Tweets mentioned

Continue reading →

#68 – Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

You’re given a box with a set of dice in it. If you roll an even number, a person’s life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

A committed consequentialist might say, “Sure! Free money!” But most will think it obvious that you should say no. You’ve only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die.

And yet, according to today’s return guest, philosophy Professor Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children: one conception event per 15,000 days of life, on average. So if you’ve impacted at least 7,500 person-days, then, statistically speaking, you’ve probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you’ve changed the identity of a future person.
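
Here’s that back-of-the-envelope arithmetic as a minimal sketch, using the blurb’s own figures (all of them illustrative assumptions):

```python
# Rough arithmetic behind the identity-change claim, using the
# figures quoted above (illustrative assumptions, not precise data).
days_per_life = 30_000        # average human life in days
conceptions_per_life = 2      # average children per person
person_days_affected = 7_500  # schedule-days your trip perturbed

# Two conceptions per 30,000 days is one per 15,000 person-days,
# so the expected number of conception events you've re-timed is:
expected = person_days_affected * conceptions_per_life / days_per_life
print(f"Expected conception events affected: {expected}")  # 0.5
```

Half an expected event is roughly a coin flip that you’ve changed at least one future person’s identity, and the odds climb quickly as you perturb more person-days.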

That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further conception events, and so on. Thanks to these ripple effects, after 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the ‘new’ people will cause car crashes that wouldn’t have occurred in their absence, including crashes that prematurely kill people alive today.

Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie (worth $10). Should you do it?

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.

To see how it implies inaction as an ideal, recall the distinction between consequentialism and non-consequentialism. For consequentialists, who just add up the net consequences of everything, there’s no problem here. The benefits and costs perfectly cancel out, and you get to see a free movie.

But most ‘non-consequentialists’ endorse an act/omission distinction: it’s worse to knowingly cause a harm than it is to merely allow a harm to occur. And they further believe harms and benefits are asymmetric: it’s more wrong to hurt someone a given amount than it is right to benefit someone else an equal amount.

So, in this example, the fact that your actions caused X deaths should be given more moral weight than the fact that you also saved X lives.

It’s because of this that the non-consequentialist feels they shouldn’t roll the dice just to gain $10. But as we can see above, if they’re being consistent, rather than leave the house, they’re obligated to do whatever would count as an ‘inaction’, in order to avoid the moral responsibility of foreseeably causing people’s deaths.

Will’s best idea for resolving this strange implication? In this episode we discuss a few options:

  • give up on the benefit/harm asymmetry
  • find a definition of ‘action’ under which leaving the house counts as an inaction
  • accept a ‘Pareto principle’, where actions can’t be wrong so long as everyone affected would approve or be indifferent to them before the fact.

Will is most optimistic about the last, but as we discuss, this would bring people a lot closer to full consequentialism than is immediately apparent.

Finally, a different escape — conveniently for Will, given his work — is to dedicate your life to improving the long-term future, and thereby do enough good to offset the apparent harms you’ll do every time you go for a drive. In this episode Rob and Will also cover:

  • Are we, or are we not, living at the most influential time in history?
  • The culture of the effective altruism community
  • Will’s new lower estimate of the risk of human extinction over the next hundred years
  • Why does AI stand out a bit less for Will now as a particularly pivotal technology?
  • How he’s getting feedback while writing his book
  • The differences between Americans and Brits
  • Does the act/omission distinction make sense?
  • The case for strong longtermism, and longtermism for risk-averse altruists
  • Caring about making a difference yourself vs. caring about good things happening
  • Why feeling guilty about characteristics you were born with is crazy
  • And plenty more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#67 – David Chalmers on the nature and ethics of consciousness

What is it like to be you right now? You’re seeing this text on the screen, you smell the coffee next to you, feel the warmth of the cup, and hear your housemates arguing about whether Home Alone was better than Home Alone 2: Lost in New York. There’s a lot going on in your head — your conscious experiences.

Now imagine beings that are identical to humans, except for one thing: they lack conscious experience. If you spill that coffee on them, they’ll jump like anyone else, but inside they’ll feel no pain and have no thoughts: the lights are off.

The concept of these so-called ‘philosophical zombies’ was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic ‘trolley problem’:

Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is greatly reduced, or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

He asks us to consider the ‘Vulcans’. If you’ve never seen Star Trek, Vulcans experience rich forms of cognitive and sensory consciousness; they see and hear and reflect on the world around them. But they’re incapable of experiencing pleasure or pain.

Does such a being lack moral status?

To answer this Dave invites us to imagine a further trolley problem: suppose you have a conscious human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he’s right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world’s top experts on the philosophy of consciousness. He helped return the question ‘what is consciousness?’ to the centre stage of philosophy with his 1996 book ‘The Conscious Mind’, which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four and a half hours long, outlines each contemporary answer to the mystery of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an ‘illusion’, to panpsychism, according to which it’s a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If accurate computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to interview about these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode and our personal favourite so far.

They discuss:

  • Why is there so little consensus among philosophers about so many key questions?
  • Can free will exist, even in a deterministic universe?
  • Might we be living in a simulation? Why is this worth talking about?
  • The hard problem of consciousness
  • Materialism, functionalism, idealism, illusionism, panpsychism, and other views about the nature of consciousness
  • The story of ‘integrated information theory’
  • What philosophers think of eating meat
  • Should we worry about AI becoming conscious, and therefore worthy of moral concern?
  • Should we expect to get to conscious AI well before we get human-level artificial general intelligence?
  • Could minds uploaded to a computer be conscious?
  • If you uploaded your mind, would that mind be ‘you’?
  • Why did Dave start thinking about the ‘singularity’?
  • Careers in academia
  • And whether a sense of humour is useful for research.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#66 – Peter Singer on provocative advocacy, EA, how his ethical views have changed, and drowning children

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually released way back in 1979. It took a German translation ten years on for protests to kick off.

According to Singer, he honestly didn’t expect this view to be as provocative as it became, and he certainly wasn’t aiming to stir up trouble and get attention.

But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times.

Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with a real ability to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one?

Singer’s book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free ebook and audiobook, read by a range of celebrities. Get it here.

Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences. But as Singer tells it, he gives public relations considerations plenty of thought.

One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump.

Another is the focus of the effective altruism (EA) community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he’s troubled by the possibility of extinction risks becoming the public face of the movement.

He suspects there’s a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns.

Rob is joined by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview itself and a post-episode discussion. They only had an hour with Peter, but also cover:

  • What does he think are the most plausible alternatives to consequentialism?
  • Is it more humane to eat wild caught animals than farmed animals?
  • The re-release of The Life You Can Save
  • Whether it’s good to polarise people in favour of and against your views
  • His active opposition to the Vietnam war and conscription
  • Should we make it easier for people to express unpopular opinions?
  • His most and least strategic career decisions
  • What does he think are the effective altruism community’s biggest mistakes?
  • Population ethics and arguments for and against prioritising the long-term future
  • What led to his changing his mind on significant questions in moral philosophy?
  • What is at the heart of making moral mistakes?
  • What should we do when we are morally uncertain?
  • And more.

In the post-episode discussion, Rob and Arden continue talking about:

  • The pros and cons of keeping EA as one big movement
  • Singer’s thoughts on immigration
  • And consequentialism with side constraints

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
Illustration of Singer: Matthias Seifarth.

Continue reading →

#65 – Amb. Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security.

Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. And in 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS).

But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation.

In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, as well as by other countries.

What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next.

Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the last.

Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda (GHSA) discussed at length in episode 27.

Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the National Commission on Terrorist Attacks Upon the United States (9/11 Commission). Bonnie was the lead staff member conducting research and interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11.

She’s also a retired Naval Reserves officer and received several awards for her service. Bonnie remembers the military fondly. She didn’t want that life 24 hours a day, which is why she never went full time. But she liked the rules, loved the camaraderie and remembers it as a time filled with laughter.

And as if that all weren’t curious enough, four years ago Bonnie decided to go vegan. We talk about her work so far as well as:

  • How listeners can start a career like hers
  • The history of Cooperative Threat Reduction work
  • Mistakes made by Mr Obama and Mr Trump
  • Biggest uncontrolled nuclear material threats today
  • Biggest security issues in the world today
  • The Biological Weapons Convention
  • Where does Bonnie disagree with her colleagues working on peace and security?
  • The implications for countries who give up WMDs
  • The fallout from a change in government
  • Networking, the value of attention, and being a vegan in DC
  • And the best 2020 Presidential candidates.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

#64 – Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny

November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.

November 3 2020, 11:46PM: The NY Times, Washington Post and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don’t see how they can figure it out.

What on Earth happens next?

Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result.

Unfortunately the US has no recovery system for a situation like this, unlike Parliamentary democracies, which can just rerun the election a few weeks later.

The constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker.

Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn’t fair.

Schneier thinks there’s a need to agree how this situation should be handled before something like it happens, and America falls into severe infighting as everyone tries to turn the situation to their political advantage.

And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits.

He likes the system in Minnesota: you get a paper ballot with ovals you fill in, which are then fed into a computerised reader. The computer reads the ballot, and the paper falls into a locked box that’s available for recounts. That gives you the speed of electronic voting, with the security of a paper ballot.

On the back-end, he wants risk-limiting audits that are automatically triggered based on the margin of victory. If there’s a large margin of victory, you need a small audit. For a small margin of victory, you need a large audit.
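
To make that inverse relationship concrete, here’s a minimal sketch of the expected sample size for a ballot-polling risk-limiting audit, based on the average-sample-number approximation from the BRAVO audit literature. The formula and the 10% risk limit here are illustrative assumptions, not a proposal from Schneier:

```python
import math

def bravo_expected_sample(winner_share: float, risk_limit: float = 0.1) -> float:
    """Approximate number of ballots a ballot-polling risk-limiting
    audit expects to sample before confirming the reported winner
    (Wald SPRT average sample number, as used in BRAVO-style audits)."""
    p = winner_share  # reported vote share of the winner (> 0.5)
    # Average evidence per sampled ballot against a tied-race null.
    per_ballot = p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))
    return math.log(1 / risk_limit) / per_ballot

for share in (0.51, 0.55, 0.60, 0.70):
    print(f"Winner share {share:.0%}: ~{bravo_expected_sample(share):,.0f} ballots")
```

A 2% margin calls for sampling on the order of ten thousand ballots, while a 40% margin needs only a few dozen: exactly the ‘small margin, large audit’ rule above.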

Those two things would do an enormous amount to improve voting security, and we should move to that as soon as possible.

According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they’re designing, because they have a bureaucrat’s rather than a hacker’s mindset.

The ideal computer security expert walks into a shop and thinks, “You know, here’s how I would shoplift.” They automatically see where the cameras are, whether there are alarms, and where the security guards aren’t watching.

In this impassioned episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn’t have access to them.

We also cover:

  • How can we have surveillance of dangerous actors, without falling back into authoritarianism?
  • When, if ever, should information about weaknesses in society’s security be kept secret?
  • How secure are nuclear weapons systems around the world?
  • How worried should we be about deep-fakes?
  • The similarities between hacking computers and hacking our biology in the future
  • Schneier’s critiques of blockchain technology
  • How technologists could be vital in shaping policy
  • What are the most consequential computer security problems today?
  • Could a career in information security be very useful for reducing global catastrophic risks?
  • What are some of the most widely held but incorrect beliefs among computer security people?
  • And more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long term is really possible

Today on our podcast feed, we’re releasing some interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast.

To listen, subscribe to the 80,000 Hours Podcast by searching for 80,000 Hours wherever you get your podcasts, or find us on Apple Podcasts, Google Podcasts, Spotify or SoundCloud.

If you’ve listened to absolutely everything on our podcast feed, you’ll have heard four interviews with me already, but fortunately I think these two don’t include too much repetition, and I’ve gotten a decent amount of positive feedback on both. 

First up, I speak with David Kadavy on Love Your Work.

This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics. 

At some points we even actually discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar. 

The second interview is with Jeremiah Johnson on the Neoliberal Podcast. It starts at 2 hours and 15 minutes into this recording. 

Neoliberalism in the sense used by this show is not the free market fundamentalism you might associate with that term. Rather it’s a centrist or even centre-left view that supports things like social liberalism, multilateral international institutions, trade, high rates of migration, racial justice, inclusive institutions, financial redistribution, prioritising the global poor, market urbanism, and environmental sustainability. 

This is the more demanding of the two conversations, as listeners to that show have already heard of effective altruism, and so we were able to have Jeremiah offer the best arguments he could against focusing on improving the long-term future of the world.

Jeremiah is more of a fan of donating to evidence-backed global health charities recommended by GiveWell, and does so himself. 

I appreciate him having done his homework and forcing me to do my best to show that my views stand up to counterarguments. It was a challenge to paint the whole picture in the half hour we spent on longtermism, and I expect there are answers in there that will be fresh even for regular listeners.

I hope you enjoy both conversations! Feel free to email me with any feedback.

The 80,000 Hours Podcast is produced by Keiran Harris.

#63 – Vitalik Buterin on better ways to fund public goods, blockchain's failures, & effective giving

Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future?

Today’s guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptocurrency Bitcoin, which added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire.

At the same time, far from indulging hype about these so-called ‘blockchain’ technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, “blockchains as they currently exist are in many ways a joke, right?”

But Buterin is not just a realist. He’s also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals.

By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of ‘cryptoeconomics’. Economist Tyler Cowen has observed that “at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers — without knowing about the papers at all.”

Though its applications have faced major social and technical problems, Ethereum has been used to crowdsource investment for projects and enforce contracts without the need for a central authority. But the proposals for new ways of coordinating people are far more ambitious than that.

For instance, along with previous guest Glen Weyl, Vitalik has helped develop a model for so-called ‘quadratic funding’, which in principle could transform the provision of ‘public goods’. That is, goods that people benefit from whether they help pay for them or not.

Examples of goods that are fully or partially public goods include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world.

The innovative leap of the ‘quadratic funding’ formula is that individuals can in principle be given the incentive to voluntarily contribute amounts that together signal to a government how much society as a whole values a public good, how much should be spent on it, and where that funding should be directed.

But these and other related proposals face major hurdles. They’re vulnerable to collusion, might be used to fund scams, and have so far been tested only at a small scale. Not to mention that anything with a square root sign in it is going to struggle to achieve widespread societal legitimacy. Is the prize large enough to justify efforts to overcome these challenges?
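To make that square root concrete, here’s a minimal sketch of the basic quadratic funding formula (the contribution amounts are invented for illustration): each project receives the square of the sum of the square roots of its individual contributions, with a matching pool covering the gap above what contributors actually paid in.

```python
import math

def quadratic_funding(contributions):
    """Total funding and matching subsidy for one project under the basic
    quadratic funding formula: funding = (sum of sqrt(contributions))**2.
    Contribution amounts here are hypothetical."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    subsidy = total - sum(contributions)  # the gap covered by a matching pool
    return total, subsidy

# 100 donors giving $1 each: $100 raised attracts a $9,900 match.
print(quadratic_funding([1] * 100))  # (10000.0, 9900.0)

# One donor giving $100: the same $100 attracts no match at all.
print(quadratic_funding([100]))      # (100.0, 0.0)
```

The same $100 attracts a large match when it comes from a hundred people and none when it comes from one person, which is how the formula rewards breadth of support over depth of pockets.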

In today’s extensive three-hour interview, Buterin and I cover:

  • What the blockchain has accomplished so far, and what it might achieve in the next decade;
  • Why many social problems can be viewed as a coordination failure to provide a public good;
  • Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work;
  • His view of ‘effective altruism’ and ‘long-termism’;
  • The difficulty of establishing true identities and preventing collusion, and why this is an important enabling technology;
  • Why he is optimistic about ‘quadratic funding’, but pessimistic about replacing existing voting with ‘quadratic voting’;
  • When it’s good and bad for private entities to censor online speech;
  • Why humanity might have to abandon living in cities;
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

#62 – Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

Imagine that, one day, humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?

In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably is.

We could tell them hard-won lessons from history; mention some research questions we wish we’d started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.

But, as Christiano points out, even if we could satisfactorily figure out what we’d like to be able to tell our ancestors, that’s just the first challenge. We’d need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth’s surface quickly gets buried far underground.

But even if we figure out a satisfactory message, and a way to ensure it’s found, a civilisation this far in the future won’t speak any language like our own. And being another species, they presumably won’t share as many fundamental concepts with us as humans from 1700 would. If we knew a way to leave them thousands of books and pictures in a material that wouldn’t break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?

That’s just one of many playful questions discussed in today’s episode with Christiano — a frequent writer who’s willing to brave questions that others find too strange or hard to grapple with.

We also talk about why divesting a little from harmful companies might be more useful than I’d been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider.

Finally, we get a big update on progress in machine learning and efforts to make sure it’s reliably aligned with our goals, which is Paul’s main research project. He responds to the views that DeepMind’s Pushmeet Kohli espoused in a previous episode, and we discuss whether we’d be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.

Some other issues that come up along the way include:

  • Are there any supplements people can take that make them think better?
  • What implications do our views on meta-ethics have for aligning AI with our goals?
  • Is there much of a risk that the future will contain anything optimised for causing harm?
  • An outtake about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

#61 – Helen Toner on emerging technology, national security, and China

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.

Some think machine learning could alter 21st century life in a similar way.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to quickly communicate with units far away in the field.

How might international security be altered if the impact of machine learning is similar in scope to that of electricity? Today’s guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for any such disruptive technical changes that might threaten international peace.

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop ‘intuitions’ that inform their judgement about future cases. This is something humans do constantly, whether we’re playing tennis, reading someone’s face, diagnosing a patient, or figuring out which business ideas are likely to succeed.

Sometimes these ML algorithms can seem uncannily insightful, and they’re only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth — all in the first five minutes of our day.

Rapid advances in ML, and their many prospective military applications, have people worrying about an ‘AI arms race’ between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could “destabilize everything from nuclear détente to human friendships.” Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today’s episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen’s experience living and studying in China.

We cover:

  • Why immigration is the main policy area that should be affected by AI advances today.
  • Why talking about an ‘arms race’ in AI is premature.
  • How the US could remain the leading country in machine learning for the foreseeable future.
  • Whether it’s ever possible to have a predictable effect on government policy.
  • How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
  • Whether it’s possible to become a China expert and still get a security clearance.
  • Can access to ML algorithms be restricted, or is that just not practical?
  • Why Helen and her colleagues set up the Center for Security and Emerging Technology and what jobs are available there and elsewhere in the field.
  • Whether AI could help stabilise authoritarian regimes.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

#60 – Prof Tetlock on why accurate forecasting matters for everything, and how you can do it better

Have you ever been infuriated by a doctor’s unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won’t tell you the chances you’ll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can’t assess the likelihood of different outcomes we’re in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul’s Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.

He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.

Along with other psychologists, he identified that many ordinary people are attracted to a ‘folk probability’ that draws just three distinctions — ‘impossible’, ‘possible’ and ‘certain’ — and which leads to major systematic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% versus 57% likely.
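To give a sense of how that accuracy is measured, here’s a minimal sketch of the Brier score, the standard scoring rule used in Tetlock’s forecasting tournaments (the forecasts and outcomes below are invented for illustration): it averages the squared gap between stated probabilities and actual results, so lower is better and confident misses are punished hardest.

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between probabilistic forecasts and binary outcomes
    (1 = event happened, 0 = it didn't). Lower is better: 0.0 is perfect,
    and unvarying 50% forecasts score 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on four events, three of which occurred.
outcomes = [1, 1, 1, 0]
print(brier_score([0.8, 0.7, 0.9, 0.2], outcomes))  # 0.045 -- well calibrated
print(brier_score([1.0, 1.0, 1.0, 1.0], outcomes))  # 0.25  -- overconfident
```

Averaged over hundreds of questions, this makes the difference between a calibrated forecaster and an overconfident one unmistakable.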

In the aftermath of Iraq and WMDs, the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2015.

That was four years ago. In today’s interview, Tetlock explains how his research agenda continues to advance, now using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.

We discuss how his work can be applied to your personal life to answer high-stakes questions, such as how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by Open Philanthropy and Clearer Thinking that teaches you to accurately distinguish your ’70 percents’ from your ’80 percents’.)

We also bring up a few methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions their profession, as it has been for Tetlock over many decades.

We view Tetlock’s work as so core to living well that we’ve brought him back for a second and longer appearance on the show — his first appearance was back in episode 15. Some questions this time around include:

  • What would it look like to live in a world where elites across the globe were better at predicting social and political trends? What are the main barriers to this happening?
  • What are some of the best opportunities for making forecaster training content?
  • What do extrapolation algorithms actually do, and given they perform so well, can we get more access to them?
  • Have any sectors of society or government started to embrace forecasting more in the last few years?
  • If you could snap your fingers and have one organisation begin regularly using proper forecasting, which would it be?
  • When, if ever, should one use explicit Bayesian reasoning?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

#59 – Cass Sunstein on how social change happens, and why it's so often abrupt & unpredictable

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn’t despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroscepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren’t quite sure how socially acceptable their feelings would have to become before they revealed them or joined a campaign for change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people who then find a message that can spread their beliefs to millions.
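Those variable thresholds echo Granovetter’s classic threshold model of collective behaviour, which this literature builds on. A minimal simulation (with invented threshold values) shows why the resulting dynamics are so hard to predict: shifting a single person’s threshold by one step can mean the difference between a society-wide cascade and no movement at all.

```python
def cascade_size(thresholds):
    """Each person joins a movement once the number of people already
    participating reaches their personal threshold. Returns the final
    number of participants. Threshold values here are hypothetical."""
    joined = 0
    while True:
        total = sum(1 for t in thresholds if t <= joined)
        if total == joined:
            return joined
        joined = total

# Person i needs i earlier joiners: one instigator triggers everyone.
print(cascade_size(list(range(100))))              # 100

# Raise a single threshold from 1 to 2 and the cascade never starts.
print(cascade_size([0, 2] + list(range(2, 100))))  # 1
```

Two populations with nearly identical distributions of opinion can thus produce a revolution in one case and a lone dissident in the other, which is exactly the unpredictability Sunstein describes.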

According to Sunstein, it’s “much, much easier” to create social change when large numbers of people secretly or latently agree with you. But ‘preference falsification’ is so pervasive that it’s no simple matter to figure out when they do.

In today’s interview, we debate with Sunstein whether this model of social change is accurate, and if so, what lessons it has for those who would like to steer the world in a more humane direction. We cover:

  • How much people misrepresent their views in democratic countries.
  • Whether the finding that groups with an existing view tend towards a more extreme position would survive the replication crisis.
  • When is it justified to encourage your own group to polarise?
  • Sunstein’s difficult experiences as a pioneer of animal rights law.
  • Whether activists can do better by spending half their resources on public opinion surveys.
  • Should people be more or less outspoken about their true views?
  • What might be the next social revolution to take off?
  • How can we learn about social movements that failed and disappeared?
  • How to find out what people really think.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
