Hours ago, from home isolation, Rob and Howie recorded an episode on:
How many could die in the coronavirus crisis, and the risk to your health personally.
What individuals might be able to do.
What we suspect governments should do.
The properties of the SARS-CoV-2 virus, the importance of not contributing to its spread, and how you can reduce your chance of catching it.
The ways some societies have screwed up, which countries have been doing better than others, how we can avoid this happening again, and why we’re optimistic.
We’ve rushed this episode out, accepting a higher risk of errors, in order to share information as quickly as possible about a very fast-moving situation.
We’ve compiled 70 links below to projects you could get involved with, as well as academic papers and other resources to understand the situation and what’s needed to fix it.
Please also see our ‘problem profile’ on global catastrophic biological risks for information on these grave risks and how you can contribute to preventing them.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
To do good, most of us look to use our time and money to affect the world around us today. But perhaps that’s all wrong.
If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you’d have roughly $131,000 to give away instead. And in 200 years you’d have over $17 million.
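The compound-growth arithmetic above is easy to sanity-check in a few lines of Python — a minimal sketch, assuming exactly 5% compounded annually (real market returns would of course vary year to year):

```python
def future_value(principal, rate, years):
    """Value of `principal` after `years` of annual compounding at `rate`."""
    return principal * (1 + rate) ** years

# $1,000 growing at 5% per year:
after_100 = future_value(1_000, 0.05, 100)
after_200 = future_value(1_000, 0.05, 200)

print(f"After 100 years: ${after_100:,.0f}")  # ≈ $131,501
print(f"After 200 years: ${after_200:,.0f}")  # roughly $17.3 million
```

The second figure being over a hundred times the first illustrates the core point: with exponential growth, each additional century of patience multiplies the eventual gift enormously.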
This astonishing fact has driven today’s guest, economics researcher Philip Trammell at Oxford’s Global Priorities Institute, to investigate the case for and against so-called ‘patient philanthropy’ in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.
He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they’ll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn’t have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn’t even know about germs, and almost nothing in medicine was justified by science.
What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways?
And there’s a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It’s possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.
Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse?
Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended?
Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarship’s initial charter, which limited it to ‘white Christian men’.
Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.
Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today’s conversation with researcher Phil Trammell and my 80,000 Hours colleague Howie Lempel, we try to answer that, and also discuss:
Real attempts at patient philanthropy in history and how they worked out
Should we have a mixed strategy, where some altruists are patient and others impatient?
Which causes are most likely to need money now, and which later?
What is the research frontier in this issue of global prioritisation?
What does this all mean for what listeners should do differently?
COVID-19
Finally, note that we recorded this podcast before the appearance of COVID-19. And as we discuss, Phil makes the case that patient philanthropists should wait for moments in history when patient philanthropic resources can do the most good. Could the coronavirus crisis be one of those important historical episodes during which Phil would argue that even patient philanthropists should ramp up their spending?
We’ve spoken with him more recently, and he says that this strikes him as unlikely. The virus is certainly doing widespread damage, but most of this damage is expected to accrue in the next few years at most. As a result, this is the sort of crisis that governments and impatient philanthropists are happy to spend on (to the extent that spending can help at all).
On Phil’s view, therefore, patient philanthropists are still best advised to wait i) until they’re rich enough to better address, or fund more substantial preparation for, similar future crises, or, ii) until we face crises with unusually long-lasting impacts, not just unusually severe impacts.
If this is right, COVID-19 just serves as an example of the many temptations to spend in the present that patient philanthropists will have to resist, in order to reap the benefits that can come from waiting to do good.
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Problem profile by Gregory Lewis · Published March 2020
Plagues throughout history suggest the potential for biology to cause global catastrophe. This potential increases in step with the march of biotechnological progress. Global Catastrophic Biological Risks (GCBRs) may constitute a significant share of all global catastrophic risk and, if so, pose a credible threat to humankind.
Despite extensive existing efforts addressed to nearby fields like biodefense and public health, GCBRs remain a large challenge that is plausibly both neglected and tractable. The existing portfolio of work often overlooks risks of this magnitude, and largely does not focus on the mechanisms by which such disasters are most likely to arise.
Much remains unclear: the contours of the risk landscape, the best avenues for impact, and how people can best contribute. Despite these uncertainties, GCBRs are plausibly one of the most important challenges facing humankind, and work to reduce these risks is highly valuable.
After reading, you may also like to listen to our podcast interview with the author about this article and the COVID-19 pandemic.
This week Oxford academic and advisor to 80,000 Hours Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It’s about how our long-term future could be better than almost anyone believes, but also how humanity’s recklessness is putting that future at grave risk: in Toby’s reckoning, a 1 in 6 chance of being extinguished this century.
I loved the book and learned a great deal from it.
While preparing for this interview I copied out 87 facts that were surprising to me or seemed important. Here’s a sample of 16:
The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined.
The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s.
In 2008 a ‘gamma ray burst’ reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren’t sure what generates gamma ray bursts but one cause may be two neutron stars colliding.
Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped pursuing the Bomb.
If we eventually burn all the fossil fuels we’re confident we can access, the leading Earth-system models suggest we’d experience 9–13°C of warming by 2300, an absolutely catastrophic increase.
In 1939, the renowned nuclear scientist Enrico Fermi told colleagues that a nuclear chain reaction was but a ‘remote possibility’. Four years later Fermi himself was personally overseeing the world’s first nuclear reactor. Wilbur Wright predicted heavier-than-air flight was at least fifty years away — just two years before he himself invented it.
The Japanese bioweapons programme in the Second World War — which included using bubonic plague against China — was directly inspired by an anti-bioweapons treaty. The reasoning ran that if Western powers felt the need to outlaw their use, these weapons must be especially good to have.
In the early 20th century the Spanish Flu killed 3-6% of the world’s population. In the 14th century the Black Death killed 25-50% of Europeans. But that’s not the worst pandemic to date: that’s the passage of European diseases to the Americas, which may have killed as much as 90% of the local population.
A recent paper estimated that even if honeybees were completely lost — and all other pollinators too — this would only create a 3 to 8 percent reduction in global crop production.
In 2007, foot-and-mouth disease, a high-risk pathogen that can only be studied in labs following the top level of biosecurity, escaped from a research facility leading to an outbreak in the UK. An investigation found that the virus had escaped from a badly-maintained pipe. After repairs, the lab’s licence was renewed — only for another leak to occur two weeks later.
Toby estimates that ‘great power wars effectively pose more than a percentage point of existential risk over the next century’. This makes them a much larger contributor to total existential risk than all the natural risks like asteroids and volcanoes combined.
During the Cuban Missile Crisis, Kennedy and Khrushchev found it so hard to communicate, and the long delays so dangerous, that they established the ‘red telephone’ system so they could write to one another directly, and better avoid future crises coming so close to the brink.
A US airman claims that during a nuclear false alarm in 1962, which he himself witnessed, two airmen from one launch site were ordered to run through the underground tunnel to another missile’s launch site, with orders to shoot a lieutenant there if he continued to refuse to abort the launch of his missile.
In 2014 GlaxoSmithKline accidentally released 45 litres of concentrated polio virus into a river in Belgium. In 2004, SARS escaped from the National Institute of Virology in Beijing. In 2005 at the University of Medicine and Dentistry in New Jersey, three mice infected with bubonic plague went missing from the lab and were never found.
The Soviet Union covered 22 million square kilometres, 16% of the world’s land area. At its height, during the reign of Genghis Khan’s grandson, Kublai Khan, the Mongol Empire had a population of 100 million, around 25% of the world’s population at the time.
All the methods we’ve come up with for deflecting asteroids wouldn’t work on one big enough to cause human extinction.
While I’ve been studying this topic for a long time, and known Toby eight years, a remarkable amount of what’s in the book was new to me.
Of course the book isn’t a series of isolated amusing facts, but rather a systematic review of the many ways humanity’s future could go better or worse, how we might know about them, and what might be done to improve the odds.
And that’s how we approach this conversation, first talking about each of the main risks, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved.
Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which my colleague Arden Koehler and I barely even had to work for.
For those wondering about the pandemic just now, this extract about diseases like COVID-19 was the most read article in The Guardian USA the day the book was launched.
Some topics Arden and I bring up:
What Toby changed his mind about while writing the book
Asteroids, comets, supervolcanoes, and threats from space
Why natural and anthropogenic risks should be treated so differently
Are people exaggerating when they say that climate change could actually end civilization?
What can we learn from historical pandemics?
How to estimate likelihood of nuclear war
Toby’s estimate of unaligned AI causing human extinction in the next century
Is this century the most important time in human history, or is that a narcissistic delusion?
Competing visions for humanity’s ideal future
And more.
Interested in applying this thinking to your career?
If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.
The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible.
Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift.
All of us added something to it, but the single biggest contributor was our CEO and today’s guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012.
This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we’ve discovered since we started investigating high impact careers.
But it’s perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words.
Fortunately it’s designed to be highly modular and it’s easy to work through it over multiple sessions, scanning over the articles it links to on each topic.
Perhaps though, you’d prefer to absorb our most essential ideas in conversation form, in which case this episode is for you.
If you want to have a big impact with your career, and you say you’re only going to read one article from us, we recommend you read our key ideas page.
And likewise, if you’re only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through:
The most common misunderstandings of our advice
A high level overview of what 80,000 Hours generally recommends
Our key moral positions
What are the most pressing problems to work on and why?
Which careers effectively contribute to solving these problems?
Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration
As well as plenty more.
One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we’re least sure about, or didn’t yet cover within the article.
Note though that what’s in the article is more precisely stated, our advice is going to keep shifting, and we’re aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page!
Update: As of Sept 2021, you can now see this more recent introduction to the key ideas of 80,000 Hours and our story on the Superdatascience podcast, which is especially good for people with STEM backgrounds. You can also see another introduction on Clearer Thinking, which is a bit more in-depth.
Blog post by Anonymous · Published March 2nd, 2020
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.
But it’s the thirteenth in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
Today’s bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice.
You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.
Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include:
If you’re not going to be completely moral, should you try being a bit more moral or give up?
Should you feel angry if you see an injustice, and if so, why?
How much should we ask people to live frugally?
So far the feedback on the post-episode chats that we’ve done has been positive, so we thought we’d go ahead and try out this freestanding one. But fair warning: it’s among the more difficult episodes to follow, and probably not the best one to listen to first, as you’ll benefit from having more context!
And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I’ve read it and very much enjoyed it. Find out where you can pre-order it here. We’ll have an interview with him coming up soon.
Blog post by Anonymous · Published February 21st, 2020
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the twelfth in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
Blog post by Anonymous · Published February 17th, 2020
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.
But it’s the eleventh in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
Blog post by Anonymous · Published February 13th, 2020
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the tenth in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
COVID-19 (previously known as nCoV) is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places.
But bad though it is, it’s much closer to a warning shot than a worst case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both.
Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can’t do enough to reduce their spread. And vaccines and drug treatments take at least a year to arrive, if they arrive at all.
This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like.
In today’s episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University’s Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we’re to keep the risk at acceptable levels. The ideas are:
Science
1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go.
2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes.
3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria.
Response
4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely.
5. Rigorously evaluate in what situations travel bans are warranted. (They’re more often counterproductive.)
6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible.
7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms.
8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out.
Oversight
9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens.
10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen.
11. Require full cost-benefit analysis of ‘dual-use’ research projects that can generate global risks.
12. And finally, to maintain momentum, it’s necessary to clearly assign responsibility for the above to particular individuals and organisations.
Very simply, there are multiple cutting edge technologies and policies that offer the promise of detecting new diseases right away, and delivering us effective treatments in weeks rather than years. All of them can use additional funding and talent.
At the same time, health systems around the world also need to develop pandemic response plans — something few have done — so they don’t have to figure everything out on the fly.
For example, if we don’t have good treatments for a disease, at what point do we stop telling people to come into hospital, where there’s a particularly high risk of them infecting the most medically vulnerable people? And if borders are shut down, how will we get enough antibiotics or facemasks, when they’re almost all imported?
Separately, we need some way to stop bad actors from accessing the tools necessary to weaponise a viral disease, before they cost less than $1,000 and fit on a desk.
These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem.
In the episode Rob and Cassidy also talk about:
How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential
The pros, and significant cons, of travel restrictions
Whether the same policies work for natural and anthropogenic pandemics
Where we stand with nCoV as of today.
Blog post by Anonymous · Published February 9th, 2020
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the ninth in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
The State Council of China’s 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard?
In his paper ‘Deciphering China’s AI Dream’ today’s guest, PhD student Jeff Ding, outlines why he believes none of these claims are true.
He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development?
Jeff emphasises that China’s AI strategy did not appear out of nowhere with the 2017 State Council AI development plan, which attracted a lot of overseas attention. Rather, that was just another step forward in a long trajectory of increasing focus on science and technology. It’s connected with a plan to develop an ‘Internet of Things’, and linked to a history of strategic planning for technology in areas like aerospace and biotechnology.
And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse.
What are the different levers that China is pulling to try to spur AI development?
Here, Jeff wanted to challenge the myth that China’s AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government.
Are China’s AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China’s progress in AI.
Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China’s AI capabilities have surpassed the US or make it the world’s leading AI power.
Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we’d need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016.
Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance.
He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues.
In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover:
The best analogies for thinking about the growing influence of AI
How do prominent Chinese figures think about AI?
Cultural cliches in the West and China
Coordination with China on AI
Private companies vs. government research
How are things going to play out with ‘compute’?
China’s social credit system
The relationship between China and other countries beyond AI
Suggestions for people who want to become professional China specialists
And more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Blog post by Anonymous · Published January 28th, 2020
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the eighth in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
You’re given a box with a set of dice in it. If you roll an even number, a person’s life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?
A committed consequentialist might say, “Sure! Free money!” But most people will think it obvious that you should say no. You’d be gaining only a tiny benefit in exchange for taking on moral responsibility for whether other people live or die.
And yet, according to today’s return guest, philosophy Professor Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.
To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you’ve probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you’ve changed the identity of a future person.
That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. Thanks to these ripple effects, after 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.
As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the ‘new’ people will cause car crashes that wouldn’t have occurred in their absence, including crashes that prematurely kill people alive today.
Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.
So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie (worth $10). Should you do it?
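The expected-value arithmetic behind this setup can be sketched in a few lines. This is a minimal illustration using only the rough figures quoted in the text above; the population size `N` is a hypothetical number for illustration, not one taken from Will’s paper:

```python
# Illustrative arithmetic behind the 'paralysis argument' setup,
# using only the rough figures quoted in the text above.

AVERAGE_LIFE_DAYS = 30_000  # an average life is roughly 30,000 days
CHILDREN_PER_LIFE = 2       # the average person has about two children
CRASH_DEATHS_PER_1000 = 13  # about 1.3% of people die in car crashes

# If your drive perturbs other people's schedules for 7,500 person-days,
# the expected number of conception events whose timing you shift is:
days_affected = 7_500
expected_conceptions_shifted = days_affected * CHILDREN_PER_LIFE / AVERAGE_LIFE_DAYS
print(expected_conceptions_shifted)  # 0.5, i.e. roughly a coin flip per trip

# Once the ripple effects have replaced the identities of N future people,
# the expected number of 'new' car-crash deaths you cause, and symmetrically
# the number of 'old' deaths you avert, is:
N = 1_000_000  # hypothetical count of people whose identity changes
print(N * CRASH_DEATHS_PER_1000 // 1000)  # 13000 caused, 13000 averted
```

The symmetry in the last two numbers is exactly the consequentialist’s reply: in expectation the harms and benefits cancel, so only a view that weights caused harms more heavily than allowed benefits generates the paralysis.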
This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.
To see how it implies inaction as an ideal, recall the distinction between consequentialism and non-consequentialism. For consequentialists, who just add up the net consequences of everything, there’s no problem here. The benefits and costs perfectly cancel out, and you get to see a free movie.
But most ‘non-consequentialists’ endorse an act/omission distinction: it’s worse to knowingly cause a harm than it is to merely allow a harm to occur. And they further believe harms and benefits are asymmetric: it’s more wrong to hurt someone a given amount than it is right to benefit someone else an equal amount.
So, in this example, the fact that your actions caused X deaths should be given more moral weight than the fact that you also saved X lives.
It’s because of this that the non-consequentialist feels they shouldn’t roll the dice just to gain $10. But as we can see above, if they’re being consistent, rather than leave the house, they’re obligated to do whatever would count as an ‘inaction’, in order to avoid the moral responsibility of foreseeably causing people’s deaths.
Will’s best idea for resolving this strange implication? In this episode we discuss a few options:
give up on the benefit/harm asymmetry
find a definition of ‘action’ under which leaving the house counts as an inaction
accept a ‘Pareto principle’, where actions can’t be wrong so long as everyone affected would approve or be indifferent to them before the fact
Will is most optimistic about the last, but as we discuss, this would bring people a lot closer to full consequentialism than is immediately apparent.
Finally, a different escape — conveniently for Will, given his work — is to dedicate your life to improving the long-term future, and thereby do enough good to offset the apparent harms you’ll do every time you go for a drive. In this episode Rob and Will also cover:
Are we, or are we not, living at the most influential time in history?
The culture of the effective altruism community
Will’s new lower estimate of the risk of human extinction over the next hundred years
Why does AI stand out a bit less for Will now as a particularly pivotal technology?
How he’s getting feedback while writing his book
The differences between Americans and Brits
Does the act/omission distinction make sense?
The case for strong longtermism, and longtermism for risk-averse altruists
Caring about making a difference yourself vs. caring about good things happening
Why feeling guilty about characteristics you were born with is crazy
And plenty more.
Interested in applying this thinking to your career?
If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Blog post by Anonymous · Published December 17th, 2019
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the seventh in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
What is it like to be you right now? You’re seeing this text on the screen, you smell the coffee next to you, feel the warmth of the cup, and hear your housemates arguing about whether Home Alone was better than Home Alone 2: Lost in New York. There’s a lot going on in your head — your conscious experiences.
Now imagine beings that are identical to humans, except for one thing: they lack conscious experience. If you spill that coffee on them, they’ll jump like anyone else, but inside they’ll feel no pain and have no thoughts: the lights are off.
The concept of these so-called ‘philosophical zombies’ was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic ‘trolley problem’:
Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?
Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is greatly reduced, or absent entirely.
So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.
He asks us to consider the ‘Vulcans’. For those who’ve never seen Star Trek: Vulcans experience rich forms of cognitive and sensory consciousness; they see and hear and reflect on the world around them. But they’re incapable of experiencing pleasure or pain.
Does such a being lack moral status?
To answer this Dave invites us to imagine a further trolley problem: suppose you have a conscious human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?
Dave firmly believes the answer is no, and if he’s right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.
Dave is one of the world’s top experts on the philosophy of consciousness. He helped return the question ‘what is consciousness?’ to the centre stage of philosophy with his 1996 book ‘The Conscious Mind’, which argued against then-dominant materialist theories of consciousness.
This comprehensive interview, at over four and a half hours long, outlines each contemporary answer to the mystery of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an ‘illusion’, to panpsychism, according to which it’s a fundamental physical property present in all matter.
These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious, our treatment of them could already be an atrocity. If accurate computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?
Dave Chalmers is probably the best person on the planet to interview about these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode and our personal favourite so far.
They discuss:
Why is there so little consensus among philosophers about so many key questions?
Can free will exist, even in a deterministic universe?
Might we be living in a simulation? Why is this worth talking about?
The hard problem of consciousness
Materialism, functionalism, idealism, illusionism, panpsychism, and other views about the nature of consciousness
The story of ‘integrated information theory’
What philosophers think of eating meat
Should we worry about AI becoming conscious, and therefore worthy of moral concern?
Should we expect to get to conscious AI well before we get human-level artificial general intelligence?
Could minds uploaded to a computer be conscious?
If you uploaded your mind, would that mind be ‘you’?
Why did Dave start thinking about the ‘singularity’?
Careers in academia
And whether a sense of humour is useful for research.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Blog post by Anonymous · Published December 9th, 2019
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.
The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.
This is the sixth in this series of posts with anonymous answers. You can find the complete collection here.
We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.
Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.
In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually released way back in 1979. It took a German translation, published ten years later, for the protests to kick off.
According to Singer, he honestly didn’t expect this view to be as provocative as it became, and he certainly wasn’t aiming to stir up trouble and get attention.
But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times.
Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with a real ability to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one?
Singer’s book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free ebook and audiobook, read by a range of celebrities. Get it here.
Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences. But as Singer tells it, he gives public relations considerations plenty of thought.
One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump.
Another is the focus of the effective altruism (EA) community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he’s troubled by the possibility of extinction risks becoming the public face of the movement.
He suspects there’s a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns.
Rob is joined in this interview by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview and a post-episode discussion. They only had an hour with Peter, but also cover:
What does he think are the most plausible alternatives to consequentialism?
Is it more humane to eat wild caught animals than farmed animals?
The re-release of The Life You Can Save
Whether it’s good to polarize people in favour of and against your views
His active opposition to the Vietnam war and conscription
Should we make it easier for people to express unpopular opinions?
His most and least strategic career decisions
What does he think are the effective altruism community’s biggest mistakes?
Population ethics and arguments for and against prioritising the long-term future
What led him to change his mind on significant questions in moral philosophy?
What is at the heart of making moral mistakes?
What should we do when we are morally uncertain?
And more.
In the post-episode discussion, Rob and Arden continue talking about:
The pros and cons of keeping EA as one big movement
Singer’s thoughts on immigration
And consequentialism with side constraints
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq. Illustration of Singer: Matthias Seifarth.