Anonymous contributors answer: What are the biggest flaws of the effective altruism community?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

But it’s the thirteenth in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

Arden & Rob on demandingness, work-life balance and injustice (80k team chat #1)

Today’s bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.

Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include:

  • If you’re not going to be completely moral, should you try being a bit more moral or give up?
  • Should you feel angry if you see an injustice, and if so, why?
  • How much should we ask people to live frugally?

So far the feedback on the post-episode chats we’ve done has been positive, so we thought we’d go ahead and try out this freestanding one. But fair warning: it’s among the more difficult episodes to follow, and probably not the best one to listen to first, as you’ll benefit from having more context!

If you’d like to listen to more of Arden, you can find her in episode 67 — David Chalmers on the nature and ethics of consciousness, or episode 66 – Peter Singer on being provocative, effective altruism & how his moral views have changed.

Here’s more information on some of the issues we touch on:

I mention the call for papers of the Academic Workshop on Global Priorities in the introduction — you can learn more here.

And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I’ve read it and very much enjoyed it. Find out where you can pre-order it here. We’ll have an interview with him coming up soon.

Continue reading →

Anonymous contributors answer: What are the biggest flaws of 80,000 Hours?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the twelfth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

Anonymous contributors answer: Should the effective altruism community grow faster or slower? And should it be broader, or narrower?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

But it’s the eleventh in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

Anonymous contributors answer: What’s some underrated general life advice?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the tenth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#70 – Dr Cassidy Nelson on the twelve best ways to stop the next pandemic (and limit COVID-19)

COVID-19, the disease caused by the novel coronavirus initially referred to as 2019-nCoV, is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places.

But bad though it is, it’s much closer to a warning shot than a worst-case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both.

Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can’t do enough to reduce their spread. And vaccines or drug treatments are unlikely to arrive for at least a year, if they ever arrive at all.

This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like.

In today’s episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University’s Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we’re to keep the risk at acceptable levels. The ideas are:

Science

1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go.
2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes.
3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have broad-spectrum antibiotics that work against multiple types of bacteria.

Response

4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely.
5. Rigorously evaluate in what situations travel bans are warranted. (They’re more often counterproductive.)
6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible.
7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms.
8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out.

Oversight

9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens.
10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen.
11. Require full cost-benefit analysis of ‘dual-use’ research projects that can generate global risks.

12. And finally, to maintain momentum, it’s necessary to clearly assign responsibility for the above to particular individuals and organisations.

Very simply, there are multiple cutting-edge technologies and policies that offer the promise of detecting new diseases right away, and delivering us effective treatments in weeks rather than years. All of them could use additional funding and talent.

At the same time, health systems around the world also need to develop pandemic response plans — something few have done — so they don’t have to figure everything out on the fly.

For example, if we don’t have good treatments for a disease, at what point do we stop telling people to come into hospital, where there’s a particularly high risk of them infecting the most medically vulnerable people? And if borders are shut down, how will we get enough antibiotics or facemasks, when they’re almost all imported?

Separately, we need some way to stop bad actors from accessing the tools necessary to weaponise a viral disease, before they cost less than $1,000 and fit on a desk.

These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem.

In the episode Rob and Cassidy also talk about:

  • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential
  • The pros, and significant cons, of travel restrictions
  • Whether the same policies work for natural and anthropogenic pandemics
  • Where we stand with nCoV as of today.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Transcriptions: Zakee Ulhaq.

Continue reading →

Anonymous contributors answer: How honest & candid should high-profile people really be?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the ninth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

The State Council of China’s 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard?

In his paper ‘Deciphering China’s AI Dream’, today’s guest, PhD student Jeff Ding, outlines why he believes none of these claims are true.

He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development?

Jeff emphasises that China’s AI strategy did not appear out of nowhere with the 2017 State Council AI development plan, which attracted a lot of overseas attention. Rather, that was just another step forward in a long trajectory of increasing focus on science and technology. It’s connected with a plan to develop an ‘Internet of Things’, and linked to a history of strategic planning for technology in areas like aerospace and biotechnology.

And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse.

What are the different levers that China is pulling to try to spur AI development?

Here, Jeff wanted to challenge the myth that China’s AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with those of the central government.

Are China’s AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China’s progress in AI.

Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China’s AI capabilities have surpassed those of the US, or that China is already the world’s leading AI power.
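
To give a feel for what a composite, driver-based index like this involves, here is a minimal sketch in Python. The driver names mirror the ones listed above, but the weights and per-country scores are invented purely for illustration; they are not Jeff’s actual data or methodology.

```python
# Toy sketch of a composite "AI potential"-style index.
# The four drivers mirror those mentioned above; the weights and the
# per-driver scores below are invented for illustration only and are
# NOT Jeff Ding's actual figures or methodology.

def composite_index(scores, weights):
    """Weighted average of per-driver scores, each normalised to 0-100."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in weights) / total_weight

weights = {"data": 0.25, "hardware": 0.25, "research_talent": 0.25, "commercial": 0.25}

usa = {"data": 100, "hardware": 100, "research_talent": 100, "commercial": 100}
china = {"data": 60, "hardware": 40, "research_talent": 45, "commercial": 55}

print(composite_index(usa, weights))    # 100.0 (baseline)
print(composite_index(china, weights))  # 50.0, i.e. "about half" in this toy setup
```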

Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we’d need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016.

Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance.

He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues.

In today’s episode, Rob and Jeff go through this widely discussed report, and also cover:

  • The best analogies for thinking about the growing influence of AI
  • How do prominent Chinese figures think about AI?
  • Cultural cliches in the West and China
  • Coordination with China on AI
  • Private companies vs. government research
  • How are things going to play out with ‘compute’?
  • China’s social credit system
  • The relationship between China and other countries beyond AI
  • Suggestions for people who want to become professional China specialists
  • And more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Bonus episode: What we do and don’t know about the 2019-nCoV coronavirus

UPDATE: Please also see our COVID-19 landing page for many more up-to-date articles about the pandemic.


Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, just recorded a discussion about the 2019-nCoV virus.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show.

In the 1h15m conversation we cover:

  • What is it?
  • How many people have it?
  • How contagious is it?
  • What fraction of people who contract it die?
  • How likely is it to spread out of control?
  • What’s the range of plausible fatalities worldwide?
  • How does it compare to other epidemics?
  • What don’t we know and why?
  • What actions should listeners take, if any?
  • How should the complexities of the above be communicated by public health professionals?

Below are some links we discuss in the episode, or otherwise think are informative:

Advice on how to avoid catching contagious diseases

Forecasts

General summaries of what’s going on

Our previous episodes about pandemic control

Thoughts on how to communicate risk to the public

Official updates

Published papers

General advice on disaster preparedness

Tweets mentioned

Continue reading →

Anonymous contributors answer: What’s one way to be successful you don’t think people talk about enough?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the eighth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#68 – Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

You’re given a box with a set of dice in it. If you roll an even number, a person’s life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

A committed consequentialist might say, “Sure! Free money!” But most will think it obvious that you should say no. You’d gain only a tiny benefit, in exchange for taking on moral responsibility for whether other people live or die.

And yet, according to today’s return guest, philosophy Professor Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So if you’ve impacted at least 7,500 person-days, then, statistically speaking, you’ve probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you’ve changed the identity of a future person.
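
As a rough illustration of how those numbers fit together, here is a minimal back-of-the-envelope sketch in Python. It simply restates the approximate figures quoted above (30,000 days per life, two children per person, 7,500 person-days affected); it is not taken from Will’s paper.

```python
# Back-of-the-envelope version of the argument above, using only the
# approximate figures quoted in the text (not figures from Will's paper).

AVERAGE_LIFE_DAYS = 30_000   # roughly 82 years
CHILDREN_PER_LIFE = 2        # average number of conceptions a person causes

# On average, one conception event occurs per this many person-days:
days_per_conception = AVERAGE_LIFE_DAYS / CHILDREN_PER_LIFE   # 15,000

# Suppose your trip to the cinema perturbs this many person-days of schedules:
person_days_affected = 7_500

# Expected number of conception events whose exact timing you shift:
expected_shifted_conceptions = person_days_affected / days_per_conception
print(expected_shifted_conceptions)   # 0.5 in expectation

# Each shifted conception almost certainly changes which of the ~200 million
# sperm wins, and therefore the identity of the resulting person.
```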

That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. Thanks to these ripple effects, after 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the ‘new’ people will cause car crashes that wouldn’t have occurred in their absence, including crashes that prematurely kill people alive today.

Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie (worth $10). Should you do it?

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.

To see how it implies inaction as an ideal, recall the distinction between consequentialism and non-consequentialism. For consequentialists, who just add up the net consequences of everything, there’s no problem here. The benefits and costs perfectly cancel out, and you get to see a free movie.

But most ‘non-consequentialists’ endorse an act/omission distinction: it’s worse to knowingly cause a harm than it is to merely allow a harm to occur. And they further believe harms and benefits are asymmetric: it’s more wrong to hurt someone a given amount than it is right to benefit someone else an equal amount.

So, in this example, the fact that your actions caused X deaths should be given more moral weight than the fact that you also saved X lives.

It’s because of this that the non-consequentialist feels they shouldn’t roll the dice just to gain $10. But as we can see above, if they’re being consistent, rather than leave the house, they’re obligated to do whatever would count as an ‘inaction’, in order to avoid the moral responsibility of foreseeably causing people’s deaths.

Will’s best idea for resolving this strange implication? In this episode we discuss a few options:

  • give up on the benefit/harm asymmetry
  • find a definition of ‘action’ under which leaving the house counts as an inaction
  • accept a ‘Pareto principle’, where actions can’t be wrong so long as everyone affected would approve of them, or be indifferent to them, before the fact.

Will is most optimistic about the last, but as we discuss, this would bring people a lot closer to full consequentialism than is immediately apparent.

Finally, a different escape — conveniently for Will, given his work — is to dedicate your life to improving the long-term future, and thereby do enough good to offset the apparent harms you’ll do every time you go for a drive. In this episode Rob and Will also cover:

  • Are we, or are we not, living at the most influential time in history?
  • The culture of the effective altruism community
  • Will’s new lower estimate of the risk of human extinction over the next hundred years
  • Why does AI stand out a bit less for Will now as a particularly pivotal technology?
  • How he’s getting feedback while writing his book
  • The differences between Americans and Brits
  • Does the act/omission distinction make sense?
  • The case for strong longtermism, and longtermism for risk-averse altruists
  • Caring about making a difference yourself vs. caring about good things happening
  • Why feeling guilty about characteristics you were born with is crazy
  • And plenty more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Anonymous contributors answer: What mistakes do people most often make when deciding what work to do?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the seventh in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#67 – David Chalmers on the nature and ethics of consciousness

What is it like to be you right now? You’re seeing this text on the screen, you smell the coffee next to you, feel the warmth of the cup, and hear your housemates arguing about whether Home Alone was better than Home Alone 2: Lost in New York. There’s a lot going on in your head — your conscious experiences.

Now imagine beings that are identical to humans, except for one thing: they lack conscious experience. If you spill that coffee on them, they’ll jump like anyone else, but inside they’ll feel no pain and have no thoughts: the lights are off.

The concept of these so-called ‘philosophical zombies’ was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic ‘trolley problem’:

Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is greatly reduced, or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

He asks us to consider the ‘Vulcans’. If you’ve never seen Star Trek: Vulcans are beings who experience rich forms of cognitive and sensory consciousness; they see and hear and reflect on the world around them. But they’re incapable of experiencing pleasure or pain.

Does such a being lack moral status?

To answer this Dave invites us to imagine a further trolley problem: suppose you have a conscious human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he’s right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world’s top experts on the philosophy of consciousness. He helped return the question ‘what is consciousness?’ to the centre stage of philosophy with his 1996 book ‘The Conscious Mind’, which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four and a half hours long, outlines each contemporary answer to the mystery of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an ‘illusion’, to panpsychism, according to which it’s a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious, our treatment of them could already be an atrocity. If accurate computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to interview about these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode and our personal favourite so far.

They discuss:

  • Why is there so little consensus among philosophers about so many key questions?
  • Can free will exist, even in a deterministic universe?
  • Might we be living in a simulation? Why is this worth talking about?
  • The hard problem of consciousness
  • Materialism, functionalism, idealism, illusionism, panpsychism, and other views about the nature of consciousness
  • The story of ‘integrated information theory’
  • What philosophers think of eating meat
  • Should we worry about AI becoming conscious, and therefore worthy of moral concern?
  • Should we expect to get to conscious AI well before we get human-level artificial general intelligence?
  • Could minds uploaded to a computer be conscious?
  • If you uploaded your mind, would that mind be ‘you’?
  • Why did Dave start thinking about the ‘singularity’?
  • Careers in academia
  • And whether a sense of humour is useful for research.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Anonymous contributors answer: What bad habits do you see among people trying to improve the world?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect most of it is more broadly useful.

This is the sixth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#66 – Peter Singer on provocative advocacy, EA, how his ethical views have changed, and drowning children

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually released way back in 1979. It took a German translation ten years on for protests to kick off.

According to Singer, he honestly didn’t expect this view to be as provocative as it became, and he certainly wasn’t aiming to stir up trouble and get attention.

But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times.

Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with a real ability to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one?

Singer’s book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free ebook and audiobook, read by a range of celebrities. Get it here.

Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences. But as Singer tells it, he gives public relations considerations plenty of thought.

One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump.

Another is the focus of the effective altruism (EA) community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he’s troubled by the possibility of extinction risks becoming the public face of the movement.

He suspects there’s a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns.

Rob is joined by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview itself and for a post-episode discussion. They only had an hour with Peter, but also cover:

  • What does he think are the most plausible alternatives to consequentialism?
  • Is it more humane to eat wild caught animals than farmed animals?
  • The re-release of The Life You Can Save
  • Whether it’s good to polarize people in favour of and against your views
  • His active opposition to the Vietnam war and conscription
  • Should we make it easier for people to express unpopular opinions?
  • His most and least strategic career decisions
  • What does he think are the effective altruism community’s biggest mistakes?
  • Population ethics and arguments for and against prioritising the long-term future
  • What led to his changing his mind on significant questions in moral philosophy?
  • What is at the heart of making moral mistakes?
  • What should we do when we are morally uncertain?
  • And more.

In the post-episode discussion, Rob and Arden continue talking about:

  • The pros and cons of keeping EA as one big movement
  • Singer’s thoughts on immigration
  • And consequentialism with side constraints

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
Illustration of Singer: Matthias Seifarth.

Continue reading →

Anonymous answers: How risk-averse should talented young people be about their careers?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect much of it is more broadly useful.

This is the fifth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read 80,000 Hours’ career guide.

Continue reading →

Anonymous contributors answer: If you were 18 again, what would you do differently this time around? And other personal career reflections.

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect much of it is more broadly useful.

This is the fourth in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

#65 – Amb. Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security.

Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. And in 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS).

But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation.

In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, as well as other countries.

What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next.

Each event would have a distinct purpose. For one, she’d travel to Germany as a US representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the last.

Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda (GHSA) discussed at length in episode 27.

Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the National Commission on Terrorist Attacks Upon the United States (9/11 Commission), where she was the lead staff member conducting research and interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11.

She’s also a retired Naval Reserves officer and received several awards for her service. Bonnie remembers the military fondly. She didn’t want that life 24 hours a day, which is why she never went full time. But she liked the rules, loved the camaraderie and remembers it as a time filled with laughter.

And as if that all weren’t curious enough, four years ago Bonnie decided to go vegan. We talk about her work so far as well as:

  • How listeners can start a career like hers
  • The history of Cooperative Threat Reduction work
  • Mistakes made by Mr Obama and Mr Trump
  • Biggest uncontrolled nuclear material threats today
  • Biggest security issues in the world today
  • The Biological Weapons Convention
  • Where does Bonnie disagree with her colleagues working on peace and security?
  • The implications for countries who give up WMDs
  • The fallout from a change in government
  • Networking, the value of attention, and being a vegan in DC
  • And the best 2020 Presidential candidates.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Continue reading →

Anonymous answers: What’s the thing people most overrate in their career?

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

The advice is particularly targeted at people whose approach to doing good aligns with the values of the effective altruism (EA) community, but we expect much of it is more broadly useful.

This is the third in this series of posts with anonymous answers. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

Continue reading →

Advice on how to read our advice

We’ve found that readers sometimes interpret or apply our advice in ways we didn’t anticipate and wouldn’t exactly recommend. That’s hard to avoid when you’re writing for a range of people with different personalities and initial views.

To help get on the same page, here’s some advice about our advice, for those about to launch into reading our site.

We want our writing to inform people’s views, but only in proportion to the likelihood that we’re actually right. So we need to make sure you have a balanced perspective on how compelling the evidence is for the different claims we make on the site, and how much weight to put on our advice in your situation.

What follows is a list of points to bear in mind when reading our site, and some thoughts on how to avoid the communication problems we face.

We’ve been wrong before, and we’ll be wrong again

We still have a lot to learn about how people can best have a positive impact with their careers. This means, unfortunately, that we make mistakes and change our advice over time, and that in a couple of years we’ll no longer stand by some of the claims we make today.

Our positions can change because the world changes — for instance, a problem that was more pressing in the past can receive lots of attention and become less pressing over time. Our positions can also change as we learn more —

Continue reading →