#91 – Lewis Bollard on big wins against factory farming and how they happened

I suspect today’s guest, Lewis Bollard, might be the single best person in the world to interview for an overview of the methods that might be effective for ending factory farming, and of the broader lessons we can learn from people working to end cruelty in animal agriculture.

That’s why I interviewed him back in 2017, and it’s why I’ve come back for an updated second dose four years later.

That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, and what those people are actually doing.

Lewis leads Open Philanthropy’s strategy for farm animal welfare, and since he joined in 2015 they’ve disbursed about $130 million in grants to nonprofits as part of this program.

This episode certainly isn’t only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest.

Some of those include:

  • Between 2019 and 2020, Beyond Meat’s cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so when? How quickly can it reach price parity?
  • One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later?
  • One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer?
  • Costco, the second largest retailer in the U.S., is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference?

We also cover:

  • Switzerland’s ballot measure on eliminating factory farming
  • What a Biden administration could mean for reducing animal suffering
  • How chicken is cheaper than peanuts
  • The biggest recent wins for farmed animals
  • Things that haven’t gone to plan in animal advocacy
  • Political opportunities for farmed animal advocates in Europe
  • How the US is behind Brazil and Israel on animal welfare standards
  • The value of increasing media coverage of factory farming
  • The state of the animal welfare movement
  • And much more

If you’d like an introduction to the nature of the problem and why Lewis is working on it, in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#90 – Ajeya Cotra on worldview diversification and how big the future could be

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
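
To make the arithmetic concrete, here’s a rough sketch of the two competing updates, using the numbers from the setup above and assuming the ‘waking up at all favours the bigger world’ style of reasoning just described:

```python
# A rough sketch of the two competing updates in the box thought experiment.
prior_heads = prior_tails = 0.5          # fair coin

# Update 1: "I woke up at all." Under the reasoning above, the tails world,
# with a billion times more people, gets a billion-fold boost.
existence_boost_tails = 10_000_000_000 / 10      # = 1e9

# Update 2: "My box is labelled 3." Within each world you're equally likely
# to be in any box, so a low label is a billion times likelier under heads.
lik_box3_heads = 1 / 10
lik_box3_tails = 1 / 10_000_000_000

odds_tails_vs_heads = (prior_tails * existence_boost_tails * lik_box3_tails) / \
                      (prior_heads * lik_box3_heads)
print(round(odds_tails_vs_heads, 6))  # 1.0: the two billion-fold factors cancel
                                      # exactly, so you no longer know what to believe.
```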

In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.
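
Here’s the same arithmetic applied to the human case, purely as an illustration of the numbers in the previous paragraph:

```python
# Back-of-the-envelope version of the doomsday-style update, using the
# hypothetical figures from the text above.
first_n = 100e9                 # we find ourselves among the first ~100 billion humans
big_future_total = 1_000e12     # 1,000 trillion people ever live
small_future_total = 100e9      # humanity dies out after ~100 billion people

# Chance of a "randomly sampled" human landing this early under each future:
p_early_given_big = first_n / big_future_total      # 0.0001, i.e. 0.01%
p_early_given_small = first_n / small_future_total  # 1.0, i.e. certain

print(round(p_early_given_small / p_early_given_big))  # 10,000: if you accept the
# sampling assumption, the evidence favours the short future by that factor.
```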

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this ‘doomsday argument’, and it may be that it simply doesn’t work logically. That’s part of why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces in striking a balance between taking big ideas seriously and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

  • Which worldviews Open Phil finds most plausible, and how it balances them
  • Which worldviews Ajeya doesn’t embrace but almost does
  • How hard it is to get to other solar systems
  • The famous ‘simulation argument’
  • When transformative AI might actually arrive
  • The biggest challenges involved in working on big research reports
  • What it’s like working at Open Phil
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#89 – Owen Cotton-Barratt on epistemic systems & layers of defense against potential global catastrophes

From one point of view, academia forms one big ‘epistemic’ system — a process which directs attention, generates ideas, and judges which ideas are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems.

How these systems absorb, process, combine, and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it more or less determines the direction of the future.

With that in mind, today’s guest, Owen Cotton-Barratt, founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers the freedom to try to understand how the world works.

Instead of you having to pay for a master’s degree, the RSP pays you to spend significant amounts of time thinking about high-level questions, like “What is important to do?” and “How can I usefully contribute?”

Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible.

The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics.

It also attracts people without much experience, but who have a lot of ideas. If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?”

Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory.

Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years by not pursuing an unproductive path could mean you ultimately have a much bigger impact with your career.

In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover:

  • Extinction risk classification and reduction strategies
  • Preventing small disasters from becoming large disasters
  • How likely we are to go extinct after ending up in a collapsed state
  • What most people should do if longtermism is true
  • Advice for mathematically-minded people
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript: Zakee Ulhaq

Continue reading →

#88 – Tristan Harris on the need to change the incentives of social media companies

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.

Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it’s hard to remember how recently it was a fringe view.

It feels intuitively true that our attention spans are shortening, that we’re spending more time alone, that we’re less productive, that polarization and radicalization are on the rise, and that we trust our fellow citizens less because we share less of a common basis of reality.

But while it all feels plausible, how strong is the evidence that it’s true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.

At the same time, historians think that the printing press probably helped spark religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them, and them alone, the ability to spread propaganda across the whole of Germany and the USSR. And a jury trial — an Athenian innovation — ended up condemning Socrates to death. Fears about new technologies aren’t always misguided.

Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.

Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.

Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what’s in our interests as users and citizens?

One way is to encourage a shift to a subscription model. Presumably, that would get Facebook’s engineers thinking more about how to make users truly happy, and less about how to make advertisers happy.

One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.

But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.

One possibility is for Congress to say: it’s unacceptable for large social media platforms to influence the behaviour of users through hyper-targeted advertising. Once you reach a certain size, you are required to shift over into a subscription model.

That runs into the problem that some people would be able to afford a subscription and others would not. But Tristan points out that during COVID, US electricity companies weren’t allowed to disconnect you even if you were behind on your bills. Maybe we can find a way to classify social media as an ‘essential service’ and subsidize a basic version for everyone.

Of course, getting governments more involved in social media could itself be dangerous. Politicians aren’t experts in internet services, and could simply mismanage them — and they have their own perverse incentive as well: to shift communication technology in ways that advance their political views.

Another way to shift the incentives is to make it hard for social media companies to hire the very best people unless they act in the interests of society at large. There’s already been some success here — as people got more concerned about the negative effects of social media, Facebook had to raise salaries for new hires to attract the talent they wanted.

But Tristan asks us to consider what would happen if everyone who’s offered a role by Facebook didn’t just refuse to take the job, but instead took the interview in order to ask them directly, “what are you doing to fix your core business model?”

Engineers can ‘vote with their feet’, refusing to build services that don’t put the interests of users front and centre. Tristan says that if governments are unable, unwilling, or too untrustworthy to set up healthy incentives, we might need a makeshift solution like this.

Despite all the negatives, Tristan doesn’t want us to abandon the technologies he’s concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.

Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we’ve ever had — tools that could educate and organise people better than anything that has come before.

The tricky and open question is how to get there — Rob and Tristan agree that a lot more needs to be done to develop a reform agenda that has some chance of actually happening, and that generates as few unforeseen downsides as possible. Rob and Tristan also discuss:

  • Justified concerns vs. moral panics
  • The effect of social media on US politics
  • Facebook’s influence on developing countries
  • Win-win policy proposals
  • Big wins over the last 5 or 10 years
  • Tips for individuals
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

Continue reading →

Benjamin Todd on what the effective altruism community most needs (80k team chat #4)

In the last ’80k team chat’ with Ben Todd and Arden Koehler, we discussed what effective altruism is and isn’t, and how to argue for it. In this episode we turn now to what the effective altruism community most needs.

According to Ben, we can think of the effective altruism movement as having gone through several stages, each defined by the kind of resource that has been most able to unlock progress on important issues (i.e. by what the ‘bottleneck’ is). Plausibly, these stages are common to other social movements as well.

  • Needing money: In the first stage, when effective altruism was just getting going, more money (to do things like pay staff and put on events) was the main bottleneck to making progress.
  • Needing talent: In the second stage, we especially needed more talented people willing to work on whatever seemed most pressing.
  • Needing specific skills and capacity: In the third stage, which Ben thinks we’re in now, the main bottlenecks are organizational capacity, infrastructure, and management to help train people up, as well as specialist skills that people can put to work now.

What’s next? Perhaps needing coordination — the ability to make sure people keep working efficiently and effectively together as the community grows.

The 2020 Effective Altruism Survey just opened. If you’re involved with the effective altruism community, or sympathetic to its ideas, it’s a great thing to fill out.

Ben and I also cover the career implications of those stages, as well as the ability to save money and the possibility that someone else would do your job in your absence.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#87 – Russ Roberts on whether it's more effective to help strangers, or people you know

If you want to make the world a better place, would it be better to help your niece with her SATs, or try to join the State Department to lower the risk that the US and China go to war?

People involved in 80,000 Hours or the effective altruism community would be comfortable recommending the latter. This week’s guest — Russ Roberts, host of the long-running podcast EconTalk, and author of a forthcoming book on decision-making under uncertainty and the limited ability of data to help — worries that might be a mistake.

I’ve been a big fan of Russ’ show EconTalk for 12 years — in fact I have a list of my top 100 recommended episodes — so I invited him to talk about his concerns with how the effective altruism community tries to improve the world.

These include:

  • Being too focused on the measurable
  • Being too confident we’ve figured out ‘the best thing’
  • Being too credulous about the results of social science or medical experiments
  • Undermining people’s altruism by encouraging them to focus on strangers, who it’s naturally harder to care for
  • Thinking it’s possible to predictably help strangers, who you don’t understand well enough to know what will truly help
  • Adding levels of wellbeing across people when this is inappropriate
  • Encouraging people to pursue careers they won’t enjoy

These worries are partly informed by Russ’ ‘classical liberal’ worldview, which involves a preference for free market solutions to problems, and nervousness about the big plans that sometimes come out of consequentialist thinking.

While we do disagree on a range of things — such as whether it’s possible to add up wellbeing across different people, and whether it’s more effective to help strangers than people you know — I make the case that some of these worries are founded on common misunderstandings about effective altruism, or at least misunderstandings of what we believe here at 80,000 Hours.

We primarily care about making the world a better place over thousands or even millions of years — and we wouldn’t dream of claiming that we could accurately measure the effects of our actions on that timescale.

I’m more skeptical of medicine and empirical social science than most people, though not quite as skeptical as Russ (check out this quiz I made where you can guess which academic findings will replicate, and which won’t).

And while I do think that people should occasionally take jobs they dislike in order to have a social impact, those situations seem pretty few and far between.

But Russ and I disagree about how much we really disagree. In addition to all the above we also discuss:

  • How to decide whether to have kids
  • Was the case for deworming children oversold?
  • Whether countries around the world would benefit from being better coordinated

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us

Had World War 1 never happened, you might never have existed.

It’s very unlikely that the exact chain of events that led to your conception would have happened if the war hadn’t — so perhaps you wouldn’t have been born.

Would that mean that it’s better for you that World War 1 happened (regardless of whether it was better for the world overall)?

On the one hand, if you’re living a pretty good life, you might think the answer is yes – you get to live rather than not.

On the other hand, it sounds strange to say that it’s better for you to be alive, because if you’d never existed there’d be no you to be worse off. But if you wouldn’t be worse off if you hadn’t existed, can you be better off because you do?

In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e. that it can’t be better for someone to exist than not to exist.

Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn’t better for them, and thus, perhaps, that it’s not better at all.

This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn’t otherwise have existed) — which would affect how we try to make the world a better place.

Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out we would have no particular reason to be concerned.

Furthermore it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.

This is our second episode with Professor Greaves. The first one was a big hit, so we thought we’d come back and dive into even more complex ethical issues.

We also discuss:

  • The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible
  • What it means for us to be ‘clueless’ about the consequences of our actions
  • Moral uncertainty — what we should do when we don’t know which moral theory is correct
  • Whether we should take a bet on a really small probability of a really great outcome
  • The field of global priorities research at the Global Priorities Institute and beyond

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Benjamin Todd on the core of effective altruism and how to argue for it (80k team chat #3)

Today’s episode is the latest conversation between Arden Koehler, and our CEO, Ben Todd.

Ben’s been thinking a lot about effective altruism recently, including what it really is, how it’s framed, and how people misunderstand it.

We recently released an article on misconceptions about effective altruism – based on Will MacAskill’s recent paper The Definition of Effective Altruism – and this episode can act as a companion piece.

Arden and Ben cover a bunch of topics related to effective altruism:

  • How it isn’t just about donating money to fight poverty
  • Whether it includes a moral obligation to give
  • The rigorous argument for its importance
  • Objections to that argument
  • How to talk about effective altruism for people who aren’t already familiar with it

Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong (80k team chat #2)

Today’s bonus episode is a conversation between Arden Koehler, and our CEO, Ben Todd.

Ben’s been doing a bunch of research recently, and we thought it’d be interesting to hear about how he’s currently thinking about a couple of different topics – including different types of longtermism, and things 80,000 Hours might be getting wrong.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show here.

This is very off-the-cuff compared to our regular episodes, and just 54 minutes long.

In the first half, Arden and Ben talk about varieties of longtermism:

  • Patient longtermism
  • Broad urgent longtermism
  • Targeted urgent longtermism focused on existential risks
  • Targeted urgent longtermism focused on other trajectory changes
  • And their distinctive implications for people trying to do good with their careers.

In the second half, they move on to:

  • How to trade-off transferable versus specialist career capital
  • How much weight to put on personal fit
  • Whether we might be highlighting the wrong problems and career paths.

Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#85 – Mark Lynas on climate change, societal collapse & nuclear energy

A golf-ball-sized lump of uranium can deliver more than enough energy to cover all your lifetime energy use. To get the same energy from coal, you’d need 3,200 tonnes of the stuff — a mass equivalent to 800 adult elephants, which would go on to produce more than 11,000 tonnes of CO2. That’s about 11,000 tonnes more than the uranium.
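
As a rough sanity check on those figures, here’s a back-of-the-envelope sketch. Every number in it is an approximation chosen for illustration (including the assumption that the uranium is fully fissioned, which a conventional once-through reactor doesn’t achieve), not Mark’s own working:

```python
# Rough sanity check of the energy-density comparison above. All figures are
# approximate assumptions for illustration only.
from math import pi

golf_ball_volume_cm3 = (4 / 3) * pi * (4.27 / 2) ** 3   # ~41 cm^3 (4.27 cm diameter)
uranium_mass_kg = golf_ball_volume_cm3 * 19 / 1000      # density ~19 g/cm^3 -> ~0.78 kg

fission_energy_j_per_kg = 80e12      # ~80 TJ/kg if the uranium is fully fissioned
coal_energy_j_per_kg = 24e6          # ~24 MJ/kg for typical hard coal
co2_tonnes_per_tonne_coal = 2.9      # rough emissions factor

uranium_energy_j = uranium_mass_kg * fission_energy_j_per_kg   # ~62 TJ
lifetime_use_j = 300e9 * 80          # ~300 GJ/yr of primary energy for ~80 years

coal_tonnes = uranium_energy_j / coal_energy_j_per_kg / 1000
print(f"uranium energy ~ {uranium_energy_j/1e12:.0f} TJ vs lifetime use ~ {lifetime_use_j/1e12:.0f} TJ")
print(f"equivalent coal ~ {coal_tonnes:,.0f} tonnes, "
      f"emitting ~ {coal_tonnes * co2_tonnes_per_tonne_coal:,.0f} tonnes CO2")
# Comes out around 2,600 tonnes of coal and 7,500 tonnes of CO2, which is the
# same ballpark as the figures quoted above.
```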

Many people aren’t comfortable with the danger posed by nuclear power. But given the climatic stakes, it’s worth asking: Just how much more dangerous is it compared to fossil fuels?

According to today’s guest, Mark Lynas — author of Six Degrees: Our Future on a Hotter Planet (winner of the prestigious Royal Society Prize for Science Books) and Nuclear 2.0 — it’s actually much, much safer.

Climatologists James Hansen and Pushker Kharecha calculated that the use of nuclear power between 1971 and 2009 avoided the premature deaths of 1.84 million people by preventing air pollution from burning coal.

What about radiation or nuclear disasters? According to Our World In Data, in generating a given amount of electricity, nuclear, wind, and solar all cause about the same number of deaths — and it’s a tiny number.

So what’s going on? Why isn’t everyone demanding a massive scale-up of nuclear energy to save lives and stop climate change? Mark and many other activists believe that unchecked climate change will result in the collapse of human civilization, so the stakes could not be higher.

Mark says that many environmentalists — including him — simply grew up with anti-nuclear attitudes all around them (possibly stemming from a widespread conflation of nuclear weapons and nuclear energy) and haven’t thought to question them.

But he thinks that once you believe in a climate emergency, you have to rethink your opposition to nuclear energy.

At 80,000 Hours we haven’t analysed the merits and flaws of the case for nuclear energy — especially compared to wind and solar paired with gas, hydro, or battery power to handle intermittency — but Mark is convinced.

He says it comes down to physics: Nuclear power is just so much denser.

We need to find an energy source that provides carbon-free power to ~10 billion people, and we need to do it while humanity is doubling or tripling its energy demand (or more).

How do you do that without destroying the world’s ecology? Mark thinks that nuclear is the only way:

“Coal is a brilliant way to run industry and to generate power, apart from a few million dead every year from particulate pollution, and small things like that.

But uranium is something like a million times more energy dense than hydrocarbons, so you can power whole countries with a few tons of the stuff, and the material flows and the waste flows are simply trivial in comparison, and raise no significant environmental challenges — or, indeed, engineering challenges.

It’s just doable, and it isn’t doable with any other approach that you can imagine.

Renewables are not energy dense, so you have to cover immense areas of land to capture enough solar power through photovoltaic technology to even go a small distance towards addressing our current energy consumption with solar. And wind likewise.”

How much land? In Nuclear 2.0 Mark says that if you wanted to reach the ambitious Greenpeace scenario for 2030 of wind power generating 22 percent of global electricity and solar power generating 17 percent, wind farms would cover about 1 million square kilometers. That’s about as much as Texas and New Mexico combined. Solar power plants would cover another ~50,000 square kilometers.
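
That comparison is easy to check, at least roughly (the state areas below are approximate):

```python
# Quick check of the land-area comparison (state areas are approximate).
texas_km2 = 696_000
new_mexico_km2 = 315_000
print(f"{texas_km2 + new_mexico_km2:,} km^2")   # ~1,011,000 km^2, about the 1 million quoted
```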

For Mark, the only argument against nuclear power is a political one — that people won’t want or accept it.

He says that he knows people in all kinds of mainstream environmental groups — such as Greenpeace — who agree that nuclear must be a vital part of any plan to solve climate change. But, because they think they’ll be ostracized if they speak up, they keep their mouths shut.

Mark thinks this willingness to indulge beliefs that contradict scientific evidence stands in the way of actually addressing climate change, and so he’s aiming to build a movement of folks who are out and proud about their support for nuclear energy.

This is just one topic of many in today’s interview. Arden, Rob, and Mark also discuss:

  • At what degrees of warming societal collapse becomes likely
  • Whether climate change could lead to human extinction
  • What environmentalists are getting wrong about climate change
  • Why political and grassroots activism is important for fighting climate change
  • The most worrying climatic feedback loops
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#84 – Shruti Rajagopalan on what India did to stop COVID-19 and how well it worked

When COVID-19 hit the US, everyone was told to save hand sanitizer for healthcare professionals and just wash their hands instead. But in India, many homes lack reliable piped water, so India had to do the opposite: distribute hand sanitizer as widely as possible.

American advocates for banning single-use plastic straws might be outraged at the widespread adoption of single-use hand sanitizer sachets in India. But the US and India are very different places, and it might be the only way out when you’re facing a pandemic without running water.

According to today’s guest, Shruti Rajagopalan, Senior Research Fellow at the Mercatus Center at George Mason University, context is key to policy. Back in April this prompted Shruti to propose a suite of policy responses designed for India specifically.

Unfortunately she also thinks it’s surprisingly hard to know what one should and shouldn’t imitate from overseas.

For instance, some places in India installed shared handwashing stations in bus stops and train stations, which is something no developed country would recommend. But in India, you can’t necessarily wash your hands at home — so shared faucets might be the lesser of two evils. (Though note scientists now regard hand hygiene as less central to controlling COVID-19.)

Stay-at-home orders present a more serious example. Developing countries find themselves in a serious bind that rich countries do not.

With nearly no slack in healthcare capacity, India lacks equipment to treat even a small number of COVID-19 patients. That suggests strict controls on movement and economic activity might be necessary to control the pandemic.

But many people in India and elsewhere can’t afford to shelter in place for weeks, let alone months. And governments in poor countries may not have the resources to send everyone money for months — even where they have the infrastructure to do so fast enough.

India did ultimately impose strict lockdowns, lasting almost 70 days, but the human toll has been larger than in rich countries, with a vast number of migrant workers stranded far from home with limited, if any, income support.

There were no trains or buses, and the government made no provision to deal with the situation. Unable to afford rent where they were, many people had to walk hundreds of kilometers to reach home, often carrying their kids and life’s belongings.

But in other ways the context of developing countries is more promising. In the US many people melted down when asked to wear facemasks. But in South Asia, people just wore them.

Shruti isn’t sure if that’s because of existing challenges with high pollution, past experiences with pandemics, or because intergenerational living makes the wellbeing of the elderly more salient, but the end result is that masks weren’t politicised the way they were in the US.

In addition, despite the suffering caused by India’s policy response to COVID-19, public support for the measures and the government remains high — and India’s population is much younger and so less affected by the virus.

In this episode, Howie and Shruti explore the unique policy challenges facing India in its battle with COVID-19, what they’ve tried to do, and how it has performed.

They also cover:

  • What an economist can bring to the table when studying pandemics
  • The mystery of India’s surprisingly low mortality rate
  • India’s strict lockdown, and the public’s reaction
  • Policies that should be implemented today
  • What makes a good constitution
  • Emergent Ventures

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#83 – Jennifer Doleac on ways to prevent crime other than police and prisons

The killing of George Floyd has prompted a great deal of debate over whether the US should shrink its police departments. The research literature suggests that the presence of police officers does reduce crime, though they’re not cheap, and, as is increasingly recognised, they impose substantial harms on the populations they are meant to be protecting, especially communities of colour.

So maybe we ought to shift our focus to unconventional but effective approaches to crime prevention — approaches that would shrink the need for police or prisons and the human toll they bring with them.

Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three alternative ways to effectively prevent crime: better street lighting, cognitive behavioral therapy, and lead abatement.

One of Jennifer’s papers used the switch into and out of daylight saving time as a ‘natural experiment’ to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double.

The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You’re just more likely to get caught.

You might think: “Well, people will just commit crime in the morning instead”. But it looks like criminals aren’t early risers, and that doesn’t happen.

(Incidentally, a different study used the daylight saving time discontinuity to quantify racial bias in police traffic stops.)
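
To make the design concrete, here’s a minimal sketch of the comparison in pandas. The dataset, file name, and dates are all hypothetical; this is not Jennifer’s actual analysis code:

```python
# A minimal sketch of the daylight saving comparison, using a hypothetical
# robberies dataset.
import pandas as pd

df = pd.read_csv("robberies.csv", parse_dates=["timestamp"])   # hypothetical file
df["hour"] = df["timestamp"].dt.hour

# Look only at the few weeks around the 2015 spring transition (8 March),
# when the mix of people and places is essentially unchanged and only the
# light level in the early evening shifts.
window = df[(df["timestamp"] >= "2015-02-15") & (df["timestamp"] < "2015-03-29")]
window = window.assign(after_switch=window["timestamp"] >= pd.Timestamp("2015-03-08"))

evening = window[window["hour"] == 18]        # the hour that flips from dark to light
daily = evening.groupby(["after_switch", evening["timestamp"].dt.date]).size()
print(daily.groupby(level="after_switch").mean())   # average robberies in that hour, before vs after
```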

While we can’t keep the sun out all day, just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone.

On her unusually rigorous podcast Probable Causation, Jennifer interviewed Aaron Chalfin, who studied what happened when very bright streetlights were randomly added to some public housing complexes but not others. His team found the lights reduced outside night-time crime by a massive 36%, even after taking account of possible displacement to other locations.

The second approach is cognitive behavioral therapy (CBT), in which you’re taught to slow down your decision-making and think through your assumptions before acting.

One randomised controlled trial looked at schools and juvenile detention facilities in Chicago, and compared kids randomly assigned to receive CBT with those who weren’t. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%.

Jennifer says the program isn’t that expensive, and its benefits are massive. Everyone would probably benefit from being able to talk through their problems and figure out why they make the decisions they do, but it might be especially helpful for people who’ve grown up with the trauma of violence in their lives.

A somewhat similar study of one-day ‘procedural justice’ training sessions for police officers in Chicago found they reduced civilian complaints against police by 10%.

Finally, Jennifer thinks that reducing lead levels might be the best buy of all in crime prevention.

There is really compelling evidence that lead not only increases crime, but also dramatically reduces educational outcomes.

In the US and other countries, there’s been a lengthy and mysterious drop in crime since the mid-1990s, leaving crime rates at just 25-50% of what they were in 1993.

That drop coincided with gasoline being deleaded. Before that, exhaust from cars would spread lead all over the place. While there’s no conclusive evidence that this huge drop in crime was due to kids growing up in a less polluted environment, there is compelling evidence that lead exposure does increase crime.

While average lead levels are much lower nowadays, some places still have shockingly high levels. Famously, Flint, Michigan still has major problems with lead in its water, but it’s far from the worst.

Jennifer believes that lead affects people’s brains in such a negative way that driving exposure down even further would be extremely cost-effective for its crime-reduction benefits alone, even setting aside broader benefits to people’s health.

In today’s conversation, Rob and Jennifer also cover, among many other things:

  • Misconduct, hiring practices and accountability among US police
  • Procedural justice training
  • Overrated policy ideas
  • Policies to try to reduce racial discrimination
  • The effects of DNA databases
  • Diversity in economics
  • The quality of social science research

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#82 – James Forman Jr on reducing the cruelty of the US criminal legal system

No democracy has ever incarcerated as many people as the United States. To get its incarceration rate down to the global average, the US would have to release 3 in 4 people in its prisons today.

The effects on Black Americans have been especially severe — Black people make up 12% of the US population but 33% of its prison population. In the early 2000s when incarceration reached its peak, the US government estimated that 32% of Black boys would go to prison at some point in their lives, 5.5 times the figure for whites.

Contrary to popular understanding, nonviolent drug offenses account for less than one fifth of the incarcerated population. The only way to get the US incarceration rate near the global average will be to shorten prison sentences for so-called ‘violent criminals’ — a politically toxic idea. But could we change that?

According to today’s guest, Professor James Forman Jr — a former public defender in Washington DC, Pulitzer Prize-winning author of Locking Up Our Own: Crime and Punishment in Black America, and now a professor at Yale Law School — there are two things we have to do to make that happen.

First, he thinks we should lose the term ‘violent offender’, and maybe even ‘violent crime’. When you say ‘violent crime’, most people immediately think of murder and rape — but those are only a small fraction of the crimes the law deems violent.

In reality, the crime that puts the most people in prison in the US is robbery. And the law says that robbery is a violent crime whether a weapon is involved or not. By moving away from the catch-all category of ‘violent criminals’ we can judge the risk posed by individual people more sensibly.

Second, he thinks we should embrace the restorative justice movement. Instead of asking “What was the law? Who broke it? What should the punishment be?”, restorative justice asks “Who was harmed? Who harmed them? And what can we as a society, including the person who committed the harm, do to try to remedy that harm?”

Instead of being narrowly focused on how many years people should spend in prison for the purpose of retribution, it starts a different conversation.

You might think this apparently softer approach would be unsatisfying to victims of crime. But Forman has discovered that a lot of victims of crime find that the current system doesn’t help them in any meaningful way. What they want to know above all else is: why did this happen to me?

The best way to find that out is to actually talk to the person who harmed them, and in doing so gain a better understanding of the underlying factors behind the crime. The restorative justice approach facilitates these conversations in a way the current system doesn’t, and can include restitution, apologies, and face-to-face reconciliation.

The city of Washington DC has demonstrated another way to reduce the number of people incarcerated for violent crimes. They recently passed a law that gives anyone sentenced to more than 15 years in prison the right to return to court after those 15 years, show a judge all of the positive ways they’ve changed, and petition for a new sentence.

They’ve also moved aggressively in a direction of bringing in restorative justice, with a focus on juvenile courts.

So, although the road is hard, James does see examples of jurisdictions really trying to tackle the core of the problem of mass incarceration.

That’s just one topic of many covered in today’s episode, with much of the conversation focusing on Forman’s 2017 book Locking Up Our Own — an examination of the historical origins of contemporary criminal legal practices in the US, and his experience setting up a charter school for at-risk youth in DC.

Rob and James also discuss:

  • The biggest problems in policing and the criminal legal system today
  • How racism shaped the US criminal legal system
  • How Black America viewed policing through the 20th century
  • How class divisions fostered a ‘tough on crime’ approach
  • Important recent successes
  • How you can have a positive impact as a public prosecutor

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#81 – Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances.

Nick Bostrom wrote the most fleshed-out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there’s very little existing writing on existential accidents. Some more recent AI risk arguments do seem plausible to Ben, but they’re fragile and difficult to evaluate since they haven’t yet been expounded at length.

There have also been very few skeptical experts who have actually sat down and fully engaged with these arguments, writing down point by point where they disagree or where they think the mistakes are. As a result, Ben has probably scrutinised the classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power, general intelligence, and goals, as well as on toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it’s really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them.

But Ben points out that it’s also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can’t specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance.

He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things:

  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#80 – Professor Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the ‘standard model’ of AI development, in which intelligence is measured as the ability to achieve some definite, completely known objective that we’ve stated explicitly. This is so obvious it almost doesn’t even seem like a design choice, but it is.

Unfortunately there’s a big problem with this approach: it’s incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we’ve asked it to. That’s true even if the goal isn’t what we really want, or the methods it’s choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube’s recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn’t something we wanted, but it helped achieve the algorithm’s objective: maximise viewing time.

Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we’ve asked for.

This ‘alignment’ problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we’re ever to hand over much of the economy to thinking machines, we can’t count on ourselves correctly saying exactly what we want the AI to do every time.

Stuart isn’t just dissatisfied with the current model, though; he has a specific solution. According to him, we need to redesign AI around three principles:

  1. The AI system’s objective is to achieve what humans want.
  2. But the system isn’t sure what we want.
  3. And it figures out what we want by observing our behaviour.

Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI.

For instance, a machine built on these principles would be happy to be turned off if that’s what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, “you can’t fetch the coffee if you’re dead.”
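
Here’s a toy numerical illustration of that point (my own sketch, not code from Human Compatible): if the machine is genuinely uncertain about how much we value an action, letting us switch it off is never worse in expectation than acting unilaterally, assuming we only hit the switch when the action really would be bad.

```python
# Toy illustration (not from the book): the machine is unsure of the true value u
# of taking some action. It can act unilaterally, or defer, i.e. propose the action
# and let the human switch it off whenever u turns out to be negative.
import random

random.seed(0)
beliefs = [random.uniform(-1, 1) for _ in range(100_000)]   # machine's belief over u

act_unilaterally = sum(beliefs) / len(beliefs)                    # E[u]
defer_to_human = sum(max(u, 0) for u in beliefs) / len(beliefs)   # E[max(u, 0)]

print(f"act unilaterally: {act_unilaterally:+.3f}")
print(f"defer to the off switch: {defer_to_human:+.3f}")
# Deferring is never worse in expectation (E[max(u, 0)] >= E[u]), so a machine
# that's genuinely uncertain about what we want has no incentive to disable
# its own off switch.
```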

These principles lend themselves towards machines that are modest and cautious, and check in when they aren’t confident they’re truly achieving what we want.

We’ve made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we’ve rejected an option because we’ve considered it and decided it’s a bad idea, and when we simply haven’t thought about it at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political.

When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? How considerate of other people’s interests do we expect AIs to be? How do we avoid them being used in malicious or anti-social ways?

And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Despite all these problems, the rewards of success could be enormous. If cheap thinking machines can one day do most of the work people do now, it could dramatically raise everyone’s standard of living, like a second industrial revolution.

Without having to work just to survive, people might flourish in ways they never have before.

In today’s conversation we cover, among many other things:

  • What are the arguments against being concerned about AI?
  • Should we develop AIs to have their own ethical agenda?
  • What are the most urgent research questions in this area?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, “You know what, she’s not so bad”.

Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history.

He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His next book will ask: if we reframe global problems as puzzles, would the world be a better place?

This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at a clever blog post that changes styles each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I suspect I find myself more entertaining than almost anyone else will. (Radical Honesty.)

We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.)

Another reason to listen is for the facts:

  • The Bayer aspirin company invented heroin as a cough suppressant
  • Coriander is just the British way of saying cilantro
  • Dogs have a third eyelid to protect the eyeball from irritants
  • A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.)

One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.)

I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; Rob and the rest of the 80,000 Hours team for their help; the thousands of people who’ll listen to this; my fiancée who let me talk about her to those thousands of people; the construction worker who told me how to get to my subway platform on the morning of the interview; Queen Jadwiga for making bagels popular in the 14th century, which kept me going during the recording; and the folks at the New York reservoir whose work allows A.J.’s coffee to be made, without which he’d never have had the energy to talk to me for more than five minutes. (Thanks a Thousand.)

We also discuss:

  • The most extreme ideas A.J.’s ever considered
  • Respecting your older self
  • Blackmailing yourself
  • The experience of having his book made into a CBS sitcom
  • Talking to friends and family about effective altruism
  • Utilitarian movie reviews
  • The value of fiction focused on the long-term future
  • Doing good as a journalist
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#78 – Danny Hernandez on forecasting and measuring some of the most important drivers of AI progress

Companies use about 300,000 times more computation to train the best AI systems today than they did in 2012, and algorithmic innovations have also made them 25 times more efficient at the same tasks.

These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today’s episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy.

Danny and I talk about how to understand his team’s results and what they mean (and don’t mean) for how we should think about progress in AI going forward.

Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field.
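
For a rough sense of what the headline compute figure implies, here’s a back-of-the-envelope sketch in Python. The 2012 to 2018 window and the exact growth factor are simplifying assumptions for illustration, not the papers’ precise data or methodology.

```python
import math

# Illustrative back-of-the-envelope calculation (not OpenAI's exact data or method):
# if training compute for the largest AI runs grew ~300,000x between 2012 and 2018,
# what doubling time does that imply?
growth_factor = 300_000          # assumed headline increase in training compute
months = (2018 - 2012) * 12      # assumed measurement window, in months

doublings = math.log2(growth_factor)   # ~18.2 doublings
doubling_time = months / doublings     # ~4 months per doubling

print(f"{doublings:.1f} doublings over {months} months")
print(f"Implied doubling time: ~{doubling_time:.1f} months")
```

A doubling time on the order of a few months is far faster than the roughly two-year doubling associated with Moore’s law, which is part of why measuring compute as an input matters so much for thinking about AI progress.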

If this research sounds appealing, you might be interested in applying to join OpenAI’s Foresight team — they’re currently hiring research engineers.

In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including:

  • The question of which experts to believe
  • Danny’s journey to working at OpenAI
  • The usefulness of “decision boundaries”
  • The importance of Moore’s law for people who care about the long-term future
  • What OpenAI’s Foresight Team’s findings might imply for policy
  • The question of whether progress in the performance of AI systems is linear
  • The safety teams at OpenAI and who they’re looking to hire
  • One idea for finding someone to guide your learning
  • The importance of hardware expertise for making a positive impact

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#77 – Marc Lipsitch on whether we're winning or losing against COVID-19

In March, Professor Marc Lipsitch — director of Harvard’s Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, with his social media following growing 40-fold and journalists knocking down his door, as everyone turned to him for information they could trust.

Here he lays out where the fight against COVID-19 stands today, why he’s open to deliberately giving people COVID-19 to speed up vaccine development, and how we could do better next time.

As Marc tells us, island nations like Taiwan and New Zealand are successfully suppressing SARS-CoV-2. But everyone else is struggling.

Even Singapore, with plenty of warning and one of the best test and trace systems in the world, lost control of the virus in mid-April after successfully holding back the tide for 2 months.

This doesn’t bode well for how the US or Europe will cope as they ease their lockdowns. It also suggests it would have been exceedingly hard for China to stop the virus before it spread overseas.

But sadly, there’s no easy way out.

The original estimates of COVID-19’s infection fatality rate, of 0.5-1%, have turned out to be basically right. And the latest serology surveys indicate only 5-10% of people in countries like the US, UK and Spain have been infected so far, leaving us far short of the roughly 60-70% thought to be needed for herd immunity. To get there, even these worst-affected countries would need to endure something like ten times the number of deaths they have seen so far.
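
To see roughly where that "ten times" figure comes from, here’s a minimal sketch of the arithmetic, assuming a herd immunity threshold of around 60-70% (i.e. 1 - 1/R0 for an R0 of roughly 2-3) and deaths scaling in proportion to infections. The exact figures are illustrative assumptions, not Marc’s estimates.

```python
# Minimal sketch of the herd immunity arithmetic, using illustrative assumptions.
already_infected = 0.07          # ~5-10% seroprevalence so far (rough midpoint)
herd_immunity_threshold = 0.65   # ~60-70%, i.e. 1 - 1/R0 for R0 of roughly 2-3

# If the infection fatality rate stays roughly constant, deaths scale with infections.
multiple = herd_immunity_threshold / already_infected
print(f"Reaching herd immunity through infection implies roughly {multiple:.0f}x "
      f"the infections, and hence deaths, seen so far")
```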

Marc has one good piece of news: research suggests that most of those who get infected do indeed develop immunity, for a while at least.

To escape the COVID-19 trap sooner rather than later, Marc recommends we go hard on all the familiar options — vaccines, antivirals, and mass testing — but also open our minds to creative options we’ve so far left on the shelf.

Despite the importance of his work, even now the training and grant programs that produced the community of experts Marc is a part of are shrinking. We look at a new article he’s written about how to instead build and improve the field of epidemiology, so humanity can respond faster and smarter next time we face a disease that could kill millions and cost tens of trillions of dollars.

We also cover:

  • How listeners might contribute as future contagious disease experts, or donors to current projects
  • How we can learn from cross-country comparisons
  • Modelling that has gone wrong in an instructive way
  • What governments should stop doing
  • How people can figure out who to trust, and who has been most on the mark this time
  • Why Marc supports infecting people with COVID-19 to speed up the development of a vaccine
  • How we can ensure there’s population-level surveillance early during the next pandemic
  • Whether people from other fields trying to help with COVID-19 have done more good than harm
  • Whether it’s experts in diseases, or experts in forecasting, who produce better disease forecasts

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#76 – Tara Kirk Sell on COVID-19 misinformation, who's over and under-performed, and what we can reopen first

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we’re especially thankful for is the Johns Hopkins Center for Health Security (CHS).

CHS focuses on protecting us from major biological, chemical or nuclear disasters, through research that informs governments around the world. While this pandemic surprised many, just last October the Center ran a simulation of a ‘new coronavirus’ scenario to identify weaknesses in our ability to quickly respond. Their expertise has given them a key role in figuring out how to fight COVID-19.

Today’s guest, Dr Tara Kirk Sell, did her PhD in policy and communication during disease outbreaks, and has worked at CHS for 11 years on a range of important projects.

Last year she was a leader on Collective Intelligence for Disease Prediction, designed to sound the alarm about upcoming pandemics before others are paying attention. Incredibly, the project almost closed in December, with COVID-19 just starting to spread around the world — but received new funding that allowed the team to respond quickly to the emerging disease.

She also contributed to a recent report attempting to explain the risks of specific types of activities resuming when COVID-19 lockdowns end.

It’s not possible to reach zero risk — so differentiating activities on a spectrum is crucial. Choosing wisely can help us lead more normal lives without reviving the pandemic.

Dance clubs will have to stay closed, but hairdressers can adapt to minimise transmission, and Tara (who happens to also be an Olympic silver medalist swimmer) suggests outdoor non-contact sports could resume soon at little risk.

Her latest work deals with the challenge of misinformation during disease outbreaks.

Analysing the Ebola communication crisis of 2014, she and her colleagues found that even trained coders with public health expertise sometimes needed help to distinguish between true and misleading tweets — showing the danger of a continued lack of definitive information surrounding a virus and how it’s transmitted.

The challenge for governments is not simple. If they acknowledge how much they don’t know, people may look elsewhere for guidance. But if they pretend to know things they don’t, or actively mislead the public, the result can be a huge loss of trust.

Despite their intense focus on COVID-19, researchers at the Center for Health Security know that this is not a one-time event. Many aspects of our collective response this time around have been alarmingly poor, and it won’t be long before Tara and her colleagues need to turn their mind to next time.

You can now donate to CHS through Effective Altruism Funds. Donations made through EA Funds are tax-deductible in the US, the UK, and the Netherlands.

Tara and Rob also discuss:

  • Who has overperformed and underperformed expectations during COVID-19?
  • When are people right to mistrust authorities?
  • The media’s responsibility to be right
  • What policies should be prioritised for next time
  • Should we prepare for future pandemics while COVID-19 is still going on?
  • The importance of keeping non-COVID health problems in mind
  • The psychological difference between staying home voluntarily and being forced to
  • Mistakes that we in the general public might be making
  • Emerging technologies with the potential to reduce global catastrophic biological risks

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on the most plausible paths for them, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths.

I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford’s Global Priorities Institute, and these days I’m 80,000 Hours’ Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers.

We all thought it would be useful to discuss some of those on the show for others to hear. Among other topics we cover:

  • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you’re already accomplishing.
  • Why traditional careers advice involves thinking through what types of roles you enjoy, followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in.
  • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it’s wise to broaden your options by applying for both stretch and backup roles, and not over-emphasising a small number of organisations.
  • Our suggested process for writing a longer-term career plan: 1. shortlist your best medium- to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties.
  • Why many listeners aren’t spending enough time finding out about what the day-to-day work is like in paths they’re considering, or reaching out to people for advice or opportunities.

I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it.

If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people:

  1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address.
  2. Who don’t yet have close connections with people working at effective altruist organisations.
  3. Who aren’t strongly locationally constrained.

If you’re unsure, it doesn’t take long to apply and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds.

Want to talk to one of our advisors?

We speak to hundreds of people each year and can offer introductions and answer specific questions you might have. You can join the waitlist here:

Request a career advising session

Also in this episode:

  • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with.
  • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path.
  • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob.

It continues to be awe-inspiring to me how many people I talk to are donating to save lives, making dietary changes to avoid intolerable suffering, and carefully planning their lives to improve the future trajectory of the world. I hope we can continue to support each other in doing those things, and appreciate how important all this work is.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →