#99 – Leah Garcés on turning adversaries into allies to change the chicken industry

I think it’s about the story we tell as well, and leveraging that narrative, and changing the narrative about who animal rights folks are. That we’re not just these adversarial people coming to take jobs away from rural America and choices away from meat eaters. We’re actually building something better that benefits everyone.

Leah Garcés

For a chance to prevent enormous amounts of suffering, would you be brave enough to drive five hours to a remote location to meet a man who seems likely to be your enemy, knowing that it might be an ambush?

Today’s guest — Leah Garcés — was.

That man was a chicken farmer named Craig Watts, and that ambush never happened. Instead, Leah and Craig forged a friendship and a partnership focused on reducing suffering on factory farms.

Leah, now president of Mercy For Animals (MFA), tried for years to get access to a chicken farm to document the horrors she knew were happening behind closed doors. It made sense that no one would let her in — why would the evil chicken farmers behind these atrocities ever be willing to help her take them down?

But after sitting with Craig on his living room floor for hours and listening to his story, she discovered that he wasn’t evil at all — in fact he was just stuck in a cycle he couldn’t escape, forced to use methods he didn’t endorse.

Most chicken farmers have enormous debts they are constantly struggling to pay off, make very little money, and have to work in terrible conditions — their main activity most days is finding and killing the sick chickens in their flock. Craig was one of very few farmers close to finally paying off his debts, which made him slightly less vulnerable to retaliation. That, combined with his natural tenacity and bravery, opened up the possibility for him to work with Leah.

Craig let Leah openly film inside the chicken houses, and shared highly confidential documents about the antibiotics put into the feed. That led to a viral video and a New York Times story. The villain of that video was Jim Perdue, CEO of one of the biggest meat companies in the world. It shows him saying, “Farmers are happy. Chickens are happy. There’s a lot of space. They’re clean.” And then it shows the grim reality.

For years, Perdue wouldn’t speak to Leah. But remarkably, when they actually met in person, she again managed to forge a meaningful relationship with a natural adversary. She was able to put aside her utter contempt for the chicken industry and see Craig and Jim as people, not cartoonish villains.

Leah believes that you need to be willing to sit down with anyone who has the power to solve a problem that you don’t — recognising them as human beings with a lifetime of complicated decisions behind their actions. And she stresses that finding or making a connection is really important. In the case of Jim Perdue, it was the fact they both had adopted children. Because of this, they were able to forget that they were supposed to be enemies in that moment, talk about their experience as parents, and build some trust.

The other lesson that Leah highlights is that you need to look for win-wins and start there, rather than starting with disagreements. With Craig Watts, instead of opening with “How do I end his job?”, she thought, “How can I find him a better job?” If you find solutions where everybody wins, you don’t need to spend resources fighting the former enemy. They’ll come to you.

Typically animal activists are seen as coming into rural areas to take away jobs and choices — but MFA are trying to do the opposite. They want to create new opportunities, and give farmers a level of freedom they haven’t had since they first set foot on the factory farming debt treadmill.

It turns out that conditions in chicken houses are perfect for growing hemp or mushrooms, so Mercy For Animals have started their ‘Transfarmation Project’ to help farmers like Craig escape from the prison of factory farming by converting their production from animals to plants. To convince farmers to leave behind a life of producing suffering, all you need to do is find them something better — which for many of them is almost anything else.

Leah and Rob also talk about:

  • Mercy For Animals’ overall strategy for ending factory farming sooner rather than later
  • Why conditions for farmers are so bad
  • The importance of building on past work
  • The benefits of creating a ranking and scoring companies against each other
  • Why we should drive up the price of factory farmed meat by any means necessary
  • The difficulty of enforcing corporate pledges
  • Her disagreements with others in the animal welfare movement
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

How much do people differ in productivity? What the evidence says.

People sometimes point out that performance is ‘power law’ distributed, e.g. they’ll point out that the top 10% of scientists get 5x more citations over their career than the other 90% of scientists, or that the top 1% of startup founders get 80% of the equity value.

But is this true? And if so, what does it imply?

I think these differences in performance can be really important, and their significance is often not properly appreciated. But it’s also often oversold.

To better understand how much people predictively differ in productivity, Max Daniel of the Future of Humanity Institute and I did an informal review of the academic research.

We found there’s relevant research in several fields (often pursued independently) including economics, organisational psychology, expert performance, scientometrics, and physics.

We aimed to get an overview of what’s out there and combine it with our own understanding to see if we could draw any practical lessons for hiring managers or people planning their careers.

Here’s a summary of some of the data we found in the review:

Data on the dispersion of staff productivity

And here’s a 10-point summary of what we learned. (See the full write up here, and discussion on the EA Forum.)

1) ‘Power law’ sounds catchy, but identifying which distribution to use is hard to do, statistically.

Distinguishing power laws from log-normal distributions is notoriously difficult,
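This difficulty is easy to demonstrate with a quick simulation (the parameters below are arbitrary, chosen only for illustration). Samples drawn from a log-normal distribution, which is not a power law, still produce a tail that a straight line fits almost perfectly on log-log axes, the visual test by which power laws are often ‘identified’:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from a log-normal distribution -- deliberately NOT a power law.
n = 100_000
samples = np.sort(rng.lognormal(mean=0.0, sigma=1.5, size=n))

# Empirical CCDF: the fraction of samples above each sorted value.
ccdf = 1.0 - np.arange(n) / n

# Keep only the top 5% of samples -- the "tail" where power laws
# are usually claimed.
cut = int(0.95 * n)
log_x = np.log(samples[cut:])
log_y = np.log(ccdf[cut:])

# A true power law would be exactly linear in log-log space.
slope, intercept = np.polyfit(log_x, log_y, 1)
pred = slope * log_x + intercept
r_squared = 1 - ((log_y - pred) ** 2).sum() / ((log_y - log_y.mean()) ** 2).sum()

print(f"fitted 'power law' slope: {slope:.2f}, R^2 of linear fit: {r_squared:.3f}")
```

The linear fit looks excellent even though the data has no power-law tail at all, which is why eyeballing log-log plots isn’t enough and more careful statistical comparisons between candidate distributions are needed.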

Continue reading →

80,000 Hours Annual Review — November 2020

We’ve released our 2020 annual review. The full document is available as a Google Doc, and we’ve copied the summary below.

Progress in 2020

80,000 Hours provides research and support to help people switch into careers that effectively tackle the world’s most pressing problems.

Our goal for 2020 was to continue all our programmes (key ideas and other web content, podcast, job board, advising, and headhunting) with the aim of growing the number of plan changes we cause.

We also aimed to grow team capacity at a moderate rate (+2.5 FTE as well as onboarding Habiba), so that we’re working towards our longer-term vision, but going slowly enough that we can continue to focus on improving our programmes, resolving key uncertainties, and preserving culture.

I thought we made good progress on continued delivery (e.g. released 64% more content with +30% inputs & fixed some gaps in key ideas), though we missed our target for the number of advising calls.

On plan change impact, we tracked 11 net new ‘top plan changes’ and 188 ‘criteria-based plan changes’.

My best guess at the ratio of plan changes to full-time equivalents (FTE) for 2018–2019 went down 20% from what I estimated last year, though my estimate for 2016–2017 went up. I became more confident that 80,000 Hours is useful to the most promising new longtermist EAs. Otherwise, I didn’t make significant updates about our cost effectiveness.

Continue reading →

Planning a high-impact career: a summary of everything you need to know in 7 points

We took ten years of research and what we’ve learned advising 1,000+ people on how to build high-impact careers, compressed that into an 8-week course to create your career plan, and then compressed that into this three-page summary of the main points.

(It’s especially aimed at people who want a career that’s both satisfying and has a significant positive impact, but much of the advice applies to all career decisions.)

1. Use these factors to clarify what a successful career looks like.

You can divide career aims into three categories: (i) personal priorities, (ii) impartial positive impact, and (iii) other moral values. We’d encourage you to make your own definition of each.

We define ‘impartial positive impact’ as what helps the most people live better lives in the long term, treating everyone’s interests as equal.

You can analyse the impact of a career opportunity in terms of:

  1. How pressing the problem is that you’d address
  2. How effective the opportunity is at tackling the problem
  3. Your personal fit with the opportunity, which depends on your abilities and ‘career capital’ (skills, connections, and reputation).

The goal is to maximise the product of these three factors over your career.
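A toy calculation shows why the multiplicative framing matters. The opportunities and 1–10 scores below are invented for illustration, not 80,000 Hours’ actual ratings:

```python
# Toy comparison of two hypothetical career opportunities.
# Each factor is scored on an arbitrary 1-10 scale; the scores
# and opportunity names are invented for illustration only.
opportunities = {
    "policy role on a pressing problem": {
        "problem_pressingness": 9,
        "opportunity_effectiveness": 4,
        "personal_fit": 5,
    },
    "engineering role with great fit": {
        "problem_pressingness": 5,
        "opportunity_effectiveness": 6,
        "personal_fit": 9,
    },
}

def impact_score(factors: dict) -> int:
    # Impact is modelled as the *product* of the three factors,
    # so a low score on any one factor drags the whole estimate down.
    score = 1
    for value in factors.values():
        score *= value
    return score

for name, factors in opportunities.items():
    print(f"{name}: {impact_score(factors)}")
```

Because the factors multiply rather than add, a weak score on any single one caps the total: in this made-up example, the role with excellent personal fit on a moderately pressing problem (5 × 6 × 9 = 270) beats the poor-fit role on a more pressing problem (9 × 4 × 5 = 180).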

Because most people reach their peak productivity between ages 40 and 60, you need your work to be personally satisfying enough to stick with it for the long haul,

Continue reading →

#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism

If you think that there is no fundamental asymmetry between the past and the future, maybe we should be sanguine about the future — including sanguine about our own mortality — in the same way that we’re sanguine about the fact that we haven’t existed forever.

Christian Tarsney

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe and always successful — but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful, you’ll be given a drug that makes you forget the experience.

You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.”

So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour.

Which patient would you rather be?

Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain rather than more pain, but in this case, we prefer ten times more pain — just because the pain would be in the past rather than the future.

Christian Tarsney, a philosopher at Oxford University’s Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences.

That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past?

One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it.

But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn’t care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about!

Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven’t played yet are still on the way.

If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction.

It’s a live debate that’s playing out in the philosophy of time, as well as in physics. And Christian says that even if you could show that time had a direction, it would still be hard to explain why we should care more about the future than the past — at least in a way that doesn’t just sound like “Well, the past is in the past and the future is in the future”.

For Christian, there are two big practical implications of these past, present, and future ethical comparison cases.

The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people’s past goals, including the goals of people who are now dead.

The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born?

Christian and Rob also cover several other big topics, including:

  • A possible solution to moral fanaticism, where you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome
  • How much of humanity’s resources we should spend on improving the long-term future
  • How large the expected value of the continued existence of Earth-originating civilization might be
  • How we should respond to uncertainty about the state of the world
  • The state of global priorities research
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Sofia Davis-Fogel

Continue reading →

#97 – Mike Berkowitz on keeping the U.S. a liberal democratic country

When you have leaders who feel no adherence to norms or customs, [there’s] nothing preventing them from violating them.

Mike Berkowitz

Donald Trump’s attempt to overturn the results of the 2020 election split the Republican party. There were those who went along with it — 147 members of Congress raised objections to the official certification of electoral votes — but there were others who refused. These included Brad Raffensperger and Brian Kemp in Georgia, and Vice President Mike Pence.

Although one could say that the latter Republicans showed great courage, the key to the split may lie less in differences of moral character or commitment to democracy, and more in what was being asked of them. Trump wanted the first group to break norms, but he wanted the second group to break the law.

And while norms were indeed shattered, laws were upheld.

Today’s guest Mike Berkowitz, executive director of the Democracy Funders Network, points out a problem that became clear over the course of the Trump presidency: so many of the things that we thought were laws were actually just customs.

So once you have leaders who don’t buy into those customs — like, say, that a president shouldn’t tell the Department of Justice who it should and shouldn’t be prosecuting — there’s nothing preventing said customs from being violated.

And what happens if current laws change?

A recent Georgia bill took away some of the powers of Georgia’s Secretary of State — Brad Raffensperger. Mike thinks that’s clearly retribution for Raffensperger’s refusal to overturn the 2020 election results. But he also thinks it means that the next time someone tries to overturn the results of the election, they could get much farther than Trump did in 2020.

In this interview Mike covers what he thinks are the three most important levers to push on to preserve liberal democracy in the United States:

  1. Reforming the political system, by e.g. introducing new voting methods
  2. Revitalizing local journalism
  3. Reducing partisan hatred within the United States

Mike says that American democracy, like democracy elsewhere in the world, is not an inevitability. The U.S. has institutions that are really important for the functioning of democracy, but they don’t automatically protect themselves — they need people to stand up and protect them.

In addition to the changes listed above, Mike also thinks that we need to harden more norms into laws, such that individuals have fewer opportunities to undermine the system.

And inasmuch as laws provided the foundation for the likes of Raffensperger, Kemp, and Pence to exhibit political courage, if we can succeed in creating and maintaining the right laws, we may see many others follow their lead.

As Founding Father James Madison put it: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.”

Mike and Rob also talk about:

  • What sorts of terrible scenarios we should actually be worried about, i.e. the difference between being overly alarmist and properly alarmist
  • How to reduce perverse incentives for political actors, including those to overturn election results
  • The best opportunities for donations in this space
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Launching a new resource: ‘Effective Altruism: An Introduction’

Today we’re launching a new podcast feed that might be useful to you or someone you know.

It’s called Effective Altruism: An Introduction, and it’s a carefully chosen selection of ten episodes of The 80,000 Hours Podcast, with various new intros and outros to guide folks through them.

We think that it fills a gap in the introductory resources about effective altruism that are already out there. It’s a particularly good fit for people who:

  • prefer listening over reading, or conversations over essays
  • have read about the big central ideas, but want to see how we actually think and talk
  • want to get a more nuanced understanding of how the community applies EA principles in real life — as an art rather than a science.

The reason we put this together now is that as the number of episodes of The 80,000 Hours Podcast has grown, it has become less and less practical to suggest that new subscribers just ‘go back and listen through most of our archives.’

We hope EA: An Introduction will guide new subscribers to the best things to listen to first in order to quickly make sense of effective altruist thinking.

Across the ten episodes, we discuss:

  • What effective altruism at its core really is
  • The strategies for improving the world that are most popular within the effective altruism community,

Continue reading →

#96 – Nina Schick on disinformation and the rise of synthetic media

Technology is just going to be an amplifier of human intention, this human innate desire…to deceive, to manipulate. The visual medium is a very powerful way of doing that.

Nina Schick

You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.?

Today’s guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry in terms of making a very sophisticated ‘deepfake’ video today are a lot higher than people think.

But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that’s currently only accessible to Hollywood studios. So is it just a matter of time until we’ll be right to be terrified of this stuff?

Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: “Everything exists within this information ecosystem, it encompasses everything.” We haven’t done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as:

  • Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source?
  • If photoshop didn’t lead to total chaos, why should this be any different?

But the grim reality is that if you wrote “I believe that the world will end on April 6, 2022” and pasted it next to a photo of Albert Einstein — a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive.

She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied.

Consider Trump’s infamous Access Hollywood tape. If that happened in 2020 instead of 2016, he would have almost certainly claimed it was fake — and that claim wouldn’t be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say “that video is fake”?

Nina says that undeniably, this technology is going to give bad actors a lot of scope for not having accountability for their actions.

As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can’t agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society?

Nina and Rob also talk about a bunch of other topics, including:

  • The history of disinformation, and groups who sow disinformation professionally
  • How deepfake pornography is used to attack and silence women activists
  • The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes
  • Whether we should make it illegal to make a deepfake of someone without their permission
  • And the coolest positive uses of this technology

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#95 – Kelly Wanser on whether to deliberately intervene in the climate

We have a massive toxic spill into the atmosphere. And the immediate most damaging effect of that is heat energy trapped in the system. And so the question is, do you try to do something to counter that or to abate it?

Kelly Wanser

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian Peninsula, or chemically induced snow for skiers in Colorado.

100 years? 50 years? 20?

Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well.

Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate.

Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have.

Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy.

After considering everything from mirrors in space, to shiny objects on the ocean, to materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere.

Cloud brightening is a climate control approach that uses the spraying of a fine mist of sea water into clouds to make them ‘whiter’ so they reflect even more sunlight back into space.

These ‘streaks’ in clouds are already created by ships because the particulates from their diesel engines inadvertently make clouds a bit brighter.

Kelly says that scientists estimate that we’re already lowering the global temperature this way by 0.5–1.1ºC, without even intending to.

While fossil fuel particulates are terrible for human health, they think we could replicate this effect by simply spraying sea water up into clouds. But so far there hasn’t been funding to measure how much temperature change you get for a given amount of spray.

And we won’t want to dive into these methods headfirst, because the atmosphere is a complex system we can’t yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not — for all the discussion of global warming, the climate is surprisingly understudied.

The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as:

  • It being riskier than doing nothing
  • That it will inevitably be dangerously political
  • And the risk of the ‘double catastrophe’, where a pandemic stops our climate interventions and temperatures skyrocket at the worst time.

Kelly and Rob also talk about:

  • The many climate interventions that are already happening
  • The most promising ideas in the field
  • And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#94 – Ezra Klein on aligning journalism, politics, and what matters most

I don’t think what’s going to happen is you’re going to call people up and be like, “You’re doing your coverage all wrong,” and they’re going to say, “Oh, thank you for telling me my life’s work is garbage.”

Ezra Klein

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs?

When people look back on this era, is the interesting thing going to have been fights over whether or not the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously?

Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there’s little pre-existing infrastructure to push them.

He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know to cover (often these are journalists hired to write specifically about tax policy).

To Ezra (and to us, and to many others) AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there’s very little infrastructure for thinking about it. There isn’t a committee in Congress that primarily deals with AI, and no one has a dedicated AI position in the executive branch of the U.S. Government; nor are big AI think tanks in D.C. producing weekly articles for journalists they know to report on.

On top of this, the status quo always has a psychological advantage. If something was thought important by previous generations, we naturally assume it must be important today as well — think of how students continued learning ancient Greek long after it had ceased to be useful even in most scholarly careers.

All of this generates a strong ‘path dependence’ that can lock the media in to covering less important topics despite having no intention to do so.

According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important.”

One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: “This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.”

Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “…like catnip for readers.”

Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can’t make the audience interested in it, that is your failure — never the audience’s failure.

But is that really true? In today’s episode we explore that claim, as well as:

  • How many hours of news the average person should consume
  • Where the progressive movement is failing to live up to its values
  • Why Ezra thinks ‘price gouging’ is a bad idea
  • Where the FDA has failed on rapid at-home testing for COVID-19
  • Whether we should be more worried about tail-risk scenarios
  • And his biggest critiques of the effective altruism community

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

I’m very, very concerned that North Korea today has an advanced biological weapons program. You don’t need a lot of biological weapons to potentially kill billions of people … Fortunately, while we’re not there yet, the science and the tools that are now available enable the possibility of making bioweapons obsolete.

Andy Weber

COVID-19 has provided a vivid reminder of the damage biological threats can do. But the threat doesn’t come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United States, but developed in large numbers by the Soviet Union, right up until its collapse — have the potential to spread globally and kill just as many as an all-out nuclear war.

For five years, today’s guest, Andy Weber, was the US Assistant Secretary of Defense responsible for biological and other weapons of mass destruction. While people primarily associate the Pentagon with waging wars (including most people within the Pentagon itself), Andy is quick to point out that you don’t have national security if your population remains at grave risk from natural and lab-created diseases.

Andy’s current mission is to spread the word that while bioweapons are terrifying, scientific advances also leave them on the verge of becoming an outdated technology.

He thinks there is an overwhelming case to increase our investment in two new technologies that could dramatically reduce the risk of bioweapons, and end natural pandemics in the process: mass genetic sequencing and mRNA vaccines.

First, advances in mass genetic sequencing technology allow direct, real-time analysis of DNA or RNA fragments collected from all over the human environment. You cast a wide net, and if you start seeing DNA sequences that you don’t recognise spreading through the population — that can set off an alarm.

Andy notes that while the necessary desktop sequencers may be expensive enough that they’re only in hospitals today, they’re rapidly getting smaller, cheaper, and easier to use. In fact DNA sequencing has recently experienced the most dramatic cost decrease of any technology, declining by a factor of 10,000 since 2007. It’s only a matter of time before they’re cheap enough to put in every home.

In the world Andy envisions, each morning before you brush your teeth you also breathe into a tube. Your sequencer can tell you if you have any of 300 known pathogens, while simultaneously scanning for any unknown viruses. It’s hooked up to your WiFi and reports into a public health surveillance system, which can check to see whether any novel DNA sequences are being passed from person to person. New contagious diseases can be detected and investigated within days — long before they run out of control.

The second major breakthrough comes from mRNA vaccines, which are today being used to end the COVID pandemic. The wonder of mRNA vaccines is that they can instruct our cells to make any protein we choose and trigger a protective immune response from the body.

Until now it has taken a long time to invent and test any new vaccine, followed by a laborious process of scaling up the equipment necessary to manufacture it. That gives a new disease or bioweapon months or years to wreak havoc.

But using the sequencing technology above, we can quickly get the genetic codes that correspond to the surface proteins of any new pathogen, and switch them into the mRNA vaccines we’re already making. Inventing a new vaccine would become less like manufacturing a new iPhone and more like printing a new book — you use the same printing press and just change the words.

So long as we maintained enough capacity to manufacture and deliver mRNA vaccines, a whole country could in principle be vaccinated against a new disease in months.

Together these technologies could make advanced bioweapons a threat of the past. And in the process humanity’s oldest and deadliest enemy — contagious disease — could be brought under control like never before.

Andy has always been pretty open and honest, but his retirement last year has allowed him to stop worrying about being seen to speak for the Department of Defense, or for the president of the United States – and so we were also able to get his forthright views on a bunch of other interesting topics, such as:

  • The chances that COVID-19 escaped from a research facility
  • Whether a US president can really truly launch nuclear weapons unilaterally
  • What he thinks should be the top priorities for the Biden administration
  • If Andy was 18 and starting his career over again today, what would his plan be?
  • The time he and colleagues found 600kg of unsecured, highly enriched uranium sitting around in a barely secured facility in Kazakhstan, and eventually transported it to the United States
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

How to identify your personal strengths

Perhaps the most common approach to finding a good career is to identify your personal strengths, and then look for paths that match them.

This article summarises the best advice I’ve found on how to identify your strengths, turned into a three-step process. It also includes lists of personal strengths that are most commonly used by researchers (to give you a language to describe your own) and a case study.

But first, I wanted to give a warning that I think the ‘match with strengths’ approach to choosing a career seems a little overrated.

Perhaps the biggest risk is limiting yourself based on your current strengths, and ignoring your potential to develop new, more potent strengths. This risk is most pressing for younger people, who don’t yet have much data on what they’re good at – making them more likely to guess incorrectly – and have decades ahead of them to develop new strengths.

You should ask both ‘what are my strengths?’ and also ‘which strengths are worth building?’

More broadly, I’ve argued that it’s often better to take the reverse approach to match with strengths: ask what the world most needs and then figure out how you might best help with that. This orientation helps you to focus on developing skills that are both valued in the market and that can be used to solve important global problems, which is key to finding a career that’s both meaningful and personally rewarding.

Continue reading →

#92 – Brian Christian on the alignment problem

It’s funny, if you track a lot of the nay-saying that existed circa 2017 or 2018 around AGI, a lot of people would be like, “Well, call me when AI can do that. Call me when AI can tell me what the word ‘it’ means in such and such a sentence.” And then it’s like, “Okay, well we’re there, so, can we call you now?”

Brian Christian

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science.

Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer.

Brian has so much of substance to say that this episode will likely be of interest to people who know a lot about AI as well as those who know a little, and of interest to people who are nervous about where AI is going as well as those who aren’t nervous at all.

Here’s a tease of 10 Hollywood-worthy stories from the episode:

  • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience.
  • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early nineties, using a computer with a tenth the processing capacity of an Apple Watch.
  • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen.
  • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net.
  • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die.
  • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible than reaching its purported destination.
  • Montezuma’s Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple.
  • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies.
  • AlphaGo Zero: A computer program becomes superhuman at Chess and Go in under a day by attempting to imitate itself.
  • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of 2 random actions looks more like a backflip.

We also cover:

  • How reinforcement learning actually works, and some of its key achievements and failures
  • How a lack of curiosity can cause AIs to fail to be able to do basic things
  • The pitfalls of getting AI to imitate how we ourselves behave
  • The benefits of getting AI to infer what we must be trying to achieve
  • Why it’s good for agents to be uncertain about what they’re doing
  • Why Brian isn’t that worried about explicit deception
  • The interviewees Brian most agrees with, and most disagrees with
  • Developments since Brian finished the manuscript
  • The effective altruism and AI safety communities
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Why I find longtermism hard, and what keeps me motivated

I find working on longtermist causes to be — emotionally speaking — hard: There are so many terrible problems in the world right now. How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well?

A lot of people who aim to put longtermist ideas into practice seem to struggle with this, including many of the people I’ve worked with over the years. And I myself am no exception — the pull of suffering happening now is hard to escape. For this reason, I wanted to share a few thoughts on how I approach this challenge, and how I maintain the motivation to work on speculative interventions despite finding that difficult in many ways.

This issue is one aspect of a broader issue in effective altruism: figuring out how to motivate ourselves to do important work even when it doesn’t feel emotionally compelling. It’s useful to have a clear understanding of our emotions in order to distinguish between feelings and beliefs we endorse and those that we wouldn’t — on reflection — want to act on.

What I’ve found hard

First, I don’t want to claim that everyone finds it difficult to work on longtermist causes for the same reasons that I do, or in the same ways. I’d also like to be clear that I’m not speaking for 80,000 Hours as an organisation.

My struggles with the work I’m not doing tend to centre around the humans suffering from preventable diseases in poor countries.

Continue reading →

#91 – Lewis Bollard on big wins against factory farming and how they happened

28% of the U.S. flock is cage-free, up from 6% in 2015. That’s over 70 million hens newly out of cages over the last few years… Costco, which is the second largest retailer in the U.S., is now over 95% cage-free.

Lewis Bollard

I suspect today’s guest, Lewis Bollard, might be the single best person in the world to interview for an overview of all the methods that might be effective for putting an end to factory farming, and of the broader lessons we can learn from the experiences of people working to end cruelty in animal agriculture.

That’s why I interviewed him back in 2017, and it’s why I’ve come back for an updated second dose four years later.

That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, what those people are up to, and why.

Lewis leads Open Philanthropy’s strategy for farm animal welfare, and since he joined in 2015 they’ve disbursed about $130 million in grants to nonprofits as part of this program.

This episode certainly isn’t only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest.

Some of those include:

  • Between 2019 and 2020, Beyond Meat’s cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so when? How quickly can it reach price parity?
  • One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later?
  • One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer?
  • Costco, the second largest retailer in the U.S., is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference?

We also cover:

  • Switzerland’s ballot measure on eliminating factory farming
  • What a Biden administration could mean for reducing animal suffering
  • How chicken is cheaper than peanuts
  • The biggest recent wins for farmed animals
  • Things that haven’t gone to plan in animal advocacy
  • Political opportunities for farmed animal advocates in Europe
  • How the US is behind Brazil and Israel on animal welfare standards
  • The value of increasing media coverage of factory farming
  • The state of the animal welfare movement
  • And much more

If you’d like an introduction to the nature of the problem and why Lewis is working on it, in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Rob Wiblin on how he ended up the way he is

Today we put out an interview with our Head of Research, Rob Wiblin, on our podcast feed.

The interviewer is Misha Saul, a childhood friend Rob has known for over 20 years. While it’s not an episode of our own show, we decided to share it with subscribers because it’s fun, and because it touches on personal topics that we don’t usually get to cover in our own interviews.

They cover:

  • How Rob’s parents shaped who he is (if indeed they did)
  • Their shared teenage obsession with philosophy, which eventually led to Rob working at 80,000 Hours
  • How their politics were shaped by growing up in the 90s
  • How talking to Rob helped Misha develop his own very different worldview
  • Why The Lord of the Rings movies have held up so well
  • What was it like being an exchange student in Spain, and was learning Spanish a mistake?
  • Marriage and kids
  • Institutional decline and historical analogies for the US in 2021
  • Making fun of teachers
  • Should we stop eating animals?

Continue reading →

#90 – Ajeya Cotra on worldview diversification and how big the future could be

I have an infinity-to-one update that the world is just tiled with Rob Wiblins having Skype conversations with Ajeya right now. Because I would be most likely to be experiencing what I’m experiencing in that world.

Ajeya Cotra

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
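The box puzzle above is just Bayes’ rule in disguise. Here’s a minimal sketch of the update (the function name and structure are illustrative, not from the episode): before you look at the label, waking up at all favours the ten-billion-box world; but once you see a label as low as ‘3’, the update swings overwhelmingly back towards the ten-box world, because a low label is 10 orders of magnitude likelier there.

```python
from fractions import Fraction

def posterior_heads(observed_label, n_heads=10, n_tails=10**10):
    """Bayes update for the box thought experiment.

    Prior: a fair coin, so P(heads) = P(tails) = 1/2.
    Likelihood: in a world with n boxes you're equally likely to be in
    any of them, so P(seeing this label | that world) = 1/n, provided
    the label exists in that world (and 0 otherwise).
    """
    p_obs_heads = Fraction(1, n_heads) if observed_label <= n_heads else Fraction(0)
    p_obs_tails = Fraction(1, n_tails) if observed_label <= n_tails else Fraction(0)
    numerator = Fraction(1, 2) * p_obs_heads           # P(heads) * P(label | heads)
    evidence = numerator + Fraction(1, 2) * p_obs_tails
    return numerator / evidence

# Seeing a label as low as 3 makes the small (heads) world near-certain:
print(float(posterior_heads(3)))   # ≈ 0.999999999
```

Seeing any label above 10 would, of course, settle the question the other way, since only the tails world contains such boxes.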

In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn’t work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

  • Which worldviews Open Phil finds most plausible, and how it balances them
  • Which worldviews Ajeya doesn’t embrace but almost does
  • How hard it is to get to other solar systems
  • The famous ‘simulation argument’
  • When transformative AI might actually arrive
  • The biggest challenges involved in working on big research reports
  • What it’s like working at Open Phil
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Rob Wiblin on self-improvement and research ethics

Today on our podcast feed, we’re releasing a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin.

Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own.

Among other things they cover:

  • Is trying to become a better person a good strategy for self-improvement?
  • Why Rob thinks many people could achieve much more by finding themselves a line manager
  • Why interviews on this show are so damn long
  • Is it complicated to figure out what human beings value, or actually simpler than it seems?
  • Why Rob thinks research ethics and institutional review boards are causing immense harm
  • Where prediction markets might be failing today, and how we could tell.

You can get the interview in your podcasting app by either subscribing to the ‘80,000 Hours Podcast’, or Spencer’s show ‘Clearer Thinking’.

You might also want to check out Spencer’s conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler.

Continue reading →

#89 – Owen Cotton-Barratt on epistemic systems & layers of defense against potential global catastrophes

We don’t always know exactly what is important, and often people end up with more good taste for choosing important things to work on if they’ve had a space where they can step way back and say, “Okay. So what’s really going on here? What is the game? What do I want to be focusing on?”

Owen Cotton-Barratt

From one point of view academia forms one big ‘epistemic’ system — a process which directs attention, generates ideas, and judges which are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems.

How these systems absorb, process, combine and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it basically entirely determines the direction of the future.

With that in mind, today’s guest Owen Cotton-Barratt has founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers the freedom to try to understand how the world works.

Instead of you having to pay for a masters degree, the RSP pays you to spend significant amounts of time thinking about high-level questions, like “What is important to do?” and “How can I usefully contribute?”

Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible.

The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics.

It also attracts people without much experience, but who have a lot of ideas. If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?”

Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory.

Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years not pursuing an unproductive path could mean that you will ultimately have a much bigger impact with your career.

RSP applications are set to open in the Spring of 2021 — but Owen thinks it’s helpful for people to think about it in advance.

In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover:

  • Extinction risk classification and reduction strategies
  • Preventing small disasters from becoming large disasters
  • How likely we are to go from being in a collapsed state to going extinct
  • What most people should do if longtermism is true
  • Advice for mathematically-minded people
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript: Zakee Ulhaq

Continue reading →

#88 – Tristan Harris on the need to change the incentives of social media companies

I think what I’m most concerned about is the shredding of a shared meaning-making environment and joint attention into a series of micro realities – 3 billion Truman Shows.

Tristan Harris

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.

Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it’s hard to remember how recently it was a fringe view.

It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we have less trust in our fellow citizens, due to having less of a shared basis of reality.

But while it all feels plausible, how strong is the evidence that it’s true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.

At the same time, historians think that the printing press probably generated religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them and them alone the ability to spread propaganda across the whole of Germany and the USSR. And a jury trial — an Athenian innovation — ended up condemning Socrates to death. Fears about new technologies aren’t always misguided.

Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.

Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.

Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what’s in our interests as users and citizens?

One way is to encourage a shift to a subscription model. Presumably, that would get Facebook’s engineers thinking more about how to make users truly happy, and less about how to make advertisers happy.

One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.

But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.

One possibility is for Congress to say: it’s unacceptable for large social media platforms to influence the behaviour of users through hyper-targeted advertising. Once you reach a certain size, you are required to shift over into a subscription model.

That runs into the problem that some people would be able to afford a subscription and others would not. But Tristan points out that during COVID, US electricity companies weren’t allowed to disconnect you even if you were behind on your bills. Maybe we can find a way to classify social media as an ‘essential service’ and subsidize a basic version for everyone.

Of course, getting governments more involved in social media could itself be dangerous. Politicians aren’t experts in internet services, and could simply mismanage them — and they have their own perverse motivation as well: to shift communication technology in ways that advance their political views.

Another way to shift the incentives is to make it hard for social media companies to hire the very best people unless they act in the interests of society at large. There’s already been some success here — as people got more concerned about the negative effects of social media, Facebook had to raise salaries for new hires to attract the talent they wanted.

But Tristan asks us to consider what would happen if everyone who’s offered a role by Facebook didn’t just refuse to take the job, but instead took the interview in order to ask them directly, “what are you doing to fix your core business model?”

Engineers can ‘vote with their feet’, refusing to build services that don’t put the interests of users front and centre. Tristan says that if governments are unable, unwilling, or too untrustworthy to set up healthy incentives, we might need a makeshift solution like this.

Despite all the negatives, Tristan doesn’t want us to abandon the technologies he’s concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.

Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we’ve ever had — tools that could educate and organise people better than anything that has come before.

The tricky and open question is how to get there — Rob and Tristan agree that a lot more needs to be done to develop a reform agenda that has some chance of actually happening, and that generates as few unforeseen downsides as possible. Rob and Tristan also discuss:

  • Justified concerns vs. moral panics
  • The effect of social media on US politics
  • Facebook’s influence on developing countries
  • Win-win policy proposals
  • Big wins over the last 5 or 10 years
  • Tips for individuals
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →