#94 – Ezra Klein on aligning journalism, politics, and what matters most

I don’t think what’s going to happen is you’re going to call people up and be like, “You’re doing your coverage all wrong,” and they’re going to say, “Oh, thank you for telling me my life’s work is garbage.”

Ezra Klein

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs?

When people look back on this era, is the interesting thing going to have been fights over whether the top marginal tax rate was 39.5% or 35.4%, or is it going to be that human beings started to take control of human evolution; that we stood on the brink of eliminating immeasurable levels of suffering on factory farms; and that for the first time the average American might become financially comfortable and unemployed simultaneously?

Today’s guest is Ezra Klein, one of the most prominent journalists in the world. Ezra thinks that pressing issues are neglected largely because there’s little pre-existing infrastructure to push them.

He points out that for a long time taxes have been considered hugely important in D.C. political circles — and maybe once they were. But either way, the result is that there are a lot of congressional committees, think tanks, and experts that have focused on taxes for decades and continue to produce a steady stream of papers, articles, and opinions for journalists they know to cover (often these are journalists hired to write specifically about tax policy).

To Ezra (and to us, and to many others), AI seems obviously more important than marginal changes in taxation over the next 10 or 15 years — yet there’s very little infrastructure for thinking about it. There isn’t a committee in Congress that primarily deals with AI, no one holds a dedicated AI position in the executive branch of the U.S. Government, and there are no big AI think tanks in D.C. producing a steady stream of articles for journalists to cover.

On top of this, the status quo always has a psychological advantage. If something was thought important by previous generations, we naturally assume it must be important today as well — think of how students continued learning ancient Greek long after it had ceased to be useful even in most scholarly careers.

All of this generates a strong ‘path dependence’ that can lock the media into covering less important topics, despite having no intention to do so.

According to Ezra, the hardest thing to do in journalism — as the leader of a publication, or even to some degree just as a writer — is to maintain your own sense of what’s important, and not just be swept along in the tide of what “the industry / the narrative / the conversation has decided is important.”

One reason Ezra created the Future Perfect vertical at Vox is that as he began to learn about effective altruism, he thought: “This is a framework for thinking about importance that could offer a different lens that we could use in journalism. It could help us order things differently.”

Ezra says there is an audience for the stuff that we’d consider most important here at 80,000 Hours. It’s broadly believed that nobody will read articles on animal suffering, but Ezra says that his experience at Vox shows these stories actually do really well — and that many of the things that the effective altruist community cares a lot about are “…like catnip for readers.”

Ezra’s bottom line for fellow journalists is that if something important is happening in the world and you can’t make the audience interested in it, that is your failure — never the audience’s failure.

But is that really true? In today’s episode we explore that claim, as well as:

  • How many hours of news the average person should consume
  • Where the progressive movement is failing to live up to its values
  • Why Ezra thinks ‘price gouging’ is a bad idea
  • Where the FDA has failed on rapid at-home testing for COVID-19
  • Whether we should be more worried about tail-risk scenarios
  • And his biggest critiques of the effective altruism community

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

I’m very, very concerned that North Korea today has an advanced biological weapons program. You don’t need a lot of biological weapons to potentially kill billions of people … Fortunately, while we’re not there yet, the science and the tools that are now available enable the possibility of making bioweapons obsolete.

Andy Weber

COVID-19 has provided a vivid reminder of the damage biological threats can do. But the threat doesn’t come from natural sources alone. Weaponized contagious diseases — abandoned by the United States, but developed in large numbers by the Soviet Union right up until its collapse — have the potential to spread globally and kill just as many people as an all-out nuclear war.

For five years, today’s guest, Andy Weber, was the U.S. Assistant Secretary of Defense responsible for biological and other weapons of mass destruction. While people, including most within the Pentagon itself, primarily associate the Pentagon with waging wars, Andy is quick to point out that you don’t have national security if your population remains at grave risk from natural and lab-created diseases.

Andy’s current mission is to spread the word that while bioweapons are terrifying, scientific advances also leave them on the verge of becoming an outdated technology.

He thinks there is an overwhelming case to increase our investment in two new technologies that could dramatically reduce the risk of bioweapons, and end natural pandemics in the process: mass genetic sequencing and mRNA vaccines.

First, advances in mass genetic sequencing technology allow direct, real-time analysis of DNA or RNA fragments collected from all over the human environment. You cast a wide net, and if you start seeing DNA sequences that you don’t recognise spreading through the population — that can set off an alarm.

Andy notes that while the necessary desktop sequencers may be expensive enough that they’re only in hospitals today, they’re rapidly getting smaller, cheaper, and easier to use. In fact DNA sequencing has recently experienced the most dramatic cost decrease of any technology, declining by a factor of 10,000 since 2007. It’s only a matter of time before they’re cheap enough to put in every home.

In the world Andy envisions, each morning before you brush your teeth you also breathe into a tube. Your sequencer can tell you if you have any of 300 known pathogens, while simultaneously scanning for any unknown viruses. It’s hooked up to your WiFi and reports into a public health surveillance system, which can check to see whether any novel DNA sequences are being passed from person to person. New contagious diseases can be detected and investigated within days — long before they run out of control.

The second major breakthrough comes from mRNA vaccines, which are today being used to end the COVID pandemic. The wonder of mRNA vaccines is that they can instruct our cells to make virtually any protein we choose and trigger a protective immune response from the body.

Until now, it has taken a long time to invent and test any new vaccine, followed by a laborious process of scaling up the equipment necessary to manufacture it. That leaves a new disease or bioweapon months or years to wreak havoc.

But using the sequencing technology above, we can quickly get the genetic codes that correspond to the surface proteins of any new pathogen, and switch them into the mRNA vaccines we’re already making. Inventing a new vaccine would become less like manufacturing a new iPhone and more like printing a new book — you use the same printing press and just change the words.

So long as we maintained enough capacity to manufacture and deliver mRNA vaccines, a whole country could in principle be vaccinated against a new disease in months.

Together these technologies could make advanced bioweapons a threat of the past. And in the process humanity’s oldest and deadliest enemy — contagious disease — could be brought under control like never before.

Andy has always been pretty open and honest, but his retirement last year has allowed him to stop worrying about being seen to speak for the Department of Defense, or for the president of the United States — and so we were also able to get his forthright views on a bunch of other interesting topics, such as:

  • The chances that COVID-19 escaped from a research facility
  • Whether a US president can really truly launch nuclear weapons unilaterally
  • What he thinks should be the top priorities for the Biden administration
  • If Andy was 18 and starting his career over again today, what would his plan be?
  • The time he and colleagues found 600kg of highly enriched uranium sitting around in a barely secured facility in Kazakhstan, and eventually transported it to the United States
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

How to identify your personal strengths

Perhaps the most common approach to finding a good career is to identify your personal strengths, and then look for paths that match them.

This article summarises the best advice I’ve found on how to identify your strengths, turned into a three-step process. It also includes lists of personal strengths that are most commonly used by researchers (to give you a language to describe your own) and a case study.

But first, a warning: I think the ‘match with strengths’ approach to choosing a career is a little overrated.

Perhaps the biggest risk is limiting yourself based on your current strengths, and ignoring your potential to develop new, more potent strengths. This risk is most pressing for younger people, who don’t yet have much data on what they’re good at — making them more likely to guess incorrectly — and who have decades ahead of them to develop new strengths.

You should ask both ‘what are my strengths?’ and also ‘which strengths are worth building?’

More broadly, I’ve argued that it’s often better to take the reverse approach: ask what the world most needs, and then figure out how you might best help with that. This orientation helps you focus on developing skills that are both valued in the market and useful for solving important global problems, which is key to finding a career that’s both meaningful and personally rewarding.

Continue reading →

#92 – Brian Christian on the alignment problem

It’s funny, if you track a lot of the nay-saying that existed circa 2017 or 2018 around AGI, a lot of people would be like, “Well, call me when AI can do that. Call me when AI can tell me what the word ‘it’ means in such and such a sentence.” And then it’s like, “Okay, well we’re there, so, can we call you now?”

Brian Christian

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science.

Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer.

Brian has so much of substance to say that this episode will likely be of interest to people who know a lot about AI as well as those who know only a little, and to people who are nervous about where AI is going as well as those who aren’t nervous at all.

Here’s a tease of 10 Hollywood-worthy stories from the episode:

  • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience.
  • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early nineties, using a computer with a tenth the processing capacity of an Apple Watch.
  • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen.
  • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net.
  • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die.
  • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible than reaching its purported destination.
  • Montezuma’s Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple.
  • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies.
  • AlphaGo Zero: A computer program becomes superhuman at Chess and Go in under a day by attempting to imitate itself.
  • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip.

We also cover:

  • How reinforcement learning actually works, and some of its key achievements and failures
  • How a lack of curiosity can cause AIs to fail to be able to do basic things
  • The pitfalls of getting AI to imitate how we ourselves behave
  • The benefits of getting AI to infer what we must be trying to achieve
  • Why it’s good for agents to be uncertain about what they’re doing
  • Why Brian isn’t that worried about explicit deception
  • The interviewees Brian most agrees with, and most disagrees with
  • Developments since Brian finished the manuscript
  • The effective altruism and AI safety communities
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Why I find longtermism hard, and what keeps me motivated

I find working on longtermist causes to be — emotionally speaking — hard: There are so many terrible problems in the world right now. How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well?

A lot of people who aim to put longtermist ideas into practice seem to struggle with this, including many of the people I’ve worked with over the years. And I myself am no exception — the pull of suffering happening now is hard to escape. For this reason, I wanted to share a few thoughts on how I approach this challenge, and how I maintain the motivation to work on speculative interventions despite finding that difficult in many ways.

This issue is one aspect of a broader issue in effective altruism: figuring out how to motivate ourselves to do important work even when it doesn’t feel emotionally compelling. It’s useful to have a clear understanding of our emotions in order to distinguish between feelings and beliefs we endorse and those that we wouldn’t — on reflection — want to act on.

What I’ve found hard

First, I don’t want to claim that everyone finds it difficult to work on longtermist causes for the same reasons that I do, or in the same ways. I’d also like to be clear that I’m not speaking for 80,000 Hours as an organisation.

My struggles with the work I’m not doing tend to centre around the humans suffering from preventable diseases in poor countries.

Continue reading →

#91 – Lewis Bollard on big wins against factory farming and how they happened

28% of the U.S. flock is cage-free, up from 6% in 2015. That’s over 70 million hens newly out of cages over the last few years… Costco, which is the second largest retailer in the U.S., is now over 95% cage-free.

Lewis Bollard

I suspect today’s guest, Lewis Bollard, might be the single best person in the world to interview for an overview of the methods that might be effective at ending factory farming, and of the broader lessons we can learn from the experiences of people working to end cruelty in animal agriculture.

That’s why I interviewed him back in 2017, and it’s why I’ve come back for an updated second dose four years later.

That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, what those people are up to, and why.

Lewis leads Open Philanthropy’s strategy for farm animal welfare, and since he joined in 2015 they’ve disbursed about $130 million in grants to nonprofits as part of this program.

This episode certainly isn’t only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest.

Some of those include:

  • Between 2019 and 2020, Beyond Meat’s cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so when? How quickly can it reach price parity?
  • One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later?
  • One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer?
  • Costco, the second largest retailer in the U.S., is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference?

We also cover:

  • Switzerland’s ballot measure on eliminating factory farming
  • What a Biden administration could mean for reducing animal suffering
  • How chicken is cheaper than peanuts
  • The biggest recent wins for farmed animals
  • Things that haven’t gone to plan in animal advocacy
  • Political opportunities for farmed animal advocates in Europe
  • How the US is behind Brazil and Israel on animal welfare standards
  • The value of increasing media coverage of factory farming
  • The state of the animal welfare movement
  • And much more

If you’d like an introduction to the nature of the problem and why Lewis is working on it, in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Rob Wiblin on how he ended up the way he is

Today we put out an interview with our Head of Research, Rob Wiblin, on our podcast feed.

The interviewer is Misha Saul, a childhood friend whom Rob has known for over 20 years. While it’s not an episode of our own show, we decided to share it with subscribers because it’s fun, and because it touches on personal topics that we don’t usually get to cover in our own interviews.

They cover:

  • How Rob’s parents shaped who he is (if indeed they did)
  • Their shared teenage obsession with philosophy, which eventually led to Rob working at 80,000 Hours
  • How their politics were shaped by growing up in the 90s
  • How talking to Rob helped Misha develop his own very different worldview
  • Why The Lord of the Rings movies have held up so well
  • What was it like being an exchange student in Spain, and was learning Spanish a mistake?
  • Marriage and kids
  • Institutional decline and historical analogies for the US in 2021
  • Making fun of teachers
  • Should we stop eating animals?

Continue reading →

#90 – Ajeya Cotra on worldview diversification and how big the future could be

I have an infinity-to-one update that the world is just tiled with Rob Wiblins having Skype conversations with Ajeya right now. Because I would be most likely to be experiencing what I’m experiencing in that world.

Ajeya Cotra

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
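The competing updates can be made precise with a quick Bayesian calculation. (This is a sketch of the standard analysis, not something from the episode — the numbers are just those of the thought experiment.)

```python
from fractions import Fraction

# Prior: God's coin is fair.
p_heads = Fraction(1, 2)   # heads -> 10 boxes exist
p_tails = Fraction(1, 2)   # tails -> 10 billion boxes exist

# Likelihood of your box bearing the label '3' under each hypothesis,
# treating your box as a uniform draw from all the boxes that exist:
like_heads = Fraction(1, 10)       # 1 label out of 10
like_tails = Fraction(1, 10**10)   # 1 label out of 10 billion

# Bayes' rule: probability the coin landed heads, given that you saw a '3'.
posterior_heads = (p_heads * like_heads) / (
    p_heads * like_heads + p_tails * like_tails
)

print(float(posterior_heads))  # ≈ 0.999999999 — a low label strongly favours the small world
```

Note that this update considers the label alone, starting from even odds; the earlier “the fact that I woke up at all” reasoning pushes roughly a billion to one the other way, which is exactly why different schools of anthropic reasoning disagree about what you should conclude.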

In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.
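The arithmetic behind that “suspiciously early” intuition is easy to check. (A sketch using the figures above; the even prior odds are an assumption for illustration.)

```python
# Two candidate futures, using the figures from the text.
short_future_total = 100 * 10**9    # extinction soon: ~100 billion people ever live
long_future_total = 1000 * 10**12   # huge future: 1,000 trillion people ever live

# Observation: you are among the first 100 billion humans.
# Likelihood under each hypothesis, treating your birth rank as a
# uniform draw over everyone who will ever live:
p_early_if_short = 100e9 / short_future_total   # = 1.0
p_early_if_long = 100e9 / long_future_total     # = 1e-4

# Starting from even odds, Bayes' rule gives the posterior on the short future:
posterior_short = p_early_if_short / (p_early_if_short + p_early_if_long)

print(posterior_short)  # ≈ 0.9999 — being this early looks like strong evidence for 'doom soon'
```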

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn’t work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces in striking a balance between taking big ideas seriously and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

  • Which worldviews Open Phil finds most plausible, and how it balances them
  • Which worldviews Ajeya doesn’t embrace but almost does
  • How hard it is to get to other solar systems
  • The famous ‘simulation argument’
  • When transformative AI might actually arrive
  • The biggest challenges involved in working on big research reports
  • What it’s like working at Open Phil
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Continue reading →

Rob Wiblin on self-improvement and research ethics

Today on our podcast feed, we’re releasing a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin.

Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own.

Among other things they cover:

  • Is trying to become a better person a good strategy for self-improvement?
  • Why Rob thinks many people could achieve much more by finding themselves a line manager
  • Why interviews on this show are so damn long
  • Is it complicated to figure out what human beings value, or actually simpler than it seems?
  • Why Rob thinks research ethics and institutional review boards are causing immense harm
  • Where prediction markets might be failing today, and how we could tell.

You can get the interview in your podcasting app by either subscribing to the ‘80,000 Hours Podcast’, or Spencer’s show ‘Clearer Thinking’.

You might also want to check out Spencer’s conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler.

Continue reading →

#89 – Owen Cotton-Barratt on epistemic systems & layers of defense against potential global catastrophes

We don’t always know exactly what is important, and often people end up with more good taste for choosing important things to work on if they’ve had a space where they can step way back and say, “Okay. So what’s really going on here? What is the game? What do I want to be focusing on?”

Owen Cotton-Barratt

From one point of view academia forms one big ‘epistemic’ system — a process which directs attention, generates ideas, and judges which are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems.

How these systems absorb, process, combine and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it basically entirely determines the direction of the future.

With that in mind, today’s guest Owen Cotton-Barratt has founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers the freedom to try to understand how the world works.

Instead of you having to pay for a master’s degree, the RSP pays you to spend significant amounts of time thinking about high-level questions, like “What is important to do?” and “How can I usefully contribute?”

Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible.

The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics.

It also attracts people without much experience, but who have a lot of ideas. If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?”

Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory.

Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years by not pursuing an unproductive path could mean you ultimately have a much bigger impact with your career.

RSP applications are set to open in the spring of 2021 — but Owen thinks it’s helpful for people to think about it in advance.

In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover:

  • Extinction risk classification and reduction strategies
  • Preventing small disasters from becoming large disasters
  • How likely we are to go from being in a collapsed state to going extinct
  • What most people should do if longtermism is true
  • Advice for mathematically-minded people
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript: Zakee Ulhaq

Continue reading →

#88 – Tristan Harris on the need to change the incentives of social media companies

I think what I’m most concerned about is the shredding of a shared meaning-making environment and joint attention into a series of micro realities — 3 billion Truman Shows.

Tristan Harris

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.

Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it’s hard to remember how recently it was a fringe view.

It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we have less trust in our fellow citizens, due to having less of a shared basis of reality.

But while it all feels plausible, how strong is the evidence that it’s true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.

At the same time, historians think that the printing press probably generated religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them and them alone the ability to spread propaganda across the whole of Germany and the USSR. And a jury trial — an Athenian innovation — ended up condemning Socrates to death. Fears about new technologies aren’t always misguided.

Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.

Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.

Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what’s in our interests as users and citizens?

One way is to encourage a shift to a subscription model. Presumably, that would get Facebook’s engineers thinking more about how to make users truly happy, and less about how to make advertisers happy.

One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.

But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.

One possibility is for Congress to say: it’s unacceptable for large social media platforms to influence the behaviour of users through hyper-targeted advertising. Once you reach a certain size, you are required to shift over into a subscription model.

That runs into the problem that some people would be able to afford a subscription and others would not. But Tristan points out that during COVID, US electricity companies weren’t allowed to disconnect you even if you were behind on your bills. Maybe we can find a way to classify social media as an ‘essential service’ and subsidize a basic version for everyone.

Of course, getting governments more involved in social media could itself be dangerous. Politicians aren’t experts in internet services, and could simply mismanage them — and they have a perverse motivation of their own: to shift communication technology in ways that advance their political views.

Another way to shift the incentives is to make it hard for social media companies to hire the very best people unless they act in the interests of society at large. There’s already been some success here — as people got more concerned about the negative effects of social media, Facebook had to raise salaries for new hires to attract the talent they wanted.

But Tristan asks us to consider what would happen if everyone who’s offered a role by Facebook didn’t just refuse to take the job, but instead took the interview in order to ask them directly, “what are you doing to fix your core business model?”

Engineers can ‘vote with their feet’, refusing to build services that don’t put the interests of users front and centre. Tristan says that if governments are unable, unwilling, or too untrustworthy to set up healthy incentives, we might need a makeshift solution like this.

Despite all the negatives, Tristan doesn’t want us to abandon the technologies he’s concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.

Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we’ve ever had — tools that could educate and organise people better than anything that has come before.

The tricky and open question is how to get there — Rob and Tristan agree that a lot more needs to be done to develop a reform agenda that has some chance of actually happening, and that generates as few unforeseen downsides as possible. Rob and Tristan also discuss:

  • Justified concerns vs. moral panics
  • The effect of social media on US politics
  • Facebook’s influence on developing countries
  • Win-win policy proposals
  • Big wins over the last 5 or 10 years
  • Tips for individuals
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

Continue reading →

A (free) weekly career planning course for positive impact

If you want a career that’s both fulfilling and impactful, but are feeling unsure what to do, we’ve created this free weekly course to help you make a plan.

Each week, you’ll receive one article to read and some questions to answer, which start with clarifying your longer-term goals, and work towards actionable next steps.

If you complete the whole thing, you’ll have considered the most important questions about your career, made a career plan you can feel confident in, and given yourself the best possible chance of finding work that’s satisfying and makes a real difference.

The course will help you apply everything we’ve learned about career planning, drawing from academic research on decision making and our experience giving career advice to over 1,000 people.

It’s designed to be helpful no matter which issues you want to work on or what your skills are, and whether you’re still a student or have been in a job for years.

It aims to help you step back and ask the big questions. If you need to decide between a couple of concrete options right now, we have a shorter decision process specifically for that.

It’s not necessarily an easy (or a particularly short) process. But you have 80,000 hours of working time in your life, so if you’re lucky enough to have options for how to spend that time, it’s worth really thinking about how to spend it best.

Continue reading →

Benjamin Todd on what the effective altruism community most needs (80k team chat #4)

We’re a bit less constrained by kind of generally interested, talented people, and a bit more constrained by either people who have very particular skills that are needed, such as AI technical safety, or grantmaker skill sets — the kinds of things we list on our priority problems.

Benjamin Todd

In the last ’80k team chat’ with Ben Todd and Arden Koehler, we discussed what effective altruism is and isn’t, and how to argue for it. In this episode we turn to what the effective altruism community most needs.

According to Ben, we can think of the effective altruism movement as having gone through several stages, categorised by what kind of resource has been most able to unlock more progress on important issues (i.e. by what’s the ‘bottleneck’). Plausibly, these stages are common for other social movements as well.

  • Needing money: In the first stage, when effective altruism was just getting going, more money (to do things like pay staff and put on events) was the main bottleneck to making progress.
  • Needing talent: In the second stage, we especially needed more talented people being willing to work on whatever seemed most pressing.
  • Needing specific skills and capacity: In the third stage, which Ben thinks we’re in now, the main bottlenecks are organizational capacity, infrastructure, and management to help train people up, as well as specialist skills that people can put to work now.

What’s next? Perhaps needing coordination — the ability to make sure people keep working efficiently and effectively together as the community grows.

The 2020 Effective Altruism Survey just opened. If you’re involved with the effective altruism community, or sympathetic to its ideas, it’s a great thing to fill out.

Ben and I also cover the career implications of those stages, as well as the ability to save money and the possibility that someone else would do your job in your absence.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

#87 – Russ Roberts on whether it's more effective to help strangers, or people you know

So if you said to me, “You should devote the rest of your life to getting better at being the father of your children.”… I have an idea of how to do that. I may not succeed at it, I may struggle at it, I’m sure it’s imperfect. But if you said to me, “You know, I think Americans should get along better with Russians, Chinese and Swedes.” I don’t know how to start with that.

Russ Roberts

If you want to make the world a better place, would it be better to help your niece with her SATs, or try to join the State Department to lower the risk that the US and China go to war?

People involved in 80,000 Hours or the effective altruism community would be comfortable recommending the latter. This week’s guest — Russ Roberts, host of the long-running podcast EconTalk, and author of a forthcoming book on decision-making under uncertainty and the limited ability of data to help — worries that might be a mistake.

I’ve been a big fan of Russ’ show EconTalk for 12 years — in fact I have a list of my top 100 recommended episodes — so I invited him to talk about his concerns with how the effective altruism community tries to improve the world.

These include:

  • Being too focused on the measurable
  • Being too confident we’ve figured out ‘the best thing’
  • Being too credulous about the results of social science or medical experiments
  • Undermining people’s altruism by encouraging them to focus on strangers, who it’s naturally harder to care for
  • Thinking it’s possible to predictably help strangers, who you don’t understand well enough to know what will truly help
  • Adding levels of wellbeing across people when this is inappropriate
  • Encouraging people to pursue careers they won’t enjoy

These worries are partly informed by Russ’ ‘classical liberal’ worldview, which involves a preference for free market solutions to problems, and nervousness about the big plans that sometimes come out of consequentialist thinking.

While we do disagree on a range of things — such as whether it’s possible to add up wellbeing across different people, and whether it’s more effective to help strangers than people you know — I make the case that some of these worries are founded on common misunderstandings about effective altruism, or at least misunderstandings of what we believe here at 80,000 Hours.

We primarily care about making the world a better place over thousands or even millions of years — and we wouldn’t dream of claiming that we could accurately measure the effects of our actions on that timescale.

I’m more skeptical of medicine and empirical social science than most people, though not quite as skeptical as Russ (check out this quiz I made where you can guess which academic findings will replicate, and which won’t).

And while I do think that people should occasionally take jobs they dislike in order to have a social impact, those situations seem pretty few and far between.

But Russ and I disagree about how much we really disagree. In addition to all the above we also discuss:

  • How to decide whether to have kids
  • Was the case for deworming children oversold?
  • Whether it would be better for countries around the world to be better coordinated

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

If you care about social impact, why is voting important?

Could one vote — your vote — swing an entire election? Most of us abandoned this seeming fantasy not too long after we learned how elections work.

But the chances are higher than you might think. If you’re in a competitive district in a competitive election, the odds that your vote will flip a national election often fall between 1 in 1 million and 1 in 10 million.

That’s a very small probability, but it’s big compared to your chances of winning the lottery, and it’s big relative to the enormous impact governments can have on the world.

Every four years the United States federal government allocates $17,500,000,000,000 ($17.5 trillion), so a 1 in 10 million chance of changing the outcome of a US national election gives an average American some degree of influence over $1.75 million.

That means the expected importance of voting — the probability of changing an election’s result multiplied by the impact if you do — might, depending on your personal circumstances, be very high.
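The arithmetic above can be sketched as a quick expected-value calculation. This is only an illustration using the round figures from the text — the actual probability varies enormously by state and election:

```python
# Expected value of one vote, using the illustrative figures above.
# These numbers are rough: the decisiveness probability in particular
# depends heavily on how competitive your district is.

federal_budget_per_cycle = 17.5e12  # US federal spending over four years, in dollars
p_decisive = 1 / 10_000_000         # chance one vote flips a national election

# Expected importance = probability of changing the result x impact if you do.
expected_influence = p_decisive * federal_budget_per_cycle

print(f"Expected influence per vote: ${expected_influence:,.0f}")
```

With a 1-in-1-million probability instead (the more competitive end of the range quoted above), the same calculation yields $17.5 million of expected influence per vote.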

This could, in itself, be a good argument for voting.

Fortunately there is a significant amount of academic research on the importance of elections and how likely one vote is to change the outcome, so I’ve pulled it together to estimate the average value of one vote for the right person.

The answer, as you might expect, depends a great deal on the circumstances of any given election,

Continue reading →

#86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us

If you think that being born with a good life is better for the people in question than not being born, then it’s really hard not to be led to something like a totalist population axiology, where in particular you’re going to think that if the future would be good in the absence of extinction, then premature human extinction is an astronomically bad thing.

Hilary Greaves

Had World War 1 never happened, you might never have existed.

It’s very unlikely that the exact chain of events that led to your conception would have happened if the war hadn’t — so perhaps you wouldn’t have been born.

Would that mean that it’s better for you that World War 1 happened (regardless of whether it was better for the world overall)?

On the one hand, if you’re living a pretty good life, you might think the answer is yes – you get to live rather than not.

On the other hand, it sounds strange to say that it’s better for you to be alive, because if you’d never existed there’d be no you to be worse off. But if you wouldn’t be worse off if you hadn’t existed, can you be better off because you do?

In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e. that it can’t be better for someone to exist vs. not.

Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn’t better for them, and thus, perhaps, that it’s not better at all.

This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn’t otherwise have existed) — which would affect how we try to make the world a better place.

Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out we would have no particular reason to be concerned.

Furthermore it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.

This is our second episode with Professor Greaves. The first one was a big hit, so we thought we’d come back and dive into even more complex ethical issues.

We also discuss:

  • The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible
  • What it means for us to be ‘clueless’ about the consequences of our actions
  • Moral uncertainty — what we should do when we don’t know which moral theory is correct
  • Whether we should take a bet on a really small probability of a really great outcome
  • The field of global priorities research at the Global Priorities Institute and beyond

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Benjamin Todd on the core of effective altruism and how to argue for it (80k team chat #3)

Say you come along and are like, “Well, I did this estimate and I found that there’s this amazing global priority that no one else is working on and it’s like 100 times better than what everyone else is doing.” So then the question is, should you just trust that, or should you figure that you’ve probably made a mistake somewhere? And because your calculation has said there’s this thing that’s amazing compared to what everyone else is doing, most likely you’ve made an error in the direction of it being better than it actually is.

Ben Todd

Today’s episode is the latest conversation between Arden Koehler and our CEO, Ben Todd.

Ben’s been thinking a lot about effective altruism recently, including what it really is, how it’s framed, and how people misunderstand it.

We recently released an article on misconceptions about effective altruism – based on Will MacAskill’s recent paper The Definition of Effective Altruism – and this episode can act as a companion piece.

Arden and Ben cover a bunch of topics related to effective altruism:

  • How it isn’t just about donating money to fight poverty
  • Whether it includes a moral obligation to give
  • The rigorous argument for its importance
  • Objections to that argument
  • How to talk about effective altruism to people who aren’t already familiar with it

Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Notes on good judgement and how to develop it

Judgement, which I roughly define as ‘the ability to weigh complex information and reach calibrated conclusions’, is clearly a valuable skill.

In our simple analysis of which skills make people most employable, using data from the Bureau of Labor Statistics across the US economy, ‘judgement and decision making’ came out top (though the Bureau means it in a broader sense than we do).

My guess is that good judgement is even more important when aiming to have a positive impact.

Why good judgement is so valuable when aiming to have an impact

One reason is lack of feedback. We can never be fully certain which issues are most pressing, or which interventions are most effective. Even in an area like global health – where we have relatively good data on what works – there has been huge debate over the cost effectiveness of even a straightforward intervention like deworming. Deciding whether to focus on deworming requires judgement.

This lack of feedback becomes even more pressing when we come to efforts to reduce existential risks or help the long-term future, and efforts which take a more ‘hits based’ approach to impact. An existential risk can only happen once, so there’s a limit to how much data we can ever have about what reduces them, and we must mainly rely on judgement.

Reducing existential risks and some of the other areas we focus on are also new fields of research,

Continue reading →

Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong (80k team chat #2)

An example of something that could make a time really pivotal is if we discover a new technology, such as something that could create a new bioweapon. That moment right as we’re about to discover that, that would be a really pivotal time because maybe the details of how that technology is handled could make a big difference to whether there’s an existential risk or some other shift to the future.

Ben Todd

Today’s bonus episode is a conversation between Arden Koehler and our CEO, Ben Todd.

Ben’s been doing a bunch of research recently, and we thought it’d be interesting to hear about how he’s currently thinking about a couple of different topics – including different types of longtermism, and things 80,000 Hours might be getting wrong.

You can get it by subscribing to the 80,000 Hours Podcast wherever you listen to podcasts. Learn more about the show here.

This is very off-the-cuff compared to our regular episodes, and just 54 minutes long.

In the first half, Arden and Ben talk about varieties of longtermism:

  • Patient longtermism
  • Broad urgent longtermism
  • Targeted urgent longtermism focused on existential risks
  • Targeted urgent longtermism focused on other trajectory changes
  • And their distinctive implications for people trying to do good with their careers.

In the second half, they move on to:

  • How to trade-off transferable versus specialist career capital
  • How much weight to put on personal fit
  • Whether we might be highlighting the wrong problems and career paths.

Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature.

Our annual user survey is also now open for submissions.

Once a year, for two weeks, we ask all of you — our podcast listeners, article readers, advice receivers, and so on — to let us know how we’ve helped or hurt you.

Your responses to the survey will be carefully read as part of our upcoming annual review, and we’ll use them to help decide what 80,000 Hours should do differently next year.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Continue reading →

Five philosophies of career success

People have many different beliefs about what drives career success. These different beliefs lead to different philosophies of career advice, which have different implications for how to choose a career.

Here I outline what I take to be five common philosophies of career success, some rough thoughts on which is correct, what they imply, and why most of them differ from mainstream careers advice.

Five philosophies

Here’s a short overview of each one, made extreme to clearly illustrate the differences:

1. Find your unique career match

There’s a narrow range of careers that match you really well, and which will let you be happy and productive, while most won’t be a good fit.

Your aim should be to try to understand your unique profile of strengths and find the job that best matches them.

I’d say this is the philosophy of most ‘standard’ career advice. If you speak to a career advisor, they will typically be unwilling to say that some paths are generally ‘better’ than others, but instead maintain that it’s all about finding the right match. Most career books spend plenty of time getting you to reflect on your interests and personality, and then encourage you to look for careers that match them. Career tests work in part on the same principle.

I’d also put the advice to ‘follow your passion’ in this category.

One interesting version of this philosophy is the idea that obsessive interest is necessary for outsized success,

Continue reading →