
The way I like to think about [Bayesianism] is actually using an English language phrase that we like to call the Question of Evidence: how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?

Spencer Greenberg

Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner?

Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems.

Let’s work through one here: how likely is it that you’ll enjoy listening to this episode?

The first step is to figure out your ‘prior probability’: your estimate of how likely you are to enjoy the interview before getting any further evidence.

Other than applying common sense, one way to figure this out is ‘reference class forecasting’. That is, looking at similar cases and seeing how often something is true, on average.

Spencer is our first ever return guest (Dr Anders Sandberg appeared on episodes 29 and 33 – but only because his one interview was so fascinating that we split it into two).

So one reference class might be: how many Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed so far? Being this specific limits bias in your answer, but with a sample size of just one you’ll want to add more data points to reduce the variance of the answer (100% and 0% are both too extreme).

Zooming out, how many episodes of the 80,000 Hours Podcast have you enjoyed? Let’s say you’ve listened to 10, and enjoyed 8 of them. If so, 8 out of 10 might be a reasonable prior.

If we want a bigger sample we can zoom out further: what fraction of long-form interview podcasts have you ever enjoyed?
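
If it helps to see that as a calculation, here’s a rough Python sketch – not from the episode; the add-one smoothing is just one simple way to keep a tiny sample from giving you 0% or 100%:

```python
# A rough sketch of 'reference class forecasting' as a calculation.
# The add-one (Laplace) smoothing below is an editorial choice, not something
# from the episode: it just keeps tiny samples away from the 0% / 100% extremes.

def reference_class_prior(enjoyed: int, total: int) -> float:
    """Estimate P(you'll enjoy this episode) from a reference class of past episodes."""
    return (enjoyed + 1) / (total + 2)

# Sample size of one (the previous Spencer episode) is too noisy:
print(reference_class_prior(1, 1))   # ~0.67 rather than an overconfident 100%

# Zooming out to all 80,000 Hours episodes you've heard (8 enjoyed out of 10):
print(reference_class_prior(8, 10))  # 0.75, close to the 8-in-10 prior above
```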

Having done that, you’d need to update whenever new information becomes available. Do the topics seem more interesting than average? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential?

In the episode we’ll explain the mathematically correct way to update your beliefs over time as new information comes in: Bayes’ Rule. You take your initial odds, multiply them by a ‘Bayes factor’ and boom – updated probabilities. Once you know the trick, it’s easy to do in your head. We’ll run through several diverse case studies of updating on evidence.
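
As a rough illustration of that trick – not Spencer’s own code, and the Bayes factors of 3 and 1/3 are made-up numbers – here’s what one update looks like in Python:

```python
# A sketch of the odds-form update: prior odds * Bayes factor = posterior odds.
# The prior of 0.8 comes from the reference class above; the Bayes factors of
# 3 and 1/3 are made-up numbers purely for illustration.

def bayes_update(prior_prob: float, bayes_factor: float) -> float:
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)    # 0.8 -> odds of 4 (i.e. 4:1)
    posterior_odds = prior_odds * bayes_factor    # the whole trick is this multiplication
    return posterior_odds / (1 + posterior_odds)  # convert back to a probability

print(bayes_update(0.8, 3))    # 4:1 odds * 3 = 12:1 odds, i.e. ~0.92
print(bayes_update(0.8, 1/3))  # 4:1 odds / 3 = 4:3 odds,  i.e. ~0.57
```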

Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we’d invite him back for a second episode?

Also in this episode:

  • How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
  • What do people actually value? How do EAs differ from non-EAs?
  • Why should we care about the distinction between intrinsic and instrumental values?
  • Should hedonistic utilitarians really want to hook themselves up to happiness machines?
  • What types of activities are people generally under-confident about? Why?
  • When should you give a lot of weight to your existing beliefs?
  • When should we trust common sense?
  • Does power posing have any effect?
  • Are resumes worthless?
  • Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
  • What’s the probability that China and the US go to war in the 21st century?
  • How should we treat claims of expertise on nutrition?
  • Why were Spencer’s friends suspicious of Theranos for years?
  • How should we think about the placebo effect?
  • Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Highlights

The other way I see it happen is the people around you in your social circle only accept, as a community, certain intrinsic values, right? So, you think “I’m only supposed to have certain intrinsic values”, but you actually have other ones. So what you try to do is recast your other intrinsic values in terms of these ones that are acceptable or valid. An example of this would be when someone says something like, “Well, the reason I cultivate friendship is because it makes me more effective.” I think what’s going on there very often is a kind of self-deception where people feel like they’re not supposed to have these intrinsic values, so they kind of trick themselves into thinking that they don’t have other intrinsic values.

This leads to what I think is a really important and subtle point: whatever you believe is objectively true about value in the universe, and whatever you believe are the right values you’re supposed to have according to your social group, those things are independent from what your current intrinsic values are. Your current intrinsic values are a psychological fact. Like, a scientist could study your intrinsic values and answer the question. It’s a fact about yourself. It’s not a fact about the universe. I think it’s very important to draw that distinction and say you might believe in objective moral truth and you might believe you’ve figured out what it is, but it doesn’t mean that’s what your intrinsic values are right now. Maybe you aspire to make your intrinsic values match them more closely, but they’re probably not there yet, and if you don’t draw that distinction, you might end up having this very bizarre doublethink where you basically deceive yourself and create these weird, potentially harmful psychological effects.

So if you think about the marathon [example], people might think it’s difficult, which could cause them to be underconfident, they may not view it as part of their personality or character potentially, so that could explain why they might be underconfident. People are not experienced so that could explain why they’re underconfident. They might say it’s not a matter of personal opinion so that could explain why they’re underconfident and so on. So you see, the marathon one lines up pretty well with a bunch of these traits, actually, to explain why people might be underconfident.

Bayesianism is [a] probabilistic, mathematical theory of how much to change your beliefs based on evidence. The way I like to think about this is actually using an English language phrase that we like to call the Question of Evidence: how likely would I be to see this evidence if my hypothesis is true, compared to if it’s false?

So let’s say you got a three to one ratio, like you’re three times more likely to see this evidence if your hypothesis is true than if it’s false, that gives you a moderate amount of evidence. If it’s 30 to one, you’re 30 times more likely to see this evidence if your hypothesis is true than if it’s false, that’s really strong evidence. If it’s just one, you’re as likely to see this evidence if your hypothesis is true as if it’s false, that’s no evidence, it actually doesn’t push you in any way. And then if it’s one in three, one third, then that pushes you in the opposite direction, it’s moderate evidence in the opposite direction. One in thirty would be strong evidence in the opposite direction.

So, I think what a lot of people don’t realize is that all these equations and so on can be very confusing, but there’s this English language sentence, which is the right way to say how strong evidence is. Other sentences that sound similar are actually not the right way to quantify evidence.
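
To make that concrete, here’s a short Python sketch of the Question of Evidence – an editorial illustration rather than anything from the interview, with made-up probabilities and a rough mapping onto Spencer’s verbal scale:

```python
# The Question of Evidence as a calculation: how likely is the evidence if the
# hypothesis is true, compared to if it's false? The probabilities below are
# made up, and the verbal labels are only a rough mapping onto the scale above
# (3:1 moderate, 30:1 strong, 1:1 no evidence, and the reciprocals against).

def bayes_factor(p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    return p_evidence_if_true / p_evidence_if_false

def describe(bf: float) -> str:
    if bf >= 30:
        return "strong evidence for the hypothesis"
    if bf >= 3:
        return "moderate evidence for the hypothesis"
    if bf > 1 / 3:
        return "little evidence either way"
    if bf > 1 / 30:
        return "moderate evidence against the hypothesis"
    return "strong evidence against the hypothesis"

# E.g. a second invitation seems 90% likely if Spencer is worth listening to,
# but only 3% likely if he isn't:
bf = bayes_factor(0.90, 0.03)
print(bf, describe(bf))  # 30.0 -> strong evidence for the hypothesis
```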

Very often when people are doing long, complex projects, they underestimate how long they’ll take. They’ll also often underestimate how much it will cost, how many resources they’ll use. There’s a bunch of theories around why this is. It’s commonly called the Planning Fallacy. One theory is that, when you’re thinking about a long, complex project, you know on some level that some things will go wrong, but it’s very hard to know what will go wrong. It’s gonna be sort of idiosyncratic. So your brain kind of smooths over it and says, “Well, this thing is probably not gonna go wrong and that thing’s probably not gonna go wrong,” and so each individual thing, you kind of assume it’s gonna go right. But of course, there’s a good chance something will go wrong that you never even thought of.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.