
You might think, OK, I know that the immediate effects of funding anti-malarial bed nets are positive – I know that I’m going to save lives. But I also know that there are going to be further downstream effects and side-effects of my intervention. For example, effects on the size of future populations. It’s notoriously unclear how to think about the value of future population size, whether it’ll be a good thing to increase population in the short term, or whether that would in the end be a bad thing. There are lots of uncertainties here.

Hilary Greaves

The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush past the people now waiting to get it back, or just accept this as a dollar you're never going to see again? According to philosophy professor Hilary Greaves – Director of Oxford University's Global Priorities Institute, which is hiring now – this simple decision will completely change the long-term future by altering the identities of almost all future generations.

How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day — including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child.

By causing a child to be conceived a fraction of a second earlier or later, you change which sperm fertilizes the egg, resulting in a totally different person. So asking for that $1 has now made the difference between everything this actual child will do in their life and everything the merely possible child – the one who will never exist because of what you did – would have done had you decided not to worry about it.

As that child's actions ripple out to everyone who conceives children down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or prevent one from being born? Probably both!

Some find this concerning: if the actual long-term effects of your decisions are this unpredictable, it looks like you're totally clueless about which actions will lead to the best outcomes. That might produce decision paralysis, leaving you unable to justify taking any action at all.

Prof Greaves doesn't share this concern for most real-life decisions. If there's no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less canceled out by the equally plausible possibility that you might make things worse in equally unpredictable ways.
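One way to make this cancellation precise is to split an action's expected value into foreseeable and unforeseeable parts. The notation below is ours, a sketch rather than a formula from the episode:

```latex
% Sketch (our notation): V(a) = total value if action a is taken,
% F(a) = its foreseeable effects, U(a) = its unforeseeable downstream effects.
\mathbb{E}[V(a)] = F(a) + \mathbb{E}[U(a)]
% 'Simple' cluelessness: every story on which a's hidden effects are good
% has an equally plausible mirror story on which they are bad, so the
% unforeseeable term is symmetric around zero and drops out:
\mathbb{E}[U(a)] = 0 \quad\Longrightarrow\quad \mathbb{E}[V(a)] = F(a)
```

If that symmetry holds, ranking actions by their foreseeable effects is exactly what expected value recommends, and the unpredictable ripples give no reason for paralysis.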

But if instead we're talking about a decision that involves highly structured, systematic reasons for thinking your action will have a general tendency to make things better or worse (for example, a decision that increases economic growth), Prof Greaves says we don't get to just ignore the unforeseeable effects.

When there are complex arguments on both sides, it's unclear what probabilities you should assign to each claim. Yet, given the stakes, whether you should take the action in question really does depend on pinning those numbers down.
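A toy example, with numbers of our own rather than from the episode, shows why this is harder than the simple case:

```latex
% Toy setup (assumed numbers): the action yields benefit B if the systematic
% story holds, which you believe with credence p, and harm C otherwise.
\mathbb{E}[V] = p\,B - (1 - p)\,C
% With B = C, the action is good exactly when p > 1/2. If the complex
% arguments on each side only pin p down to an interval, say
p \in [0.4,\ 0.6]
% then the interval straddles the break-even point, and the sign of
% \mathbb{E}[V], and with it whether to act, is left genuinely undetermined.
```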

So, what do we do?

Today’s episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover:

  • What’s the long term vision of the Global Priorities Institute?
  • How controversial is the multiverse interpretation of quantum physics?
  • What’s the best argument against academics just doing whatever they’re interested in?
  • How strong is the case for long-termism? What are the best opposing arguments?
  • Are economists getting convinced by philosophers on discount rates?
  • Given moral uncertainty, how should population ethics affect our real life decisions?
  • How should we think about archetypal decision theory problems?
  • The value of exploratory vs. basic research
  • Person-affecting views of population ethics, fragile identities of future generations, and the non-identity problem
  • Is Derek Parfit’s repugnant conclusion really repugnant? What’s the best vision of a life barely worth living?
  • What are the consequences of cluelessness for those who base their donations on GiveWell-style recommendations?
  • How could reducing global catastrophic risk be a good cause for risk-averse people?
  • What’s the core difficulty in forming proper credences?
  • The value of subjecting EA ideas to academic scrutiny
  • The influence of academia in society
  • The merits of interdisciplinary work
  • The case for why operations is so important in academia
  • The trade-off between working on important problems and advancing your career

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Highlights

There are at least three interestingly different types of thing that would count as a life barely worth living, and which one of these somebody has in mind might make a big difference to their intuitions about how bad the repugnant conclusion is. The one that springs most easily to mind is a very drab existence: you live for a normal length of time, maybe 80 years, and at every point you're enjoying some mild pleasures, but nothing special is happening and nothing especially bad is happening.

Parfit uses the phrase ‘muzak and potatoes’, like you’re listening to some really bad music, and you have a kind of adequate but really boring diet. That’s basically all that’s going on in your life. Maybe you get some small pleasure from eating these potatoes, but it’s not very much. There’s that kind of drab life.

A completely different thing that might count as a life barely worth living is an extremely short life. Suppose you live a life that's pretty good while it lasts, but it only lasts for one second. Then you haven't got time to clock up very much goodness, so that life is probably only barely worth living.

Alternatively, you could live a life of massive ups and downs: lots of absolutely amazing, fantastic things, lots of absolutely terrible, painful, torturous things, with the balance between the two working out so that the net sum is just positive. That would also count as a life barely worth living. And it's not clear that the repugnant conclusion is equally repugnant across these three very different ways of thinking about what a life barely worth living actually amounts to.
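For that third type, the underlying point is simple arithmetic: each side of the ledger can be enormous while the net sum sits only just above zero. With toy numbers of our own, not Parfit's:

```latex
% Illustrative totals (ours): massive ups and downs, barely positive net sum.
\underbrace{+10{,}000}_{\text{amazing things}} \;+\; \underbrace{(-9{,}999)}_{\text{terrible things}} \;=\; +1 \;>\; 0
```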

If you think, “I’m risk-averse with respect to the difference that I make, so I really want to be certain that I, in fact, make a difference to how well the world goes,” then it’s going to be a really bad idea by your lights to work on extinction risk mitigation, because either humanity is going to go extinct prematurely or it isn’t. What’s the chance that your contribution to the mitigation effort turns out to tip the balance? Well, it’s minuscule.

If you really want to do something in even the rough ballpark of maximizing the probability that you make some difference, then don't work on extinction risk mitigation. But that line of reasoning only makes sense if the thing you're risk-averse with respect to is the difference that you make to how well the world goes. What we normally mean when we talk about risk aversion is something different: not risk aversion with respect to the difference I make, but risk aversion with respect to something like how much value there is in the universe.

If you're risk-averse in that sense, then you place more emphasis on avoiding very bad outcomes than somebody who is risk-neutral. It's not at all counterintuitive then, I would have thought, that you're going to be more in favor of extinction risk mitigation.
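A toy calculation can make the contrast vivid. Every number below, and the square-root utility, is an illustrative assumption rather than a figure from the episode; `delta` stands for the effect of a whole mitigation effort, not the minuscule chance that any one person tips the balance.

```python
# Toy model of the two senses of risk aversion discussed above.
# All numbers are illustrative assumptions, not figures from the episode.
import math

p_extinction = 0.10   # assumed baseline probability of premature extinction
v_survival   = 100.0  # value of the world if humanity survives
v_extinction = 0.0    # value of the world if it doesn't

delta = 0.05  # assumed: mitigation cuts extinction probability by this much
g     = 10.0  # assumed: the rival option adds this much value, given survival

def gain(option, u):
    """Expected-utility gain of each option under utility function u."""
    if option == "mitigation":       # lower the probability of the bad outcome
        return delta * (u(v_survival) - u(v_extinction))
    else:                            # make the good outcome somewhat better
        return (1 - p_extinction) * (u(v_survival + g) - u(v_survival))

risk_neutral = lambda v: v             # linear: cares only about expected value
risk_averse  = lambda v: math.sqrt(v)  # concave: weights bad outcomes more heavily

for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
    a, b = gain("mitigation", u), gain("improvement", u)
    print(f"{name:13s}: mitigation={a:.3f}, improvement={b:.3f} "
          f"-> prefers {'mitigation' if a > b else 'improvement'}")

# risk-neutral : mitigation=5.000, improvement=9.000 -> prefers improvement
# risk-averse  : mitigation=0.500, improvement=0.439 -> prefers mitigation
```

With linear utility the improvement option wins; with the concave utility the very same numbers favor mitigation. That is the sense in which risk aversion over how much value there is in the world, unlike risk aversion over the difference you personally make, pushes toward extinction risk work.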

So the basic prima facie problem is, if you say, “Okay, I’m going to do this quantum measurement and the world is going to split. So possible outcome A is going to happen on one branch of the universe, and possible outcome B is going to happen on the second branch of the universe.” Then of course it looks like you can no longer say, “The probability of outcome A happening is a half,” like you used to want to say, because “Look, he just told me. The probability of outcome A happening is one, just like the probability of outcome B happening.” They’re both going to happen definitely on some branch or other of the universe.

Many of us ended up thinking the right way to think about this is to take a step back and ask what we wanted, or needed, from the notion of probability in quantum mechanics in the first place. I convinced myself, at least, that we didn't in any particularly fundamental sense need the chance of outcome A happening to be a half. What we really needed was for it to be rational to assign weight one half to what would follow from outcome A happening, and for it to be rational to assign weight one half to what would follow if and where outcome B happened.

So if you have some measure over the set of actual future branches of the universe, and in a specifiable sense the outcome A branches have total measure one half and the outcome B branches have total measure one half, then, we ended up arguing, you've got everything you need from probability. This measure is enough, provided it plugs into decision theory in the right way.
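In symbols, the proposal sketched here runs roughly as follows. This is a standard Everettian gloss; the notation is ours rather than quoted from the episode:

```latex
% A measurement splits the quantum state into two branches:
|\psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
% Both outcomes occur, each on its own branch, so neither is 'the' result
% with probability 1/2. Instead the squared amplitudes give a measure over
% branches, and a rational agent evaluates an act by
\mathrm{EU}(\mathrm{act}) = |\alpha|^2\, U(\mathrm{act}, A) + |\beta|^2\, U(\mathrm{act}, B)
% For an equal-weight split, |\alpha|^2 = |\beta|^2 = 1/2: the branch measure
% does everything 'probability one half' used to do, once it plugs into
% decision theory this way.
```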


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.