#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism

Most people would rather have had 10 hours of painful surgery yesterday than have 1 hour of painful surgery coming up today. But as today's guest explains, this 'future bias' is harder to justify than it first appears.

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe and always successful, but it can take anywhere from one to ten hours. You can’t be knocked out for the operation, but because it’s so painful, you’ll be given a drug that makes you forget the experience.

You wake up, not remembering going to sleep. You ask the nurse if you’ve had the operation yet. They look at the foot of your bed, and see two different charts for two patients. They say “Well, you’re one of these two — but I’m not sure which one. One of them had an operation yesterday that lasted ten hours. The other is set to have a one-hour operation later today.”

So it’s either true that you already suffered for ten hours, or true that you’re about to suffer for one hour.

Which patient would you rather be?

Most people would be relieved to find out they’d already had the operation. Normally we prefer less pain to more, but in this case we prefer ten times as much pain, just because that pain would be in the past rather than the future.

Christian Tarsney, a philosopher at Oxford University’s Global Priorities Institute, has written a couple of papers about this ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences.

That probably sounds perfectly normal to you. But do we actually have good reasons to prefer to have our positive experiences in the future, and our negative experiences in the past?

One of Christian’s experiments found that when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about those experiences more — which suggests that our inability to affect the past is one reason why we feel mostly indifferent to it.

But he points out that if that was the main reason, then we should also be indifferent to inevitable future experiences — if you know for sure that something bad is going to happen to you tomorrow, you shouldn’t care about it. But if you found out you simply had to have a horribly painful operation tomorrow, it’s probably all you’d care about!

Another explanation for future bias is that we have this intuition that time is like a videotape, where the things that haven’t played yet are still on the way.

If your future experiences really are ahead of you rather than behind you, that makes it rational to care more about the future than the past. But Christian says that, even though he shares this intuition, it’s actually very hard to make the case for time having a direction.

It’s a live debate that’s playing out in the philosophy of time, as well as in physics. And Christian says that even if you could show that time had a direction, it would still be hard to explain why we should care more about the future than the past, at least in a way that doesn’t just sound like “Well, the past is in the past and the future is in the future”.

For Christian, there are two big practical implications of these past, present, and future ethical comparison cases.

The first is for altruists: If we care about whether current people’s goals are realised, then maybe we should care about the realisation of people’s past goals, including the goals of people who are now dead.

The second is more personal: If we can’t actually justify caring more about the future than the past, should we really worry about death any more than we worry about all the years we spent not existing before we were born?

Christian and Rob also cover several other big topics, including:

  • A possible solution to moral fanaticism, where you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome
  • How much of humanity’s resources we should spend on improving the long-term future
  • How large the expected value of the continued existence of Earth-originating civilization might be
  • How we should respond to uncertainty about the state of the world
  • The state of global priorities research
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Sofia Davis-Fogel

Highlights

Practical implications of past, present, and future ethical comparison cases

Christian Tarsney: I think there’s two things that are worth mentioning. One is altruistically significant, which is, if you think that one of the things we should care about as altruists is whether people’s desires or preferences are satisfied or whether people’s goals are realized, then one important question is, do we care about the realization of people’s past goals, including the goals of past people, people who are dead now? And if so, that might have various kinds of ethical significance. For instance, I think if I recall correctly, Toby Ord in The Precipice makes this point that well, past people are engaged in this great human project of trying to build and preserve human civilization. And if we allowed ourselves to go extinct, we would be letting them down or failing to carry on their project. And whether you think that that consideration has normative significance might depend on whether you think the past as a whole has normative significance.

Robert Wiblin: Yeah. That adds another wrinkle, which is that you could think the past matters, but perhaps if you only cared about experiences, say, then obviously people in the past can’t have different experiences because of things in the future, at least we think not. So you have to think that, even though the preference states they had in their minds are fixed in the past, it’s still good to actualize those preferences in the future, even though doing so can’t affect their minds in the past.

Christian Tarsney: Yeah, that’s right. So you could think that we should be future biased only with respect to experiences, and not with respect to preference satisfaction. But then that’s a little bit hard to square if you think that the justification for future bias is this deep metaphysical feature of time. If the past is dead and gone, well, why should that affect the importance of experiences but not preferences? Another reason why the bias towards the future might be practically interesting or significant to people, less from an altruistic standpoint than from a personal or individual standpoint, is this connection with our attitudes towards death, which is maybe the original context in which philosophers thought about the bias towards the future. So there’s this famous argument that goes back to Epicurus and Lucretius that says, look, the natural reason that people give for fearing death is that death marks the boundary of your life, and after you’re dead, you don’t get to have any more experiences, and that’s bad.

Christian Tarsney: But you could say exactly the same thing about birth, right? So before you were born, you didn’t have any experiences. And well, on the one hand, if you know that you’re going to die in five years, you might be very upset about that, but if you’re five years old and you know that five years ago you didn’t exist, people don’t tend to be very upset about that. And if you think that the past and the future should be on a par, that there is no fundamental asymmetry between those two directions in time, one conclusion that people have argued for is maybe we should be sanguine about the future, including sanguine about our own mortality, in the same way that we’re sanguine about the past and sanguine about the fact that we haven’t existed forever. Which I’m not sure if I can get myself into the headspace of really internalizing that attitude. But I think it’s a reasonably compelling argument and something that maybe some people can do better than I can.

Fanaticism

Christian Tarsney: Roughly the problem is that if you are an expected value maximizer, which means that when you’re making choices you just evaluate an option by taking all the possible outcomes and you assign them numeric values, the quantity of value or goodness that would be realized in this outcome, and then you just take a probability-weighted sum, the probability times the value for each of the possible outcomes, and add those all up and that tells you how good the option is…

Christian Tarsney: Well, if you make decisions like that, then you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome, or you can prefer certainty of a bad outcome over an option that gives you near certainty of a very good outcome, but just a tiny, tiny, tiny probability of an astronomically bad outcome. And a lot of people find this counterintuitive.
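
To make that concrete: the decision rule Christian is describing is ordinary expected value maximization. In symbols (our notation, not taken from the episode), for an option A with possible outcomes o_1, …, o_n:

$$\mathrm{EV}(A) = \sum_{i=1}^{n} p_i \, v(o_i)$$

Fanaticism gets its grip because a single term with a minuscule probability p_i but an astronomically large value v(o_i) can dominate the whole sum.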

Robert Wiblin: So the basic thing is that very unlikely outcomes that are massive in their magnitude, that would be much more important than the other outcomes in some sense, end up dominating the entire expected value calculation and dominating your decision, even though they’re incredibly improbable. And that just feels intuitively wrong and unappealing.

Christian Tarsney: Well, here’s an example that I find drives home the intuition. So suppose that you have the opportunity to really control the fate of the universe. You have two options, you have a safe option that will ensure that the universe contains, over its whole history, 1 trillion happy people with very good lives, or you have the option to take a gamble. And the way the gamble works is almost certainly the outcome will be very bad. So there’ll be 1 trillion unhappy people, or 1 trillion people with say hellish suffering, but there’s some teeny, teeny, tiny probability, say one in a googol, 10 to the 100, that you get a blank check where you can just produce any finite number of happy people you want. Just fill in a number.

Christian Tarsney: And if you’re trying to maximize the expected quantity of happiness or the expected number of happy people in the world, of course you want to do that second thing. But in addition to just the counterintuitiveness of it, there’s a thought like, well, what we care about is the actual outcome of our choices, not the expectation. And if you take the risky option and the thing that’s almost certainly going to happen happens, which is you get a very terrible outcome, the fact that it was good in expectation doesn’t give you any consolation, or doesn’t seem to retrospectively justify your choice at all.
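
As a rough back-of-the-envelope rendering of that example (the units are ours, purely for illustration: +1 per happy life, -1 per life of hellish suffering):

$$\mathrm{EV}(\text{safe}) = 10^{12}, \qquad \mathrm{EV}(\text{gamble}) \approx (1 - 10^{-100}) \cdot (-10^{12}) + 10^{-100} \cdot N$$

Because you get to fill in any finite N, choosing, say, N = 10^120 pushes the gamble's expected value to roughly 10^20, vastly above the safe option's 10^12, even though the gamble almost certainly ends in the hellish outcome.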

Stochastic dominance

Christian Tarsney: My own take on fanaticism and on decision making under risk, for whatever it’s worth, is fairly permissive. A weird and crazy view that I’m attracted to is that we’re only required to avoid choosing options that are what’s called first-order stochastically dominated, which means that you have two options, let’s call them option one and option two. And then there’s various possible outcomes that could result from either of those options. And for each of those outcomes, we ask what’s the probability if you choose option one or if you choose option two that you get not that outcome specifically, but an outcome that’s at least that good?

Christian Tarsney: Say option one for any possible outcome gives you a greater overall probability of an outcome at least that desirable, then that seems a pretty compelling reason to choose option one. Maybe a simple example would be helpful. Suppose that I’m going to flip a fair coin, and I offer you a choice between two tickets. One ticket will pay $1 if the coin lands heads and nothing if it lands tails, the other ticket will pay $2 if the coin lands tails, but nothing if it lands heads. So you don’t have what’s called state-wise dominance here, because if the coin lands heads then the first ticket gives you a better outcome, $1 rather than $0. But you do have stochastic dominance, because both tickets give you the same chance of at least $0, namely certainty, both tickets give you a 50% chance of at least $1, but the second ticket uniquely gives you a 50% chance of at least $2, and that seems a compelling argument for choosing it.
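
Here is a minimal sketch of that comparison in Python (the setup and names are ours, just encoding the two-ticket example above):

```python
# Two fair-coin tickets, one payoff per equally likely state: [heads, tails].
# Ticket one pays $1 on heads and $0 on tails; ticket two pays $0 on heads and $2 on tails.
ticket_one = [1, 0]
ticket_two = [0, 2]

def prob_at_least(payoffs, level):
    """Probability of an outcome at least as good as `level`,
    treating each listed state as equally likely."""
    return sum(1 for x in payoffs if x >= level) / len(payoffs)

# First-order stochastic dominance: ticket two dominates ticket one if, at every
# payoff level, it gives at least as high a probability of doing at least that well,
# and a strictly higher probability at some level.
levels = sorted(set(ticket_one + ticket_two))
weakly_better = all(prob_at_least(ticket_two, x) >= prob_at_least(ticket_one, x) for x in levels)
strictly_better_somewhere = any(prob_at_least(ticket_two, x) > prob_at_least(ticket_one, x) for x in levels)

print(weakly_better and strictly_better_somewhere)  # True: ticket two stochastically dominates
```

Note that state-wise dominance still fails here: in the heads state ticket one pays more, but ticket two's distribution of payoffs is at least as good at every level and strictly better at the $2 level.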

Robert Wiblin: I see. I guess, in a continuous case rather than a binary one, you would have to say, well, the worst case is better in, say, scenario two rather than scenario one. And the first percentile case is better, and the second percentile case, and the median is better or at least as good, and the best case scenario is also as good or better. And so across the whole distribution of outcomes from worst to best, adding up the probabilities as percentiles, the second scenario is always equal or better. And so it would seem crazy to choose the option that is always equally good or worse, no matter how lucky you get.

Christian Tarsney: Right. Even though there are states of the world where the stochastically dominant option will turn out worse, nevertheless the distribution of possible outcomes is better.

Robert Wiblin: Okay. So you’re saying if you compare the scenario where you get unlucky in scenario two versus lucky in scenario one, scenario one could end up better. But ex-ante, before you know whether you got lucky with the outcome or not, it was worse at every point.

Christian Tarsney: Yeah, exactly.

The scope of longtermism

Christian Tarsney: There are two motivations for thinking about this. One is a worry that I think a lot of people have — certainly a lot of philosophers have — about longtermism, which is that it has this flavor of demanding extreme sacrifices from us. That maybe, for instance, if we really assign the same moral significance to the welfare of people in the very distant future, what that will require us to do is just work our fingers to the bone and give up all of our pleasures and leisure pursuits in order to maximize the probability at the eighth decimal place or something like that of humanity having a very good future.

Christian Tarsney: And this is actually a classic argument in economics too, that the reason that you need a discount rate, and more particularly, the reason why you need a rate of pure time preference, why you need to care about the further future less just because it’s the further future, is that otherwise you end up with these unreasonable conclusions about what the savings rate should be.

Robert Wiblin: Effectively we should invest everything in the future and kind of consume nothing now. It’d be like taking all of our GDP and just converting it into more factories to make factories kind of thing, rather than doing anything that we value today.

Christian Tarsney: Yeah, exactly. Both in philosophy and in economics, people have thought, surely you can’t demand that much of the present generation. And so one thing we wanted to think about is, how much does longtermism, or how much does a sort of temporal neutrality, no rate of pure time preference actually demand of the present generation in practice? But the other question we wanted to think about is, insofar as the thing that we’re trying to do in global priorities research, in thinking about cause prioritization, is find the most important things and draw a circle around them and say, “This is what humanity should be focusing on,” is longtermism the right circle to draw?

Christian Tarsney: Or is it maybe the case that there’s a couple of things that we can productively do to improve the far future, for instance reduce existential risks, and maybe we can try to improve institutional decision making in certain ways, and other ways of improving the far future, well, either there’s just not that much we can do or all we can do is try to make the present better in intuitive ways. Produce more fair, just, equal societies and hope that they make better decisions in future.

Robert Wiblin: Improve education.

Christian Tarsney: Yeah, exactly. Where the more useful thing to say is not we should be optimizing the far future, but this more specific thing, okay we should be trying to minimize existential risks and improve the quality of decision making in national and global political institutions, or something like that.

The value of the future

Christian Tarsney: There is this kind of outside view perspective that says if we want to form rational expectations about the value of the future, we should just think about the value of the present and look for trend lines over time. And then you might look at, for instance, the Steven Pinker stuff about declines in violence, or look at trends in global happiness. But you might also think about things like factory farming, and reach the conclusion that actually, even though human beings have been getting both more numerous and better off over time, the net effect of human civilization has been getting worse and worse and worse, as we farm more and more chickens or something like that.

Christian Tarsney: I’ll say, for my part, I’m a little bit skeptical about how much we can learn from this, because outside view, extrapolative reasoning makes sense when you expect to remain in roughly the same regime for the time frame that you’re interested in. But I think there’s all sorts of reasons why we shouldn’t expect that. For instance, there’s the problem of converting wealth into happiness that we just haven’t really mastered, because, well, maybe we don’t have good enough drugs or something like that. We know how to convert humanity’s wealth and resources into cars. But we don’t know how to make people happy that they own a car, or as happy as they should be, or something like that.

Christian Tarsney: But that’s in principle a solvable problem. Maybe it’s just getting the right drugs, or the right kinds of psychotherapy, or something like that. And in the long term it seems very probable to me that we’ll eventually solve that problem. And then there’s other kinds of cases where the outside view reasoning just looks kind of clearly like it’s pointing you in the wrong direction. For instance, maybe the net value of human civilization has been trending really positively. Humanity has been a big win for the world just because we’re destroying so much habitat that we’re crowding out wild animals who would otherwise be living lives of horrible suffering. But obviously that trendline is bounded. We can’t create negative amounts of wilderness. And so if that’s the thing that’s driving the trendline, you don’t want to extrapolate that out to the year 1 billion or something and say, “Well, things will be awesome in 1 billion years.”

Externalism, internalism, and moral uncertainty

Christian Tarsney: Yeah, so unfortunately, internalism and externalism mean about 75 different things in philosophy. This particular internalism and externalism distinction was coined by a philosopher named Brian Weatherson. The way that he conceives the distinction, or maybe my paraphrase of the way he conceives the distinction, is basically an internalist is someone who says normative principles, ethical principles, for instance, only kind of have normative authority over you to the extent that you believe them. Maybe there’s an ethical truth out there, but if you justifiably believe some other ethical theory, some false ethical theory, well, of course the thing for you to do is go with your normative beliefs. Do the thing that you believe to be right.

Christian Tarsney: Whereas externalists think at least some normative principles, maybe all normative principles, have their authority unconditionally. It doesn’t depend on your beliefs. For instance, take the trolley problem. Should I kill one innocent person to save five innocent people? The internalist says suppose the right answer is you should kill the one to save the five, but you’ve just read a lot of Kant and Foot and Thomson and so forth and you become very convinced, maybe in this particular variant of the trolley problem at least, that the right thing to do is to not kill the one, and to let the five die. Well, clearly there is some sense in which you should do the thing that you believe to be right. Because what other guide could you have, other than your own beliefs? Versus the externalist says well, if the right thing to do is kill the one and save the five, then that’s the right thing to do, what else is there to say about it?

Robert Wiblin: Yeah. Can you tie back what those different views might imply about how you would resolve the issue of moral uncertainty?

Christian Tarsney: The externalist, at least the most extreme externalist, basically says that there is no issue of moral uncertainty. What you ought to do is the thing that the true moral theory tells you to do. And it doesn’t matter if you don’t believe the true moral theory, or you’re uncertain about it. And the internalist of course is the one who says well no, if you’re uncertain, you have to account for that uncertainty somehow. And the most extreme internalist is someone who says that whenever you’re uncertain between two normative principles, you need to go looking for some higher-order normative principle that tells you how to handle that uncertainty.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
