#86 – Hilary Greaves on Pascal’s mugging, strong longtermism, and whether existing can be good for us

Had World War 1 never happened, you might never have existed.

It’s very unlikely that the exact chain of events that led to your conception would have happened if the war hadn’t — so perhaps you wouldn’t have been born.

Would that mean that it’s better for you that World War 1 happened (regardless of whether it was better for the world overall)?

On the one hand, if you’re living a pretty good life, you might think the answer is yes – you get to live rather than not.

On the other hand, it sounds strange to say that it’s better for you to be alive, because if you’d never existed there’d be no you to be worse off. But if you wouldn’t be worse off if you hadn’t existed, can you be better off because you do?

In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to accept what she calls existence non-comparativism – i.e., the view that it can’t be better for someone to exist than not to exist.

Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn’t better for them, and thus, perhaps, that it’s not better at all.

This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn’t otherwise have existed) — which would affect how we try to make the world a better place.

Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out we would have no particular reason to be concerned.

Furthermore, it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.

This is our second episode with Professor Greaves. The first one was a big hit, so we thought we’d come back and dive into even more complex ethical issues.

We also discuss:

  • The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible
  • What it means for us to be ‘clueless’ about the consequences of our actions
  • Moral uncertainty — what we should do when we don’t know which moral theory is correct
  • Whether we should take a bet on a really small probability of a really great outcome
  • The field of global priorities research at the Global Priorities Institute and beyond

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Highlights

The case for strong longtermism

The basic argument arises from the fact that, at least on plausible empirical assessments, either humanity has a very long future ahead of it, or it might not, and there are things we can do now that would nontrivially change that probability. There are lots of different scenarios, each suggesting a different plausible ballpark number for how many people there’ll be in the future. But some of them — particularly possibilities that involve humanity spreading to settle other star systems — result in absolutely astronomical numbers of people, spreading on down the millennia. When you average across all of these possibilities, a plausible ballpark average estimate is something like 10 to the 15 future people, in expectation.
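To make that averaging step concrete, here is a toy calculation in Python. The scenario names, probabilities, and population figures below are invented placeholders rather than estimates from Greaves or the Global Priorities Institute; the point is only to show how a low-probability scenario with an astronomical population can dominate the expectation.

```python
# Toy illustration of averaging over future-population scenarios (all numbers invented).
scenarios = {
    # name: (probability, number of future people in that scenario)
    "extinction_within_centuries": (0.10, 1e10),
    "long_earthbound_future":      (0.80, 1e13),
    "interstellar_settlement":     (0.10, 1e16),
}

expected_future_people = sum(p * n for p, n in scenarios.values())
print(f"Expected number of future people: {expected_future_people:.2e}")
# ~1e15: the 10%-probability interstellar scenario alone contributes 1e15,
# dwarfing the contributions of the more likely scenarios.
```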

If you’re dealing with that absolutely enormous number of possible future people, then it’s very plausible… If you can do anything at all to nontrivially change how well-off those future people are — or if you can do anything at all to nontrivially change the probability that those future people get to exist — then in terms of expected value, doing that thing, making that kind of positive change to the expected course of the future, is going to compete very favorably with the best things that we could do to improve the near term. For example, if you can imagine an intervention that would improve things for the world’s poorest people now, and that would have no knock-on effects down the centuries, it’s plausible that something that would reduce extinction risk or improve the whole course of the very long-run future — even by just a tiny bit at every future time — would be even better than that.
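The comparison being gestured at can also be put in toy numbers. Everything below is an invented illustration, not a figure from the episode or the paper: the point is just that a tiny per-person improvement multiplied across roughly 10^15 expected future people can swamp a large benefit to a much smaller present population.

```python
# Toy expected-value comparison (all figures invented for illustration).
EXPECTED_FUTURE_PEOPLE = 1e15

# Hypothetical near-term intervention: a full "unit" of benefit for a billion
# people alive today, with no effects on future generations.
near_term_value = 1e9 * 1.0

# Hypothetical longtermist intervention: a 0.0001-unit average improvement
# (or an equivalent tiny reduction in extinction probability) spread across
# the expected future population.
long_term_value = EXPECTED_FUTURE_PEOPLE * 1e-4

print(f"Near-term intervention:   {near_term_value:.1e}")   # 1.0e+09
print(f"Longtermist intervention: {long_term_value:.1e}")   # 1.0e+11
```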

A thing that we hoped to surprise people with was the claim that, at least very plausibly, the truth of axiological strong longtermism is very robust to quite a lot of plausible variations in moral theory and decision theory. I think that when a lot of people think of longtermism, they primarily think of reducing risks of premature human extinction, and then they think, ‘Well, that’s only a big deal if you’re something like a total utilitarian.’ Whereas part of what we’re trying to press in this paper is that even if you completely set that aside, the longtermist thesis is still at least reasonably plausible, because there are very plausible things that you could do to influence future average wellbeing levels on a very long timescale, even without affecting the number of future people.

Longtermist interventions besides reducing extinction risk

From a more abstract point of view, the salient thing about human extinction, if you like, is that it’s a really good example of a locked-in change. So once human extinction happens, it’s extremely unlikely that we come back from that. So the effects of a human extinction event will persist on down the millennia. But once you realize that that’s a key part of why focusing on human extinction might be a plausible thing for a would-be longtermist to do, you can easily see that anything else that would also change the probabilities of some relevant lock-in event could do something that’s relevantly similar in evaluative terms.

So, for example, suppose there were some possibility of a political lock-in mechanism, where, say, either some extremely good or some extremely bad system of world governance got instituted. There might be reasons — maybe arising from the lack of competition with other countries, because we’re talking about a world government rather than a government of some particular country — to think there’s a non-trivial chance that such an international governance institution, once instituted, would persist basically indefinitely down the future of humanity. If there are things we could do now that would affect the probabilities that, content-wise, what that world government was up to was better rather than worse, that could be an example of the kind of trajectory change that isn’t about how many people there are in the future, but about how good the lives of those possible future people are.

And then besides political examples, you can imagine similar things might go on with value systems. Value systems exhibit a lot of path dependence. They tend to spread from one person to another. So, if there are things that we could do now that would affect which path gets taken, that could have similar effects. One possibility in this general vicinity that’s very salient here is the future course of history basically being determined by whatever value system is built into the artificial intelligence that assumes extreme amounts of power within the next century or two, if there is one. And if there are things we can do now to get better value systems built into such an artificial intelligence, that could be another example of this kind of trajectory change, since there’s a bunch of plausible reasons for thinking that the value system in an AI would be much less susceptible to change than the value system in a human being. (One reason being that artificial intelligences don’t have the same tendency to die that humans do.)

Why cluelessness isn't an objection to longtermism

So if you started off being confident that longtermism is true before thinking about cluelessness, then cluelessness should make you less confident of that. But also, if you started off thinking that short-termism is true before thinking about cluelessness, then cluelessness should make you less certain about that too. As the name suggests, cluelessness tends to make you less certain about things.

So in other words, it doesn’t seem like there’s an asymmetry that makes this specifically an objection to longtermism, rather than an objection to short-termism. It’s more an epistemic humility point. It should make us less certain about a lot of things. But if you’ve got this money and you have to spend it, it’s not clear that it will sway everybody in the anti-longtermist direction.

Theories of what to do under moral uncertainty

Probably the dominant option is the one that says: well, look, this is just another kind of uncertainty, and we already know how to treat uncertainty in general, namely with expected utility theory. So if you’ve got moral uncertainty, that just shows that each of your moral theories had better have some moral value function, and then what we’ll do under moral uncertainty is take the expected value of the moral values of our various possible actions. So that’s a pretty popular position.
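As a concrete sketch of that "maximize expected moral value" idea (often called maximizing expected choice-worthiness), here is a minimal Python illustration. The theories, credences, and value scores are invented for the example, and it quietly assumes the hard part: that the different theories' values have already been put on a common scale.

```python
# Minimal sketch of maximizing expected moral value under moral uncertainty.
# Credences and value scores are invented; real cases face the hard problem of
# putting different theories' values on one scale.
credences = {"total_utilitarianism": 0.5, "average_utilitarianism": 0.3, "deontology": 0.2}

# Hypothetical moral value each theory assigns to each available action.
values = {
    "action_A": {"total_utilitarianism": 100, "average_utilitarianism": 40, "deontology": 10},
    "action_B": {"total_utilitarianism": 30,  "average_utilitarianism": 50, "deontology": 60},
}

def expected_moral_value(action: str) -> float:
    """Credence-weighted average of the value each theory assigns to the action."""
    return sum(credences[t] * values[action][t] for t in credences)

best = max(values, key=expected_moral_value)
print(best, {a: expected_moral_value(a) for a in values})
# action_A: 0.5*100 + 0.3*40 + 0.2*10 = 64;  action_B: 15 + 15 + 12 = 42
```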

I’ll briefly mention two others. One because it’s also reasonably popular in the literature. This is the so-called ‘my favorite theory’ option, where you say, ‘Okay, you’ve got moral uncertainty, but nonetheless you’ve probably got a favorite theory.’ That is to say, you’ve got one theory that you place more credence in than any other theory taken individually. What you should do under moral uncertainty, according to this approach, is just whatever your favorite theory says. So pick your highest credence theory, and ignore all of the others.

And then secondly, the thing that I worked on recently-ish: instead of applying standard decision theory (that is, expected utility theory) to moral uncertainty, consider what happens if you apply bargaining theory to moral uncertainty. Bargaining theory is a bunch of tools originally designed for dealing with disagreements between different people. You could conceptualize what’s going on as different voices in your head, where the different moral theories that you have some credence in are like different people, and they’re bargaining with one another about what you, the agent, should do. So that motivated me to apply the tools of bargaining theory to the problem of moral uncertainty, and to see whether that led to anything distinctively different from what the maximized expected choice-worthiness approach says.
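As a rough sketch of the bargaining idea, here is a toy Nash-style bargaining calculation in Python. This is not the actual model from Greaves's work: the options, scores, choice of disagreement point (each theory's worst available option), and the use of credences as bargaining weights are all simplifying assumptions made up for the illustration.

```python
# Toy asymmetric Nash bargaining between moral theories (all numbers invented).
credences = {"theory_A": 0.6, "theory_B": 0.4}

# Hypothetical choice-worthiness each theory assigns to each option.
choiceworthiness = {
    "option_1": {"theory_A": 10, "theory_B": 2},
    "option_2": {"theory_A": 6,  "theory_B": 7},
    "option_3": {"theory_A": 4,  "theory_B": 9},
}

# Disagreement point: assume each theory falls back to its worst available option.
disagreement = {t: min(scores[t] for scores in choiceworthiness.values()) for t in credences}

def nash_product(option: str) -> float:
    """Product of each theory's gain over the disagreement point, weighted by credence."""
    product = 1.0
    for theory, credence in credences.items():
        gain = max(choiceworthiness[option][theory] - disagreement[theory], 0.0)
        product *= gain ** credence  # credence acts as the theory's bargaining weight
    return product

best = max(choiceworthiness, key=nash_product)
print(best, {o: round(nash_product(o), 3) for o in choiceworthiness})
# In this toy example the bargaining rule picks option_2, a compromise that gives
# both theories some gain over their fallback, whereas maximizing expected
# choice-worthiness with these numbers would pick option_1.
```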

Comparing existence and non-existence

The question we’re interested in is a question about whether some states of affairs can be better than other states of affairs for particular people. So the background distinction here is between what’s just better full stop, and what’s better for a particular person. So like a world in which I get all the cake might be better for me, even though it’s not better overall; it’s better to share things out.

Okay, so then the particular case of interest is where we’re comparing, say, the actual world — the actual state of affairs — with an alternative, merely possible state of affairs in which I was never born in the first place. Does it make any sense to say that the actual world is better for me than that other one, where I was never born? So what we call existence comparativism would say, ‘Yeah, that makes sense.’ And that can be true. If my actual life is better than nothing — if it’s a good life, I’m pretty well off, I’ve got a nice family, all kinds of nice things — then I feel lucky to be born. And that makes sense on this view because the actual state of affairs is better for me than one in which I was never born in the first place.

So I am pretty sympathetic to that view myself. But a lot of people think that view is just incoherent. There are a couple of arguments in the ethics literature that say, “Even if you do feel lucky to be born, you’re going to have to explain that feeling some other way, because it makes no sense to compare a state of affairs in which you exist to one in which you don’t exist in terms of how good they are for you.” It’s similar in flavor to the Epicurean idea that it’s not bad to die, because once you’re dead, it can no longer be bad for you. The difference is that in the Epicurean case, the claim makes sense whether or not it’s true: it makes sense by everybody’s lights to say that the actual world is better for me than one in which I die earlier, because at least at some time or other I exist in both of those worlds. But there’s supposed to be some special problem in a case where one of the worlds you’re comparing is one where the person was never born in the first place. The worry is that, in that case, the wellbeing subject needed for the comparison to make sense is absent.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

Subscribe here, or anywhere you get podcasts.

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.