#86 – Hilary Greaves on Pascal’s mugging, strong longtermism, and whether existing can be good for us
By Arden Koehler, Robert Wiblin and Keiran Harris · Published October 21st, 2020
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Rob's intro [00:00:00]
- 3.2 The interview begins [00:02:53]
- 3.3 The Case for Strong Longtermism [00:05:49]
- 3.4 Compatible moral views [00:20:03]
- 3.5 Defining cluelessness [00:39:26]
- 3.6 Why cluelessness isn't an objection to longtermism [00:51:05]
- 3.7 Theories of what to do under moral uncertainty [01:07:42]
- 3.8 Pascal's mugging [01:16:37]
- 3.9 Comparing Existence and Non-Existence [01:30:58]
- 3.10 Philosophers who reject existence comparativism [01:48:56]
- 3.11 Lives framework [02:01:52]
- 3.12 Global priorities research [02:09:25]
- 3.13 Rob's outro [02:24:15]
- 4 Learn more
- 5 Related episodes
Had World War 1 never happened, you might never have existed.
It’s very unlikely that the exact chain of events that led to your conception would have happened if the war hadn’t — so perhaps you wouldn’t have been born.
Would that mean that it’s better for you that World War 1 happened (regardless of whether it was better for the world overall)?
On the one hand, if you’re living a pretty good life, you might think the answer is yes – you get to live rather than not.
On the other hand, it sounds strange to say that it’s better for you to be alive, because if you’d never existed there’d be no you to be worse off. But if you wouldn’t be worse off if you hadn’t existed, can you be better off because you do?
In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks Rob and me through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e., that it can’t be better for someone to exist than not to.
Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn’t better for them, and thus, perhaps, that it’s not better at all.
This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn’t otherwise have existed) — which would affect how we try to make the world a better place.
Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out, we would have no particular reason to be concerned.
Furthermore, it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.
This is our second episode with Professor Greaves. The first one was a big hit, so we thought we’d come back and dive into even more complex ethical issues.
We also discuss:
- The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long-run future go as well as possible
- What it means for us to be ‘clueless’ about the consequences of our actions
- Moral uncertainty — what we should do when we don’t know which moral theory is correct
- Whether we should take a bet on a really small probability of a really great outcome
- The field of global priorities research at the Global Priorities Institute and beyond
Interested in applying this thinking to your career?
If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
Highlights
The case for strong longtermism
The basic argument arises from the fact that, at least on plausible empirical assessments, it is either the case that there’s a very long future for humanity, or it’s the case that there might not be a very long future for humanity, and there are things we can do now that would nontrivially change this probability. There are lots of different scenarios which would postulate different plausible ballpark numbers for how many people there’ll be in the future. But some of them — particularly possibilities that involve humanity spreading to settle other star systems — result in absolutely astronomical numbers of people, spreading on down the millennia. When you average across all of these possibilities, a plausible ballpark average estimate is something like 10 to the 15 future people, in expectation.
If you’re dealing with that absolutely enormous number of possible future people, then it’s very plausible… If you can do anything at all to nontrivially change how well-off those future people are — or if you can do anything at all to nontrivially change the probability that those future people get to exist — then in terms of expected value, doing that thing, making that kind of positive change to the expected course of the future, is going to compete very favorably with the best things that we could do to improve the near term. For example, if you can imagine an intervention that would improve things for the world’s poorest people now, and that would have no knock-on effects down the centuries, it’s plausible that something that would reduce extinction risk or improve the whole course of the very long-run future — even by just a tiny bit at every future time — would be even better than that.
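To make the arithmetic behind this concrete, here is a toy back-of-the-envelope comparison. All of the numbers are illustrative assumptions for this writeup, not figures from the paper:

```latex
% Toy expected-value comparison (all numbers assumed for illustration).
% Longtermist option: raise the probability of the 10^15-person future
% by \Delta p, with each person better off by \bar{w} = 1 unit of wellbeing.
\mathbb{E}[V_{\mathrm{long}}] = \Delta p \cdot N \cdot \bar{w}
  = 10^{-9} \times 10^{15} \times 1 = 10^{6}
% Near-term option: help n = 10^3 people by 1 unit each, with certainty.
\mathbb{E}[V_{\mathrm{short}}] = n \cdot \bar{w} = 10^{3} \times 1 = 10^{3}
```

Even with only a one-in-a-billion probability shift, the longtermist option comes out three orders of magnitude ahead in expectation, which is the sense in which it ‘competes very favorably’ with the near-term benchmark.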
A thing that we hoped to surprise people with was the claim that, at least very plausibly, the truth of axiological strong longtermism is very robust to quite a lot of plausible variations in moral theory and decision theory. I think that when a lot of people think of longtermism, they primarily think of reducing risks of premature human extinction, and then they think, ‘Well, that’s only a big deal if you’re something like a total utilitarian.’ Whereas, part of what we’re trying to press in this paper is, even if you completely set that aside, the longtermist thesis is still reasonably plausible, at least. Because there are, at least very plausibly, things that you could do to influence future average wellbeing levels across a very long-term scale, even without affecting numbers of future people.
Longtermist interventions besides reducing extinction risk
From a more abstract point of view, the salient thing about human extinction, if you like, is that it’s a really good example of a locked-in change. So once human extinction happens, it’s extremely unlikely that we come back from that. So the effects of a human extinction event will persist on down the millennia. But once you realize that that’s a key part of why focusing on human extinction might be a plausible thing for a would-be longtermist to do, you can easily see that anything else that would also change the probabilities of some relevant lock-in event could do something that’s relevantly similar in evaluative terms.
So, for example, consider the possibility of a political lock-in mechanism, where say either some extremely good or some extremely bad system of world governance got instituted. There are reasons — maybe arising from the lack of competition with other countries, because we’re talking about a world government rather than a government of some particular country — to think there would be a non-trivial chance that the given international governance institution, once instituted, would persist basically indefinitely down the future of humanity. If there are things we could do now that would affect the probabilities that, content-wise, what that world government was up to was better rather than worse, that could be an example of this kind of trajectory change that isn’t about how many people there are in the future, but how good the lives are of those possible future people.
And then besides political examples, you can imagine similar things might go on with value systems. Value systems exhibit a lot of path dependence. They tend to spread from one person to another. So, if there are things that we could do now that would affect which path gets taken, that could have similar effects. One possibility in this general vicinity that’s very salient here involves the possibility of the future course of history basically being determined by what value system is built into the artificial intelligence that assumes extreme amounts of power within the next century or two, if there is one. If there are things we can do now to get better value systems built into such an artificial intelligence, then there’s a bunch of plausible reasons for thinking that the value system in an AI would be much less susceptible to change than the value system in a human being. (One reason being that artificial intelligences don’t have the same tendency to die that humans do.)
Why cluelessness isn't an objection to longtermism
So if you started off being confident that longtermism is true before thinking about cluelessness, then cluelessness should make you less confident of that. But also, if you started off thinking that short-termism is true before thinking about cluelessness, then cluelessness should make you less certain about that too. As the name suggests, cluelessness tends to make you less certain about things.
So in other words, it doesn’t seem like there’s an asymmetry that makes this specifically an objection to longtermism, rather than an objection to short-termism. It’s more an epistemic humility point. It should make us less certain about a lot of things. But if you’ve got this money and you have to spend it, it’s not clear that it will sway everybody in the anti-longtermist direction.
Theories of what to do under moral uncertainty
Probably the dominant option is the one that says, well, look, this is just another kind of uncertainty and we already know how to treat uncertainty in general — that is to say, expected utility theory. So if you’ve got moral uncertainty, then that just shows that each of your moral theories had better have some moral value function. And then what we’ll do under moral uncertainty is take the expected value of the moral values of our various possible actions. So that’s a pretty popular position.
I’ll briefly mention two others. One because it’s also reasonably popular in the literature. This is the so-called ‘my favorite theory’ option, where you say, ‘Okay, you’ve got moral uncertainty, but nonetheless you’ve probably got a favorite theory.’ That is to say, you’ve got one theory that you place more credence in than any other theory taken individually. What you should do under moral uncertainty, according to this approach, is just whatever your favorite theory says. So pick your highest credence theory, and ignore all of the others.
And then secondly, the thing that I worked on recently-ish. Instead of applying standard decision theory — that is, expected utility theory — to moral uncertainty, consider what happens if you apply bargaining theory to moral uncertainty. Bargaining theory is a bunch of tools originally designed for dealing with disagreements between different people. You could conceptualize what’s going on as different voices in your head, where the different moral theories that you have some credence in are like different people — and they’re bargaining with one another about what you, the agent, should do. So that motivated me to apply the tools of bargaining theory to the problem of moral uncertainty, and to see whether that led to anything distinctively different from what the maximize-expected-choiceworthiness approach says.
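As a toy illustration of how the first two of these approaches can come apart, here is a sketch in Python. All credences and choiceworthiness numbers are invented for illustration (nothing here is from Hilary’s papers), and the setup assumes the theories’ values can be put on a common scale, which is itself philosophically contested:

```python
# Toy models of two decision rules under moral uncertainty.
# Credences and choiceworthiness values are invented for illustration,
# and assume intertheoretic comparability of value, which is contested.

credences = {"total_utilitarianism": 0.4, "deontology": 0.6}

# How choiceworthy each action is, according to each theory.
choiceworthiness = {
    "break_promise_to_save_life": {"total_utilitarianism": 100, "deontology": -10},
    "keep_promise":               {"total_utilitarianism": 0,   "deontology": 5},
}

def maximize_expected_choiceworthiness(credences, cw):
    """Pick the action with the highest credence-weighted choiceworthiness."""
    def expected(action):
        return sum(credences[theory] * cw[action][theory] for theory in credences)
    return max(cw, key=expected)

def my_favorite_theory(credences, cw):
    """Do whatever the single highest-credence theory recommends."""
    favorite = max(credences, key=credences.get)
    return max(cw, key=lambda action: cw[action][favorite])

print(maximize_expected_choiceworthiness(credences, choiceworthiness))
# -> break_promise_to_save_life  (0.4*100 + 0.6*(-10) = 34 beats 0.6*5 = 3)
print(my_favorite_theory(credences, choiceworthiness))
# -> keep_promise  (deontology has the most credence, and it prefers keeping it)
```

The same credences yield different verdicts under the two rules, which is part of why the choice of rule matters.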
Comparing existence and non-existence
The question we’re interested in is a question about whether some states of affairs can be better than other states of affairs for particular people. So the background distinction here is between what’s just better full stop, and what’s better for a particular person. So like a world in which I get all the cake might be better for me, even though it’s not better overall; it’s better to share things out.
Okay, so then the particular case of interest is what about if we’re comparing, say, the actual world — the actual state of affairs — with an alternative merely possible state of affairs, in which I was never born in the first place. Does it make any sense to say that the actual world is better for me than that other one, where I was never born? So what we call existence comparativism would say, ‘Yeah, that makes sense.’ And that can be true. If my actual life is better than nothing, if it’s a good life, I’m pretty well off, I’ve got a nice family, all kinds of nice things, then I feel lucky to be born. And that makes sense on this view because the actual state of affairs is better for me than one in which I was never born in the first place.
So I am pretty sympathetic to that view myself. But a lot of people think that view is just incoherent. So there’s a couple of arguments in the ethics literature that say, “Even if you do feel lucky to be born, you’re going to have to explain that feeling some other way because it makes no sense to compare a state of affairs in which you exist to one in which you don’t exist in terms of how good they are for you.” It’s similar in flavor to the Epicurean idea that it’s not bad to die because once you’re dead, it can no longer be bad for you. The difference is that in the Epicurean case, it makes sense whether or not it’s true. It makes sense by everybody’s lights to say that the actual world is better for me than one in which I die earlier, because at least at some time or other I exist in both of those worlds. But there’s supposed to be some special problem in a case where one of the worlds you’re comparing is one where the person was never born in the first place. There’s a worry that there’s an absence of the wellbeing subject that would be needed for that comparison to make sense in that case.
Articles, books, and other media discussed in the show
Hilary’s work
- The Case for Strong Longtermism with Will MacAskill
- Cluelessness
- A bargaining-theoretic approach to moral uncertainty with Owen Cotton-Barratt
- The parliamentary model of moral uncertainty (slides)
- Moral uncertainty about population ethics with Toby Ord
- Population axiology
- Optimum population size
- Discounting future health
- Discounting for public policy: A survey
- Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility with David Wallace
- Epistemic Decision Theory
- Hilary’s home page
Opportunities at the Global Priorities Institute
- Up to date list of all opportunities
- Early Career Conference Programme
- The Atkinson Scholarship
- The Parfit Scholarship
Everything else
- Moral uncertainty – towards a solution? by Nick Bostrom
- Rob Wiblin’s post about a sensible answer to the Pascal’s mugging paradox
Transcript
Rob’s intro [00:00:00]
Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.
Professor Hilary Greaves was a guest for episode 46 back in October 2018, and her episode is one of the most popular of all time (as well as one of the most useful – we’ve heard from several people who said it was a key factor in their making a positive career change).
Naturally, we were enthusiastic about having her on for a second time.
In this episode we talk about the case for strong longtermism, cluelessness, Hilary’s all-things-considered view of moral uncertainty, whether it can be better or worse for someone to exist rather than not, and the latest on the field of global priorities research.
Just a warning: a lot of philosophy language is thrown around in this episode – and unfortunately we don’t always stop to explain terms.
At the beginning we only very quickly give the definitions of ‘axiological strong longtermism’ and ‘deontic strong longtermism’ before going on to talk about them – so I thought I’d at least lay those out here.
Axiological strong longtermism is the view that in many if not most situations, the action that will have the best consequences in expectation is among those actions whose consequences on the very long-run future are in expectation best.
In other words, in order to find the action with the best expected consequences overall, you need to look at the actions whose expected effects on the long-run future are best and choose among them.
Deontic strong longtermism says that in many if not most situations, the action you ought morally to do is among those actions whose expected effects on the very long-run future are best. I.e., not only would doing so have the best expected consequences, it is also what you should do.
That matters because, on some approaches to ethics, it’s not always the case that you should do the thing that will produce the best consequences.
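For readers who like things schematic, here is one rough way to symbolize the axiological thesis (the notation is mine, not the paper’s):

```latex
% Rough schematic only; notation is the editor's, not the paper's.
% A = available actions; F \subseteq A is a small subset consisting of the
% options whose expected effects on the very long-run future are best.
\text{In a wide class of decision situations:}\quad
a^{\ast} \in \operatorname*{arg\,max}_{a \in A} \mathbb{E}\,[V(a)]
\;\Longrightarrow\; a^{\ast} \in F
```

Deontic strong longtermism makes the parallel claim with ‘what you ought morally to do’ in place of ‘best in expectation’.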
If you find these kinds of topics interesting but find this conversation challenging, I’d recommend going back and listening to episode number 46 – Professor Hilary Greaves on moral cluelessness, population ethics, & harnessing the brainpower of academia to tackle the most important research questions. That one will give you a lot of useful context.
Before we get to the interview, I just wanted to mention that EAGx Asia-Pacific is on the weekend of November the 20th and 21st.
It’s a virtual conference, streaming worldwide, and, in addition to showcasing some of the excellent projects and people in the Asia-Pacific region, it’s also a great opportunity to connect with people who can’t normally make it to conferences in Europe and the US.
The financial and other costs involved in attending are far, far lower than those for attending an EA Global event in person.
You can apply or get more information about that and other events at eaglobal.org.
Alright, without further ado – here’s myself and Arden interviewing Hilary Greaves.
The interview begins [00:02:53]
Robert Wiblin: Today, I’m speaking for the second time with Professor Hilary Greaves. Hilary is a Philosophy Professor at the University of Oxford and Director of the Global Priorities Institute there. Hilary’s first episode was much loved by listeners, hence this second interview. And, of course, her academic research interests could also scarcely be more relevant to 80,000 Hours, because they include the philosophy of effective altruism and global priorities setting, foundational issues in consequentialism, the debate between consequentialists and contractualists, issues in interpersonal aggregation, the interface between ethics and economics, and formal epistemology. So thanks for coming back on the podcast, Hilary.
Hilary Greaves: Thanks so much for a second invite.
Robert Wiblin: And I’m also joined again by Arden Koehler, who frankly is a lot better placed to have this interview than me as she recently completed her philosophy PhD at NYU. So welcome back Arden, and congratulations on finishing your thesis.
Arden Koehler: Excited to be here.
Robert Wiblin: All right. Hilary, I hope we’ll get to talk about your new work on whether it can be good or bad for a particular person not to exist, whether it could be bad for them specifically, and also your latest thinking on longtermism and moral uncertainty. But first, what have you been up to since you last appeared on the show two years ago?
Hilary Greaves: I’ll have to think back now to what’s happened in two years; it’s a long period of time. I guess, most recently I’ve been working on the paper with William MacAskill where we try to set out the case for what we call strong longtermism. More recently, working on a paper about the question you just mentioned: whether it can be better for a particular person that that person exists rather than not. And then besides that, of course, a lot of my time is taken up with running GPI. So the various issues involved in building an organization, bringing the people together, and so on.
Arden Koehler: So when Rob interviewed you two years ago, he asked if any research at the Global Priorities Institute had shifted your views on what sort of practical global priorities we ought to have. And you said the Institute was really new and you didn’t know how the investigations were going to turn out. And you said, “So, ask me in a year or two, what I’ve changed my mind on due to GPI.” So, what have you changed your mind on, if anything?
Hilary Greaves: So I guess I’ll give another annoying academic answer. Let me try and give a slightly different one to the one I gave last time. I think it just is the case with these big, important complex issues… It’s inevitably the case that when you start digging deeper, you discover all sorts of things that the important question depends on that you hadn’t realized it depended on. So I think the effect of the first year or two of research at GPI, for me, has been to get a clearer sense of what that landscape is, what the key arguments seem likely to be in each area, where more investigation is needed, as academics are always saying, but without necessarily being able to say, “Okay, here’s a question where previously I thought X and now I think not X.”
Hilary Greaves: So one example of that might be this issue that perhaps you’ll press me on later of how robust are various longtermist theses. So plausible variations in the background views that people have on, say, population ethics or decision theory. That’s something I hadn’t really thought about before starting to write the longtermism paper that I’ve been working on with Will. And now I’m in a position where I can see that there are lots of complex issues, but I’m not going to be able to tell you “I’ve resolved them. Here’s the answer.”
The Case for Strong Longtermism [00:05:49]
Robert Wiblin: All right. So the first thing I wanted to discuss is this paper you’ve alluded to, which you put out last year with Will, called “The Case for Strong Longtermism.” First up, what is strong longtermism, as defined in the paper, and I guess there’s two slightly different variants on it. One that you call axiological longtermism and deontic longtermism?
Hilary Greaves: In the paper, we define axiological longtermism to be, well axiological strong longtermism, sorry, to be the thesis that in a wide class of decision situations, the option that is best, ex-ante, is one that’s contained in some small subset of options whose effects on the very long-run future are best. So you’re asking what’s the difference between that and the deontic thesis?
So this distinction between axiology and deontology is one that’s bread and butter for moral philosophers, but I think a bit recherché for most non-philosophers. It’s about whether the key normative-sounding notion in the thing you’re saying is which actions are best or, instead, which actions are such that you should do them. And that’s a really important distinction for moral philosophers because on one approach to moral philosophy, namely consequentialism, those two things basically coincide with one another extensionally. So consequentialists think that in any given situation you should do X just in case X happens to be the thing that leads to the best consequences. But consequentialism is very controversial.
Hilary Greaves: And I’d say probably most moral philosophers are not consequentialists. They think that there are very important ways in which considerations of what we ought to do, or, in particular, what we’re morally obliged to do come apart from consideration of what’s best.
So in the paper we take on, first, the question of which actions are best and whether the best things we could do are longtermist things, if you like. And then only much later, after having said everything we can think of to say about axiological longtermism, do we turn to what this might imply, in the lights of some plausible nonconsequentialist moral theory, for considerations of what one should do or what one’s morally obliged to do.
Robert Wiblin: I guess that helps explain why I find it so hard to keep these two things separate in my mind, because I’m just such a consequentialist at heart that whenever I think about it I’m like, “Oh, which one is which? Are these things even different? Why would they be different?” But I guess, yeah. So if you’re not a consequentialist, then I guess something could produce better consequences, but you wouldn’t be required to do it. Or maybe even you shouldn’t do it. And so, it could be axiologically good but not deontologically good.
Hilary Greaves: Yeah. They probably wouldn’t say deontologically good. Not appropriate or something like that.
Robert Wiblin: I see. Yeah.
Arden Koehler: So, that’s axiological longtermism or axiological strong longtermism and deontic strong longtermism. People in the effective altruism community use the term longtermism now in various contexts. Are there any times when people talk about longtermism and you think, “That’s a different concept than the thing I’m talking about in this paper?”
Hilary Greaves: So I think the way we use the term axiological strong longtermism is roughly the thing that effective altruists are usually trying to talk about when they say longtermism. Although we formulate it in a way that people don’t often formulate it. So often amongst effective altruists you hear people say things like, “Oh longtermism is about whether or not most of the value of your actions lies in the far future or whether most of the value of your actions lies in the near future.” I think that just fails to talk about the thing people are actually trying to talk about though, which is why we don’t formulate it that way in the paper.
Hilary Greaves: So, for example — supposing there is some zero on the value scale, so that we even know what we mean when we talk about the value of this action, rather than just talking about differences between actions — even if you’ve got that kind of zero, it could well be that most of the value of my action lies in the far future, and yet it’s exactly the same amount of value for the far future for all the actions I have available to me. So, in that case, these considerations of the far future will be completely irrelevant for considerations of what I should do, whereas what effective altruists really take themselves to be getting at when they say longtermism is that this is really important for decisions. So that’s why we formulate things in terms of comparisons between actions, rather than just where the value lies for a single action.
Hilary Greaves: The other difference… So this is not so much about the way people in effective altruism use the term longtermism, but the reason why we put the term “strong” in our discussion, so strong longtermism, rather than just longtermism, is outside the effective altruism community, just longtermism by itself normally means something involving a much shorter time scale. So people will talk about longtermism, for instance, in the context of urging politicians to think beyond the next election cycle. Whereas when we say longtermism, we want to highlight a timescale that’s more like a million years rather than 10 years.
Arden Koehler: So you went with strong instead of really longtermism? Very longtermism?
Hilary Greaves: Yeah.
Robert Wiblin: I guess part of the goal of this paper was to clarify, or to define more precisely, a term that people often just throw around in a vague way. Do you feel like by the end of it, you were like, “Yeah. We’ve nailed this down. We have a good sense of what longtermism is now. We’re happy with our definition”? Or did you more feel, “Oh, wow. There’s actually lots of things that we learned that we don’t know, that we hadn’t even thought of before we started”?
Hilary Greaves: It was very interesting thinking through how we should define longtermism for the purpose of that paper. And we went through several iterations of that, throwing out lots of previous definitions before settling on this one. So, one thing that was interesting to me in that process was that the definition we eventually settled on was surprisingly messy. I think I went into the project expecting that there’d be a nice clean tidy thing we were going to talk about. But all this stuff about “contained in a small subset of options whose effects on the very long-run future are best”: well, how small is small? And we say that “in a wide class of decision situations” the best option has these features: well, how wide? But I think that’s just an example of the general phenomenon that, when you start forcing yourself to be very precise and careful, you realize what the other questions are that you face, that you hadn’t realized you previously faced.
Hilary Greaves: So, in a way, it’s a bit disappointing that the definition we settled on is so messy, because theoretical elegance is nice and this is a place where our paper doesn’t have it. But given that the situation actually is messy, I think it’s progress to at least recognize that.
Robert Wiblin: Yeah. I guess, I mean, it seems like there’s a lot of empirical stuff here and the empirical world is really messy. And we use terms all the time that have these kinds of vague boundaries, like “How wide is the thing?” or “How important does this have to be relative to other considerations to really matter?” So, I guess, it’s perhaps unsurprising that when you nail it down, it still has some of that vagueness that attaches to almost all common words that we use.
Hilary Greaves: And I think the empirical considerations relevant to evaluating longtermism are messy, but these are actually different considerations. So even if the empirical world were relatively neat, I think this kind of messiness would still remain. So, for example… One thing we considered saying, in making the definition of the longtermist thesis, was: the best option is almost always the one that’s the very best for the long-run future. But then we realized that that’s not actually something longtermists normally intend to claim, or normally should intend to claim. Because it could well be that, say, by making some slight adjustment to the second-best action, you could come up with something that’s just ever so, ever so slightly worse for the long term, but massively better for the short term, and people who call themselves longtermists don’t mean to be ruling out that there could be those kinds of trade-offs.
Arden Koehler: So that’s why you say it has to be contained in a relatively small set?
Hilary Greaves: Yeah. So the small set is there to allow those kinds of trade-offs to exist. But the idea is this can be very decision-guiding in a messy, complex world, because if this longtermist thesis is true, then you can start by just looking at a relatively small subset of those actions whose effects on the very long-run future are best. You know that your best option is in there somewhere. And now hopefully you have a tractable-sized set of options to consider in your search for the very best thing. And maybe we can do these trade-offs at that point. That was the spirit of the thesis.
Arden Koehler: So while we’re on the definition and its messiness (maybe this is not related to its messiness, I’m not exactly sure), one thing that I found interesting when I read the paper was that longtermism isn’t really a moral view in the way that philosophers sometimes talk about moral views, in the sense that it depends on a bunch of empirical stuff. So I think there was a sentence in the paper that was something like, “longtermism is true at the current margin.” Or like, “Given this sort of situation, longtermism is true, where maybe it wouldn’t be true in a different situation.” And that sort of distinguished it from moral theories, which are usually thought of as true in any situation.
Hilary Greaves: That’s right. It’s definitely not a moral theory. It’s more a piece of applied ethics or practical ethics or something like that, where what you say about this pressing practical issue may depend, at least in significant part, on what you think about moral theories properly so-called. But this thing is more about, “Given the correct moral theory, what should we do in particular decision situation X that we currently face?” So, I mean, it’s not an uncommon type of enterprise in moral philosophy more broadly construed, where moral philosophy more broadly construed includes what some people call applied ethics or practical ethics rather than literally just the question of which moral theory is true.
Robert Wiblin: Are there any ideas in the paper that might surprise our regular listeners of the show, who’ve probably heard longtermism talked about a bunch of times in vaguer terms? So anything that comes out of trying to be more concrete, more precise?
Hilary Greaves: I don’t know if this is something that comes out of trying to be more concrete and precise, but a thing that we maybe hoped to surprise people with was the claim that, at least very plausibly, the truth of axiological strong longtermism is very robust to quite a lot of plausible variations in moral theory and decision theory. Because I think a lot of people, when they think of longtermism, primarily think of reducing risks of premature human extinction, and then they think, “Well, that’s only a big deal if you’re something like a total utilitarian.” Whereas part of what we’re trying to press in this paper is that, even if you completely set aside the possibility of affecting chances of human extinction, the longtermist thesis is still reasonably plausible, at least. Because there are, at least very plausibly, things that you could do to influence future average wellbeing levels across a very long-term scale, even without affecting numbers of future people.
Arden Koehler: Okay. Before we get to that question of whether longtermism is robust to certain changes in your moral theory, can you just run us quickly through the case for longtermism in basic terms, or I guess strong axiological longtermism?
Hilary Greaves: Sure. The basic argument arises from the fact that, at least on plausible empirical assessments, it is either the case that there’s a very long future for humanity, so there’s an enormous number of future people, or it’s the case that there might or might not be one, and there are things that we can do now that would nontrivially change the probability that there’s an enormous number of future people. So there are lots of different scenarios here which would postulate different plausible ballpark numbers for how many people there’ll be in the future. But some of them, particularly possibilities that involve humanity spreading to settle other star systems, result in absolutely astronomical numbers of people, spreading on down the millennia. So in the paper, we come up with a ballpark estimate: when you average across all of these possibilities, maybe a plausible ballpark average estimate is something like 10 to the 15 future people, in expectation.
Hilary Greaves: And if you’re dealing with that absolutely enormous number of possible future people, then once you do your expected value theory, it’s very plausible… If you can do anything at all to nontrivially change how well off those future people are, or if you can do anything at all to nontrivially change the probability that those future people get to exist, then in terms of expected value, doing that thing, making that kind of positive change to the expected course of the future, is going to compete very favorably with the best things that we could do to improve the near term. For example, if you can imagine an intervention that would improve things for the world’s poorest people now, and that would have no knock-on effects down the centuries, it’s plausible that something that would reduce extinction risk or improve the whole course of the very long-run future, even by just a tiny bit at every future time, would be even better than that.
Robert Wiblin: If I understood correctly what you were saying earlier, there’s this problem where I guess a lot of people really closely associate reducing extinction risk and longtermism, as if these two things are synonymous. But, in fact, they are quite separate. It could be that longtermism is true, but that extinction isn’t plausible, and so instead people should focus on a different way of improving the long-term future. Can you elaborate on that a little bit: what the other options are, and whether you think they do matter in the world as it exists now?
Hilary Greaves: Okay. So from a more abstract point of view, the salient thing about human extinction, if you like, is that it’s a really good example of a locked-in change. So once human extinction happens, it’s extremely unlikely that we come back from that. So the effects of a human extinction event will persist on down the millennia. But once you realize that that’s a key part of why focusing on human extinction might be a plausible thing for a would-be longtermist to do, you can easily see that anything else that would also change the probabilities of some relevant lock-in event could do something that’s relevantly similar in evaluative terms.
Hilary Greaves: So, for example, consider the possibility of a political lock-in mechanism, where say either some extremely good or some extremely bad system of world governance got instituted. There are reasons, maybe arising from the lack of competition with other countries (because we’re talking about a world government rather than a government of some particular country), to think there would be a non-trivial chance that the given international governance institution, once instituted, would persist basically indefinitely down the future of humanity. Then if there are things we could do now that would affect the probabilities that, content-wise, what that world government was up to was better rather than worse, that could be an example of this kind of trajectory change that isn’t about how many people there are in the future; it’s rather about how good the lives are of those possible future people.
Hilary Greaves: And then besides political examples, you can imagine similar things might go on with value systems. Value systems exhibit a lot of path dependence. They tend to spread from one person to another. So, if there are things that we could do now that would affect which path gets taken, that could have similar effects. And one possibility in this general vicinity that’s very salient here involves the possibility of the future course of history basically being determined by what value system is built into the artificial intelligence, if there is one, that assumes extreme amounts of power within the next century or two. If there are things we can do now to get better, rather than worse, value systems built into such an artificial intelligence, then there’s a bunch of plausible reasons for thinking that the value system in an AI would be much less susceptible to change than the value system in a human being. For instance, artificial intelligences don’t have the same tendency to die that humans do.
Compatible moral views [00:20:03]
Arden Koehler: Okay. So you mentioned earlier that you argued that longtermism was robust to various changes that you might make in the underlying moral view. So I think people usually think of longtermism as, or the argument is simplest when it comes to a totalist axiology and a view where you just sort of maximize expected value. Can you talk about some ways that you might be able to deviate from those underlying moral views and what that does to the thesis?
Hilary Greaves: Sure. I mean, I think in the case of totalism, we basically already talked about it because as soon as you say, “We’re not only considering affecting the probability of premature human extinction, we’re also considering ways in which you can improve average future wellbeing while holding fixed numbers.” That second thing is a thing that any plausible population axiology is going to care about. Everyone agrees if these people are going to exist either way, it’s better for them to be happier rather than less happy and so forth.
Hilary Greaves: So insofar as there are plausible routes for longtermist interventions whose cost-effectiveness competes favorably with that of what we might think of as the best short-term interventions, that would be what underwrites the case for thinking that it’s not just about totalism, and that basically whatever you think about population axiology is also going to underwrite a longtermist argument. I do agree that the argument’s simplest if you presuppose totalism, because I think the discussion of what we could actually do, and be sufficiently confident that it’s cost-effective, is significantly thornier when you’re talking about affecting future average wellbeing than when you’re talking about affecting future numbers.
Robert Wiblin: Yeah. What are some other moral views or moral philosophies that people might think don’t lead to strong longtermism, but actually are consistent with it?
Hilary Greaves: So people often say, “Well, what about prioritarianism?” So if I’m more concerned about affecting the fate of the very worst-off, then you might think, surely then I should be focusing on global poverty today because look, the world’s getting richer, everybody in the future is going to be vastly better off than people today. So even if I can increase the number of units of wellbeing that they have, in aggregate, by more than I can by a short-termist intervention, you might think prioritarianism imposes weighting factors that mean what’s actually the better thing to do is the short-termist thing.
Hilary Greaves: So that sort of sounds quite plausible, insofar as we’re talking about this kind of nice, simple economic model where you have economic growth, wellbeing just depends on GDP, and things get better over time. But actually, in many of the cases that we’re concerned about when we’re considering being longtermist or considering longtermist interventions, we’re not talking about cases where the future people who would most obviously be affected by our actions are people who are enormously well off and we’re making them just a little bit better off. We’re talking about cases, often, where things have gone really badly wrong: a totalitarian world government, an artificial intelligence takeover gone wrong, or something like that. And those are plausibly cases where the future people are already vastly worse off than present people. In which case, of course, prioritarianism would just strengthen the case for longtermism rather than undermining it.
Robert Wiblin: So, I guess, yeah. That ends up depending, I suppose, on how likely you think it is that in the future we’ll end up with lots of people with low welfare? If you also brought in this idea that, “Wow, people in the future must be better off”, then I guess prioritarianism does point against it. But that assumption is maybe just unjustified empirically.
Hilary Greaves: I think it’s not precisely that that it depends on, actually. I think it’s about which possible futures your would-be longtermist intervention is targeting, right? So, I could, for instance, think it’s quite unlikely people will be much worse off in the future than they are today, but that there’s a non-zero probability of that happening. And what my intervention is going to do is make things maybe only a little bit, but still significantly, better for those future people in that history where things have gone badly wrong. So that would be a case where I could think very bad futures are unlikely, but still, having gone prioritarian will make me more favorably inclined towards the longtermist intervention I was considering.
Arden Koehler: Are there any longtermist interventions that stick out as particularly good by prioritarian lights? Or particularly not good? Like, “Oh, they only make sense in the worlds where everyone really is better off in the future.”
Hilary Greaves: I guess related to what we just said, they will tend to make you more favorably inclined to ones that focus on really bad future outcomes, rather than making things a little bit better when actually they’re already quite okay. So maybe more focused on what people call s-risks, that sort of thing.
Robert Wiblin: All right. Pushing a bit further away from consequentialist theories, are there any other philosophical theories that you think are surprisingly consistent with strong longtermism?
Hilary Greaves: I’m not sure who would be surprised. I guess I’ll let other people say if they’re surprised or not. But I think that the interesting bit of the discussion here is: okay, if I’m not a consequentialist, if I’ve got my favorite nonconsequentialist (sometimes called deontological) theory off the shelf of my moral philosophy class, how is that going to change matters when I’m actually deciding what is morally appropriate for me to do, or what I’m morally obliged to do? So we explore an argument to the effect that it’s not going to make that much difference. And roughly our argument is: it seems likely that if axiological strong longtermism is true at all, then probably it’s going to be true by a large margin. That is to say, the best things you could do that are aimed at influencing the course of the very far future are not just a tiny bit better than the best short-termist-motivated things you could do; they’re actually a lot better, maybe many orders of magnitude better.
Hilary Greaves: Why do we think that? Well, just because of some kind of anti-fine-tuning argument. It seems like it would be a strange coincidence if these things happened to end up at the same order of magnitude when the underlying considerations that are going on are so radically different. So, suppose that’s true; suppose this conditional holds: if longtermism, then longtermism by a really large margin. Then in order for a nonconsequentialist theory to, if you like, undermine the longtermist conclusion, and say that what you should do is more of a short-termist thing, it would have to say that, “Even when the amount that’s axiologically at stake is enormous, you should still go with your favorite nonconsequentialist principles rather than with considerations of what’s better.” And there just aren’t that many nonconsequentialist theories that are willing to say that. And I think it’s for quite a good reason.
Hilary Greaves: Consider a simple example. Suppose you’ve promised to meet a friend for lunch. Now, in case one, after promising to meet the friend for lunch, you get a better offer. Somebody says, “How about you come to this really fun party instead?” Most people will feel that in that case, even if the outcome of going to the party would be better (because the party will be so fun and all that stuff), the moral injunction to keep your promise is quite strong. You should meet the friend for lunch, stuff the party. But if you really raise the axiological stakes, so that what would be gained by breaking the promise is not just going to a party, but saving a life or saving the world or something like that, then it feels intuitively (and I think a lot of nonconsequentialists would agree) that the nonconsequentialist scruples become decision-irrelevant at that point. Yes, it’s in some sense regrettable that you broke a promise, but still, what you should do is save a life, or whatever the thing is that would generate the massively better outcomes.
Hilary Greaves: So, in other words, if this conditional (if longtermism, then longtermism by a long way) holds, then the only kind of nonconsequentialist theory that’s going to disagree with deontic longtermism looks like it’s going to have to be one that says the axiological stakes don’t matter: the nonconsequentialist principles are absolute. And that seems quite hard to defend.
Robert Wiblin: Are there people who defend that? Are there serious philosophers who, yeah, take that view?
Hilary Greaves: I’ll let you decide who the serious philosophers are. I mean, a view like that is often ascribed to Kant, but he’s also often mocked on precisely that basis, for saying that if a murderer comes to your door asking, “Where is your friend hiding?”, you have to tell the truth, even though that will mean your friend gets killed.
Arden Koehler: So, for what it’s worth, I have the sense that although a lot of people feel like these deontic constraints are overridden by really big axiological stakes, you can get overriding in one direction and then flip it and get overriding back in the other direction if you raise the deontological stakes a lot too. So, I’m thinking of… Okay, so maybe it’s better to not follow the deontological constraints if there’s this really good outcome you could get. But then, again, if you have to do something really, really awful in order to get the really good outcome, it flips again, and you ought not to do it.
Arden Koehler: So somebody could think something like, “Well, yeah. When the stakes are really high, we should generally do the longtermist thing, but not if it’s going to require huge, huge sacrifices.” Let’s say, us all living in extreme poverty for the next 500 years or something like that, in order to get this really great long-term future. Or have all this extreme suffering in the short term, some people will have the reaction that, there, the deontology wins, even though the axiological stakes are high because the deontological ones are high too. Does that sound right?
Hilary Greaves: It’s definitely an interesting proposal. I think for practical purposes, though, that’s just not the situation that we’re in. I mean, it could be, if you take the effective altruist project to extremes and start asking the question of, “Well, precisely how much should we be giving away? Precisely how much should we be reducing our own wellbeing in order to make things better from the impartial point of view?” But at the margin we’re currently at, it seems like… Imagine the decision is the one that I think most people are actually thinking about, where they’ve got some relatively modest fixed pot of resources, and their decision is just whether to direct that pot of altruistic resources (already earmarked for altruism) to a longtermist project, or instead to a short-termist project. Then there’s no difference, as far as I can see, in the deontological stakes of the sort that you just described. So that consideration wouldn’t kick in.
Arden Koehler: That seems right.
Robert Wiblin: It seems like this is repeating an issue with effective altruism, where I guess some people associate it so strongly with consequentialism or with utilitarianism that they feel like they can say, “Well, I’m not a utilitarian, so I’m not super interested in effective altruism. It’s not relevant to me.” And then there’s all of these essays pointing out, “Well, actually almost any moral theory says that even if it’s not obligatory to do the most good that you can, at least it would be good to help; all else equal, it’s good to help other people more.” So, surely all of this empirical research about how to help people more actually is relevant to basically any moral theory that gives at least some weight to beneficence, or to how much you help others.
Robert Wiblin: But then this doesn’t seem to, in practice, persuade that many people. Or at least a lot of people don’t seem to buy into this on a gut level. And I wonder whether it’s just that some people don’t like the vibe of the thing: some people don’t like the vibe of longtermism because of the priorities that it suggests, and some people don’t like the vibe of effective altruism because of the kind of moral outlook that it creates. And even though, technically, their moral system or their views are such that they should take it seriously, in practice they just want to be thinking about other topics.
Hilary Greaves: And the question is… “Comment on that”?
Robert Wiblin: I suppose so, yeah. Do you think that’s right? I mean, maybe this isn’t a philosophical question, it’s more of a sociological observation, but do you have any thoughts on whether that’s true?
Hilary Greaves: Yeah. I do find it somewhat plausible. But then think about this project of exploring whether the conclusion in question (whether it’s effective altruism or longtermism) that people have some kind of gut objection to does in fact follow on a very wide range of moral views. Because if that is true, then what that should motivate is sober inquiry into what really is going on with the gut resistance. And I agree. I mean, I definitely feel that gut resistance to longtermism: here are all these people suffering, kind of right in front of you, and you know what you could do about it. On the other hand, here’s this really speculative stuff that you could try and do, and you don’t really know if it’s going to work, for a future that you only really dimly grasp. So, it does feel really weird. But the conclusion I draw from that is not that we should take longtermism to be false, but rather to try and work out why it seems really weird, and whether there’s anything interesting going on there.
Arden Koehler: While we’re on sociological observations, I have this sense that maybe there’s actually a lot of people who would sort of follow the axiological point and then stop at the deontic point. So they would say, “Yeah, okay. It’s the best action to do the thing that has the best long-term consequences, but I don’t think, that’s what I ought to do.” I feel like that’s actually maybe a sort of common combination. Is there anything to say in defense of that combination?
Hilary Greaves: I am waiting to find out. I mean, part of the reason why Will and I put that argument that’s supposed to lead from axiological to deontic longtermism in the paper was to be provocative. We do think it’s a very plausible argument, and we ourselves are at least pretty sympathetic to it. But we did expect it to be the thing that would generate the most pushback, in particular from moral philosophers, and in ways that we ourselves are probably not best placed to foresee and work through the details of. So what I really hope will happen, if you like, is that this attempt to be provocative succeeds and somebody else is inspired to sketch out what they think is the right response to that argument.
Arden Koehler: Another thing I’m wondering is, do you think it’s plausible to be a deontic strong longtermist without being an axiological strong longtermist? So in the paper it’s sort of like, “Well, first is axiological strong longtermism, and then you get the argument for the deontic one after that.” But, I guess, you could think maybe if we have sort of a duty to care about the future, we’re carrying the torch or something. You have these other sorts of reasons to look to the far future, perhaps you could be a deontic strong longtermist and not an axiological one?
Hilary Greaves: I think that’s going to be a pretty hard case to make, because even if one does think that one has that kind of duty to posterity, one’s presumably also going to think one has duties to the here and now. And the question is going to be how one’s supposed to weigh up those kinds of things. Now, it seems hard to me to say anything sensible about how to weigh them up other than by talking about axiology: that supplies the most obvious route to doing ‘weighing up’ types of operations. Maybe you could do it in some other way, but insofar as you could do it in some other way, it seems really unclear how and why the longtermist side would come out favorably in the weighing-up exercise. So, yeah. I mean, interesting project. If somebody wants to take it up, I’d love to hear what they’ve got to say. I wouldn’t take it up because I wouldn’t have the faintest clue where to start with arguing for that.
Arden Koehler: I guess you could think that maybe because future generations are neglected right now or sort of unfairly, you could think, “Okay. Yeah. Somebody really needs to fight for them.” So, deontic strong longtermism is true, for me, right now, at the current margin, without axiological strong longtermism.
Hilary Greaves: Yeah, great stuff, Arden. Why did you leave academia? You should come write this paper (laughter).
Robert Wiblin: Yeah. Don’t want to spend another three years developing that idea? Yeah. So far we’ve been talking about ways that strong longtermism is consistent with similar views, but what actually are some kind of mainstream philosophical or possibly empirical views that really would undermine longtermism?
Hilary Greaves: Okay. So I think the most obvious, I don’t know if you’d call it philosophical or economic or what, the most obvious evaluative view that would undermine longtermism would be something like a significant positive rate of what economists call pure time preference. In other words, if you place less moral weight on the welfare of future people than on the welfare of present people. And if you do so in a particular way that either, say, sets it all to zero beyond 100 years, or has it decay exponentially into the future, then even if the number of people in the future is astronomically large, as we conjectured it is, at least in expectation, these things are going to get damped so severely in the calculation that most of the value that you can affect is in the shorter term. So that would be the evaluative premise.
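To make the damping concrete, here is a minimal sketch in Python with purely illustrative numbers: the 2% rate, the horizon, and the welfare figures are all assumptions, not figures from the conversation or the paper.

```python
# Minimal sketch, with purely illustrative numbers, of how a positive
# rate of pure time preference damps even astronomical future value.
# Under exponential discounting at rate r, welfare t years out gets
# weight (1 + r) ** -t.

r = 0.02                 # hypothetical 2% annual pure time preference
horizon = 10_000         # years of future considered
welfare_per_year = 1e15  # hypothetical (astronomical) welfare per year

discounted = sum(welfare_per_year * (1 + r) ** -t
                 for t in range(1, horizon + 1))
undiscounted = welfare_per_year * horizon
first_century = sum(welfare_per_year * (1 + r) ** -t for t in range(1, 101))

print(f"undiscounted total:       {undiscounted:.2e}")              # 1.00e+19
print(f"discounted total:         {discounted:.2e}")                # ~5.00e+16
print(f"share in first 100 years: {first_century / discounted:.0%}")  # ~86%
```

However many future people there are, the discounted total converges to roughly (welfare per year) / r, and the bulk of it accrues within the first century, which is the damping Greaves describes.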
Hilary Greaves: On the empirical side, I think this is one of the things I feel most uncertain about myself after writing the paper, although if you talk to the other author, I think you might get a different story. So suppose you’re not a totalist about population axiology, and suppose your particular way of being a non-totalist is one that means you’re not interested in increasing the number of future people, only in increasing average wellbeing. Then we’re not talking about reducing chances of premature human extinction when we’re thinking about plausible longtermist interventions. We’re instead thinking about, well, are there plausible ways that we could reduce the probability of a totalitarian world government taking over, or improve the moral principles that would govern such a totalitarian world government if it does take over, and so on and so forth. Here, it was just painfully obvious to me when writing the paper that everything we were saying was an awful lot more speculative.
Hilary Greaves: So it seems not completely crazy to me that affecting average very long-run future wellbeing might prove to be too intractable. I mean, what we say about that in the paper, partly, and the thing I do still stand behind, is: well, surely we should at least do more research into it, because these things are extremely important, really unclear, and really under-researched. So at least here’s one longtermist intervention that looks like it’s going to score well in expected value terms: fund people to think more seriously about this stuff. I still stand by that comment. But maybe once all that research has been done, it will be harder to defend longtermism at that margin.
Robert Wiblin: Why do you think it is that longtermism is kind of only now, in some sense, being discovered by philosophy? It seems like almost the only premises you need are that consequences matter somewhat, and that the future could have a lot of consequences in it. So it’s perhaps surprising that this isn’t an idea as old as time.
Hilary Greaves: Yeah, I guess I don’t think it’s that surprising. So firstly, there’s one more premise that you need, which is that you can do something about it. And plausibly, that’s more true now than it has been throughout much of history. This is related to the so-called ‘hinge of history’ hypothesis that Parfit has discussed and lots of others in the effective altruism community have been interested in. So there’s that, and maybe it’s only been discussed recently because it’s only been a sensible thing to think about recently.
Hilary Greaves: The other thing I think might be relevant is that a lot of moral philosophy is mostly interested in talking about relatively… I don’t know what’s the best way to characterize it, but I would say something like relatively small-scale, private decision-making. So, should a woman have an abortion? Should you be a vegetarian? Should you keep your promises? That kind of thing. And if you’re only thinking about questions like that, it’s quite hard to find very, very long-run implications. So it’s maybe only when you start taking up this question of, “I’ve got a pot of money that I want to use to make the world better, in expectation, and I’m open to spending it on literally anything,” that many more possible ways of affecting the world than are normally of interest open up.
Hilary Greaves: So to put that point, or maybe the same point, another way: I think having an evaluative theory that cuts off the consequences you care about quite close to the actor is much more plausible when you’re thinking about things like abortion and vegetarianism than when you’re thinking about spending money. Maybe. So I don’t know how much there is in that conjecture, but that feels to me like a salient difference between the kinds of things that a lot of the moral philosophy literature is interested in talking about, and the kinds of things that effective altruists are more interested in talking about.
Arden Koehler: Yeah. I guess, I still feel sort of intuitively surprised that philosophers didn’t want to talk about if we could do something to make the long-term future better, would that be what we were required to do? I mean, usually, what’s practical doesn’t constrain philosophers very much. So it’s maybe slightly surprising that it would have done so in this case. But, maybe that just means it’s more of the second thing that you were talking about.
Hilary Greaves: I don’t know. I mean, a hypothesis has to somehow suggest itself for consideration before people can find it interesting. So that might be part of it. If this thing wasn’t salient for decision purposes, maybe people just didn’t think about talking about it.
Arden Koehler: Are there any sort of misconceptions or definitely mistaken objections to longtermism that you think people might have?
Hilary Greaves: I think the one that’s most salient for me is one that we’ve touched on in various bits of this conversation. I think there is a misconception out there that this is only a thing for total utilitarians. So if I’m not a utilitarian, or if I’m not a totalist about population ethics, or if I’m not a consequentialist, then these arguments aren’t for me. And that’s a big part of why we wrote the paper that we’ve been discussing: to try to say, “No, look. Let’s think through how it goes on all these other views. It really does look like this conclusion is far more robust than you might have thought.”
Robert Wiblin: How has that conclusion been received by other philosophers? Do you have deontologists and Kantians knocking down your door to learn more about longtermism?
Hilary Greaves: Well, the paper’s not in print yet. So, I guess we’ll wait to see if that happens.
Robert Wiblin: Have to give it a minute.
Defining cluelessness [00:39:26]
Arden Koehler: So reading the paper on strong longtermism, I was surprised that you and Will MacAskill didn’t bring up cluelessness as an objection to longtermism. So before we talk through that, I know you talked about cluelessness on the last podcast with Rob, but do you think you could just explain what cluelessness is again when you use the term? So I think you call it complex cluelessness. I think it’s maybe a little bit fuzzy in a lot of people’s heads and certainly mine, so I’d like to get clearer on it and how it relates to longtermism.
Hilary Greaves: Sure. So I introduced the term complex cluelessness to distinguish the decision situations that, I think, especially effective altruists face, although also others, from simpler decision situations, which are the ones I gave the title of simple cluelessness to. So in both cases, we’re thinking we’re doing something like expected value theory, expected utility theory, if you like, for handling uncertainty. And so among other things, we need to assign probabilities to the various possible consequences of our actions.
Hilary Greaves: The key point for the discussion of simple cluelessness is that there are an enormous number of possible consequences of our actions. Who knows, if I help an old lady across the road, whether that means she’s then in a better mood later that day and has different conversations with people. And you can imagine this setting some chain of events in motion that leads to maybe different future people existing, or things going radically differently in one way or another in the next century. So some people have been tempted to conclude from the possibility of those complicated causal chains that we’re completely clueless about which of our actions are best, and so any attempt at decision-making based on trying to make the consequences as good as possible is completely hopeless.
Hilary Greaves: When I introduced the term simple cluelessness, that was supposed to highlight the fact that it at least seems like, in a lot of these cases, although it’s possible that there could be this or that momentous consequence from, say, helping an old lady across the road, it also seems equally possible and equally plausible, and for precisely the same sorts of reasons, that doing the opposite thing could turn out to have the same very long-run consequences.
Hilary Greaves: So it seems plausible that when you do your expected utility calculation, these completely unforeseeable and merely possible long-run things are going to cancel one another out, and so not affect the question of which out of two possible actions has higher expected utility than the other one. So that was my response to some of the existing cluelessness literature, which tries to draw, in particular, often the anti-consequentialist or anti-care-about-the-consequences conclusions from unforeseeability of the future.
Hilary Greaves: So having said we don’t need to worry too much about that kind of cluelessness, I then started talking about complex cluelessness because this is a thing that really does still trouble me, notwithstanding all those things said about the so-called simple case. The case of complex cluelessness is supposed to be the case where it’s not just that there’s a mere possibility that some action you could take could have such and such unpredictable, far-future consequence. It’s also that you’ve got some structured reason for thinking that it might. You can say something concrete about what the causal chain might be that would lead from, say funding bednets, to higher population in the next generation, or from funding bednets to environmental degradation in the next generation.
Hilary Greaves: And suppose you’ve also got some plausible, also highly structured, but quite different in character reason for thinking that the opposite action, so, in this case, not funding bednets or funding whatever you would otherwise have funded, could lead to increased population or increased environmental degradation or whatever. Now in this case where it’s not just that such and such could happen, who knows, maybe it will, maybe it won’t, it doesn’t seem like you’re going to get anything like the neat canceling out argument that leads to the conclusion that, under expected utility theory, these things are going to be irrelevant for decision purposes.
Hilary Greaves: It feels more like what’s going on is this: which direction these relatively hard-to-predict consequences sway things in is going to depend on very arbitrary-feeling details of what you say about precisely how plausible one causal chain is compared to a different causal chain. And there, the worry is that it’s not something where we have hard evidence to guide us, and it’s also not something where it’s intuitively obvious what probability numbers we should be assigning when you try to pin these things down for decision purposes. I think most of us feel like we’re really just making up arbitrary numbers, but that’s really uncomfortable, because precisely which arbitrary numbers we make up seems to make a difference to what we end up doing.
Hilary Greaves: So that’s the predicament that I call complex cluelessness. I don’t know if that’s any clearer than what I said two years ago.
Arden Koehler: No, that’s helpful. So can I give a gloss, and ask you if you think it’s right?
Arden Koehler: So in both of the simple case and the complex case, we don’t think that the long-term consequences of our actions will actually cancel out. Right? It’s just that in the simple case, we know literally nothing about how the long-term consequences will go. Whereas in the complex case, we know a little bit, or we have some reason to believe some things, so we get a little bit of knowledge and that throws us into this predicament.
Arden Koehler: Is that right?
Hilary Greaves: I think it’s not so much about getting a little bit of knowledge; it’s about whether or not there’s symmetry in the situation. So the difference is supposed to be that in the simple case, there are all these possibilities, but there’s complete symmetry in the reasons. So anything that you could say in favor of the hypothesis that helping an old lady across the road increases future population could equally easily, and in precisely the same way, be said about how not helping the old lady could lead to the same long-run consequence.
Hilary Greaves: And where there’s that kind of symmetry, it seems plausible that whatever numbers you say for the probabilities, they should be the same for the two actions you’re considering. Whereas in the complex case, where there’s not that symmetry, you don’t have that guarantee the numbers are going to be the same. It seems like they could go anywhere.
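To see the structure of this argument, here is a minimal sketch in Python. All the probabilities and values are made up for illustration; nothing here comes from the conversation.

```python
# Minimal sketch, with purely illustrative numbers, of the difference
# between simple and complex cluelessness in an expected value comparison.

long_run_effect = 1e9    # hypothetical magnitude of a long-run consequence
short_term_good = 10.0   # hypothetical short-term value of the action

def expected_value(short_term, p_long_run_good, p_long_run_bad):
    return (short_term
            + p_long_run_good * long_run_effect
            - p_long_run_bad * long_run_effect)

# Simple cluelessness: by symmetry, whatever probabilities you assign to
# the long-run possibilities are the same for both actions, so the
# long-run terms cancel out of the comparison.
p = 1e-7
diff = expected_value(short_term_good, p, p) - expected_value(0.0, p, p)
print(diff)  # ~10.0: only the short-term difference survives

# Complex cluelessness: structured but different reasons on each side give
# different, arbitrary-feeling probabilities, and the sign of the
# comparison now depends on exactly which arbitrary numbers you picked.
diff = (expected_value(short_term_good, 1.0e-7, 1.2e-7)
        - expected_value(0.0, 1.1e-7, 1.0e-7))
print(diff)  # ~-20.0 here, but small tweaks to the probabilities flip it
```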
Robert Wiblin: So I do find that whenever people ask me, “What exactly is cluelessness,” I often find it a bit hard to state exactly. I think one thing is it’s very natural to put it in terms of we’re so uncertain about what the impacts will be, but it sounds like you want to define it in a slightly different way.
Robert Wiblin: That it’s not only that we’re uncertain about the impacts, which is the case for so many things that we do, but that any credences, any probabilities you try to assign to things, just seem completely arbitrary. Which is a kind of uncertainty, but it’s a deeper, slightly different form of uncertainty about the consequences.
Hilary Greaves: Yeah, that’s right. I mean, it’s not quite the distinction that some people call the distinction between risk and uncertainty, but it’s similar-ish. The distinction between risk and uncertainty is supposed to be about whether the particular probability numbers you use are completely constrained, or whether it’s really unclear what numbers you should be using. This is a bit different, because it’s more about whether the probabilities are the same for this action as they are for that other action, never mind what you know about what they are for either action taken separately.
Hilary Greaves: But yeah, it’s a similar worry about arbitrariness. It’s definitely not just that there’s uncertainty. Because, as you say, there’s uncertainty for everything. If that were the only issue, expected utility theory would be the answer and there would be no remaining problem.
Robert Wiblin: I guess if you’re someone who just doesn’t feel troubled by arbitrarily placing numbers on things, and so does have credences, does have beliefs about the probabilities of things, does cluelessness not bite for you then? Can you then just run through the expected value calculations?
Hilary Greaves: Yeah. When you’re acting by yourself, plausibly, you can. You’re still going to get interesting issues when you try to join a community of otherwise like-minded people. If you’ve got your arbitrary numbers and Arden has a different set of arbitrary numbers, then you might want to get together and do some coordination to make sure you’re not fighting against each other, while she tries to increase future population and you try to reduce it. You could’ve just got together and spent the combined pot of money on something more productive.
Robert Wiblin: Yeah, interesting.
Arden Koehler: So I guess I still feel a little bit unclear on what complex cluelessness is supposed to be. Is it something that just describes our discomfort with having to put probabilities on things, when that’s really hard and we don’t have much good reason for thinking any particular probability is a sensible one? Or is it the claim that you actually shouldn’t have precise credences, you shouldn’t have precise probabilities assigned to these things, so it’s inappropriate to do this expected value calculation at all?
Arden Koehler: Can you clarify for me which one of those it’s supposed to be?
Hilary Greaves: No. I wanted to identify a predicament independently of taking a view on what’s the right way to theoretically model that predicament. I feel unsure what the right way to model it is, and that’s one of the things I think is interesting about it.
Hilary Greaves: So you could say, obviously, Bayesianism is right. Obviously you have to have some precise probabilities. As you suggest, this is just about the discomfort of having to pick some precise set of credences when it feels arbitrary. Or yeah, you could go for this alternative account, along the lines you sketched, where we think because of the arbitrariness, this is a situation where actually, you’re rationally required not to have precise credences. You should have something more like a whole class of probability distributions representing your mental state.
Hilary Greaves: So I wanted to keep both of those possibilities within the heading of cluelessness, and not say cluelessness by definition is one of them or the other one. I think it’s an open question. What’s the best way of modeling what’s going on for a rational actor who faces this kind of predicament? The imprecise credence model feels appealing at first sight, until you probe a bit deeper, and start trying to write down a decision theory to go along with your imprecise credences, and then you run into all sorts of problems.
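A minimal sketch of the imprecise credence model, in Python, with made-up numbers. The point is only that different members of the credal set can disagree about what to do, which is part of why writing down a decision theory for imprecise credences runs into problems.

```python
# Minimal sketch, with purely illustrative numbers, of the imprecise
# credence model: the agent's state is a whole set of probability
# distributions, and different members of the set can rank the same two
# actions differently.

long_run_effect = 1e9   # hypothetical magnitude of the long-run consequence
short_term_good = 1.0   # hypothetical short-term value of acting
p_baseline = 1.0e-8     # hypothetical probability attached to not acting

# A credal set: each entry is one admissible probability that acting
# improves the long run (hypothetical numbers).
credal_set = [0.8e-8, 1.0e-8, 1.2e-8]

for p in credal_set:
    ev_act = short_term_good + (p - p_baseline) * long_run_effect
    verdict = "act" if ev_act > 0 else "don't act"
    print(f"p = {p:.1e}: EV of acting = {ev_act:+.1f} -> {verdict}")
# The three admissible distributions deliver conflicting verdicts, so
# "maximize expected value" has no unambiguous recommendation here.
```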
Robert Wiblin: Are there other ways that people talk about cluelessness, or other ways of defining what the problem is, that people should be aware of?
Hilary Greaves: So there’s one more thing I guess we haven’t mentioned, which is where the subject matter you’re talking about is one where your credences are highly non-robust. So you can easily imagine that getting just a little bit of extra information would massively change your credences. And there, it might be that the reason we feel so uncomfortable making what feels like a high-stakes decision on the basis of really non-robust credences is that what we really want to do is some third thing that wasn’t given to us on the menu of options: we want to do more thinking or more research first, and then decide the first-order question afterwards.
Hilary Greaves: So that’s a line of thought that was investigated by Amanda Askell in a piece that she wrote on cluelessness. I think that’s a pretty plausible hypothesis too. I do feel like it doesn’t really… It’s not really going to make the problem go away because it feels like for some of the subject matters we’re talking about, even given all the evidence gathering I could do in my lifetime, it’s patently obvious that the situation is not going to be resolved.
Arden Koehler: So maybe could cluelessness come in degrees or something? So before you’re used to putting precise credences on things… I mean, this is something that I’ve tried to do a little bit more of, because for various reasons it seems like a good thing to make these sorts of predictions and then see how they turn out. So before you’re used to doing that, you might feel extremely uncomfortable doing it even for something like, “Who’s going to win the presidential election in 2020?”
Arden Koehler: But is the idea maybe that I can start with my probability on Donald Trump winning the presidential election, and then have a bunch of conversations, do a bunch of reading, and maybe start to feel better about a probability? Move it around, and then I don’t move around so much after that. Whereas maybe with, “Is this action going to improve the chance of a good constitution in 2500 or something”, maybe it takes way longer, or maybe I never get to that sense of security?
Hilary Greaves: Yeah. I mean, maybe you could, or maybe a different person could. It might not be a coincidence that I wrote a paper about cluelessness and I’m someone who’s terrible at forecasting. So I think there are people who are much better than others at coming up with sensible, reasoned, plausible-ish numbers, even when things are horrendously messy. I don’t know how they do it, but they’ve definitely got a skill that I haven’t got.
Hilary Greaves: I mean, I think even for the case of the example you just used, world constitution in 2500, there could plausibly be people who would say much less arbitrary seeming things about that, than I feel in a position to say.
Why cluelessness isn’t an objection to longtermism [00:51:05]
Robert Wiblin: Yeah. So why is it that you didn’t list cluelessness as a top objection to longtermism? Because I think that’s maybe the biggest concern a lot of people raise with me personally when they’re thinking about, “Should I focus on affecting the world in the short term or the long term?”
Hilary Greaves: Because I don’t think it is an objection to longtermism, and maybe that’s for reasons that you just hinted at, when you said what people often say to you. I mean, yeah, things get very complicated when you’re thinking about the question of whether to focus on the short-term or the long-term, and cluelessness is a thing that raises its head there, but it’s very unclear that it raises its head in a way that pushes against longtermism.
Robert Wiblin: Okay, yeah. Can you elaborate on that? Because I think most people strongly feel that it does push towards short-termism.
Hilary Greaves: Maybe I’d like to hear their argument and then respond to that. I mean, I’m kind of guessing what it is.
Arden Koehler: Well, I could give a version that occurred to me when I was reading the strong longtermism paper. So it seemed like the argument for strong axiological longtermism depended on the expected change in the value of the long-term consequences of the best longtermist action being sufficiently bigger than the expected change in the value of the short-term consequences of the best short-termist action.
Arden Koehler: And if cluelessness is true (or maybe you can’t say cluelessness is “true”, I don’t know), cluelessness might make you doubt that it is in fact much bigger, because really this ratio is undefined. We can’t figure out what the expected change is for the long-term actions, or maybe even for the short-term actions. But it seems more obvious in the case of the long-term actions, because it’s harder to put credences on these stories about how things affect the long term. So this ratio between the expected change on the long term and the expected change on the short term is just undefined, as opposed to really big, which is what you needed for longtermism.
Hilary Greaves: Okay, good. Yeah, I think that helps me to say why my picture is a bit different. I think instead of saying the ratio’s undefined, I would be tempted to rephrase it as something more like, “It’s very unclear what the ratio is,” or something like that.
Hilary Greaves: But then it seems to me the picture is like this. So if you started off before thinking about cluelessness, being confident longtermism is true, then cluelessness should make you less confident of that. But also, if you started off before thinking about cluelessness, thinking that short-termism is true, then cluelessness should make you less certain about that too. As the name suggests, cluelessness tends to make you less certain about things.
Hilary Greaves: So in other words, it doesn’t seem like there’s an asymmetry that makes this specifically an objection to longtermism, rather than an objection to short-termism. It’s more an epistemic humility point. It should make us less certain about a lot of things. But if you’ve got this money and you have to spend it, it’s not clear that it will sway everybody in the anti-longtermist direction.
Robert Wiblin: I suppose, though, many people feel like they can predict, that they do have sensible credences on, the short-term impacts of the things that they’re going to do. So one of these things ends up seeming undefined or arbitrary, while the other seems more specific and positive, and that’s maybe why they feel like cluelessness pushes towards short-termism.

Robert Wiblin: One could object that, in fact, they’re just ignoring the fact that, in reality, the actual differences in the consequences of the short-termist-looking actions they could take are also really arbitrary and uncertain. They’re just pretending that they’re not.
Robert Wiblin: Is that how you’d respond?
Hilary Greaves: I’m not totally sure I understood the suggested response correctly. So I’m going to say a thing, and it may or may not be what you just said.
Robert Wiblin: All right.
Hilary Greaves: Yeah, so I think it’s crucial to distinguish here. If you’re thinking about some intervention that’s primarily motivated by considerations of what the action does for the short-term, like say funding bednets, it’s crucial to distinguish between the amount of good that that action does in the short-term, on the one hand, and the amount of good that that short-termist motivated action does in total, on the other hand.
Hilary Greaves: So if you have a view with, say, no discounting, so you do in principle care about the long-run implications, then even if you’re funding bednets, you still want to be thinking about the long-run implications. And for familiar reasons… in fact, bednets was one of the main motivating examples in my paper on cluelessness that I wrote several years ago. There is an awful lot of cluelessness concerning what the long-run implications are of interventions that are motivated by thinking about what they do for the short term.
Hilary Greaves: So it’s not like there are some interventions here that are not subject to cluelessness. And also, even if there were, it’s not clear that will be an objection because cluelessness just increases the spread. It doesn’t subtract from value in any predictable way.
Robert Wiblin: Yeah, the response I get from some people is maybe that they encounter cluelessness and they’re like, “Well, before I wasn’t sure quite what to value, and now it’s weighing up should I care about the very long-term? Actually, I just wasn’t sure what was important morally. And now having learned what a faff it is to try to figure out what the long-term impacts are, I’m inclined to just focus on some other aspect of morality that I might feel inclined to care about, like my family, or whether I’m following social rules or whatever, and focus on that because it just seems like this other thing is so impractical.”
Robert Wiblin: Does that potentially make sense, as a response?
Hilary Greaves: I think it depends what you mean by make sense. I mean, it’s definitely understandable, and I think it’s psychologically extremely natural. I find it psychologically extremely tempting, and I think a lot of people do. But one of the things that fascinated me, and still fascinates me, about this area of inquiry is that I don’t think it’s ultimately defensible. It feels a little bit too much like wishful thinking, where one starts from a premise like, “I want things to be clear-cut enough that my head won’t hurt thinking about all the things I’m meant to think about,” and then tries to get from there to the conclusion that they are.
Robert Wiblin: Yeah. I wonder whether it makes sense to non-philosophers, who think about morality more vaguely and more pluralistically perhaps? To a moral philosopher, I think, it seems odd to start from empirical facts about what is easy to accomplish and then work backwards to figure out what your values must be. But maybe because most people are so unsure about what their values are, the question of what, in practice, they can do seems more relevant somehow.
Hilary Greaves: Yeah. Except it’s not precisely about what in practice you can do, is it? It’s about what you can do while not facing difficult questions, and not incurring a responsibility to do more thinking and things like that.
Arden Koehler: So I have a sense that this confusion about cluelessness’s relationship to longtermism has to do with the definition of longtermism that you’re using. So I think sometimes people might mean something… Longtermism means something like, “I should care about the long-term consequences.”
Arden Koehler: And I guess it feels like your response to why, or your explanation of why cluelessness isn’t an objection to longtermism, sounds to me like saying, “Okay, well, as long as you already care about the long-term consequences, then this is not an objection to the idea that the thing that is the best action, is the thing that has the best effect on the long-term consequences, or is in the small set of things that has the best effect on the long-term consequences.”
Arden Koehler: And that seems true, but maybe people are thinking, “No, this is an objection to the idea that I should even care about the long-term consequences.”
Hilary Greaves: That’s right. Maybe they are. So I do think the response has to be very different in those two cases. But I think for different reasons, neither of those two positions feel tenable to me. So the second one, again, it just feels like a case of illegitimate wishful thinking argumentation.
Robert Wiblin: So is cluelessness an objection to anything in particular? Other than, I suppose, particular projects that think that they’re confident they’re going to have a positive long-term impact? Is it an objection to anything else, more philosophically?
Hilary Greaves: When I started thinking about it and wrote that paper on it, I didn’t intend it to be an objection to some thesis or position. I intended it to be setting out a predicament that I think we face, and noting that we need to figure out what to do about this. So I think for me, it just occupies a very different position in this intellectual space of what’s going on.
Robert Wiblin: Yeah. So actually, someone from the audience wrote in when we were asking/soliciting questions for you. They wanted to know, what are the practical implications of cluelessness for people today? In their view, they thought in practice, it seems like almost everyone just ignores it.
Robert Wiblin: And they’re wondering, is that reasonable, given how much we know about cluelessness or how much we don’t know about it?
Hilary Greaves: I guess I disagree with the empirical premise. No, I don’t think it is true that almost everyone ignores it, including people who wouldn’t frame their reasons for doing what they do in the language of cluelessness or anything like it.
Hilary Greaves: So for example, it’s becoming increasingly commonplace to say, “Well, don’t take cost-benefit analyses too literally. You should recognize these things are very crude. They’re only part of the picture. Use them at most as a guide, use them really carefully, and so forth.” An account of what’s going on with cluelessness is one natural story that, for some people at least, could be lying behind that kind of talk. I mean, you might think, “Yeah, I’m funding bednets, but it’s not only that I think my cost-benefit analysis, or the cost-benefit analysis that I’m reading, is only a partial picture of how many children’s lives this intervention will immediately save. It’s also that I recognize that’s not the only thing that I care about.”
Hilary Greaves: And so if I’m deciding between say funding bednets or something else that’s not about malaria at all, and it’s supposed to do good by some very different route, I need to take into account the fact that, in the fullness of time, the ways these are going to affect the future of human history includes all kinds of things that just aren’t in this calculation. So there could be a cluelessness story behind that caution about cost-benefit analysis for one thing.
Hilary Greaves: I think there’s another way. For a different set of people, I think there’s another way in which people don’t ignore cluelessness. Although in this sense, I think it’s a place where people should, in the relevant sense, “ignore” cluelessness. So another of the key drivers for me behind starting to think about this, was it seems like a psychological effect that cluelessness has on many people, is to drive them away from trying to do anything to improve the world. And maybe as has been hinted in this conversation, to focus on more immediate concerns, maybe just their own family or their own local community or something like that. I think that’s a dangerous tendency. So yeah, in a way, I wish that people would ignore it more in that sense.
Arden Koehler: So I guess this is related to my question earlier about whether it’s maybe an objection to the very idea that we should care about all of the consequences of our actions, including the ones in the far future. So you said you thought maybe it’s just wishful thinking to think that it is; I think that’s what you said, right? Because maybe it’s more comfortable if you don’t have to care about the long-term consequences. Is that right?
Hilary Greaves: I think I said something like that.
Arden Koehler: Okay. I mean, just to maybe put a little more meat on the bones of that line of thought and see what you think, could somebody think, “Look, morality has to be this action-guiding thing for me. It has to tell me what the good actions are to do, in order for it to play the role it needs to play in my life.”
Arden Koehler: And if I interpret cluelessness in the stronger of the two ways that I suggested earlier, so not just that it feels weird to put credences on this stuff, but that it’s actually inappropriate to put credences on this stuff such that I’m able to do expected value calculations to figure out what’s best in the long term. If it’s actually inappropriate, somebody might think, “Well, this just shows that it’s actually impossible to appropriately act in a way that takes into account the long-term consequences of my action. So because morality has to be this action-guiding thing that tells me what it’s appropriate to do, that implies that I shouldn’t take into account those long-term consequences.”
Arden Koehler: What do you make of that argument?
Hilary Greaves: Yeah, I think it’s an interesting line of argument. I think it’d be interesting to think more about how escapable cluelessness is really though. So I mean, this line of argument seems to presume that there’s some safe place you can find where you don’t face cluelessness. Maybe it’s if you only care about yourself, maybe it’s if you only care about your family. But I think a lot of the issues that we’re facing when we talk about cluelessness will arise at that scale as well. It really does feel like quite an inescapable predicament.
Hilary Greaves: Take somebody choosing which university to go to. They might look at the quality of the course and the placement record and so on. But, of course, things are more complicated than that, and the different ways in which the future might be affected, even just for that same person’s future wellbeing, are really complicated. So I guess I think that line of reasoning does make sense and it has some allure, but it’s hostage to there being any such safe place. And I’m not as confident as that interlocutor seems to be that there is one.
Robert Wiblin: I guess if you had a much more narrow conception of morality, or if, say, all you cared about was whether your actions directly violated a list of things that you can’t do, then it seems like cluelessness matters less. It’s once you start caring about consequences that this becomes a bigger issue.
Arden Koehler: Or maybe that it directly harms somebody in a way that’s not mediated by anything else.
Hilary Greaves: Yeah. I mean, that kind of theorist still needs to have some way of choosing what their principles are supposed to be in the first place. Different people will say very different things about this, and I think they’re going to hate the thing I say next for independent reasons, but I’m quite sympathetic towards the idea that, insofar as there’s an important place for rules in morality, it’s important to consider what tend to be the consequences, in general, of adopting those rules.
Hilary Greaves: And of course, once you’ve got anything like that in the picture, then talking in terms of rules doesn’t ultimately help. You need to be able to evaluate the rules. Like, why not this crazy set of rules that I saw written in chalk on the wall yesterday?
Robert Wiblin: So I guess you’d need some way of being confident about what the rules are, without, at any point, referencing any of the consequences that they have. Which I guess is an unusual philosophy. I suppose possibly religious groups or something might think that they have access to lists of rules that they’re really confident about that aren’t about the consequences. But for the rest of us…
Hilary Greaves: Societies that place extreme importance on doing what your parents say, maybe the rules just get handed down through the generations. But yeah, I mean, if you have any view on which you are supposed to be putting in intellectual work yourself to address the question of what the right morality is, this seems really hard.
Arden Koehler: It does seem like we get more and more uncomfortable with putting credences on the effects of our actions as we expand the timescale. So beyond today to next week, to a year from now, to a hundred years, a thousand years. So there’s this matter of degree where cluelessness feels like a more serious practical problem the farther out you’re considering the consequences. So in that sense–
Hilary Greaves: In general, I think that’s true. Yeah.
Arden Koehler: Okay.
Hilary Greaves: Modulo possibilities of lock-in.
Arden Koehler: Okay, so even if there isn’t a fully safe place that you can retreat to, like you were talking about before, maybe there are places that are safer from cluelessness than others?
Hilary Greaves: I think there are going to be matters of degree. Yeah. But I don’t think they’re going to be such as to undermine all of the things that longtermists in particular want to press on. And that you can imagine cases where, yeah, maybe there’s this general tendency for things to get more uncertain as you go further and further into the future. But longtermists are particularly in the business of looking for places where, at least beyond some cutoff point, that stops happening. And so it’s not at all clear that this response to cluelessness would undermine longtermism either.
Arden Koehler: Because they’re looking for lock-in scenarios?
Hilary Greaves: Yeah. Whether based on extinction or otherwise.
Arden Koehler: Okay. So maybe one last question on cluelessness. I actually don’t really know if this is a well-formed question, but I feel like a lot of people have an intuition that because of cluelessness we should do things that are more robustly good, or good on a wider range of assumptions about how the world might go. So maybe we should try to promote world peace. I mean, it’s not impossible to think of ways that world peace could be a bad thing for the long-term future, but it feels like something I can be a little more confident in.
Arden Koehler: Do you think that’s an appropriate reaction to these cluelessness worries or does that seem like a misguided reaction?
Hilary Greaves: Yeah, I don’t know. It’s definitely an interesting reaction. I mean, it feels like this is going to be another case where the discussion is going to go something like, “Well, I’ve got one intervention that might be really, really, really good, but there’s an awful lot of uncertainty about it. It might just not work out at all. I’ve got another thing that’s more robustly good, and now how do we trade off the maybe smaller probability or very speculative possibility of a really good thing against a more robustly good thing that’s a bit more modest?”
Hilary Greaves: And then this feels like a conversation we’ve had many times over; is what we’re doing just something structurally, like expected utility theory, where it just depends on the numbers, or is there some more principled reason for discarding the extremely speculative things?
Arden Koehler: And you don’t think cluelessness adds anything to that conversation or pushes in favor of the less speculative thing?
Hilary Greaves: I think it might do. So again, it’s really unclear how to model cluelessness, and it’s plausible that different models of it would say really different things about this kind of issue. So it feels to me like a case where I would need to do a lot more thinking and modeling, and I wouldn’t be able to predict in advance how it’s all going to pan out. But I do think it’s a bit tempting to say too quickly, “Oh yeah, obviously cluelessness is going to favor more robust things.” I find it very non-obvious. Plausible, but very non-obvious.
Theories of what to do under moral uncertainty [01:07:42]
Robert Wiblin: All right. Let’s push on to another research interest of yours, which is moral uncertainty. And just to recap for people who haven’t heard previous episodes that have talked about this, the problem of moral uncertainty is what you should do in practice when you aren’t sure which theory of morality is correct, which I think is true for most of us. Yeah, can you briefly lay out the options that you and your colleagues have identified for dealing with moral uncertainty and perhaps quickly what your opinion is on how promising they are?
Hilary Greaves: Sure. I think for the purpose of this question, I should probably take my colleagues to be all the people who’ve written about moral uncertainty, is that okay?
Robert Wiblin: Sounds good.
Hilary Greaves: There’s not that big a literature on it and the space of options is pretty small. So maybe the dominant approach apart from… There’s one approach that says the question doesn’t make sense, so what you should do under moral uncertainty is just whatever you should do according to the true moral theory. And if you don’t know what that is, it follows that you don’t know what you should do, but tough, that’s the situation.
Robert Wiblin: Sucks to be you.
Hilary Greaves: Yeah, so there’s that: sucks for you. And quite a lot of people are sympathetic to that view. Let me set that one aside for the purpose of this discussion, though. Among the remaining people, probably the dominant option is the one that says: well, look, this is just another kind of uncertainty, and we already know how to treat uncertainty in general, that is to say, with expected utility theory. So if you’ve got moral uncertainty, that just shows that each of your moral theories had better have some moral value function, and then what we’ll do under moral uncertainty is take the expectation value of the moral values of our various possible actions. So that’s a pretty popular position.
Hilary Greaves: I’ll briefly mention two others. One, because it’s also reasonably popular in the literature, this is the so-called my favorite theory option where you say, “Okay, you’ve got moral uncertainty, but nonetheless you’ve probably got a favorite theory.” That is to say, you’ve got one theory that you place more credence in than any other theory taken individually. What you should do under moral uncertainty, according to this approach, is just whatever your favorite theory says. So pick your highest credence theory, ignore all of the others.
Hilary Greaves: And then finally, the thing that I worked on recently-ish was based on the idea that, instead of applying standard decision theory, that is, expected utility theory, to moral uncertainty, you consider what happens if you instead apply bargaining theory to moral uncertainty. So bargaining theory is a bunch of tools originally designed for dealing with disagreements between different people. You could conceptualize what’s going on in moral uncertainty, if you like, as different voices in your head, where the different moral theories you have some credence in are like different people, and they’re bargaining with one another about what you, the agent, should do. So that metaphor motivated applying what are, mathematically and formally, the tools of bargaining theory to the problem of moral uncertainty, and looking to see whether that led to anything distinctively different, in particular from what the maximize expected choice-worthiness approach says.
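A minimal sketch of what the maximize expected choice-worthiness calculation looks like, in Python. The theories, credences, and scores are all hypothetical, and the sketch simply assumes the intertheoretic comparisons problem away by putting both theories on one scale.

```python
# Minimal sketch, with purely illustrative numbers, of "maximize
# expected choice-worthiness" (MEC): weight each theory's
# choice-worthiness scores by your credence in that theory.
# Assumes the theories' numbers already share a common scale.

credences = {"theory_A": 0.6, "theory_B": 0.4}   # hypothetical credences

# Hypothetical choice-worthiness of each option according to each theory.
choiceworthiness = {
    "option_1": {"theory_A": 10.0, "theory_B": 8.0},
    "option_2": {"theory_A": 50.0, "theory_B": 2.0},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

for option in choiceworthiness:
    print(option, expected_choiceworthiness(option))

# MEC picks option_2 (0.6*50 + 0.4*2 = 30.8 > 0.6*10 + 0.4*8 = 9.2).
# "My favorite theory" happens to agree here, since the higher-credence
# theory_A also ranks option_2 first; with other numbers they come apart.
print(max(choiceworthiness, key=expected_choiceworthiness))
```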
Arden Koehler: Do you have the sense that these three basic approaches approximately exhaust the space of possibilities? Or would you be unsurprised if somebody came up with a completely different theory of what to do under moral uncertainty, one that didn’t even resemble expected utility maximization or bargaining theory?
Hilary Greaves: I guess I’d both be surprised and not. I’d be surprised in the sense that there are some principled reasons for thinking we’ve covered the ground, right? We’ve thought about treating it as just another case of uncertainty, we’ve thought about treating it as a case of disagreement, and we’ve thought about just pushing aside the uncertainty bit and going for the modal verdict. There’s a sort of sense of completeness about that. It’s hard to see what could have been missed.
Hilary Greaves: On the other hand, we know, in general terms, that one often has the sense that the ground has been covered, and then it turns out one’s missed something and it’s in the nature of the business that you don’t get to see beforehand that that was coming. And so in that sense, of course, I’d never be surprised of people saying surprising things.
Arden Koehler: Do you have a favored theory of moral uncertainty?
Hilary Greaves: I do. It’s very boring in a sense: I’m most partial to the maximize expected choice-worthiness approach. That’s to say, treat this as just another kind of uncertainty. When people don’t like that, it’s usually because they think that the so-called intertheoretic comparisons required by that approach are really problematic. And I don’t buy that; I don’t think they’re problematic.
Arden Koehler: Can you say what that means?
Hilary Greaves: Sure. So in order to get the maximize expected choice-worthiness approach to deliver well-defined verdicts about what you should do under moral uncertainty, you have to think there are well-defined matters of fact concerning how much is at stake in a given decision situation according to one theory, compared to how much is at stake in the same decision situation according to a different moral theory.
Hilary Greaves: And many people think there’s just no place for those intertheoretic comparisons to come from, because all you’ve got is, say, moral theory A, which says things are thus-and-so, and moral theory B, which disagrees with theory A: it says the ranking of options is a different one, and maybe the ratios of value differences between options are different too. But there doesn’t seem to be any master theory which says, “By the way, here’s how to compare theories A and B.” Theory A isn’t going to tell you how to compare it to theory B, and theory B isn’t going to tell you either. So that’s me trying to be sympathetic to the view, which I don’t myself believe, that says these intertheoretic comparisons are not well-defined, and so you can’t maximize expected moral value.
Robert Wiblin: So do you think this maximize expected choice-worthiness is actually just a good theory: there aren’t that many serious problems with it? Or is it more that all of the other options are even worse?
Hilary Greaves: I think there are problems with it in practice and they do relate to intertheoretic comparisons. So I think it’s a problem of arbitrariness where, in order to arrive at definite verdicts about what you should do, you have to decide what you’re going to take the intertheoretic comparisons to be, and that’s unsettling because again, it’s really hard to see where the guidance on that could come from. But this is also related to the reason why I don’t take this to be an objection to the view that maximizing expected moral value is the right approach to moral uncertainty because this, to me, feels like just another instance of a predicament that we anyway knew we were in. For example, in deciding what our credences should be when we’re trying to deal with empirical uncertainty, and it’s really unclear what the probabilities are.
Arden Koehler: The way you put it just then suggested there was a special worry about intertheoretic value comparisons, because you need a meta theory to tell you how to compare them or something, and neither theory is going to give you that. So it’s like you’re cheating or something if you have this meta theory, or maybe you’re like implicitly favoring one of them, or something like that. That doesn’t seem present with the empirical uncertainty. Is that right?
Hilary Greaves: I think that’s probably right, but I don’t think it undermines the view that I was sketching. So what predicament are we in, in the absence of such a meta theory? Well we’re just in the same predicament that we’re in, according to me, in the case of empirical uncertainty where there was never even any conceptual possibility, if you like, of a meta theory to tell you what the probabilities were.
Robert Wiblin: Yeah. There’s, I guess, a bunch of practical problems that people have had with this maximizing choice-worthiness approach. I guess one is like, how do you do these intertheoretic comparisons? I suppose another worry is that your choices might really end up getting dominated by fanatical theories or theories that seem very improbable but, nonetheless, the stakes are incredibly high and it’s like the tail ends up wagging the dog. Do you have any response to that? Is that just perhaps an outcome that we have to accept that we’re going to end up acting on these radical, fanatical theories or is there a way out of it?
Hilary Greaves: I can’t give you a definite answer to that. Basically, I think it’s a really important, interesting, largely open research question. It’s also, incidentally, one we already face in the case of empirical uncertainty; it’s not specific to moral uncertainty. For instance, the standard arguments for longtermism, even just the standard arguments for extinction risk mitigation, rely on thinking that if you’re comparing a very small probability of generating an astronomical amount of value with a probability reasonably close to one of generating a modest amount of value, then, setting aside considerations of cluelessness and so forth, you should just do your expected value calculation and go with whatever the numbers say.
Hilary Greaves: A lot of people, in that empirical case, will feel, “I’m really uncomfortable about letting such a tiny probability, even if it is a tiny probability of an extremely large amount of value, carry the day, and I’ll seek some other decision theory, not expected utility theory, that allows me to do something in the vicinity of discarding sufficiently small probabilities.” The problem with doing that is just that when you try to actually write down a well-defined and not obviously wrong decision theory that has that consequence, it’s extremely hard to find one. And I think we’re in that same situation in the case of moral uncertainty. But the bargaining-theoretic approach was motivated in part by the thought that maybe at least this approach would be somewhat more resistant to considerations of fanaticism than maximize expected choice-worthiness is.
Pascal’s mugging [01:16:37]
Robert Wiblin: So it sounded like you’re basically describing the Pascal’s mugging case. Pascal’s mugging is a concrete example of this thing where you end up with a very low probability of very high value driving your decision. Just for those who haven’t heard of it, the idea is that you’re walking along the street and someone comes up to you and says, “If you give me your wallet, I’ll generate some phenomenal amount of moral value. I’ll create an enormous amount of pleasure.” So if they were talking to me, I’d say, “No, I don’t really think that you can do that.” And they’d say, “Well, actually I’m a magic person. I have access to all the dimensions, and if you give me your wallet, I’ll create a whole new universe that is just filled with really good stuff.”
Robert Wiblin: And I say, “No, that doesn’t seem like a good idea, because I think it’s incredibly unlikely.” And they’re like, “All right. Well, what if I create 10 times as much as what I just said, 10 times as much pleasure?” And I’m like, “Well, I still don’t really buy that.” But it seems like if they keep iterating, promising you more and more and more, then unless your credence in their ability to deliver is always decreasing in proportion to how much more they’re offering you, at some point the expected value is going to get really high and you’ll have to accept the offer. But people don’t really want to accept the idea that you should give them your wallet for that. Do you have any preferred approach to dealing with the Pascal’s mugging case?
Hilary Greaves: No. I think where I am with that at the moment is that I recognize the problem, and of course, like everybody else, I feel the intuitive motivation for exploring alternatives to expected utility theory, but I haven’t seen one that works. As a first step, people will try things like the so-called de minimis principle, where if something has a sufficiently low probability, maybe less than 10 to the minus 100, or set the threshold as low as you like, you can just ignore it. But those principles turn out not to be well-defined, because you can always carve up the space of possibilities such that every “possibility” is less probable than that threshold, and you don’t want to ignore all of them.
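Two minimal sketches of the arithmetic here, in Python, with purely illustrative numbers: first the mugger’s escalation, then the carving-up problem for de minimis thresholds.

```python
# Sketch 1 (illustrative numbers only): the Pascal's mugging arithmetic.
# If each time the mugger multiplies the promised value by 10 your
# credence only halves, the expected value of handing over the wallet
# grows without bound.

credence = 1e-9   # hypothetical initial credence in the mugger's powers
value = 1e6       # hypothetical initial promised value

for round_number in range(1, 6):
    print(f"round {round_number}: EV = {credence * value:.2e}")
    value *= 10      # the mugger ups the offer tenfold...
    credence /= 2    # ...while your credence only halves, so EV grows 5x

# Sketch 2: why a de minimis threshold is ill-defined. Any event, however
# probable, can be carved into sub-events that each fall below the
# threshold, so "ignore everything below the threshold" ignores too much.

threshold = 1e-100
p_event = 0.5            # a perfectly ordinary event
n_pieces = 10 ** 102     # carve it into this many equally likely pieces
p_piece = p_event / n_pieces

print(p_piece < threshold)   # True: every piece is individually "ignorable"
print(p_piece * n_pieces)    # ~0.5: yet together the pieces are the event
```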
Arden Koehler: I have this intuition, Hilary, and correct me if I’m wrong, that you might be sympathetic to the idea that you should just give them the wallet. Because I think some people think of longtermism as doing something relevantly similar, as saying, “Look, the stakes are so astronomical that you should do this.” Maybe spend your career researching something that, in expectation, has a small chance of reducing extinction risk by a small amount, or something like that. And they think of that as, “This is Pascal’s mugging happening to me.” And it sounded, when we were talking about longtermism, like you might be sympathetic to the idea that, no, that’s actually good reasoning. Are you sympathetic, in the Pascal’s mugging case, to the solution that just says, “Give them your wallet”?
Hilary Greaves: Well, I don’t think it’s obviously crazy, in the sense that this argument seems to have the character of a paradox, right? You start from a bunch of plausible-looking principles and you reach an apparently absurd conclusion. And sometimes when that happens, the right thing to do is to revisit the intuition that the conclusion is absurd. It seems to me things are enough up in the air on this one that I wouldn’t like to make any rash verdict about where the failure point is going to be. But it does feel to me like there’s a bit of a difference between the two cases, though I don’t know how much of that is just driven by the psychological glossing that goes on with the actual Pascal’s mugging case, where there’s a sense of an agent trying to trick you, and that’s somehow objectionable. That’s not so when you’re just facing a battle with nature, if you like, to figure out how to improve the world as much as you can.
Arden Koehler: Okay. So this is maybe a slightly crazy question, and tell me if it’s just misguided, but it seems like a lot of people are uncertain about what the right theory of moral uncertainty is. Have you thought at all about how we should deal with that uncertainty? Should we just apply our favorite theory of moral uncertainty, whether that’s maximize expected choice-worthiness or something else, in order to figure out what we should do? Or should we do something besides that in order to take into account our uncertainty about moral uncertainty?
Hilary Greaves: Yeah. I haven’t thought about it a great deal myself. This is the so-called regress problem. So it seems like you could go up to that second level of moral uncertainty. And also, by the way, once you’ve done that, you’ll still be uncertain about what the answer to that question is, and one could just ask, “So let’s go up to a third level. And hang on, is this ever going to stop? Or do we have to climb an infinite hierarchy before we decide what to do?”
Hilary Greaves: I know there’s some work exploring the possibility that as you climb this infinite hierarchy, maybe you reach a fixed point where you know what to do, because you know that after you’ve climbed 16 levels, the verdict isn’t going to change as you keep going up, and so forth. But that’s maybe jumping the gun a bit. So you asked me, should we apply our favorite theory of moral uncertainty to what you might call the second-level question? Well, I happen to think “No”. But that’s just for the same reasons that I think “No” to the first-level question of whether the “My Favorite Theory” approach is the right approach to moral uncertainty. Of course, some people will think yes.
Arden Koehler: Okay. But you do think of it as probably the same kind of thing. So whatever you say in the first-order case, that’s at least some reason to think you should say the same thing in the second-order case?
Hilary Greaves: Yeah. Certainly, and this is maybe tautological, but yes, absent some principled reason for thinking that there’s a relevant difference between the two levels of uncertainty. And furthermore, I find it a little bit hard to see what the relevant difference could be.
Robert Wiblin: Could you explain why you’re not so optimistic about the bargaining approach?
Hilary Greaves: Sure. It gets into a lot of detailed stuff very quickly. But when I started on this project, it was because a bunch of people seemed to be sympathetic to the idea that maybe if you think of this as more of a bargaining problem, then the situation is going to be radically different. But as far as I was aware, nobody had actually sat down to carefully model how it all pans out. And when we did that, well, there are lots of different ways you might model it as “a bargaining situation”. But the dominant approach in the standard literature on bargaining theory is to use the so-called Nash bargaining solution. And then it just turns out, when you do that for the question of moral uncertainty, the result is something that’s actually extremely similar to one version of maximizing expected choice-worthiness.
Hilary Greaves: And, in particular, the ambitions that people had had, to the effect that lots of those alleged problems with maximizing expected choice-worthiness would evaporate on a bargaining-theoretic approach, seem not to be realized. So in particular, there are existing approaches to maximizing expected choice-worthiness where you deal with the supposedly problematic nature of intertheoretic comparisons by doing some structural normalization: you look for some structural feature of how each theory treats the range of options under consideration, maybe the range, or the variance, of how much moral value they ascribe to the options, and then equalize that statistical measure across all the theories. And if that’s the version of maximize expected choice-worthiness that you like, then that ends up looking very similar to the bargaining-theoretic approach, with just some subtle differences around the edges. And then, where there were differences, it looked like the maximize expected choice-worthiness theory was actually superior on the details.
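To make the structural normalization idea concrete, here’s a minimal sketch with made-up theories, credences, and choice-worthiness numbers. It shows the range version: rescale each theory’s scores so the options span [0, 1], then maximize the credence-weighted expectation. A variance version would instead divide each theory’s scores by their standard deviation.

```python
# Range normalization for maximizing expected choice-worthiness (MEC),
# with purely hypothetical theories and numbers.

credences = {"theory_A": 0.7, "theory_B": 0.3}

# Raw choice-worthiness each theory assigns to each option, on its own
# arbitrary scale (this is the intertheoretic comparison problem).
raw_scores = {
    "theory_A": {"donate": 10.0, "save": 4.0, "spend": 0.0},
    "theory_B": {"donate": -50.0, "save": 100.0, "spend": 0.0},
}


def range_normalize(scores: dict) -> dict:
    """Rescale one theory's scores so its worst option gets 0 and best gets 1."""
    lo, hi = min(scores.values()), max(scores.values())
    return {opt: (v - lo) / (hi - lo) for opt, v in scores.items()}


normalized = {t: range_normalize(s) for t, s in raw_scores.items()}
expected = {
    opt: sum(credences[t] * normalized[t][opt] for t in credences)
    for opt in raw_scores["theory_A"]
}
print(expected, "->", max(expected, key=expected.get))  # 'donate' wins here
```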
Arden Koehler: So you just mentioned one alternative way for approaching intertheoretic value comparisons. Is that your preferred solution to the puzzle you presented earlier about how to make these comparisons?
Hilary Greaves: No, it isn’t mine. It is the preferred solution of lots of the people I talk to. I think I’m quite idiosyncratic on this. So the proposal is, if you like: here’s an off-the-shelf recipe for settling the intertheoretic value comparisons, and that just is the one that you should use. Whereas I’m more inclined to take a much more permissive view, in some sense, where the point is rather that this question of intertheoretic value comparisons is just, unfortunately, a thing you have to form a view on when you face moral uncertainty, even though it wasn’t decision-relevant in the absence of moral uncertainty. So it’s an extra component of your view that you have to get definite about if you want to deal with moral uncertainty.
Hilary Greaves: So that’s, if you like, an annoying extra task that the agent has to take on, but on my way of thinking about it, it’s a thing that the agent has to settle for themselves. And what’s awkward about it is, on my view, there isn’t any single recipe that tells you how to do it. The structural normalization program is engaged in a different game: it, in some sense, takes it for granted that we have to have a recipe, and then the structural thing is a prescription for which recipe is the right one.
Arden Koehler: Just to make sure I understand the difference between these two proposals. So on the structural view, how to make these comparisons is determined somehow by the structures of the moral theories. Maybe, in the super simple case where there were just two theories that each said there was a best action and a worst action or something, you can compare the best actions of the two theories and the worst actions of the two theories, and that’s what you ought to do. And then your view is more, “No, there are just substantive judgments, maybe even moral judgments, about what’s approximately equally morally important: what kind of consequence, if consequentialism is true, is equally morally important to the importance of some situation you can bring about if some deontological theory is true.” Is that right?
Hilary Greaves: Roughly that. I’d be squeamish about calling them moral judgments because they’re not things that come from any particular single moral theory, but they’re a bit like that. They’re judgments that the agent maybe subjectively has to make for themselves on the view that I’m inclined towards.
Arden Koehler: So just an example would be: I judge that the badness, or the wrongness, of murdering on a deontological view is approximately equal to the badness of some amount of suffering on a consequentialist view.
Hilary Greaves: Yeah.
Robert Wiblin: What about the parliamentary approach, which a lot of people might’ve heard of, and which I think Nick Bostrom’s talked about?
Hilary Greaves: I think it’s a pretty vague term. So when I said I’d heard people interested in taking approaches to moral uncertainty that maybe model the situation as one of interpersonal disagreement, some of the discussion I had in mind was discussion of the parliamentary model, but I don’t think there is a well-defined thing called the parliamentary model. It’s more that people who use that term are interested in this possibility of basing moral uncertainty on the model of disagreement. But then there’s a question of what actually follows from that. And again, part of the reason we wrote that paper was that it seems really unclear what’s going to follow from that. It seems like there are lots of choice points in how you make concrete a proposal to “treat it like disagreement”, because of course there are also zillions of different ways you can treat interpersonal disagreement.
Hilary Greaves: So really, in the paper, we were just trying to get a start on one way of making things concrete enough that we can actually see what the implications are. And it was quite instructive for me, because I think at least some of the claims out there are overblown: claims about how obviously a parliamentary approach is going to get rid of the problem of fanaticism, or obviously a parliamentary approach is not going to have any problem with interpersonal comparisons. It’s not obvious at all. The devil is in the details, and lots of those things, or close cousins of those things, are still problematic if you do bargaining theory in the way that’s captured by the Nash bargaining solution. Now of course there are lots of other ways you could do it, but my overriding thought is still, “Look, the devil is in the details. And if you don’t like the Nash bargaining solution, then to have any idea what’s going to happen under a so-called parliamentary approach, you need some other equally precise model and you need to crank through the details to see what happens”.
Arden Koehler: And so is it right that you found that the Nash bargaining solution wasn’t solving all the problems, and in particular didn’t seem as good as maximize expected choice-worthiness, so then you’re like, “Well, I’m pessimistic, or maybe withholding judgment, on all other possible versions of the parliamentary model or the disagreement model until somebody comes up with a concrete proposal”?
Hilary Greaves: That’s a reasonable summary. It makes it sound more adversarial than I’d prefer to be. I tried. I did one thing. I spent quite a lot of time working through the details. It wasn’t arbitrary which one I picked. The one I picked was the thing that almost all bargaining theorists will take off the shelf if they want a standard solution to a bargaining problem. But there’s no claim there that there aren’t other possibilities. And given that all of the approaches to moral uncertainty seem at least somewhat unsatisfactory, if only because there’s this fanaticism thing and we’re not quite sure what to say about that, it would definitely be interesting to see more work on this. And I’m definitely open to the possibility that modeling “parliament” in some completely different way might generate more interesting results.
Robert Wiblin: This has all been at the level of theory so far. Have you formed any views on what you would actually like to see people do differently in the world given moral uncertainty? Or is that just a different level of analysis you haven’t gotten to yet?
Hilary Greaves: Yeah. I don’t know how wrong people get this. Maybe there are some occasions where people have some tendency towards implementing “My Favorite Theory” in practice. And if so, then I think a move away from that would be pretty good. But I think, in general, what people actually do is probably implicitly pretty close to what would be recommended by the maximize expected choice-worthiness approach. That is, maybe they have some inclination or some presumption in favor of their favorite theory, except when that theory recommends something that it only slightly prefers and some other theory they have a little credence in says doing that thing would be absolutely awful. Then people do get squeamish, and that’s appropriate according to the maximize expected choice-worthiness approach. So I’m not sure people are terrible at taking it into account in practice.
Arden Koehler: So before we move on, it seems really complicated to come up with what you should do under these different theories of moral uncertainty. Do you have any heuristic or guidance for busy people who would like to take moral uncertainty into account in their decisions, but aren’t going to go through all of this calculation?
Hilary Greaves: Yeah, it’s tricky. So the natural thing to suggest would be something like, “Well, why don’t you look for options that are robustly good across a broad range of moral theories?” Because then it seems quite unlikely that, whatever the true theory of moral uncertainty is, it’s going to undermine the conclusion that that’s really good. I think that’s a reasonable heuristic, but it is a bit of a dangerous one, because it is sometimes the case that the appropriate thing to do is gamble. Let’s say there’s some intervention you could fund, and you have only 30% credence in the theory according to which it’s the best thing. However, if it is the best thing, it’s vastly better than everything else, and if it’s not the best thing, it’s only a little bit worse than the others. That might be a case where you don’t want to insist on robustness. So yeah, as a heuristic, busy actors should mostly look for robustness, but keep your eye out for these exceptions.
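That exception is easy to check with arithmetic. In this hypothetical sketch (our numbers, and it simply assumes the two theories’ choice-worthiness scores have already been put on a common scale), a gamble backed by only 30% credence beats a robustly decent option on expected choice-worthiness, exactly because its upside is huge and its downside mild.

```python
# The exception to the robustness heuristic: sometimes the gamble wins.

credences = {"theory_X": 0.3, "theory_Y": 0.7}

# Choice-worthiness of each option under each theory (hypothetical numbers
# on an assumed common scale).
scores = {
    "gamble": {"theory_X": 1000.0, "theory_Y": 8.0},  # vastly best if X, slightly worse if Y
    "robust": {"theory_X": 10.0, "theory_Y": 10.0},   # decent on both theories
}

for option, by_theory in scores.items():
    ev = sum(credences[t] * by_theory[t] for t in credences)
    print(f"{option}: expected choice-worthiness = {ev:.1f}")
# gamble: 0.3 * 1000 + 0.7 * 8 = 305.6, which beats robust at 10.0
```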
Comparing Existence and Non-Existence [01:30:58]
Robert Wiblin: All right, so let’s move on to another paper that you’ve been working on recently with John Cusbert called “Comparing Existence and Non-Existence”. It’s a pretty complex paper, and I think people are going to have to read through it if they want to fully understand it. But that said, we will, I think, be able to guide people through the main points. So if I start, what’s the question you tackle in the paper, and why do you think it’s important?
Hilary Greaves: The question we’re interested in is a question about whether some states of affairs can be better than other states of affairs for particular people. So the background distinction here is between what’s just better full stop, and what’s better for a particular person. So a world in which I get all the cake might be better for me, even though it’s not better overall; it would be better to share things out.
Hilary Greaves: Okay, so then the particular case of interest is what about if we’re comparing, say, the actual world, the actual state of affairs, with an alternative merely possible state of affairs in which I was never born in the first place. Does it make any sense to say that the actual world is better for me than that other one, where I was never born? So what we call existence comparativism would say, “Yeah, that makes sense.” And that can be true if my actual life is better than nothing. If it’s a good life, I’m pretty well off, I’ve got a nice family, all kinds of nice things, then I feel lucky to be born. And that makes sense on this view, because the actual state of affairs is better for me than one in which I was never born in the first place.
Hilary Greaves: So I am pretty sympathetic to that view myself. But a lot of people think that view is just incoherent. So there are a couple of arguments in the ethics literature that say, “Even if you do feel lucky to be born, you’re going to have to explain that feeling some other way, because it makes no sense to compare a state of affairs in which you exist to one in which you don’t exist in terms of how good they are for you.”
Robert Wiblin: What’s the relationship between this and the Epicurean idea that it’s not bad to die, because once you’re dead it can no longer be bad for you? I think that’s one that a lot of people will have heard, which I suppose is subtly different, but a bit similar.
Hilary Greaves: Yeah. It’s similar in flavor. I mean, I guess the difference is that in the Epicurean case, the claim makes sense whether or not it’s true. It makes sense by everybody’s lights to say that the actual world is better for me than one in which I die earlier, because at least at some time or other I exist in both of those worlds. But there’s supposed to be some special problem in a case where one of the worlds you’re comparing is one where the person was never born in the first place. There’s a worry that there’s an absence of the wellbeing subject that would be needed for that comparison to make sense in that case.
Robert Wiblin: Yeah, it’s interesting. I think the puzzle is kind of easy to grasp on a common-sense level, and people maybe have both of these intuitions and flip between them: they can see how it could be bad for them if they had never been born, but they also have the sense that that doesn’t quite make sense. So while the solution here is a bit technical, the nature of the problem is very obvious.
Hilary Greaves: Right. So, sorry, you also asked me why it matters. I think the key thing is that it could potentially make quite a radical difference to which so-called population axiology you adopt, that is, when you’re comparing two situations where there’s a big difference in how many people ever get to be born. If you think that being born with a good life is, obviously, better for the people in question than not being born, then it’s really hard not to be led to something like a totalist population axiology, where in particular you’re going to think that if the future would be good in the absence of extinction, then premature human extinction is an astronomically bad thing. Because look at all those merely possible people who never got to exist and who are thereby, in a morally relevant sense, losing out as a result of the extinction event.
Hilary Greaves: Whereas if you think those kinds of comparisons don’t make sense, then you’re more likely to be in a frame of mind where there’s no loser in a case of premature extinction, except of course the people who are killed in the process of extinction. But that’s not normally the thing that we’re focusing on. And if there’s no loser, then it might seem strange to have anything like a totalist population axiology that says an event like that is astronomically bad.
Arden Koehler: So just to clarify, the existence comparativism claim says that it can be worse or better for somebody to exist rather than not, and that it can be worse or better for them to not exist rather than to exist. So I guess maybe those are the same thing, but the second one sounds weirder. Like when I think, “Oh, those poor non-existent people. They didn’t get to exist, or they’re not getting to exist right now. How terrible for them.” That seems weirder than saying, “Oh, it’s so great for me that I get to exist rather than not.” Could somebody think the first thing, that it’s good to get to exist when you do get to exist, but not think that it’s bad for you if you don’t get to exist?
Hilary Greaves: Yeah. There are people who think that. So the character I’m calling the existence comparativist is the one who goes all the way and accepts both of those things. But you’re right that the second thing strikes people as stranger than the first, and even the existence comparativists will probably resist some of the language that you use to state the second thing. They might find it weird to say, “It’s terrible not to exist.” Maybe terrible is an inappropriate word. But they would say it’s worse for some non-existent people that they didn’t get to exist if they would otherwise have had bad lives. Maybe not terrible, but worse.
Robert Wiblin: Yeah. Arden and I got into a comments thread on the document here where I was saying, “How can these things even be different?” One is just saying it can be better or worse for you to exist rather than not. How can that be true without the other thing being true as well? If A can be better or worse than B, how could it be the case that B can’t be worse or better than A? That seems inconsistent, which I guess is kind of setting up the puzzle: people want to say the first, but not the second, and it’s like, “But aren’t these two things actually functionally equivalent?”
Hilary Greaves: Yeah. Yeah. Good. So a lot of people do go through a chain of reasoning a bit like that. Their main argument for saying that it can’t be better for us that we exist rather than not is that if that were true, then this other really weird thing would have to be true as well, and clearly that’s crazy. That’s the common thread of argument in the literature that we’re responding to with this paper.
Arden Koehler: I guess that doesn’t quite make sense to me. So I have a sense that, well, if the problem is that it doesn’t make sense to say, of people who don’t exist, that states of affairs are better or worse for them, if that’s the core problem, then it seems like it should be okay to say that, for people who exist, it can be better to exist, but not say it for people who don’t exist.
Hilary Greaves: Right. So that’s the position that’s taken by some authors in this literature, and we call it variantism in the paper. It’s a theory according to which the truth value of a statement like “This is better than that for Arden” can vary from one possible world to another. The pushback on that is just something like, “Well, doesn’t this seem like the kind of sentence whose truth value shouldn’t vary from one possible world to another? It’s just comparing how good two possible worlds are for a given person. Why should it make a difference which world is actual for the purpose of a sentence like that?”
Robert Wiblin: Yeah. I mean, I feel like I never think in these terms of like, “Is this thing better for this person?” Because I just don’t think about ethics or morality in terms of the goodness of things for people. I just think about it in terms of, “Is the state of the universe, is this configuration of matter better or worse in absolute terms?”
Arden Koehler: Rob is perfectly impartial.
Robert Wiblin: Well, sorry, unless it’s me, okay. If it’s me, then I do think in these terms. But ethics-wise, I guess I’m not drawn to using this language in the first place. So maybe this isn’t the kind of topic that I would naturally dive into so much, but I don’t know how much of an exception I am in thinking about it in terms of states of the universe, rather than goodness of things for people.
Hilary Greaves: Can I try and draw you into it then? Because I think you–
Robert Wiblin: Sure. Yeah, go for it.
Hilary Greaves: … should be drawn into it. So even if what you want to get at the end of the day is an answer about which states of affairs are better full stop, rather than which are better for people, plausibly a route to figuring that out is going to go via considerations of which things are better for people. So for example, let’s suppose we’re talking about the value of equality. It’s a standard argument that crops up in the discussion of egalitarian theories in moral philosophy. Suppose that Arden is well-off and you’re okay, but worse off than she is. Why would it not be an improvement to the state of affairs full stop, not for anyone, but full stop, to level her down so that her life is as mediocre as yours?
Hilary Greaves: A natural answer to that is, “Well, it wouldn’t be better overall for that to happen, because it wouldn’t be better for anyone. It wouldn’t be better for Rob. It wouldn’t improve his wellbeing. And we already said it wouldn’t be better for Arden. In fact, it’ll be worse for Arden.” So if we’re willing to indulge in talk of what’s better and worse for people on the way to reaching conclusions about what’s better and worse overall, then there are certain principles we can use for getting concrete conclusions about the thing we ultimately care about out of this kind of talk. So even the perfectly impartial Rob, I claim, should care about theories that involve this kind of talk along the way.
Robert Wiblin: Okay. So you’re saying a nice shortcut can often be something like the Pareto principle: that if a change makes one person better off and no one else worse off, then it’s an improvement. I think we’ve talked about this on the show before, that that’s one way people sometimes try to get out of infinite ethics challenges: thinking, “Well, if at least one person’s better off and no one else is worse off, then we should view that as an improvement, even if we still have infinite good and bad.” But if we don’t have people in the picture, or if we just reject that entire framework, then we can’t use that trick, and maybe we would like to.
Hilary Greaves: If you refuse to even talk in terms of what’s better or worse for people, that is more or less to say you refuse to have a place for people’s wellbeing levels in states of affairs in the first place. Then you can’t even formulate a theory like utilitarianism, and it’s unclear to me how you would go about formulating any account of which states of affairs are better or worse overall, other than one that doesn’t care about people’s wellbeing at all. You know, if you only care that there are 16 rocks in existence, fine, that theory doesn’t need to indulge in talk of better or worse for people. But if you want to be anything remotely like a utilitarian, then you’ll need this kind of theorizing.
Arden Koehler: Well, can’t you still be the kind of utilitarian that… I think sometimes people speak derisively of this kind of utilitarianism, where you sort of see people as containers for happiness. So you just want to see how much happiness there is in the world, and you don’t really care who it belongs to. So you don’t need to use this concept of better or worse for people; you just need “How much happiness is there?”, basically.
Robert Wiblin: Yeah. You just care about the total amount of water in jugs, but the jugs themselves are not super material.
Hilary Greaves: I don’t think that container objection really has anything to do with this. I mean, it seems like if it’s not utilitarianism, it could be a prioritarian theory, it could be an egalitarian theory, it could even be a sufficientarian theory, it could be more or less anything. All of those theories, whether you think of them as objectionably containery or unobjectionably containery or not containery at all, start by saying, “Oh, you want me to evaluate the state of affairs? Let me first write down how well off each person is”, in wellbeing terms, obviously, not in material terms, in the state of affairs in question.
Hilary Greaves: And as soon as you’re doing that, well, wellbeing just is a representation of how good some states of affairs are for the various people. So all of those theories, I claim at least, are up to their elbows in this enterprise of talking about better or worse for people. If you’re thinking of some theory that doesn’t have any place for this kind of talk, I think it’s a lot more radical than that way of posing the question suggested. It’s not merely that we’re not total utilitarians.
Arden Koehler: So are you saying something like, when I said you just look at the happiness of all of the people, you don’t look at whose happiness it is, are you thinking, “Well, in order to even look at how much happiness there is, somehow you’re already implicitly asking how well off each person is. You can’t get away from thinking in terms of the person’s happiness”? Is that right?
Hilary Greaves: I think what I was thinking when you said that was, it sounds as though you’re talking about what we might call an anonymity principle, where if you permute the people, but you keep all the wellbeing levels the same, then what you’ve done makes no moral difference. It makes no difference to value. It doesn’t change how good the world is. So you might or might not sign up to a principle like that. It’s true that utilitarianism happens to be one of the theories that signs up to an anonymity principle. But even if you had a theory that didn’t, and there are lots of theories that don’t, those theories too tend to proceed by first going from all the messy details of a possible world, including all the positions of all the atoms, and boiling it down to saying what wellbeing levels all the people have.
Hilary Greaves: If they think it matters which people have which wellbeing levels, then they keep that information, but they still include writing down what all the wellbeing levels are before they go on to saying, “And now I go on from here to my story about which states of affairs are better and worse overall than others”. So, I mean, I guess I just need to know more about what Rob’s account is of which states of affairs are better or worse, because at the moment I’m feeling pretty skeptical he would be in the line of work that he is if he’s got a theory that doesn’t have any place for an account of wellbeing.
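As an aside, the anonymity principle Greaves mentions is easy to state formally. A minimal sketch (our illustration): on any theory whose evaluation depends only on the multiset of wellbeing levels, such as simple totalism, permuting which person has which level changes nothing.

```python
# Anonymity: swapping people's wellbeing levels around leaves the
# evaluation unchanged on an anonymous axiology (here, totalism).
from itertools import permutations


def total_value(wellbeing_by_person: dict) -> float:
    """A toy anonymous axiology: just sum the wellbeing levels."""
    return sum(wellbeing_by_person.values())


people = ["Arden", "Rob", "Hilary"]
levels = [5.0, 2.0, 7.0]

values = {total_value(dict(zip(people, perm))) for perm in permutations(levels)}
print(values)  # one element: every assignment of levels to people scores the same
```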
Robert Wiblin: So maybe to try to give an intuition for why I don’t think about it in terms of people… What if you just think that what’s valuable is the experiences that are had, and whether they attach to any individual that you’re tracking over time just doesn’t seem so relevant?
Robert Wiblin: So I just want to tally up the feelings that are had, whether or not they’re attached to a person. I mean, I suppose maybe you have a question of whether you can have an experience that isn’t had by what you would describe as a person for these purposes. But anyway, you’d just be tallying up the experiences almost as if they were rocks, and then asking whether the rocks, the experiences, are good for a person is kind of neither here nor there.
Hilary Greaves: Yeah, that makes sense. I guess the pushback, as you sort of indicated, is likely to be: well, there can’t be experiences that aren’t had by persons, or if there could be, then for related reasons those would be ones you would want to exclude from the tallying up. If there isn’t a subject of the experience, then it’s not a thing that we want to count.
Hilary Greaves: And then it feels like once you’ve gone that far, the distinctive thing that’s being said by that view is that the person boundaries don’t count. But that, of course, is a view that many versions of utilitarianism will agree with. So it’s not clear to me that you need to be so hostile to the idea of things being better for people.
Hilary Greaves: Suppose you decided where to draw the person boundaries for the purpose of a person-based description however you like, and you’ve got a theory according to which it doesn’t really matter how you draw them. You would still think, even if it wasn’t normally a central part of your theorizing, that if you had those same people and you could make things a bit better for one of the people without making things worse for anyone, then that would be an improvement overall.
Hilary Greaves: So yeah, I mean, I think I mostly agree. If you’ve got that framework which really makes the boundaries between persons otiose, then you could just avoid talking about them altogether. But I think you’ve then got a view according to which, granted, you don’t think persons are fundamental and you don’t think person boundaries are that important, still, when you do start talking about persons, which you probably will sometimes, even if only in the pub, you’ve got a view that agrees with existence comparativism. So you might still want to have something to say against the usual arguments for the claim that that can’t be true.
Robert Wiblin: Yeah, that makes sense.
Robert Wiblin: Okay, well let’s maybe come back to this one, because I’m aware that we’ve introduced the problem and now I’ve diverted us off onto this tangent. We haven’t even talked about what your purported solution to the problem is. So maybe we should dive into that next.
Hilary Greaves: Sure. I mean, mostly what the paper is doing is saying the existing arguments in the literature against existence comparativism aren’t very good. And we wrote the paper because it feels like the feeling in the room in this literature is that most people reject existence comparativism. John and I disagree with one another on whether we’re inclined to favor existence comparativism at the end of the day, but we were united in the belief that the usual arguments against it are not very good. So that’s the key thing this paper’s saying.
Arden Koehler: So I think we’ve sort of alluded to the usual arguments, but do you think you could state them in a clearer form, or maybe just the most important one?
Hilary Greaves: Yeah. I mean, I think there are two. So one is one that we call the semantic argument. This one says, firstly: if you’re talking about sentences of the form “a possible world, or a possible state of affairs, A is better for Rob than B”, those are the kind of sentences whose truth value doesn’t vary from one world to another. So it’s not going to be the case that that sentence is true in a world where Rob exists, but not true in a world where he doesn’t exist. So that’s the first premise.
Arden Koehler: So that’s the rejection of the thing I wanted to go over.
Hilary Greaves: That’s right. Yeah. So rejecting that thing is going to be your response to this argument. And then the second premise says: if you’re looking at these sentences, “A is better than B for Rob”, what that sentence is doing is saying that a certain relation holds between two possible worlds and Rob. And relations can’t hold unless the things that they’re relating exist. So that kind of sentence, the second premise continues, can’t be true unless Rob exists. So in particular, if you’re thinking of B as a world where Rob doesn’t exist, then at least in world B this sentence can’t be true, according to the second premise of the argument. And then when you put those together, supposing A is the actual world, where Rob exists and is happy, and B is a different world where Rob was never born in the first place, you get the conclusion that A can’t be better than B for Rob, even in the actual world.
Arden Koehler: So this is put in terms of sentences and stuff. But I’m wondering, is there an intuitive way to gloss it that’s just something like, “Look, in order for something to be better for somebody than something else, they have to stand in a relation to those two things, because betterness-for is a relation. And if they don’t exist, then there’s just nothing: no relation can hold without the things that are being related”?
Hilary Greaves: Yeah. You can put it in terms of not sentences. And indeed that’s maybe even more natural initially. The reason why we start talking about truth values of sentences is that if you don’t do that, things start getting quite complicated. You get tangled up in chains of counterfactuals. But I can have a go at doing that if you want. So the formulation that doesn’t involve talking about sentences and truth values would be something like, firstly, if A is better than B for Rob, then A still would have been better than B for Rob, even if B had been actual. And then the second premise would be, if A is better than B for Rob, then Rob exists. But then you have to think about holding that fixed across lots of different possible worlds. So you can state it in that way and the argument goes through as before. We just found that things tend to get unnecessarily convoluted and confusing when it gets into the details doing it in that way, but nothing really hangs on it.
Arden Koehler: I think the reason I wanted to rephrase it was I think that the phrase “semantic argument”, and then talking about sentences and truth values might make some people think of sort of sophistry or like a trick. Like, “Oh, it’s like a trick of language or something. Our language happens to work this way.” And then we try to get this ethical conclusion out of that and that’s really suspect. So I was sort of wanting to make sure that, in fact, there is an argument that isn’t really about language or the structure of sentences.
Hilary Greaves: Yeah. That’s a good point. So calling it a semantic argument might be unhelpful for that reason.
Philosophers who reject existence comparativism [01:48:56]
Robert Wiblin: All right. So is there a simple way of explaining why you think philosophers are maybe being too quick to reject existence comparativism?
Hilary Greaves: Sure. I mean, so the first point about that argument as stated is that the argument proves too much. That is, it’s supposed to be an argument for somebody who wants to end up saying the actual world is not better for me than one in which I wasn’t born. That person is supposed to really like both of the premises of this argument, but not like existence comparativism. But actually, it turns out that even that person had better not like both of the premises of this argument, because it’s not only the case that it rules out existence comparativism, it’s also the case that it does a bunch of other crazy stuff that even the person who dislikes existence comparativism is not going to want.
Hilary Greaves: So suppose, for instance, we compare two possible worlds, in both of which Rob exists. So suppose there’s the actual world, and then there’s another world where Rob was born but never had an education, and suppose that other world is worse for Rob. And now let’s also introduce a third world, call it C, where Rob was never born in the first place. So, okay: A, the actual world; B, a world where Rob was born, but things were worse for him; and C, a world where he doesn’t exist.
Hilary Greaves: If you accept both of the premises of the argument as stated so far, you’re still going to be forced to the conclusion that A is not better for Rob than B. Because look, let’s think about what goes on at this other world, C, where Rob wasn’t born. It can’t be true in that world that A is better than B for Rob, because Rob isn’t there to be one of the relata. And because the truth values of this kind of sentence don’t vary across worlds, according to this argument, if it’s not true in C, then it can’t be true anywhere, not even in A or B. And that’s not something that anybody wants to accept. So we can already see that something’s gone wrong.
Hilary Greaves: What the paper’s interested in doing is seeing whether there’s another reasonable response, because we think that this other premise, the one that looks really compelling at first sight and says, “Surely he’s got to exist in order to be there as a relatum”, is a lot less innocent than it seems, and there are independent reasons to doubt it.
Robert Wiblin: Yeah. I thought it sounded like you were saying that if you accept that, then you also have to accept that you can’t talk about any properties of things that don’t exist, which we actually kind of do all the time when talking about hypothetical things that could exist. Did I understand that right?
Hilary Greaves: I think you’re a step ahead of me, but I was going to go on and say that thing in a few minutes, in response to one of the other questions I thought you were going to ask me.
Arden Koehler: What did you want to say before about why that second premise might be a bit suspicious?
Hilary Greaves: Oh, so this is where it really does get quite nitty-gritty and detailed. And a lot of the paper is about trying to draw a map of the various ways in which it might go wrong. I mean, one part of the short answer is: it does look really compelling at first sight, but actually, we know from general discussions in metaphysics that compelling-looking principles like this one lead to all kinds of crazy conclusions, quite aside from issues of population ethics. So if we’ve taken our graduate-level metaphysics class, then the red flag should have gone up when somebody said, “I think this premise that the relatum has to exist is completely compelling. Nobody could possibly deny that.”
Arden Koehler: So isn’t the reason that you think that’s suspicious, or that points in the direction of it being suspicious, the thing that Rob said: that things seem to have properties even when they don’t exist, or we seem to say things about things even when they don’t exist, and those things seem to be true?
Hilary Greaves: Good. Yeah. So one thing that goes on, in general, is that some of the things we want to say are what you might call modal. That is, they’re not solely about what’s going on in the actual world. They’re also about what’s going on in other possible worlds, as philosophers will put it. So a familiar point, in particular from the writings of David Lewis, one of the most famous philosophers of the 20th century, is that in order to tell an adequate story about what’s going on in this kind of talk, like “it might’ve rained yesterday”, for instance, we need to be allowing ourselves to talk about non-actual things.
Hilary Greaves: So we can’t have these hard-and-fast constraints everywhere, where everything we say can be cashed out entirely in terms of what things there are, what properties they have, and what relations they stand in to one another just in the actual world. A lot of our talk is in fact concerned with so-called modal things.
Hilary Greaves: So that’s part of the reason for being suspicious of this argument. To get the argument onto solid ground, you would need to have some reason for thinking that this particular kind of talk, of things being better and worse for people, has to be the really simple, basic stuff, where we’re just talking about the actual world. It can’t be a modal kind of talk. And then the danger is that in this dialectical context, insisting on that is just question-begging, right?
Hilary Greaves: If we already know that a lot of the talk we indulge in is modal, and we know that if we were prepared to grant that this bit of discourse could be modal as well there would be no obstacle to existence comparativism, then it feels like somebody who started off wanting to be sympathetic to existence comparativism would have no reason to agree that this must be non-modal talk. There’s no obvious reason why that would have to be so. All they would have to say is, “Well, I already knew that some talk was modal. My position is that this bit of talk is modal too, so no obstacles to existence comparativism, and it doesn’t look like anything’s been said against that.”
Hilary Greaves: I mean, it is a little bit uncomfortable saying that, because we do want to say, in some cases, “Come off it, that can’t be true because the relatum isn’t there.” So if I say, “Rob talks to Hilary”, we want it to follow from that that Rob exists and that Hilary exists. That seems like a kind of straightforwardly non-modal sentence, and it seems like we can know that it’s a non-modal sentence. So it’s also a little bit uncomfortable just insisting, without giving any reasons, that “A is better than B for Rob” is non-modal.
Hilary Greaves: But it’s a little bit unclear that anybody in this discussion has any independent grip on how to classify the things we’re talking about as modal or non-modal. That is, it seems that everybody’s just working back from whether they’re antecedently inclined to believe existence comparativism or the other thing, what you might call anti-comparativism, to their assessment of whether these bits of discourse are modal or not.
Arden Koehler: So just to clarify, the anti-comparativist would still admit that in some sense the sentence “it’s better for Rob to exist than not” is modal, because it requires that we think of a world in which Rob doesn’t exist and ask whether that’s better or worse for him, right? They would just say, “Well, the subject of the sentence has to exist.” It seems very relevantly different from “Rob talks to Hilary”, because it’s comparing two things, one of which is actual and one of which is not. So it seems like everyone should admit that it’s different from that sentence.
Hilary Greaves: That’s true. Yeah. I guess the question is whether there’s still a difference when you zoom in on the Rob bit of the sentence and forget about the A and B. It’s clear that the A and B bit is about comparing two different possible states of affairs, and it can’t be that both of those are actual. But you might think you could set those aside and zoom in on the Rob bit, and have an intuition that that bit is non-modal.
Robert Wiblin: Okay. So what’s the framework that you present in this paper, or book chapter, to try to deal with this and give people a different way of looking at it, one in which existence comparativism makes sense?
Hilary Greaves: Okay. Well, as you said, it’s a pretty complex chapter and we do a lot of things, because really what we’re trying to do is open up the debate and suggest some new things that should be explored, rather than outline a definitive position. But a bunch of the chapter centers on the idea of what we call re-analysis. This is a very familiar thing to philosophers of language. You often find sentences all over the place where you’d go wrong if you were really naive about going from the grammatical structure of a sentence to which things would have to exist in order for the sentence to be true. I mean, I’m putting it in terms of sentences again, but hopefully this isn’t too objectionable.
Arden Koehler: I forgive you.
Hilary Greaves: Thank you. You often find sentences where, if you went directly from grammatical structure to what you might call ontological commitments, which things have to exist for the sentence to be true, then you’d get crazy conclusions. So one example would be something like “the average woman has 2.2 children”. Think about the grammatical structure of that sentence, the kind of structure you’re considering when you assess the sentence as grammatical or ungrammatical. “The average woman” is a noun phrase, and normally we think that noun phrases refer to objects. So you might be tempted to think there really is such a thing as the average woman. But of course there isn’t, and that’s not what that sentence is trying to do. It’s doing something much more complicated: it’s quantifying over all the women there are and doing some kind of calculation.
Hilary Greaves: So that kind of sentence teaches us, in general, that we can’t always go directly from grammatical structure to a story about what the world has to be like in order for the sentence to be true. So a bunch of what we do in the paper is explore ways in which one might take sentences of the form “A is better than B for Rob” not at face value, if you like. The thought that existence comparativism is absurd threatens to follow, if at all, only if you assume that what that sentence is doing is stating that a certain three-place relation holds, where in particular Rob is one of the things standing in the relation.
Hilary Greaves: But inspired by sentences like the one about the average woman, you might think, “Well, it’s a completely open question whether that’s what that sentence is saying about the way the world is.” It might be something that doesn’t involve a three-place relation at all. And so, once this option of re-analysis is on the table, it’s completely open that the sentence could be understood in a way that doesn’t require the existence of Rob. A bunch of the paper explores various possibilities in that domain.
Arden Koehler: So stepping back from the intricate arguments, I had a reaction to the idea that existence comparativism is true that I would just love to hear your thoughts on. I had the thought: well, if it can be better for someone to exist and worse for someone not to exist, or vice versa, then there’s a sort of uncomfortable sense that there are just so many possible people now for whom it’s either better or worse that they don’t exist.
Arden Koehler: And when I’m thinking about how good the world is, I have to sort of take into account how good or bad it is for all of these possible people. So I have to be like, all these children that I could have had already, assuming that they would have had good lives, it’s worse for them that they don’t exist. So when I think about how good the world is, I have to be like, well, it’s bad for them. Is that a sensible reaction? Is that a real concern about the view? I guess, intuitively, I think that’s weird and I don’t like it, but what do you think?
Hilary Greaves: So I feel like I want to say it’s not a sensible reaction, but that sounds a bit rude, doesn’t it?
Arden Koehler: I’m fully willing to accept that it’s not a sensible reaction.
Hilary Greaves: Okay. I think I want to say two things. So one is, if you want to think in terms of how good the world is or how bad the world is, and that’s a big if, it’s going to be unproblematic on the kind of view that existence comparativists are inclined to take. Because when they’re thinking about how you should treat the wellbeing of people who don’t exist at all, they’re going to say people who don’t exist have zero wellbeing. And so it’s trivial to add that up. It’s not going to make any difference to your calculation, and you know in advance it’s not going to make any difference to your calculation. So you’re quite safe proceeding on the basis of ignoring those people. So that’s one half of it.
Hilary Greaves: The second thing I wanted to say is, you shouldn’t anyway be thinking in terms of how good or how bad the actual world is. You should be thinking about what the alternative thing you should have done is, which world that would have actualized, and comparing that other world to the actual one. And then it’s not even the case any longer that there are indefinite numbers of people whose interests are at stake, as you put it. For any particular way you could have gone about having children already, suppose there would have been three of them. That’s only three extra people you have to consider, and you’re already considering 7.5 billion or so, well, more of course if we’re doing a timeless calculation. So that’s not so hard.
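The first half of that answer, that assigning zero wellbeing to never-existing people leaves a totalist evaluation untouched, is just arithmetic. A trivial sketch with made-up numbers:

```python
# On the existence comparativist's bookkeeping, never-existing people get
# wellbeing zero, so including any number of them changes nothing.

actual_people = [6.0, 3.5, 8.0]       # wellbeing levels of people who exist
merely_possible = [0.0] * 1_000_000   # possible people who never exist

print(sum(actual_people))                          # 17.5
print(sum(actual_people) + sum(merely_possible))   # still 17.5
```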
Arden Koehler: So the main thrust of that, I think, was, “Look, when you’re talking about how good things are in this world, none of these comparisons matter anyway. And if you’re thinking about the value of your action, it’s not going to be the case that its value is partly determined by these indefinite numbers of possible people that you might or might not have brought into existence, because our actions have more limited scope.”
Hilary Greaves: I might be misunderstanding, but I didn’t think I was saying that. I mean, some of your actions might bring into existence enormous numbers of people. Consider again things that reduce extinction risk: there’s a chance that that, in a relevant sense, brings into existence an enormous number of people who wouldn’t have come into existence if you hadn’t reduced the risk.
Hilary Greaves: Then there could be a very large number of people whose interests you therefore have to take into account, at least on the kind of totalist population axiology that’s strongly suggested once one accepts existence comparativism. But it’s not infinite. When you said an indefinite number of people, I was thinking you meant an infinite number, and I wanted to push back on that bit.
Arden Koehler: Okay. That’s helpful. So just to be clear, it is still the case that on this view, I would need to say something like, “It’s worse for my three possible children that I didn’t have them.”
Hilary Greaves: Right.
Arden Koehler: Okay. So that part, which still sounds a little weird, but maybe it’s only that weird and not this other weird thing.
Hilary Greaves: Yeah. I mean, for what it’s worth, I don’t find it that weird, but I do acknowledge that that’s a controversial view.
Lives framework [02:01:52]
Robert Wiblin: So in the chapter, you kind of outline this lives framework, which you think might help to tackle this problem. Is there a simple way of explaining kind of what that framework is?
Hilary Greaves: Sure. I can have a go. So the distinction is supposed to be between thinking of persons as in some sense the most fundamental things in the story, versus thinking of the lives that persons lead as the more fundamental thing. If you’re doing your theorizing in terms of the persons framework, then you think: if I’m going to describe a state of affairs for the purposes of evaluating it, what I’m going to do first of all is say which people exist in that state of affairs, and then I’m going to say which wellbeing level each person has. But you might think, particularly if you subscribe to a principle on which it doesn’t at the end of the day matter morally which people had the wellbeing levels in question, that it only makes a difference morally how many people had this wellbeing level and how many people had that wellbeing level.
Hilary Greaves: So not everybody agrees with that, but if you do agree with it, then it’s quite natural to say, “Well, actually the most perspicuous, maybe the most fundamental, description of what’s going on doesn’t directly mention the persons at all. It just says which lives get lived, and how many copies of each life get lived, in the state of affairs in question.”
Hilary Greaves: So how would you think about what we’ve been calling personal betterness relations, the kind of thing we’re trying to talk about when we say this is better than that for Rob, in that so-called lives framework? Well, it looks like instead of comparing two states of affairs in terms of how good they are for Rob, you’re going to be doing a binary comparison, just relating two things. You’re going to be looking at the possible life Rob lives, if any, in the first state of affairs and the possible life that Rob lives, if any, in the second state of affairs, and then directly comparing those two things.
Hilary Greaves: And then it’s plausible, although it’s not inevitable, that if you’re doing things that way, then you can get positive comparisons being made even when one half of the comparison is a state in which Rob doesn’t exist. Because you might think it makes sense to talk about a non-life, as well as all the possible lives, when you’re doing these kinds of comparisons. So saying the life Rob actually lives is better than the non-life doesn’t seem a crazy thing to say. And in particular, there’s no objection to saying that stemming from the idea that Rob has to be there in order to stand in a relation, because in this kind of picture it’s not Rob that stands in the relation. It’s the two possible life-like objects instead.
Arden Koehler: Okay. So one question I have about this is, I guess I intuitively feel like it doesn’t quite solve the problem, because I think the person who really feels like things have to exist in order to stand in relations will just be like, “Well, you can’t compare a life to the non-existence of a life.” I mean, you can call it a non-life if you want, but it doesn’t exist by hypothesis, right? I mean, I guess maybe there’s a question: are you postulating the existence of some objects called “non-lives”?
Arden Koehler: And if those exist, then maybe you can compare them, but that’s weird. And if you don’t want to say that they exist, if they’re just the lack of lives, then I, as the person who was against existence comparativism in the first place, might say, “Well, this doesn’t help me, because now I’m having to compare a life with a certain level of wellbeing to a non-existent thing. And that’s no easier, because one of the things that’s supposed to stand in this relation still doesn’t exist.”
Hilary Greaves: Okay, good. So I want to say that the existence of lives that’s required for this framework is not as weird as maybe you’re taking it to be, and I actually think everybody is forced to this conclusion, not only the people who end up sympathetic to existence comparativism. Because consider again the case where we’re trying to compare two states of affairs where the person we’re interested in exists in both of them. Say the case of the actual world and the world where Rob got no education. We want to say the actual world is better for Rob than the other one. How would we do it in the lives framework?
Hilary Greaves: Well, we’d say the actual life that Rob’s here actually living is better than that other merely possible life. So you might say, “But hang on, that other possible life doesn’t exist.” And maybe there’s something to that thought, but clearly, that doesn’t stop us. Nobody wants that to stop us from saying the actual world is better for Rob than the one where he had no education. So what’s going on with this stuff? I think what’s going on with this stuff is that it’s much less problematic to say that both of those two lives exist than it is to say, “Rob exists in a world where he was never born.” So you think of lives as being maybe more like abstract objects, like numbers, where there’s a crucial difference between the life existing, which is what we need for the argument to make sense, and the life being lived by somebody, which is a quite different matter.
Robert Wiblin: So in some sense, there are lots of null lives, as you call them, or non-lives being lived all the time. In fact, there's an infinite number of non-lives being lived in some sense, or perhaps that's the wrong way to conceptualize this kind of object. It seems like I don't exist in a world where there is no Rob, but we could say, "Well, in the world that exists then, there is the life that includes no experiences, no anything. That is a life that is being lived; it's just that it doesn't feature anything."
Hilary Greaves: I think I'd be reluctant to say it's being lived, but yeah, I get where you're coming from. I mean, I tend to think of these lives as being types rather than tokens, as we put it. So the life is the kind of thing, and then there's a separate question of how many copies of a life are lived. Your actual life is lived once; your merely possible life where you didn't go to university is lived zero times, because even people who didn't go to university don't have lives that are otherwise exactly like yours.
Hilary Greaves: And then maybe the null life is special, if you like, in that it's never lived, because how could you live a null life? That sounds more like a contradiction. So I want to get off the boat before that last bit, because I know that otherwise my interlocutors are going to hammer me. But otherwise, yeah, I'm broadly on board with the spirit of what you said.
Arden Koehler: So one worry I still have, and then I will drop this: take the life that Rob actually lives and the life he would have lived if he'd had no education. That other life doesn't exist in the sense that it's not actual, it's not what actually happened, but it's still in the space of possibilities. Isn't there a special type of non-existence that a non-existent life has, that Rob's merely possible life doesn't have?
Hilary Greaves: I think there's something special about it, but it's not a special type of non-existence. Again, everybody in this debate, if they're going to talk in terms of lives at all, has to say that even lives that aren't lived by anyone nonetheless exist, so that they can make the kind of comparisons that everybody in this game wants to make. So I need to draw a distinction between the life existing and the life being lived. And then there is something special about the null life: something that makes it awkward to talk about the null life being lived by somebody in some other state of affairs. Maybe it's not the kind of thing that can ever be lived, but that doesn't stop it from existing. And arguably, this is a non-trivial choice point between people whose intuitions are sympathetic to existence comparativism in the first place and people whose intuitions are not. In this framework, the disagreement between those two characters might boil down to a disagreement about whether or not it makes sense for the null life to be compared, in terms of better or worse from a personal point of view, with a non-null life. But our point in writing this bit of the paper is that there's nothing remotely in the vicinity of the principle that the relatum has to exist for the relation to hold that decides that choice point. It just seems like a thing where one could go either way.
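To make the structure of the lives framework concrete, here is a toy sketch in Python. It is purely illustrative and not from Greaves' paper: the class names, the numeric wellbeing scale, and the single comparability flag are all assumptions of the sketch, chosen only to show where the choice point sits.

```python
# A toy sketch of the "lives framework": betterness-for-a-person is a
# relation between lives (abstract objects that exist whether or not
# anyone lives them), not a relation the person themselves stands in.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Life:
    """A life 'type': an abstract object, like a number, that exists
    whether or not anyone ever lives it."""
    description: str
    wellbeing: Optional[float]  # None marks the null life


NULL_LIFE = Life("the null life: no experiences, never lived", None)

# The choice point from the conversation: does it make sense to compare
# the null life with an ordinary life? Existence comparativists say yes;
# their opponents say no. Nothing about "relata must exist" settles this
# flag either way.
NULL_LIFE_IS_COMPARABLE = True


def better_for_person(life_a: Life, life_b: Life) -> Optional[bool]:
    """Directly compare two lives; returns True/False, or None when the
    comparison is taken to be undefined."""
    if life_a.wellbeing is None or life_b.wellbeing is None:
        if not NULL_LIFE_IS_COMPARABLE:
            return None  # the non-comparativist verdict
        # Treating the null life as wellbeing level zero is itself an
        # assumption of this sketch.
        a = life_a.wellbeing if life_a.wellbeing is not None else 0.0
        b = life_b.wellbeing if life_b.wellbeing is not None else 0.0
        return a > b
    return life_a.wellbeing > life_b.wellbeing


# Both comparisons relate two existing abstract objects, even though only
# the first life is ever actually lived:
actual = Life("Rob's actual life", 80.0)
no_education = Life("Rob's life with no education", 50.0)
print(better_for_person(actual, no_education))  # True
print(better_for_person(actual, NULL_LIFE))     # True, or None if the flag is False
```

The point the sketch tries to capture is that both comparisons relate objects that exist either way; whether the comparability flag should be set to True is exactly the open choice point Greaves describes.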
Robert Wiblin: All right. I think we should probably wrap up on this, but maybe just one final question: do you think this kind of line of argument is going to persuade people who previously disagreed?
Hilary Greaves: I don’t know. It’s very hard to tell. I mean, you always write these papers feeling in the heat of the moment, like clearly you’re onto something and surely everybody who reads your paper is going to agree that you’re onto something. And then people come up with all kinds of objections that you hadn’t foreseen. So I wouldn’t really like to guess. I mean, obviously, I hope that it will make a difference if only to open up the debate and get people talking about what seem to us more fruitful areas of the theoretical space. But we could just be misguided, so we’ll wait and see what responses we get.
Global priorities research [02:09:25]
Arden Koehler: Great. So let's move on and talk about global priorities research and the Global Priorities Institute. We got an audience question about the theory of change at the Global Priorities Institute. I think you talked about this with Rob last time: how you're hoping that the Global Priorities Institute will help launch this academic field of global priorities research, so that maybe people will be teaching more about these topics in universities, and there'll be more research and more collaboration. Any updates on that? Has that happened?
Hilary Greaves: A little bit, yeah. I mean, don't underestimate how slow the pace of progress in academia can be. On that kind of timescale, I wouldn't have expected to see too much concrete change, particularly for the things that go via papers getting published: that process invariably takes upwards of a year, even from the point where a paper is almost completely written.
Hilary Greaves: So we haven't seen very much by way of, say, other people writing papers responding to and citing ours, but that's mostly because we haven't yet had time to push our papers through the pipeline to actually being published, which is usually the point where other academics start working on them, as opposed to when we've just stuck them on the GPI website.
Hilary Greaves: I mean, there are definitely some promising indications. We've done a lot of outreach work, inviting other academics to collaborate with us and to attend workshops, and a lot of people have been very positive. We've also had a lot of informal information to the effect that other people are working on similar themes.
Hilary Greaves: We don't have any robust impact-tracking mechanisms in place yet to determine whether they're doing that because they came to a workshop, or whether they were doing it anyway and, since they know we're interested, they're just telling us about it.
Hilary Greaves: But yeah, there's a general sense, especially amongst younger people, which I guess is related to the fact that EA is quite a young movement, that a lot of younger researchers find it quite inspiring that one could be an academic and nonetheless contribute directly to the EA enterprise, instead of just doing it by donations.
Arden Koehler: Cool. One thing I'm curious about is whether there have been any other academic centers of global priorities research that are showing signs of cropping up. Right now, a lot of global priorities research, or at least a big majority of what's labeled that way, happens at Oxford. Are there any other universities that look like they might be centers in the future?
Hilary Greaves: Yeah. So there are definitely people we know who are interested in funding similar centers at other universities in the future.
Robert Wiblin: But is it mostly limited by the ability to find people with the right experience and capabilities to get those set up?
Hilary Greaves: I think so, yeah. I mean, I think in general, if somebody’s got the enthusiasm and the expertise to lead a center like this and the funding is forthcoming, which it feels like it would be, it’s not too hard to convince a university to host such a thing if it’s bringing in external funding.
Robert Wiblin: Has global priorities research managed to attract any vocal detractors? Surely it's when people start disliking you that you know you've really made it.
Hilary Greaves: I guess we haven’t made it to that extent yet. I mean, again, things are really slow in academia. Until you start publishing lots of papers, there’s not so much of a thing for people to object to.
Robert Wiblin: Yeah. Interesting. Do you have a sense of whether global priorities research is kind of more popular in the UK than the U.S. perhaps culturally?
Hilary Greaves: No, I don't. It's true that Oxford is serving as some kind of center for it at the moment, but I think that's contingent; it's just because we've got a critical mass of people doing it here. And when we run workshops, we host people from the U.S. as well as from the UK, and we don't notice any kind of statistical difference in enthusiasm level.
Arden Koehler: Rob, I’m curious, what was behind that question? Do you have the intuition that there’s a cultural affinity with global priorities research in the UK?
Robert Wiblin: I guess my understanding is that maybe consequentialism is more popular in the UK, at least in philosophy, and maybe in the U.S., people are more inclined towards deontology and theories of justice and so on. So that could make a difference.
Hilary Greaves: That’s interesting. I mean, I don’t have that sense about the background statistics either, but you might be right. I haven’t seen a survey either way.
Robert Wiblin: So last time, you were trying to attract more economists and more attention from economists, because they seemed to be maybe the missing piece in this research. Has that borne much fruit yet?
Hilary Greaves: This has been very slow for us. Until basically now, more or less literally this month, we haven't had any postdoctoral (that is, post-PhD) research staff in economics, as opposed to management staff. So we haven't been in a position to start churning out papers in economics global priorities research, if you want to call it that, ourselves. And I think that's just an absolutely essential brick in the edifice. Until we can start publishing papers ourselves, it's very hard even to communicate to external researchers what's supposed to be distinctive about the thing we're interested in. The easiest way to communicate that is to say, "Well, look, here are lots of examples. Are you interested in doing stuff like this? This is what we're interested in." And we're not at that point yet.
Hilary Greaves: So we have been doing a lot of outreach work, having one-on-one conversations with economists based all over the place and trying to get a sense of who might be interested. But for that hiring reason, it's been much, much slower to make progress in economics than in philosophy. Fingers crossed, maybe this will change in the next few years as we start actually building our own economics research team, but that remains to be seen.
Robert Wiblin: Yeah. Are there any roles or visiting opportunities at GPI at the moment that people should be aware of if they found this conversation interesting and would like to come and meet you all in person?
Hilary Greaves: Sure. I mean, now, obviously, is a difficult time to be asking that.
Robert Wiblin: Sure. We've taken that from our template questions about career options, and perhaps I hadn't fully engaged my brain while I was reading it.
Hilary Greaves: Yeah, I mean, we’re definitely looking to hire again in the upcoming academic year. So we’ll be opening rounds of postdoc hiring as we normally do in the fall for applications to close around about Christmas time. We’re hoping to host visitors again next summer. This year, we had to either cancel the visits we had scheduled or just do them remotely because of COVID-19. Hopefully, we’re able to do that next summer. It’s still a bit too early to tell.
Hilary Greaves: But yeah, I mean, in principle, if you set aside COVID, I guess the answer would be if you’re an early career researcher, by which we roughly mean pre-PhD or recent PhD, then we run what we call an early career conference programme in roughly June each year.
Hilary Greaves: And the natural thing to do is to apply for that, come and spend a month with us, and then you can work on research projects alongside one of our researchers, have lots of back-and-forth, get a sense of what the conversations in the office are, think about whether this is something you might want to do in the longer term.
Hilary Greaves: If you already know it’s the thing that you want to do and you’re at the stage of finishing a PhD or later, then one thing you could do is apply for a postdoctoral position at GPI. Look out for the adverts in the autumn.
Hilary Greaves: And if you're more senior than that, we're always interested in conversations about possible hiring with people who are interested in doing the same kind of research we're engaged in, and there are also lots of short-term visiting opportunities. So we're very keen for people like that to just reach out to us, and then we can have a conversation about what might be a good fit.
Arden Koehler: So if somebody wants to do global priorities research and maybe eventually work at GPI, or maybe they're never going to work at GPI, or not for a long time, what are some other places where they might consider trying to get good experience?
Hilary Greaves: I mean, it depends a little bit what kind of level of seniority we’re talking about. If we were talking, for instance, about where should I apply to grad school, I think, to a first approximation, it’s just the usual answer of try to get into the best grad school that you can.
Hilary Greaves: If you happen to be in an academic discipline, like maybe philosophy, to some extent, economics, where Oxford is a pretty good grad school, and that was among your contenders, then maybe there’s some additional case for coming to Oxford based on the fact that there are lots of people interested in these things here.
Hilary Greaves: But I think for most people, it's just going to be: go to the best grad school, or otherwise go to the best institution, and then keep conversations open with us if you're interested in collaborating in the meantime.
Robert Wiblin: Yeah. I think we hear people say things along those lines all the time, but then I guess I also hear that PhD students can often become kind of miserable over the course of their degrees. Especially in the U.S., where PhDs are so long, maybe they lose interest in the topic they chose at the start, perhaps because they don't think it's so important.
Robert Wiblin: Could it be that it's more important than we might think in the abstract to go to a place where there are people, like potential supervisors, studying things that you think are intrinsically important? Because otherwise you're just going to lose heart and find it really hard to commit yourself, especially if you really are very interested in global priorities research and other things might just not engage you as much.
Hilary Greaves: Yeah, maybe this is more of a thing in economics than in philosophy. We've definitely had conversations with early-stage aspiring academics in economics who've had a choice of grad schools and who, for reasons related to their interest in global priorities research, find uninspiring the particular subdisciplines some department specializes in, even where that department otherwise looks like a really good grad school on paper.
Hilary Greaves: So yeah, I do think that’s somewhat relevant, but it is also important to balance that with the consideration of, well, don’t go too far in the direction of hanging out with people who are motivated by the same things as you if that means sacrificing so much career capital that you go to a grad school that’s ranked 50th instead of one that’s ranked second.
Robert Wiblin: Yeah, that makes sense. Are there any new research questions that have appeared under the banner of global priorities research over the last couple of years that might be worth drawing people's attention to, as something they could do a PhD on or study if they're already academics?
Hilary Greaves: Yeah, I don’t know how new these research questions are, but I could definitely say a few things about some of those questions we’re particularly excited about. Some of them are things where we feel like we’ve started to scratch the surface doing a little bit of work and that’s just made us even more aware of how much more there is to be done.
Hilary Greaves: So one thing would be the topic of fanaticism. This feels like an issue that people have started thinking about only relatively recently, on the timescales of academia, but it seems like a really crucial consideration for questions like whether money is best directed at long-term or short-term causes. It'd be really interesting to see more people seriously trying to develop a decision theory, or a theory of moral uncertainty, that avoids fanatical conclusions, and to see what the best effort to do that looks like.
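To illustrate what "fanatical conclusions" means here, a minimal expected value calculation suffices; the numbers below are invented purely for illustration.

```python
# The fanaticism worry: under naive expected value maximization, a
# minuscule probability of an astronomically good outcome beats a
# guaranteed modest benefit. All figures are made up for illustration.

certain_option = 1.0 * 1_000   # save 1,000 lives for sure
long_shot = 1e-10 * 1e18       # 1-in-10-billion chance of 10^18 lives

print(certain_option)  # 1000.0
print(long_shot)       # 100000000.0 -- the long shot dominates

# A decision theory that blocks this verdict without misfiring in
# ordinary cases is the open problem described above.
```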
Hilary Greaves: Another bunch of considerations concerns infinite ethics. There's always a bit of a worry at the back of our minds: is the correct thing to say about those puzzles, whatever it turns out to be (we don't yet know), going to undermine lots of the things that we confidently go around saying about the finite case while ignoring the possibility of infinities? So it would be a big deal to find out to what extent it's possible to sidestep the paradoxes of infinite ethics, or, if it's not possible to sidestep them, to make some progress on solving them.
Hilary Greaves: And then maybe on the more empirical side, particularly with one's longtermist hat on, there's always this worry that we're just so bad at making any kind of predictions about how things pan out on very long timescales that any attempt to decide what to do based on influencing the very long-run future is completely hopeless. So it'd be interesting to learn more, if we can, about just how bad we are at predicting on very long timescales.
Hilary Greaves: Obviously, it's hard to directly do an experiment on that, but there are some theoretical tools you could use to try to get a grip on it. And indeed, we've got some researchers joining us at the moment who've got expertise in long-run prediction and forecasting and are hoping to think about that question. I mean, there are at least 16 different things I could have said in answer to that question, but that's a few of them.
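As a rough illustration of why long-horizon prediction is hard, here is a back-of-the-envelope sketch of how even a small annual chance of one's causal model breaking down compounds over time. The 1% figure is an assumption for illustration, not an empirical estimate from GPI's research.

```python
# Compounding model risk: if a forecasting model has some fixed small
# probability of becoming invalid each year, the probability it is
# still valid decays exponentially with the horizon.

p_model_survives_one_year = 0.99  # assumed, purely illustrative

for horizon in (10, 100, 1_000):
    p = p_model_survives_one_year ** horizon
    print(f"{horizon:>5} years: P(model still valid) ~ {p:.2e}")

# 10 years: ~9.04e-01, 100 years: ~3.66e-01, 1000 years: ~4.32e-05
```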
Arden Koehler: Are there any questions specifically in economics that you might want to mention, to whet the appetites of audience members who might go on to study economics?
Hilary Greaves: I mean, I think actually all of those could be examples in economics. The last one, about forecasting, is more in economics than in philosophy. And the thing about fanaticism is decision theory, which both philosophers and economists will claim is part of their discipline.
Arden Koehler: Okay. Yeah, maybe it’s just me as a philosopher implicitly claiming that that was part of my discipline.
Hilary Greaves: Well, it is.
Robert Wiblin: I guess if people want more, there’s always the GPI research agenda, which is up on your website, which is pretty extensive.
Hilary Greaves: Yeah, that’s right.
Arden Koehler: So at the end of the interview with Rob two years ago, you were singing the praises of academia as a career with enough flexibility that maybe it's a little bit easier than in some other fields to have a family life and also be successful at work. So how's it going with COVID-19, now that everybody's at home?
Hilary Greaves: Yeah. I mean, COVID-19 is obviously a bit of a special case vis-a-vis the particular question of how you balance career and family, because with the school and nursery closures, we've basically incurred an extra full-time job, in fact probably a full-time job for more than one person, between the two parents in a family: homeschooling the school-aged children and being nursery carer to the nursery kids. So there's something special about that situation. But even there, the key thought underlying my earlier comments was probably that you have a lot more control over your own hours in academia than in many other jobs. People who are in those other jobs can of course comment, but you don't have the same sense that you really can't do this career unless you're willing to work at least, whatever, 10 hours a day, and willing to work weekends.
Hilary Greaves: It feels more like in academia, it’s really up to you how you do those trade-offs and the cost is generally just going to be that you get less research done if you don’t put in so many hours. But there’s not the same sense that you can’t succeed or that you’re going to get fired if you don’t work long hours.
Hilary Greaves: And the particular hours you work are extremely flexible. So being an academic, when I woke up at half past four this morning, I thought, “Well, why don’t I just get the laptop out and do three hours of research before the kids drag me out of the room?” Whereas if I were, say, a social worker, obviously, that’s not an option.
Robert Wiblin: Yeah. I think you might have buried the lead there because you’ve bought a caravan to work from, right?
Hilary Greaves: I have, yeah. My situation's maybe a bit unusual, even by the standards of people who have families, because I have both an unusually large family and an unusually small house. So there's essentially no chance of working in the house while the kids are at home.
Hilary Greaves: So as soon as lockdown was announced in the UK, we realized that we were going to need a home office. We also realized that there wasn't going to be time to get a garden office installed, and we were a bit nervous about there being too much demand for garden offices on March the 24th, 2020.
Hilary Greaves: So before too many other people thought about it, we bought a small caravan and parked it out in our parking space and that’s our home office. So that’s where I’m sitting right now while all chaos breaks loose in the house.
Robert Wiblin: Do you know if there was a shortage of caravans? I suppose that's one thing I hadn't thought about that might run out. But I guess it's not that easy to build a tonne more caravans for people to work from.
Hilary Greaves: Yeah, I don’t know. I mean, I think as I understand it, people were buying caravans more with a view to, well, I want to go on holiday with my own bathroom facilities, or I want to have somewhere where maybe a guest can stay without actually being in the house. So I think there was a lot of demand for caravans, but I don’t know if many people had the same use for them as we did.
Robert Wiblin: Well, hopefully we’ll all be out of our caravans, literal and metaphorical, before too long. I’ve been seeing some positive vaccine results, so fingers crossed. Maybe by the end of the year, we’ll be back in the office.
Robert Wiblin: Yeah, our guest today has been Hilary Greaves. Thanks so much for coming on the 80,000 Hours podcast, Hilary.
Hilary Greaves: Thanks so much for a second invite.
Robert Wiblin: And thanks for joining, Arden, and making a lot more sense of all of this than I might have been able to alone.
Arden Koehler: It's been very fun. Thanks.
Rob’s outro [02:24:15]
Just a reminder that if you enjoyed this conversation, you should definitely check out Hilary’s first episode: number 46 – Prof Hilary Greaves on moral cluelessness, population ethics, & harnessing the brainpower of academia to tackle the most important research questions.
You might well also enjoy episode number 68 – Will MacAskill on the paralysis argument, whether we’re at the hinge of history, & his new priorities.
You can find all the research coming out of the Global Priorities Institute at globalprioritiesinstitute.org.
The 80,000 Hours Podcast is produced by Keiran Harris.
Audio mastering by Ben Cordell.
Full transcripts are available on our site and made by Zakee Ulhaq.
Thanks for joining, talk to you again soon.
Related episodes