Transcript
Cold open [00:00:00]
Bob Fischer: When you look at Drosophila — fruit flies, closely related to black soldier flies — they’re used as depression models for studying humans, as pain models. And you read all these papers, there are a million of them, and they will say, “It’s amazing how similar the neurology of these organisms is to humans! They’re such a perfect model, or such a useful model for understanding these aspects of the most excruciating and terrible human experiences. Isn’t it great that what we can now do is starve them and put them through sleeplessness and all kinds of things, and guess what? They get depressed. And that’s such a great way of studying these horrible symptoms, some of the worst symptoms that humans ever experience.”
So when you see that kind of thing in the literature, and you see the excitement of researchers thinking about these organisms as ways of understanding humans, and seeing the behavioural implications for these organisms, you start to think, man, there’s something going on in that creature that should make me really uncomfortable.
Luisa’s intro [00:01:10]
Luisa Rodriguez: Hi listeners, this is Luisa Rodriguez, one of the hosts of The 80,000 Hours Podcast.
In today’s episode, I speak with Bob Fischer about how to compare the “moral weight” of a chicken to a human, or of a salmon to a pig, based on their capacity for sentience — in other words, how much those species can experience pain and pleasure, and what that means for how we should trade off between helping, say, 100 chickens or 20 humans or 1,000 salmon.
Bob and several of his colleagues spent over a year trying to figure out how to make these kinds of comparisons, and the results really shocked me in that they suggest nonhuman animals like chickens and even bees have way more capacity for pain and pleasure than I would’ve thought coming into this.
But even if you end up feeling sceptical of Bob’s conclusions, it’s worth listening to how he approached this project and what’s driving the findings — partly because it’s genuinely fascinating, but also because this project is such a valuable first step toward thinking about how to decide which interventions can do the most good, without confining yourself to “just problems that affect humans” or “just problems that affect animals.” So if you care about really, truly, earnestly thinking through how to compare the world’s problems to work out which are most important, neglected, and tractable, this work is unmissable.
We also talk about:
- Whether we can say something like “the suffering of four chickens for one hour is as important as the suffering of one human for one hour,” or if it’s more complicated than that.
- Lots of examples of the kinds of behaviours and abilities that Bob and his colleagues found evidence of when trying to figure out how the capacity for suffering of one animal compares to another’s.
- Thought experiments that test different philosophical assumptions, like hedonism, about what welfare even is, and how those informed the project.
- How your takeaways would change if you don’t share those assumptions.
- Some concrete examples of how someone might use the estimated moral weights to compare different interventions.
- And lots more.
Without further ado, I bring you Bob Fischer.
The interview begins [00:03:40]
Luisa Rodriguez: Today I’m speaking with Bob Fischer. Bob is a senior research manager at Rethink Priorities, an associate professor of philosophy at Texas State University, and the director of the Society for the Study of Ethics and Animals.
Thank you so much for coming on the podcast, Bob.
Bob Fischer: Thanks so much for having me. I’m really pleased to be here.
Luisa Rodriguez: So I hope to talk about how a chicken’s best and worst experiences compare to a human’s best and worst experiences. But first, let’s talk about the Moral Weight Project a bit more broadly. At a high level, can you explain in sort of simple terms what this project aimed to do?
Bob Fischer: Sure thing. So a moral weight is, in theory anyway, a way of converting some unit of interest into another unit of interest. So we think we’ve got ways of measuring human health and wellbeing; we think we’ve got ways of measuring animal wellbeing. The question is: how do we get those things on the same scale? A moral weight is, in theory, a function that takes the one and converts it into the other.
So what the Moral Weight Project tries to do is say, given that we’re trying to do the most good, we have to make comparisons between very different kinds of causes. Some of those causes we’re used to comparing: when we’re trying to figure out how to compare giving antimalarial bed nets to kids versus providing clean drinking water for somebody else, we have ways of measuring those different projects and thinking about how much good they’re doing.
But then if we want to say how much good we do when we’re distributing those antimalarial bed nets versus here’s how much good we do when we get laying hens out of cages on a factory farm, our mind might just experience some vertigo, and we might not have any idea how to proceed. So what the Moral Weight Project tries to do is provide tools for comparing these very different kinds of causes.
Luisa Rodriguez: So from there, you have a couple of key concepts that we should talk about before we get to the overall question of how you did this. The first is “capacity for welfare.” Can you explain what you mean by that?
Bob Fischer: Sure. So when you think about how we’re going to make these comparisons, there’s one assumption that we can take on, or a model that we can use, that will allow us to start thinking about how to compare human and animal wellbeing: we can think about individuals as like buckets, and welfare as like water. So we can say that maybe some kinds of organisms are very large buckets, right? They can contain lots of welfare. And maybe some individuals are very tiny buckets, the smallest of all possible buckets, and can only contain the smallest fraction of welfare. And then what we can do is say that your capacity for welfare is essentially the size of that bucket.
So if we are willing to think about welfare that way — which is controversial; not everybody wants to do that — but if you’re willing to think about welfare that way, which many people are, then you can say, all right, let’s compare the sizes of these buckets. And that’s going to give us a basic way of thinking about how much welfare is at stake in different individuals.
Luisa Rodriguez: Cool. OK, and then the reason to think that different species have different capacities — so their buckets are different sizes — is just something like, as a human, I feel like there are a bunch of different ways that I can experience things that are great and things that are really terrible. And I have the sense that I can experience those things at different intensities. So I can experience things like a stubbed toe, which is pretty bad, but much less bad than the death of a loved one.
And then you might think that in a species like toads, I have the intuition that they might experience the death of a loved one differently to me. It might still be bad, but it might be different. And we might have some reasons to think that maybe they don’t have the concept of acquaintances and friends and losing friends, so maybe they’re less likely to be like, “It’s really sad that my friend moved to another pond.” And if that’s true, then there are fewer things that are going to be making them super happy or super sad, and therefore they have something like a smaller bucket.
And so in considering who to help, we should think about the overall capacities for these good and bad experiences that species have. Am I understanding that correctly?
Bob Fischer: Right. So there are a couple of questions there that we want to tease apart. One question is: why think that the buckets might be of different sizes in the first place? Then the second question is: how would you go about thinking about what’s relevant to different size buckets? And then how would you go about assessing?
We’ll set aside those second two questions and just think about the first one. There are a couple of different ways you could come at this issue of, why think the buckets are of different sizes? Why think that some individuals can realise more welfare than others, or generate more welfare than others, or whatever you want to use?
One way is the way that you’re describing, Luisa. You’re saying that maybe some experiences are just worse for humans than frogs because of various cognitive capacities that we might have.
Another is just that I think people care about welfare, and they think that somehow that’s got to be really important for understanding why it’s OK to do some things to animals that it doesn’t seem to be OK to do to humans — and maybe welfare is an important factor. So that’s a different route that someone could take to try to think about that, sort of the tradeoff way of thinking about it.
Or you could have this view where you say that what welfare is is how things are going from the perspective of the organism on some level. And maybe some organisms just don’t have the ability to see that much about how well or badly things are going for them, in which case maybe the things just aren’t going that well or badly by comparison to how well they can go for individuals with finer capacities for discrimination.
Welfare ranges [00:10:19]
Luisa Rodriguez: Cool. That makes sense. And then in figuring out how big these buckets are for different species, you’ve got this concept of a “welfare range.” Can you talk me through that?
Bob Fischer: Sure. So think of this as the difference between looking at the whole life of the organism versus how things are going for an organism at a moment. In other words: what are the possibilities over the course of the lifespan versus what are the possibilities at a time slice?
What’s happening here is, when we are trying to make comparisons across species, what we’re often doing is saying not chicken versus human; we’re comparing X number of life years of benefit to a human versus X number of life years of benefit to chickens. And to do that, we want to isolate just the momentary thing and then spread that across the life year, rather than factoring in lifespan.
So a welfare range is just how well things can go for you at a time versus how badly things can go for you at a time. So the peak and the valley.
Luisa Rodriguez: Right. So is this literally the best possible experience and the worst possible experience someone could have? Is it literally the best moment of my life compared to the worst? And a black soldier fly larva’s best moment of their life and the worst?
Bob Fischer: So conceptually, it certainly sounds like that; that’s the way the concept is sort of set up. But in practice, of course, we don’t really have tools to investigate that, so we’re not actually looking at that — because who knows how to compare your experience on MDMA with a chicken’s really enjoying some corn? I have no idea. So what we’re doing is something more like saying, here’s the normal range, here’s the typical range for individuals of different kinds. What we’re really doing is thinking about what’s the average when you’re in full health, and in a reasonably hospitable environment, perhaps.
Luisa Rodriguez: Cool. So you’ve got these best and worst experiences, which aren’t literally the best and the worst, but they’re the kinds of good and bad experiences that people have when they’re reasonably happy and healthy. Then you say that an individual can have that range, plus they’ve got however many years of life that they’re going to live. And when you put those together, you get the size of their overall bucket — which is just like, how much good, how much bad, and then all of the moments they’re going to live: that’s the possible space for welfare that they can fill with water, I guess. Am I basically getting that right?
Bob Fischer: Yeah, that’s exactly right. So you get the total capacity for welfare by multiplying the welfare range times the lifespan.
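To make that multiplication concrete, here is a minimal sketch in Python. All the numbers are invented for illustration; they are not the Moral Weight Project’s estimates.

```python
# A minimal sketch of the formula Bob describes:
# total capacity for welfare = welfare range x lifespan.
# All numbers are made up for illustration; they are not
# the Moral Weight Project's estimates.

def capacity_for_welfare(welfare_range: float, lifespan_years: float) -> float:
    """Welfare range is expressed relative to a human's (human = 1.0)."""
    return welfare_range * lifespan_years

human_capacity = capacity_for_welfare(welfare_range=1.0, lifespan_years=80)
chicken_capacity = capacity_for_welfare(welfare_range=0.1, lifespan_years=2)

print(human_capacity)    # 80.0 human-relative welfare units
print(chicken_capacity)  # 0.2
```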
Luisa Rodriguez: So that’s fundamentally what moral weights are. Can you say a bit more about why these are valuable, and how you imagine them being used?
Bob Fischer: Sure. To go back to the original question that we were asking on the ambition of the project: the goal of the project is to compare things like the Against Malaria Foundation with some corporate campaign for chickens. And what we need to do is say, here’s how many disability-adjusted life years we could avert if we put money into AMF versus here’s how much chicken welfare we could get if we were to put money into this corporate campaign.
What a welfare range lets you do is convert between them. What it says is: here’s how much welfare is at stake in those chickens; here’s how much of a welfare benefit we actually think you got from the corporate campaign for those chickens. And now, because that welfare range is expressed as a percentage of humans’ welfare range, you can multiply your estimate of the welfare benefit for chickens times the welfare range, and you can get something that’s expressed in terms of a human unit, and that’s the value.
Luisa Rodriguez: Can you give me a toy example? How would I use your moral weight for a chicken to compare interventions helping chickens to interventions helping, say, salmon or bees or humans?
Bob Fischer: Well, the simplest toy example is just going to be, imagine that you have some assessment that says, I think chickens are in a really bad state in factory farms, and I think that if we move layer hens from battery cages into a cage-free environment, we make them 40% better off. And I think that after doing this whole project — whatever the details, we’re just going to make up the toy numbers — I think that chickens have one-tenth of the welfare range of humans.
So now we’ve got 40% change in the welfare for these chickens and we’ve got 10% of the welfare range, so we can multiply these through and say how much welfare you’d be getting in a human equivalent for that benefit to one individual. Then you multiply the number of individuals and you can figure out how much benefit in human units we would be getting.
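Using the toy numbers from this example, the arithmetic might look like the following sketch. Every figure is the made-up one from the conversation (plus a hypothetical number of hens), not a real estimate.

```python
# The toy conversion from the example above. All inputs are the
# made-up numbers from the conversation, not real estimates.

welfare_improvement = 0.40    # hens made 40% better off by going cage-free
chicken_welfare_range = 0.10  # toy assumption: 10% of the human welfare range
n_hens = 1_000_000            # hypothetical number of hens affected

# Benefit per hen, expressed in human-equivalent welfare units:
per_hen_benefit = welfare_improvement * chicken_welfare_range  # 0.04

# Total benefit across all hens, in human units:
total_benefit = per_hen_benefit * n_hens
print(total_benefit)  # 40000.0 human-equivalent units
```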
Luisa Rodriguez: That just feels so incredibly powerful to me. I just have this feeling that for the last, I don’t know, more than a decade, we’ve been trying really hard to figure out how to best use our resources to help people in the world — “people” broadly construed to include all beings — and that this has been this massive gap. And now we can literally convert between different species and humans, and decide what is the actual most fundamentally good thing to do. That’s just really hitting me, so thank you for that.
Bob Fischer: Or at least we can try. There are real questions about how much you should trust the numbers, et cetera, et cetera. But yes, the project is: can we at least do better than waving our hands vaguely and saying, “It feels to me like it matters roughly this amount”?
Historical assessments [00:16:47]
Luisa Rodriguez: Cool. Maybe we should actually talk about how this has been done historically. Before we had moral weights, what were funders doing to decide between human interventions and, say, interventions to help chickens?
Bob Fischer: There are two basic kinds of strategies that people have had. One strategy is: don’t ask this question. And you can understand why people would say that maybe you shouldn’t ask this question. It’s really hard. So what you could quite reasonably think is, “I have no idea how to compare these things. They both seem really important. I’m just going to diversify. I’m going to allocate some of my money to this, some of my money to that, and I’m not really going to worry too much about exactly how to make the comparison.” So the worldview diversification strategy says, let’s not worry too much about the fact that we don’t know how to make the comparison explicitly.
The other approach is just to have moral weights that come from wherever they come from. So some people have said, “Let’s just use the relative number of neurons as our moral weight,” or some people have said, “Let’s just look into our hearts and ask ourselves, what do we really think at the end of the day about the relative importance of these things,” and plug in those numbers as our moral weights.
There are probably other strategies that people have employed, but basically those are the kinds of things people have done: don’t ask the question and just say that this is really hard, and we’re just going to have to come up with a strategy that doesn’t involve answering it; or they’ve said, let’s just plug in other moral weights based on the proxies that are available to us / our own best hunches.
Luisa Rodriguez: I’m of course sympathetic; these questions are incredibly hard. But that feels insane to me. What I thought about a chicken before and after meeting a chicken was just night and day. And it’s striking that anyone would be trying to decide how to allocate resources without having that information, when presumably you could spend loads of time with a chicken, and that would have a huge influence on what you thought their life was like.
Bob Fischer: Right. I mean, I do think it’s really important to make those observations. And I think one way of making that more pointedly is to think about how little you have probably cared about various groups of humans at other points in your life, and just think about the various ways in which you felt like, “They’re really different from me. I’m not willing to make any sacrifices for them. I’m not even willing to like something on Facebook to support this thing.”
I mean, it’s amazing how little sympathy we have without experience of other individuals, and then how radically our sympathies can be altered and how morally important other individuals can seem once we have some of those experiences. And seeing our shifts in the human case I think should be some clue that that same thing can happen in the animal case.
Luisa Rodriguez: Yeah. Then the other option you mentioned was using different kinds of proxies. And a very common one for a long time has been using neuron counts. A colleague of yours at Rethink Priorities has written this report on why neuron counts aren’t actually a good proxy for what we care about here. Can you give a quick summary of why they think that?
Bob Fischer: Sure. There are two things to say. One is that it isn’t totally crazy to use neuron counts. And one way of seeing why you might think it’s not totally crazy is to think about the kinds of proxies that economists have used when trying to estimate human welfare. Economists have for a long time used income as a proxy for human welfare. You might say that we know that there are all these ways in which that fails as a proxy — and the right response from the economist is something like, do you have anything better? Where there’s actually data, and where we can answer at least some of these high-level questions that we care about? Or at least make progress on the high-level questions that we care about relative to baseline?
And I think that way of thinking about what neuron-count-based proxies are is the charitable interpretation. It’s just like income in welfare economics: imperfect, but maybe the best we can do in certain circumstances.
That being said, the main problem is that there are lots of factors that really affect neuron count as a proxy that make it problematic. One is that neuron counts alone are really sensitive to body size, so that’s going to be a confounding factor. It seems like, insofar as it tracks much of anything, it might be tracking something like intelligence — and it’s not totally obvious why intelligence is morally important. At least in the human case, we often think that it’s not important, and in fact, it’s a really pernicious thing to make intelligence the metric by which we assess moral value.
And then, even if you think that neuron counts are proxies of some quality for something else, like the intensity of pain states or something… It’s not clear that that’s true, but even if that were true, you’d still have to ask, can we do any better? And it’s not obvious that we can’t do better. Not obvious that we can, but we should at least try.
Luisa Rodriguez: Yes. Makes sense. Are there any helpful thought experiments there? It doesn’t seem at all insane to me — though maybe you wouldn’t expect it to happen on its own through evolution — that there would be a being who has many fewer neurons than I do, but that those neurons are primarily directed at going from extreme pain to extreme something like euphoria. It doesn’t seem like there’s a good reason that’s not possible, and that that extreme pain could just be much more than the total amount of pain I could possibly feel. Even though the types of pain might be different for me, because I’ve got different kinds of capacities for sadness and shame and embarrassment, like a wider variety of types of pain, it still seems at least theoretically possible that you could house a bunch of pain in a small brain. And that feels like good reason to me to basically do what you’ve done, which is look for better ways than neurons alone.
Bob Fischer: Sure. And some evolutionary biologists have basically said things along these lines. Richard Dawkins actually has this line at some point, where he says maybe simpler organisms actually need stronger pain signals because they don’t learn as much as we do and they don’t remember all these facts, so they need big alarm bells to keep them away from fitness-reducing threats. So it’s always possible that you have a complete inversion of the relationship that people imagine, and you want to make sure that your model captures that.
Luisa Rodriguez: Yeah, makes sense. Just to give credit where it’s due, who was the colleague who wrote that report on neurons?
Bob Fischer: Adam Shriver.
Luisa Rodriguez: Nice. Thank you, Adam.
Method [00:24:02]
Luisa Rodriguez: So before we get to the results, can you say in kind of broad strokes what your approach was? Because this project does just sound completely impossible to me.
Bob Fischer: Sure. So broad strokes, the way this goes is something like this. You’re going to start off with a theory of welfare — that is to say, you’ve got to say something about what welfare is. Once you’ve got a theory of welfare, then you can go and think about what would be a proxy for variation in the ability to realise whatever welfare is?
So if you think welfare is about pleasures and pains, then what you want to ask is: what would provide any evidence of the ability to have more intense pains or more intense pleasures?
Or if you think that welfare is all about satisfying desires, then the question is going to be: what would be an empirical proxy for being able to have stronger desires, such that their frustration would be worse for you or their satisfaction would be better for you?
And we could go on running through theories of welfare and thinking about what it would be in each case.
So start with a theory of welfare, figure out what would be a proxy for variation, and then go out into the literature and figure out what is the evidence? Do you see these proxies? Are there in fact these differences? And then come up with some method for aggregating.
So that’s the big picture. Four steps: choose a theory of welfare, figure out what would be evidence of variation, go out and find the evidence, aggregate.
Luisa Rodriguez: Cool. Just to make it a bit concrete, can you give just a few examples of those proxies?
Bob Fischer: Sure. The way we do this is we assume hedonism — so we assume that what makes things go well or badly for anything is the intensity of pleasures and pains. Then you ask: what are these kinds of “valenced states” — that’s what you call pleasures and pains — for, from an evolutionary perspective? And people have different theories about that. Maybe you think it’s for representing fitness-relevant information: pleasure says, “that’s good, get more of it”; pain says, “that’s bad, get away from it.”
Then you’re going to ask: what would be some traits that might provide evidence of different abilities with respect to that representation goal? So something like numerical cognition: you might have thought that doesn’t really have anything to do with welfare per se, but it has something to do with what kind of information you can represent, and it’s something that we can go out and observe. Do animals have the ability to reason in any way numerically? So we get proxies like that, in addition to things that are more familiar, like do we see evidence that they use pain medication when it’s available?
But what we really want to do is think about that link from the theory of welfare, to what makes your life go well, to the things that we can actually go out and find evidence for based on the function that it’s supposed to have.
Luisa Rodriguez: Cool. Yeah, that basically all just sounds very reasonable to me.
The present / absent approach [00:27:39]
Luisa Rodriguez: OK, you’re assessing the proxies that these different species have as either present or absent. So you’re basically asking, “Does a pig have the capacity to do a certain kind of learning?” And it seems like probably they do, so you say yes. But if you then look at humans and ask, “Do they have the capacity to do a certain kind of learning?”, the answer might be: yes, and they have a much more complicated and sophisticated capacity for that kind of reasoning.
So it feels like these proxies, I’m sure, come in degrees or in qualitatively different forms. Given that, and given that those degrees might be very different, you might have a situation where you say yes, a bee has this capacity and a human has that capacity, when you make it very all-or-nothing. But that might make bees and humans seem much more similar than they are. Is the present/absent approach to evaluating whether a proxy is there actually going to get you closer to the truth than trying to adjust for degree, even if the latter kind of smuggles in pro-human biases?
Bob Fischer: This is a great question. There’s a lot to say here. Let’s just try to get the high-level points that I think are worth keeping in mind. The first is that I totally want to be concessive here and say that as we learn more, these estimates should change. I am not claiming that these are the things you should believe for all time. The claim is just that these are the things you should believe now, based on the evidence available. So if the academic literature provided any way of distinguishing these traits in a quantitative way, we would have incorporated it, but it doesn’t; it just gives these very broad qualitative characterisations of these traits.
So yeah, we might end up with important differences in our welfare range estimates over time, but you should believe what you should believe based on the evidence that’s available at the moment. And this is, I think, the evidence that we’ve got at the moment. So that’s the first thing to say.
Second thing to say is that we could address this to some degree, even in the present — even though there isn’t a metric that we can straightforwardly use — by surveying experts. If somebody wants to fund more work on this, we are happy to do that. These are the kinds of things that we think would be valuable to improve the quality of this research. So there may be strategies, but they just require doing new research and not just the desk research that we had done in the past.
The third thing to say is: sure, we could start applying adjustments. My aversion to doing that is that for every adjustment you want to incorporate that’s going to increase the welfare range estimates, there’s probably some other one that I might want to add that would decrease them. And there’s a lot of extrapolation that you might be tempted to do from one particular case where we know there’s probably a difference in how some particular proxy manifests — but you probably shouldn’t do that; it’s probably the case that you’re poorly positioned to make these kinds of judgement calls.
So, long story short on that: we thought if we aren’t sure — or even don’t have a hunch, really — about how best to apply these kinds of adjustments, we just shouldn’t do it. We should put it out the way it is and then tell folks that all we’re offering is placeholders. This is just a place to get started. It’s a reasonable best guess. We think the truth is closer to this than it is to the kinds of numbers that people had been throwing around before. But we’ve got to wait and see how the analysis goes once we’ve refined it in all sorts of ways.
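One way to picture the four-step pipeline together with this present/absent scoring is a sketch like the one below. The proxy list, the species data, and especially the aggregation rule (a simple fraction of proxies present) are hypothetical stand-ins; the actual project used a more careful aggregation.

```python
# Illustrative sketch only: a crude present/absent tally feeding a
# placeholder score. The proxies, data, and aggregation rule here are
# hypothetical stand-ins, not the project's actual method or results.

PROXIES = ["reversal_learning", "analgesic_use", "numerical_cognition"]

# Step 3: evidence gathered from the literature (made-up values).
evidence = {
    "human":   {"reversal_learning": True, "analgesic_use": True, "numerical_cognition": True},
    "chicken": {"reversal_learning": True, "analgesic_use": True, "numerical_cognition": False},
}

def proxy_score(species: str) -> float:
    """Step 4, as a toy aggregation: the fraction of proxies judged present."""
    present = sum(evidence[species][proxy] for proxy in PROXIES)
    return present / len(PROXIES)

for species in evidence:
    print(species, proxy_score(species))  # human 1.0, chicken ~0.67
```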
Luisa Rodriguez: Cool. I am sympathetic to that reasoning.
Results [00:31:42]
Luisa Rodriguez: Let’s talk about the actual results. You estimated welfare ranges for 11 species: pigs, chickens, octopuses, carp, bees, salmon, crayfish, shrimp, crabs, black soldier flies, and silkworms. And I name these in case any listener wants to be like, “I’m really interested in silkworms. I’m really glad there are moral weights for those. I’m going to go compare those welfare ranges to humans’.” But yeah, why these species?
Bob Fischer: We chose them based on the animals we farm the most. So everybody looks at these choices that we made and they say, “Why not cattle? I want to see the cow welfare range.” But if you look at the number of cattle raised by comparison, it’s really, really small.
Then you might wonder, what about octopuses? We’re not really farming many of them at this point; that industry hasn’t taken off yet. And there it’s because people really want to farm octopuses, so we’re worried about it, and we thought it was important to include it for that reason.
Chickens [00:32:42]
Luisa Rodriguez: Makes sense. So this podcast isn’t the ideal medium to examine and internalise all the moral weight ranges for 11 species, but let’s talk about a couple. There are very good illustrations on the blog posts themselves, so I encourage people to go look at those.
But let’s talk about chickens to start. What did you conclude about the welfare range of chickens?
Bob Fischer: What we conclude is roughly that the welfare range of chickens is about a third of that of humans. So let me quickly jump in and say what that means, because it sounds crazy, but what’s going on there is we’re asking the question: how intense is the pleasure that a chicken is getting from ordinary experiences, on average, and how intense are the pains? And we’re guessing that maybe its pleasures are about a third as intense as ours, and its pains about a third as painful.
So we’re not saying one human equals three chickens; we’re not saying that we could do some sort of straightforward calculus like that. Instead, what we’re saying is, when you think about the relative intensities of pleasures and pains, maybe that’s roughly the difference that we’re dealing with.
Luisa Rodriguez: Cool. Just to make sure I understand: the claim is that the capacity for welfare or suffering of a chicken in a given instant is about a third of the capacity for the kind of pain and pleasure a human could experience in a given instant. Is that it?
Bob Fischer: That’s the way to think about it. And that might sound very counterintuitive, and I understand that. I think there are a couple of things we can say to help get us in the right frame of mind for thinking about these results.
One is to think about it first like a biologist. If you think that humans’ pain is orders of magnitude worse than the pain of a chicken, you’ve got to point to some feature of human brains that’s going to explain why that would be the case. And I think for a lot of folks, they have a kind of simple picture — where they say more neurons equals more compute equals orders of magnitude difference in performance, or something like that.
And biologists are not going to think that way. They’re going to say, look, neurons produce certain functions, and the number of neurons isn’t necessarily that important to the function: you might achieve the exact same function using many more or many fewer neurons. So that’s just not the really interesting, relevant thing. So that’s the first step: just to try to think more like a biologist who’s focused on functional capacities.
The second thing to say is just that you’ve got to remember what hedonism says. What’s going on here is we’re assuming that welfare is about just this one narrow thing: the intensities of pleasures and pains. You might not think that’s true; you might think welfare is about whether I know important facts about the world or whatever else, right? But that’s not what I’m assessing; I’m just looking at this question of how intense is the pain.
And you might also point out, quite rightly, “But look, my cognitive life is richer. I have a more diverse range of negatively valenced states.” And I’m going to say that I don’t care about the range; I care about the intensity, right? That’s what hedonism says, is that what matters is how intense the pains are. So yeah, “I’m very disappointed because…” — choose unhappy event of your preference — “…my favourite team lost,” whatever the case may be. And from the perspective of hedonism, what matters about that is just how sad did it make me? Not the content of the experience, but just the amount of negatively valenced state that I’m experiencing, or rather the intensity of the negatively valenced state that I’m experiencing. So I think people often implicitly confuse variety in the range of valenced states with intensity.
Luisa Rodriguez: Yeah, I mean, that’s definitely something I do. For sure there is a part of me that thinks that the thing that matters a lot here is that I can fall in love in a particularly meaningful and big way; I can have friendships lasting 50 years that involve really deep and meaningful conversations. And that even if a chicken has meaningful relationships with other chickens, they’re not as complex and varied as the relationships I have with people in my life.
On the other hand, a big part of me puts a bunch of weight, when I really think about it, on just: no, what matters is the intensity. If a chicken feels more sad about her wing being broken than I feel about losing a friend, then so be it. We should make sure that their wings aren’t broken before we make sure that whatever threat could mean I lose my friend [is prevented].
And I guess lots of listeners will have their own kind of internal turmoil about this, about what welfare even is. But for now, I guess if we’re just taking this assumption, which is that what matters is the intensity, your finding is that something like averting the suffering of three chickens for an hour is similarly important to averting the suffering of one person for an hour. And that feels uncomfortable to me. Can you talk me through that discomfort?
Bob Fischer: Sure. So the first thing to say is: you’re not alone. I don’t feel totally comfortable either. And we have to ask ourselves what our most serious moral commitments are when we’re approaching this question. So you’re not going to avoid really uncomfortable, challenging questions when we try to think about moral weights — just not going to go away.
But here are a few things to say. One is: is there any number that you wouldn’t be uncomfortable with? Because notice that if you’re committed to this idea of doing conversions, eventually it’s going to just work out that you’ve got to say there is some number of hours of chicken suffering that is more important than helping a human.
And I think actually a lot of people don’t really think that there is any conversion at all, right? If I had said it was 300, would you really have felt that much better? You might have felt a little bit better; I’m not saying you wouldn’t have felt better at all. Sure, it’s a difference, but you might still say, when you really think about it: “Three hundred hours? Would I put somebody through that for… chickens?” And then you might just have the same level of discomfort, or something close to it.
So I think to some degree we have to remember that the tradeoffs that we’re talking about come from background theoretical commitments that have nothing to do with our specific welfare range estimates: it comes from the fact that we’re trying to do the most good. We think that means making comparisons across species, and we’re committed to this kind of maximising ethic that says, yeah, there is some tradeoff rate, and you’ve got to find it.
So that’s the first thing to say about the discomfort. Before I say anything else, what do you think about that?
Luisa Rodriguez: Yes, some of that definitely worked for me. I think the thing that lands most is if I think about chickens on a railroad track, and there’s a trolley coming, and there’s a human on the other side, it is pretty impossible for me to imagine getting to the point where I’m ever super comfortable being like, “I’m going to let it hit the human, who I could have conversations with, who has a family I might know, who I could give a hug to, and who has a job…” These are all the things that kind of run through my head as I’m deciding whether to pull a lever to decide who gets hit by this trolley. And so, fair enough that that is something I have to grapple with, regardless of exactly what these numbers are.
Bob Fischer: And just to tag on to that, think about what happens when you put someone you really care about on the track. So I think about this with my children, and say, look, it might well be the case that, given the choice between killing my own children and killing some other humans, there’s almost no number of other humans I would choose to spare instead. But that’s not because I think those other people actually matter less in some objective sense. Like when I’m trying to do the impartial good, I would never say, “Oh yes, my children are utility monsters: they have infinite worth and everybody else has just some tiny portion of that.” And when we recognise that our moral judgements are so detached from our judgements of value, that also can help us think about why these welfare ranges might not be quite so crazy.
Luisa Rodriguez: Yeah. Was there anything else that helps you with the discomfort?
Bob Fischer: I think the thing that helps me to some degree is to say, look, we’re doing our best here under moral uncertainty. I think you should update in the direction of animals based on this kind of work if you’ve never taken animals particularly seriously before.
But ethics is hard. There are lots of big questions to ask. I don’t know if hedonism is true. I mean, there are good arguments for it; there are good arguments for all the assumptions that go into the project. But yeah, I’m uncertain at every step, and some kind of higher-level caution about the entire venture is appropriate. And if you look at the way people actually allocate their dollars, they often do spread their bets in precisely this way. Even if they’re really in on animals, they’re still giving some money to AMF. And that makes sense, because we want to make sure that we end up doing some good in the world, and that’s a way of doing that.
Luisa Rodriguez: I guess I’m curious if there’s anything you learned, like a narrative or story that you have that makes this feel more plausible to you? Anything particular about chickens or just about philosophy? You’ve already said some things, but what story do you have in your head that makes you feel comfortable being like, “Yes, I actually want to use these moral weights when deciding how to allocate resources”?
Bob Fischer: There are two things that I want to say about that. One is I really worry about my own deep biases, and part of the reason that I’m willing to be part of the EA project is because I think that, at its best, it’s an attempt to say, “Yeah, my gut’s wrong. I shouldn’t trust it. I should take the math more seriously. I should try to put numbers on things and calculate. And when I’m uncomfortable with the results, I’m typically the problem, and not the process that I used.” So that’s one thing. It’s a check on my own tendency to discount animals, even as someone who spends most of their life working on animals. So I think that’s one piece.
The other thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.
Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.
Luisa Rodriguez: Yes, a lot of that is resonating with me. OK, let’s say that we buy this ratio — so something like we’re roughly indifferent between averting painful experiences for three chickens for one hour and averting similarly painful experiences for one human for one hour. What’s an appropriate use of this? Can you talk me through a real world example of how we apply it?
Bob Fischer: A real world example might be: imagine you’re trying to decide how you want to split your resources between interventions aimed at helping animals and interventions aimed at helping humans. Maybe you’re doing something like 90/10 humans right now. One way you could check that is by saying, what would happen if I valued animals at this rate? How cost effective would work on chickens look, by comparison to the best human interventions?
And there have been a bunch of different analyses that have been done on this. It will come as no surprise that helping animals looks really cheap by comparison to helping humans. So you could then think that actually what this cost-effectiveness analysis tells me is I should be going all-in on chickens, right?
Luisa Rodriguez: Right.
Bob Fischer: And most people are going to say, “I’m not comfortable doing that, but if I was 90/10 before, humans to animals, maybe now I’ll go to 60/40, because this is an update that work on chickens, insofar as I’m interested in doing the impartial good, is a lot better than I thought it was.”
Luisa Rodriguez: Cool. I like that. That feels, at least given where I currently am, like a slightly more achievable update, even if in theory I might endorse a stronger one.
How much did your moral weights change how people who think about these interventions a lot compare farmed animal interventions to human ones? Did it make farmed animal interventions look even better than we thought? Because arguably, farmed animal interventions were already much more cost effective than many human ones.
Bob Fischer: For me personally, what happened was this: I went in pretty bullish on chickens and pigs, and felt like, yeah, they matter a lot. I’ve read my Peter Singer and I’m prepared to really lobby for them. And I thought fish probably matter something reasonably close to that. But then I was pretty sceptical about the invertebrates: I thought the probability of sentience was really low. And I thought that even if these animals are sentient, the complexities of their lives must pale by comparison.
I think I basically walked away thinking I was just wrong about that, that the probability of sentience for invertebrates is a lot higher than I thought it was. I was very comfortable throwing around 1% as a reasonable baseline estimate before doing the Moral Weight Project, and then have walked away thinking that’s way too low. The range should be something like 10% to 50% — and maybe even slightly above 50% at the top end of the range in some cases. So that was a big shift.
And then of course, learning a bunch about the complexities of their lives made me think this was just totally size bias. I’d never really thought about this, and I don’t really have any good reason for discounting them as much as I did. So now I’m just way more worried about invertebrates than I was previously.
Bees [00:50:00]
Luisa Rodriguez: Interesting. Let’s talk about an invertebrate. Let’s talk about bees. What did you conclude about the welfare range of bees?
Bob Fischer: I think we say it’s roughly 7% of a human’s; that’s the 50th percentile estimate.
One very high-level comment that I should make about all of these numbers: we don’t actually want you to focus that much on the numbers, because what we really want to get into the public consciousness, or the community’s consciousness rather, is the idea that probably the vertebrates are within an order of magnitude of humans. We don’t know where they fall in that range, but it’s not going to be 10 orders of magnitude. It’s going to be closer to one, within one. And we want to say that the invertebrates are within a couple of orders of magnitude of those vertebrates.
So basically what we really want to lobby for is not 0.3 as the number for chickens; what we want to say is that the baseline here should not be several orders of magnitude difference, that the baseline here should be within one. And likewise, what we’re really saying when it comes to bees is like, yeah, we come up with this pretty positive number, pretty high number, but really what we want to say is within a couple of orders of magnitude of those vertebrates. So don’t zero them out, make them comparable to what you’re dealing with with the vertebrates.
Luisa Rodriguez: Yeah, and I was going to do something you probably wouldn’t like, which is do the math and say something like that means that if I’ve got train tracks and I’ve got a human on one side, that means putting 14 bees on the other side. And obviously that’s not taking into account the length of their lives, so that actually isn’t the kind of moral outcome you’d endorse. But trading off an hour of suffering for those two groups feels even more uncomfortable to me. And it sounds like the thing you’d actually stand by is not this kind of 1-to-14, 7% figure, but something like 1-to-100, a couple of orders of magnitude. And even that, I’m still like, “A hundred bees?!” I like bees, but wow.
Bob Fischer: Sure, totally. Again, a couple of things to say. One is that I do think size bias is real. Imagine if bees were the size of rhinos, and you never had to worry about getting stung. You’d probably be pretty into bees all of a sudden. I think we are just affected by the fact that they’re little and they feel very replaceable, we can’t really observe their behaviours, et cetera. So that’s one thing to say.
Luisa Rodriguez: On that, just because I think it might be interesting: are you imagining a rhino-sized bee with a bee-sized brain?
Bob Fischer: Yes.
Luisa Rodriguez: Interesting. OK, so just kind of imagine a really big, fluffy bumblebee buzzing around, being adorable, not stinging you. Yeah, fair enough. I feel like I’d be like, “That thing is cute and important and I’ve gotta protect it.”
Bob Fischer: Got to protect that thing. Exactly.
Luisa Rodriguez: Darn it, Bob.
Bob Fischer: Well, I know it’s an uncomfortable fact about human psychology that we care about all the wrong things. But anyway, that’s one thing to say.
Second thing to say is that, again, the welfare range estimate is a factor here. The background commitment to something like utilitarianism or welfarist consequentialism, that’s doing a lot of the work. We’re just committed to aggregation if we’re doing this kind of thing, and there’s going to be some number of bees where you’re supposed to flip the lever and kill the human — and that, again, might just make you uncomfortable. If it does, that’s not my fault. That’s a function of the moral theory, not a function of the welfare range estimate.
And the third thing to say is: I do think it’s really important just to learn more about these animals. And of course, bees in particular are very charismatic and cute. And you could go and watch Lars Chittka, who’s a bee scientist, and he’s got these lovely little videos of bees playing and rolling balls around, and it’s adorable. And of course, you can feel lots of sympathy for bees if you watch those kinds of things.
But for me, those actually are not the most interesting cases and compelling cases. For me, it’s the fact that when you look at Drosophila — fruit flies, closely related to black soldier flies — they’re used as depression models for studying humans, as pain models. And you read all these papers, there are a million of them, and they will say, “It’s amazing how similar the neurology of these organisms is to humans! They’re such a perfect model, or such a useful model for understanding these aspects of the most excruciating and terrible human experiences. Isn’t it great that what we can now do is starve them and put them through sleeplessness and all kinds of things, and guess what? They get depressed. And that’s such a great way of studying these horrible symptoms, some of the worst symptoms that humans ever experience.”
So when you see that kind of thing in the literature, and you see the excitement of researchers thinking about these organisms as ways of understanding humans, and seeing the behavioural implications for these organisms, you start to think, man, there’s something going on in that creature that should make me really uncomfortable.
Luisa Rodriguez: Yeah, I actually find that extremely compelling. I’ve heard lots of arguments for bees and other invertebrates having wider ranges of intensity of experience than many people think, but that one actually hit me pretty hard.
Salmon and limits of methodology [00:56:18]
Luisa Rodriguez: Let’s talk about another species you looked at. So talk to me about salmon. What did you conclude about the welfare range of salmon?
Bob Fischer: In the case of salmon, I think we say something like 5% or 6% — which of course is lower than the number I just gave you for bees, and that might strike you as remarkable. So let’s just comment on that before we say anything else.
What’s going on? Well, one thing that’s going on is we get these results by surveying a lot of literature about a bunch of proxies for welfare ranges. Those are things like the ability to engage in reversal learning, or the use of analgesics, or numerical cognition, or whatever else. And what does that mean? It means that the less some class of organisms is studied, the lower their welfare range estimate is going to be — so there’s a bias against understudied organisms.
And I think what we see here is a case where people just don’t care that much about the cognitive and affective lives of many of the animals they eat. Bees just happen to be charismatic enough and interesting enough to a certain group of researchers that we do know a surprising amount about their abilities.
So again, I don’t want to say that this is really indicative of some fundamental difference between the capacities of salmon and the capacities of bees — I totally grant that this is just ignorance doing the work — but it shows you the limits of this methodology: it’s affected by these kinds of factors. And if you wanted to improve it, you would have to either collect a lot more data, or figure out how to control for degree of studiedness, which is something that we’ve been working on.
Luisa Rodriguez: So let’s go back to this kind of headline figure about salmon. And again, you want me to take away that it’s within an order of magnitude-ish of humans? That’s the headline. And, yeah, I guess we’ve already done the dance of, like, that sounds weird, so I won’t do that again, but it does sound weird. Maybe to make it a bit more intuitive, are there any particularly memorable welfare-related facts about salmon you learned while researching them?
Bob Fischer: I mean, one of the things is just to think about how badly salmon fare in farmed contexts, and that’s itself kind of indicative. So there’s a problem of “dropout fish,” or sometimes they’re called “loser fish” in farmed salmon contexts, where they just get really stressed and they end up floating to the surface. And it’s almost like they’ve given up on life, because this is an animal with incredible navigational abilities, and they’re used to travelling these enormous distances, and instead they’re taking these tiny little laps inside these ponds — and some of them just can’t seem to handle it.
And when you think about this mismatch between the obvious cognitive abilities of the organism and the environment that it’s in, that can also help you think about and appreciate the abilities that the thing actually has. Another way of coming at this is just to think about stereotypical behaviours of tigers in a zoo. You could get so depressed when you’re watching that tiger pacing back and forth. It seems so sad. And part of it’s because you’re recognising the mismatch between cognitive ability and the environment. Why have it be any different for fish? You would expect the exact same thing. And if it’s the same thing, that speaks to the fact that there really is a lot going on in that organism. It’s more sophisticated than we might imagine.
Octopuses [01:00:31]
Luisa Rodriguez: I’m finding this super interesting. So let’s actually talk about a few more before we move on. Can we talk about octopuses next?
Bob Fischer: Sure.
Luisa Rodriguez: How should we think about octopuses’ welfare range compared to humans’?
Bob Fischer: So the octopuses ended up looking pretty good, and we came up with a very optimistic interpretation, something like a fifth as intense. Octopuses are very complicated, actually, for a whole lot of reasons. They have these distributed neural systems where you’ve got lots and lots of neurons in each arm, and they sometimes seem to be acting quasi-independently. That raises the question of whether you’ve got one mind or nine: one mind per arm, plus the central brain. We actually did an entire report on just that question, and ended up thinking we probably shouldn’t say that, but it shows just how alien that kind of organism is.
But still, the evidence suggests they have these remarkable abilities: incredibly intelligent beings, lots of coordinated planning, the ability to recognise faces. If you want to have a good time, go on YouTube and look for videos of octopuses squirting water at people they don’t like in the lab. So they remember faces. Those people can actually change their clothes, and the octopuses will still figure out who they are and still try to squirt water at them. So they are incredibly complex organisms, and a real testimony to how minds can evolve in ways that we would never have imagined, taking a completely different lineage.
Luisa Rodriguez: Yeah, I’m too tempted not to ask, though it’s a bit of a digression. Your team did write this report on whether octopuses have nine minds. Can you explain the motivation and then what you concluded?
Bob Fischer: Sure. So the motivation is something like this: there are actually a lot of animals where it isn’t clear that they have the same kind of unified minds that we think of ourselves as having — open question whether humans have the same kind of unified minds that we think of ourselves as having.
But you don’t have the same structures in birds, for instance, between the hemispheres. And you have this kind of distributed cognition, apparently, in octopuses. And you might think, does that mean that you don’t just have one welfare subject in a chicken? Maybe you’ve got two, one for each hemisphere. Or maybe you don’t have one welfare subject, one entity of moral concern, in an octopus: maybe you’ve got nine. And of course, that would really change the moral weight that you assigned to that organism. So that’s why we investigated.
Then the upshot is we basically don’t think you should say that. The short reason is that you want to think functionally about the mind: you want to think about what the overall organism is accomplishing with the ability that it has. And we should assume by default that these organisms evolved to be coordinated systems that are trying to accomplish the collective ends in the world. And I say “collective” as though we’re thinking about the parts as separate individuals — but of course, that’s exactly what’s being contested.
And the idea is, once we think about things that way, it becomes a lot less plausible that there’d be an evolutionary reason for the organism to have these multiple minds. And we think that the empirical evidence itself just isn’t good enough to overcome that fundamental default hypothesis.
Luisa Rodriguez: Yeah, OK. Can you clarify why we think octopuses might have multiple minds? Is it just because they have neurons in their arms?
Bob Fischer: Yeah. So in the case of octopuses, part of it is just the concentration, the sheer number of neurons in the arms. Part of it is behavioural — if you watch videos of octopuses, you can see examples of this — where it’ll look like arms are kind of operating independently, roving and exploring and reaching out on their own, and they don’t seem to be coordinated with what’s happening with the main attention of the organism, and they’re still off doing their own thing. And that gives people pause and they start to think, is it the case that all these neurons are acting in some semi-coordinated way, independently of what’s happening at the main site of cognition?
And people have written some speculative pieces about this. Of course it’s very hard to test. Of course many of the tests would be horrible. Lots of reasons to think this is either a difficult question to answer, or one that, insofar as we could answer it, perhaps we should not try. But it just looks like it would be really hard to show that that was the case, rather than some more modest hypothesis about the ability to sense more thoroughly through the appendage or whatever.
Luisa Rodriguez: Right. And why is the theory that it’s multiple minds rather than something like multitasking? I feel like you could make a similar observation about me when I’m cooking, and also having a conversation, and also… I don’t know, maybe that’s the most I can do at once. But something like my hands are doing something while it seems like I’m thoroughly mentally engaged in something else?
Bob Fischer: Well, maybe it’s easier as a case study to think about birds. Where in humans, when you sever the corpus callosum — the thing that connects the two hemispheres — you do get these gaps where the hemispheres seem to operate in an uncoordinated way. And you have people report that they don’t know what the other hemisphere ostensibly seems to know, based on the behaviour that they’re engaging in. And so if you don’t have that structure in a bird, you might then wonder, is what you have here effectively a split-brain patient all the time? And then there are these interesting cases, like dolphins, where they seem to have one hemisphere able to go into a sleep mode while the other hemisphere is awake.
And seeing those kinds of things in other species can make you wonder whether there’s just a very different organisational principle at work in those minds. So if that’s your background context, then seeing this really distributed set of neurons in an octopus, and then seeing the behaviour that looks not entirely coordinated, et cetera, can motivate the idea of multiple minds. But admittedly, it’s speculative stuff, a really complicated set of questions. The work on that is in its early days, so it’s not like there’s some super strong case for that view.
Luisa Rodriguez: OK, so for now we think that probably these species do not have multiple minds. We should consider them as one kind of moral patient.
Anyways, back to the core of it, which is that you found results that made it look like octopuses might have very significant welfare ranges. A particularly interesting thing here is that you actually put some weight on octopuses having a wider welfare range than humans. What’s the story there?
Bob Fischer: So what’s going on there is hard to explain without going into a little bit of the methodology that we used for generating the welfare range estimates. The short answer, though, is something like this: we came up with all these proxies. You can go to the website, see all these proxies, see the list of things that we were interested in, see what we know and don’t know about these organisms. And then you have to ask this hard question, which is: what’s the relationship between that list of proxies and your overall welfare range estimate?
And as you might imagine, it’s really hard to say what that relationship is. So what you want is to capture your uncertainty about that relationship and say, here’s one way you could think about the way you should generate a welfare range estimate — based on, say, one kind of philosophical hypothesis. Here’s a different way you could generate the same welfare range estimates, based on a different philosophical hypothesis. And so on and so forth, enumerate them.
And then what you do essentially is aggregate over your uncertainty about the right hypothesis about the relationship. But some of those hypotheses are ones where animals can have larger welfare ranges than humans, and so under uncertainty across them, you end up with this result where octopuses and some other animals have upper bounds that are actually larger than the upper bounds for humans.
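To make that aggregation idea concrete, here’s a minimal sketch in Python. The two “models” and all the numbers are hypothetical stand-ins, not Rethink Priorities’ actual hypotheses, proxies, or credences; the structure is the point: pick a hypothesis in proportion to your credence in it, draw an estimate from that hypothesis, and treat the spread of the resulting samples as your uncertainty-aggregated welfare range distribution.

```python
import random

# Two hypothetical "models" mapping proxy data to a welfare range estimate
# (human = 1.0). These stand in for different philosophical hypotheses
# about how the proxies relate to welfare ranges.
def equal_weight_model(proxies):
    # Hypothesis 1: every observed proxy counts equally.
    return sum(proxies.values()) / len(proxies)

def complexity_discount_model(proxies):
    # Hypothesis 2: discount heavily by a neurological-complexity factor.
    return proxies["complexity"] * 0.1

MODELS = [equal_weight_model, complexity_discount_model]
CREDENCES = [0.6, 0.4]  # your uncertainty over which hypothesis is right

def welfare_range_samples(proxies, n=10_000):
    """Aggregate over model uncertainty: pick a model in proportion to
    your credence in it, then draw a noisy estimate from that model."""
    samples = []
    for _ in range(n):
        model = random.choices(MODELS, weights=CREDENCES)[0]
        samples.append(model(proxies) * random.uniform(0.5, 1.5))
    return sorted(samples)

# Hypothetical proxy scores for an octopus.
octopus = {"complexity": 0.8, "pain_behaviour": 0.9, "learning": 0.7}
draws = welfare_range_samples(octopus)
print("median:", draws[len(draws) // 2])
print("95th percentile:", draws[int(len(draws) * 0.95)])
```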
Those come basically from two possibilities. One of those possibilities is the thing we talked about earlier, that maybe simpler organisms need stronger signals to get their bodies where they’re supposed to be, so you might actually get more intense pain signals.
And the other one is something we have not discussed: it’s the possibility of differences in the rate of subjective experience. So what if it’s the case that, per unit of objective time, some organisms essentially have a faster clock speed: they get more units of subjective experience per unit of objective time? And if you think that’s possible, then you can end up with this result that some animals have higher welfare ranges than humans because they have a faster clock speed.
Luisa Rodriguez: Right. OK, I’ve got to ask about that. And I think that’s actually work by Jason Schukraft?
Bob Fischer: That’s right.
Luisa Rodriguez: So Jason wrote this report that basically tries to look at whether different species might have different amounts of total experience because they’re perceiving things at a more fine-grained level of time — which is extremely trippy to think about. Can you help with some kind of toy example to make it a bit easier?
Bob Fischer: Well, the way to think about this sort of intuitively is to think about reports that people often give in tragedies, where you’re in a car accident and you feel like time slows down: “I saw the vehicle slowly sliding toward me. I knew I couldn’t control it…” All that sort of thing. One way of interpreting what’s going on there — though this is controversial — is to think the brain is sampling information from the environment at a faster rate because it’s some kind of emergency mode.
If you accept that sort of interpretation — not saying you should, but if you did — then you might think that you could have this as a general phenomenon that maybe brains of different kinds can, in general, sample information at different rates from the environment, such that they essentially have more units of subjective time per unit of objective time.
Luisa Rodriguez: Yeah. So can I ask a followup question? I have had the experience of being in a car accident. I do remember it feeling like time was passing slowly and I had extra time to think. I was like, “Where does it feel like my head is about to hit?” and could react a bit more than I would have expected to be able to. When I remember it, I don’t have a memory of it being a long time. It still feels like it was a few seconds, even though it felt like my brain worked faster. So that example kind of pulls the intuition for me, but it doesn’t totally. Is there anything I’m missing?
Bob Fischer: First of all, great. Well, not great. I wish you hadn’t had the car accident, but secondly, great in the sense that, yeah, we do want to separate the question of the rate of subjective experience from the rate of memory encoding and then the nature of memory later on. So it could turn out that the remembered experience is encoded in a way where it still feels like it took three seconds retrospectively, but in the moment you were having more units of subjective experience per unit of objective time. Those things are totally compatible.
That being said, it’s a highly controversial hypothesis, really unclear. There’s just lots to say here about all of the empirical assumptions that are being smuggled in and how plausible those are, and the philosophical assumptions that are being smuggled in. Andreas Mogensen at GPI has a new paper about this. He’s very sceptical of interpreting the evidence this way. So it’s an ongoing scholarly dispute about what you should think along these dimensions.
For our purposes — thinking about the way of addressing this for the purposes of coming up with moral weights — we thought we don’t want to put much stock in this hypothesis, but we want to put some, and so we just incorporate it as one of the factors. But obviously it doesn’t dominate, because most of these welfare range estimates for nonhuman animals come out well below humans’ — even if, using the kind of proxies that Jason talks about in his report, it turns out that they might have this higher clock speed.
Luisa Rodriguez: OK. And so methodologically, did you adjust all of these ranges for some kind of general “maybe they think faster or slower,” or did you have more specific proxies that are like, “it seems like octopuses have the kind of structures that might make you think that they are processing things more quickly in a way that might give them more subjective experience”?
Bob Fischer: We did adjust them all. However, we also show you the unadjusted versions. So if you go back and you again look at the website, what you’ll see is links to a truly ridiculous number of tables where you can see different versions of these numbers based on whichever adjustment you find most compelling or not. So you can always figure out which factors are at work.
Essentially, what Jason argues in his report is that you could use an organism’s critical flicker fusion rate, the rate at which a flickering light stops looking like individual flashes and starts looking like a single steady stream, as a way of estimating clock speeds. So when we go ahead and do the adjustment, it’s based on those numbers, but it affects the tails of the distribution; it doesn’t actually move the centre of the probability mass very much.
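To see how a low-credence adjustment like that fattens the tails while leaving the centre alone, here’s a toy Monte Carlo. The species’ numbers and the 10% credence are made up, and the human flicker fusion figure is only a rough approximation:

```python
import random

HUMAN_CFF = 60.0  # human critical flicker fusion rate in Hz (approximate)

def welfare_draw(base_range, species_cff, p_clock_speed=0.1):
    """One Monte Carlo draw: with small credence p_clock_speed, assume
    subjective clock speed scales with flicker fusion rate and rescale
    the welfare range accordingly; otherwise leave the range alone."""
    if random.random() < p_clock_speed:
        return base_range * (species_cff / HUMAN_CFF)
    return base_range

# A hypothetical species: base welfare range 0.2, faster visual sampling.
draws = sorted(welfare_draw(0.2, species_cff=100.0) for _ in range(100_000))
print("median:", draws[50_000])           # still 0.2 -- centre unmoved
print("99th percentile:", draws[99_000])  # ~0.33 -- upper tail fattened
```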
Luisa Rodriguez: I’m still a bit confused about why exactly this light thing should tell us anything about processing speed. Can you make that a bit more intuitive?
Bob Fischer: Sure. So let’s stop talking about light. Let’s think about if you’re looking at the blades of a fan, there’s a rate at which they spin where you still see individual fan blades. Then there’s a rate at which they spin where you stop seeing individual fan blades, and it’s just a blur.
Luisa Rodriguez: Yeah, I like that. Nice.
Bob Fischer: And the thought here is that different organisms might be better able to keep resolving the individual fan blades at higher speeds, and that looks like a difference in their visual information sampling rate.
Now, do we know that their visual information sampling rate somehow demonstrates something about the clock speed of consciousness? No, we do not. But it’s a reasonable hypothesis. It’s a possibility. So whatever credence you assign to that possibility, you can take these differences across organisms, apply the corresponding discount, and use that as a way of saying that maybe you get this kind of variation in the subjective experience of time — or rather, the rate of subjective experience.
Luisa Rodriguez: Got it. Cool. I think that really helped the visualisation. So I’m now picturing the fan. At some point it becomes a blur. But if at the point at which it became a blur, in fact, I became a different species and could still see the bits of fan turning, it would feel like I had access to more units of consciousness.
And it sounds like your position is like, it’s really hard to know if that’s true, but also hard to rule out. It does seem like a totally plausible hypothesis that would have real consequences for how we interpret these moral ranges. So I think you’ve dragged me along to where you are.
Bob Fischer: No, that’s great. And I think that also is a nice way of summarising the general approach of the Moral Weight Project. Really what we are trying to do is take on a few assumptions, and then beyond that, make as few assumptions as possible and just say that we are massively uncertain about all of these steps, so let’s just acknowledge our uncertainty and make sure that our models for estimating these welfare range estimates just aggregate over all that uncertainty — they acknowledge it and aggregate over it.
Luisa Rodriguez: Right. Which is why you get results where your best guess is something like octopuses are about 20% as morally weighty as humans — but you put a little bit of weight on octopuses being almost 0% as weighty as humans, and then also you put some weight on octopuses being 1.5x as weighty as humans. And you get that really big range because you just don’t know about the subjective rate of experience. And so you’re like, we’ve got to be uncertain — and that means putting a tiny bit of weight on the pretty weird conclusion that octopuses either don’t matter basically at all, which seems pretty weird, or that they matter more than humans, which also seems pretty weird.
Anyways, I appreciate that, because it’s so hard, I think, to put really wide ranges on things. It makes it kind of feel like you’re not saying anything.
Bob Fischer: Sure, that’s right. People do feel really uncomfortable with it. And the only thing I want to tweak in what you just said is that, yes, it is making this claim about moral importance or value — but really what it’s making a claim about is differences in the possible intensity of valenced states that, given a bunch of assumptions, is equivalent to moral value.
Luisa Rodriguez: Right. At a given time.
Bob Fischer: Right.
Luisa Rodriguez: Thank you. That is worth clarifying again. Before we move on, you’ve already said a couple of fun things about octopuses. Is there anything else that you found particularly memorable that kind of helps make sense of this result where you put a fair bit of weight on them having pretty significant capacity for welfare?
Bob Fischer: I think the answer to your question is, when I think about the facts we learn about octopuses, it’s hard for me to express them in a way that makes it intuitively compelling why that should matter so much. In part, that’s because the individual traits are not going to sound super exciting — like, “they exhibit conditioned place preference,” which means they can learn that a site, a location, that has arbitrary symbols on it is associated with pain and not go there. So that’s cool that they have this ability, but you’re not like, “Oh, now I care tonnes about octopuses,” right? That’s not the thing that does it for people in terms of their sentiments.
If you want to sort of experience what makes these creatures fascinating and compelling, you are much better off doing things like going and watching My Octopus Teacher on Netflix and just spending time saying, “Wow, I see the personality there, I see individuality there.”
But yeah, I think there are all these traits. I do think they’re the kinds of things that matter for determining whether there’s the capacity to suffer and the intensity of that suffering — but they’re not going to be super resonant. And we have to just accept that the scientific descriptions of some of these traits are not going to be super resonant.
Luisa Rodriguez: Yeah, that does make sense. And I have seen My Octopus Teacher and it destroyed me. It was extremely moving. If I try to play devil’s advocate, we’ve got all sorts of biases pointing in both directions, and I buy that many of them are pointing against anthropomorphising. But I did worry that in watching that film, I was doing a tonne of anthropomorphising. How much do you worry about that? And do you feel like there are really concrete things that you see in My Octopus Teacher that both seem like good indicators of that being having this big capacity for experience and welfare, and also have real scientific reasoning behind them?
Bob Fischer: Right. So I confess that my memory of the details of that film isn’t such that I’m going to be able to give you a quick example. But what I can say is, when we step back and ask a larger question about the degree to which we should worry about anthropomorphising animals, I do worry about that. It’s the right thing to worry about. And on the other side of that, there’s what Frans de Waal calls “anthropodenial,” where we just have this knee-jerk rejection of the possibility that these animals have these traits.
Both are bad. I suspect that it’s really hard to know which we’re falling into in any given case. So my inclination is to go with a sort of flat-footed approach, and say, I’m not going to be able to talk you out of the worry about anthropomorphising in any particular case, probably — because there’s always going to be some way of explaining this behaviour that doesn’t involve appealing to more sophisticated cognitive traits. That’s what the entire world of cognitive ethology does, is try to come up with these really minimal explanations for behaviour.
So what we have to do is just list a bunch of traits that seem like they’re relevant, and see what happens when we put those things together. What picture emerges? And if you don’t like my traits, fine: give me your traits and let’s look at the literature for those. I just don’t know what else to do, to be perfectly honest, than to just say that the case for consciousness of some kind is reasonably strong. There are lots of interesting capacities here. I don’t see any great biological explanation for radical differences in the capacity for suffering in terms of five orders of magnitude or something like that. So probably you should go with that, and you should be really uncertain about it.
Luisa Rodriguez: Yeah, I think that does just all make sense to me. I’d love to watch My Octopus Teacher with an octopus expert next to me and try to have them explain to me what exactly the behaviours are revealing. But perhaps another episode.
Bob Fischer: I do strongly recommend, by the way, having experiences like that: go and find biologists and watch the way they watch animals, and you can learn an enormous amount. I mean, I think it’s tremendous how much we miss as nonexperts engaging with these animals and not appreciating why those behaviours are significant. I spend a lot of time working with entomologists now and looking at insects through an entomologist’s eyes. Very different experience than looking at insects without that expertise at hand.
There’s a great story about a friend of mine who was rearing mantises. And mantises are not something I’ve ever cared about in any particular way. I’m an ordinary person, you know? But some of them ended up dying. And when she was talking to me about the ends of their lives, and the difficulties they were facing and the way that they were struggling and so on and so forth, I got emotional about those mantises in a way that I was stunned to experience. And it was because of her ability to make their lives vivid to me in a way that I had never had happen before.
Luisa Rodriguez: Wow. That sounds really intense and really meaningful. Thanks for sharing.
Pigs [01:27:50]
Luisa Rodriguez: Let’s do just one more. What was your result for pigs?
Bob Fischer: With pigs, we end up guessing that their pains are about half as intense, or can be about half as intense, as a human’s. In one sense, that should be the least surprising of all these results. Our brains are actually very similar. It would be kind of weird if a large mammal was that different from us. The abilities that really set us apart from pigs are things like our impressive ability to engage in social learning, that sort of thing — and not necessarily the intensities of our pains. So that result has never struck me as particularly odd. I think it’s just the right result. I have qualms about other things, but not really about that with pigs.
And it’s also the case that the more you learn about pigs, the easier it is to feel like there’s something recognisably there. Some of that information is just pure heartstring-tugging. Female pigs go and make nests a lot like birds when they’re going to have offspring, and of course, they can’t do that on a factory farm, and their lives are much more terrible as a result. So there are things like that.
It’s also just that I think people have a strong intuition that their dogs have welfare ranges that are not radically different from their own. And if you’ve ever spent any time with pigs, you know that they behave stunningly like dogs in lots of ways. Maybe you’re familiar with the story of Esther the pig, who was adopted by a Canadian couple, got potty trained, roots in the cupboard to get her snacks, and all these sorts of things. And once you have that picture of the kind of organism you’re dealing with, then whatever you’re inclined to say about the welfare ranges of dogs seems to naturally port over.
Luisa Rodriguez: Yeah, that makes sense. Whenever I am thinking about pigs in factory farms, a very immediate intuition pump for me is just like, imagine dogs in factory farms. And then I’m just like, “What?!”
Bob Fischer: Yeah, right. Totally.
Luisa Rodriguez: “Please, no.” So yeah, that does work for me.
Surprises about the project [01:30:19]
Luisa Rodriguez: Let’s zoom out a bit. You’ve already said that one of the big surprises to come out of this project was that you came in thinking you knew which animals would end up looking really important, from the perspective of welfare and moral weight, but invertebrates actually ended up looking much more important than you’d thought. Were there any other surprises?
Bob Fischer: In general, I also thought we were just going to have bigger differences between humans and nonhumans. I thought it was going to be easier to get those differences. And it’s not that I couldn’t have reworked the methodology in ways to generate that, but the thing that seemed most natural didn’t do that.
So just as a bit of personal background here: I’ve been working on animal issues for the last 12 or 13 years, but I don’t come in with a strong commitment to animal rights. It’s not like I hold some strong animal-equality view; my position has always been something like: sure, animals matter a lot less, but what we’re doing is so awful to them, and it would be so easy to make change. And just basic human virtues like compassion and empathy should get us to do a lot better. So we should really be doing a lot better.
So I didn’t come in thinking we should get a really flat line basically across the board for all these organisms in terms of the welfare range estimates. But then I finished the project thinking that it’s going to be really hard to generate really big differences between these organisms if you take on these assumptions.
Luisa Rodriguez: Yeah. Interesting. Were there any other big surprises?
Bob Fischer: I suppose the other big surprise was about the reception of the project, and how much it divided folks in terms of the way they thought about animals, and the way they thought about the kind of work we were doing. There were some individuals who just took it on wholesale, but I think for a lot of people it really brought out very fundamental differences in the way they think about nonhuman animals, and the way they think about ethics generally. So some folks communicated very clearly: “I can’t take seriously any ethic that’s just going to spit out that kind of result for animals, or say that they could have that kind of moral importance.”
And that does get us down to brass tacks really fast. Like, how much weight are you going to put on your intuitions? How strongly do you think you can lean, or how heavily can you lean on your pre-theoretic views about the relative importance of humans and nonhumans? How much are you really committed to these assumptions that have been popular in the community? Like, do you really want to go in for utilitarianism? Do you really want to go in for hedonism? Do you really want to go in for this, for that?
And I think I have been surprised because I had a background picture of the larger community on which people are just way more homogeneous in terms of their philosophical views. And now I think: whoa, I was wrong. Folks around here are way more diverse than I imagined. The language of utilitarianism is just a convenient one for doing the kinds of cost-effectiveness analyses that people want to make in the context of doing the most good. But in fact, people aren’t really that utilitarian in lots of important ways. And that’s something that’s striking and probably needs to get brought out and discussed much more in the community.
Objections to the project [01:34:25]
Luisa Rodriguez: OK, let’s dig in more to some objections that people might have about the whole project, in that vein. So your high-level bottom line is that you think the welfare ranges of humans and pigs, chickens, carp, and salmon are within an order of magnitude of one another. And you also conclude that all of the invertebrates you looked at — so octopuses, bees, crabs, black soldier flies, the others — have welfare ranges within two orders of magnitude of the vertebrate nonhuman animals you looked at, like pigs and chickens.
Orders of magnitude I always find a little bit hard to sink my teeth into, but it’s something like: we’re talking about comparing an hour of human suffering to an hour of suffering for 10 or 100 of these animals. We’re not talking about one to 10,000, or even one to 1,000 — which I think is the kind of trade-off people are expecting when they think about some of these species, especially the invertebrates.
Bob Fischer: Yeah. It could be as much as 1,000, but still, those numbers are high for many people. They’re going to be really uncomfortable with them.
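The arithmetic behind this order-of-magnitude talk is just a reciprocal; here is a tiny illustration with made-up ratios:

```python
def equivalent_hours(welfare_ratio):
    """If an animal's welfare range is welfare_ratio times a human's,
    roughly this many animal-hours at maximal intensity 'match' one
    human-hour at maximal intensity."""
    return 1 / welfare_ratio

print(equivalent_hours(0.1))    # one order of magnitude: 10 animal-hours
print(equivalent_hours(0.01))   # two orders of magnitude: 100 animal-hours
print(equivalent_hours(0.001))  # three orders of magnitude: 1,000 animal-hours
```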
Luisa Rodriguez: Right. And so then if you take these results at face value, I wouldn’t be surprised if the results made it seem like the best thing to do with your donations or with your career was to focus on alleviating animal suffering and not human suffering — which I imagine makes many people very suspicious of your methods. What’s your reaction to that kind of flavour of objection?
Bob Fischer: Well, there are a few things to say about it. One is, as people who do this work go, I mean, sure, I obviously have spent a lot of time doing animal stuff. I’ve also written a book defending meat eating. I’ve defended views on which animals don’t matter nearly as much as humans. I’ve spent a lot of time arguing for the kinds of positions that are not popular among proponents of animal rights. So sure, there’s a worry about my own biases. All I can do is say I’m trying to be as even-handed as I can, and my publication record might not include all the things you would expect. So there’s that.
The second thing to say is, yeah, sometimes the arguments push in uncomfortable directions. I don’t think it’s crazy to think we should be spending way more money on animals than we are now. When we think about the sheer number that we are raising and slaughtering — hundreds of billions of vertebrates; arguably in the trillions, once we factor in the invertebrates — that’s just a lot of individuals. It seems like that should be getting a lot of resources.
And the last thing to say is that this is conditional on a bunch of assumptions that you can contest. But the people who typically contest them — or who have been contesting them in paying attention to this project — they’re actually already in on a lot of them. They’re kind of sympathetic to utilitarianism; they’re kind of sympathetic to hedonism, et cetera. So they’re in a bit more of a bind. It’s just harder to resist those conclusions.
Luisa Rodriguez: Yeah, let’s dive into that more. Is this just a problem with trying to maximise expected value when prioritising between causes, which can lead you to do this kind of thing that feels very uncomfortable — like prioritising large but uncertain payoffs over smaller but more certain ones?
Bob Fischer: Sure. So yes, one thing you could think is that the main problem here is that we have gone all-in on a decision theory that just gets you crazy results, and always tells you to do the thing that is likely to have the highest payoff, no matter how low the probability. This raises a worry called fanaticism, the thought that maybe we’re literally going to burn all of our money and our entire careers — all 80,000 of our hours — on things that don’t matter, on the extraordinarily tiny chance that they are the most important thing to do. So yes, you can certainly make that charge. That charge depends on how low you think the probabilities of sentience and moral importance are.
So I’m really confident that pigs matter, and that they matter a fair amount. I’m a lot less confident about what to say about shrimp. And so you do have some set of concerns here about whether it’s really fanaticism in every case, but I think that’s the right kind of question to be asking.
Alternative decision theories and risk aversion [01:39:14]
Luisa Rodriguez: An alternative decision-making procedure that you’ve written about is risk aversion. What exactly is risk aversion in the context of comparing the welfare of different species?
Bob Fischer: Yeah. Good. So what we’re doing here is stepping away from the project that we originally did, the Moral Weight Project, whose goal was to come up with these welfare range estimates that you could use to estimate how much human-equivalent welfare you’re getting from different kinds of projects. Cool. So then the next step is to say: you’ve come up with your best guesses — how do you then make a decision about what to do?
And if you are a traditional expected utility maximiser, you just say: What are the possible outcomes? What are the values of the outcomes? What are the probabilities of the outcomes? Multiply through and sum, and you get your answer.
But there are other decision theories, as you mentioned — risk-averse decision theories — and they say maybe you shouldn’t do it that way. You should think in terms of your ability to make a difference, or your ability to avoid worst-case outcomes, or your sensitivity to your uncertainty about the probabilities that you’re assigning. Maybe those kinds of factors should somehow affect what you end up doing.
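To see how these can come apart, here’s a minimal sketch contrasting straight expected value with one stylised risk-averse rule: risk-weighted expected utility in the style of Lara Buchak, with weighting function r(p) = p². The lotteries and the exponent are illustrative only, not Rethink Priorities’ actual models:

```python
def expected_value(lottery):
    """Straight expected value: probability times value, summed."""
    return sum(p * v for p, v in lottery)

def risk_weighted_value(lottery, a=2.0):
    """Buchak-style risk-weighted expected utility: start from the worst
    outcome, and weight each increment of value by r(p) = p**a, where p
    is the chance of doing at least that well. With a > 1, improbable
    good outcomes are underweighted."""
    outcomes = sorted(lottery, key=lambda pv: pv[1])  # worst to best
    total = outcomes[0][1]  # you're guaranteed at least the worst outcome
    for i in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[i:])
        total += (p_at_least ** a) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

safe_bet = [(0.95, 10), (0.05, 0)]     # almost certainly does a bit of good
long_shot = [(0.01, 2000), (0.99, 0)]  # tiny chance of doing lots of good

print(expected_value(safe_bet), expected_value(long_shot))            # 9.5 vs 20.0
print(risk_weighted_value(safe_bet), risk_weighted_value(long_shot))  # ~9.0 vs 0.2
```

With these toy numbers the long shot wins on straight expected value, but once improbable payoffs are down-weighted, the safe bet dominates.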
Luisa Rodriguez: Yeah, that makes sense to me. It’s like we’ve got a bunch of data. We kind of know what the parameters would spit out if we literally just did expected value. But now we’re like, how do we actually want to use that data to decide what to do? Do we literally want to just multiply the badness of the thing times the probability, if that’s going to give us results that feel really implausible to us? And risk aversion is just an alternative that allows us to take into account other ways of thinking about this that we might endorse more.
Bob Fischer: Right. So the simple way or simple example that will give you some feeling of the view is: risk aversion comes in different types. One very common form says, “What I’m really worried about is the worst-case scenario.” So there are people who get to the airport four hours before their flight, because they absolutely do not want to miss their flight. And then there are people like me, who skate in 45 minutes beforehand, right? Like, I am less risk averse than you might be, Luisa, because I am willing to sort of say, “Thank god for precheck; we’re going to make it through,” right?
But what’s going on there is a difference not in terms of the probabilities necessarily that we’re assigning to being able to make it to the flight, but rather what our attitudes are toward those probabilities, and perhaps our attitudes toward the value of the different outcomes — not just how bad you think it would be, but how much you care about that particular bad outcome.
So if you are risk averse in the sense that you want to avoid worst-case outcomes, you’re going to rank your options differently than you would if you were a straight expected value maximiser. So something like, part of the case for focusing on the long-term future and saying we want to avoid existential risks is precisely that you’re worried about worst-case outcomes: it seems really bad if we all die, and we don’t have more of humanity.
But likewise, it’d be really bad if shrimp were sentient and we were putting them in these low-water-quality environments where they cannibalise each other, and where we’re ripping off their eye stalks so that they mature faster, and where they are desperately trying to escape when harvested and they’re suffocating, et cetera. That would be really bad. That would be really bad if we were doing all those things to shrimp and they were sentient.
So if you are an avoid-the-worst-case person, then when you’re comparing something like helping pigs and helping shrimp, actually you’re going to be much more inclined to think you should help the shrimp, even if your probability of sentience for them is much lower, and even if the welfare range estimate that you have for them is much smaller.
Luisa Rodriguez: Awesome. That was really clear to me. I like the term “worst-case person.” So that’s one type of risk aversion, and I think you’ve come up with three types. So maybe let’s talk through all of them. What’s another one?
Bob Fischer: So another one would be if you are worried about futility: not doing anything at all. You care about difference making. And if you think about the motivation for doing the most good — or at least one of the arguments that got me interested in trying to do the most good — you think about Peter Singer’s classic thought experiment. He’s saying that if you can prevent something really bad from happening, and it’s not really going to cost you anything, then you really should. And so when you’re thinking about this child drowning in the pond, like, of course you’ve got to go and save the child. And likewise, because you can help these far-away starving strangers, you should help them.
Well, what’s the moral intuition that’s driving that? Fundamentally, it’s this thought that you can do this: you actually have this power. It’s not like you’re gambling, and you’ve got a 0.001% chance of making the world better. It’s like, no, you can pluck that kid out of the pond — and you can, via your donation, prevent this kid from starving over there in some faraway place.
And so concern for difference making is going to change what we focus on, because it’s essentially going to penalise options where we think the odds of doing good are worse. It’s also going to penalise options where we might end up doing a bad thing, where there’s a downside risk.
So shrimp look good on worst-case scenario risk aversion. They look worse than they would on expected value maximisation, if you’re a difference-making-focused person, because you could be throwing your money away, right? Because maybe they’re just not sentient at all.
Luisa Rodriguez: Yeah, it’s funny because I feel very strongly that both of those resonate with me, which is annoying. It makes it all much less clear-cut. But I’m like, yeah, I do want to prevent worst-case outcomes. I don’t want to tolerate giving really any chance at all to shrimp being sentient and me not having done anything about the horrible ways in which they’re raised.
And then another part of me is like, there are people dying of malaria, and we know how to make sure they aren’t, and we know how bad it is to have malaria and to lose a loved one to malaria. And I’m not going to spend my money making sure that shrimp — who are maybe, plausibly sentient, but are weird and small and nearly impossible to study — making sure that they’re having a slightly better welfare because their tank is a bit bigger or a bit less muddy or something.
Anyways, I imagine lots of people will resonate with both of these, so we’ll have to come back to what we do when we’ve had that kind of conflict. But first, what’s the third kind of risk aversion that is worth talking about here?
Bob Fischer: The third kind is called ambiguity aversion, and that’s essentially where you’ve got hard-to-specify probability ranges, where you’re really uncertain about what probability to assign to something. So this differentiates things affecting humans and things affecting animals, and of course among the animals.
So I don’t know about you, but when it comes to the probability I assign to pigs being moral subjects, to pigs having moral standing, it’s pretty high. I’m more confident that pigs matter than I am of lots of other things. Maybe I’m at like, 0.8 or 0.9: I really think that they are going to be morally important entities.
What do I think about shrimp, just to go back to the same example? Well, I mean, I’m more confident than I was, but still not super confident. And it’s not just that I’m less confident, but I’m also less confident about what probability to assign. So the range of possible values around pigs for me is pretty narrow: maybe the low end is 0.7 and the upper end is 0.99 or something. But for shrimp it’s like, maybe they matter as much as pigs, or maybe they don’t matter at all, and it’s all over the map. Big, wide ranges: very uncertain about what to do.
If you’re ambiguity averse, you don’t like situations like that, where you’ve got these really wide or uncertain probability distributions, and you think you would prefer known probabilities — even if they’re low probabilities, you prefer known ones. So that’s going to penalise, again, the insects, the invertebrates generally, because we just are so much less confident about what to say about them.
Luisa Rodriguez: Yeah. I want to actually try to distinguish the aversion to acting on ambiguous or uncertain probabilities from the aversion to not doing any good at all, because they feel very similar to me. Can you try to make that difference really clear for me?
Bob Fischer: So one way of thinking about this is the difference between the way that these two different risk attitudes are going to respond to low probabilities. So if you are difference-making risk averse, and you have a clearly known but low probability, like 1%, you don’t like that, right? And you say, “I want to make sure that my action has a much higher chance of making a difference, perhaps, than 1%.” If you’re ambiguity averse, you don’t actually have any problem with 1%, as long as you really know it’s 1%. Because what you’re penalising, what you’re concerned about, is the case where it could be anywhere between 0.001% and 50%, just I have no idea what probability to put on there. So that’s the crucial difference between those two attitudes.
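A toy sketch of that difference (both functional forms are made up for illustration; ambiguity aversion is modelled alpha-maxmin style, as a pessimistic blend of the lowest and highest probabilities you consider plausible):

```python
def difference_making_discount(p_good, threshold=0.05):
    """Difference-making risk aversion (stylised): penalise options whose
    chance of doing any good falls below a threshold, even when that
    chance is precisely known."""
    return 1.0 if p_good >= threshold else p_good / threshold

def ambiguity_averse_credence(p_low, p_high, pessimism=0.7):
    """Ambiguity aversion (stylised alpha-maxmin): act on a pessimistic
    blend of the interval's endpoints, so wide intervals get pulled
    toward their low end while precisely known probabilities are
    untouched."""
    return pessimism * p_low + (1 - pessimism) * p_high

print(difference_making_discount(0.01))          # 0.2  -- a known 1% is penalised
print(ambiguity_averse_credence(0.01, 0.01))     # 0.01 -- a known 1% is fine
print(ambiguity_averse_credence(0.00001, 0.50))  # ~0.15 -- ambiguity is punished
```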
Luisa Rodriguez: OK, yeah. It’s clearly different. When I try to step into the mind of the person who doesn’t like acting on ambiguous information about benefits, I expect that the main thing that they don’t like is that there’s this big chance that they’re not doing anything because of the uncertainty. Who cares if it’s uncertainty about doing a lot of good or a lot, a lot, a lot, a lot of good. But I do start to care when it’s basically no good or harm versus a tonne of good. But maybe there are actually people who care about the difference between a lot and a lot, a lot, a lot, a lot. And so maybe I’m missing some key things still.
Bob Fischer: Well, one way of getting at this is to think about why we care about risk. And we’re very focused in this conversation — because we’re focused on doing good — on impact-related questions. But of course, you might care about risk for a different reason. You might just not want to be wrong about stuff. So there’s sort of an outcomes-oriented way of caring about risk, and there’s a more epistemic way of caring about risk — where what we really want to know is: do we believe true things? Are we latching onto facts about the world? And I think difference-making risk aversion is much more outcome oriented, and the ambiguity aversion is much more epistemic.
Luisa Rodriguez: Yes, that totally drove it home for me. I think clearly what’s happening is I’m just being a consequentialist.
Bob Fischer: You mean you’re caring about how the world goes? I mean, that sounds like a virtue.
Luisa Rodriguez: Yeah. At the very least I’m being very stuck in my own perspective, and not realising that other people might care about how they go about making decisions and whether it’s based on confident assessments of the world or not. And that does make sense as a reason you might want to be risk averse.
OK, so those are the three types. I’ll just name them again so that they’re a bit more in our near-term memory: there’s the person who wants to make sure that we avoid worst-case outcomes; there’s the person who wants to make sure that we do any good at all for sure; and then there’s the person who doesn’t want to make decisions based on really, really uncertain information or data.
So I’m curious about how exactly you apply these risk aversions. Presumably you’re not just saying, like, “I want to avoid worst-case outcomes, and therefore I’m spending all of my time making sure that no one goes to hell, because that would be the worst possible thing in the whole world.” There’s presumably some more complex decision-making procedure.
And you actually did this test comparison where you compared human, chicken, and shrimp interventions adjusting for these kinds of risk aversion, and the results were super fascinating. But first, maybe just talk me through what exactly did it mean for you to account for these aversions to certain kinds of risks when comparing interventions?
Bob Fischer: Yeah, great question. So you’re totally correct. You don’t want to be all-in on one kind of risk aversion, because you will end up saying crazy things like that. Instead, risk aversion is something you can have different levels of — you can be more risk averse or less risk averse along any of these dimensions, and they can be combined in various ways — and what they’re going to do is affect how you rank your options to some degree or other.
You can think about it as sort of like amplifying low-probability events in the worst-case scenario situation, or putting your thumb on the scale when it comes to the value of certain kind of outcomes when you’re thinking about the difference-making risk aversion, or not liking to some degree or other the really uncertain, ambiguous probabilities.
So then the question is always going to be: what’s a reasonable level of risk aversion of each type? What we did is just choose very modest levels — so don’t assume that people are crazy risk averse; just assume that they’ve got a little bit of each sort of risk aversion — and see what happens. Basically, think about the way this is going to go.
If you’re just comparing humans, chickens, and shrimp, and you’re a straight expected value maximiser, well of course shrimp win because there are trillions of them, so even given very small moral weights for shrimp, they just dominate.
Now suppose that you are worst-case scenario risk averse. Well, now the case for shrimp looks even better than it did before if you were a straight expected value maximiser.
But then suppose you go in for one of those other two forms of risk aversion: you’re worried about difference making, or you don’t really like ambiguity. Well, those penalise shrimp, and maybe quite a lot — to the point where the human causes look a lot better than the shrimp causes.
The really interesting thing is that chickens actually look really good across the various types of risk aversion. So if you’re a straight expected value maximiser, the shrimp beat the chickens. But once you’ve got one of those other kinds of risk aversion in play — you’re worried about difference making or ambiguity — actually chickens look really good, and they even beat the human causes. And the reason for that is really simple: it’s just that there are so many chickens and we think they probably matter a fair amount.
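Here’s a toy version of that three-way comparison. Every number is hypothetical, chosen only to exhibit the mechanism Bob describes, where shrimp win on straight expected value but chickens come out on top once ambiguity aversion is in play:

```python
# Per-dollar toy comparison -- all figures hypothetical, not RP's.
# For each cause: credence in sentience, an ambiguity interval around
# that credence, welfare range relative to humans, individuals helped.
causes = {
    "humans":   dict(p=1.00, p_range=(0.99, 1.00), weight=1.00, n=1),
    "chickens": dict(p=0.90, p_range=(0.70, 0.95), weight=0.30, n=20),
    "shrimp":   dict(p=0.30, p_range=(0.01, 0.60), weight=0.03, n=1_000),
}

for name, c in causes.items():
    ev = c["p"] * c["weight"] * c["n"]  # straight expected value
    lo, hi = c["p_range"]
    pessimistic_p = 0.9 * lo + 0.1 * hi  # heavily ambiguity-averse credence
    ra = pessimistic_p * c["weight"] * c["n"]
    print(f"{name:8s}  EV = {ev:5.2f}   ambiguity-averse = {ra:5.2f}")

# Output: shrimp top the EV ranking (9.00 vs 5.40 vs 1.00), but under
# ambiguity aversion chickens win (4.35) -- there are a lot of them AND
# we're fairly confident they matter -- while shrimp fall to 2.07.
```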
Luisa Rodriguez: Yeah, interesting. So it’s like if you have some moderate amount of risk aversion, you might think intuitively that that’s going to rule out the animal welfare interventions. And in fact, that’s just not what happens.
Is there a level of risk aversion you could have in this category of “I want to be certain and I want to make sure I’m doing good” that would rule out nonhuman animal interventions? And if so, is it reasonable at all? Is this a position that people could reasonably take if they’re even remotely expected value maximising oriented?
Bob Fischer: Yeah, that’s a great question. It’s a hard question, actually. We didn’t do explicit modelling that was designed to find that threshold. So it’s not like I have some particular function to plug in to show what it would be, which we could then take and apply in other cases to see what you should say.
You’re absolutely right that there is going to be some level of risk aversion where that’s going to happen. As for whether it’s a reasonable one, that’s probably going to depend on other antecedent questions.
So I think the probability of chickens mattering is really high. But you can imagine someone else who says, “I don’t know. Chickens, maybe at best I’ve got some theory of consciousness I’m committed to where I think they’ve only got a 4% chance of being conscious or whatever. And then even if they were conscious, I would think they would only have some tiny welfare range compared to humans — a fraction of a fraction of a fraction.” Well, now it’d be a lot easier to have a level of risk aversion that rules them out. It’s going to be harder if you’re pretty optimistic about sentience and their welfare ranges, as we are. So you’re going to have to do sort of a holistic assessment. It’s not going to be a quick thing.
Luisa Rodriguez: How sensitive are these results, broadly? Maybe the best way to answer that question is: how close are chicken interventions to being beaten out by one of these other two? Is it possible that someone could have slightly different beliefs from Rethink’s, and all of a sudden humans end up looking much better? Or would they have to have really different beliefs?
Bob Fischer: You’d have to have quite different views. So Laura Duffy did this wonderful report. She tries to answer this question by looking at different weights that you could have for chickens, and seeing how robustly good chickens end up being, even if you think that they matter a lot less than the Moral Weight Project suggests. And she found, yeah, you could think that they were an order of magnitude less important — so instead of 30% as important, 3% — and still end up with this result that they look really good. So that’s pretty striking.
Luisa Rodriguez: Yeah, I agree. That is pretty striking. Were there any kind of high-level lessons you drew from this exercise that aren’t obvious from just the conclusions we’ve talked about so far?
Bob Fischer: That’s a good question. I think for us, the main result was just that until you actually put numbers on these things and specify levels of risk aversion and plug in particular moral weights, et cetera, your intuitions are just not a good guide to what you should end up thinking, all things considered. I would not have predicted that chickens were as robustly good. That was a total surprise to us. We were not anticipating it. It just sort of came out of the blue once we had developed models for these various forms of risk aversion. So that was just an interesting result.
And in general, our expectation was that risk aversion would have a smaller impact: that risk attitudes of various kinds would sound like they were going to matter, but wouldn’t be that big of a deal. And then it turned out that, yeah, it is kind of a big deal.
Luisa Rodriguez: Yeah. OK, interesting.
Bob Fischer: I mean, the other interesting thing to say, particularly for listeners of this podcast, is just that some of these decision theories are really going to penalise downside risks — so ways that you’d make the world worse. What that means is that if you think that certain kinds of interventions have those risks, they are going to look much worse than safer, surer bets. So that doesn’t in and of itself have any implications for existential risk work, but it is going to have implications for some kinds of existential risk work.
So think about people who have worried that certain kinds of AI safety projects have actually accelerated AI development: basically what they’re saying is there’s a real downside risk to some kinds of AI safety work. And if you have a decision theory that says “avoid making the world worse,” then options like that are going to get pretty seriously penalised — and will in fact look worse than animals and global health and development interventions.
Hedonism assumption [02:00:54]
Luisa Rodriguez: OK, let’s leave that there and discuss another high-level objection. So one major objection some people might have is around the fact that you assume hedonism is true — which we’ve talked about a bit, but as a reminder, hedonism says that the only determinants of welfare are positively and negatively valenced experiences. What is the best case for hedonism?
Bob Fischer: Well, I always think the best case for hedonism is to think about the question of what makes the good things good and what makes the bad things bad. And if you keep asking that question long enough, in a very childlike way, you often ground out in, “It makes people feel good. It makes me feel good.” Right? Like, why do you care about getting this job? Or why do you value these friendships? Or why is it important to you to achieve these political objectives? And when we continue to ask that question, eventually what we seem to care about are impacts on the subjective states of individuals. So that chain of reasoning is, I think, part of the most compelling case for hedonism. It just seems like the right kind of final answer.
And if someone says, “Why do you care about making people happy or preventing pain?” the right response is, “What’s wrong with you? You should just see that that is a good stopping point.” Right? That’s one of the things that doesn’t need explanation in a way that something like why I care about this political achievement might need explanation.
Luisa Rodriguez: Yeah. I think part of me is like, yes, that completely sounds right to me. And then I can kind of access an intuition that there are things that are valuable to me that don’t feel like they ground out in positive or negative experiences. So just knowing that I love my family and they love me feels valuable, separate from all of the actual joy that they bring me and the pain that they sometimes cause me. It feels categorically different. And then I push back on myself, and I’m like, I don’t even know if that feels true, actually. Maybe it is just like over decades, they’ve given me loads of joy, and I have this immense feeling of warmth and gratitude that I get to continue having them in my life, and that I’ve had them in my life so far. So I feel very confused about it. How do you feel about it?
Bob Fischer: I think one way of getting at the intuition that you’re pushing there is to think about other kinds of cases than the family case. The one that always does it for me is thinking about knowledge, where you might think some things are just important to know, even if they make your life worse and you can’t do anything about it. I think lots of people have that intuition, and that’s sort of why they read the news, perhaps to their detriment. Perhaps they should stop, as some people have recently argued. But if you have that thought — that it’s important just to be a knower, independently of its benefits to you or anyone else, in hedonic terms — then that can cut against the hedonist point of view.
Another way of putting some pressure on the hedonist point of view is to start talking about things like: would you want to be plugged into the experience machine — this thing that simulates all of the experiences that you want to have?
Luisa Rodriguez: The Matrix, basically.
Bob Fischer: Right, but you don’t actually accomplish any of the things you think you’re accomplishing or what have you. So you can put some pressure on it that way.
I think my own inclination… I’m somewhat divided here. Part of me says, we value ourselves as animals and we value ourselves as agents. And when we care about ourselves as animals, hedonism seems like it just gives the right results. And when we care about ourselves as agents, we value things that have very little to do with hedonic goods and bads, and have much more to do with things like knowledge and actual achievement and so on and so forth. So I have that impulse that I think fits with what you’re saying.
And then there’s a lot of me that just says, you know, this is all kind of a silly game, at the end of the day. Like, why does anything matter? It’s because it feels good or feels bad. And there’s a lot of intellectualising to try to make us feel like we’re more important and somehow different from other animals, when in fact we too are just chasing pleasures and trying to avoid pains, and we just dress it up in prettier language.
Luisa Rodriguez: Yeah, that makes it sound almost more agent-y than it feels to me. The thing that it feels like to me, when I really take seriously the idea that it’s all pains and pleasures, is that it’s just kind of built in that I think things are more special than pleasure. And that’s just an illusion I have, but not true, in fact.
Bob Fischer: Well, in some important sense it might not be an illusion: the right way to make your life better really might be to value things other than pleasure, right? So as a strategy for having your life go as well as possible, actually going after pleasure per se is probably not the right idea. So it’s really easy to explain why we would have these strong anti-hedonist intuitions, even if hedonism were true.
Luisa Rodriguez: Right.
Bob Fischer: Because it’s just such a mistake to just be chasing, I don’t know, highs from cocaine as your way of getting as much pleasure as possible — because in the long run, it’s way worse for you.
Luisa Rodriguez: Yeah, I guess it just always feels like, when I use the words pleasure and pain, cocaine is what comes to mind for pleasure, and taking a jackhammer to my foot is what comes to mind for pain. As soon as I stop using that language and have a broader definition of pleasure — like it feels good, in a very complicated way, to learn things and know things about the world — and that still is pleasure, even if it’s not like having the most literally, viscerally pleasurable experience I’ve ever had in some very narrow sense… I wonder if sometimes that language is part of what’s tripping me up.
Bob Fischer: Totally. Absolutely. I think so as well. Because really, what we’re talking about is this idea that positively valenced states are what’s good for you and negatively valenced states are what’s bad for you. And if you think about that as just pleasures in the narrow physical sense, then it does sound like it can’t possibly represent the richness of human experience. That being said, I should just flag that it is really hard to understand how all positively valenced states could fit on one scale. So this doesn’t universally cut in favour of hedonism; it’s a complication, too.
But then the other thing is just, it does really challenge our self-conception when you think about it. If you’re someone like me, I’m an academic and I get satisfaction from working on these really hard intellectual problems, and I like thinking about the welfare ranges we should assign to nonhuman animals, et cetera. And then my theory implies that actually I would get more welfare if I just played pool in the bar more. That thought doesn’t make you feel good, but it’s probably true if you’re thinking in purely prudential terms. And the case for doing what I actually do is just not purely prudential. Or it could be that I’m wired in a weird way, where the academic work does work out for me prudentially, but that says something weird and idiosyncratic about me.
So I think there’s a lot of ways you can start to put pressure on the intuitions, if you want to think more systematically about them.
Luisa Rodriguez: Right. Yeah, it is true that I have a very strong… I think it is a bias, where I’m like, it just icks me out to think that I could be living my best life by just living on MDMA. I’m like, no, life is about more than that. It’s about all sorts of complicated experiences, and those are different from that narrow definition.
Bob Fischer: Yeah, there are hedonists who think that. But I will say that when I have that reaction that you’re describing, Luisa, I often think actually that’s the right result. The reason I’m doing a lot of the stuff I’m doing is not for my own benefit. Prudentially, yes, give me the drugs — but I also care about my children’s happiness, and I care about my friends, and I want things to be better for animals, and blah, blah, blah. There’s a lot of other happiness at stake, and it turns out that doing things for those individuals almost always requires prudential hits for me. But they’re still worthwhile, and they’re worthwhile precisely because of the amount of good I can do for them.
So I think it’s the right result in many ways, if our theory says that there is this tension between my welfare and the welfare of others, and it’s an error if our theory of welfare says that it turns out that the thing that’s best for me also happens to be the thing that’s best for everyone else. Oh no, that means you played some game, right? You were just trying to engineer a happy result, as opposed to thinking that the world contains these tensions.
Luisa Rodriguez: Right. OK, so it sounds like we’re both pretty sympathetic to hedonism. But let’s say someone doesn’t buy hedonism. Does that mean that the results of the Moral Weight Project in general aren’t particularly relevant to how they decide how to spend their career and their money?
Bob Fischer: Not at all, because any theory of welfare is going to give some weight to hedonic considerations, so you're still going to learn something about what matters from this kind of project. The question is just: how much of the welfare range do you think the hedonic portion is? Do you think it's a lot or do you think it's a little? If you think it's a lot, then maybe you're learning a lot from this project; if you don't, you're learning less. But insofar as you're sympathetic to hedonism, learning about the hedonic differences is going to matter for your cause prioritisation.
Luisa Rodriguez: Yeah. You have this thought experiment that you argue shows that non-hedonic goods and bads can't contribute that much to an individual's total welfare range, which you call, as a shorthand, Tortured Tim. Can you walk me through it? It's a bit complicated, but I think it's pretty interesting and worth doing.
Bob Fischer: Sure. Well, the core idea is not that complicated. The way to think about it is: just imagine someone whose life is going as well as it can be in all the non-hedonic ways. They’ve got tonnes of friends, they’ve had lots of achievements, they know all sorts of important things, et cetera, et cetera. But they’re being tortured, and they’re being tortured to the point where they’re in as much pain as they can be.
So now we can ask this question: is that life, is that individual, are they well off on balance, or is their life net negative? Is it, on the whole, bad, in that moment? And if you say it’s on the whole bad, then you’re saying, you could have all those great non-hedonic goods — all the knowledge and achievements and everything else — and they would not be enough to outweigh the intensity of that pain. So that suggests that having all those non-hedonic goods isn’t actually more important, isn’t a larger portion of the welfare range than the hedonic portion — and that kind of caps how much good you can be, in principle, getting from all the non-hedonic stuff.
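To make the bounding argument concrete, here is one minimal way to formalise it; the additive split and the symbols below are our own illustration, not notation from the Moral Weight Project. Suppose an individual's momentary welfare decomposes as $W = H + N$, where $H$ is the hedonic component and $N$ the non-hedonic goods:

\[
H \in [h_{\min},\, h_{\max}], \qquad N \in [0,\, n_{\max}]
\]
\[
W_{\text{Tim}} \;=\; h_{\min} + n_{\max} \;<\; 0
\quad\Longrightarrow\quad
n_{\max} \;<\; |h_{\min}|
\]

So if you share the intuition that Tim's life is net bad in that moment, the non-hedonic portion of the welfare range is capped by the magnitude of the hedonic lows; it can't be, say, 100x larger, which is the bound Bob returns to below.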
Luisa Rodriguez: Yeah, I’ve never been tortured —
Bob Fischer: Good, good.
Luisa Rodriguez: — and part of me is like, yes, I buy that. I can’t imagine being tortured and still feeling like my love of knowledge and learning, and the fact that my family is out there and doing well, and I’ve got friends who care about me, I can’t really imagine having that outweigh the torture.
On the other hand, is it insane to think that there are people being tortured who would prefer not to die, because they value life deeply and inherently, or because the knowledge that their family and friends and the world are still out there makes existing worth it, despite immense pain? Maybe I'm just not thinking about enough pain. Maybe there's some extreme level of pain that, because I've never experienced torture, I'm not able to fully intuit.
Bob Fischer: Sure. So there are a couple of things to say. One is, as a direct response to your question: no, it’s not crazy. I mean, you can certainly imagine people who do have that view. I go to different universities and give talks about some of these issues, and I gave a talk about the Tortured Tim case at one university, and a guy just said, “This couldn’t be further from my view. It’s just obvious to me that Tim’s life is not just worth living, that it’s one of the better lives.”
Luisa Rodriguez: Whoa.
Bob Fischer: Because of all of these non-hedonic things that I had listed out. And in philosophy, we sometimes say there’s the incredulous stare, where you just sort of can’t believe that someone has this view. And I confess, I did give him the incredulous stare. I feel guilty about it now. But that was his view. So, all right, there are folks out there who think really differently about this.
And the second thing to say is that maybe there’s a problem in the thought experiment. Maybe it turns out that you can’t really have the objective goods when you’re being tortured. I mean, I don’t really think that’s that plausible, but you could imagine there being various philosophical moves that show that we’re missing something here in the details.
So maybe the takeaway is just: think about how valuable these non-hedonic goods are. Maybe you think they're much more valuable than I suggest in that thought experiment, but at least it provides a bound; at least it challenges you to think that, given your views and the way you want to think about things, you shouldn't say that the non-hedonic goods are like 100x more important than the hedonic stuff. And as long as you don't say that, you're still getting some information from our project about just how important chickens are.
Luisa Rodriguez: Yeah. When I try to make it really concrete, and actually step away from the thought experiment and think about chickens, and I’m like, OK, it does seem like chickens probably have less capacity for the range of experiences that I have.
Bob Fischer: Totally.
Luisa Rodriguez: They’re not getting to learn mind-blowing stuff about philosophy the way I am. I am like, OK, but if in fact chickens, while being raised in factory farms, are regularly having their limbs broken, are sometimes starving, as soon as I’m like, if that’s anything like what that would be like for me — where you don’t have to assume anything about whether there’s also stuff about knowledge going on for chickens; it’s just like, if their pain threshold is anything like my pain threshold, that alone is I think basically getting me to the point where I’m like, yes, if I’m living in those conditions, it doesn’t matter that much to me whether I also, in theory, have the capacity for deep philosophical reasoning. And maybe that’s not the whole story here, but that’s the intuition this is trying to push. Does that feel right to you?
Bob Fischer: Yeah, I think something like that is correct. I would just get there via a slightly different route, and would say something like: think about the experience of dealing with children, and what it’s like to watch them be injured or to suffer. It’s intensely gripping and powerful, and they have very few capacities of the kind that we’re describing, and yet that suffering seems extraordinarily morally important. And when I try to look past the species boundary and say, oh look, this is suffering, and it’s intense and it’s acute, it’s powerful. Does it seem like it matters? It just seems that yeah, clearly it does. Clearly it does.
Final question [02:19:11]
Luisa Rodriguez: Let’s do one final question. If you just had to completely change careers, and you somehow became totally indifferent to making the world a better place, what would be the most self-indulgent or hedonistic career for you to pursue instead?
Bob Fischer: Well, it may not sound terribly self-indulgent, but if I were going to abandon this kind of work entirely, I think what I would really like to do is physical therapy. I really like helping people move better, and figure out how to work out all the kinks in their backs and be able to perform all the kinds of movements that matter to them in the context of their lives. I think that’s what I would really enjoy doing, but instead I’m a philosopher.
Luisa Rodriguez: That’s really interesting, and really lovely. And hopefully we solve all these problems, and you get to do physical therapy.
Bob Fischer: One can only hope.
Luisa Rodriguez: Thank you so much for coming on today. My guest today has been Bob Fischer. Thank you again.
Bob Fischer: It has been great, Luisa. I really appreciate it.
Luisa’s outro [02:20:19]
Luisa Rodriguez: Before we wrap up, I wanted to flag that we’re hiring for multiple roles on our operations team, so if you’re excited about building and running the systems that help 80,000 Hours run effectively, we’d love to hear from you. You can find out more about the roles and apply at 80000hours.org — just click on the link that says “We’re hiring!” in the menu on the top right corner of the site.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.