Transcript
Cold open [00:00:00]
Jonathan Birch: In 1994 there barely was a science of consciousness, but that didn't stop a taskforce of experts assembled to rule on this question from extremely confidently proclaiming that these patients were not conscious.
There's the case of Kate Bainbridge, who fell into what was perceived by her doctors to be a vegetative state, and sadly was treated in a way that presumed no need for pain relief — when in fact she was experiencing what was happening to her, did require pain relief, and did want the things that were going on to be explained to her. That didn't happen. When she later recovered, she was able to write this quite harrowing testimony of what it had actually been like.
There’s these celebrated cases from Adrian Owen’s group, where patients presumed vegetative have been put into fMRI scanners, they ask them yes/no questions, “If the answer is yes, imagine playing tennis. If the answer is no, imagine finding a way around your house.” These generate very different fMRI signatures in healthy subjects, and they found in some of these patients the same signatures that they found in the healthy subjects, giving clear yes/no answers to their questions.
That’s not as clear-cut as someone actually telling you after recovering, but it’s pretty clear-cut.
Luisa’s intro [00:01:20]
Luisa Rodriguez: Hi listeners, this is Luisa Rodriguez, one of the hosts of The 80,000 Hours Podcast.
In today’s interview, philosopher Jonathan Birch walks me through some of the “edge cases” of sentience he explored for his new book — where sentience is the capacity to feel pain, pleasure, or emotions.
The major throughline of the episode is that until very recently we’ve been acting as if we’re sure that beings like coma patients, foetuses, and octopuses are incapable of having subjective experiences — but Jonathan thinks that that certainty is completely unjustified.
He shares some chilling tales about overconfident policies that probably caused significant suffering for decades — including one that meant that infants were having major surgeries without anaesthesia until the 1980s. We also talk through how policymakers can act ethically given real uncertainty, as well as:
- Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.
- How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
- Why Jonathan is so excited about citizens’ assemblies.
- And plenty more.
Without further ado, it is my sincere pleasure to bring you Jonathan Birch.
The interview begins [00:03:04]
Luisa Rodriguez: Today I’m speaking with Dr Jonathan Birch. Jonathan is a philosophy professor at the London School of Economics, where he specialises in the philosophy of the biological sciences. He’s also the author of The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI — which was released on August 15, and is what we’re here to talk about today. [Check out the free PDF version!] Thanks for coming on the podcast, Jonathan.
Jonathan Birch: Hi, Luisa. Thanks very much for inviting me.
Why does sentience matter? [00:03:31]
Luisa Rodriguez: So, to start off this discussion, why do questions of sentience matter so much?
Jonathan Birch: I think Jeremy Bentham put it very well in his famous footnote where he says, “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” in relation to other animals. Introducing to a wide audience this idea that sentience is the basis of moral significance, moral standing, moral status: that it doesn’t matter if you don’t have language, you don’t have rationality; if you can feel pain or pleasure or any valenced experience — anything that feels bad or feels good — then you’re a proper object of moral concern. And I think that’s right.
Luisa Rodriguez: Yeah. It feels like a lot of people do have the intuition that intelligence matters quite a lot, so when thinking about which beings kind of warrant moral consideration, the fact that some nonhuman animals and other beings might seem less intelligent is a big barrier for them in seeing them as morally relevant. How do you feel about this?
Jonathan Birch: I don’t think I’ve ever really seen it that way. I think that there is a link between intelligence and sentience in that intelligent animals have more ways they can demonstrate their sentience to us — so they’re methodologically linked, as it were, in that the behavioural repertoire of an octopus, for example, is much richer than the behavioural repertoire of a snail. But personally, I don’t think that translates into a huge moral difference. Personally, I’m quite inclined towards a sentientist view, as it were, that says sentience gets you into the club of beings that matter morally. And there’s not really a VIP tier, you know? There’s not really a second tier you can get by crossing some threshold of intelligence.
Luisa Rodriguez: Yeah, I like the way you put that.
Inescapable uncertainty about other minds [00:05:43]
Luisa Rodriguez: In the book you write, “We start in a position of horrible, disorienting, apparently inescapable uncertainty about other minds, and then the uncertainty is still there at the end. Sorry, it’s inescapable.” Why do you think that we can’t ever be more certain about the sentience of most other minds?
Jonathan Birch: Yes. My slogan is “no magic tricks”: I’m not offering magic tricks in the book that will resolve our uncertainty about sentience. And I say this as a point of contrast with a lot of books that are written about consciousness and sentience, where there is a lot of over-promising, and there is sometimes a bait-and-switch that goes on where they say they’re going to resolve all your uncertainties, just settle these issues — then you get to the end and you realise what they’ve given you is a very speculative theory that of course doesn’t resolve the issue at all, because other speculations are also possible.
So I wanted to write a very different kind of book. I wanted to write, in a sense, the antidote to that other kind of book, where I’m just upfront from the beginning about huge uncertainty. And it really is huge, because you’ve got very basic metaphysical disagreements about how the mind relates to the body. Then in addition, you’ve got these neuroscientific disagreements about which brain mechanisms are most important and why. And you have very significant ethical disagreement too, about what follows morally when a realistic possibility of sentience is acknowledged.
And what we need to be doing is developing frameworks for managing that uncertainty, rather than pretending that we’re able to resolve it.
Luisa Rodriguez: Yeah, I feel like this approach to the book was extremely useful for me, as a person who for a long time has been kind of desperate for more certainty about the sentience of other minds. I think that’s because, until reading your approach — which we’ll get into, about how to make really concrete decisions in the world, in practical cases, about how to treat other beings that might or might not be sentient — until reading that, I was like, “If we’re not certain, then we have no idea what to do. And that’s terrifying and disturbing and troubling. So I guess I’m just going to have to try to figure out what our best guess is and do whatever that theory says.”
But you have an actual satisfying approach that’s very practical and that allows for all of that uncertainty.
Jonathan Birch: Yes. I mean, I don’t like this situation either. I would like there to be a theory about which we could be certain. But of course I want to resist that line of reasoning you were just describing: where acknowledgement of uncertainty in the domain of theory leads to despair in the domain of action. I think it’s actually entirely possible to come to agreements about sensible actions, despite our continuing disagreement at the level of theory. I think there are lots of sensible ways in which we can err on the side of caution.
And that idea is not new; it’s an idea lots of people come to when they think about these cases at the edge of sentience. But I think another key idea in the book is that we can’t simply say, “Let’s err on the side of caution” and leave it there — because it’s a very vague slogan, and it can be used to describe anything from the tiniest gesture to the most dramatic changes to our way of life.
So what we need is a precautionary framework that will help us think through these cases, and help us develop responses that are proportionate to the risks that we’ve identified.
Luisa Rodriguez: Yeah. I think that concept of proportionality was part of what felt so huge. The book doesn't just describe it: it takes a bunch of concrete cases at the edge of sentience, looks at what we know and the evidence for those particular cases, and then tries to actually come up with proportionate proposals for how to set policy — at the high level and also at the organisational level. And I found that a huge unlock in learning to accept that this uncertainty is just real, and that we can actually do things despite it. So thank you for that.
The “zone of reasonable disagreement” in sentience research [00:10:31]
Luisa Rodriguez: Moving on a little bit, there's quite a range of plausible views about how sentience arises, but it seems incredibly important to have at least some understanding of it so that we can work out how to treat beings that might be sentient, given that it's maybe the key to their moral significance.
So there’s differences in the views at the deep philosophical level, and then also at the empirical level — so questions about which parts of the brain may or may not contribute to or may not be required for something like sentience. Do you think there are any kinds of key philosophical views of consciousness or sentience that can be ruled out of the zone of reasonable disagreement?
Jonathan Birch: Ruled out? That’s very difficult. There’s a very wide “zone of reasonable disagreement,” as I call it in the book. I think it’s important to realise that there are limits there to do with what is a reasonable position to hold. I introduce in the book some caricatures of ways of being unreasonable about sentience.
I think one of the main ways in which people can be unreasonable is by recommending a course of action where there is no evidence at all that favours that course of action over relevant alternatives.
In the book I talk about someone who says, “Never use antibacterial soap on a Tuesday: it causes unbearable pain to the bacteria. Only use it on Wednesdays when it’s fine.” And this person is being unreasonable. We know that not because we have a theory of sentience that decisively rules out the idea of sentience in bacteria — we’re not in a position to speak with certainty about such things — but because that person has not offered any positive evidence in favour of the course of action they’re recommending over relevant alternatives.
Also, I think we can set aside dogmatism. It’s another huge problem. It’s not tied to any specific view, but the problem is rather when somebody is so sure that their preferred theory is correct, that they won’t entertain the possibility of any alternatives.
For example, they might be just sure that consciousness requires the prefrontal cortex at the front of the brain. It’s a realistic possibility that it does, but there’s also a realistic possibility that it doesn’t — and that other, more evolutionarily ancient brain regions are enough. So the unreasonable thing is not to hold the view that the prefrontal cortex matters; the unreasonable thing is to be so sure that you don’t take seriously the possibility that you might be wrong, and that another view in that space might be correct.
Luisa Rodriguez: Yeah. And I want to ask you more about how exactly to do that, but before I do, I want to ask you a bit about your own preferred views. I’m actually going to focus on the neuroscientific views of consciousness and skip over the more philosophical theories of consciousness for today. Those include views like dualism, which is the idea that the mind and body are distinct and separate entities; another is physicalism, which is the idea that everything, including the mind, is ultimately physical and can be explained in terms of physical processes. And if anyone’s interested in those, I can highly recommend our interview with David Chalmers on the nature and ethics of consciousness.
But if you'll allow me to press you on the neuroscience side, or the empirical side: do you have preferred views, or views in which you place most of your credence, about which of those neuroscientific theories of consciousness are most likely to be right?
Jonathan Birch: There’s no individual theory that I would give more than 50% chance of being correct. It seems entirely possible to me that all of the current theories we have now will quite quickly be replaced by successors. So there’s no one theory that I’d bet a lot of money on.
I do think it’s very important to take seriously these subcortical theories that in the past have often been dismissed — and I think wrongfully dismissed, in that they’re describing realistic possibilities. You don’t have to think they’re likely, but their consequences for the edge of sentience are quite significant — because if these subcortical mechanisms in the midbrain are enough to generate simple kinds of conscious experience, it really leads very quickly to the idea that animals like fishes, for example, that have versions of those mechanisms, might meet the requirements. Whereas that seems much less likely for people who are sympathetic to the cortex-centric theories.
Luisa Rodriguez: Actually, that feels really helpful to get concrete. Are there other clear lines where midbrain-centric theories and the cortex-centric theories point to different beliefs about sentience in nonhuman animals?
Jonathan Birch: Oh, it is a huge axis of disagreement. I think those midbrain-centric theories are also very sympathetic to the idea of subjective experience in invertebrates — like bees, for example. Not because bees have exactly the same brain mechanisms — they don’t — but because they seem to have convergently evolved very functionally similar mechanisms: the central complex in bees resembles functionally the midbrain in vertebrates. So there’s that case.
But also this axis of disagreement has consequences for a wide range of the cases I discuss in the book. If you think of the development of foetuses in the womb, those midbrain mechanisms are online earlier than cortical mechanisms. If you think about patients with a disorder of consciousness after brain injury, those midbrain mechanisms are often functional, even though large parts of the cortex have been taken out. So recognising these theories as describing realistic possibilities has very wide-ranging implications for what it means to err on the side of caution across all of these cases.
Disorders of consciousness: comas and minimally conscious states [00:17:06]
Luisa Rodriguez: So a big part of the book then explores these candidates for sentience: beings that we think could plausibly be sentient, but where, because we just know so little about exactly what sentience even is, and how and to what degree different beings feel it, there's not enough clear understanding or evidence to be sure what it is like to be them.
You look at some familiar candidates, including animals of different classes, as well as AI. And then you also look at some cases that I was really unfamiliar with, including people with disorders of consciousness — so people in comas, for example — and then also “neural organoids,” which I’d never heard of.
So if you’re happy to, I would love to start with disorders of consciousness. Can you give an overview of these disorders?
Jonathan Birch: A “disorder of consciousness” is a clinical term for what happens when consciousness is disturbed, following a brain injury, typically. It’s something that can happen to any of us at any point in our lives; it can be due to an accident or due to a stroke.
People often talk about it in terms of levels. I'm not sure whether "level" is the right concept at all, but it is the traditional way of thinking about it. And of course, each level has its own detailed clinical criteria, which I can't possibly summarise here. But people think about a coma on TV, right? The subject just wakes up and they're fine. There's a lot of confusion, a lot of thinking of the whole category of disorders of consciousness as just being coma. Coma is really the lowest "level," so to speak. Typically, it's very much like being under general anaesthesia.

What typically happens is that people emerge from coma, and if the brain injury is severe, they don't just go straight back to ordinary wakeful consciousness. There are these in-between states, as it were. Some argue they resemble the states one passes through very briefly when emerging from general anaesthesia. Of course, general anaesthesia is a marvel: it's routine, and the progression back to a normal conscious state is very reliable. We can think of these other states as ways of getting stuck along the way, as it were.
And one is traditionally called the “vegetative state.” Here, too, I don’t like the term — I’m a critic of the taxonomy I’m describing — but it’s these cases where sleep/wake cycles reemerge, but obvious signs of voluntary behaviour do not reemerge. So you do get some kinds of responsiveness — things like grunting, groaning, grimacing — but they’re not considered clear evidence of voluntary behaviour.
Then, if you emerge from that, there’s also this clinical construct of the “minimally conscious state,” where it’s now accepted that some of the things you’re doing are signs of consciousness. So it’s a big group of patients. The diagnostic categories, as you can imagine, are unbelievably uncertain. And it’s an obvious case where one needs to err on the side of caution, but historically that’s not always happened.
Luisa Rodriguez: Yeah. A fact I found really unsettling from the book is that different clinicians, when trying to diagnose whether the person is in a coma, a “vegetative state,” or these different levels of minimally conscious states, they have terrible inter-clinician reliability. I’m not actually sure what the term is, but something like some high percentage of diagnoses by one clinician are then diagnosed differently by another. And that seems very unsettling.
Jonathan Birch: Yes. In cases where specialists have come in to review the diagnoses initially made, they’ve ended up estimating the misdiagnosis rate at around 40%.
Luisa Rodriguez: Right. So clearly, even just distinguishing the observable characteristics of these minimal states of consciousness is extremely difficult. What do we know about what these states are like internally? Whether consciousness is switched off, or dulled, or intact but muted?
Jonathan Birch: Well, the problem here is this taxonomy of vegetative state, minimally conscious state, with the very stark implication that the vegetative patients are not even minimally conscious. That’s obviously implied. And there’s no evidence for that, really. It’s very problematic.
You’ve got views in the zone of reasonable disagreement — like Panksepp’s view, for example, and Damasio’s — where the midbrain mechanisms, the subcortical mechanisms that are still in place and that are regulating these sleep/wake cycles are also enough for simple kinds of conscious experience — including, for Panksepp at least, basic emotions. It’s then very problematic to say that just because there is an absence of outward voluntary behaviour, the patient is experiencing nothing — and yet that has been the standard clinical practice for a long time. I think it’s starting to change, but slowly.
Luisa Rodriguez: Yeah, I’m interested in a little bit more of the history. Has it been the case that, historically, views on whether these patients are at all experiencing things good and bad is based entirely on their voluntary behaviour?
Jonathan Birch: Of course it’s long been a source of debate. But then that history is very problematic. The book talks about a major taskforce report from the 1990s that was very influential in shaping clinical practice, that just very overconfidently states that pain depends on cortical mechanisms that are clearly inactive in these patients, so they can’t experience any pain.
You know, it shocked me, actually. It shocked me to think that in 1994, when there barely was a science of consciousness — and you could argue that 30 years later, maybe the science hasn't progressed as much as we hoped it would, but in the mid-'90s, it barely existed — that didn't stop a taskforce of experts assembled to rule on this question from extremely confidently proclaiming that these patients were not conscious.
And one has to think about why this is, and about the issue of inductive risk, as philosophers of science call it: where you’re moving from uncertain evidence to a pronouncement — that is, an action where implicitly you’re valuing possible consequences in certain ways. Presumably, the people making that statement feared the consequences of it becoming accepted that the vegetative patients might be feeling things. To me, that’s the wrong way to think about inductive risk in this setting. There’s strong reasons to err on the side of caution, and hopefully that is what we’re now starting to see from clinicians in this area.
Luisa Rodriguez: Yeah, I’m interested in understanding how the field has changed, but I have the sense that the impetus for the field changing has actually been concrete cases where people who have experienced some kind of disorder of consciousness have recovered and revealed that they were experiencing things, sometimes suffering. Can you talk about a case like that?
Jonathan Birch: That’s right. I tend to think that is the best evidence that we can get, that they were indeed experiencing something.
The case of Kate Bainbridge that I discuss in the book is a case where Kate fell into what was perceived by her doctors to be a vegetative state, and sadly was treated in a way that presumed no need for pain relief — when in fact she was experiencing what was happening to her, did require pain relief, did want things to be explained to her that were going on. That didn’t happen. When she later recovered, she was able to write this quite harrowing testimony of what it had actually been like for her. So in these cases, there’s not much room for doubt. They were indeed experiencing what they report having experienced.
In other cases, you get a little more room for doubt. There’s these celebrated cases from Adrian Owen’s group, where patients presumed vegetative have been put into fMRI scanners, and they’ve come up with this protocol where they ask them yes/no questions, and they say, “If the answer is yes, imagine playing tennis. If the answer is no, imagine finding a way around your house.” These generate very different fMRI signatures in healthy subjects, and they found in some of these patients the same signatures that they found in the healthy subjects, giving clear yes/no answers to their questions.
That’s not as clear-cut as someone actually telling you after recovering, but it’s pretty clear-cut. So I think this has got the attention of the medical community, and it is starting to filter through to changes in clinical practice.
Luisa Rodriguez: I mean, that, to me, is pretty mind-blowing. What is the best case that that isn't an indicator of something going on, something that it is like to be that person, having thoughts like yes and no?
Jonathan Birch: I think it is pretty good evidence. In a way, you’re probably not going to get better evidence than that from a patient who is outwardly unresponsive. The danger is that techniques like that might have a high rate of false negatives, because there’s so many reasons why a patient experiencing something might not have the cognitive ability to perform such a sophisticated task. Their capacity for comprehending language might have been knocked out, for example.
So you can think of these cases from Owen’s group as showing us there’s a real phenomenon here, a phenomenon that they call “covert consciousness” — or more cautiously, they call it “cognitive motor dissociation.” It establishes the reality of the phenomenon, but in no way gives us an upper bound on how frequent it is. And we’ve got to be open to the fact that patients who never display any outward cognitive behaviour, even in the fMRI scanner, might — if a view like Panksepp’s is correct — still be having emotional experiences.
Luisa Rodriguez: Yeah, it should make us much more open to the idea that patients with disorders of consciousness are still experiencing something. But it doesn't seem like a way to distinguish the patients who are experiencing things from those who aren't.
Jonathan Birch: It can give you a pretty compelling yes, but at the risk of false negatives. So there’s always this balance of false positives and false negatives. And I think quite a widespread view in this area is that we shouldn’t worry as much about the false positives as we do about the false negatives: if we end up over-attributing consciousness in some of these cases where it’s not really there, it does have consequences, because maybe it instils false hope in the family that wasn’t justified. It’s not without consequences, but the consequences are of a perhaps less severe and more controllable kind than in those cases where we miss consciousness that is actually there.
Luisa Rodriguez: Yeah. So to just paint a picture, it's things like: invasive clinical procedures happening on a patient without pain relief; procedures happening without any explanation that they're going to happen or of what's being done; and no explanation of even what's happened to the patient.
Jonathan Birch: Yeah, and you shouldn’t need fMRI evidence to change this. I mean, that kind of low-cost erring on the side of caution — precautions that cost you very little, but it might make a tremendous difference to the patient to have had it explained to them what was happening — these are obviously proportionate, I would say.
Luisa Rodriguez: Yeah, yeah. And is there any pushback in the medical field toward these lower-cost interventions to try to be more cautious in these cases?
Jonathan Birch: It’s sometimes quite hard to tell what is going on around the world in how this patient group is treated.
In the UK, the most recent clinical guidelines I think are an improvement over what came before, because they do have an approach to pain management that says, “Always assume the possibility of pain unless there’s clear evidence to the contrary.” And in a way, that “clear evidence to the contrary” is a bit of a concession to the traditional view. But I would say there’s never going to be clear evidence to the contrary, and so that precautionary demand always applies. That’s how I’d read that.
So I think the UK guidelines have been dragged from a position of assuming these so-called “vegetative” patients were not conscious, to an approach that is more precautionary, and that’s good. But I don’t think it goes far enough, and I don’t think it’s become standard practice around the world either.
Luisa Rodriguez: How much farther would you like it to go?
Jonathan Birch: I would really like to see the end of these categories of vegetative and minimally conscious, particularly in therapeutic and legal settings.
People sometimes say maybe they have some value for prognosis. Even there, I suspect finer-grained categories will be more useful for prognosis as the science develops. I think in therapy, you have to be tailoring care to the patient’s individual needs, rather than sorting them into this “minimally conscious” versus “not even minimally conscious” bucket.
And I think in legal contexts, it’s an extremely problematic distinction. In some jurisdictions, you get this view that if the patient is minimally conscious, there’s no way treatment can be withdrawn, even if it’s in their best interests. And that’s just not good for anyone. I mean, reading about these cases is quite sad. I don’t personally know anyone with a disorder of consciousness, and it’s still very affecting to read how serious the situation is. Of course, for people personally affected, it must be even more difficult.
Luisa Rodriguez: You pointed out one other downside I hadn't thought much about, which is the potential legal consequences of who you can and can't take off life support. Maybe just to flesh out why that seems really troubling: I had kind of only really thought about this from the perspective of a family member of someone in something like a coma or another minimally conscious state.
Jonathan Birch: Coma is usually an acute state, so usually a person emerges from coma within two weeks at most. And they emerge into one of these other states. And society doesn't talk about it, you know? On TV, it's always coma; you don't see people with other disorders of consciousness depicted in the media in any way — with very rare exceptions, like The Diving Bell and the Butterfly, which was a wonderful film about locked-in syndrome from the 2000s. But those exceptions are so rare that most ordinary people are able to go through life never thinking about this patient group. I'd like to see that change.
Luisa Rodriguez: Yeah, yeah. I hope this will do a tiny bit to help. But to also just shift the perspective people might have on why it's important to think about which cases should and shouldn't allow for withdrawing life support: it hadn't occurred to me that it could be torturous to exist in one of these states where I don't have motor control of my body, but where I am having conscious and sentient experiences. And that actually is a reason to want to explore what it is like to be these patients; is it suffering-heavy? And also, how can we know that, so that we can withdraw life support if there's not much hope of recovery?
Jonathan Birch: It's often very hard to tell, and I think values differ enormously as well. For me, and I think for many people, if we reflect on whether we would want to be kept alive in a state like that for decades, we come to the view that we wouldn't. And in a way it's not because we think we'll be in pain all the time, although of course there can be pain: I think it's very likely there can be pain just from the prolonged body posture in a lying position, and it's not going to be wonderfully comfortable. I would be worried about that, but I would also be worried about the effects on my family. It's well documented that families go through terrible, never-ending rollercoasters of emotions in these cases.
I suppose that’s my personal view. And then you obviously get people who have entirely the opposite view: that human life is sacrosanct, and one needs to let nature take its course. There’s just a wide variety of views on these questions.
But I think the current situation helps no one. A situation where, in many jurisdictions, you get protracted legal battles — even in cases where it’s well known that the patient held these values, and would not want to be kept alive in those conditions. No one’s talking really about those cases where the patient has expressed a strong wish to be kept alive, but in those cases where it’s totally unambiguous that they would not want to be kept alive in this state, there’s often no provision for actually acting in their best interests.
Luisa Rodriguez: Before we move on: are there any other studies or investigations happening to better understand what people with disorders of consciousness may or may not be experiencing? Or how to engage with them and get their preferences, if they're able to express them?
Jonathan Birch: Well, there’s loads of research, yes. It’s a very active area of research. In researching the book, I realised there are over 100 recent studies using EEG-based methods alone. This is considered promising because it can be done at the bedside, potentially — unlike fMRI, which requires moving the patient, which is generally not a good idea in the acute phase of the injury. So it’s a very vigorous area of research, and that’s great.
But let’s not get into a situation where we say, unless this or that EEG signature or fMRI signature has been found, we’re not going to start treating this patient as if they felt anything. Let’s err on the side of caution from the beginning, even in cases where experimental EEG techniques have not been used. Then it lowers the stakes of those techniques quite a lot, because the patient’s fate is no longer so dependent on what they reveal.
Luisa Rodriguez: Right. Are there any other practical changes you’d like to see in this field?
Jonathan Birch: Well, the book is full of proposals, often quite specific proposals. I think in the case of disorders of consciousness, there’s a lot we need to be doing differently. As I’ve said, in the context of pain management, assume that the patient may be capable of feeling pain. To me, that’s an obvious one.
In addition, I think there’s got to be serious discussion in society about introducing humane ways of ending life in cases where it is clearly in accordance with the values of the patient and their family. It’s really bad: a situation where the only legally permitted method is the withdrawal of assisted nutrition and hydration — a method that would not be considered acceptable for any animal in a slaughterhouse. It’s only for humans that this is legally allowed. It should not be that way.
Luisa Rodriguez: Yeah, that actually reminds me of how shocked I was. It’s not that I didn’t know it, but to kind of relearn and focus on this fact that in humans, the only way to end someone’s life is to withdraw food and water and have the person starve to death and die of dehydration.
Jonathan Birch: It’s extraordinary, right? It’s unbelievable. There are some studies by Kitzinger and Kitzinger that I describe in the book, involving interviews with the patients’ families, and those families just clearly make the case. And for me, we need to be involving the public in these discussions. I think when we do involve the public, it’s very beneficial — because a lot of the time, they just cut through the nonsense, and it becomes clear that the case for humane ways of ending life for this patient group, when it’s in the patients’ best interests, is not a complicated one.
Luisa Rodriguez: Yeah. It is just this incredibly bizarre case to me, where it’s not that the field has decided we can never end human life, but because of our squeamishness, we haven’t gone far enough to end it humanely.
Jonathan Birch: I mean, it’s the old doing/allowing thing.
Luisa Rodriguez: Exactly.
Jonathan Birch: Yeah, utilitarians call it squeamishness, and here we see the human cost of an excessive attachment to that distinction. This idea that if you’re merely allowing death, this is fundamentally different from hastening it — but it can be far less humane.
Luisa Rodriguez: It seems like when people are really close up to this issue, those kinds of value judgements that lots of people are interested in and can debate in a seminar room disappear for non-philosophers — even though they might have held some views in advance, like: yes, allowing someone to die is more OK than ending their life deliberately. But the fact that, for so many of them, all of that goes out the window — and they’re just like, “No, it’s horrible to watch my loved one die of starvation and dehydration” — feels really compelling.
Jonathan Birch: It’s a political failure to do anything about this, an ongoing policy failure. And I wanted to put this case first in the book, because for me, it illustrates my approach to this whole family of cases at the edge of sentience, and it puts the reader in the right kind of frame of mind.
Firstly, you see that these issues are incredibly serious: these are matters of life and death. Seminar room speculation sometimes has to be put to one side.
Secondly, listen to the people at the centre of these cases. In the case of disorders of consciousness, some of them can talk to us following their recovery. In other cases, the family can talk to us. We have a lot to learn from this, and it can cut through some of the rather unhelpful kinds of argument that you get when people — rather than looking at the real details of how these cases unfold on the ground — simply treat them as a kind of thought experiment. You don’t have to engage with this literature for very long to see why terms like “vegetative state” are problematic.
Foetuses and the cautionary tale of newborn pain [00:43:23]
Luisa Rodriguez: All very well said. Let’s leave that case for now, and turn to another edge case of sentience in humans: foetuses. You start this section of the book with what you call “the cautionary tale of newborn pain.” Can you talk me through that case?
Jonathan Birch: It’s another case I found unbelievable: in the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. There was a public campaign led by someone called Jill Lawson, whose baby son had been operated on in this way and had died.
And at the same time, evidence was being gathered to bear on the question by some pretty courageous scientists, I would say. They got very heavily attacked for doing this work, but they knew evidence was needed to change clinical practice. And they showed that when this protocol was followed, there were massive stress responses in the baby — stress responses that reduce the chances of survival and lead to long-term developmental damage. So as soon as they looked for evidence, the evidence showed that this practice was completely indefensible, and then the clinical practice was changed.
So, in a way, people don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous.
Luisa Rodriguez: Yeah, it really does. I’m sure that had I lived in a different time, I’d at least have been much more susceptible to this particular mistake. But from where I’m standing now, it’s impossible for me to imagine thinking that newborns don’t feel pain, and therefore you can do massively invasive surgery on them without anaesthetic.
Jonathan Birch: It’s a hard one to believe, isn’t it? Of course, the consideration was sometimes made that anaesthesia has risks — and of course it does, but operating without anaesthesia also has risks. So there was real naivete about how the surgeons here were thinking about risk. And it’s what philosophers of science sometimes call the “epistemology of ignorance”: they were worried about the risks of anaesthesia, which it is their job to worry about, so they just neglected the risks on the other side. That’s the truly unbelievable part.
Luisa Rodriguez: And just in case anyone missed it: it was possible to do the surgeries because they were using some kind of paralytic, so the babies weren’t moving. But when these studies were done, they found these enormous stress responses. Why did those scientists get so much backlash?
Jonathan Birch: Well, it was initially reported as “scientists are operating on babies without anaesthetic.”
Luisa Rodriguez: Oh, I see.
Jonathan Birch: “These sick scientists.” And then the scientists were saying, “Whoa, our aim here is to test the consequences of a routine clinical practice.” And yeah, they had to, because of the extraordinary situation they were in.
Luisa Rodriguez: So the clinical updates that have happened since: my sense is that now it is standard procedure to give newborns anaesthetic during surgeries, and that the benefits outweigh the risks. You argue that that wasn’t inevitable. What’s the case for that?
Jonathan Birch: I think the public outcry also mattered. Clinical norms are very hard to shift. If it was really just these two people, Anand and Hickey, against the medical establishment, that’s not really how change happens. You know, we talk about theories of change sometimes — and “just get the evidence and take it to the establishment” is not a good theory of change. I think in this case, the fact that there was at the same time a powerful public campaign going on based on these horrible stories, that is why clinical practice got changed very quickly.
Luisa Rodriguez: So there’s a lesson there. I mean, there’s really nothing to say, but that that’s horrific, and important to know it happened, because we may be doing it again — which is a segue to the question of foetuses.
You argue that, presumably, if newborns can feel pain, it’s not like newborns suddenly start to feel pain as soon as they are born and exit the womb. Before we get into evidence about foetal sentience, my first reaction when you raise this issue in the book was anxiety about what this was going to mean for arguments about abortion. But you argue that we should really separate these two issues: the question of foetal sentience and the question of whether abortion should be legal and acceptable. Why is that?
Jonathan Birch: Yes, my view is that these issues should be separated. I recognise that opinions might vary on that. I think that abortion generates these extremely polarised debates, famously polarised in America.
I think there’s much less polarisation on this issue in the UK, as far as I can see — and I think it’s because there’s something close to consensus in the UK on what we are trying to do here with this right to access abortion. Why is it such an important right? It’s because there is something very seriously bad about the idea of a forced pregnancy. It’s just one of the worst kinds of coercion you could imagine, and a very serious violation of the bodily autonomy of the woman. In the philosophical literature, Judith Jarvis Thomson made that argument a long time ago.
I think in the UK it’s really got traction, and I think it’s probably the right view. Why is the right to access abortion important? Not because the foetus is not sentient. It’d be kind of strange to assume that foetuses were not sentient, I think. And I don’t think that’s what is at the basis of the claim to the right to access abortion, and I don’t think it should be the basis of that claim.
What I fear is a situation where people who want to argue for this very important right end up tying it to the question of sentience, and then get ambushed by the evidence — because the evidence may, in the future, drag them to ever-earlier time points when it becomes plausible that the foetus is sentient, and then the argument has the rug pulled out from under it. So I don’t think that’s the kind of argument that we should be making when defending this right.
Luisa Rodriguez: So concretely, it might be me saying, “I don’t think a three-and-a-half-month-old foetus is sentient, and so I think that women should be able to abort them before that date.” And if it turns out that the evidence points toward something like sentience emerging at earlier and earlier dates, then this right to abortion will be really seriously undermined.
Jonathan Birch: It’s quite a mistake, yeah. It’s a comparable mistake to thinking that the time limit should be tied to viability. This doesn’t work, because medical technology is improving all the time, so the point at which a foetus becomes viable is getting earlier and earlier. So if that’s your moral case for why this right is important, that case is going to get eroded. Similarly, if the case is based on sentience, and on this claim that a foetus is not sentient, there’s every possibility that evidence will erode that case as well.
So I think it’s important to recognise that that’s not the case. The real basis of this right is bodily autonomy. So I have my own views, and the book doesn’t hide those views.
But also I think we need to be thinking about the sorts of processes through which we might be able to resolve these deep value conflicts. And sometimes a debate is so polarised that that’s impossible. But in a case where it’s not — and I think Ireland was in this position — it’s possible to use citizens’ assemblies to make real progress on these issues. What happened in Ireland is a great example, I think: through the process of a citizens’ assembly, they were able to come up with a policy that commanded agreement in a referendum.
Luisa Rodriguez: Yeah, nice. And I want to come back to citizens’ assemblies in a bit. But for now, concretely, how do you think we should think about the case of foetal sentience, holding issues of abortion and women’s bodily autonomy aside?
Jonathan Birch: I think that the way the evidence has been going is towards a realistic possibility of sentience in the foetus from around the beginning of the second trimester. So, particularly if one takes seriously midbrain-centric theories — like those of Merker and Panksepp — those subcortical mechanisms are very plausibly active pretty early, from around that time.
And that’s just one of the ways to make the case for sentience candidature here. I think my general view is that sentience candidature is something that goes with the earliest evidence-based estimate — so the earliest estimate for which there is a position in the zone of reasonable disagreement with some serious empirical evidence behind it. It doesn’t have to be likely, certain, known — but it has to be a realistic possibility, and I think the realistic possibility is there from around the second trimester.
Luisa Rodriguez: And what is the number one policy you’d like to see changed?
Jonathan Birch: I suppose what I would like to see is more discussion of the relevant clinical norms in two contexts.
One of which is the context of therapy, where surgery is performed on a foetus to treat a condition. Because of advances in medical technology, this is happening more and more. And it’s great that it’s happening more and more, but I think we need public involvement in discussions of general clinical norms around when pain relief is going to be used. And we need to be aware that foetal analgesia exists, it can be administered, and in many of these cases should be administered. Of course, specific details need to be left to experts.
Then the second issue is around honesty in relation to the possibility of foetal sentience in the context of abortion. I don’t think this information can be hidden from people, but that opens up some very delicate issues about how it is appropriate to communicate that information. And here too, I would like to see the public involved. Of course it’s not really the whole of the public that’s relevant here — it’s women. I’m never going to be pregnant. It’s the part of the public that may experience pregnancy that should be brought into these discussions about how to communicate that information.
Luisa Rodriguez: Those make a lot of sense to me, at a minimum.
Neural organoids [00:55:49]
Luisa Rodriguez: OK, a third human edge case you consider is neural organoids. To start us off, what exactly is a neural organoid?
Jonathan Birch: This is another very fast-moving area of emerging technology. Basically, it uses human stem cells that are induced to form neural tissue. The aim is to produce a 3D model of some brain region, or in some cases a whole developing brain.
Luisa Rodriguez: And what’s the case for creating them?
Jonathan Birch: I think it’s a very exciting area of research. You can make organoids for any organ, really. In a way, it’s a potential replacement for animal research. If you ask what we do now, usually people do research on whole animals, which are undeniably sentient. And here we have a potential way to gain insight into the human version of the organ. It could be a better model, and it’s much less likely to be sentient if it’s something like a kidney organoid or a stomach organoid. It’s really only when we’re looking at the case of the brain and neural organoids that the possibility of sentience starts to reemerge.
Luisa Rodriguez: Yeah. And intuitively, the case for sentience does feel like it immediately lands for me. If you are trying to make an organoid that is enough like a brain that we can learn about brains, it doesn’t seem totally outrageous that it would be a sentience candidate. What is the evidence that we have so far?
Jonathan Birch: It’s a complicated picture. I think there are reasons to be quite sceptical about organoids as they are now, but the technology is moving so fast, there’s always a risk of being ambushed by some new development. At present, it really doesn’t seem like there are clear sleep/wake cycles; it doesn’t seem like those brainstem or midbrain structures that regulate sleep/wake cycles — and that are so important on the Merker/Panksepp view — are in place.
But there are reasons to be worried. For me, the main reason to be worried was a study from 2019 that allowed organoids to grow for about a year, I think, and compared them to the brains of preterm infants using EEG. So they used EEG data from the preterm infants to train a model, and then they used that model to try and guess the age of the organoid from its EEG data, and the model performed better than chance.
Luisa Rodriguez: Wow.
Jonathan Birch: So it’s hard to interpret this kind of study, because some people, I suppose, read it superficially as saying these organoids are like the brains of preterm infants. And that’s an exaggeration, because they’re very different and much smaller. But still, there’s enough resemblance in the EEG to allow estimates of the age that are better than chance.
Luisa Rodriguez: It’s definitely something. I find it unsettling, for sure.
Jonathan Birch: It is, yeah. I think a lot of people had that reaction as well, and I think that’s why we’re now seeing quite a lively debate in bioethics about how to regulate this emerging area of research. It’s currently pretty unregulated, and it raises this worrying prospect of scientists taking things too far — where they will often say, “These systems are only a million neurons; we want to go up to 10 million, but that’s so tiny compared to a human brain.” And it is tiny compared to a human brain. But if you compare it to the number of neurons in a bee brain, for example, that’s about a million. So these near-future organoids will be about the size of 10 bee brains in terms of neuron counts.
And I think bees are sentience candidates, so naturally I take this risk quite seriously, and I think it would be wrong to dismiss it.
Luisa Rodriguez: Yeah. So you’ve kind of alluded to the fact that there are important differences between neural organoids and actual brains.
Jonathan Birch: Yeah. Currently huge differences.
Luisa Rodriguez: Can you just talk about those a little bit, so that I have a bit more of an intuition for this other side? Like why we shouldn’t expect them to actually be acting cohesively like a sentient mind?
Jonathan Birch: There’s still this problem of vascularisation: maintaining active blood flow to all parts of the organoid, as I understand it, is still a challenge that has eluded researchers. But lots of labs around the world are working on it. I would expect breakthroughs in the near future.
And there’s also these systems, like the one called DishBrain that I talk about in the book, where the system is essentially smeared out flat over quite a large area, and connected to electrodes in that case. And in these sorts of systems that are two dimensional, you’re not going to be recreating the 3D structure of the human brain, but there’s much more potential for delivering nutrients and oxygen.
So lots of different things are being tried out all at once. Currently, none of the things I’ve seen have the fundamental features of a functional human brain. But you know, let’s be worried. Let’s take these warning signs seriously about where the technology is going.
Luisa Rodriguez: Yeah. Is the main thing that you’d like to see concern, and also some kind of regulation? Do you have a view on what that regulation should be like?
Jonathan Birch: Yes, I think we should be talking about when these systems would be sentience candidates. I propose in the book a rule called the “brainstem rule,” which says that one sufficient condition for being a sentience candidate — and triggering that whole discussion about proportionality — is when the system develops or innervates a functioning brainstem, including the midbrain and the reticular activating system that sustains sleep/wake cycles. When you’re in that situation, I think you should be worried. One can conceive of sleep/wake cycles without any conscious experience, obviously. But it’s a clear risk factor, as it were, so I think that’s one trigger.
Luisa Rodriguez: Yeah, that actually reminded me that I realised later I wasn’t totally clear on what reasons there are for thinking that sleep states, or the particular brainwaves associated with sleep states, are evidence about sentience. Is there an argument there that’s easy to explain?
Jonathan Birch: I think sleep/wake cycles are not compelling evidence of sentience. When we see them in nematode worms, for example, people are rightly sceptical about any inference to sentience on the basis of that.
But I think the situation is a bit different in humans, because the existence of a sleep/wake cycle is telling you that the mechanisms that regulate that cycle — particularly the reticular activating system, as it’s called — are in place. It’s pointing towards the midbrain being functional. And that’s relevant in this context because then you have theories, like those of Jaak Panksepp and Bjorn Merker, that say those mechanisms are sufficient for sentience. So in the human case, establishment of sleep/wake cycles is a clue towards at least one set of potentially sufficient conditions being met.
It’s also common sense, to be fair. I mean, in a way, you know, patients’ families will very naturally assume that if the patient has woken up there’s a realistic possibility they’re feeling things — and then it’s the traditional medical received wisdom that has pushed back on that. And I think maybe the common-sense view needs to be taken seriously.
Luisa Rodriguez: Right. And just to be clear, in this context, when you say a patient has “woken up” and their families might then assume they might be feeling things, the thing that you mean is that they’re having sleep/wake cycles — not that they’ve regained clear consciousness.
But back to neural organoids: what sort of regulation would you want to see if neural organoids did meet that criterion for sentience candidacy?
Jonathan Birch: That discussion of proportionality is quite a complicated one, because it might be an overreaction to just ban this research when we’re talking about something that aims to replace animal research on systems that are definitely sentient.
So what I propose is: let’s talk about the idea of bans, as long as they’re targeted. But also, let’s talk about the idea of regulating this research on the model of animal research — where a really crucial idea, particularly in the UK, is harm-benefit analysis: the idea of asking whether what you’re doing could impose harms, and whether those harms are justified by the benefits of doing the research.
In a way, what I’d most like to see is an integrated framework where review boards are considering the organoid research alongside the animal research, and they’re saying, “Could we actually be replacing more of this animal research with organoid research?” And at the same time considering, “Is the organoid research going too far?”
AI sentience and whole brain emulation [01:06:17]
Luisa Rodriguez: Yeah, nice. Moving away from neural organoids, another set of sentience candidates you cover in the book are AI systems. We’ve talked about the possibility of AI sentience on the show before with Robert Long, Jeff Sebo, and Carl Shulman, so we won’t spend ages on this now.
But you did raise a few interesting points about AI sentience I wanted to ask more about. One is that you emphasise that AI sentience could arise in a number of ways. I think I intuitively imagine it arising either intentionally or unintentionally as a result of work on LLMs. But one of these other ways is whole brain emulation. And one case I hadn’t heard that much about is OpenWorm. Can you talk a bit about the goals of OpenWorm and how that project has gone?
Jonathan Birch: This was a project that caught my eye around 2014, I think, because the goal was to emulate the entire nervous system of the nematode worm C. elegans. This is a worm for which we’ve had the entire connectome — all the connections between neurons — mapped out since the ’80s. There are just 302 of them.
So the goal was to emulate this system in its entirety in software. And they had some striking initial results, where they put their emulation in charge of a robot, and the robot did some kind of worm-like things in terms of navigating the environment, turning round when it hit an obstacle, that kind of thing. And it generated some initial hype.
Luisa Rodriguez: It’s already pretty incredible.
Jonathan Birch: Yeah. It wasn’t enough hype, sadly, to create sustained investment, so it’s not had the trajectory that seemed possible in 2014. There was certainly a vision of the future there, where brain emulations would advance in tandem with knowledge of connectomes. And we’re now mapping out the connectome of Drosophila, for example — so if the emulations had kept pace, we might now be looking at OpenDrosophila, and that really would be something. But for whatever reason, it did not attract the major investment that large language models have attracted, so it hasn’t accelerated at the same pace.
Luisa Rodriguez: So there are a couple of things I want to pick up on there. First, in case not everyone’s familiar, can you say a bit about what exactly a connectome is, and why people are trying to map them out?
Jonathan Birch: Well, it’s all the connections between neurons in a brain. And it doesn’t tell you all that much, it turns out, about the functioning of the brain. It doesn’t tell you the weights of those connections; it doesn’t tell you what’s going on within the neurons either; and it doesn’t tell you about ways neurons might interact with each other, other than through direct synaptic connections. So it’s this amazing fine-grained knowledge of the structure of the brain, and yet it’s tantalising because that knowledge of structure doesn’t give you function.
Luisa Rodriguez: Right. That’s fascinating. It feels naive now, but it was eye-opening to me when you pointed out that we actually just wouldn’t need whole brain emulation in humans or of human brains to start thinking about the risks from AI sentience. We just need to go from OpenWorm to OpenZebrafish or OpenMouse, or maybe even OpenDrosophila — which sounds like not an insane step from just where we are now. How likely is it, do you think, that researchers would try to create something like OpenMouse?
Jonathan Birch: Oh, it’s very likely. If they knew how, of course they would. I think one of the main themes of that part of the book is that once we see the decoupling of sentience and intelligence — which is very important, to think of these as distinct ideas — we realise that artificial sentience might not be the sort of thing that goes along with the most intelligent systems. It might actually be more likely to be created by attempts to emulate the brain of an insect, for example — where the intelligence would not be outperforming ChatGPT on any benchmarks, but perhaps more of the relevant brain dynamics might be recreated.
Luisa Rodriguez: Yeah, it was a jarring observation for me. I think part of it is that it hadn’t occurred to me that people would be as motivated as they are to create something like OpenMouse. Can you say more about what the motivation is? Does it have scientific value beyond being cool? Or is the fact that it’s just a cool thing to do enough?
Jonathan Birch: I think it would have immense scientific value. It would appear to be a long way in the future still, as things stand. But of course, we’re talking here about understanding the brain. I think if you can emulate the functions of the C. elegans nervous system, you can really say you understand what is going on — and that just isn’t true for human brains, currently. We have no idea. At quite a fundamental level, our understanding of C. elegans is in some ways far better.
And it would be another step again: not just understanding how lesioning bits of the nervous system affects function, but being able to recreate the whole system in computer software — that would be a tremendous step.
And it holds the promise over the long term of giving us a way to replace animal research. Because once you’ve got a functioning emulation of a brain, you can step past that very crude method of just taking the living brain and injuring it, which is what a lot of research involves, or modifying it through genome editing. You can instead go straight to the system itself and do incredibly precise manipulations.
So I feel like, if anything, it hasn’t been hyped enough. I want more of this kind of thing, to be honest, than has been the case so far.
Luisa Rodriguez: Intuitively, it seems plausible — and maybe even likely — that if you were able to emulate a mind that we thought was sentient, that the emulation would also be sentient. But is there a reason to think those come apart? Maybe we just don’t know.
Jonathan Birch: It’s another space where you get reasonable disagreement, because I think we have to take seriously the view that philosophers call “computational functionalism”: a view on which, if you recreate the computations, you also recreate the subjective experience. And that leads to further questions about at what grain does one have to recreate the computations? Is it enough to recreate the general type of computation? Or does every algorithm at every level, including the within-neuron level, have to be recreated? And there too, there’s disagreement.
I think we have to take seriously the possibility that recreating the general types of computations might be enough. I call this view “large-scale computational functionalism”: that it might be a matter of simply creating a global workspace or something like that, even if the details are quite different from how the global workspace is implemented in the human brain.
And if we take that view seriously, as we should, it really does suggest a kind of parity. I wouldn’t want to overstate it, because I’d say that the probability of sentience is higher in the biological system than in its software emulation. But still, that software emulation is potentially a candidate.
Luisa Rodriguez: It sounds like you’re both excited about this as an area of research, and you think probably it should be invested in more, but that it also comes with risks. What do you think is the right kind of approach to regulation of this space, given that it could be very beneficial, but also we don’t know that much about whether these emulations will feel things?
Jonathan Birch: I think that’s another major theme of this part of the book: I want to encourage discussion about regulating this area. Discussion that, when I started working on the book, was very rare, and I think is now much more visible. But we still need more of it, because it’s really an unregulated free-for-all as things stand now.
Thomas Metzinger in particular has been loudly calling for a moratorium, saying that the risk of creating artificial sentience in the near future is too great to allow this research that “knowingly risks” it, as he puts it, to continue. And I think we’ve got to take that seriously as a proposal.
I also worry about it, because depending on how "knowingly risks" is understood, the scope is either too broad or too narrow, I think. Because philosophers tend to say knowledge implies belief, and very few AI researchers believe they're on the verge of creating artificial sentience and so on. On that understanding of "knowingly risks," this ban would not cover anything. But at the same time, if we say it's not really about knowledge or belief — it's about negligence, it's about recklessness, it's about not taking the risks seriously enough — then we're talking about quite a large fraction of the AI sector, if not all of it. So when you think about it in that way, a ban might be rather too draconian and not very realistic.
Luisa Rodriguez: Is the thing to do now to raise this possibility that OpenZebrafish or OpenMouse would be real sentience candidates, but that wide-blanket bans seem too far?
Jonathan Birch: I think it would be a mistake to outlaw attempts at brain emulation. I think there’s a parallel here with the organoids case — where, when you’ve got a line of research that has the potential to allow us to replace a lot of animal research, where we’re experimenting on clearly sentient beings, we shouldn’t curtail that just because we’re worried about creating sentience. There would be a certain inconsistency there.
Luisa Rodriguez: Another source of risk you talk about is artificial evolution. Can you explain that risk?
Jonathan Birch: I think it’s an old idea that in simulations of evolution, the more detailed they get and the more lifelike they get, the more realistic it is to think that very complex forms might evolve in those simulations. Again, the current technology seems to be a very long way from even recreating the sorts of complexity we find in C. elegans, let alone Drosophila. But again, one should see this as a source of risk.
Luisa Rodriguez: Yeah. Just because I was so unfamiliar, can you talk a little bit more about the current technology? What we can and can’t do now, and what the aim even is?
Jonathan Birch: What I’ve come across is really quite crude, if I’m honest. Because of course, one has neural networks, and you can have virtual environments in which lots of little agents controlled by neural networks interact with each other. And forms of learning can happen in these environments, and forms of evolution can be run in these environments too. But again, because it’s not a major priority of the tech giants — and even if it was, it would be highly secretive stuff — I’m not aware of any seriously interesting results coming out of this kind of stuff yet.
Luisa Rodriguez: So is the idea something like you have these environments where you can create agents, and the goal is to be able to really manipulate these environments and specific things about the agents, and learn about different dynamics in the world in a more simplified context?
Jonathan Birch: Yeah, it’s pretty interesting. As a strategy for modelling evolution, it’s very interesting. But as a strategy for creating artificial sentience, it’s clearly not there, I think.
Luisa Rodriguez: How plausible do you think it is that sentience does eventually arise from something in this vein?
Jonathan Birch: I guess it depends on how likely sentience is to evolve, full stop. And we don’t understand its evolution very well. It’s certainly plausible to think it’s arisen multiple times, rather like eyes. Eyes have evolved almost 30 times. And sentience, it’s entirely plausible to think, has arisen at least three times in the vertebrates, the arthropods, and the cephalopods. So I don’t think we’re talking about a one-off event, but nor are we talking about something, I suspect, that is likely to evolve once one starts simulating evolutionary processes.
Luisa Rodriguez: I think the impression I’m getting currently is that this doesn’t feel like one of the areas to worry most about. Is that basically right, or should we be even more uncertain than that?
Jonathan Birch: It’s not something I worry about as much as organoids — and that, in turn, I suppose I don’t worry about as much as other animals. So when thinking about priorities, absolutely let’s not neglect the trillions or more of other animals that are pretty clear sentience candidates, and that we’re mistreating in horrible ways.
Luisa Rodriguez: Yeah, that’s helpful. Turning to large language models, what do you think is the most likely route to consciousness and/or sentience?
Jonathan Birch: It’s an interesting case. It’s one of the parts of the book that I had to rewrite most often.
Luisa Rodriguez: Really?
Jonathan Birch: Well, of course, I was working on it since 2020.
Luisa Rodriguez: Oh, yeah, of course.
Jonathan Birch: And that period, 2020 to 2023, one might say there were a few changes. And of course, like everyone else, I started off extremely dismissive of the idea that large language models might achieve some kind of sentience. There is this basic problem that in the book I call the “gaming problem”: that these models are adopting characters, they’re role-playing, and typically they’re instructed to adopt the persona of a helpful human assistant. So when one sees superficial mimicry of the dispositions of a helpful human assistant, that is not evidence of sentience; that is the system impressively using the information in its training data to generate user satisfaction.
And it does lead to kinds of gaming, I think, that can be superficially very plausible. There was that famous case involving Blake Lemoine in 2022 that I talk about in the book, where this superficial mimicry of human dispositions can be very shocking to the user, but is not evidence of sentience of any strong kind, of the kind that would allow us to call them sentience candidates.
But you know, I’ve been amazed at the rate at which technology has developed, so it would be a mistake to take that snapshot of the technology in 2023 and say that, because those were not sentience candidates, this is not a pathway that could ever generate sentience candidates.
I think what we’re starting to realise is that they mimic human dispositions not just at a superficial level, but at often very deep levels. And there’s a dialectic there where it starts to sound increasingly shaky to say, “This is just a kind of superficial mimicry; you’re being duped,” and increasingly credible to say, “Maybe the way to portray a helpful human assistant so brilliantly is to in fact instantiate some of the mechanisms of a helpful human assistant.”
Luisa Rodriguez: Like actual empathy.
Jonathan Birch: Like actual cognitive and affective processes, yes. So there’s this dialectic in which that second view started off extremely implausible, and the first view started off extremely plausible. And the situation has changed a little bit; the second view has increased in plausibility. So there’s a possible trajectory where it increases more and starts to become a serious competitor with the superficial mimicry explanation.
Luisa Rodriguez: Is it the case that sentience would be most likely to arise in an LLM that you’re interacting with, or are there other parts of the process of training and creating large language models that might be more likely to create something like a sentient mind?
Jonathan Birch: It’s an idea that’s out there, that sentience might be more likely in the training phase, where the weights are being changed, than in the use phase, where, in effect, you’re interacting with this crystal-like structure of frozen connections, frozen weights no longer subject to any further change. So there is that view out there, and it could be right. But I think we’re also not in a position to be sure that a frozen system with unchanging weights can’t be sentient.
Luisa Rodriguez: For anyone thinking about the kinds of policies to implement now to guard against suffering in AI systems, what policy would be your top priority?
Jonathan Birch: It’s a really hard question. I propose a principle in the book that I call the “run-ahead principle.” That says that when thinking about the risks and what would be proportionate to those risks, it’s important not just to consider current technology, but also to consider credible future trajectories. It’s based on the fact that the wheels of policy often move very slowly, and the technology does not.
There’s a real danger here of just being completely overtaken by incredibly fast-moving technology, so that idea about running ahead is very important. Perhaps being willing to be more precautionary than we think the current technology warrants I think could be really important as well.
I float in the book the idea of animal welfare law as a model, and the idea that in the future we might need AI welfare laws that do similar things. If you think of animal welfare law, it’s usually not in the business of just banning a system from existing, and it’s not really in the business of giving the system human-like rights either. But it’s in the business of trying to put limits on what people can do, creating codes of practice, creating licensing schemes — so that we’re talking about a system here where there are limits on what you can do to it: you’ve got to have a licence, you’ve got to follow this kind of practice.
And if we’re looking to run ahead, the technology as it is now would seem not to really justify that kind of response. But we don’t know. It could be months, it could be years, events could overtake us. So to me, there’s no harm at all in trying to develop that framework now, and trying to debate what it would actually take to regulate this sector properly, what the codes of practice would have to say, what the licensing schemes would have to say.
And how would the sector have to become more transparent to make this possible? Because currently, the secrecy of this sector is an enormous problem. It’s a danger to humanity in multiple ways. One of the ways that is less talked about, I think, is the danger of creating sentience.
Luisa Rodriguez: I agree with you. And that’s also a great segue to our next topic: policymaking about sentience, especially at the edge of sentience.
Policymaking at the edge of sentience [01:28:09]
Luisa Rodriguez: You’ve been involved in an impressive amount of policymaking, at least from my perspective. I haven’t known of very many philosophers who are as engaged in actual policy discussion. So I’d love to talk to you about some of that work and get a better sense of specifically what policies you’d advocate for, but also how you think policymaking should happen in cases involving sentience candidates where we have tonnes of uncertainty about what they’re actually experiencing.
Just to start at the high level: you argue that we should use a precautionary framework for thinking about the policies we create to govern the treatment of sentient beings and sentience candidates. Can you walk me through that framework?
Jonathan Birch: Yes, there’s a few parts to it. There’s three “framework principles,” as I call them in the book, and then there’s the procedures that should follow if those principles are adopted.
The first one is the idea that we have a duty to avoid gratuitous suffering. The thought is that, for all of our ethical disagreement, we can agree on this minimum point: that in some cases, suffering is gratuitous; no reason can be given to justify it. Either the activity is completely indefensible, or the steps taken to minimise suffering are not proportionate. And despite the vagueness of that distinction, we can see clear cases on both sides. Many of us will think, as a matter of personal morality, that we should go beyond that, and we should have aspirations to do more than just avoid gratuitous suffering. But it’s a baseline. It’s a baseline that we can all agree on.
Then framework principle two applies this to the case where we’re uncertain about whether the system is sentient or not. It says that sentience candidature is enough: that it may warrant precautions, and should trigger a process to reach a decision through discussing proportionality. In effect, what I’m doing is using the sentience candidate concept to set a bar for recklessness or negligence: if a system is a sentience candidate — has a realistic possibility of sentience — and you cause it to suffer by taking no precautions at all, because you just didn’t take that possibility seriously, then that suffering was a result of your recklessness or negligence. And to avoid that, we need to at least consider possible precautions.
The third idea is that the processes we use for deciding what precautions to take should be aimed at assessing proportionality through democratic, inclusive debate. And I propose that citizens’ assemblies are a very valuable mechanism here for conducting that kind of debate.
Citizens’ assemblies [01:31:13]
Luisa Rodriguez: Yeah. Can you talk a little bit more about how they work, concretely? So you sample a random segment of the population, you bring them in. And then are you kind of wanting the citizens to talk amongst themselves about the problem and see if they can come to some sort of consensus?
Jonathan Birch: Yes, it’s intended to implement deliberative democracy. In the end they may end up voting. It may not be unanimous consensus, obviously, but to get to that stage of voting, they’ve listened to experts, they’ve broken into small groups, they’ve deliberated, and then they come back to the group as a whole.
Luisa Rodriguez: Yeah. What is the general case you’d make for citizens’ assemblies?
Jonathan Birch: I think they can deliver democratically legitimate recommendations in cases where we think there might be problems with leaving the issue entirely to elected representatives or entirely to a referendum.
I think we’re in exactly that situation with cases at the edge of sentience. There’s just very general reasons to think that election campaigns are always going to focus on a certain core set of issues relating to the economy, public services, and so on — and they’re not going to be considering these cases at the edge of sentience in general; they’re not going to be considering risks that may or may not materialise. And elections are very bad at creating responsiveness to those kinds of risks, as I think we’ve all seen in the COVID-19 pandemic — for which really no country on Earth was prepared, but democracies, like my own country, were conspicuously badly prepared.
Citizens’ assemblies are a great antidote to that, because if the elected representatives themselves can see that this is not their priority, they have good reasons to create this mechanism through which a subset of the population is sampled. It takes the issue outside the run of party politics, and it can lead to a recommendation that can inspire cross-party support. I think there have been good examples of that, particularly in Europe — the Irish assembly on abortion was pretty successful — and I want to see more of this kind of exercise happening.
Luisa Rodriguez: Yeah. I guess my first impression, or my first reaction — and I think maybe you had a similar experience — was thinking that it would be really hard for a random group of citizens to get really up to speed on questions like, “What is the science about bee sentience, and how should we treat them?”
What is your take on that now? How plausible is this for cases where we have to look at scientific studies about sentience and make recommendations there?
Jonathan Birch: I think one just needs to have a clear division of labour between the experts and the assembly. I participated in a couple of exercises on genome editing where I feel like that worked reasonably well, because it was pretty clear that the public were being brought in to assess evaluative questions.
That’s how I propose doing it in the book: that you need to have scientific experts who can convey the zone of reasonable disagreement — experts who are not scientifically partisan, they’re not just going to bang a drum for their favourite theory of consciousness, but will try to give a sense of the different views that exist in the scientific community on the question. And of course, experts have to do that, and the public should not then be asked to referee that dispute, because that would go horribly wrong.
What they need to be asked is evaluative questions about what would be proportionate to specific identified risks. And I give this “pragmatic analysis,” as I call it, of proportionality in terms of four tests: it’s about looking for responses that are permissible in principle, that are adequate, that are reasonably necessary, and that are consistent. These questions are asking people about their values, and people can answer questions about their values. I think it’s not unrealistic to think that an assembly could come to a judgement of whether this proposed policy is proportionate to this identified risk.
Luisa Rodriguez: Yeah, I’m interested in hearing about the case where you participated. Did you say it was genomics?
Jonathan Birch: It was about genome editing of farm animals. It was run by the Nuffield Council on Bioethics, and it was quite a positive experience for me. I was participating as an expert, obviously, and had a worry going in that these panels are just exercises in expertise laundering: that the experts state their views, and then those views come back freshly washed as the will of the public.
And that wasn’t what happened at all in this case. The public was in some sense better than the experts at challenging assumptions and breaking out of groupthink. Groups of experts are very susceptible to forms of groupthink, and groups of randomly selected citizens seem to suffer from it rather less.
In the case of genome editing — it’s a separate issue, of course — there are very big issues around which corporations are going to benefit from a liberalisation of the law in this area, and how much we should trust the narrative they’re presenting about how it will help animal welfare rather than making it worse. A lot of that narrative the experts were accepting, and the public just wasn’t buying it — and I found that quite reassuring.
Luisa Rodriguez: Wow. So something like, there was a narrative about how genome editing might be used to improve the welfare of farmed animals?
Jonathan Birch: There very much is, yeah.
Luisa Rodriguez: And experts had kind of accepted that. Then in practice, the public participating in the panel was like, “It’s not clear that’s the main use. Maybe this is actually going to be used for things we don’t endorse, like making chickens fatter.” Is that the kind of thing?
Jonathan Birch: Yeah.
Luisa Rodriguez: That’s really impressive.
Jonathan Birch: Yeah. I think experts are a bit like career politicians, in that they can often be intensively lobbied. When they work on a particular area, and an industry has a strong interest in getting expert opinion in that area to take a certain shape, they will try to bring that about, of course. So I worry about a tyranny of expert values — where we say the factual and evaluative sides of these questions need to be left to the experts. This goes very wrong, I think.
What you need is a division of labour. Of course, it’s approximate; of course it’s imperfect, and you’re never going to completely cleanly separate the factual and the evaluative. But if you can have experts making the judgements about what the sentience candidates are, and what the risks are, and what might reduce them, and then citizens making the evaluative judgements about which of the possible options are proportionate to those risks, it’s not perfect, but I think it’s the best we can do.
Luisa Rodriguez: What is the hardest thing about implementing citizens’ assemblies in practice?
Jonathan Birch: It’s a very difficult challenge. I don’t envy the companies that do this. Representativeness is currently very challenging, because at some level it has to be voluntary — and the people who would volunteer for such a thing won’t be fully representative of the whole population. You probably won’t get that large section of the population that never votes, because if they don’t vote, they’re not likely to participate in a citizens’ assembly either.
So it’s a worry. It’s a similar worry to those faced by polling companies, because the less representative the assembly is, the less legitimate its outcomes are perceived as being.
Luisa Rodriguez: Yeah, that makes sense.
The UK’s Sentience Act [01:39:45]
Luisa Rodriguez: Another example of trying to shape policy while taking into account all this uncertainty: you advised the UK government when it wrote its Sentience Act. I just want to learn about that case a bit. Maybe to start, what’s the background for the Sentience Act?
Jonathan Birch: The background, in a way, is the UK leaving the European Union. The European Union has a treaty called the Lisbon Treaty that has a line in it about respecting animals as sentient beings. When the UK left, the government declined to import that line into UK law, leading to some bad press and a campaign from animal welfare organisations to get them to reverse that decision. So they said, “We’ll do better. We won’t just import it into UK law; we will introduce new legislation that improves on the Lisbon Treaty by creating a new duty on policymakers to consider the animal welfare impacts of their actions.” And this was the Sentience Act.
Of course, when one commits to creating a new duty like that, you have to say what its scope is going to be, and they produced a draft of the bill that only extended to vertebrate animals. In a way, it’s good that they included fish. But no invertebrates. That led to a new wave of pushback from animal welfare organisations who said, “What about octopuses? What about crabs? What about lobsters?”
So in that context, the government commissioned a team led by me to produce a report: Review of the Evidence of Sentience in Cephalopod Molluscs and Decapod Crustaceans. The report is freely available online. We ended up recommending that they should fall within the scope of this new duty. The government enacted our recommendation: they amended the bill, and the act, passed in 2022, does create this duty to take account of crabs, lobsters, and octopuses.
Luisa Rodriguez: We’ve been talking so much about all of the uncertainty, and with invertebrates, there’s immense uncertainty. How did you go from all of that, and the humility that you have about our ability to know about these cases, to making really concrete recommendations?
Jonathan Birch: It’s a good challenge, and I think an illustration of that point I’m trying to make at many points in the book about how one can get clear recommendations for action despite a great deal of uncertainty about the science.
We conveyed that uncertainty as clearly as we could. We had this framework of confidence levels. We had eight different criteria we reviewed the literature on. We expressed a confidence level about how confident we were that this particular taxon meets that criterion, and produced these big tables of graded colours showing our confidence.
And we’re just totally open about the fact that this is a messy, gradated evidential picture, and we’re not going to deliver any false certainty on the basis of that picture. But we do have to come to a recommendation, and the recommendation should not involve double standards that involve protecting fish on the basis of very similar kinds of evidence, and then refusing to protect crabs, lobsters, and octopuses when presented with substantially similar evidence. So that was our rationale.
We said basically the most sensible thing to do here is to amend the bill to include these taxa of animals. I’m kind of proud of it, in a way, because at no point did we feign certainty. Even though people say sometimes that politicians won’t listen unless you feign certainty. I think sometimes politicians want a clear recommendation, but if you can trace a path of reasoning from uncertain evidence to that recommendation, that’s fine.
Luisa Rodriguez: That is really heartening. Were there any other things about the experience that struck you?
Jonathan Birch: Well, we were engaging primarily with civil servants, and I think one very clear thing is that things do not happen in the policy world without ministerial energy. In the US, this would be secretaries and the like — what we call “ministers” in the UK. If there’s that top-down energy saying, “Decide: reach a decision on this,” then it will happen. So in a way, it was fortunate for us that we were in that position. In that particular context, a clear recommendation was what was needed. And I think once we put it out there, the path of least resistance is then to accept the recommendation.
Luisa Rodriguez: Yeah. My sense is that you, as part of putting together this set of recommendations, must have reviewed dozens or hundreds of academic articles.
Jonathan Birch: Yeah, over 300. The review ran to over 100 pages.
Luisa Rodriguez: Incredible. Do you remember learning anything in particular while doing this that you remember as particularly surprising at the time?
Jonathan Birch: I think we were all very surprised by Robyn Crook’s study on octopuses. It came out while we were working on the report, so we were writing a version of the report that said there’s a missing piece of the puzzle, even concerning octopuses: “conditioned place avoidance.” This is quite a standard test for pain in mammals, where you give the animal a choice of two different chambers and see which one it prefers. Then, in the preferred chamber, it experiences the effects of a noxious stimulus — typically something injected into them — and you see how the animal’s preferences change. Access to analgesia is in the other chamber, or they experience the effects of analgesia in the other chamber, and you see if the preferences reverse.
In mice and rats, often one experience of the noxious stimulus is enough to create a lasting reversal of the preferences.
Luisa Rodriguez: Wow.
Jonathan Birch: And no one had done that with octopuses. Then Robyn Crook published a study in 2021 that did that protocol with octopuses. There was this very lasting avoidance of the chamber they initially preferred, based on one experience of injected acetic acid in that chamber, and additionally a skin-scraping behaviour that they performed as if trying to scrape off acid from the skin — that was then silenced by administering local anaesthetic.
So it’s very striking. Obviously, if one sees that in a mouse or a rat, you think that’s a painful experience that the local anaesthetic has relieved. And it would seem like a double standard to not see that as at least a realistic possibility in the octopus.
Luisa Rodriguez: Yeah, absolutely.
Ways Jonathan has changed his mind [01:47:26]
Luisa Rodriguez: How about your views on the sentience candidates that you looked at? Any change in your best guesses there?
Jonathan Birch: Well, there’s really difficult cases in invertebrates, because it’s all well and good to say that octopuses, crabs, lobsters, bees, I do think they’re all sentience candidates. And the evidence from insects has really amazed me. That evidence has been around for a long time in the case of bees, but then other insects, I’ve been very struck by evidence from Drosophila. Initially I would have dismissed the idea that Drosophila, which is much smaller than a bee, would be a sentience candidate. And I’ve changed my view on that.
And then once you get to that stage of just regarding Drosophila as candidates, of course you start wondering about insect larvae, about other arthropods like spiders, and you’re confronted with the lack of evidence. I think when we’re drawing lines pragmatically, we do have to say that, if there’s no evidence, there’s no basis on which we can design precautions. But crucially, that does not mean that the system is not sentient: it means that we need to do more research.
So I have this category in the book of investigation priority, where to get to that stage where we can design precautions, we really need to do some more research, and we should regard that as an urgent need. And I think we are in that situation with insect larvae, which are farmed on a massive scale, but often the idea that they might have welfare needs is completely dismissed. There’s no humane slaughter regulations or anything like that.
And I feel like that should change. Meghan Barrett has established an Insect Welfare Research Society to try to change it, and I’m on the advisory board for that. We’ve been trying to do some initial work on black soldier fly larvae to try and fill some of those evidence gaps.
Luisa Rodriguez: Do you remember what specifically you learned about Drosophila that started changing your mind?
Jonathan Birch: There’s Bruno van Swinderen’s work on Drosophila, which has identified that not only are there sleep/wake cycles, but also different kinds of sleep: active sleep that somewhat resembles REM sleep in mammals, and quiet sleep that somewhat resembles slow-wave sleep. That’s very striking, particularly given the association between REM sleep and dream states.
And then also Michael Young’s lab — this Nobel Prize winner, Michael Young at Rockefeller University — in the pandemic, he was thinking to himself, “What can I, as a fly researcher, do to help the pandemic response?” And he saw reports of how lockdowns were affecting people psychologically and how they were disrupting people’s sleep. So he asked, does the equivalent of a lockdown for a fly disrupt the fly’s sleep? He designed these experiments where individual flies were socially isolated so they could see other flies through a glass partition, but they couldn’t interact with them — and he found evidence that this social isolation does indeed disrupt sleep.
There’s lots of possible explanations for these kinds of results, but the picture that is emerging is of these mechanisms in the central complex that are a bit like the vertebrate midbrain, and they’re not restricted to just bees: they’re right across the insects. Even small insects, like fruit flies, have versions of them, and they have versions of the mushroom bodies as well, which are linked to learning and memory. And they can do versions of the stuff that the bees do — maybe a bit less impressive, but not, I think, fundamentally different.
Luisa Rodriguez: That’s fascinating. Anything else you’ve changed your mind about?
Jonathan Birch: I was surprised by a lot of the bee stuff. It’s kind of strange that I would say that, because I’ve been writing about social insects for so long — including in my work before the Foundations of Animal Sentience project, in my book The Philosophy of Social Evolution. So it wasn’t like I was ignorant of social insects, but still, some of the recent bee evidence — like the opening two-step puzzle boxes, string-pulling, rolling balls into holes to get food reward — this evidence is quite extraordinary, and has changed the way I think about bees. It certainly convinced me that they are sentience candidates.
Luisa Rodriguez: Has there been any evidence that’s moved you one way or the other on just the nature of sentience? Materialism versus panpsychism versus other theories?
Jonathan Birch: Oh, the big-picture mind-body problem stuff, where I think we have to be open-minded about a range of possible views. I suppose I used to go along with the orthodox view that at least a lot of people profess to hold, which is that dualism is stupid and panpsychism is stupid. I guess I no longer think that — partly through having thought of myself as someone pretty sympathetic to materialism, thinking through the problems of that view and the difficulty of making it work, and particularly the difficulty of getting determinate facts of the matter out of a view with that shape.
Luisa Rodriguez: Yeah, that makes sense. I also relate to how I used to think panpsychism and dualism were silly. And now I still think they’re silly, but I just also think materialism is silly, and that’s been very disorienting.
Jonathan Birch: Everything is silly.
Luisa Rodriguez: Yeah, everything is silly!
Jonathan Birch: I mean, dualism is silly in the sense that dualist hypotheses always sound ridiculous. And this was true in Descartes’s time, when the hypothesis was about the pineal gland; it was true in John Eccles’s time in the late 20th century when the hypothesis was about these “psychons” affecting the workings of the synapse.
These views sounded ridiculous — and they were wrong. But in a way, you could see it as a virtue of dualist hypotheses that they often entail falsifiable predictions that allow the view to then be disproved. If you predict that consciousness depends on the pineal gland, then this can be disproved, and indeed was.
But in a way, it’s no virtue if you have a way of thinking about consciousness that fails to entail any such predictions at all. And there’s often a worry about contemporary theories that indeed they’re formulated in sufficiently broad-brushed ways that it is actually quite hard to come up with testable predictions that they entail.
Careers [01:54:54]
Luisa Rodriguez: OK, turning to careers: for anyone listening who in some way would like to contribute to figuring out either how to research sentience in different minds or how to think about policymaking, consciousness and sentience are famously hard problems. Do you think we need to solve them to make progress on these more practical issues?
Jonathan Birch: No, I don’t think so. My book is my attempt to find a way forward here, where we acknowledge this wide zone of reasonable disagreement, and we look for points of consensus that are nonetheless possible around ways of erring on the side of caution in the face of our uncertainty. To me, that’s a better way forward than thinking a solution to the hard problem is within reach in the near term.
Luisa Rodriguez: What kinds of people and skill sets are you excited to see entering the field to help with this?
Jonathan Birch: Well, animal sentience research is this radically interdisciplinary, emerging field — where there are people from philosophy, like me, but on my project team at the LSE, there’s also biologists and people in veterinary science. We’ve had visitors from evolutionary biology as well, and computer science. And we should also mention law, economics, psychology: the social sciences are extremely important too.
So it’s more like a shared research agenda, where the more disciplines we can involve the better — because there’s questions about animal sentience that arise within all these disciplines, and at various intersections of them as well. So I’m always trying to encourage people to get involved, particularly people who’ve established themselves in one of these disciplines. Perhaps they’ve done their PhD on something totally unrelated to animal sentience, but they can see the urgency of the issues. I’m always keen to help people move across into this exciting interdisciplinary area. I just think there’s lots of ways to contribute.
Luisa Rodriguez: And what do you think is the best resource for people understanding what those ways are? I guess I’m asking because I think if I were looking at this space, I can imagine me and lots of other people thinking, “Well, I’m not an animal researcher, and I’m not a neuroscientist, and I’m not a philosopher, so I probably can’t contribute.” Are there other groups of people that you think can contribute, or is there a way of making more concrete this “lots of people can contribute” idea?
Jonathan Birch: It’s definitely untrue that you have to be either a philosopher or a neuroscientist or a biologist — partly because these issues are so neglected that there’s not a lot of competition if you want to move into them. If you want to study welfare in shrimps, it’s not like you need a shrimp welfare degree. You can move in from lots of directions.
But also, there’s lots of work that needs doing that is at the science-policy interface. That’s about: what sort of changes in the law would be appropriate? How might we change the way we do cost-benefit analysis to include other animals? If we’re going to have a duty to respect animals as sentient beings, what does that mean? What sorts of changes to policy evaluation would actually deliver on that duty?
So you see how this is not biology anymore, nor is it philosophy. It’s more social science. How do we change human behaviour, given that recognising farm animals as sentient often does not change people’s behaviour towards them? What does change behaviour? So it’s really covering a huge stretch of academic research here, not just biology and philosophy.
Luisa Rodriguez: That’s really helpful.
Discussing animal sentience with the Dalai Lama [01:59:08]
Luisa Rodriguez: OK, moving to our final question: in 2023, you spent a week in Dharamsala, India, discussing animal sentience with Tibetan Buddhist monks. What surprised you most about that visit?
Jonathan Birch: You know, we’ve had two really wonderful trips. In 2023, we went to Dharamsala, which is the home of the Tibetan government-in-exile and the Dalai Lama. I was part of a group of Western scientists and philosophers and Tibetan monks that had an audience with the Dalai Lama, where we asked him questions about animal consciousness.
I asked him whether he thought insects are conscious. He said, “Yes, of course. They have eyes and faces and they sleep.” I was very struck by the remark about sleep, because in Western science, this is considered a recent discovery that insects have sleep cycles. And it would seem as though Tibetan Buddhist tradition has always recognised this, which was very striking. And then, earlier this year, we had another similar event in Kathmandu, in Nepal.
It’s just been extremely interesting for me to try and gain a deeper understanding of Buddhist perspectives on these questions, part of which has been about understanding the variation. They don’t all agree among themselves on these questions. They’re not all vegetarian, for example, though some of them are. So their opinions are as diverse as Western opinions, and it’s all about mapping that zone of reasonable disagreement.
Luisa Rodriguez: Nice. Let’s leave that there. My guest today has been Jonathan Birch. Thank you so much for coming on.
Jonathan Birch: Thanks, Luisa.
Luisa’s outro [02:01:04]
Luisa Rodriguez: All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.