#206 – Anil Seth on the predictive brain and how to study consciousness
By Luisa Rodriguez and Keiran Harris · Published November 1st, 2024
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Cold open [00:00:00]
- 3.2 Luisa's intro [00:01:02]
- 3.3 The interview begins [00:02:42]
- 3.4 How expectations and perception affect consciousness [00:03:05]
- 3.5 How the brain makes sense of the body it's within [00:21:33]
- 3.6 Psychedelics and predictive processing [00:32:06]
- 3.7 Blindsight and visual consciousness [00:36:45]
- 3.8 Split-brain patients [00:54:56]
- 3.9 Overflow experiments [01:05:28]
- 3.10 How much we can learn about consciousness from empirical research [01:14:23]
- 3.11 Which parts of the brain are responsible for conscious experiences? [01:27:37]
- 3.12 Current state and disagreements in the study of consciousness [01:38:36]
- 3.13 Digital consciousness [01:55:55]
- 3.14 Consciousness in nonhuman animals [02:18:11]
- 3.15 What's next for Anil [02:30:18]
- 3.16 Luisa's outro [02:32:46]
- 4 Learn more
- 5 Related episodes
In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.
They cover:
- What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.
- Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.
- Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.
- Whether our theories of human consciousness can be applied to nonhuman animals.
- Anil’s thoughts on whether machines could ever be conscious.
- Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.
- And much more.
Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
Highlights
How our brain interprets reality
Anil Seth: If I take a white piece of paper from inside to outside, still nicely daylight here, then the paper still looks white, even though the light coming into my eyes from the paper has changed a lot.
So this tells me that the colour I experienced is not only not a property of the object itself, but it’s also not just a transparent readout of the light that’s coming into my eyes from the object. It’s set into the context. So if the indoor illuminance is yellowish, then my brain takes that into account, and when it’s bluish, it takes that into account too.
This, by the way, is what’s at play in that famous example of the dress, which half the people in the world saw one way and half saw the other way. It turns out there are individual differences in how brains take into account the ambient light.
All this is to say that colour is one example where it’s pretty clear that what we experience is a kind of inference: it’s the brain’s best guess about what’s going on in some way out there in the world.
And really, that’s the claim that I’ve taken on board as a general hypothesis for consciousness: that all our perceptual experiences share that property; that they’re inferences about something we don’t and cannot have direct access to.
This line of thinking in philosophy goes back at least to Immanuel Kant and the idea of the noumenon: the reality that we will never have access to, of which we only ever experience interpretations. And then Hermann von Helmholtz, a German polymath in the 19th century, was the first person to propose this as a semiformal theory of perception: that the brain is making inferences about what’s out there, and this process is unconscious, but what we consciously experience is the result of this inference.
And these days, this is quite a popular idea, and it’s known under different theoretical terms like predictive coding, or predictive processing, or active inference, or the Bayesian brain. There are all these different terminologies.
My particular take on it is to finesse it to this claim that all conscious contents are forms of perceptual prediction that are arrived at by the brain engaging in this process of making predictions about what’s out there in the world or in the body, and updating those predictions based on the sensory information that comes in.
And this really does flip things around. Because it seems as though the brain just absorbs the world; it just reads the world out in this kind of outside-in direction. The body and the world are just flowing into the brain, and experience happens somehow.
And what this view is saying is it’s the other way around: yes, there are signals coming into the brain from the world and the body, but it’s not that those signals are read out or just transparently reconstituted into some world, in some inner theatre. No, the brain is constantly throwing predictions back out into the world and using the sensory signals to calibrate its predictions.
And then the hypothesis — and it’s still really a hypothesis — is that what we experience is underpinned by the top-down, inside-out predictions, rather than by the bottom-up, outside-in sensory signals.
How our brain experiences our organs
Luisa Rodriguez: I guess you might think that the body, in the same way that I’ve got a bunch of touch receptors on my hands, and I could pick up an apple and feel the shape of it, that you could have that for your spleen and feel the shape of it. But instead, we have really different experiences of our internal organs than we do for outside things. And it seems like that could be because the kinds of information we need about the inside things is really different. Am I heading in the right direction?
Anil Seth: Yes, you are. You’re telling the story beautifully. This is exactly the idea. I think one of the powerful aspects of the whole predictive brain account is the resources it provides to explain different kinds of experiences: different kinds of prediction, different kinds of experience.
So when we are walking around the world with vision, the nature of visual signals and how they change as we move explains the kinds of predictions the brain might want to make about them. And visual experience has that phenomenology of objects in a spatial frame.
But then when it comes to the interior of the body, really there’s no point or need for the brain to know where the organs are, or what shape they are, or even really that there are internal organs at all.
Luisa Rodriguez: Right. Which is why mostly I don’t feel anything like organs.
Anil Seth: That’s right. I mean, I wouldn’t even know I had a spleen. I don’t know if I do. I mean, I just believe the textbooks. They could all be wrong. I’ve got no real experiential access to something like a spleen. Sometimes you can feel your heartbeat and so on, feel your stomach.
So what the brain cares about in these cases is not where these things are, but how well physiological regulation is going — basically how likely we are to keep on living. And this highlights this other aspect of prediction that we talked about: that prediction enables control. When you can predict, you can control.
And the core hypothesis that comes out of the book is really that this is why brains are prediction machines in the first place. They evolved over evolutionary time. They develop in each of us individually, and they operate moment to moment, always under this imperative to control, regulate the internal physiological milieu: keep the blood pressure where it needs to be, keep heart rate where it needs to be, keep oxygen levels where they need to be, and so on.
And if you think about things through that lens, then emotional experiences make sense, because emotional experiences are — to oversimplify horribly — variations on a theme of good or bad. They have valence: things are good or bad. Disappointment is like, things were going to be good and things are bad. Regret might be, things could have been better. Anxiety is, everything is likely to be bad.
So there’s sort of valence to everything. And that’s what you would expect if the predictions corresponding to those experiences were more directly related to physiological homeostasis, physiological regulation: when things depart from effective regulation, valence is low; when things appear to be going well, valence is higher, more positive.
What psychedelics teach us about consciousness
Anil Seth: Psychedelics are hugely interesting. I think their clinical efficacy is still quite controversial in many spaces. But setting that to one side, they provide this potentially insightful window into consciousness, because you have this fairly, in some ways, subtle manipulation, pharmacologically — a small amount of LSD or some other psychedelic substance — and then experience changes dramatically. So something is going on.
I think the interesting thing here is not to take psychedelic experiences as some deeper insight into how things really are — you know, as if other filters have come off and “I see the universe truly for the first time” — but to think of them as data about the space of possible experiences and what we shouldn’t take for granted.
And you find people interpreting experiences in both ways, but I’m very much on the side of: no, they don’t give you a deeper insight into how things are in the universe, but they do help us recognise that our normal way of experiencing things is a construction, and is also not an insight directly reflecting reality as it is.
And then, if you think about the kinds of experiences people have on psychedelics, there’s a lot of hallucinations, but now these hallucinations become uncontrolled compared to the controlled hallucination that is a characteristic of normal, non-psychedelic experience. So I think predictive processing provides a natural framework for understanding at least these aspects of the psychedelic experience. I mean, there are other aspects too that could be more emotional, could be more numinous, and other such words.
But in the creation of visual experiences, it does seem that the brain’s predictions start to overwhelm the sensory data, and we begin to experience the acts of perceptual construction itself in a very interesting way. I remember staring at some sort of clouds and just seeing them turn into people and scenes in ways which seemed almost under some kind of voluntary control, although I didn’t have much voluntary control at the time. But this makes sense to me from the perspective of perception as a controlled hallucination becoming uncontrolled.
In the lab we’ve done some studies now where we’ve built computational models of predictive perception, and then screwed around with them in various ways to try and simulate the phenomenology of psychedelic hallucinations, but also other kinds of hallucinations that people have in Parkinson’s disease, in dementia, and in other things. There are different kinds of hallucinations. So what we’re trying to do is get quite granular about the phenomenology of hallucination, and tie it down to particular differences in how this predictive process is unfolding.
Luisa Rodriguez: And so the way that the hallucination is becoming uncontrolled is because the psychedelic substance is kind of breaking the predictive process? Correct me if I’m wrong.
Anil Seth: I think that’s the idea. It’s hard to know exactly. There’s a bit of a gap still. On the one hand, what psychedelics do at the pharmacological level, the molecular level, is pretty well understood: they act as agonists at a particular serotonin receptor, the 5-HT2A receptor. We know where these serotonin receptors are in the brain. That’s what they do at that level. And then we kind of know what they do at the level of experience: everything changes. Many things change, at least.
So what connects the two? I think that’s the really interesting area. So the hypothesis is that at least part of the story can indeed be told: it must be something about their mechanism of action at these serotonin receptors that disrupts this process of predictive inference. But exactly how and why is still an open question. Some colleagues of mine at Imperial College have done some work on this, trying to simulate some predictive coding networks and model how they may get disrupted under psychedelics. But something like that, I think, is going on.
The physical footprint of consciousness in the brain
Anil Seth: There are some parts of the brain where, if you damage them, then consciousness goes away entirely. Not just the specific conscious contents, but all of consciousness. And typically, these are areas lower down in the anatomical hierarchy — so brainstem regions, this bit at the base of your skull. If you’re unlucky enough to have a stroke that damages some of the regions around there, like especially these so-called midline thalamic nuclei, then you will be in a coma. So consciousness gone.
Is that where consciousness is? No, it doesn’t say that at all. In the same way that if I unplug the kettle, the kettle doesn’t work anymore — but the reason the kettle boils water is not to be found in the plug. So that’s one kind of thing you can find, but it doesn’t necessarily tell you very much.
Then, when it comes to which parts of the brain are more directly implicated in consciousness, this is, of course, where a lot of the action is in the field these days: let’s find these so-called “neural correlates of consciousness.” And there’s one surprising thing, which it’s worth saying, because I always find it quite remarkable: everyone will say the brain is this incredibly complex object — one of the most, if not the most complex object that we know of in the universe, apart from two brains. And it’s true, it’s very complex: it has about 86 billion neurons and 1,000 times more connections.
But about three-quarters of the neurons in any human brain do not seem to care that much about consciousness. These are all the neurons in the cerebellum. The cerebellum just is like… I feel sorry for the cerebellum. Not many people talk about it — well, with all due respect to people who spend their whole careers on it. But when you hear people colloquially talking about the brain, they talk about the frontal lobes or whatever.
But the cerebellum is this mini brain that hangs off the back of your head that is hugely important in helping coordinate movement, fine muscle control. It’s turning out to be involved in a lot of cognitive processes as well, sequential thinking and so on — but just does not seem to have that much to do with consciousness. So it’s not a matter of the sheer number of neurons; it’s something about their organisation.
And the other, just basic observation about the brain is that different parts of it work together. It’s this fascinating balance of functional specialisation where different parts of the brain are involved in different things: the visual cortex is specialised for vision, but it’s not only involved in vision. And the further you get up through the brain, the more multifunctional, pluripotent the brain regions become. So it’s a network. It’s a very complex network. If we’re tracing the footprints of consciousness in the brain, we need to be looking at how areas interact with each other, not just which areas are involved.
How to study the neural correlates of consciousness
Anil Seth: A very common example here is something like binocular rivalry. In binocular rivalry, you show one image to one eye and another image to the other eye — or a “hemifield” is better. […] And if you show different images — one to the left, one to the right — our conscious experience tends to oscillate between them. Sometimes we’ll see one, maybe an image of a house; sometimes we’ll see another, maybe an image of a face — yet the stimulus is exactly the same; the sensory information coming in is not changing.
So what you’ve done here is: the person is conscious in both cases, so you’re not looking at the correlates of being conscious, but you’ve got a lot more control on everything else. The sensory input is the same. So if you can look at what’s changing in the brain here, then maybe you’re getting closer to the footprints of consciousness.
But there’s another problem, which is that not everything is being controlled for here. Because, let’s say in this binocular rivalry case, I see one thing rather than another: I also know that I see that. And so my ability to report is also changing.
Actually, there’s a better example of this. I think it’s worth saying, because this is another classic example: visual masking. For instance, I might show an image very briefly. And if I show it sufficiently briefly, or I show it sort of surrounded in time by two other images or just irrelevant shapes, then you will not consciously see that target image. As we would say in psychophysics, it is “masked.” The signal is still received by the brain, but you do not consciously perceive it. If I make the time interval between the stimulus and the mask a little bit longer, then you will see the stimulus.
Now, I can’t keep it exactly the same, but you can work right around this threshold so the stimulation is effectively the same — yet sometimes you see the stimulus and sometimes you don’t. And now again, you can look at the brain correlates.
Luisa Rodriguez: Oh, that’s really cool.
Anil Seth: Not of like house versus face, but in this case, seeing a house or not seeing a house. But again, in both cases the person is conscious. So there are many different ways you can try and apply this method.
And the reason I use that example is that the problem here is that, when the person sees the house (so the masking is a bit weaker), yes, they have a conscious experience — but again, they also engage all these mechanisms of access and report: they can say that they see the house, they press a button. So you’ve also got to think that maybe the difference I’m seeing in the brain is to do with all that stuff, not with the experience itself.
And you can just keep going. People have designed experiments where they now ask people not to make any report, and try to infer what they’re seeing by clever methods: these no-report paradigms. And then other people say, “But hold on, the experiences are still reportable. So you’re not controlling for the capacity to report.” And it’s like, oh my word.
So you just keep going down this rabbit hole, and you get very clever experiments. It’s really interesting stuff. But ultimately, because correlations are not explanations, I think you’ll always find something where you can say, well, is it really about the consciousness, or is it about something else?
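To make the masking logic above concrete, here is a minimal sketch of the kind of threshold “staircase” procedure psychophysicists use to hold the stimulation effectively the same while visibility flips between seen and unseen. The simulated observer and all numbers are invented for illustration; this is not code from Anil’s lab.

```python
import math
import random

def simulated_observer(soa_ms: float, threshold_ms: float = 50.0) -> bool:
    """Toy observer: the longer the stimulus-to-mask interval (SOA),
    the more likely the target is consciously seen."""
    p_seen = 1.0 / (1.0 + math.exp(-(soa_ms - threshold_ms) / 5.0))
    return random.random() < p_seen

def staircase(trials: int = 200, soa_ms: float = 100.0, step: float = 2.0) -> float:
    """1-up/1-down staircase: shorten the SOA after a 'seen' trial and
    lengthen it after an 'unseen' one, so trials hover near the ~50%
    visibility threshold -- matched seen/unseen trials under nearly
    identical stimulation."""
    for _ in range(trials):
        if simulated_observer(soa_ms):
            soa_ms -= step   # seen: make the masking stronger
        else:
            soa_ms += step   # unseen: make the masking weaker
    return soa_ms

print(f"Estimated visibility threshold: ~{staircase():.0f} ms SOA")
```

Comparing brain activity between the “seen” and “unseen” trials collected this way is one standard route to candidate neural correlates of consciousness, subject to the report confound discussed above.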
Articles, books, and other media discussed in the show
Anil’s work:
- Being you: A new science of consciousness
- Your brain hallucinates your conscious reality — Anil’s TED Talk
- Anil Seth on emergence, information, and consciousness — episode on Sean Carroll’s Mindscape
- Theories of consciousness (with Tim Bayne)
- Conscious artificial intelligence and biological naturalism
- Psychedelics and schizophrenia: Distinct alterations to Bayesian inference (with coauthors)
- Intentional binding without intentional action (with coauthors)
- For more, check out Anil’s website, as well as research from the Sussex Centre for Consciousness Science (where Anil is the director)
Others’ work in this space:
- Blindsight: The strangest form of consciousness by David Robson
- Intact navigation skills after bilateral loss of striate cortex by Beatrice de Gelder et al.
- Blindsight in monkeys by Alan Cowey and Petra Stoerig
- Thinking, fast and slow by Daniel Kahneman
- What a contest of consciousness theories really proved by Elizabeth Finkel
- Work on split-brain patients by Yair Pinto
- Consciousness in artificial intelligence: Insights from the science of consciousness by Patrick Butlin et al. (also check out our episode with coauthor Robert Long on this topic)
- Other minds: The octopus, the sea, and the deep origins of consciousness by Peter Godfrey-Smith (also a recent guest of the show!)
- The edge of sentience by Jonathan Birch (who Luisa also recently interviewed on the show)
Other 80,000 Hours podcast episodes:
- Jonathan Birch on the edge cases of sentience and why they matter
- David Chalmers on the nature and ethics of consciousness
- Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation
- Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more
- Meghan Barrett on challenging our assumptions about insects
- Eric Schwitzgebel on whether the US is conscious
- Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers
- Robert Long on why large language models like GPT (probably) aren’t conscious
- Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe
Transcript
Cold open [00:00:00]
Anil Seth: Everyone will say the brain is this incredibly complex object — one of the most, if not the most complex object that we know of in the universe. And it’s true, it’s very complex: it has about 86 billion neurons and 1,000 times more connections.
But about three-quarters of the neurons in any human brain do not seem to care that much about consciousness. These are all the neurons in the cerebellum. I feel sorry for the cerebellum. Not many people talk about it. When you hear people colloquially talking about the brain, they talk about the frontal lobes or whatever.
But the cerebellum is this mini brain that hangs off the back of your head that is hugely important in helping coordinate movement, fine muscle control. It’s turning out to be involved in a lot of cognitive processes as well, sequential thinking and so on — but just does not seem to have that much to do with consciousness. So it’s not a matter of the sheer number of neurons; it’s something about their organisation.
Luisa’s intro [00:01:02]
Luisa Rodriguez: Hi listeners. This is Luisa Rodriguez, one of the hosts of The 80,000 Hours Podcast. In today’s episode, I talk with neuroscientist Anil Seth about how much we can learn about consciousness by directly studying the brain.
Regular listeners will know that we’re especially interested in whether we can identify consciousness in nonhuman minds — from chickens to insects to even machines — because it seems like such a big deal for figuring out where to put our finite resources. It’s a lot easier to avoid a moral catastrophe if you have a good idea of which beings experience things like pain and pleasure and joy and loss, and who or what don’t experience anything.
So we cover:
- Whether theories of human consciousness can be applied to nonhuman animals or machines.
- Whether looking for the parts of the brain that correlate with consciousness is the best approach.
- Anil’s view that our conscious experience is more a result of predictions the brain is making, rather than a direct representation of reality.
- What we can and can’t learn from classic experiments on blindsight and split-brain patients.
- The biggest disagreements between scientists in this field.
- And much more.
Without further ado, I bring you Anil Seth.
The interview begins [00:02:42]
Luisa Rodriguez: Today I’m speaking with Anil Seth. Anil is a neuroscientist at the University of Sussex and director of the Sussex Centre for Consciousness Science. He’s the author of Being You: A New Science of Consciousness, and his TED Talk, Your brain hallucinates your conscious reality, has over 14 million views. Thanks for coming on the podcast, Anil.
Anil Seth: It’s a pleasure to be here. Thanks for having me.
How expectations and perception affect consciousness [00:03:05]
Luisa Rodriguez: In both your book and your TED Talk, you explore this idea that one of the major tools our brain uses to generate our conscious experience is prediction.
So I’ll quote you: “The brain doesn’t hear sound or see light. What we perceive is our best guess of what’s out there in the world.” Can you start by explaining a bit more about exactly what that means?
Anil Seth: Yes, I’ll try. It’s a bit of a poetic way to put what is a very old idea that what we experience is indirect: it’s not a direct reflection of objective reality. It’s not even clear what it could mean to have a transparent experience of the world as it really is.
Think about colours, for instance. We experience colours as being out there in the world, as just existing in this kind of mind-independent way: The car across the street really is this red.
Luisa Rodriguez: It’s inherently red.
Anil Seth: It’s inherently, intrinsically red. And that’s a property of the car or the paint or whatever.
But we know this is a really unsatisfactory explanation for what’s going on. Some people with colour-blindness will see it differently. We probably all see it differently. In fact, one of our major studies at the moment is to look at perceptual diversity: how we each experience a different world.
The experience of colour is what the brain makes of a particular way in which the surface of the car reflects light. Our eyes contain several types of photoreceptors, among them three types of cone cells, which are sensitive to different wavelengths of light. And these wavelengths, we typically call them red, green, and blue; or short, medium, and long. But they’re not actually red, green, and blue; they’re just three wavelengths of electromagnetic radiation — which is not intrinsically colour; it’s just radiation.
And it’s our brains that, through combinations of these different wavelengths, come up with an inference about how surfaces reflect light. And that’s what we experience as colour: an inference about how a surface reflects light. If I take a white piece of paper from inside to outside, still nicely daylight here, then the paper still looks white, even though the light coming into my eyes from the paper has changed a lot.
So this tells me that the colour I experienced is not only not a property of the object itself, but it’s also not just a transparent readout of the light that’s coming into my eyes from the object. It’s set into the context. So if the indoor illuminance is yellowish, then my brain takes that into account, and when it’s bluish, it takes that into account too.
This, by the way, is what’s at play in that famous example of the dress, which half the people in the world saw one way and half saw the other way. It turns out there are individual differences in how brains take into account the ambient light.
All this is to say that colour is one example where it’s pretty clear that what we experience is a kind of inference: it’s the brain’s best guess about what’s going on in some way out there in the world.
And really, that’s the claim that I’ve taken on board as a general hypothesis for consciousness: that all our perceptual experiences share that property; that they’re inferences about something we don’t and cannot have direct access to.
This line of thinking in philosophy goes back at least to Immanuel Kant and the idea of the noumenon: the reality that we will never have access to, of which we only ever experience interpretations. And then Hermann von Helmholtz, a German polymath in the 19th century, was the first person to propose this as a semiformal theory of perception: that the brain is making inferences about what’s out there, and this process is unconscious, but what we consciously experience is the result of this inference.
And these days, this is quite a popular idea, and it’s known under different theoretical terms like predictive coding, or predictive processing, or active inference, or the Bayesian brain. There are all these different terminologies.
My particular take on it is to finesse it to this claim that all conscious contents are forms of perceptual prediction that are arrived at by the brain engaging in this process of making predictions about what’s out there in the world or in the body, and updating those predictions based on the sensory information that comes in.
And this really does flip things around. Because it seems as though the brain just absorbs the world; it just reads the world out in this kind of outside-in direction. The body and the world are just flowing into the brain, and experience happens somehow.
And what this view is saying is it’s the other way around: yes, there are signals coming into the brain from the world and the body, but it’s not that those signals are read out or just transparently reconstituted into some world, in some inner theatre. No, the brain is constantly throwing predictions back out into the world and using the sensory signals to calibrate its predictions.
And then the hypothesis — and it’s still really a hypothesis — is that what we experience is underpinned by the top-down, inside-out predictions, rather than by the bottom-up, outside-in sensory signals.
This is why in the book and in the TED Talk I use this metaphor — or slogan, I think is probably better — of “perception is a controlled hallucination.” It’s a hallucination in the sense that it’s internally generated, it’s coming from the inside out. But equally important is the control: it is not dissociated from reality; it’s very tightly coupled to the world and the body as they are, in some mind-independent way. The hallucination is controlled by reality.
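One loose way to caricature the colour-constancy story from earlier, for readers who like to see the arithmetic: a von Kries-style toy model in which the brain discounts its estimate of the ambient light. All the vectors and values below are invented for illustration and are not taken from Anil’s work.

```python
import numpy as np

white_paper = np.array([1.0, 1.0, 1.0])    # true surface reflectance (toy values)
indoor_light = np.array([1.0, 0.9, 0.6])   # yellowish indoor illuminant
daylight = np.array([0.8, 0.9, 1.0])       # bluish daylight

for name, light in [("indoor", indoor_light), ("daylight", daylight)]:
    sensed = white_paper * light    # the signal actually reaching the eye
    inferred = sensed / light       # "divide out" the estimated illuminant
    print(name, "sensed:", sensed.round(2), "inferred:", inferred.round(2))
# The inferred colour is [1, 1, 1] both times: the paper stays white
# even though the sensed signal changed.

# The dress, caricatured: the same sensed signal, interpreted under two
# different assumptions about the ambient light, yields two different colours.
sensed = white_paper * indoor_light
for assumed in (indoor_light, daylight):
    print("assuming", assumed.round(2), "->", (sensed / assumed).round(2))
```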
Luisa Rodriguez: Yeah, yeah. I basically feel like I got a really clear sense of this idea when you point at the visual illusions I’m familiar with, like the dress.
Anil Seth: Do you remember what you saw when you saw that image?
Luisa Rodriguez: I see white and gold.
Anil Seth: You see white and gold. And I see blue and black. So, you know, we can’t be friends.
Luisa Rodriguez: Yeah. Done. That’s enough for the interview. We’re done here.
So yeah, there is something there I do understand: intuitively, our brains seem to not just be taking in wavelengths, but doing something like, “That’s a sheet of paper. I expect it to be white.” So even when it’s in this room with yellow light or when it’s in the blue or daylight, I still just perceive it as white.
How does this apply to other kinds of brain processes besides perception? Is there a sense in which you think that predictive processing or something similar is informing how we understand other parts of reality, besides just what we see and what we hear?
Anil Seth: I think so. And I think that it’s certainly going to give me something to do for the rest of my career, to sort of see what mileage that has.
But I think there’s good reason to think this principle has a fairly general applicability to many different things brains do and many different aspects of our experience. This is because this process of predictive inference is useful in so many ways. In my book and in the work I was doing that led to the book, a lot of that stemmed from a recognition — again, not mine, but others’ — that prediction is very useful for control.
I know we’re going to talk about AI later, but right at the beginning of AI, in the 1950s and ’60s, it had this cousin called cybernetics, which was equally prominent at the time. And instead of going down the road of trying to build computers that could think, reason, and play chess, cybernetics was all about control and regulation: feedback loops and generative models and things like that.
And cybernetics got kind of lost from the centre stage, but I think it’s coming back now because it’s an essential framework for understanding what biological brains do: they’re fundamentally implicated and probably evolved for control and regulation — not only control of our bodies through space, and my hands if I pick up a cup of tea, but control of the internal physiological milieu as well.
If you think about it, the fundamental reason an organism has a brain is helping it stay alive. And staying alive is, first and foremost, a business about what happens under the skin: keeping the heart beating, keeping blood pressure right, keeping blood oxygenation within the very narrow bounds that are compatible with continuing living.
And prediction is a very important way of exerting control to make it robust. If I stand up now, then my blood pressure would normally fall and I might faint. The reason I don’t faint when I stand up is my brain is anticipating that standing up will lower blood pressure, and it’s increasing blood pressure so that it actually stays the same.
So this is anticipatory control, or allostasis is the more general term: you change things to achieve stability in the long run. You can think of economics doing something similar: you know, change interest rates to try and keep inflation the same over time — usually unsuccessfully. The brain does a better job when it comes to standing up.
So this is a very deep-seated reason why brains might have evolved this ability to do predictive inference. Because now, instead of using predictive inference to figure out what’s there, it’s basically saying that there’s a goal, there’s a set point: “I want blood pressure to be X.” And in this whole framework of inference, this serves as a prior, right? But now, instead of just as a starting point for what my brain should believe, it’s a goal, it’s an endpoint. So instead of updating the prior to fit the data, you now update the data to fit the prior.
So how can I do that? Well, my brain changes the constriction of blood vessels so that the blood pressure stays in the place it’s expecting to be. In predictive processing, this is called “active inference”: the use of action to fulfil expectations, rather than updating expectations in the face of data.
I see what brains do as this delicate shifting balance between these two ways in which predictive processing can unfold. If you spin me around, take me to a different room and open my eyes, it’s all going to be about updating the priors with new sensory data. “Where am I? What’s going on?” But if I then stand up from the chair, it’s going to be the other thing. It’s going to be, “I don’t care what’s going on. I want my blood pressure to be X. So I’m going to take the action needed to do X.”
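A minimal sketch of the two modes Anil contrasts here, with made-up scalars and nothing drawn from any actual active inference implementation: the same prediction-error signal can be minimised either by changing the belief (perception) or by changing the world (action).

```python
def perceptual_inference(prior: float, data: float, rate: float = 0.3,
                         steps: int = 50) -> float:
    """Minimise prediction error by updating the belief to fit the data
    ("Where am I? What's going on?")."""
    belief = prior
    for _ in range(steps):
        belief += rate * (data - belief)   # change the prediction
    return belief

def active_inference(setpoint: float, data: float, rate: float = 0.3,
                     steps: int = 50) -> float:
    """Minimise the same error by acting so the data comes to match a
    fixed prior ("blood pressure should be X"): update the data, not the prior."""
    for _ in range(steps):
        data += rate * (setpoint - data)   # change the world, e.g. constrict vessels
    return data

print(perceptual_inference(prior=0.0, data=1.0))  # belief converges to ~1.0
print(active_inference(setpoint=1.0, data=0.0))   # data is driven to ~1.0
```

The two functions are deliberately symmetric: which variable gets updated is the only difference, which is the point of the “delicate shifting balance” Anil describes.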
Luisa Rodriguez: Again, I want to make sure I understand. So on the perception side of things, we don’t actually have access to the physical things out in the world, directly into our brain. Our brain doesn’t get to be somehow directly accessing lightwaves. Lightwaves come in through the eye and then the brain has kind of figured out, through evolution, how to turn lightwaves into something it can add to its general picture of what’s out there through an understandable… I guess, by colour coding, literally —
Anil Seth: It’s very literally these lightwaves get turned into electrical impulses. Everything gets turned into electrical impulses, whether they’re soundwaves or lightwaves or neurochemicals or hormones. From the brain’s perspective, pretty much everything that’s happening — maybe not everything, because there are chemicals floating around as well, washing around — but electrical impulses, currents, fields are at least a major common currency for the brain.
I think another way to put it is: imagine being your own brain. If you just put yourself in your brain’s shoes, so to speak, for a second, it becomes a bit clearer, because in the skull there’s no light, there’s no sound: it’s dark, it’s silent. So this idea that you’d have direct access to the real world already seems to be rather strange, because light doesn’t even get into the brain as light; it gets in as something else. So what we experience as light is not light; it’s the brain’s inference about electrical impulses that are triggered by electromagnetic radiation.
Luisa Rodriguez: Right. That was really helpful. And so then the prediction part is that it’s not just taking those electrical impulses that get created… Literally every wavelength of a certain amplitude or whatever isn’t perceived as the exact same colour, because the brain is doing this thing of context. We’re in the context of a dark room, and therefore all the wavelengths I’m going to slightly shift up a bit. They’re all going to seem a bunch more like colours than what they actually are — which is like barely any colours, mostly blacks and greys or something. And so the thesis is just that the brain is doing this all the time and everywhere.
Actually, maybe it’s a good time to do an example from your TED Talk.
Here’s an audio clip you’ve played around with, so that, for me, it’s basically unintelligible:
[Brexit treated]
Then here’s the original clip:
[Brexit]
And then here’s the doctored one again:
[Brexit treated]
And I can now hear it easily! I guess that’s because my brain can now fill in the gaps in the audio using its prior belief about what’s being said there. Which is super cool, and really illustrates this concept for me.
Anil Seth: Another part of this story I think is worth dwelling on for a second is that we have this idea that the brain is never going to have direct access to the world, so everything we experience has this necessary element of interpretation.
So why prediction? I said a bit earlier that prediction is very useful for control. There’s also another way to think about it, which is that the brain — even when it’s not trying to control, obviously, but it’s just trying to figure out what’s out there — the problem looks like one of Bayesian inference. It’s trying to figure out, given some unavoidable uncertainty, “What’s my best guess? What’s the best guess of what’s going on, given some prior starting point?”
Some “prior belief,” as it’s called in Bayes. It doesn’t mean the person believes. I don’t know what my brain believes. My brain believes all kinds of crazy things about how light behaves and so on. But the idea is the brain encodes some beliefs about what’s going on — that’s the prior — and there’s some new information, which is the sensory data (in Bayes that’s the “likelihood”). And the whole idea of Bayes is that you make your best guess: you combine the prior and the likelihood and normalise and whatever, and you get the posterior — and that’s the best guess, the Bayesian posterior.
Now, it turns out that this is a very tricky computation to achieve, Bayesian inference. Analytically, it often just can’t be done; it has to be done by approximation. So you take the difference between the prediction and the incoming sensory signal — the prediction error — and you then try to minimise prediction error everywhere and all the time. So if the brain follows this principle — this gradient of changing the data or changing its predictions to continually try to minimise prediction error, which is a very simple rule for the brain to follow — it turns out that will lead to the whole thing approximating Bayesian inference.
Luisa Rodriguez: Wow, that’s fascinating.
Anil Seth: So that’s another reason why we can think of this being a very general principle for the brain to follow.
Now, as with all these things, there’s a lot of nuance and detail. And at the front edge of this research, people argue whether we should indeed think about the brain as being Bayesian everywhere and all the time. But I think as an intuitive framing it’s very helpful. That’s why the brain is engaged in this dance of prediction and prediction error: it’s its way of approximating Bayesian inference — of making an inference to the best explanation, its best guess.
Luisa Rodriguez: Right.
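For the mathematically inclined, here is a tiny worked version of the claim above, under toy Gaussian assumptions of my own choosing (the textbook conjugate-Gaussian case, not anything specific to Anil’s models): the exact Bayesian posterior mean is a precision-weighted average of prior and data, and simple gradient descent on precision-weighted prediction error converges to the same answer.

```python
# Prior belief about some quantity, and one noisy sensory observation.
mu_prior, prec_prior = 0.0, 1.0   # prior mean and precision (1/variance)
x_sensed, prec_sense = 2.0, 4.0   # sensed value and sensory precision

# Exact Bayes: the posterior mean is the precision-weighted average.
posterior = (prec_prior * mu_prior + prec_sense * x_sensed) / (prec_prior + prec_sense)

# Approximate Bayes: descend the gradient of the squared, precision-weighted
# prediction errors (error against the prior plus error against the senses).
mu = mu_prior
for _ in range(1000):
    grad = prec_prior * (mu - mu_prior) + prec_sense * (mu - x_sensed)
    mu -= 0.01 * grad

print(posterior, mu)   # both ~1.6: error minimisation recovers the posterior
```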
How the brain makes sense of the body it’s within [00:21:33]
Luisa Rodriguez: OK, so you gestured earlier about how the brain is also doing this kind of prediction thing to make sense of our internal body. Can you talk more about that?
Anil Seth: The basic idea is very similar. We mentioned just a minute ago that the brain is making its best guess of the causes of sensory signals that come from the world. And the claim is that that’s what we experience, the best guess. But if you think about the body from the perspective of the brain, the interior of the body, it’s also isolated: the brain doesn’t have direct access to what’s going on inside the body. It also has to infer, it has to make a best guess.
So the idea here is that the same principle applies: in order to make sense of the signals coming from inside the body, the brain is continually making predictions and then updating these predictions using sensory data. But now these are sensory signals from the interior of the body, conveying things like blood pressure and heart rate and gastric tension and levels of different chemicals in the body and so on. Collectively, this is called interoception — not introspection, which is thinking about thinking — but interoception, which is perception of the interior milieu.
And the upshot of this — and this is the claim, which is very hard to test, because it’s much harder to do precise experiments inside the body than it is when you show people images in a well-controlled fashion in the lab; I’ve been making this claim along with a couple of others for over 10 years now — is that emotion is the result of this process. So what we experience as an emotion is the content of the brain’s best guess about sensory signals coming from inside the body, this process of interoceptive inference.
And it’s actually just a predictive processing gloss on a very old story about emotion from William James and Carl Lange: that emotions are appraisals of physiological state. So this goes back a long time, that emotions are what happens when the brain perceives the interior of the body in some wider context.
Luisa Rodriguez: Yeah. Can you give a specific example? Is anxiety an example where you can describe a scenario where this is where this might be the case?
Anil Seth: Yes. Well, I think it’s the case all the time, but where it’s particularly plausible are things like anxiety and related emotions.
So there’s a classic study from the 1970s, the kind of thing you’d never get ethics for these days, by Dutton and Aron. They got a bunch of students to walk over bridges. One bridge was a very stable bridge, low over a river, very non-scary. The other bridge was very rickety, very high above a raging torrent, a pretty scary bridge to walk over. So these students would walk over the bridge, and they’re all male students, and on the other side — this is why you’d never get ethics for this — was an attractive woman holding a questionnaire.
And they go through some questions, and at the end of that questionnaire, the woman would give each of the guys her phone number and say, “If you have any questions about the study, just give me a call.” And of course, that was the object of the study: how many of the guys would make the call, ask the woman out for a date or something like that? It turns out that the guys who went over the rickety bridge were much more likely to make the call.
The interpretation is that they’d misread the physiological arousal — which was actually caused by walking over the scary bridge — as some kind of attraction, some sexual tension, maybe. And so they felt this emotion because the emotion is not just a transparent readout of what’s in the body; it’s a kind of contextualised readout of what’s going on in the body — in exactly the same way that, when we were talking about colour, the colour we experience something to be is not just a transparent property of how it reflects light. We experience the colour in the wider context of the ambient light. And the same goes for emotion.
So that’s the story. It is a story: none of this is direct evidence for the idea that there’s this continual dance of interoceptive prediction and prediction error. That’s a much harder thing to test.
Luisa Rodriguez: Yeah, yeah. And just to make sure I get the emotion idea: the thinking is something like, maybe in the way that your brain is interpreting light, it’s also interpreting different kinds of signals from inside the body, maybe like stress hormones. So it’s getting stress hormones in response to this bridge, and then it’s thinking about calling this experiment-runner person. And it’s got a bunch of stress hormones going on, so the brain is also like, “What’s going on there? Probably we are attracted to this person, because that’s the kind of internal response I get in that context as well.” Is that kind of understanding it?
Anil Seth: Yeah, that’s about right. I mean, it’s not just stress hormones. One key thing is there’s an awful lot of neural activity that goes into the brain from the body. There’s the vagus nerve, there’s all sorts of neural traffic that comes directly into the brain.
I think the tricky thing to get your head around with this, that certainly troubled me for a while, is that it seems as though things like emotion and the other aspects of what we call “the self” are even more given than things out there in the world. When we start to think about how the brain makes sense of sensory signals from the world, it kind of makes sense that there’s got to be some process of interpretation going on here.
But if we think about all the facets of experience that have to do with being you or being me — and this is really where the book goes — it’s easier just to take all that for granted, and think, “Well, that’s just me. What’s there to explain?” And the brain being part of the body, you might think there’s just nothing really to explain, that it can just take that for granted. But of course you can’t. I mean, the whole point, I think, of research into consciousness is to not take for granted things that you otherwise would. And the experience of being a self is pretty central to that.
Luisa Rodriguez: Yeah, nice. That’s really helpful. You also gave a few examples in the interoception category about just how we perceive our organs. Like, why our perception of organs might be more like fuzzy pain sometimes than like… I guess you might think that the body, in the same way that I’ve got a bunch of touch receptors on my hands, and I could pick up an apple and feel the shape of it, that you could have that for your spleen and feel the shape of it.
But instead, we have really different experiences of our internal organs than we do for outside things. And it seems like that could be because the kinds of information we need about the inside things is really different. Am I heading in the right direction?
Anil Seth: Yes, you are. You’re telling the story beautifully. This is exactly the idea. I think one of the powerful aspects of the whole predictive brain account is the resources it provides to explain different kinds of experiences: different kinds of prediction, different kinds of experience.
So when we are walking around the world with vision, the nature of visual signals and how they change as we move explains the kinds of predictions the brain might want to make about them. And visual experience has that phenomenology of objects in a spatial frame.
But then when it comes to the interior of the body, really there’s no point or need for the brain to know where the organs are, or what shape they are, or even really that there are internal organs at all.
Luisa Rodriguez: Right. Which is why mostly I don’t feel anything like organs.
Anil Seth: That’s right. I mean, I wouldn’t even know I had a spleen. I don’t know if I do. I mean, I just believe the textbooks. They could all be wrong. I’ve got no real experiential access to something like a spleen. Sometimes you can feel your heartbeat and so on, feel your stomach.
So what the brain cares about in these cases is not where these things are, but how well physiological regulation is going — basically how likely we are to keep on living. And this highlights this other aspect of prediction that we talked about: that prediction enables control. When you can predict, you can control.
And the core hypothesis that comes out of the book is really that this is why brains are prediction machines in the first place. They evolved over evolutionary time. They develop in each of us individually, and they operate moment to moment, always under this imperative to control, regulate the internal physiological milieu: keep the blood pressure where it needs to be, keep heart rate where it needs to be, keep oxygen levels where they need to be, and so on.
And if you think about things through that lens, then emotional experiences make sense, because emotional experiences are, to oversimplify horribly, variations on a theme of good or bad. They have valence: things are good or bad. Disappointment is like, things were going to be good and things are bad. Regret might be, things could have been better. Anxiety is, everything is likely to be bad.
So there’s sort of valence to everything. And that’s what you would expect if the predictions corresponding to those experiences were more directly related to physiological homeostasis, physiological regulation: when things depart from effective regulation, valence is low; when things appear to be going well, valence is higher, more positive.
Luisa Rodriguez: That’s really fascinating.
Psychedelics and predictive processing [00:32:06]
Luisa Rodriguez: I think you talk a bit in your TED Talk about psychedelics and how they might relate to predictive processing as a theory. Can you talk about that? I found that kind of fun.
Anil Seth: Yeah. Psychedelics are hugely interesting. I think their clinical efficacy is still quite controversial in many spaces. But setting that to one side, they provide this potentially insightful window into consciousness, because you have this fairly, in some ways, subtle manipulation, pharmacologically — a small amount of LSD or some other psychedelic substance — and then experience changes dramatically, so something is going on.
So I think the interesting thing here is not to take psychedelic experiences as some deeper insight into how things really are — you know, as if other filters have come off and “I see the universe truly for the first time” — but to think of them as data about the space of possible experiences and what we shouldn’t take for granted.
And you find people interpreting experiences in both ways, but I’m very much on the side of: no, they don’t give you a deeper insight into how things are in the universe, but they do help us recognise that our normal way of experiencing things is a construction, and is also not an insight directly reflecting reality as it is.
And then, if you think about the kinds of experiences people have on psychedelics, there’s a lot of hallucinations, but now these hallucinations become uncontrolled compared to the controlled hallucination that is a characteristic of normal, non-psychedelic experience. So I think predictive processing provides a natural framework for understanding at least these aspects of the psychedelic experience. I mean, there are other aspects too that could be more emotional, could be more numinous, and other such words.
But in the creation of visual experiences, it does seem that the brain’s predictions start to overwhelm the sensory data, and we begin to experience the acts of perceptual construction itself in a very interesting way. I remember staring at some sort of clouds and just seeing them turn into people and scenes in ways which seemed almost under some kind of voluntary control, although I didn’t have much voluntary control at the time. But this makes sense to me from the perspective of perception as a controlled hallucination becoming uncontrolled.
In the lab we’ve done some studies now where we’ve built computational models of predictive perception, and then screwed around with them in various ways to try and simulate the phenomenology of psychedelic hallucinations, but also other kinds of hallucinations that people have in Parkinson’s disease, in dementia, and in other things. There are different kinds of hallucinations. So what we’re trying to do is get quite granular about the phenomenology of hallucination, and tie it down to particular differences in how this predictive process is unfolding.
Luisa Rodriguez: And so the way that the hallucination is becoming uncontrolled is because the psychedelic substance is kind of breaking the predictive process? Correct me if I’m wrong.
Anil Seth: I think that’s the idea. It’s hard to know exactly. There’s a bit of a gap still. On the one hand, what psychedelics do at the pharmacological level, the molecular level, is pretty well understood: they act as agonists at a particular serotonin receptor, the 5-HT2A receptor. We know where these serotonin receptors are in the brain. That’s what they do at that level. And then we kind of know what they do at the level of experience: everything changes. Many things change, at least.
So what connects the two? I think that’s the really interesting area. So the hypothesis is that at least part of the story can indeed be told: it must be something about their mechanism of action at these serotonin receptors that disrupts this process of predictive inference. But exactly how and why is still an open question. Some colleagues of mine at Imperial College have done some work on this, trying to simulate some predictive coding networks and model how they may get disrupted under psychedelics. But something like that, I think, is going on.
Luisa Rodriguez: Super, super cool.
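One loose way to caricature the “controlled hallucination becoming uncontrolled” idea in code (a toy weighting of my own, not one of the computational models from Anil’s lab mentioned above): treat a percept as a precision-weighted blend of top-down prediction and bottom-up sensation, and watch what happens when the prediction’s weight is turned up.

```python
def percept(prediction: float, sensation: float, prior_weight: float) -> float:
    """Blend a top-down prediction with bottom-up sensory evidence."""
    return prior_weight * prediction + (1.0 - prior_weight) * sensation

cloud_signal = 0.0   # sensory evidence: "just a cloud" (toy scale: 0=cloud, 1=face)
face_prior = 1.0     # the brain's prediction: "that's a face"

print(percept(face_prior, cloud_signal, prior_weight=0.2))  # ~0.2: controlled
print(percept(face_prior, cloud_signal, prior_weight=0.9))  # ~0.9: prediction overwhelms data
```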
Blindsight and visual consciousness [00:36:45]
Luisa Rodriguez: OK, turning to another topic. I’m especially interested in — and I think our listeners are especially interested in — whether we can identify consciousness in other animals, and in machines, potentially. I feel like that would get me a lot of the way in knowing how to prioritise the welfare of nonhuman minds, and think about what policies we want in place to protect nonhuman minds.
So I’ve been super excited to ask you a bunch of questions looking at the state of neuroscientific research into consciousness: What do we actually know about how it works? And how much progress have we made on identifying the associated mechanisms in the physical brain?
I guess there will be lots of angles on this question, but I thought it’d be interesting to start with some kind of early groundbreaking studies in consciousness — that I think some people may have heard of, but that I think I’d at least not really thought about their full conclusions for consciousness. These are experiments like the split-brain experiments, the blindsight experiment.
So if you’re happy for me to dive in there, I’d love to start with blindsight.
Anil Seth: Sure. These are a wonderful series of experiments. It’s a very evocative term, and they track what people can do after a particular kind of brain damage. These are people that had brain damage to the visual cortex and particular parts of the visual cortex. The key observation in blindsight was that people with this condition would report that they had no visual experience — that they were essentially blind; they did not experience seeing anything — however, they were still able to behave in ways that seemed to rely on vision.
There are a couple of examples of this. There’s a famous blindsight patient called “DB.” (In neurology, patients are always known by their initials; HM is another famous one in the study of memory.) DB was given pieces of paper and asked to post them through a slot like a letterbox, which could either be horizontal or vertical. And he would say that he couldn’t see the slot, so how could he do the task? But if you asked, “Well, just guess. Just do it anyway,” he would get it right most of the time — but he would not know how he was doing it.
In another example, a person with blindsight was able to walk down a corridor with lots of furniture strewn about — again, while reporting not being able to see anything.
So this is kind of fascinating. What it shows is that not everything that is visual is consciously visual. It’s really led to this idea of multiple visual pathways, some of which underpin our conscious visual experience — and these tend to have more to do with what things are: their identity and appearance — and another pathway, which is more to do with visually guided behaviour, and is not always or necessarily implicated in consciousness.
Now, the problem with experiments that depend on lesions, on brain injuries, is that we’re really damaging a system which in the rest of us is working in a very integrated way. So we also have, in neuroscience, these ideas of two visual pathways: the what pathway and the where pathway. But most of the time we’re conscious of what things are and where they are and how they’re moving. So I don’t think that the two things are exactly the same.
But blindsight certainly shows that you can strip away the conscious perception aspect and still leave something behind. So that gives some idea that maybe the part that was damaged really was implicated in the conscious experience itself.
But just to bring it up to date: the great thing about blindsight experiments is they’re pretty dramatic, right? People still behave visually while claiming that they’re blind, whereas in the lab we tend to do these very subtle manipulations. The challenge with a lot of them is that it’s sometimes unclear what people with blindsight mean when they say they’re blind. Are they saying they see nothing, or is it really just a very, very impaired version of normal vision? It’s kind of hard to know. And actually getting the data on what somebody’s experience is like is a very tricky problem indeed.
Luisa Rodriguez: Wait, I’m confused by that. I can absolutely imagine how lots of my conscious experience is actually not that accessible to me to report on, but I would have thought vision, even if it was incredibly impaired, something like, “I see things, but they’re a little bit fuzzy” would have been reportable. Is there some really complicated way in which it might be true that people might be having really fuzzy, vague visual experiences but not be able to consciously report on them?
Anil Seth: Well, I think it’s a good question. It’s one of those things that’s a bit hard to know. People might have internal thresholds about what counts as a reportable visual experience, and those thresholds might be set somewhere other than zero.
And this does bring us into the territory of how much we can know about our visual experiences. You sound like you’d be comfortable with the idea that we might not be able to report on all aspects of our visual experience, but less comfortable with the idea that, if there was any kind of visual experience at all, we might not be able to report that. I think I understand that; intuitively it does seem like a different kind of thing to ask whether we’re having any experience at all.
But you know, in the lab, when you do experiments showing very, very dim patches of light, it’s very hard, if you introspect on your experience, “Did I…? Was that an experience or wasn’t it?” At the edges, it becomes quite difficult to know.
Luisa Rodriguez: Huh. OK.
So, for these people: does it count as a conscious experience if their brain is taking in this input and taking actions as a result, even though they’re not aware of it? Is that consciousness, or is it purely subconscious?
Anil Seth: Well, I’m glad you raised that, because what we didn’t do yet is define consciousness.
Luisa Rodriguez: Right. Yes, let’s do that.
Anil Seth: It’s probably useful. And controversial, of course: people have their own definitions. But the definition I like is quite pragmatic. It’s from Thomas Nagel, a philosopher, who says that for a conscious organism, there is “something it is like to be” that organism. So it’s very minimal. I think, in essence, it’s just saying that consciousness is any kind of experience. If it feels like something to be me or you, there’s consciousness happening. If we’re out under general anaesthesia or dead or turned into a rock, well, there’s no consciousness there at all.
Really, it’s so simple that it’s almost circular: consciousness is any kind of experience; any kind of experience is consciousness. But you can also define it in opposition: it’s what goes away under general anaesthesia, or when you die.
So from that perspective, someone who is reporting not experiencing anything visually, yet still navigating visually or doing some visual task, they’re still conscious in the sense that, as an organism, they’re in a globally conscious state — because they’re able to talk to you and move around. And it’s just in this specific domain of vision that one would say that type of conscious content is missing.
So I would say they’re unconscious in a visual sense, but not in a global sense. And this, of course, relies on accepting their reports at face value. When they say they don’t experience anything visually, we just accept that. Of course it’s a very interesting question exactly what that means for each person in question.
Luisa Rodriguez: Yeah, yeah. That was really helpful. Is there anything else you think we can learn from these studies before I ask you about another one?
Anil Seth: Not really. Though what I should have done at the beginning is mention the key people who did these experiments: people like Larry Weiskrantz, Alan Cowey, and Beatrice de Gelder really pioneered this stuff. And it’s fascinating. It’s really fascinating work.
It’s quite rare to find people with blindsight, because when you get lesions in the brain, they’re not often circumscribed — restricted to the visual cortex exactly. This is another issue with these, if you like, natural experiments: you don’t go out and deliberately damage a human being’s brain to see what happens. So there are questions as well about how extensive the brain damage was and so on.
Now, you can do some of these experiments in animals — if you have good enough ethical justification to do it — but you also face the problem that an animal can only indirectly tell you whether it’s experiencing anything or not. So there’s a whole set of blindsight studies that were done in monkeys by Alan Cowey and Petra Stoerig, and they’re fascinating. On the one hand, you can be much more confident about which part of the visual cortex is gone, because it was defined experimentally. On the other hand, it’s rather more challenging to interpret what the monkey is, if anything, consciously experiencing.
Luisa Rodriguez: Yeah, absolutely. I had the thought to ask if there had been any animal studies done, but I assumed you couldn’t, because while you might get evidence like the monkey putting the thing through the mail slot, you don’t know whether the monkey is having a visual experience of what the slot looks like, or is just getting it right some other way. Do we just look for them otherwise behaving as if they can’t see anything?
Anil Seth: That’s right. So Petra Stoerig and Alan Cowey did a very clever experiment. I’m not sure I’ll remember the details fully, but I think the basic idea was that you can show that monkeys can indeed still perform visually guided behaviour after you damage the early visual cortex. And then you give them another task, which is you try and ask them — through giving them a reward if they get it right — to tell the difference between a visual display with something on it and a visual display with nothing on it. And they seem to not be able to do that. I think it’s something like that.
Basically there’s another aspect of the experiment which suggests that the monkeys are really unable to discriminate whether there’s something or nothing going on, which suggests that they’re not able to know whether they’re having a visual experience. It doesn’t mean that they’re not having it; it’s a level of indirection higher. They’re not able to know whether they’re having one or not, which is suggestive that if you don’t know whether you’re having a visual experience or not, then probably you’re not having one. But it is tricky to interpret.
Luisa Rodriguez: That’s fascinating. That’s going to make me want to go read about that. I’ll move us on in just a second, but do these kinds of experiments then point to a really specific part of the brain? How specific do we get information on, like, “This part of the brain seems highly correlated and possibly responsible for conscious vision”?
Anil Seth: They help a little bit. I think the search for “the part” where consciousness happens is the wrong search to be engaged in. You’re not going to find it like a little piece of magic dust underneath one fold in the cortex.
Luisa Rodriguez: These neurons.
Anil Seth: Yeah, exactly. But no, they give us some intuition, or some evidence. What these studies show is what happens if you damage V1, the early visual cortex: this is the first cortical way station on the path visual information takes through the brain. It comes in through the eyes, goes to a deep part of the brain called the lateral geniculate nucleus, and from there to the cortex. And V1 is right at the very back of your head.
If you get rid of V1, it seems as though conscious experience also goes away, but some aspects of visual behaviour still remain. So it doesn’t tell us that V1 is where consciousness happens; it just tells us that it seems to be necessary. It’s necessary but not sufficient.
So those are the kinds of things that we can infer from these lesion experiments. The tricky thing is when we try to line up different kinds of experimental data and make sense of everything in the round. But you’ll not find a single place you can isolate and say that’s where the magic happens.
Luisa Rodriguez: Right. Do you think it’s possible to pinpoint the specific locations in the brain that are responsible for consciousness?
Anil Seth: Well, there’s a lot of ways to answer that question. There are some parts of the brain where, if you damage them, then consciousness goes away entirely. Not just the specific conscious contents, but all of consciousness. And typically, these are areas lower down in the anatomical hierarchy — so brainstem regions, this bit at the base of your skull. If you’re unlucky enough to have a stroke that damages some of the regions around there, like especially these so-called midline thalamic nuclei, then you will be in a coma. So consciousness gone.
Is that where consciousness is? No, it doesn’t say that at all. In the same way that if I unplug the kettle, the kettle doesn’t work anymore — but the reason the kettle boils water is not to be found in the plug. So that’s one kind of thing you can find, but it doesn’t necessarily tell you very much.
Then, when it comes to which parts of the brain are more directly implicated in consciousness, this is, of course, where a lot of the action is in the field these days: let’s find these so-called “neural correlates of consciousness.” And there’s one surprising thing which is worth saying, because I always find it quite remarkable: everyone will say the brain is this incredibly complex object — one of the most, if not the most complex object that we know of in the universe, apart from two brains. And it’s true, it’s very complex: it has about 86 billion neurons and 1,000 times more connections.
But about three-quarters of the neurons in any human brain do not seem to care that much about consciousness. These are all the neurons in the cerebellum. I feel sorry for the cerebellum. Not many people talk about it — with all due respect to the people who spend their whole careers on it. When you hear people colloquially talking about the brain, they talk about the frontal lobes or whatever.
But the cerebellum is this mini brain that hangs off the back of your head that is hugely important in helping coordinate movement, fine muscle control. It’s turning out to be involved in a lot of cognitive processes as well, sequential thinking and so on — but just does not seem to have that much to do with consciousness. So it’s not a matter of the sheer number of neurons; it’s something about their organisation.
And the other basic observation about the brain is that different parts of it work together. It’s this fascinating balance of functional specialisation, where different parts of the brain are involved in different things: the visual cortex is specialised for vision, but it’s not only involved in vision. And the further you get up through the brain, the more multifunctional, pluripotent the brain regions become. So it’s a network. It’s a very complex network. If we’re tracing the footprints of consciousness in the brain, we need to be looking at how areas interact with each other, not just which areas are involved.
Luisa Rodriguez: That’s fascinating. We’re going to hopefully dive deep into neural correlates of consciousness soon.
Split-brain patients [00:54:56]
Luisa Rodriguez: Holding off for now, though, another classic study is from the ’60s and ’70s: the split-brain patient experiments. These are patients who have had the corpus callosum — the bundle of fibres connecting the right hemisphere to the left hemisphere — [severed], in order to treat severe epilepsy by preventing the kind of electrical storms that are apparently responsible for seizures from spreading from one hemisphere to the other. I include that just because when I was researching these studies, I was like, “Why do they sever the corpus callosum?” It seems like a weird way to treat epilepsy.
But anyways, I feel like I’ve learned bits and pieces about the split-brain findings throughout the years in popular science, but I haven’t really ever tried that hard to connect them to consciousness, and what we should take from them from a consciousness perspective. To start, do you mind talking a little bit about specifically what the researchers found in these patients?
Anil Seth: Sure. Firstly, you’re right that any kind of neurosurgery done with human beings has to be done for very good reasons. And these so-called callosotomies, or split-brain operations, were done in cases of very difficult to treat, so-called intractable epilepsy.
They were also done for a reason that makes them very relevant for consciousness, which is that it’s a surprisingly benign thing to do, or at least it seems to be. You would think cutting the brain in half would be a major thing, would have a big obvious effect, but it doesn’t — which is why it became a fairly, I wouldn’t say “common,” but unproblematic surgical procedure ethically. Because people with callosotomy, in everyday life, you often wouldn’t notice, and they often wouldn’t seem to notice. Although when I say “they,” that’s when things get interesting, because is it now a single entity?
And also, just another kind of framing thing is that it’s a bit like a historical exegesis now, because as medication for epilepsy has improved, and other forms of brain surgery have improved in people’s ability to target and remove very small parts of the brain where seizures originate, there’s been less need to do these split-brain operations — and certainly less need to do full callosotomies, where you completely segregate the two hemispheres. Many split-brain surgeries that are done are partial ones, because it turns out you can still prevent the seizure spread while not doing a full callosotomy. And of course, all things being equal, the less damage you do to a brain, the better.
So we’re unfortunately restricted a little bit to studies that were done very well for their time, but done a very long time ago. The classic studies were done by Roger Sperry and Mike Gazzaniga. I think Sperry won the Nobel Prize for his part in this. And Mike Gazzaniga is still around: I’ve met him a couple of times working in Santa Barbara, and he’s an enormously impressive figure in cognitive neuroscience.
And just to give you a flavour of the kinds of things you get in these split-brain experiments: basically, you don’t see anything unless you contrive a situation where each hemisphere has access to different information.
This is easiest to do in vision. Each visual hemifield projects to a different brain hemisphere. This is not the same as each eye projecting to a different hemisphere — it’s not that the left eye goes to the right and the right eye goes to the left; rather, the left half of each eye’s visual field goes to the right hemisphere, and the right half goes to the left hemisphere. So it’s hemifields rather than eyes. But this means there’s a very nice thing you can do: you can present information in one hemifield and it will go to one hemisphere, and you can flip it and do it the other way around.
And it turns out, in these situations, you start to see interesting things going on. For instance, if you show something to the right hemisphere of the brain, this is usually the part of the brain that does not support language. Language is one of the few things in the human brain that is very strongly lateralised; it’s normally lateralised to the left hemisphere.
There’s a lot of work in left brain/right brain stuff — you know, left brain is more analytical, right brain more holistic. And there’s a grain of truth to this, but let’s not get distracted down that road. It’s different from the split-brain thing.
But language is on the left. So if you show something to the left hemifield — so it reaches only the right hemisphere — and then ask the person as a whole what they see, the person, through their left hemisphere, will say, “Nothing. I don’t see anything.” But if you ask them to draw, then the left hand — controlled by the right hemisphere — might draw something. And [if you ask], “Why did you draw that?”, the left hemisphere might make something up — like, “Well, it’s cold outside, so I decided to draw a snowplough” — when actually the word “snow” had been shown to the right hemisphere. So it confabulates, would be the word.
This is interesting, because it clearly raises the idea: are there two parallel conscious experiences happening in one brain? This is the kind of philosophical, interesting thing. Does it challenge the unity of consciousness? Can we have two conscious subjects in one brain? I don’t think it establishes that, but it’s certainly interesting to figure out what might be really going on here.
And there’s other examples. For instance, the left hand, controlled by the right hemisphere, might start to do something like button up his shirt, and then the other hand starts to unbutton it — and you have these kind of conflicting goals, as if there’s conflicting agency between the two hemispheres. So yeah, these are the kinds of things.
And then much more recently, there is some work that was done by a former postdoc of mine called Yair Pinto with a couple of patients who did have full callosotomies. There were a few in Italy. And he found that the specific issue in these patients was the inability to integrate information between the two hemispheres. So each hemisphere could detect something across the visual field, but actually putting the information together across the hemispheres was where you saw a deficit.
Luisa Rodriguez: Huh. Yeah. So again, I have heard of these — mostly, I think, in my high school psychology classes — and every time I hear about them again, I find it really mesmerising and delightful and just fascinating. But I just want to make sure I understood: these patients report only having one stream of consciousness? They don’t flip between them, or flip between knowing and not knowing, in some almost split personality kind of way, as far as we can tell?
Anil Seth: I think that’s right. The caveat here is I have not spoken to any of these patients myself directly, and I also don’t know every paper in this area either. But that’s certainly the impression that I’ve gotten from talking to people who know a lot more than me about this: that it’s not that these people caveat their description saying, “There’s also this other experience going on over there,” or like, “I’m only having half of the experience that’s happening in this body.” No, it’s just like, “This is what I am experiencing.” That’s the kind of report that you get.
Luisa Rodriguez: I find it mind-boggling. OK, so there’s one weird thread here where this might imply something about whether half a brain — a single hemisphere — is capable of conscious experience, and whether two split hemispheres are both having separate streams of consciousness. Is there anything else to be learned from split-brain studies about the nature of consciousness?
Anil Seth: I think there’s a lot we could learn if we had a steady supply of people with fully separate hemispheres. I know there are some studies going on in Santa Barbara, which I am fascinated by. There’s a paradigm we’ve used in my lab called “intentional binding”: the phenomenon that if you make an action and it has a consequence — if I press a button and a light comes on — then, if I feel that my button push caused the light to come on, I’ll perceive the two events as drawn together in time. So my brain kind of draws together actions, or evidence for actions, and their inferred consequences in time, binding intentions together with outcomes.
So this is an indirect way of measuring the feeling of agency. And there are some issues with this, actually, which I won’t go into — my colleagues Keisuke Suzuki and Warrick Roseboom did some cool work on this. But it can be thought of as a way of assessing whether an action was intentional or not: is the consequence judged as happening closer in time to the action than it actually did?
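For readers who want the measurement logic spelled out, here is a hedged sketch of how intentional binding is typically quantified: judgement shifts between voluntary-action blocks and passive baseline blocks. Every number below is invented for illustration; this is not data from Anil’s lab.

```python
# Participants judge the time of an action and of its outcome (e.g. a tone).
# Binding shows up as the judged action drifting later and the judged
# outcome drifting earlier, compressing the perceived interval.

def mean(xs):
    return sum(xs) / len(xs)

# Judgement errors in milliseconds (reported time minus actual time).
baseline_action  = [-5, 0, 3, -2, 1]        # involuntary/passive baseline
voluntary_action = [12, 18, 15, 20, 10]     # action judged later, toward outcome

baseline_tone  = [4, -1, 2, 0, 3]
voluntary_tone = [-30, -25, -35, -28, -32]  # outcome judged earlier, toward action

action_binding  = mean(voluntary_action) - mean(baseline_action)
outcome_binding = mean(voluntary_tone) - mean(baseline_tone)

print(f"action shift:  {action_binding:+.1f} ms (positive = later)")
print(f"outcome shift: {outcome_binding:+.1f} ms (negative = earlier)")
print(f"interval compression: {action_binding - outcome_binding:.1f} ms")
```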
So one question I’d love to ask in a split-brain patient, and there are people doing this now, is: Is that something that crosses the hemispheres, or does each hemisphere have its own intentional binding? That would be another question you get at. Basically, you can adapt a lot of the experiments we might do on a fully intact person, and there are ways to try and adapt them to a split-brain design.
I think all of these things would speak to this general question of the unity of consciousness.
Luisa Rodriguez: Oh, fascinating.
Overflow experiments [01:05:28]
Luisa Rodriguez: Are there any other major early consciousness studies that we haven’t covered yet that you think are worth talking through?
Anil Seth: One other experiment I think might be worth mentioning briefly is these overflow experiments. These can be pretty fascinating. These were pioneered by a psychologist called George Sperling, I think, in the 1960s. I’m not entirely sure. And they’re all about the richness of our conscious experience. So they’re coming in from another angle on this question that came up in the blindsight studies as well: What’s the relationship between the experience we have in the moment and our ability to talk about it?
Imagine you close your eyes right now — I’m just doing it — how much could you report of what was going on in your visual field? Can you do that?
Luisa Rodriguez: I could list probably 15 things?
Anil Seth: That’s pretty good.
Luisa Rodriguez: I think. I haven’t actually tried.
Anil Seth: And we don’t know how accurate those reports would be either, right? But there’s an impression. So there’s this impression of richness that we have — and 15 things is far fewer than any estimate of the number of different things that would actually be out there.
But then, how accurate is even that 15? So Sperling did these experiments where basically he showed grids of numbers to participants. I think there were grids of a few numbers on three or four rows. They would flash up and they would disappear, and you ask people to basically report as many numbers as they can. And people can report a few. Not too many. I think four to six or something like that. That’s kind of our visual working memory.
But then he did another thing, which was to cue the row after it had already disappeared. So the numbers would be there, it would disappear, and an arrow would come up where the numbers were. And it turns out if you do that, people are much better at reporting the numbers. So it’s as if the brain indeed encoded information about the numbers, but it did so in this interesting way that’s not available just to free recall, and may have been very limited in time. If you leave too long a gap, then people can’t do it anymore.
So the fact that people couldn’t recall that many numbers might suggest that our visual experience isn’t actually as rich as we think it is, given we have this impression of seeing all the numbers in detail.
But then Sperling’s experiment says, no, that might just be a reflection of our memory capacity rather than the richness of our visual experience. And if we provide this little post cue, then things look different. So that’s been taken as evidence that our visual experience is rich, because if you probe it, it’s all there, or a lot more.
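Sperling’s inference can be put as a back-of-envelope calculation. The numbers below are illustrative stand-ins, not his published data, but the logic is his: because the cue arrives only after the display is gone, success on whichever row is cued implies the same availability for every row.

```python
rows, cols = 3, 4
total_items = rows * cols          # 12 items flashed briefly
free_recall = 4                    # items reportable with no cue

# Partial report: a cue arrives after display offset, and observers still
# get ~3 of the 4 items in the cued row. Since they couldn't have known
# which row would be cued, that rate must hold for all rows:
cued_row_correct = 3
estimated_available = (cued_row_correct / cols) * total_items

print(f"free recall: {free_recall} of {total_items} items")
print(f"partial report implies ~{estimated_available:.0f} items briefly available")
```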
So, these have been fun experiments, and I wanted to bring them up partly because I spent some time as a visiting professor at the University of Amsterdam. There’s a group there led by a colleague of mine called Victor Lamme, and his whole group was doing experiments of this sort.
And he had these other nice tweaks. Instead of just getting people to try and recall the numbers or letters that they saw, he would do this change blindness version of it, where you’d have two grids of letters or numbers or shapes or whatever they might be, and then one of them might change. In the second array, there would be a change.
And people are typically very bad at noticing whether there’s been a change when there’s a gap in between. This is a phenomenon called “change blindness” — or one articulation of that phenomenon.
But then again, if you cued in the gap, then people were much better at being able to detect what had changed. So again, it’s this idea that visual capacity is larger than it might appear if we just try and do things under free recall.
And with some postdocs in his group, who I ended up working with in my lab, Yair Pinto and Marte Otten, we asked another question: What happens if the letters that people are exposed to in this task depart from our standard expectations of what letters might be like? So this is getting a little bit into the weeds, but it’s interesting.
What we did was we flipped letters to make them mirror images of those letters. And letters are not normally mirror images; they’re the right way around. What we found was that over a period of a couple of seconds, people’s visual memory of what they saw started to revert to what they would normally expect to have seen: the right-way-around image, rather than the mirror image.
So a lot of my work in my lab is about the influence of expectations on what we perceive, and that we see what we expect to see, broadly speaking. This experiment showed that this applies to memory as well: we remember what we expect, rather than what we saw, over very short time scales.
Luisa Rodriguez: Yeah, that makes perfect sense. That’s really fascinating.
Anil Seth: And I think unlike the split-brain and the blindsight one, these experiments that Sperling pioneered, because they can be done on normal, intact, healthy human beings, have had a longer life. We can always figure out new ways to tweak them, new things we can do with them. And I think that’s an interesting contrast.
Luisa Rodriguez: Yeah. So when I first read about the Sperling overflow experiments, I remember feeling confused about what I was supposed to learn from them. And now, I feel like I sometimes fully have the understanding, and sometimes go back to being like, wait, what exactly are we learning? Why isn’t the lesson just that this memory trick means we can better remember numbers? Why does it tell us something about consciousness?
Anil Seth: I think it’s because it’s getting at this question about richness: about the relationship between the immediacy of our visual experience, which seems very rich — there are so many things in my visual experience right now, or so it seems — and our descriptions of it, which are often very impoverished when we’re asked to give them.
So this observation has sometimes been used to say that we’re mistaken about the richness of our visual experience: we only think that it’s rich; or there’s this inflation that happens, and that actually our visual experience is quite poor and we just overestimate its richness. So this is a debate that’s rumbling on and on, and the Sperling experiments really kickstarted the experimental investigation of this, showing that, actually, you can get at this. You can do these experiments which show under what conditions people can indeed report more about what they feel they experienced.
Luisa Rodriguez: Yeah, I want to see if I understand it and can say it back. So if I’m sitting in my office, and if I turn around, I have this feeling that I’m seeing hundreds and hundreds of objects. And that is an incredibly rich visual picture: there are lots of colours, there are lots of shapes, there are lots of things that I own that are mine. It feels like a very rich painting.
But because, if I were to close my eyes, I could only really correctly describe a tiny fraction of what’s there, maybe there’s some sense in which we have this feeling that we’re perceiving a rich tapestry, but really we’re only conscious of some limited amount.
And the reason that’s distinct from the possibility that we do have this rich experience but can only remember a few things is this: even though I might only be able to list 10 things accurately right now if I closed my eyes, if you came up with clever ways to prompt me to remember, say, a corner of the room, and that led me to accurately describe much more of it, that’s evidence that, yes, I really was experiencing all of that stuff. It wasn’t just my brain tricking me into thinking that I’ve got this rich experience. Does that feel right?
Anil Seth: That’s beautifully said. Yeah, that’s basically exactly the point.
Luisa Rodriguez: Cool. OK, amazing.
How much we can learn about consciousness from empirical research [01:14:23]
Luisa Rodriguez: Let’s turn to another topic. How much do you think we can learn about consciousness by studying it empirically?
Anil Seth: A lot. The one clear observation about consciousness is that it intimately depends on the brain — on other things maybe as well, but certainly on the brain. So the empirical study of the brain, of behaviour, of the interaction with the body, is going to tell us quite a lot. It may not tell us everything, but it’s certainly going to tell us a lot.
Luisa Rodriguez: In your book, you point out that a lot of people used to think that life was as mysterious as consciousness, and that there was some mysterious flame that sparked life that wasn’t biological — which I actually didn’t realise was true. Can you say more about that belief?
Anil Seth: Sure. I don’t know if it was considered in exactly the same way, but what I want to draw attention to with this parallel is that there was this sense of mystery. So there were living things in the world and there were dead things — things that had either died or had never been alive, like a rock or something like that. So the question arises: What’s the difference? What makes something alive rather than dead?
And it seemed intuitive at the time — although none of us were there, so we don’t know for sure — but it seemed intuitive at the time that this property of being alive could not be explained in terms of physics and chemistry of the day; that it was somehow beyond the kind of explanation that was within the remit of science. So the idea was that there has to be something else. There’s got to be a spark of life — an élan vital or essence vital, something like that — and that’s what explains the difference. This was the philosophy of vitalism.
I think it’s a sort of interesting parallel, because what happened in the study of life was, of course, there is no spark of life. And the idea that one might need to appeal to that kind of explanation has rather faded away — although we don’t know everything about life, and there’s still disagreement about how you even define what life is, and we have all these borderline cases like viruses and synthetic organisms, and so on. But the general idea that life is a matter of physics and chemistry and biochemistry doesn’t seem to be much in question anymore. It seems conceptually OK to think of life as within the remit of science.
So the parallel is really historical. It’s saying that there’s this idea today — and it’s not a new idea; it’s certainly been around for a lot longer — that consciousness is a little bit like life was. It seems as though consciousness exceeds the capacities of the tools we have to explain how it fits into the universe as we know it — in physics, chemistry, and now biology and psychology and neuroscience, too.
So the question is: Is this mystery real? Is consciousness really beyond the realm of current and near-future scientific methods, where we need some kind of total paradigm shift? Or are we overestimating the sense of mystery in the same way that vitalists overestimated the sense of mystery about life?
Luisa Rodriguez: Yeah. I found this analogy really helpful, because even though I know a lot about the biological systems that give rise to this life thing, I totally can imagine not knowing anything about those systems, because science just hasn’t figured it out yet, and being like, “It’s crazy that there are living things and there are dead things, and somehow some things walk around and have complex experiences in this very magical-seeming way.”
I guess it seems like there are lots of philosophers still who think that we won’t ever understand consciousness in the same way we understand life. What’s the strongest argument against your view?
Anil Seth: I think there are some good arguments against it, actually. Because I’m not saying that our understanding of consciousness will necessarily follow the way in which we understood life. It may actually be that it’s a different kind of mystery.
Of the two arguments that I think have the most force, the first is maybe the less problematic. It’s that when we study consciousness, we have the additional problem that the thing we’re trying to explain is, by its nature, private and subjective — not the kind of thing we can put on the table and dissect, and look at in the way we might a living cell or a frog, or even subatomic particles in a particle accelerator. A conscious experience is available only to the organism that has it.
And there’s even a lot of debate in philosophy and neuroscience about the extent to which that’s true: we may not even have access to our own conscious experiences in a level of detail that’s important. But certainly other people don’t have that access either. That’s one disanalogy, and I think it’s a problem, but it’s not a dealbreaker. It just means that data are harder to get and a little less reliable in the sense that there’s a level of indirectness.
I think the more challenging problem or argument against this view of why these mysteries are different is that, when you look at life, it’s still essentially a kind of functional thing. Like, you look at different molecules and they have roles and they do things, and what they do is dependent on what they are. So a lot of life is about metabolism, and metabolism makes sense only for particular kinds of stuff — sugars and carbohydrates and things like that — but it’s still a set of processes that have some causality, some functional organisation. We’re not quite sure what that functional organisation is.
But consciousness, this is something people argue about: Is it really like that? Can we explain consciousness in terms of functional organisations? Some branches of philosophy say that we can, but at least on the surface there’s still this suspicion that consciousness doesn’t seem like that. It seems to be something of a different nature.
This is why I think for many people, the most intuitive position on consciousness is something like dualism: that there’s matter, there’s material stuff — which can be incredibly rich and complex; it’s not just atoms bouncing off each other — and then we have conscious experiences. And the two things just seem very, very different.
Again, I think this is sort of begging the question. In the history of our understanding of life, life also seemed to be very, very different at the time, with the concepts and the tools people then had. So whether it’s a real difference or not, I don’t know. We’ll just have to see how mysterious consciousness still seems as the science progresses.
So it often comes down to just kicking the can down the road. And that may be true, but I think there’s a lot of progress that can be made in the meantime. One of the signs of progress in science is how our framing of the problem changes — how our questions change, not just how our answers change in response to a problem that was set in stone at one particular point. The analogy with life is again very useful: the questions we ask about life have changed. We no longer look for a spark of life or an élan vital. We have other, more interesting questions about life.
Luisa Rodriguez: Yeah. I basically want to come back to how much progress you think we can make, and will make over the next few decades, and kind of where the field is.
But holding off on that for now, I’m wondering if you’d be willing to say how much weight you put on dualism, the view you just described, versus something more like physicalism — where consciousness really does just emerge from physical things, and there’s not this serious distinction between the mind and the body?
Anil Seth: Well, I like to think of myself as a “pragmatic materialist.” It’s just a useful heuristic for doing the kind of work that is gradually chipping away at this problem of consciousness. I allow that this might not be the case, and there are plenty of other isms as well.
Dualism is intuitively appealing — I think most of the day, most of us walk around the world as intuitive dualists, feeling that there’s this difference between what’s going on in our minds and our conscious experiences, and what’s going on in our bodies and in the world — though sometimes this breaks down when we meditate, or introspect, or something else happens in our brains and bodies to make the intimate relation clear. But I think materialism has been a very successful strategy. It’s the kind of thing that scientific experiments by themselves won’t prove or disprove.
I have lots of conversations with a friend of mine called Philip Goff, who’s a well-known proponent of panpsychism, which is another ism: that we can get around this seeming mystery of how consciousness and the physical world relate by building it in at the most basic level — so that conscious experience is fundamental in the same way that mass or energy or charge is fundamental.
And Philip will always tell me that pretty much everything that I say in my work and in the book, and pretty much every other materialist neuroscientist says, is also compatible with panpsychism. And this may be true, but my response to that is always, “Well, yes, but would we have found these things out, would we have done those experiments, developed those theories, if we brought a panpsychist mindset to it?” I think historically, the trajectory of our knowledge depends on the metaphysical view that we have, even if the actual knowledge or provisional knowledge that we have is, in fact, compatible with pretty much any metaphysical position.
So yeah, I’m a pragmatic materialist. I think we ask mainly the right kinds of questions from a materialist point of view, but we also need to be careful. Because you said it in your question — which was, I apologise, now some time ago — when you said, how does consciousness emerge from the brain? And words like “emergence” can be used in many ways. Sometimes they’re used in ways which are basically equivalent to abracadabra or magic. Something happens, some point of complexity is reached, and bingo, you get consciousness.
That’s not an explanation, but it poses a challenge: How can we be more precise about emergence? What do we mean by it? How do we measure it? How do we operationalise it in a way that has explanatory and predictive grip on consciousness? That’s why I still think it’s useful. But we’ve always got to be sensitive to what are placeholders for an explanation, and what are actually explanations.
Luisa Rodriguez: Yeah. For anyone whose interest is piqued by that discussion of emergence, you did a really great interview…
Anil Seth: Yeah, it was with Sean Carroll. He’s a fantastic physicist and communicator about physics. And his new interest — fortunately for all of us — is exactly in complexity and emergence. We’ve worked in my group on emergence for a long time as well, precisely so we can kind of demystify it and make it useful, rather than you just shovel everything into the emergence box and wave a magic wand and bingo.
Luisa Rodriguez: Yeah. I really like this pragmatic materialism. If I’m understanding correctly, it’s pragmatically asking questions from a materialist perspective — where we think the physical basis of consciousness is the thing most likely to teach us about consciousness. Because it’s hard to build an experiment to test dualism — probably impossible — but maybe we will learn things by pursuing materialist research angles. And that’s what a bunch of your work is.
Which parts of the brain are responsible for conscious experiences? [01:27:37]
Luisa Rodriguez: I’d love to turn to how neuroscience is allowing us to learn more about exactly which parts of the brain might be responsible for different kinds of conscious experiences. We’ve already touched on this idea that there might be “neural correlates of consciousness,” but can you re-explain that idea in a bit more detail?
Anil Seth: Sure. This was a really important development in the recent history of attempts to understand consciousness, because until around the 1990s there had been isolated islands of really interesting work on consciousness, but still a very general suspicion of consciousness as something that could be studied within neuroscience, cognitive science, and so on. It was in the 1990s — with the advent of brain imaging, or at least the widespread availability of things like functional MRI, which allows you to localise brain activity — that this strategy of looking for the neural correlates of consciousness took off.
And the idea is very simple, it’s very pragmatic: it just says, forget about the philosophy, forget about the metaphysics. We know brain activity in practice shows interesting relationships with consciousness. When you lose consciousness, brain activity changes. When you’re conscious of X rather than Y, your brain activity changes. So let’s just look for these correlations, the footprints of consciousness in the brain.
So I think it was really productive, not because it promised to give the full answer to how and why consciousness happens, what its function is, and so on — but it gave people something very clear to do: we can design experiments that contrast conscious with unconscious conditions in various ways, and then we can look to see what in the brain changes.
This was made very popular by Francis Crick and Christof Koch, who were working in California in the ’90s at the time, but it still drives a lot of the work that’s done these days.
It gets very hard, it gets very tricky — because what you want in these situations is for the only thing that changes to be consciousness, but guaranteeing that is really hard. And how you might try to do that depends on which aspect of consciousness you’re trying to look at. So it’s evolving.
I think this actually highlights why this strategy is self-limiting in a way, because correlations can be arbitrary. The price of cheese in Wisconsin, I think, correlates with the divorce rate in France, or something like that — but it doesn’t tell you anything. At least I don’t think it tells you anything. If you only apply this strategy, thinking you’ll get to the final answer that way — “Here they are, here are the correlates; now we understand everything” — I think it’s not going to work. You also need theory.
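The cheese-and-divorce point can be made concrete. The demonstration below is my own construction, not from the interview: two completely independent trending signals (random walks) will routinely show a large correlation that means nothing at all.

```python
import random

random.seed(1)

def random_walk(n):
    """An accumulating series of independent Gaussian steps."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Correlate 500 pairs of walks that share no causal connection whatsoever.
rs = [corr(random_walk(200), random_walk(200)) for _ in range(500)]
big = sum(abs(r) > 0.5 for r in rs) / len(rs)
print(f"{big:.0%} of independent walk pairs show |r| > 0.5")
```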
So, especially within the last five or 10 years, the empirical emphasis on finding the correlates of consciousness has been increasingly accompanied by different theories which suggest what kinds of neural correlates, like which brain areas you might expect to find under which circumstances and why.
Luisa Rodriguez: Yeah, this does seem incredibly hard, and maybe impossible to conclude anything about causality? How would we ever be able to tell the difference between the parts of the brain creating the conscious experience of the colour red, and the parts of the brain that are playing an unconscious role in taking in a certain wavelength of light?
Anil Seth: That’s a very good way into this, actually, because you mentioned another thing that correlation fails to give you, which is causation. So correlations are neither explanations nor do they isolate causes. You can have a common cause and observe a correlation.
And also, the example of anaesthesia is interesting, because, sure, the difference in consciousness is very clear. It’s probably the biggest change that you can create: somebody loses consciousness entirely. But of course, many other things change too, besides just the absence of consciousness. When you put someone under general anaesthesia, a whole lot of stuff is going on. So how do you know which changes are related to the loss of consciousness, and which are to do with the loss of physiological arousal, or just the inevitable but unrelated changes that anaesthetics bring about?
When you change consciousness that much, you run into these problems. So, in fact, probably most of the work done using this method takes another approach: let’s take a person who is conscious, and who will always be conscious during this experiment, and let’s change what they’re conscious of, and then let’s study the brain correlates of that.
A very common example here is something like binocular rivalry. In binocular rivalry, you show one image to one eye and another image to the other eye… Or a hemifield is better. Don’t do this with a split-brain patient; then we’d be confusing our explanations. So this is somebody like you or me. And if you show different images — one to the left, one to the right — our conscious experience tends to oscillate between them. Sometimes we’ll see one, maybe an image of a house; sometimes we’ll see another, maybe an image of a face — yet the stimulus is exactly the same; the sensory information coming in is not changing.
So what you’ve done here is: the person is conscious in both cases, so you’re not looking at the correlates of being conscious, but you’ve got a lot more control on everything else. The sensory input is the same. So if you can look at what’s changing in the brain here, then maybe you’re getting closer to the footprints of consciousness.
But there’s another problem, which is that not everything is being controlled for here. Because, let’s say in this binocular rivalry case, and I see one thing rather than another, I also know that I see that. And so my ability to report is also changing.
Actually, there’s a better example of this. I think it’s worth saying, because this is another classic example: visual masking. For instance, I might show an image very briefly. And if I show it sufficiently briefly, or I show it sort of surrounded in time by two other images or just irrelevant shapes, then you will not consciously see that target image. As we would say in psychophysics, it is “masked.” The signal is still received by the brain, but you do not consciously perceive it. If I make the time interval between the stimulus and the mask a little bit longer, then you will see the stimulus.
Now, I can’t keep the timing exactly the same, but you can work right around this threshold so the stimulation is effectively the same — yet sometimes you see the stimulus and sometimes you don’t. And now again, you can look at the brain correlates.
Luisa Rodriguez: Oh, that’s really cool.
Anil Seth: Not of like house versus face, but in this case, seeing a house or not seeing a house. But again, in both cases the person is conscious. So there are many different ways you can try and apply this method.
And the reason I use that example is that the problem here is that, when the person sees the house — when the masking is a bit weaker — yes, they have a conscious experience. But they also engage all these mechanisms of access and report: they can say that they see the house, they press a button. So you’ve also got to wonder whether the difference you’re seeing in the brain is to do with all that stuff, not with the experience itself.
And you can just keep going. People have designed experiments where they ask people not to make any report, and try to infer what they’re seeing by clever methods: these no-report paradigms. And then other people say, “But hold on, the experiences are still reportable. So you’re not controlling for the capacity to report.” And it’s like, oh my word.
So you just keep going down this rabbit hole, and you get very clever experiments. It’s really interesting stuff. But ultimately, because correlations are not explanations, I think you’ll always find something where you can say, well, is it really about the consciousness, or is it about something else?
Luisa Rodriguez: Yeah, yeah, that was really well explained for me. So given that, how optimistic do you feel about this as a line of research? What do you see as the realistic achievable goal, if not literally pinpointing that this is where that conscious experience of the house is happening?
Anil Seth: I think it’s an important part of the enterprise. It’s not something I’m doing much of in my lab at all, but I follow this work pretty closely. It’s interesting. It’s limited also in the kind of data we can get.
And this is one of those things where you just wish there were some invention next year that would solve the problem.
When it comes to looking inside the brain, we can either do pretty well in terms of space, but badly in terms of time: so fMRI scanners, functional MRI scanners, have pretty good spatial resolution, but really appalling time resolution. Seconds — and a second in the brain is a lifetime.
Or we can use electroencephalography or magnetoencephalography. The time resolution is much better — milliseconds, which is the natural timescale of the brain, or one of them — but the spatial resolution is terrible. Better for MEG than EEG, but still pretty rubbish.
Or we can, in certain cases, stick wires into the brain — and then we know exactly where we’re recording from, so we have good space, and we get very good time resolution. But we have crappy coverage, because you’re only going to have a few wires in any single brain.
So there’s no technology out there that allows us to look in high resolution in space and time and with coverage. That’s a limitation. That is just a technological limitation. I don’t know whether it will ever be solved. There’s no sort of thing around the corner, as far as I know, that’s going to solve that.
But given that, I think it’s very useful. We will still learn a lot, for sure.
Current state and disagreements in the study of consciousness [01:38:36]
Luisa Rodriguez: Yeah. I’m curious what the current state is. Is there somewhere, like, everything we know so far, here’s our crude mapping? Do we have a couple of good correlates? Do we have more like 10? Are they mostly on visual things? Yeah, how are things going?
Anil Seth: There’s an emerging story, and also a couple of very strong and fairly blunt disagreements.
So the emerging story seems to be that early stages of perceptual cortex are not where you find the correlates. And this is interesting, if we remember what we were talking about with blindsight not long ago. The blindsight studies showed that in vision, the primary visual cortex was necessary, but they did not show that it was sufficient, and they didn’t show that it correlated with consciousness. So you just need it; you need that activity.
And a lot of these neural correlates of consciousness studies now — it really depends, though, which one you look at — but they certainly seem to suggest that you get tighter correlations with reports of consciousness as you get deeper into the brain, as things become further from the sensory periphery, more multimodal.
Now, I think this is a huge oversimplification, because I think it really depends on how you look at it: the imaging method you use and the experiment you do. The answer can change a lot. This whole binocular rivalry example we talked about: there’s a long history of people finding different things about the involvement of early visual cortex if they look in different ways.
The other reason it’s hard to give a good answer about that is I think it’s becoming increasingly clear it’s not so much about areas, but it’s about networks and their interactions.
This then highlights one of the main points of debate. So a lot of the early studies showed that activity of, say, perceptual cortex by itself was not enough: you needed to have that activity spread to parietal and frontal regions of the brain — so regions towards the sides, a little bit to the back, that’s the parietal cortex, and the frontal cortex.
So this finding was replicated a lot, pioneered by Stanislas Dehaene and others. And it became associated with a particular theory of consciousness: the global workspace theory. The idea that for some stimulus to trigger a conscious experience of the causes of that stimulus, it had to ignite activity in this broadly distributed frontoparietal brain network. These things sort of lined up quite well.
And the overall frame for this, you can call this kind of idea the “front of the brain theory”: you need to have activity in the front of the brain, otherwise it’s just unconscious processing in some ways. So that’s one set of theories and set of experimental data.
But then there’s a whole set of theories and data that push back against that, and say no, the front of the brain stuff is only needed for report — for saying what you saw — and for doing things, for engaging your whole cognitive apparatus. But it’s not really necessary for the conscious experience itself.
So this is why, in experiments like no-report studies, when people don’t have to report, the frontal activity seems — in some cases anyway — to go away. That then suggests that actually the consciousness bit is in the back of the brain, and the front is doing this other stuff that we just confuse with consciousness.
There’s recently been a big study — a so-called adversarial collaboration, which is a beautiful idea. This study was funded to directly pit competing theories against each other.
Luisa Rodriguez: Oh, cool.
Anil Seth: To have theorists come together to design experiments that would try and tease theories apart — just like back in the day in physics, when people realised you could measure the bending of starlight around the sun in order to distinguish between Newtonian physics and Einstein’s theory of relativity. So people went to observe the 1919 eclipse — Arthur Eddington was involved in this — and relativity won. But it was an experiment that could distinguish the two. It didn’t set out to prove one or the other. It was about distinguishing the two.
So this is what’s happening now in neuroscience. The problem is that the theories aren’t as specific as Newton’s theory of gravity and Einstein’s theory of gravity. They tend to be theories of different things; they make different assumptions. So we learn a lot by doing this.
The results from these studies are beginning to come out, but it’s a mixed picture. If you look at it one way, you might say that that’s really supporting the front of the brain theories; if you look at it another way, you might say it’s supporting the back of the brain theories.
Luisa Rodriguez: This is fascinating! If you’re familiar enough with them, can you give an example of the type of experiment that’s meant to distinguish between these, and how you might interpret them in both ways?
Anil Seth: I’ll give two very quick examples. The one that’s already been published is a simple experiment, and it basically just involves showing images that are very clearly above threshold. What I mean by that is there’s no ambiguity about whether you see it or you don’t. So you just look at a bunch of images. So in a sense, it’s a bit of an odd experimental design, because it’s not contrasting conscious with unconscious conditions; it’s just looking at conscious perception of images. So that’s one thing that they decided to do, and they had reasons for that.
These experiments were then conducted in independent laboratories, and many different kinds of data were recorded. And the kinds of predictions people were making were like: If the front of the brain theories — these so-called global workspace theories — are on the right track, then it should be possible to decode what the image was from neural activity in the front of the brain. Not just that you saw activity: you should be able to decode the stimulus identity, the picture identity. There are also predictions about, for instance, when you would see activity come on and go off. So lots of different predictions. Some of them stood up and others didn’t stand up so well.
And it was the same for the back of the brain predictions. They said that you should be able to decode stuff from the back of the brain. And that turned out to be true, but the criticism there was that we already knew that was very likely to work, and that could work for many reasons. It’s not specific enough to that theory. So that’s been one example of just in a way how hard it is to design experiments that try to tease apart these theories.
There’s another one I will just mention very briefly, because I’ve been involved as an advisor — not as anyone planning or doing the work, but it’s been fun to be an advisor. So one theory, called integrated information theory, is a very interesting and strange theory.

It makes a very strange prediction: that there should be a difference in your conscious experience between two cases.

Case one: let’s say in visual cortex, some neurons are just not firing. Maybe you’re looking at two points of light separated by some empty space, so there’s nothing in the middle. And the whole bunch of neurons that respond to the bit in the middle are, let’s assume, totally quiet: they’re not firing at all. So that’s case one.
Case two is the neurons in the middle that were not firing because there’s nothing stimulating them are now prevented from firing, maybe inhibited optogenetically or whatever. They’re just prevented from doing anything.
Intuitively, it’s very hard to think that would make a difference, because in both cases these neurons are not firing, so they’re not influencing any downstream activity by firing. But integrated information theory predicts that that will make a difference, that there is a substantive difference between inactive neurons and inactivated neurons.
And this is great. I mean, I think the theory is probably wrong, but I love the fact that it makes this specific prediction. It’s very hard to test, because in the real world, in the real brain, nothing is ever fully quiet. So even in this case where there’s no stimulation happening, of course there’s background activity and so on, so it’s very hard to actually do.
But I like the fact that it’s a counterintuitive prediction that would not arise from the other theories being tested here, that we can at least think about how to design experiments. So the people running these experiments in Amsterdam are using mice and optogenetics to try to get at something a bit like this.
Luisa Rodriguez: Cool. And to make sure I understand the thinking: I don’t know if a good analogy is something like the zeros and ones in a computer, where maybe the zeros are not just not being used; they tell you something, so the zeros and the ones together make up a complete picture. It’s not like they’re just off.
Anil Seth: Well, that’s almost right. Of course, the zeros matter as much as the ones in a computer. Otherwise you wouldn’t have information. Information is about the balance, the patterning of ones in amongst zeros, or the other way around.
I think that the subtle difference is that it’s not just the zeros, it’s the why they are zeros. So you have a neuron that is zero, and it’s not active. But a neuron that is inactivated is still zero, but it’s a different kind of zero because it’s a zero that could not become a one. And that’s the weird bit.
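To make that “different kind of zero” concrete, here is a toy sketch. It is emphatically not IIT’s actual Φ calculus, just a minimal Bayesian illustration of the distinction being drawn; the setup (a neuron that simply copies a hidden input `w`) is invented for this example. The point it shows: a neuron that merely happens to be silent still tells you something about its causes, whereas a clamped neuron, whose zero could never have been a one, tells you nothing.

```python
from math import log2

# Prior over the input w that normally drives the neuron: P(w=0) = P(w=1) = 0.5.
prior = {0: 0.5, 1: 0.5}

def posterior_given_silent(clamped: bool) -> dict:
    """Posterior over the past input w, given the neuron is observed at 0.

    Normally the neuron just copies its input (x = w), so x = 0 implies w = 0.
    If the neuron is clamped (inactivated), x = 0 no matter what w was, so
    observing x = 0 tells us nothing about w.
    """
    if clamped:
        likelihood = {w: 1.0 for w in prior}   # x = 0 under every w
    else:
        likelihood = {0: 1.0, 1: 0.0}          # x = 0 only if w = 0
    unnorm = {w: likelihood[w] * prior[w] for w in prior}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

def cause_information(posterior: dict) -> float:
    """KL divergence from prior to posterior, in bits: how much the neuron's
    current state specifies about its own causes."""
    return sum(p * log2(p / prior[w]) for w, p in posterior.items() if p > 0)

for clamped in (False, True):
    post = posterior_given_silent(clamped)
    kind = "inactivated (clamped)" if clamped else "inactive (just quiet)"
    print(f"{kind}: posterior = {post}, "
          f"cause information = {cause_information(post):.1f} bits")
# inactive (just quiet):  the silent state pins down w = 0  -> 1.0 bit
# inactivated (clamped):  the silent state says nothing     -> 0.0 bits
```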
Luisa Rodriguez: Oh, fascinating! That’s really cool.
Can I go back for a second and ask about the first study you mentioned? In that study, it’s just kind of wild to me that you can even imagine decoding what image someone is looking at from their neural activity, as measured, probably, with fMRI. Do you have an interpretation of how it could be possible to read the image from both the frontal and the deeper parts of the brain? Is it surprising to you that both are true, or does it seem unsurprising?
Anil Seth: I don’t think it’s that surprising. I think one of the initially surprising, but progressively less surprising findings is the possibility of decoding, or “brain reading,” you might call it. It is fascinating that you can, in fact, do this. If you train machine learning classifiers on data recorded from the brain, then you can basically tell in a lot of cases what people are hearing, seeing, and so on within a limited set.
There’s a lot of action here now. This is used in brain-machine interfaces too, because there’s a lot of clinical applications for this. If we can read out somebody’s intentions or planned movements from someone who’s paralysed, then we can allow them to speak or move when they otherwise couldn’t. So there’s really good reasons for trying to do this in practice.
Now, theoretically, it’s a little trickier to interpret this kind of thing — because the fact that the information is there in patterns of neural activity, and can be decoded by some machine learning algorithm, does not mean that it is used by the brain; it means that it’s there in the data. So in some sense, these results, as interesting as they are — and they are interesting — we have to be careful about how much they’re telling us about the power of machine learning algorithms, and how much they’re telling us about what the brain is actually doing.
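For a sense of how such a decoding analysis is set up in practice, here is a minimal sketch. The data are synthetic stand-ins for real recordings; the trial counts, channel counts, and two-category design are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for recordings: 200 trials x 50 "voxels"/channels,
# with the stimulus category (0 = face, 1 = house, say) weakly encoded
# along one direction in channel space.
n_trials, n_channels = 200, 50
labels = rng.integers(0, 2, size=n_trials)
encoding_axis = rng.normal(size=n_channels) * 0.8
activity = np.outer(labels, encoding_axis) + rng.normal(size=(n_trials, n_channels))

# Train a classifier to decode the stimulus category from the activity.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, activity, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Cross-validated accuracy well above the 0.5 chance level shows that stimulus identity is present in the signals — which, as Seth stresses above, is a fact about the data, not proof that the brain itself uses that information.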
Luisa Rodriguez: Right. So if we imagine a patient with blindsight, it could be the case that a machine learning algorithm could identify that the person is taking in the sensory input of the corridor, picking up on the objects in their way, and moving around them. And you might get an image of the objects in the corridor. But in this case, we in fact know that the person with blindsight can’t consciously see the objects. So the algorithm being able to pick up on that doesn’t entail that it’s measuring something like where the conscious experience is.
Anil Seth: Yeah. I think at the moment these kinds of experiments won’t give you definitive answers like that, but they’re still very interesting to do. You raise a really interesting experiment idea. I don’t know if it’s been done. I doubt it. But what would decoding in a blindsight person look like? Would you still be able to decode the image from activity in the visual cortex?
I suspect you might well be able to do so, which would show that being able to decode is not sufficient for consciousness: that the information being present in a region is not a guarantee that a person will have the corresponding conscious experience. This is a guess. I don’t know if this is actually true for blindsight. It would be interesting just to compare people with blindsight and people without, looking at where you can decode and where you can’t.
But there’s a lot you can do with this line of work. One thing you can also try to do is cross-decoding. So you train a classifier from information in one region and see whether it works on another region, because that tells you something about how information is encoded, if it is encoded, and the similarities of the encodings between the different regions.
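And a quick sketch of that cross-decoding logic, again on invented synthetic data: a classifier fitted on one “region” transfers to another only to the extent that the two regions encode the stimulus in a similar format (for simplicity, the two regions here have the same number of channels).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 50
labels = rng.integers(0, 2, size=n_trials)

# Two "regions" sharing the same encoding axis, plus independent noise.
shared_axis = rng.normal(size=n_channels)
region_a = np.outer(labels, shared_axis) + rng.normal(size=(n_trials, n_channels))
region_b = np.outer(labels, shared_axis) + rng.normal(size=(n_trials, n_channels))

# Fit on region A, test on region B: high transfer accuracy suggests
# a similar stimulus encoding in the two regions.
clf = LogisticRegression(max_iter=1000).fit(region_a, labels)
print(f"A->B cross-decoding accuracy: {clf.score(region_b, labels):.2f}")
```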
So you can do pretty sophisticated and interesting things, but I don’t think any of them answer the question of, “Here are the neural correlates, and here are the sufficient conditions for consciousness.” But they gradually are painting in this picture, so I think it’s exciting stuff.
Luisa Rodriguez: Yeah, super exciting. Do you basically think that we still need a kind of unforeseen paradigm shift before we get a better grip on consciousness?
Anil Seth: No, I don’t. But it depends what you mean by “better.” It seems tempting to think that we’ll just have this eureka paradigm-shift moment, and suddenly everything will become clear — whether it’s a new kind of physics or a stroke of philosophical genius — that will suddenly reveal the road, and then we just turn the wheel of science and all the data comes out.
I don’t think things generally work that way. I don’t think it’s going to work that way with consciousness. I don’t think it needs to work that way with consciousness. This gets back right to the start of our conversation, about the idea of taking a pragmatic view, kind of metaphysically. So I can’t rule out that a full understanding of consciousness might require some dramatically new science or philosophy, but I don’t see that we are at some kind of limit at the moment. With the tools that we have, we’ve already done a lot, and we can keep doing a lot more.
And we still have to keep asking ourselves what would constitute a satisfactory explanation. Why do we still feel unsatisfied?
Because there’s another distinctive thing about this: we’re trying to explain ourselves. The methodological problem of not having objective access to our private data has this other aspect to it too: we’re trying to give an objective explanation for something that’s intrinsically subjective. That I think induces another kind of gap, which you might want to call the “satisfactoriness gap”: that it’s never going to seem to be up to the job.
But that just is the nature of the thing we’re trying to explain. We’re happy enough with explanations in other fields of science that make no intuitive sense whatsoever, like some of quantum mechanics; and no one cares that no one really knows what black holes do. But it’s fine: the theory works. But when it comes to us, we’re much more like, “No, that’s not good enough. I need it to give me this sense of, ‘Aha! It has to be that way.'” Well, why should it? It just might not.
Luisa Rodriguez: Yeah, we might not have the capacity for having the “Aha!” OK, let’s leave that there.
Digital consciousness [01:55:55]
Luisa Rodriguez: Let’s turn to consciousness outside of humans. So I’m sympathetic to a functionalist view of consciousness, where mental states are kind of defined by their functional roles or relations, rather than by their biological makeup. To be more explicit for listeners who don’t know as much about this theory: consciousness kind of arises from the patterns of interaction among various processes, regardless of the specific materials or structures involved — meaning that a biological or artificial system could potentially be conscious if it functions in a way that meets the criteria for being conscious.
So on that view, it’s possible that we’ll end up building AI systems that are conscious if they can carry out those functions. Do you find that plausible?
Anil Seth: Well, I’m glad you said “functions” rather than “computations” — because I think that’s a difference that’s often elided, and I think it might be an important one. I’m much more sympathetic to the way you put it than the way it’s normally put, which is in terms of computation.
I think there’s actually three positions worth differentiating here. There’s many more, but for now, three is enough.
One of them is this idea of biological naturalism: that consciousness really does depend on “the stuff” in some deep, intrinsic way. And the idea here is: say we have something like a rainstorm. A rainstorm needs to be made out of air and water. You can’t make it out of cheese. It really depends on the stuff. Another classic example is building a bridge out of string cheese. You know, you just can’t do it. A bridge, to have the functional properties that it has, has to be made out of a particular kind of stuff. And with a rainstorm, it’s not even just the functional properties: being made of that stuff is what a rainstorm is, almost by definition.
So that’s one possibility. It’s often derided as being sort of magical and vitalist. Back to vitalism again: you’re just saying there’s something magic about that. Well, it doesn’t have to be magic. Saying that a rainstorm depends on rain or water is not invoking any magic. It’s saying that it’s the kind of thing that requires a kind of stuff to be that thing.
So that’s one position. As you can see, I’m a little bit sympathetic to that.
Luisa Rodriguez: Yep.
Anil Seth: Then you have functionalism, which is the broadly dominant perspective in philosophy of mind and in the neuroscience of consciousness — so much so that it’s often assumed by neuroscientists, without really even that much explicit reflection.
This is the idea that, indeed, what the brain is made of, what anything is, doesn’t actually matter. All that matters is that it can instantiate the right patterns of functional organisation: the functional roles in terms of what’s causing what. If it can do that, then it could be made out of string cheese or tin cans, or indeed silicon.
Of course, the issue with that is not all patterns of functional organisation can be implemented by all possible kinds of things. Again, you cannot make a bridge out of string cheese. You probably can’t make a computer out of it either. There’s a reason we make things out of specific kinds of things.
So that’s functionalism broadly. And it’s hard to disagree with, because at a certain level of granularity, functionalism in that broad sense kind of collapses into biological naturalism. Because if you ask, “What is a substrate?”, ultimately it’s about the really fine-grained roles that fields and atoms and things play. So you kind of get to the same place, but in a way that you wouldn’t really call functionalism; it’s about the stuff. So that’s another possibility.
And then the third possibility — which is what you hear about all the time in the tech industry, and a lot in philosophy and neuroscience as well — is that it’s not just the functional organisation; it’s the computations that are being carried out. And often these things are entirely conflated. When people talk about functionalism, they sort of mean computational functionalism — but there is a difference, because not all patterns of organisation are computational processes.
Luisa Rodriguez: Yeah, I think I’ve just done this conflation.
Anil Seth: You can certainly describe and model things computationally. I mean, there are fundamental ideas in physics and philosophy, like the Church–Turing thesis, suggesting that you can do this — but that doesn’t mean the process itself is computational.
Luisa Rodriguez: You can model a rainstorm, but it’s not a rainstorm.
Anil Seth: Absolutely, absolutely. And this has been, I think, a real source of confusion. On the one hand, functionalism is very broadly hard to disagree with if you take it down to a really low level of granularity. But then, if you mix it up with computation, you get actually two very opposite views: on the one, consciousness is a property of specific kinds of substrate, specific kinds of things; on the other, it’s just a bunch of computations, and GPT-6 will be conscious if it does the right kind of computations.
And these are very divergent views. The idea that computation is sufficient is a much stronger claim. A much stronger claim. And I think there’s many reasons why that might not be true.
Luisa Rodriguez: Yeah. That was a really useful clarification for me. Maybe let’s talk about computational functionalism in particular. So this basically is maybe a claim that I still find plausible. I’d be less confident it’s plausible than functionalism, but I still find it plausible. There are thought experiments that kind of work for me, like if you replaced one neuron at a time with some kind of silicon-based neuron, I can imagine you still getting consciousness at the end.
What do you find most implausible about computational functionalism?
Anil Seth: You’re absolutely not alone in finding it plausible. I should confess to everyone listening as well that I’m a little bit of an outlier here: I think the majority view seems to be that computation is sufficient. Although it’s interesting; it’s recently been questioned a lot more than it used to be. Just this year, really, I’ve seen increasing scepticism, or at least interrogation — which is healthy, even if people aren’t persuaded. You need to keep asking the question, otherwise the question disappears.
So why do I find it implausible? I think for several reasons. There are many things that aren’t computers and that don’t implement computational processes. I think one very intuitive reason is that the computer has been a very helpful metaphor for the brain in many ways, but it is just a metaphor — and metaphors, in the end, always tire themselves out and lose their force. And if we reify a metaphor and confuse the map for the territory, we’ll always get into some problems.
So what is the difference? Well, if you look inside a brain, you do not find anything like the sharp distinction between software and hardware that is pretty foundational to how computers work. Now, of course, you can generalise what I mean by computers and AI, but for now, let’s just think of the computers that we have on our desk or that whir away in server farms and so on.
So the separation between hardware and software is pretty foundational to computer science. And this is why computers are useful: you can run the same program on different machines, it does the same thing. So you’re kind of building in this substrate independence as a design principle. That’s why computers work. And it’s amazing that you can build things that way.
But brains, just in practice, are not like that. They were not built to be like that. They were not built so that what happens in my brain could be transferred over to another brain and do the same thing. Evolution just didn’t have that in view as a kind of selection pressure. So the wetware and the mindware are all intermingled together: every time a neuron fires, all kinds of things change — chemicals wash about, strengths of connections change. All sorts of things change.
There’s a beautiful term called “generative entrenchment” — which is maybe not that beautiful, but I like it — and it points to how things get enmeshed and intertwined at all kinds of spatial and temporal frames in something like a brain. You just do not have these clean, engineering-friendly separations. So that’s, for me, one quite strong reason.
Another reason is you mentioned this beautiful thought experiment, the neural replacement thought experiment. This is one of the major supports for this idea of substrate independence — which is very much linked to computational functionalism, by the way, because the idea that consciousness is independent of the substrate goes hand in hand with the idea that it’s a function of computation, because computation is substrate independent. That’s why computers are useful. So the two things kind of go hand in hand.
So this idea is that I could just replace one neuron at a time, or one brain cell at a time, with a silicon equivalent. And if I replace one or two, surely nothing will happen, so why not 100, why not a million? Why not 10 billion? And then I’ll behave exactly the same. So either consciousness is substrate independent, or something weird is going on and my consciousness is fading out while I’m still behaving exactly the same. So it kind of forces you onto the horns of this dilemma, supposedly. Right?

But you know, I just don’t like thought experiments like this. I just don’t like them. I don’t think we can draw strong conclusions from them. They’re asking us to imagine things which are actually unimaginable — and not just because we happen to lack the imagination: we genuinely cannot grasp what the replacement would take.

If you try to replace a single part of the brain, as we said, everything changes. So you can’t just replace it with a cartoon neuron that takes some inputs and fires an output; you’d have to make it sensitive to the gradients of nitric oxide that flow freely throughout the brain. What about all the other changes? What about the glia and the astrocytes? It just becomes like, well, if I have to replace all those too, then basically you end up… Making a brain that is functionally identical out of silicon is equivalent to making a bridge out of string cheese. You just can’t do it. And that’s not a failure of imagination.
So I don’t think you can draw strong conclusions from that thought experiment.
Luisa Rodriguez: Yeah, I’m trying to figure out why I feel sympathetic to that, and yet I still find it plausible. I think it’s something like I do just buy this computation aspect being fundamental and sufficient. Maybe not buy it, but still think it’s very plausible.
So if you imagine the functions of the neuron could be performed by a computation, and you’ve described things like, well, then you have to adjust the weights and then you have to kind of replicate the glia. I think I just do find it intuitively possible that you could write a program that replicates the behaviour of the glia and have it kind of relate to a program that replicates the behaviour of a neuron.
What is the difference between our views there? Or why does that feel so wrong to you?
Anil Seth: Well, I think you can simulate at any level of detail you want. But then we get back to this key point about is simulation the same thing as instantiation? And then you’re just assuming your answer. So I don’t think that really tells you very much.
It’s slightly different from the neural replacement thought experiment, because, yes, we can simulate everything: you can just build a bigger computer and simulate more. But maybe you won’t ever be able to simulate it precisely. We already know that even very simple systems, like three-body-problem type things, have such sensitivity to initial conditions that no matter how detailed your simulation is, its behaviour will start to diverge from the real system’s after quite a short amount of time. So even that is a little bit questionable.
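That sensitivity to initial conditions is easy to demonstrate. Below is a minimal sketch using the Lorenz system (a standard chaotic toy, standing in here for the three-body example): two simulations starting one part in a billion apart soon diverge completely.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
    """One Euler step of the Lorenz system, a classic chaotic ODE."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # initial conditions differ by 1 part in 10^9

for step in range(4001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation grows roughly exponentially until it saturates at the size of the attractor, so no finite-precision simulation tracks the “real” trajectory for long.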
But my point is, even if you could simulate it, you’re just begging the question. That’s just assuming computation is sufficient. If you simulate a rainstorm at every level of detail, it’s still not going to get wet. It just isn’t.
Luisa Rodriguez: It sounds like you don’t think this is very plausible, but I don’t get the impression you think it’s literally impossible. Do you have a take on whether we’re on track to determine whether AI systems are conscious, if it turns out computational functionalism is actually right, and we’re headed in the direction of conscious AI systems?
Anil Seth: I think it’s a critical question, and we are not ready. We’re not in a place where we can do that. I think it’s important to recognise that, because people rush to all kinds of pronouncements about AI being conscious — based mainly, I think, on assumptions and biases, rather than on reason and evidence.
You’re right: I cannot disprove the possibility of computational functionalism. It may be true. I just wish people wouldn’t take it as obviously true — take it for granted — which is what has been happening. I think that it’s not obviously true. I think it’s actually quite unlikely, but it is still possible. Fine.
So I think the best we can do at the moment, when it comes to assessing what credence we should have in AI systems being or becoming conscious, is a few things.
Firstly, we need to understand how our own biases play into these things. We tend to project consciousness onto systems that are similar to us in specific ways, in ways which we think are distinctively human. This is why language models have been so bloody seductive and disruptive and confusing to a lot of people. No one really thinks that DeepMind’s AlphaGo is conscious, or that other AI algorithms are. But people are very ready to say that language models are.
This goes back to Blake Lemoine, the Google engineer, but you hear many other people saying similar things as well. Why? It’s not that the system under the hood is very much different. What’s different is that it’s engaging with us in a different way.
We humans, we tend to think we’re at the centre of the universe. We think we’re special. And one of the things that we think makes us special is language. We also think we’re intelligent, so we tend to use intelligence as a benchmark for consciousness, and language in particular as a benchmark for consciousness. This of course — and we’ll come to this — affects how we consider the possibility of nonhuman animal consciousness too — where we might make the reverse error.
So given this sort of, I think, slightly unhealthy brew of anthropocentrism and human exceptionalism, you couple that with anthropomorphism — which is how we project human-like qualities onto things on the basis of the similarities that seem to us to matter — and it’s no surprise that people are feeling that things like language models may actually be conscious.
But that is so much a reflection of our biases, not of what really matters. I think there’s very little support for the idea that language is either necessary or sufficient for consciousness among philosophers and neuroscientists. So using it as a benchmark, even implicitly — which is what people are doing — that’s problematic.
Luisa Rodriguez: Do you think there is an approach that, if applied really well, would get us closer to the right direction?
Anil Seth: A very widely read and quite long paper by Patrick Butlin and Robert Long and many other colleagues took a slightly different approach. They said, “Let’s take our best current neuroscientific theories of consciousness, and let’s see whether the principles central to these theories are implicitly or explicitly present in AI systems.”
And this is kind of nice. It’s a useful exercise to do, because these theories were not designed as theories of AI, typically. But you can sort of see, is there a global workspace? Maybe there is, in a multimodal language model or a multimodal sort of GPT model. So you can ask that question. And to the extent that the central principles of more than one theory of consciousness are present, then you might increase your credence that maybe this AI system is conscious.
But — and I’m really pleased they did this — they caveat the whole thing by saying, “We assume computational functionalism.” This approach entirely depends on whether you think computation is sufficient. So that’s just this big unknown hovering over that whole approach. All you can do at the moment is kind of concede that that’s a conditionality.
And I think the other thing is — and this is where my interest is really heading now — let’s try and flesh out the alternatives. Let’s really try and understand whether and how the substrate matters: whether the actual biological messiness of the brain matters. And if it does matter, how and why?
Luisa Rodriguez: Yeah, yeah. Do you have any initial thoughts on that? What matters about that physical substrate messiness?
Anil Seth: Well, I think it’s an evolving story. I mean, other people have been working on this too, thinking along related lines. But I’ve been thinking about it for 10 years or so now, at least, and it really is about this predictive processing story. Now, again, you can say predictive processing is a very computational theory: you use it as a computational approximation to Bayesian inference and all that. And again, yes, we can abstract it computationally; we can abstract anything computationally, and it can be very useful.
That’s fine. It doesn’t mean that it’s computational in the brain. And in fact, there are sort of continuous, non-computational theories of how the brain does something that we can computationally model as predictive inference. So it’s really an elaboration of this story: what’s fundamentally going on in these systems that look as if they’re doing Bayesian inference?
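For the curious, here is what that computational abstraction looks like at its most minimal: a single-variable predictive-coding loop in the style of standard tutorial treatments (e.g., Bogacz’s 2017 tutorial). The generative model g(v) = v², the observation, and the variances are tutorial-style toy values, not anything from the conversation. A hidden cause is inferred by iteratively reducing weighted prediction errors.

```python
# Minimal predictive-coding loop: infer a hidden cause phi from one noisy
# sensory observation u, by gradient descent on the prediction errors.
# This is the computational abstraction, not a claim about neural mechanism.

def g(v):          # generative model: predicted sensation for cause v
    return v ** 2

def g_prime(v):    # its derivative
    return 2 * v

u = 2.0            # observed sensory input
v_prior = 3.0      # prior expectation of the cause
sigma_u = 1.0      # sensory noise variance
sigma_p = 1.0      # prior variance

phi = v_prior      # start the estimate at the prior
for _ in range(500):
    eps_u = (u - g(phi)) / sigma_u     # sensory prediction error
    eps_p = (phi - v_prior) / sigma_p  # deviation from the prior
    phi += 0.01 * (eps_u * g_prime(phi) - eps_p)

print(f"posterior estimate of the cause: {phi:.3f}")  # between prior and data
```

The fixed point of this loop is an approximate Bayesian posterior estimate of the cause, which is the sense in which a system running it “looks as if” it’s doing inference, whatever the underlying substrate is doing.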
So this is where, among others, two other whole bundles of concepts come in here. One is the free energy principle from Karl Friston. Another is ideas of things like autopoiesis in biology.
The free energy principle is going to require another six hours to talk about, so we won’t. All I’ll say about it is: what it really offers, or seems to offer — and there’s a lot of discussion; I confess I’m not fully clear on this either; it’s part of the work I’m doing with colleagues — is that it shows, or potentially shows, how this whole story of things that look as if they’re doing Bayesian inference really originates in some fundamental property of a living substrate to keep on living and to keep on regenerating its own components and distinguishing itself from what is not itself.
And if that’s the case, then there’s a really strong throughline from the mechanisms that seem to underpin our conscious experience to their sort of bottoming out in the substrate, in the nature of a living system. So this, at the very least, means we can’t understand consciousness except in light of our nature as living systems. Does it mean we can’t create consciousness unless that thing is alive? That’s a stronger claim — and I think it might be right, but I don’t think it can be demonstrated as being correct yet.
Luisa Rodriguez: Yeah. For anyone interested in this, I got like 75% of the way to understanding this through your book, and I think if I read it again, I’d get even closer. And I think it’s really worth digging into.
Anil Seth: I think about 75% is about as far as I got writing it as well. So we’re probably about the same.
Luisa Rodriguez: Then I’m probably wrong. I’m probably actually about 25% of the way, and it feels like more. But I found it really exciting and inspiring to read. It felt really new to me.
Consciousness in nonhuman animals [02:18:11]
Luisa Rodriguez: Leaving that, because I’m really, really curious to get your thoughts on animals. I think maybe I would like to start with different neuroscientific theories of consciousness. Which parts of the brain are sufficient and required for consciousness feels like it might be a really key question for thinking about which nonhuman animals we should expect to have conscious experiences — because some nonhuman animals, like insects, only have things that look much more like the very old parts of the human brain, the parts that are deeper in.
Do you have a view on which theories seem most plausible to you? Are the kind of really old parts of the brain, the subcortical parts, sufficient for consciousness?
Anil Seth: To be honest, I don’t know. But I think to help orient in this really critical discussion — critical because, of course, it has massive implications for animal welfare and how we organise society and so on — it’s worth taking a quick step back again and comparing the problem of animal consciousness with that of AI consciousness. Because in both cases there’s uncertainty, but they are very different kinds of uncertainty.
Luisa Rodriguez: Almost opposite ones.
Anil Seth: Almost exactly opposite. In AI, we have this uncertainty of, does the stuff matter? AI is fundamentally made out of something different. Animals are fundamentally the same because we are also animals.
And then there’s the things that are different. Animals generally do not speak to us, and often fail when measured against our highly questionable standards of human intelligence. Whereas AI systems now speak to us, and measured against these highly questionable criteria, are doing increasingly well.
So I think we have to understand how our psychological biases are playing into this. It could well be that AI is more similar to us in ways that do not matter for consciousness, and less similar in ways that do — and nonhuman animals the other way around. We’ve got a horrible track record of withholding conscious status and therefore moral considerability from other animals, but even from other humans. For some groups of humans, we just do this. We’ve historically done this all the time and are still doing it now.
There’s this principle in ethics called the precautionary principle: that when we’re uncertain, we should basically err on the side of caution, given the consequences. I think this is really worth bearing in mind for nonhuman animals. You could apply the same to AI and say: well, since there’s uncertainty, we should just assume AI is conscious. But I think no: the effect of bias here is so strong, and we can’t care for everything as if it’s conscious, because we only have a certain amount of care to go around.
But when it comes to nonhuman animals, they have brain regions and brain processes that seem highly analogous to the ones underpinning emotional experiences, pain, suffering, pleasure, and so on in human brains. So I think it pays to extend the precautionary principle more in that direction.

Exactly which animals are conscious, of course, we don’t know. But there are things that I think are relatively clear. If we just take mammals very broadly, from a mouse to a chimpanzee to a human, we find very similar brain structures and similar sorts of behaviours and things like that. So it seems very, very unlikely that there are some mammals that lack consciousness. I think mammals are conscious.
But even then, we’ve had to get rid of some of the things that historically you might have thought of as essential for consciousness, like higher-order reasoning and language. I mean, Descartes was infamous for this — though at the time, it was probably a very sensible move, because of the pressure he was under from the religious authorities. He was very clear that only humans had consciousness, or the kind of consciousness that mattered, and that was because we had these rational minds. So he associated consciousness with these higher rational functions.
Now people generally don’t do that. So mammals are within the magic circle. What else? Then it becomes really hard, because we have to just walk this line: we have to recognise we’re using humans — and then, by extension, mammals — as a kind of benchmark.
But you know, there might well be other ways of being conscious. What about the octopus, which Peter Godfrey-Smith has written about so beautifully? And what about a bumblebee? What about a bacterium? It’s almost impossible to say. It seems intuitive to me that some degree of neural complexity is important, but I recognise I don’t want to fall into the trap of using something like intelligence as a benchmark.
Luisa Rodriguez: Yeah. I mean, that’s just basically what I find both maddening and fascinating about this question of nonhuman animals. It feels like there’s this very frustrating slippery slope thing, where I don’t want to be overly biased toward humans, or towards structures that kind of “create consciousness,” whatever that means, in the way that ours does.
And it does seem like there might be multiple ways to do it. And over time, I’ve become much, much more sympathetic to the idea that not just birds, and not just cephalopods, but insects have some kinds of experiences. And I just find it really confusing about where and how and whether it makes sense to draw a line, or maybe that’s just philosophically nonsensical.
So I’m really drawn to this question, which is why I opened with it, of: Can neuroscience point to functions or places or parts of the brain that seem related enough to consciousness, that if we see analogous things in bees, we should update a lot on that? But I also have the impression that there’s still so much debate about subcortical and cortical theories, and which is more plausible, that maybe we’re just not there, and that’s not possible now, and might not be for a while.
Anil Seth: Jonathan Birch, who’s a philosopher at the LSE, has this beautiful new book called The Edge of Sentience, which I think is all about this. He’s trying to figure out how far we can generalise and on what basis.
I think the issue is that it seems very sensible that consciousness is multiply realisable to an extent: that different kinds of brains could generate different kinds of experience, but it’d still be experience. But to know when that’s the case, we have to understand the sort of basis of consciousness in a way that goes beyond, “It requires this or that region.”
We need to know what it is that these brain areas are doing, or being, that makes them important for consciousness — in a way that would let us say: well, we obviously don’t see a frontal cortex in a honeybee, because they don’t have that kind of brain, but their brains are doing something, or are made of the right stuff and organised in the right way, such that we can have some credence that that’s enough for consciousness.

And we don’t really have that yet. I mean, the theories of consciousness that exist are varied. Some of them are pretty explicitly theories of human consciousness, and they’re harder to extrapolate to nonhuman animals: what would a global workspace be in a fruit fly? You could make some guesses, but the theory as it stands largely assumes a cortical architecture like a human’s.

Other theories, like integrated information theory, are much clearer: wherever there is some nonzero maximum of integrated information, Φ, there’s consciousness. But it’s just impossible to actually measure that in practice. So very, very hard.
But the path, I think, is clear: the better we can understand consciousness where we are sure that it exists, the surer our footing will be where we’re less confident, because we will be able to generalise better.
So where are your areas of uncertainty? I’m always interested. Like, just for you, where are you like, “I’m not sure…”?
Luisa Rodriguez: I feel like, just through getting to learn about these topics for this show, I constantly get told these amazing facts about fish and bees, and the kinds of learning they can do, and the kinds of motivational tradeoffs they make, and the fact that they have nociception, and that nociception gets integrated into other parts of their behaviour. And that all feels really compelling to me.

Then I talk to someone who’s like, “Yeah, but a lot of that could just be happening unconsciously.” And I find it hard to know at what point it’s more plausible that they’re little robots doing things unconsciously, and at what point that story becomes less likely than the story that it’s got the lights switched on in some way: that it’s making tradeoffs because the world is complicated, and it’s evolved to have more complex systems going on so that it can survive. I just find that really confusing.
Anil Seth: Yeah. I think me too. But actually there’s a point you make which I didn’t make, so I’m grateful for you bringing that up, which is the functional point of view too. So instead of just asking which brain regions or interactions between brain regions do we see, we can ask from the point of view of function and evolution, which is usually the best way to make sense of things comparatively between animals and biology.
So what is the function of consciousness? And if we can understand more about that, then we can have another criterion for saying, which other animals do we see facing and addressing those same kinds of functions? And of course there may be other functions. We have to be sensitive to that too. But at least it’s another productive line.
And in humans and mammals, there’s no single answer. But it seems as though consciousness gets into the picture when we need to bring together a lot of different kinds of information signals from the environment in a way that is very much centred on the possibilities for flexible action — all kind of calibrated in the service of homeostasis and survival.
So automatic actions, reflexive actions, don’t seem to involve consciousness. Flexibility, informational richness, and sort of goal-directedness, those seem to be functional clues. So to the extent we see animals implementing those similar functions, I think that’s quite a good reason for attributing consciousness. But it’s not 100%.
Luisa Rodriguez: Yeah. That’s basically where I am. And it means that I now feel a lot of feelings about the bees in my garden. And mostly I feel really grateful to have learned about these topics, but I also feel really overwhelmed.
What’s next for Anil [02:30:18]
Luisa Rodriguez: Final question: What’s an idea you’ve been thinking about lately that you’re really excited about?
Anil Seth: I mean, the problem is there’s just too many of them. So I’ve just this week decided it’s time to write the second book. I’m going to try.
Luisa Rodriguez: Incredible.
Anil Seth: I think it should be a book. It’s really focusing on this question of whether AI could be conscious, and whether life matters. So what is the distinction between computers and brains? Why might that matter for the possibility of consciousness?
Luisa Rodriguez: Fantastic! I can’t wait.
Anil Seth: Just developing this thing we were talking about, this challenge to computational functionalism, and trying to really make a strong case for why life matters, I think is really theoretically exciting for me. But I think it matters in the wider context too, because there’s such a kind of heated and in many ways confused dialogue about AI, and whether it’s conscious, and what we’re going to do, and should AI be regulated?
And all of these things get massively more confused when you bring consciousness into the picture, because people start talking about singularities and Terminator situations and living forever and uploading themselves to the cloud, or giving legal person status to a chatbot, all kinds of stuff. And I just think, well, there are no 100% answers, but we really need to see this landscape clearly.
And I think this has the other effect: it reminds us what we are, as living human beings. I think we really sell ourselves cheaply if we project something as central as conscious experience into the statistical machinations of a language model. We are more than that. So this tendency to project ourselves into our technologies can be destructive and disruptive socially. But it also denudes us not just of our humanity, but of our living inheritance.
Luisa Rodriguez: Let’s leave that there. My guest today has been Anil Seth. It has been such a pleasure having you. Thank you so much.
Anil Seth: Luisa, thank you very much. It’s been a real pleasure to talk to you. I’ve really enjoyed it. Thank you very much.
Luisa’s outro [02:32:46]
Luisa Rodriguez: If you enjoyed this episode but haven’t already listened to our episode on consciousness with David Chalmers, I highly recommend you do! That’s episode #67 – David Chalmers on the nature and ethics of consciousness.
All right, The 80,000 Hours Podcast is produced by Keiran Harris.
Content editing by me, Katy Moore, and Keiran Harris.
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.