#206 – Anil Seth on the predictive brain and how to study consciousness

In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.

They cover:

  • What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.
  • Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.
  • Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.
  • Whether our theories of human consciousness can be applied to nonhuman animals.
  • Anil’s thoughts on whether machines could ever be conscious.
  • Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.
  • And much more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Highlights

How our brain interprets reality

Anil Seth: If I take a white piece of paper from inside to outside, still nicely daylight here, then the paper still looks white, even though the light coming into my eyes from the paper has changed a lot.

So this tells me that the colour I experience is not only not a property of the object itself, but it’s also not just a transparent readout of the light that’s coming into my eyes from the object. It’s interpreted in context. So if the indoor illumination is yellowish, then my brain takes that into account, and when the light is bluish, it takes that into account too.

This, by the way, is what’s at play in that famous example of the dress, which half the people in the world saw one way and half saw the other way. It turns out there are individual differences in how brains take the ambient light into account.

All this is to say that colour is one example where it’s pretty clear that what we experience is a kind of inference: it’s the brain’s best guess about what’s going on in some way out there in the world.

And really, that’s the claim that I’ve taken on board as a general hypothesis for consciousness: that all our perceptual experiences share that property; that they’re inferences about something we don’t and cannot have direct access to.

This line of thinking in philosophy goes back at least to Immanuel Kant and the idea of the noumenon, which we will never have access to: we will only ever experience interpretations of reality. And then Hermann von Helmholtz, a German polymath in the 19th century, was the first person to propose this as a semiformal theory of perception: that the brain is making inferences about what’s out there, and this process is unconscious, but what we consciously experience is the result of this inference.

And these days, this is quite a popular idea, and it’s known under different theoretical terms like predictive coding, or predictive processing, or active inference, or the Bayesian brain. There are all these different terminologies.

My particular take on it is to finesse it to this claim that all conscious contents are forms of perceptual prediction that are arrived at by the brain engaging in this process of making predictions about what’s out there in the world or in the body, and updating those predictions based on the sensory information that comes in.

And this really does flip things around. Because it seems as though the brain just absorbs the world; it just reads the world out in this kind of outside-in direction. The body and the world are just flowing into the brain, and experience happens somehow.

And what this view is saying is it’s the other way around: yes, there are signals coming into the brain from the world and the body, but it’s not that those signals are read out or just transparently reconstituted into some world, in some inner theatre. No, the brain is constantly throwing predictions back out into the world and using the sensory signals to calibrate its predictions.

And then the hypothesis — and it’s still really a hypothesis — is that what we experience is underpinned by the top-down, inside-out predictions, rather than by the bottom-up, outside-in sensory signals.
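
To make the “best guess” idea concrete, here is a minimal sketch in Python of precision-weighted prediction updating, the flavour of “Bayesian brain” scheme mentioned above. It is not a model Anil presents in the episode, and all the numbers are invented: a top-down prediction about the paper’s colour is combined with a noisy sensory signal, with each weighted by how reliable it is taken to be.

    # Toy sketch of precision-weighted prediction updating ("Bayesian brain" flavour).
    # All quantities are invented for illustration; this is not a model from the episode.

    def update_prediction(prior_mean, prior_precision, signal, signal_precision):
        """Combine a top-down prediction with a bottom-up sensory signal,
        weighting each by its precision (inverse variance)."""
        posterior_precision = prior_precision + signal_precision
        posterior_mean = (prior_precision * prior_mean
                          + signal_precision * signal) / posterior_precision
        return posterior_mean, posterior_precision

    # The brain predicts the paper's reflectance is high ("white"), while the raw
    # light signal indoors looks yellowish and dimmer.
    perceived, _ = update_prediction(prior_mean=0.9, prior_precision=4.0,
                                     signal=0.6, signal_precision=1.0)
    print(f"Perceived reflectance: {perceived:.2f}")  # ~0.84, close to the prior

Because the prior carries more weight than the indoor light signal in this toy example, the percept stays close to “white,” which is the shape of the colour-constancy example at the start of this section.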

How our brain experiences our organs

Luisa Rodriguez: I guess you might think that, in the same way that I’ve got a bunch of touch receptors on my hands and could pick up an apple and feel the shape of it, you could have that for your spleen and feel the shape of it. But instead, we have really different experiences of our internal organs than we do of outside things. And it seems like that could be because the kinds of information we need about the inside things are really different. Am I heading in the right direction?

Anil Seth: Yes, you are. You’re telling the story beautifully. This is exactly the idea. I think one of the powerful aspects of the whole predictive brain account is the resources it provides to explain different kinds of experiences: different kinds of prediction, different kinds of experience.

So when we are walking around the world with vision, the nature of visual signals and how they change as we move explains the kinds of predictions the brain might want to make about them. And visual experience has that phenomenology of objects in a spatial frame.

But then when it comes to the interior of the body, really there’s no point or need for the brain to know where the organs are, or what shape they are, or even really that there are internal organs at all.

Luisa Rodriguez: Right. Which is why mostly I don’t feel anything like organs.

Anil Seth: That’s right. I mean, I wouldn’t even know I had a spleen. I don’t know if I do. I mean, I just believe the textbooks. They could all be wrong. I’ve got no real experiential access to something like a spleen. Sometimes you can feel your heartbeat and so on, feel your stomach.

So what the brain cares about in these cases is not where these things are, but how well physiological regulation is going — basically how likely we are to keep on living. And this highlights this other aspect of prediction that we talked about: that prediction enables control. When you can predict, you can control.

And the core hypothesis that comes out of the book is really that this is why brains are prediction machines in the first place. They evolved over evolutionary time, they develop in each of us individually, and they operate moment to moment, always under this imperative to control and regulate the internal physiological milieu: keep the blood pressure where it needs to be, keep heart rate where it needs to be, keep oxygen levels where they need to be, and so on.

And if you think about things through that lens, then emotional experiences make sense, because emotional experiences are — to oversimplify horribly — variations on a theme of good or bad. They have valence: things are good or bad. Disappointment is like, things were going to be good and things are bad. Regret might be, things could have been better. Anxiety is, everything is likely to be bad.

So there’s sort of valence to everything. And that’s what you would expect if the predictions corresponding to those experiences were more directly related to physiological homeostasis, physiological regulation: when things depart from effective regulation, valence is low; when things appear to be going well, valence is higher, more positive.

What psychedelics teach us about consciousness

Anil Seth: Psychedelics are hugely interesting. I think there’s still quite a lot of controversy in many spaces about their clinical efficacy. But setting that to one side, they provide this potentially insightful window into consciousness, because you have this fairly, in some ways, subtle manipulation, pharmacologically — a small amount of LSD or some other psychedelic substance — and then experience changes dramatically. So something is going on.

I think the interesting thing here is not to take psychedelic experiences as some deeper insight into how things really are — you know, as if other filters have come off and “I see the universe truly for the first time” — but to think of them as data about the space of possible experiences and what we shouldn’t take for granted.

And you find people interpreting experiences in both ways, but I’m very much on the side of: no, they don’t give you a deeper insight into how things are in the universe, but they do help us recognise that our normal way of experiencing things is a construction, and is also not an insight directly reflecting reality as it is.

And then, if you think about the kinds of experiences people have on psychedelics, there’s a lot of hallucinations, but now these hallucinations become uncontrolled compared to the controlled hallucination that is a characteristic of normal, non-psychedelic experience. So I think predictive processing provides a natural framework for understanding at least these aspects of the psychedelic experience. I mean, there are other aspects too that could be more emotional, could be more numinous, and other such words.

But in the creation of visual experiences, it does seem that the brain’s predictions start to overwhelm the sensory data, and we begin to experience the act of perceptual construction itself in a very interesting way. I remember staring at some sort of clouds and just seeing them turn into people and scenes in ways which seemed almost under some kind of voluntary control, although I didn’t have much voluntary control at the time. But this makes sense to me from the perspective of perception as a controlled hallucination becoming uncontrolled.

In the lab we’ve done some studies now where we’ve built computational models of predictive perception, and then screwed around with them in various ways to try and simulate the phenomenology of psychedelic hallucinations, but also other kinds of hallucinations that people have in Parkinson’s disease, in dementia, and in other things. There are different kinds of hallucinations. So what we’re trying to do is get quite granular about the phenomenology of hallucination, and tie it down to particular differences in how this predictive process is unfolding.

Luisa Rodriguez: And so the way that the hallucination is becoming uncontrolled is because the psychedelic substance is kind of breaking the predictive process? Correct me if I’m wrong.

Anil Seth: I think that’s the idea. It’s hard to know exactly. There’s a bit of a gap still. On the one hand, what psychedelics do at the pharmacological level, the molecular level, is pretty well understood: they act as agonists at a particular serotonin receptor, the 5-HT2A receptor. We know where these serotonin receptors are in the brain. That’s what they do at that level. And then we kind of know what they do at the level of experience: everything changes. Many things change, at least.

So what connects the two? I think that’s the really interesting area. So the hypothesis is that at least part of the story can indeed be told: it must be something about their mechanism of action at these serotonin receptors that disrupts this process of predictive inference. But exactly how and why is still an open question. Some colleagues of mine at Imperial College have done some work on this, trying to simulate some predictive coding networks and model how they may get disrupted under psychedelics. But something like that, I think, is going on.
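
To see what “predictions overwhelming the sensory data” could look like in the same precision-weighting sketch used earlier, here is a toy example in which the prior’s precision is turned up while the sensory signal stays fixed. The numbers and the “dial” are invented; this is not the Sussex or Imperial College model, just an illustration of the general hypothesis.

    # Purely illustrative, and not the Sussex or Imperial College models: as the
    # prior's precision is turned up, the percept tracks the expectation rather
    # than the sensory signal, a cartoon of "controlled hallucination becoming
    # uncontrolled".

    def perceive(prior_mean, prior_precision, signal, signal_precision=1.0):
        total = prior_precision + signal_precision
        return (prior_precision * prior_mean + signal_precision * signal) / total

    signal = 0.2    # what the senses report (say, "mostly featureless cloud")
    prior = 0.9     # what the brain expects (say, "faces and figures")

    for prior_precision in (0.5, 2.0, 10.0):   # a made-up stand-in for the drug's effect
        percept = perceive(prior, prior_precision, signal)
        print(f"prior precision {prior_precision:>4}: percept = {percept:.2f}")
    # The percept drifts from about 0.43 towards 0.84 as the prior dominates.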

The physical footprint of consciousness in the brain

Anil Seth: There are some parts of the brain where, if you damage them, then consciousness goes away entirely. Not just the specific conscious contents, but all of consciousness. And typically, these are areas lower down in the anatomical hierarchy — so brainstem regions, this bit at the base of your skull. If you’re unlucky enough to have a stroke that damages some of the regions around there, like especially these so-called midline thalamic nuclei, then you will be in a coma. So consciousness gone.

Is that where consciousness is? No, it doesn’t say that at all. In the same way that if I unplug the kettle, the kettle doesn’t work anymore — but the reason the kettle boils water is not to be found in the plug. So that’s one kind of thing you can find, but it doesn’t necessarily tell you very much.

Then, when it comes to which parts of the brain are more directly implicated in consciousness, this is, of course, where a lot of the action is in the field these days: let’s find these so-called “neural correlates of consciousness.” And there’s one surprising thing, which it’s worth saying, because I always find it quite remarkable: everyone will say the brain is this incredibly complex object — one of the most, if not the most complex object that we know of in the universe, apart from two brains. And it’s true, it’s very complex: it has about 86 billion neurons and 1,000 times more connections.

But about three-quarters of the neurons in any human brain do not seem to care that much about consciousness. These are all the neurons in the cerebellum. The cerebellum just is like… I feel sorry for the cerebellum. Not many people talk about it — well, with all due respect to people who spend their whole careers on it. But when you hear people colloquially talking about the brain, they talk about the frontal lobes or whatever.

But the cerebellum is this mini brain that hangs off the back of your head that is hugely important in helping coordinate movement, fine muscle control. It’s turning out to be involved in a lot of cognitive processes as well, sequential thinking and so on — but just does not seem to have that much to do with consciousness. So it’s not a matter of the sheer number of neurons; it’s something about their organisation.

And the other, just basic observation about the brain is that different parts of it work together. It’s this fascinating balance of functional specialisation, where different parts of the brain are involved in different things: the visual cortex is specialised for vision, but it’s not only involved in vision. And the further you get up through the brain, the more multifunctional, pluripotent the brain regions become. So it’s a network. It’s a very complex network. If we’re tracing the footprints of consciousness in the brain, we need to be looking at how areas interact with each other, not just which areas are involved.

How to study the neural correlates of consciousness

Anil Seth: A very common example here is something like binocular rivalry. In binocular rivalry, you show one image to one eye and another image to the other eye — or a “hemifield” is better. […] And if you show different images — one to the left, one to the right — our conscious experience tends to oscillate between them. Sometimes we’ll see one, maybe an image of a house; sometimes we’ll see another, maybe an image of a face — yet the stimulus is exactly the same; the sensory information coming in is not changing.

So what you’ve done here is: the person is conscious in both cases, so you’re not looking at the correlates of being conscious, but you’ve got a lot more control over everything else. The sensory input is the same. So if you can look at what’s changing in the brain here, then maybe you’re getting closer to the footprints of consciousness.

But there’s another problem, which is that not everything is being controlled for here. Because let’s say, in this binocular rivalry case, I see one thing rather than another: I also know that I see that. And so my ability to report it is also changing.

Actually, there’s a better example of this. I think it’s worth saying, because this is another classic example: visual masking. For instance, I might show an image very briefly. And if I show it sufficiently briefly, or I show it sort of surrounded in time by two other images or just irrelevant shapes, then you will not consciously see that target image. As we would say in psychophysics, it is “masked.” The signal is still received by the brain, but you do not consciously perceive it. If I make the time interval between the stimulus and the mask a little bit longer, then you will see the stimulus.

Now, I can’t keep the timing exactly the same, but basically you can work right around this threshold so it’s effectively the same, yet sometimes you see the stimulus and sometimes you don’t. And now again, you can look at the brain correlates.

Luisa Rodriguez: Oh, that’s really cool.

Anil Seth: Not of like house versus face, but in this case, seeing a house or not seeing a house. But again, in both cases the person is conscious. So there are many different ways you can try and apply this method.

And the reason I use that example is that there’s still a problem: when the person sees the house (so the masking is a bit weaker), yes, they have a conscious experience — but again, they also engage all these mechanisms of access and report. Like they can say that they see the house, they press a button. So you’ve also got to think that maybe the difference I’m seeing in the brain is to do with all that stuff, not with the experience itself.

And you can just keep going. People have designed experiments where they now ask people not to make any report, and try to infer what they’re seeing by clever methods: these no-report things. And then other people say, “But hold on, they’re still reportable. So you’re not controlling for the capacity to be able to report.” And it’s like, oh my word.

So you just keep going down this rabbit hole, and you get very clever experiments. It’s really interesting stuff. But ultimately, because correlations are not explanations, I think you’ll always find something where you can say, well, is it really about the consciousness, or is it about something else?
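
For readers curious what “working around this threshold” can look like in practice, here is a toy one-up/one-down staircase for a masking experiment in Python, run against an invented simulated observer. It is a sketch of the general psychophysics idea, not the procedure from any particular study mentioned here.

    import random

    # Toy one-up/one-down staircase: adjust the interval between target and mask
    # (the SOA, in ms) so the observer reports seeing the target on roughly half
    # of trials. The parameters and the simulated observer are invented.

    def simulated_observer(soa_ms, threshold_ms=50.0, noise_ms=10.0):
        """Pretend observer: more likely to report 'seen' as the SOA grows."""
        return soa_ms + random.gauss(0, noise_ms) > threshold_ms

    soa = 80.0      # start with a long interval, so the target is usually seen
    step = 5.0
    history = []

    for _ in range(40):
        seen = simulated_observer(soa)
        history.append(soa)
        soa += -step if seen else step   # seen: make it harder; missed: make it easier
        soa = max(soa, 0.0)

    print(f"Estimated threshold SOA: {sum(history[-20:]) / 20:.1f} ms")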

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.