I’m inclined to think that I can imagine a being with the ability to consciously think and perceive, but no ability to feel pleasure or pain. And still, it just actually seems monstrous to me to think about killing such a being.

David Chalmers

What is it like to be you right now? You’re seeing this text on the screen, you smell the coffee next to you, feel the warmth of the cup, and hear your housemates arguing about whether Home Alone was better than Home Alone 2: Lost in New York. There’s a lot going on in your head — your conscious experiences.

Now imagine beings that are identical to humans, except for one thing: they lack conscious experience. If you spill that coffee on them, they’ll jump like anyone else, but inside they’ll feel no pain and have no thoughts: the lights are off.

The concept of these so-called ‘philosophical zombies’ was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic ‘trolley problem’:

Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is greatly reduced, or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

He asks us to consider the ‘Vulcans’. For those who’ve never seen Star Trek, Vulcans are beings who experience rich forms of cognitive and sensory consciousness; they see and hear and reflect on the world around them. But they’re incapable of experiencing pleasure or pain.

Does such a being lack moral status?

To answer this Dave invites us to imagine a further trolley problem: suppose you have a conscious human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he’s right, pleasure and suffering can’t be the only things that ground moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world’s top experts on the philosophy of consciousness. He helped return the question ‘what is consciousness?’ to the centre stage of philosophy with his 1996 book ‘The Conscious Mind’, which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four and a half hours long, outlines each contemporary answer to the mystery of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an ‘illusion’, to panpsychism, according to which it’s a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects are conscious, our treatment of them could already be an atrocity. If accurate computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to interview about these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode and our personal favourite so far.

They discuss:

  • Why is there so little consensus among philosophers about so many key questions?
  • Can free will exist, even in a deterministic universe?
  • Might we be living in a simulation? Why is this worth talking about?
  • The hard problem of consciousness
  • Materialism, functionalism, idealism, illusionism, panpsychism, and other views about the nature of consciousness
  • The story of ‘integrated information theory’
  • What philosophers think of eating meat
  • Should we worry about AI becoming conscious, and therefore worthy of moral concern?
  • Should we expect to get to conscious AI well before we get human-level artificial general intelligence?
  • Could minds uploaded to a computer be conscious?
  • If you uploaded your mind, would that mind be ‘you’?
  • Why did Dave start thinking about the ‘singularity’?
  • Careers in academia
  • And whether a sense of humour is useful for research.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Highlights

Simulations
You don’t need to believe that we actually are in a simulation to think there are interesting conclusions to be drawn from reasoning about what follows if we are. If we’re in a simulation, that tells us something about our grasp on reality, and therefore about the relationship between the mind and the world.

But I think it’s also interesting to think about the practically important questions that draw closer as we begin to spend more and more time in virtual realities and in simulated worlds. When we engage in a virtual reality, are we fundamentally engaging in a fiction? Is it a form of escapism where none of this is genuinely real, or can we live a meaningful, substantive life in a virtual reality, interacting with real objects and real people, with the kind of value that a genuine life has? I’m inclined to think that yes, we can. By thinking very seriously about simulations and virtual reality, you can actually shed some light on those practical questions about the technology.

The hard problem
How do you get from physical processing in the brain and its environment to phenomenal consciousness? Why should there be first-person experience at all? Looking at the brain from the objective point of view, you can see where there would be this processing, these responses, these high-level capacities. But on the face of it, it looks like all that could go on in the dark, in a robot, say, without any first-person experience of it. So the hard problem is just to explain why all that physical processing should give you subjective experience.

I contrast these with the easy problems, which are roughly the problems of explaining behavioral capacities and associated functions: language, learning, responding, integrating information, making global reports. We may not yet be able to explain how it is that humans do those things, but we’ve got a straightforward paradigm for doing it.

Find a neural mechanism or a computational mechanism and show how it can perform the relevant function, whether that’s producing the report or doing the integration. Find the right mechanism, show that it performs the function, and you’ve explained the phenomenon. But whereas that works so well throughout the sciences, it doesn’t seem to work for phenomenal consciousness.

Explain how it is that the system performs those functions, does things, learns, reports, integrates and so on, and it seems, prima facie, that all of that could go on in the absence of consciousness. So why is it accompanied by consciousness? That’s the hard problem.

AI

I guess one would expect to get conscious AI well before we get human-level artificial general intelligence, simply because we’ve got pretty good reason to believe there are many conscious creatures whose degree of intelligence falls well short of the human level.

So if fish are conscious, for example, you might think that if an AI gets to the degree of sophistication and information processing (or whatever the relevant factors are) present in fish, then that should be enough. And it does open up the question of whether any existing AI systems may actually be conscious. I think the consensus view is that they’re not. But the more liberal you are about ascriptions of consciousness, the more seriously we should take the chance that they are.

I mean, there is this website out there called ‘People for the Ethical Treatment of Reinforcement Learners’ that I quite like. The idea is that every time you give a reinforcement learning network its reward signal, then it may be experiencing pleasure or correspondingly suffering, depending on the valence of the signal. As someone who’s committed to taking panpsychism seriously, I think I should at least take that possibility seriously. I don’t know where our current deep learning networks fall on the scale of organic intelligence. Maybe they’re at least as sophisticated as worms, like C. elegans with 300 neurons. I take seriously the possibility that those are conscious. So I guess I do take seriously the possibility that AI consciousness could come along well before human level AGI, and that it may exist already.

Then the question, though, is how sophisticated the state of consciousness is. If it’s about as sophisticated as, say, the consciousness of a worm, I think most of us are inclined to think, “Okay, well then that brings along some moral status with it, but it doesn’t give it enormous weight in the scheme of conscious creatures compared to the weight we give humans and mammals and so on.” So I guess the question would be whether current AIs get a truly substantial moral status, but I should be open to them at least getting some relatively small moral status of the kind that, say, worms have.

Uploaded minds

I’m inclined to think an uploaded mind at least can be conscious. There are really two issues that come up when you think about uploading your mind into a computer. One is whether the result will be conscious, and the second is whether it will be me. Maybe it’ll be conscious but it won’t be me; it’ll be like creating a twin of me. I think in a way the second is the more worrying prospect, but let’s stay with the first one for now: will it be conscious? Some people think that no silicon-based computational system could be conscious because biology is required. I’m inclined to reject views like that; I don’t think there’s anything special about the biology here. One way to see that is to think about cases of gradual uploading, where you replace your neurons one at a time with silicon chips that play the same role.

I think cases like this make it particularly hard to deny consciousness at the far end: if you say that the system at the other end is not conscious, then you have to say that consciousness either gradually fades out during this process or suddenly disappears during it, and I think it’s at least difficult to maintain either of those lines. You could take the line that maybe silicon will never even be able to simulate biological neurons very well, even in terms of their effects; maybe there are some special dynamic properties that biology has that silicon could never have. I think that would be very surprising, because it looks like all the laws of physics we know about right now are computational. Roger Penrose has entertained the idea that that’s false.

But if we assume that physics is computational, that one can in principle simulate the action of a physical system, then one ought to at least be able to carry out one of these gradual uploading processes. Then someone who denies that the system on the other end could be conscious is going to have to say either that consciousness fades out in a really weird way during the process (you go through half consciousness, quarter consciousness, while your behavior stays the same) or that it suddenly disappears at some point: you replace the magic neuron and it disappears. Those are arguments I gave, well, years ago now, for why I think a silicon duplicate can be conscious in principle. Once you accept that, it looks like uploading is okay, at least where the consciousness issue is concerned.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.