Enjoyed the episode? Want to listen later? Subscribe by searching 80,000 Hours wherever you get your podcasts.

I’m inclined to think that I can imagine a being with the ability to consciously think and perceive, but no ability to feel pleasure or pain. And still, it just actually seems monstrous to me to think about killing such a being.

David Chalmers

What is it like to be you right now? You’re seeing this text on the screen, you smell the coffee next to you, feel the warmth of the cup, and hear your housemates arguing about whether Home Alone was better than Home Alone 2: Lost in New York. There’s a lot going on in your head — your conscious experiences.

Now imagine beings that are identical to humans, except for one thing: they lack conscious experience. If you spill that coffee on them, they’ll jump like anyone else, but inside they’ll feel no pain and have no thoughts: the lights are off.

The concept of these so-called ‘philosophical zombies’ was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic ‘trolley problem’:

Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is greatly reduced, or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

He asks us to consider the ‘Vulcans’. If you’ve never seen Star Trek, Vulcans experience rich forms of cognitive and sensory consciousness; they see and hear and reflect on the world around them. But they’re incapable of experiencing pleasure or pain.

Does such a being lack moral status?

To answer this Dave invites us to imagine a further trolley problem: suppose you have a conscious human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he’s right, pleasure and suffering can’t be the only conscious states that confer moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world’s top experts on the philosophy of consciousness. He helped return the question ‘what is consciousness?’ to the centre stage of philosophy with his 1996 book ‘The Conscious Mind’, which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four and a half hours long, outlines each contemporary answer to the mystery of consciousness, what it has going for it, and its likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an ‘illusion’, to panpsychism, according to which it’s a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If accurate computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to interview about these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode and our personal favourite so far.

They discuss:

  • Why is there so little consensus among philosophers about so many key questions?
  • Can free will exist, even in a deterministic universe?
  • Might we be living in a simulation? Why is this worth talking about?
  • The hard problem of consciousness
  • Materialism, functionalism, idealism, illusionism, panpsychism, and other views about the nature of consciousness
  • The story of ‘integrated information theory’
  • What philosophers think of eating meat
  • Should we worry about AI becoming conscious, and therefore worthy of moral concern?
  • Should we expect to get to conscious AI well before we get human-level artificial general intelligence?
  • Could minds uploaded to a computer be conscious?
  • If you uploaded your mind, would that mind be ‘you’?
  • Why did Dave start thinking about the ‘singularity’?
  • Careers in academia
  • And whether a sense of humour is useful for research.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Key points

Simulations
You don’t need to believe that we actually are in a simulation to think there are interesting conclusions to be drawn from reasoning about what would follow if we were. If we’re in a simulation, that tells us something about our grasp on reality, and therefore about the relationship between the mind and the world.

But I think it’s also interesting to think about the practically important questions which arise as we begin to spend more and more time in virtual realities and in simulated worlds. Are we engaging in a virtual reality, or are we fundamentally engaging in a fiction? Is it a form of escapism where none of this is genuinely real, or can we live a meaningful, substantive life in a virtual reality, interacting with real objects and real people, that can have the kind of value that a genuine life has? I’m inclined to think that yes, we can. By thinking very seriously about simulations and virtual reality, you can actually shed some light on those questions about practical technology.

The hard problem
How do you get from physical processing in the brain and its environment to phenomenal consciousness? Why is it that there actually should be first-person experience at all? When looking at the brain from the objective point of view, you can say, okay, you can see where there would be this processing, these responses, these high-level capacities. But on the face of it, it looks like all that could go on in the dark, in a robot, let’s say, without any first-person experience of it. So the hard problem is just to explain why all that physical processing should give you subjective experiences.

I contrast these with the easy problems, which are roughly the problems of explaining behavioral capacities and associated functions like language, learning, response, information integration, and global report. We may not yet be able to explain how it is that humans do those things, but we’ve got a straightforward paradigm for doing it.

Find a neural mechanism or a computational mechanism and show how it can perform the function in question, producing the report or doing the integration. Find the right mechanism that performs the function, and you’ve explained the phenomenon. But whereas that approach works so well throughout the sciences, it doesn’t seem to work for phenomenal consciousness.

Even once you explain how it is that the system performs those functions (does things, learns, reports, integrates, and so on), it seems prima facie that all that could go on in the absence of consciousness. Why is it accompanied by consciousness? That’s the hard problem.

AI

I guess one would expect to get to conscious AI well before we get human-level artificial general intelligence, simply because we’ve got pretty good reason to believe there are many conscious creatures whose degree of intelligence falls well short of the human level.

So if fish are conscious, for example, you might think that if an AI gets to the degree of sophistication in information processing, or whatever the relevant factors are, that’s present in fish, then that should be enough. And that does open up the question of whether any existing AI systems may actually be conscious. I think the consensus view is that they’re not. But the more liberal you are about ascriptions of consciousness, the more seriously we should take the chance that they are.

I mean, there is this website out there called ‘People for the Ethical Treatment of Reinforcement Learners’ that I quite like. The idea is that every time you give a reinforcement learning network its reward signal, it may be experiencing pleasure or, correspondingly, suffering, depending on the valence of the signal. As someone who’s committed to taking panpsychism seriously, I think I should at least take that possibility seriously. I don’t know where our current deep learning networks fall on the scale of organic intelligence. Maybe they’re at least as sophisticated as worms like C. elegans, with its roughly 300 neurons. I take seriously the possibility that those are conscious. So I guess I do take seriously the possibility that AI consciousness could come along well before human-level AGI, and that it may exist already.

Then the question though is, I suppose, how sophisticated the state of consciousness is. If it’s about as sophisticated as, say, the consciousness of a worm, I think most of us are inclined to think, “Okay, well then that brings along, say, some moral status with it, but it doesn’t give it enormous weight in the scheme of conscious creatures compared to the weight we give humans and mammals and so on.” So I guess then the question would be whether current AIs get a truly sophisticated moral status, but I guess I should be open to them at least getting some relatively small moral status of the kind that, say, worms have.

Uploaded minds

I’m inclined to think an uploaded mind at least can be conscious. There are really two issues that come up when you think about uploading your mind into a computer. One is: will the result of this be conscious? The second is: will it be me? Maybe it’ll be conscious but it won’t be me; it’ll be like creating a twin of me. In a way I think the second is the more worrying prospect, but let’s just stay with the first one for now: will it be conscious? Some people think that no silicon-based computational system could be conscious because biology is required. I’m inclined to reject views like that; I don’t think there’s anything special about the biology here. One way to think about that is to think about cases of gradual uploading, where you replace your neurons one at a time with silicon chips that play the same role.

Cases like this make things particularly hard: if you say that the system at the other end is not conscious, then you have to say that consciousness either gradually fades out during this process or suddenly disappears during it. I think it’s at least difficult to maintain either of those lines. You could take the line that maybe silicon will never even be able to simulate biological neurons very well, even in terms of their effects; maybe there are some special dynamic properties that biology has that silicon could never have. As for that, I think it would be very surprising, because it looks like all the laws of physics we know about right now are computational. Roger Penrose has entertained the idea that that’s false.

But if we assume that physics is computational, that one can in principle simulate the action of a physical system, then one ought to at least be able to create one of these gradual uploading processes. And then someone who denies that the system at the other end could be conscious is going to have to say either that consciousness fades out in a really weird way during this process (you go through half consciousness, quarter consciousness, while your behavior stays the same), or that it suddenly disappears at some point: you replace the magic neuron and it disappears. Those are arguments I gave, well, years ago now, for why I think a silicon duplicate can in principle be conscious. Once you accept that, then it looks like uploading is okay, at least where the consciousness issue is concerned.


Transcript

Rob’s intro [0:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.

This interview is my single favourite episode of the show so far. And it’s not just me — Keiran and Peter McIntyre both said it was their favourite interview we’ve done to date.

David Chalmers is a philosopher I’ve admired since I was an undergrad — he’s consistently entertaining and insightful, and willing to have a go at thinking through a huge range of fundamental questions. His work on theory of mind has been hugely influential, and rightly so.

Arden and I recorded one three hour session with Dave and then decided to come back for seconds, because the conversation was going so well, and there was plenty more to cover.

As a result it’s our longest episode so far, but Keiran and I at least didn’t find any of it boring. If you want to skip to a particular section, for example the key part on theories of consciousness and their practical implications, you can always use the chapters function.

Two quick notices before that though.

First we recently put out an article that might be very interesting to podcast subscribers, called Advice on how to read our advice. We go through the 8 ways people most often misunderstand our advice, and how they should approach it instead. If you might use the show to shape your career, I definitely recommend having a read. You can find a link to that in the show notes.

Second, for the last two months we’ve been publishing advice from a dozen or so people whose careers we admire, many of whom are working on the problems we’ve been focusing on on this show.

The catch is the advice is anonymised, so the people we spoke with wouldn’t have to worry about whether their employer would be happy with what they were saying, or otherwise censor themselves for reputational reasons.

We’ve released six sets of answers so far, including responses to:

  • What bad habits do you see among people trying to improve the world?
  • How risk-averse should talented young people be about their careers?
  • How have you seen talented people fail in their work?

I’ll link to those in the show notes. If you enjoy this podcast I expect you’ll enjoy the insights in the anonymous advice series as well.

Alright, without further ado, here’s my colleague Arden Koehler and me interviewing one of the most cited living philosophers, Prof David Chalmers.

The interview begins [0:02:11]

Robert Wiblin: Today, I’m speaking with Professor David Chalmers. Dave is a philosopher at New York University and director of the Center for Mind, Brain, and Consciousness. He specializes in philosophy of mind and cognitive science and is interested in the philosophy of language, epistemology, metaphysics, and philosophical questions about philosophy itself.

He’s co-director of the PhilPapers Foundation, which maintains the largest catalogue of philosophical books and papers in the world with 2.4 million entries and over 200,000 users and he’s also an honorary professor of philosophy at the Australian National University where I actually happened to first meet him about 10 years ago when I was an undergraduate.

Dave is perhaps best known for his work on the philosophy and science of consciousness, and especially for his work on finding and trying to answer the hard problem of consciousness, which we’ll talk about in a minute. He’s gone on to help pioneer the new field of consciousness studies, which lies somewhere between neuroscience, psychology, and philosophy.

And he’s also just generally a very prolific author who’s written a lot of articles and given a bunch of talks on future-related topics that matter to people who potentially want to steer the direction that humanity is going in. And, I was also surprised that Dave, you grew up in the same city as me in Adelaide, Australia.

David Chalmers: Yeah, that’s right. I lived there until I was about 20 or so.

Robert Wiblin: Yeah, me too. I guess I moved away when I was 18 to go study. Guess that a lot of good people leave Adelaide is what I say.

David Chalmers: Which football team did you support?

Robert Wiblin: Oh, neither, embarrassingly. What about you?

David Chalmers: Ah, Sturt.

Robert Wiblin: Okay, right. Yeah, actually we maybe grew up in the same suburb. I grew up in Unley.

David Chalmers: Oh really? I went to Unley High School. How about you?

Robert Wiblin: Oh, I almost went to Unley High School. I actually went to Glenunga, which is like right nearby.

David Chalmers: Yeah, I grew up in Mitcham which is just south of there.

Robert Wiblin: Yeah, I went to Mitcham Primary School.

David Chalmers: Well how about that. We probably have many mutual friends.

Robert Wiblin: Yeah. Anyway, today we’re also joined by Arden Koehler, and she’s the newest member of the 80,000 Hours research team. Coincidentally, she’s actually until now been a PhD student in philosophy at NYU where she’s been studying ethics, and so she happens to know Dave in this case by being a teaching assistant in one of his undergraduate classes. Thanks for coming on the show. Arden.

Arden Koehler: Thanks. Excited to be here.

Robert Wiblin: All right, so today we hope to get to talk about a whole lot of juicy topics like the simulation hypothesis and virtual reality, some of the work on what philosophers actually do and whether they’re succeeding at it, and the ethical implications of different theories of consciousness.

But first, as I usually ask, what are you working on at the moment and why do you think it’s really important?

David Chalmers: As always, I’m working on a whole lot of things at any given time, but I guess the biggest thing I’m working on right now is trying to finish a book on philosophical issues about virtual reality and simulated worlds and trying to approach many philosophical questions through that lens.

And why is that important? Well, theoretically I think this just provides a very productive way of shedding light on some very traditional philosophical questions about knowledge of the external world, about the nature of reality and about the value of lives. And, at the same time, it raises a whole bunch of very new philosophical questions about technologies which are coming into our lives today.

Technologies involving virtual reality and virtual worlds: why is it practically important? Well, I think people are beginning to spend more and more time in virtual worlds of various sorts and it’s easy to imagine that in the future, we’re going to have at least the option of spending a whole lot of time in virtual worlds and increasingly sophisticated virtual realities.

And I think the question that’s going to arise is, “Is this actually a meaningful good way to spend one’s life, or is there something deficient about it?” I think if you’re interested in building, for example, a better or more valuable future, you want to give some attention to some of these issues about the status of virtual worlds and thinking about what makes for the best world.

The other thing I’m thinking about are general issues about consciousness and its relationship to the physical world. Consciousness is one of the most central phenomena in our lives and one of the most ill-understood, so intellectually it’s fascinating. But again, practically for you and your listeners interested in building a better world, arguably, consciousness is one of the primary determinants of the value of our lives. Some people think it’s the only determinant, but generally people believe it’s at least one of the primary determinants of what makes a life better or worse.

So if you’re trying to think about what makes for a better world and for better lives, I think you just have to think about consciousness. And I would love to see people interested in building a better world get really seriously interested in some of these issues about consciousness. To think about how focusing on different states of consciousness can indeed play a role in helping us to build a better world.

Philosopher’s survey [0:06:37]

Robert Wiblin: So back in 2009, you ran this survey with David Bourget where you surveyed quite a lot of philosophers about what they thought about current issues in philosophy.

The bottom line is that the results suggested that there’s just very little consensus among philosophers about a lot of these questions. There were some points of agreement, but a lot of points of difference, I guess. What are the main things that you think you learned from running this survey?

David Chalmers: Good question. The main things I learned were specific answers to specific questions. We sent out emails to about 2,000 philosophers at a hundred-odd departments of philosophy in the US, Canada, Australia, New Zealand, the UK, and Europe, and we got about a 50% response rate.

So we’ve got a thousand people responding, each answering about 30 questions, which typically gave them a choice between views: for mind, physicalism or non-physicalism; for normative ethics, consequentialism, deontology, or virtue ethics; and so on. They got the option of accepting a given answer or leaning towards it.

Or there was a whole host of “other” options: the question is too meaningless to answer, they’re insufficiently familiar with the details, it all depends what you mean by a key term, and so on. Philosophers love those “other” options, but still we got enough information on a lot of these for it to be very interesting.

At a general level, it wasn’t terribly surprising: we found a lot of disagreement about the answers to these questions, and many of them ended up around 50-50. Platonism versus nominalism about abstract objects (roughly, whether abstract objects like numbers exist or not) came out about 50-50. Physicalism versus non-physicalism came out about 56% for physicalism, 28% for non-physicalism.

The rest were varieties of agnostic. The biggest consensus we got was on the external world: realism about the external world, that we know the external world exists. (Skepticism is the position that we don’t know; idealism, that it’s all in the mind.) I think we got about 80% for realism, or close to that, and maybe 5% each for skepticism and idealism.

Arden Koehler: I’m sort of relieved.

David Chalmers: If you defer to philosophers about this, we can now infer that we do know about the external world. But we actually at the same time, to quantify the degree to which we should be surprised, we took a meta-survey where we asked people for their predictions after they returned the survey in the ensuing couple of weeks. We asked people for their predictions about what the results of the survey would be.

So we could say, okay, with respect to the question, is there an analytic-synthetic distinction, that is, things which are true in virtue of the meanings of words versus things which are true in virtue of the world?

The actual answer to that question came out 70 to 30: 70% of people said there is an analytic-synthetic distinction, 30% said there’s not. But philosophers’ predictions were that it would come out 50-50, so actually, many philosophers had a false sociological belief about philosophers.

They tended to believe that the analytic-synthetic distinction is less popular and less widely believed than it actually is, and we got many results of that form where people underestimated or overestimated the popularity of certain hypotheses by up to about 20%. So that was the way of actually quantifying how surprised you might be about these results.

Now I think we also got to quantify people’s performance on the meta-survey. I’m pleased to say that I came out in the top five or so for my performance on the meta-survey; maybe I cheated a bit because I’d run some test surveys along the way. So I was relatively well informed about this.

How surprised was I by the results of the survey? Well, I was still surprised. The one that surprised me the most was the question about aesthetic value. Aesthetic value, is it objective or subjective? I was sure that a large majority of people would say it’s subjective. In fact, we got more people saying it’s objective.

Arden Koehler: Yeah. I’m surprised by that one too.

Robert Wiblin: Yeah. That’s one we pulled out to potentially discuss. I saw that and I just couldn’t believe it. This raises an interesting question: what should you do when you get the results of the survey and you just think the answer is absolutely mental? There’s a plurality saying they think aesthetic value is objective. And I’m like, do they really think that? If there were another set of beings who hated the art that we liked, and there were no other beings that liked it, would they just be mistaken? Would the art they hate in fact be the right art, the best art?

Yeah, what should one do? Should one think maybe we’re answering a different question and misunderstanding one another, or should one just be like, “Wow, these people have thought about it as much as I have and they disagree”?

Did you shift your views a lot? Maybe in reaction to the results even when you thought they were wrong?

David Chalmers: My reaction to getting a surprising result like that was to think that probably they’re understanding the question in a way different from the way that I understand it.

We very deliberately did it with just very short labels, not with long expositions of what each option means, just because that process is endless and contestable. So it may be that what people meant when they said that aesthetic value is objective is something different from what you and I meant.

Not that there are some objective standards that would apply, for example, to aliens, but maybe there are standards that apply, say, to more than one human being: somehow some works of art can be better than others, at least relative to, say, human beings’ standards, and one can make aesthetic mistakes.

It’s not totally up to an individual observer; there are certain very general norms of aesthetic appreciation. So my sense is (and I’ve talked to a few people in aesthetics about this result) that debates about whether aesthetic value is objective or subjective typically have that form, rather than being about the possibility of species with different aesthetic norms who might equally be getting things right.

So that was my reaction, and that was an issue I was less familiar with than some others, so maybe I updated on what philosophers mean by objective or subjective. In other cases, there is this general question as to whether we should defer to the results of this survey: if it turns out that most philosophers think that P, then why don’t you think P? I think very few philosophers have this reaction, and I certainly didn’t. It’s a field where disagreement is rife. These are all hard issues, and philosophers can get these things wrong. So I don’t think there was a lot of actual updating of first-order views.

Robert Wiblin: But not the philosopher who’s saying that. Other philosophers of course can get it wrong.

David Chalmers: Oh, well I think philosophers at a certain level have a certain kind of humility. At a first order level, we give arguments for our views and make them as strong as we can and accept them maybe or even are confident in them. But then at a second order level, we step back and say, “Well, these issues are really hard and we may be getting things wrong”.

That’s something like the attitude I have in doing philosophy. I think I’ve got good arguments for my views and I’ll give them. But then, at a higher order level, you can’t help but step back and say, “There’s a good chance that I’m wrong”. I still think you should try and pursue those views as well and as robustly as you can. But it does mean that for practical purposes, just say, a life and death issue depending on this, you might want to factor in a good degree of humility into your actions.

Free will [0:13:37]

Arden Koehler: A lot of the survey responses were very mixed; there wasn’t even a majority, just a split in thirds between yes, no, or other. One place where I thought it could make sense to update in the direction of philosophers is the compatibilism question, the free will question. So there was a question about free will: compatibilism, libertarianism, or no free will. And in the wider world, it seems like a lot of people don’t even recognize that first option, and yet 59% of philosophers favor compatibilism. I thought that’s maybe one of the few cases where it could make sense to read the survey and be like, “Oh, maybe I’ll update in that direction”.

David Chalmers: Yeah, I guess maybe interact with what you think about the first order–

Arden Koehler: It’s true, definitely.

David Chalmers: –First order question. I’m sympathetic, but it is the case that philosophers are generally much more sympathetic with compatibilism than non-philosophers, who often think it’s crazy. And I’m inclined to think this is a case where the philosophers have thought about it a bit more deeply than the non-philosophers, especially when you think about why we care about free will: because we care about moral responsibility, having some kind of genuine responsibility for your actions.

And once you think about this, you start to think: even if the Universe is deterministic, is there still a distinction between being responsible for your actions and not? And it’s very easy to motivate the idea that even in a deterministic Universe, there could be that distinction.

So that leads you in the direction of: even in a deterministic Universe, there could be free will. Now, someone interested in free will could say, well, that wasn’t what I cared about (moral responsibility); I cared about some other, stronger thing, like being able to fundamentally make a difference to the time course of the Universe in some very strong sense.

So I think your average philosopher’s gonna say, “Okay, well maybe that’s not compatible with determinism. There’s a very strong sense of free will, but that one actually turns out to be less worth caring about than this other one tied to moral responsibility”.

And I think at that point we’re in something like a verbal dispute, which can happen in these cases where different people mean different things by ‘free will’. I think that diagnosis is more apt for some of these questions than for others. But this is a case where your average philosopher uses ‘free will’ for the thing tied to moral responsibility. Many people outside philosophy may still think, ah, why use ‘free will’ for that? I want to use ‘free will’ for this other thing: the ability to fundamentally go against the laws of nature. And then we just have a difference about which one is worth caring about.

Robert Wiblin: Yeah. I’m pretty drawn to compatibilism, so I was relieved to see this survey result definitely confirming that my view was correct. I think that compatibilism is probably right, at least for a particular understanding of the question. But I don’t think that you can get moral responsibility out of it, so maybe I’m not drawn to it for that reason.

Is the reason that most people are going for compatibilism that they want to bring back moral responsibility for things?

David Chalmers: It’s not so much that they want to bring it back. It’s just that they think that there is an intuitive difference between cases where we are morally responsible and cases where we are not morally responsible, even in a deterministic Universe.

Robert Wiblin: Yeah. I guess it just seems like if you’re not responsible for the preferences that you have, then even though in a sense you had the ability to predict and then create the outcome that you wanted, you’re not then culpable for having had the wrong preferences, in my mind.

Arden Koehler: I think often the idea is that you can be responsible for who you are, and it’s not because you created who you are freely or something like that, but that we just have to re-understand what you can be responsible for. And one of the things is being the person you are, with the traits you have and the character you have, and then everything that flows from that can be your responsibility.

Robert Wiblin: Yeah. It seems odd to me, but yeah.

David Chalmers: And even if you’re not responsible for your character, you might think you can be responsible for your actions. You can deny the principle that responsibility for your actions requires responsibility for everything that led to your actions. So you think no one is morally responsible for anything?

Robert Wiblin: Fundamentally. I think we should punish people and reward them and so on. I just don’t think that it’s because of like deservingness or moral culpability.

David Chalmers: Oh, well. I think many people want to distinguish moral responsibility from desert or deservingness. I think some people think that to have this notion of desert that you really deserve certain things that would require some strong kind of free will, which might not exist in a deterministic Universe. But I think some people want to nevertheless say there’s a weaker notion of responsibility which can exist.

Robert Wiblin: Yeah. Okay. So, you can be causally responsible, I suppose, but I guess I don’t feel like that then necessarily creates like a motivation for retribution or punishment, like beyond the kinds of consequences that create the right incentives for people to produce good outcomes. Does that make sense?

David Chalmers: Yeah. But I think many people want to reconstruct a notion of moral responsibility that doesn’t go along with retribution and desert. You can have moral responsibility that doesn’t ground those things, but nonetheless grounds having certain attitudes towards people’s actions, where we say when they did the right thing and when they did the wrong thing.

Arden Koehler: Or like pride. Do you think that like it’s fundamentally inappropriate to feel proud of like a morally good action because–

Robert Wiblin: You should feel guilty or proud if it will produce good outcomes.

Arden Koehler: Right. So it’s not made appropriate by the character of the action. It’s just like instrumental.

Robert Wiblin: Yeah, definitely.

David Chalmers: There are still some ways of producing an outcome where we want to say you’re responsible for it, you should feel proud or guilty about it, and we should have certain attitudes towards you and treat you a certain way. Other ways it’s done, say under the influence of a drug that someone gave you, then those attitudes towards the person are inappropriate, even in a deterministic Universe. I think one can draw those distinctions, and they end up being what we use to track moral responsibility.

Robert Wiblin: Yeah, it does. It’s interesting how people’s intuitions about the cases in which you should feel proud or not, or be punished or not, seem to track so well the cases where the incentives actually will make society better. But anyway, we should probably move on, because ultimately this wasn’t meant to be a section about compatibilism specifically.

Survey correlations [0:20:06]

Robert Wiblin: I guess a really interesting finding from the survey, which Arden pointed out to me, is that there were only very weak correlations between the answers that philosophers were giving across different questions.

For example, if you survey the general public and ask their view on the death penalty, that gives you a remarkably good ability to predict their view on climate change or their view on international relations, which is in a sense kind of surprising. People are lined up in these ideological groupings where the answers to one question predict the answers to another. But the highest correlation coefficient in this survey was 0.56, which is only moderate, and it was between moral realism and cognitivism, which obviously have a lot to do with one another directly.

Were you surprised by the low correlations, or the low level of ideological consistency (maybe that’s not the right word), or ideological fervor, between different camps within philosophy?

David Chalmers: I don’t know. Where I come from, 0.56 is a pretty high correlation coefficient. So I don’t know. Maybe it depends on the area.

Arden Koehler: Philosophers consider that a very high correlation?

David Chalmers: Well, in psychology it’s very rare you get something as high as that; you’re pretty happy to get 0.3. And 0.56 corresponds to about 80% agreement on the on-diagonal elements, I think, in that question.

You see, basically 80% of the realists are cognitivists and vice versa, and then maybe about 15% in the off-diagonal; so 15% of the realists are non-cognitivists or something. So that’s one way of looking at it: it’s pretty high agreement. Given one person’s answer, you can predict the other answer with 80% accuracy. But yeah, there are reasons to think it’ll be imperfect, because all these questions involve subtle distinctions, and there are non-cognitivists who nonetheless consider themselves realists and vice versa.
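To illustrate the arithmetic Chalmers is doing here, the sketch below computes the correlation for a 2x2 table in Python. The proportions are hypothetical, chosen only to show how a correlation of roughly 0.6 lines up with roughly 80% on-diagonal agreement; they are not the actual survey numbers.

```python
# Hypothetical 2x2 table of joint response proportions:
# rows = moral realist / anti-realist, columns = cognitivist / non-cognitivist.
a, b = 0.40, 0.10  # realist & cognitivist, realist & non-cognitivist
c, d = 0.10, 0.40  # anti-realist & cognitivist, anti-realist & non-cognitivist

# Phi coefficient: the Pearson correlation between two binary variables.
phi = (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5

# Fraction of respondents on the diagonal, i.e. whose answers "match".
on_diagonal = a + d

print(round(phi, 2), on_diagonal)  # → 0.6 0.8
```

So with these made-up numbers, 80% on-diagonal agreement corresponds to a correlation of about 0.6, in the ballpark of the 0.56 reported.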

Robert Wiblin: Yeah. I guess in politics it seems that that brings out people’s tribal instincts, so they tend to group together for practical reasons, if not intellectual reasons: all sharing the same views, or wanting to fall into line, and being particularly incentivized to do that. An interesting thing: I’ll provide a link to a study looking at how ideologically tightly grouped people are in politics, which found that uneducated people just have views all over the place. Their views on one question don’t really predict their views on another.

So in a sense they’re like very ideologically flexible, whereas the more educated you get, and once you’ve like done a PhD, then you’re just like completely in one camp, and like all of your views line up very consistently, which I guess from one point of view could be viewed as a success because it means that they’ve like brought their views on different questions into line by seeing kind of common elements. On the other hand it could just be viewed as a social phenomenon where it’s like you’ve fallen in line with a social group and now you’ve just adopted all of their views.

David Chalmers: We did do a factor analysis, and we found some very strong correlations and factors, and we can eyeball the factors and try to give them labels. There was a realist factor for people who tend to think certain phenomena are real. There was a naturalist factor for people who wanted to reduce things. There was an internalist/externalist factor, tending to think either the environment matters or what’s inside the system matters.

Those are very loose groupings, but we did find pretty strong correlations between clusters of five or six questions where the response to one question would predict pretty strongly response to others in the cluster.

Arden Koehler: For what it’s worth, to my mind it’s kind of a good thing, or I was pleased to see, that 0.56 was the highest correlation, because it seems like, at least for many of the questions that were on the survey, there really wasn’t that much logical connection between them. And so it does seem like it should be possible to hold different views on different ones. Was that part of how you designed the survey? Like, each question was supposed to be logically separate from the others?

David Chalmers: Yeah. At least if two questions were too closely related, that was a reason not to include one of them. For example, we had a question about Newcomb’s problem: would you be a one-box or two-box person in Newcomb’s paradox?

Arden Koehler: Didn’t most people say like other or something?

David Chalmers: It was probably the most technical question on the survey, the one that required the most background knowledge. So maybe half the people said, “Ah, I haven’t thought about it enough”. Of the people who did answer, I think there was a small majority for two boxes, maybe three to two or something. But we thought about also asking a question about decision theory, causal or evidential. Then, well, what’s the point of asking that question, because it’s going to correlate so strongly with Newcomb’s problem: basically, two-boxers are usually causal decision theorists and one-boxers are usually evidential decision theorists, and yeah, these can maybe come apart in some circumstances. But that question just didn’t really seem worth spending a whole extra one of our 30 questions on, because it was so strongly correlated. Whereas moral cognitivism and moral realism are distinct enough that we expected an imperfect correlation, and it’s also been informative to find out for how many people these come apart. Maybe we got just enough information out of that.

Oh, we have a new version of the survey that we’re going to launch very soon, because the first one was conducted in November 2009, so we’ve got to get on this quick: November 2019, which is next month, will be the 10-year anniversary.

For the new survey, we’re going to ask the same 30 questions again and see how answers to those have changed over 10 years, both individually and as a distribution, which will be interesting. And we also want to ask a whole bunch of new questions: maybe another 10 questions we’ll ask everyone, and then another 50 or 60 questions we’ll ask some randomly selected subset of the population, just to get more information about more questions. And for those, maybe we’ll be less choosy about picking ones which are completely distinct from each other, so maybe there’ll be some even stronger correlations among these. If you have any questions you want to suggest for the survey, feel free.

Robert Wiblin: Yeah, I shall have to think about that. Maybe listeners can email in if they’ve got a good one.

David Chalmers: Okay, great.

Robert Wiblin: Do you have any particular predictions about how things will go? I guess I predict that since this is a famous survey now, you’ll get a higher response rate, because people will be like, “Finally, I get a chance to cast my vote on philosophical questions”.

David Chalmers: Yeah, it is. I think it’s respectable now. Someone drew up a list of the most highly cited papers in philosophy over the last 10 years, and this one, the paper that David and I published, was number one. Not because of any particular merit or amazing insights in our paper, but because people just wanted to cite the results: papers on contextualism say most philosophers believe this, many papers on free will say this, et cetera.

So, yeah. So at the very least it’s a respectable thing. Hopefully we’ll get more respondents this time.

Robert Wiblin: Yeah, that’s a genius strategy, Dave. To get other people to say things and then associate yourself with that and then get lots of attention via them. If only I could find some way to do that in my own life.

David Chalmers: Yeah, I’m not sure one should really become known as the world’s leading cataloger of other philosophers’ views.

Robert Wiblin: Kind of citation farming. Yeah. So it raises an interesting question of why this hasn’t been done long ago, cause it seems like you’ve got this whole field that’s trying to answer these questions.

Wouldn’t it be useful to know what they think about these questions? It would help us update our views, and produce knowledge for the public to form opinions about all of these issues. It’s kind of a curious social phenomenon, or professional phenomenon, to me that this took until 2009 to happen.

David Chalmers: Yeah, that’s a good question. As far as I know, it hasn’t been done. The same question applies to many fields: of course, it’d be great to know what most physicists think and most chemists think and so on, and my sense is that it has not happened in many fields. There are some little things I’ve seen here and there: small groups of physicists, and I guess there’s something involving economists. But I don’t know: A) it’s logistically tricky, I guess, to do it, and B) in the case of philosophy, there’s the extra thing that philosophers are meant to think independently. They pride themselves on this. They don’t defer; they think of themselves as not deferring to other philosophers much. We also know these are fields with lots of disagreement. So if you’re just trying to find the truth about a topic, I don’t think most philosophers think that asking a bunch of philosophers is the best approach. Whereas this is actually a reason why you might’ve expected it to happen sooner in physics or chemistry or economics, because there is somewhat more consensus in those fields. But maybe the thought is, when there’s a consensus, it’s obvious what the consensus is, so we don’t need to do the survey.

I do predict that if people do these surveys in many fields, there’ll be surprises. But the other thing was that David and I were just in a position to do it because we had set up this web service, PhilPapers, which most philosophers turned out to be users of and it wasn’t terribly hard to extend that to get a full database and a controlled population that we could survey. This year we’ll be in a position to go much wider still. Maybe just the internet makes these things a lot easier.

Robert Wiblin: Yeah. There is the Chicago Booth School: they do very regular surveys of economists on policy issues. I suppose it seems very decision-relevant there, so it’s maybe easier to get funding to find that out.

David Chalmers: They don’t take surveys on theoretical questions?

Robert Wiblin: Much less, I think. Yeah. It’s more typical on minimum wage, yay or nay kind of thing. Yeah. At least like those are the ones that I’ve seen anyway.

David Chalmers: Every now and then, I see polls of physicists on, say, the correct interpretation of quantum mechanics.

I was just at a conference of physicists and philosophers where they actually took a survey at the end of the conference on many questions on the foundations of physics. But again, the groups are small. I think you’ve got to really try and really go for the biggest results.

Robert Wiblin: Yeah. There’s also the survey of AI researchers on when they expect different thresholds of competence in AI, which we talked about on another episode a while ago with Katja Grace. Interestingly, the conclusion from talking with her was that we don’t know, and AI researchers themselves don’t really know, because their answers are kind of inconsistent and very spread out. So I think that is potentially a real finding, both from this survey of philosophers and the survey of AI researchers: that there’s no consensus, which I think should then lead people to be more agnostic and say, “Well, a lot of different things are possible and maybe we should hedge our bets a bit on all of these questions”.

David Chalmers: Yeah. Although again, there’s probably a selection effect towards taking surveys on questions where there’s no consensus, because a lot of the time when there’s a consensus, it’ll be fairly obvious and therefore not worth taking a survey about.

Robert Wiblin: Yeah, that makes sense. I guess that helps explain why there’s not a survey of chemists on really live, controversial questions in chemistry: presumably at any given moment the disagreement there is much more limited than amongst philosophers.

David Chalmers: Philosophy is basically selected to be the field where there’s controversy and disagreement over questions, because many fields started out as philosophy. Newton was a philosopher, but came up with some methods to settle these questions. And then once you got those methods, there’s a certain degree of agreement, so we spin it off and we call it physics. It’s no longer philosophy, and this happens again and again.

So there’s some selection effect for philosophy basically to be a field where almost by definition, there’s disagreement over the key questions.

Robert Wiblin: Yeah. That leads nicely into the next section, on your paper about why there hasn’t been more progress in philosophy. But before that, I wanted to ask: would it be good for people’s academic careers to be doing these surveys in different fields?

Cause it seems like, well, you can get a lot of citations at least, and I suppose people will pay attention to you and maybe know your name, because you did this thing that they actually find really interesting to read about and blog about. Is this something that people could potentially do that’s both very useful, at least from my point of view, and also good for their academic career?

David Chalmers: I think, yes, it’s useful. No, it’s not particularly good for your academic career. If I had done this just starting out, and it had been the main thing I did, it certainly is not the kind of thing that particularly helps you get a job in a leading philosophy department.

It’s viewed as somehow statistical, something that doesn’t require great philosophical insight. So fortunately, I had a reputation already, and my co-author David has a reputation for working on other things in the philosophy of mind. So, was it a marginal benefit?

Yeah, it probably helped. It might’ve helped David on his tenure case to have such a highly cited paper, but I doubt it counted for much. Even then, when people are writing letters of evaluation and so on, it’s something that gets a few lines along the way, rather than people saying, “Wow, this is an amazing service to the field.”

Or maybe they think it’s a good service to the field, but somehow it’s not something that brings you individual credit.

Robert Wiblin: Yeah. It’s somehow just almost too crass. Or too practical, maybe, for academia.

David Chalmers: It’s certainly true that when we took the survey, we got many people who responded by saying this is a ridiculous thing to be doing, and that it’s unphilosophical to be taking a survey of philosophers, as if we should be deciding philosophical questions by democracy. That said, a lot of people loved it. But we did get quite a lot of negative reactions the first time around, which we tried to answer by saying, well, there are various obvious reasons why this is important.

People do make sociological assumptions about what philosophers believe when writing their philosophy papers, all the time. For example, they think they don’t need to defend certain assumptions, because they think most philosophers already accept them, and so on. Having better information about that should allow you to do better philosophy.

Arden Koehler: That actually suggests that philosophers do think the fact that a lot of philosophers believe something is a good reason to believe it if they–

Robert Wiblin: They’re going to use that as a–

David Chalmers: I think it’s more than just sociological. They’re going to think, “If enough people disagree with this premise, they’re going to reject my paper at the starting point, so for my own purposes, I need to argue for this if I want to bring people along with me”. But yeah, philosophers do think about this. Maybe this is not going to be important to them in figuring out what’s true, but it is going to be important to them in figuring out how to write a paper. And a lot of what’s involved in writing a philosophy paper is, of course, not figuring out what’s true, but convincing other people that it’s true.

Robert Wiblin: There is this interesting field, experimental philosophy, where people would take the premises that philosophers claimed every normal person would believe, and then actually survey people to see if they agreed.

And they often found that the typical person off the street would very often disagree with what philosophers thought was completely common sense. I wish I could think of a good example off the top of my head. I suppose I have learned, from looking at lots and lots of polling data on political and academic questions, that it’s very hard to predict what a typical person thinks, or what the distribution of views is, because all of us have such a filtered perspective: we tend to associate with people who have the same common sense as us.

And so, yeah. For example, if you look at polling of the United States, it seems like immigration has never been more popular with the American public since polling began, and trade has never been more popular. But that’s probably not the perception you would get by reading the newspaper or just guessing what it would be.

Polling constantly produces these really surprising results.

David Chalmers: Yeah. Experimental philosophy is also doing all this stuff cross-culturally. Initially, it seemed to turn out that a whole lot of assumptions Western philosophers were making, about, say, knowledge versus justified true belief, might be rejected by people in different cultures.

That was about 20-odd years ago. I think now the trend has been towards making the case that there’s actually more convergence between cultures than people had thought before. But I actually just lately got interested in the question of to what extent intuitions about consciousness are shared across cultures, and various people have tried to make the case that some Western assumptions are not shared in other cultures.

I’m hoping that we might get some data on that soon. Of course, we can get limited data from polling philosophers on these questions, but to get broader data, one would need somehow to poll other people in a way that we don’t have access to right now. I’m hoping some experimental philosophers will start doing that.

Progress in philosophy [0:35:01]

Robert Wiblin: Yeah. Coming back to this paper you wrote on why there hasn’t been more progress in philosophy. I think it was back in 2004 that you wrote that consensus in philosophy is as hard to obtain as it ever was, and decisive arguments are as rare as they ever were, and that to you, this is the largest disappointment in the practice of philosophy. What can be said in defense of philosophy, given that there hasn’t been convergence on the right answers to most of these questions?

David Chalmers: Well, the big obvious thing that can be said in defense of philosophy here is the thing that I said already: philosophy by its nature is the field where there’s disagreement, because once we obtain methods for producing agreement on questions in reasonably decisive ways, we spin it off and it’s no longer philosophy.

So from that perspective, philosophy has been this incredibly effective incubator of disciplines. Physics spun out of philosophy. Psychology spun out of philosophy. Arguably, to some extent, economics and linguistics spun out of philosophy. So what usually happens is not that we entirely solve the whole of a philosophical problem, but that we come up with some methods of, say, making progress experimentally or formally on a certain subquestion or aspect of that question, and then that gets spun off. The part that we haven’t figured out how to think about well enough remains philosophy.

Is that the philosophers’ fault? No, absolutely not. Look at all the great philosophers who successfully addressed those questions. It’s just the nature of the field. There is still the question of why the questions that remain are as hard as they are. Certainly they’ve been selected for being hard, so one shouldn’t be surprised that philosophical questions are subject to disagreement. But still, faced with any individual question, like say the mind-body problem, it’s like, damn, this is so hard. Why don’t more people agree with me? Why is this so hard to come to grips with? And my own view is that it’s probably something about the character of the problems, not the character of the field.

Philosophy, like every discipline, has its pathologies, but I suspect that if you redid philosophy with a different population with somewhat different pathologies, you’d still find disagreement over the big questions of philosophy, the ones subject to the biggest, most fundamental disagreements: say, physicalism versus non-physicalism about the mind, or consequentialism versus non-consequentialism about ethics, or deep differences in political philosophy.

I’m inclined to think those are just disagreements that run deep, and that it’s something about the nature of the questions that, at least so far, means we’re not in a position to compel agreement on them. So on this way of looking at things, the problem is not exactly a problem with philosophers. Which is not to say that it might not be something specific to our situation.

And in the future, with enough information and enough reasoning, enough new background, enough advances, these problems might eventually be solved.

Arden Koehler: This is maybe a little too big a question, or will take us a little off track. Even though we haven’t converged very much on true answers to the big questions of philosophy, like how the Universe began and the mind-body problem, that kind of thing, we do make progress on little questions. We also clarify questions a lot, we create new questions, we map out logical space, and we figure out what’s really going on underneath: apparent disagreements are resolved as verbal disagreements, all kinds of other stuff like that. So that also sort of feels like progress to me.

What do you think is the value of that kind of progress? Does it have independent value, or is its value mostly derivative of it allowing us to do a better job of answering philosophical questions, big or small?

David Chalmers: I think it absolutely is progress. Most of what philosophers typically call, or think of as, progress in the field consists of this smaller kind of progress: making an important distinction, getting a new framework, finding a new argument for a view, refuting some versions of a view. So it’s not deciding the big question once and for all, but getting new reasons on either side, carving up the landscape better, getting a better understanding. I think all that is really important, and very conducive towards understanding. I think understanding is a virtue, even if it’s not necessarily conducive towards first-order knowledge.

Arden Koehler: Right. You at least understand what you don’t know.

David Chalmers: Yeah. I think understanding is genuinely important. A lot depends on what you see as the aim of philosophy, whether it’s a practical aim. I’m not interested in philosophy primarily to improve the world; I’m interested in philosophy to actually, ultimately, understand reality. And then, well, all of this understanding is by definition going to be valuable, even if it doesn’t produce first-order knowledge.

I would like as part of this quest to understand reality, to know things about reality, not just to understand issues about consciousness, but to know a correct theory of consciousness and to fully understand and know the truth about the relationship between mind and body.

So I think that’s kind of an ultimate goal. But even if you fall short of that goal, there are these forms of understanding that don’t involve knowing the deep and ultimate truth, and I think if you’re in philosophy to understand the world, those things at least feel as if they have a really significant intellectual value.

Now, how does that play into the question of the practical role of philosophy? I’m not sure. I’d like to think that, even if we don’t totally resolve an issue, say between consequentialist and nonconsequentialist theories in ethics, nonetheless understanding those issues, the considerations on either side, and which varieties work better than others, is still going to be of a whole lot of practical use in making the world a better place and so on. So yeah, maybe answers to the smaller questions that you’re raising can nonetheless play at least some fraction of the practical role we’d get if we actually knew the definitive answers to the big questions.

Robert Wiblin: Yeah. You mentioned earlier that you don’t think the pathologies of philosophy are worse than in other fields. I guess I want to push back on that a little, because it seems like, to some extent, part of the goal of philosophers, or one way to succeed as a philosopher, is just to carve out some new position that someone else hasn’t taken, and there’s only so many positions out there.

At some point, people just end up pushing into more and more ridiculous views in order to have something new to say, in order to get an academic job. I’m reminded of someone who was starting out their PhD: both they and their supervisor agreed on a philosophical view, and the student says, “Could I do my thesis on explaining why this view is correct?”

And the supervisor said, “No. There’s no point in writing the defense of the correct view. That’s like a pointless move.” It’s like from an academic career point of view, you’ve got to find something new and different to say. I guess I wonder whether that like creates a perverse incentive for philosophers to just like spread out all across the board to have many different views in order to make sure that they can justify having their jobs.

David Chalmers: Yeah. There may be something to that. Philosophy does have its pathologies. Certainly. I don’t think I said they’re no worse than in other fields. I think every field has its pathologies. Philosophy may be open to having more because we’re not as constrained, say by experiments and formal methods and so on. The things that pin things down more in other fields. That said, I think every academic field I’ve gotten to know well, which is quite a few by now, has got very, very serious pathologies. And I’m not saying philosophers are any better and they could well be worse.

The one that you mentioned, I think, does happen. There are certain kinds of rewards for interesting disagreement in philosophy, not for disagreement alone, and there are certainly rewards for novelty, as in all academic fields.

The same goes on in psychology: if you come up with results confirming what people thought would be the case, it’s very hard to get them published. If you come up with things disconfirming it, that’s much easier to get published and–

Robert Wiblin: But then exactly, we do see there, it’s like–

David Chalmers: Massive biases.

Robert Wiblin: It seems highly corrupted. Yeah. And it just leads to like very bad ideas getting promoted and–

Arden Koehler: Just to defend philosophy for a second: you could think that maybe one of the reasons having a discipline of philosophy is useful is making sure that people are checking all of these strange-seeming views and coming up with new views.

Maybe most of them, well, definitely most of them can’t be right. But then they might stumble upon something that really is right, or that can give us a deeper understanding of something, and it really is useful to have that pressure toward novelty. I’m not sure this justifies it in the case of the story that you told, but like–

Robert Wiblin: Well, yeah. I think that is a very good justification for having philosophy as a field: exploring the space of ideas that we currently think are wacky. But then it means that there’s no mystery why it is that philosophers have very widely different views, because we don’t allow them to get the job unless they do.

Arden Koehler: This is just pushing back on the idea that there might be a pathology of philosophy. Maybe it’s actually a feature.

David Chalmers: Yeah. I think there are various reasons why it’s good to have different views be explored and understood. It’s also true in science, of course. You want to make sure that views are not being overlooked, and it’s good for the field to have individuals who are pursuing all kinds of different views, even if the field as a whole comes to a collective judgment.

But just a couple of things. I think this does happen in philosophy. I think the reverse also happens, as in science: people can be rewarded for sticking to known paradigms and for extending them in certain directions. There are many, many supervisors who are perfectly happy when their students work on extending their views.

But I think the question you’re really trying to raise here is whether all that disagreement that we find on big philosophical questions is somehow explained by this effect. I’d be extremely surprised if that’s the case. I think it may well make a difference to the numbers and many things make a difference.

But if the idea is that without this particular pathology we might’ve actually had convergence on a certain view of normative ethics or a certain view of the mind-body problem, I really find that extremely implausible. I think it’s something about the questions here: the evidence is just not really in, and there are strong considerations in both directions. My expectation is you could rerun philosophy with many different psychologies and many different pathologies, and there would still be these kinds of incommensurable considerations in both directions.

It’s certainly true that there are subcultures that converge on some of these things. I think that’s actually a way of making philosophical progress: having subcultures that share certain assumptions. So yeah, maybe most effective altruists, say, are consequentialists of a certain kind.

And one way to make philosophical progress is to make those assumptions and go ahead, to push things ahead in a way which is harder if you don’t share those assumptions. But if you then come back and say, “Ah, it’s just a pathology of, say, all of academia, that not everyone is a consequentialist”, I think that’s just an overly optimistic view of the intellectual territory here. I think the reasons to worry about consequentialism are just very, very strong reasons. And almost any way I can see of re-running philosophy, there’s going to be a very big body of people who reject it. Which is not to say that one view isn’t right, but just to say the reasons run deep.

Robert Wiblin: Yeah. Another reason is just that, like, it’s so easy to deny the premises of arguments that people make, or even sometimes to deny the inferences, even when they seem pretty strong.

Why is it that it seems easier to deny the strength of arguments in philosophy than in other fields? I guess with experimental fields it’s more obvious why it’s a bit different, but it seems like it’s completely different from mathematics as well, which is also dealing purely in the realm of ideas.

Yeah. In math, arguments usually get much more agreement on whether they go through or not. But in philosophy, people regularly deny arguments other people find incredibly compelling.

David Chalmers: Yeah. Well, I think that basically comes down to having these certain standard methods both in mathematics and in the sciences.

The method of proof in mathematics gives people a consensus framework. You’ve got a consensus on what counts as a good proof and a consensus on when something is proved. And likewise in the sciences, you’ve got the experimental method, with reasonable consensus on what the method is and what counts as establishing a result.

Of course, in the sciences it’s a lot more blurry, and there is room for a lot of disagreement in specific cases. But we’ve basically got agreement on a broad method, which can serve to at least tentatively establish results in science and to definitively establish them in mathematics.

I just don’t think there’s any analog to that in philosophy. In philosophy, we do have this method of argument, but the arguments all have to start from certain premises, and those premises are all questionable. You might say, well, in principle someone could question the premises of a mathematical argument by, say, questioning an axiom and so on. But —

Arden Koehler: They do do that, just not very often.

Robert Wiblin: Or then you end up with a different branch of mathematics or something. Or people are discussing a different set of objects.

David Chalmers: Yeah. And mathematicians are happy to back up and say, “Okay, well, if you insist on doing that, for our purposes mathematics is just what follows from these axioms”. They might question the logic, but that looks even more eccentric. So just as a matter of fact, there are certain starting points that seem to compel sufficient agreement that they can serve as a foundation. Mathematics and science can do this, and there are some starting points that don’t: there are areas where the starting points seem plausible to some people, but not to other people.

And then you could still do this in the field of philosophy, which I think is actually important. If we’re speaking about pathologies in philosophy, one pathology is that we spend endless time debating those foundational assumptions that we disagree about and less time exploring the consequences, which is one reason why I think it’s actually very good to have subcultures. Maybe effective altruism is an instance of this, or people that are interested in AI safety: subcultures where people make certain assumptions which might not be shared by philosophy as a field, but nonetheless go ahead and see what follows.

There’s still going to be the question of bringing it back to the field. In many of these cases, you might find disagreement about various foundational assumptions among the field as a whole, but the project is important too. And that is, I think, as a matter of fact, how many fields end up getting spun off out of philosophy: by subcultures pursuing their programs.

So if I were thinking about reforms to philosophy, I’d like to see a bit more reward for people making certain assumptions and seeing where they go with them. Right now that work can be rewarded, but I think it often looks a little bit eccentric to philosophers, especially those who don’t share those assumptions. In the field, maybe there’s more reward for debating the foundational concepts.

Arden Koehler: Maybe because people have stronger views on them or something, the foundations.

Robert Wiblin: Or it feels more fundamental. I guess another reform that people sometimes suggest is that philosophers should spend more time hanging out with natural scientists and I guess also maybe vice versa.

Physicists would do well to hang out with philosophers, and I guess also maybe vice versa. As someone who’s actually done that, hanging out with neuroscientists and people who are thinking about how the mind works, do you think that would actually help, or is it just kind of a platitude?

David Chalmers: I’m very skeptical that would make much difference, for the primary reason that it’s happened a lot already.

Robert Wiblin: And yet not everyone agrees, still.

David Chalmers: Knowing the science, hanging out with the scientists… It’s led to, I think, a lot of interesting progress: philosophy becoming richer and better informed.

But has it led to much convergence on those deeper questions? No, absolutely not. And very frequently, the scientists disagree as much on those big foundational questions as the philosophers do. They say, “Oh, that’s a matter for the philosophers. It’s above my pay grade.” Or they’ve got very strong opinions, but they go in different directions.

So physicists disagree about the foundations of quantum mechanics as much as philosophers do. Psychologists and neuroscientists, if you poke them, disagree about the mind-body problem about as much as philosophers do. And so I think it’s very good for the field to be empirically informed, and a lot of the time empirical information is very relevant to these questions, but it typically doesn’t lead to convergence.

And one reason is that the philosophical questions, by their nature, have almost become the ones which are not so easy to empirically resolve. Typically you get an empirical premise from the sciences bearing on one of these philosophical questions. Some people say something about neuroscience and conclude that, therefore, consciousness is physical. Well, okay, it turns out that to make the step from neuroscience to the philosophical conclusion you actually need a big, strong philosophical premise to link the two, and that premise ends up being just about as contestable, much of the time, as the conclusion it’s meant to support.

So every now and then there’s something coming from the sciences that might refute a previous philosophical view. One case that comes close to doing that, one of the better cases, maybe, is relativity theory, which many people take to very strongly undermine the philosophical view known as presentism: the only things which are real are those that exist in the present.

But relativity theory says there are no facts about absolute simultaneity that could make it the case that there is a distinguished present in the whole Universe, and that makes it much harder to be a presentist. There are ways for the presentist to survive. So there are cases where this happens.

Maybe Gödel’s theorem helps to undermine a certain view known as formalism about mathematics, where to be true is to be provable. Gödel seemed to make a pretty good case that there are unprovable truths. So every now and then it happens that science can lead to definitive progress on a philosophical question. But I think we find, just a lot of the time, that science will enrich the discussion of a philosophical question without really decisively settling it one way or another.

Simulations [0:51:30]

Arden Koehler: We actually want to talk about the idea that we might be living in a simulation and what, if anything, that might imply. A lot of people think that this is not a very serious topic, or that it’s very silly, or at least that not very much that’s useful can be said about it.

Why do you think it’s nonetheless worth talking about?

David Chalmers: I think it’s worth talking about in a number of ways. I got into this through thinking about some big traditional philosophical questions, like: how do we know about the external world? Is it possible that we could be in what Descartes called the Evil Demon scenario, where a demon is trying to fool you into thinking that an external world exists when none of it is real? The modern version of that is the simulation hypothesis: how do you know that you’re not in a simulation? And many people use that to cast doubt on the kind of knowledge we have of the external world. Now, it’s very central to traditional ways of thinking about these scenarios that somehow, if we’re in a simulation, nothing is real. If the world is a giant simulation, then things like this glass and the computer I’m using and the microphone and the world outside my window, none of them are real, just fake. It’s a fictional reality, and that gets you to the conclusion that if we don’t know we’re in a simulation, we can’t know anything at all. I’m actually inclined to think all that is based on a false presupposition. I think that if we’re in a simulation, things are still perfectly real.

If I’m in a simulation, this glass is real and so on. It’s just that if we’re in a simulation, we’re living in a digital world, a world made of, let’s say, bits. So we shouldn’t say, “None of this exists”. We should say “If we’re in a simulation, it has a different nature”, and I think that’s interesting for a number of reasons, if that’s right.

First, it suggests that some of those traditional arguments for skepticism about the external world are much too quick. Yeah, well, even if we don’t know we’re in a simulation, maybe we can know an awful lot about the external world based on, for example, considerations about its structure.

So it’s philosophically interesting. Maybe it offers us some insights into the character of reality. You don’t need to believe that we actually are in a simulation to think there are interesting conclusions to be drawn from reasoning about what happens if we’re in a simulation. That tells us something about our grasp on reality and therefore the relationship between the mind and the world.

But I think it’s also interesting to think about the practically important questions which are approaching as we begin to spend more and more time in virtual realities and in simulated worlds. When we’re engaging in a virtual reality, are we fundamentally engaging in a fiction? Is it a form of escapism where none of this is genuinely real, or can we in fact live a meaningful, substantive life in a virtual reality, interacting with real objects and real people, which can have the kind of value that a genuine life has? I’m inclined to think that yes, we can. By thinking very seriously about simulations and virtual reality, you can actually shed some light on those questions about practical technology.

Arden Koehler: So it seems like there are at least two kinds of situations that we might have in mind when we talk about living in a simulation. One is this really global skepticism: maybe everything is a simulation and we don’t really know it. And the other is simulations that we build that we might someday enter, or even local simulations that already exist, like video games and stuff.

And just to make clear to the audience why the first thing is even worth talking about: I think it’s inspired partly, like you said, by these traditional philosophical skeptical hypotheses, like maybe everything is a dream. But there’s also this simulation argument by Nick Bostrom, which I know you’re familiar with, but just in case for any listeners that aren’t, I thought it’d be useful to go through really quickly.

So Bostrom’s argument is roughly that if it’s possible to make simulations, and people will want to make simulations someday, and both of those things seem pretty plausible, then we’ll make just an enormous number of them. And if that’s true, then most beings throughout all of history will be simulated beings. And if that’s true, and we have no reason to think that we couldn’t be in a simulation, then we are overwhelmingly likely to be simulated beings ourselves. So that’s just to show why some people think this is really worth taking seriously as something that might really be the case, even if it’s also philosophically interesting to think about even if we don’t think it’s the case.
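To make the counting step in Bostrom’s argument concrete, here is a toy sketch (not from the episode; all the numbers are made up): under an indifference principle, your credence that you are simulated just tracks the fraction of all observers who are simulated, and that fraction goes toward 1 as the number of simulations grows.

```python
# Toy illustration of the counting step in Bostrom's simulation argument.
# Hypothetical numbers; the point is only that simulated observers
# quickly swamp non-simulated ones.

def fraction_simulated(real_observers: int,
                       simulations: int,
                       observers_per_simulation: int) -> float:
    """Fraction of all observers who live inside a simulation."""
    simulated = simulations * observers_per_simulation
    return simulated / (real_observers + simulated)

# With even a thousand full-scale simulations, almost every observer is
# simulated, so by indifference your credence of being one is close to 1.
print(fraction_simulated(real_observers=10**10,
                         simulations=1_000,
                         observers_per_simulation=10**10))  # ≈ 0.999
```

The interesting work in the argument is then done by the disjunction: either some step above fails (simulations are impossible, or almost nobody runs them), or the fraction, and hence your credence, is high.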

So I was just wondering whether you had anything to say on the simulation argument, anything to add, whether you think it’s a good argument or any ways that we might get evidence as to whether we in fact are in a simulation or not.

Robert Wiblin: What’s the probability Dave? Are we in a simulation or not?

David Chalmers: I’m sympathetic with Bostrom’s argument in the sense that I think it’s at least worth taking seriously. If I had to bet on the odds that we’re in a simulation, I don’t know, it’s probably somewhere between 0.01 and 0.99, sorry. If I really had to go from my gut, maybe 10 to 20%, but who’s to say? In most of my work, I’ve thought a lot about the simulation hypothesis, the hypothesis that we’re in a simulation, and what follows.

I haven’t done that much on Bostrom’s argument that we’re in a simulation. But I think it’s clearly an argument worth taking very seriously. And I’m inclined to think that some version of it probably works, at least if you’re clear enough about what the assumptions are and about what the possibilities are.

The way Bostrom makes it, it’s not actually an argument that we’re in a simulation. It’s an argument that either we’re probably in a simulation, or most populations never end up producing simulations for one reason or another, and those reasons are themselves very interesting. There are presuppositions for the argument: that building simulated worlds of a certain kind is possible, and that consciousness in simulations is possible. But I’m inclined to think that some version of the argument works. In the book I’m writing I have a fairly extended analysis of the argument. There are a few points where I differ from Bostrom. For example, he makes it turn very heavily on the idea that people are running ancestor simulations: simulations which are indistinguishable from our own history. For various reasons, Bostrom’s version works best that way, because then it becomes possible that you yourself are in one of the various simulations being constructed.

I’m not sure the version with ancestor simulations works so well, because it’s very far from clear to me that people will be capable of constructing perfect ancestor simulations that duplicate our history exactly. Maybe we just don’t have the right access to the facts about our history to do that. So I would prefer to construct a more general version of the argument that turns on the capacity to build simulated worlds in general, rather than being grounded in simulated worlds that are exactly like ours.

I think then, to make that run, the reasoning is going to have to look somewhat different from the way that Bostrom makes it run in his argument, and there turn out to be some different issues that arise. But I think nonetheless an argument in the same style can still go through. Bostrom’s argument, if it worked with ancestor simulations, would say, “There are going to be all of these people indistinguishable from me, and most of them are simulated, and therefore I’m probably simulated”. Whereas a more general version will just say, “There are going to be all these people who’re kind of like me in some general respects, most of whom are simulated. Therefore, I’m probably simulated.” It’s a somewhat different style of argument, but the general framing is similar, and for many purposes I think the upshot is similar.

Arden Koehler: In terms of the upshot: so let’s say we are living in a simulation. When I say that, I’m making a sort of metaphysical claim. Some people seem to have the intuition that this is a meaningless claim: like, “Well, I’m living in a simulation, but it doesn’t really mean anything, because nothing would be different on the ground level”, or something like that. Are you at all sympathetic to that? Do you think that’s wrong? What do you think about that claim?

David Chalmers: Yeah, I think the simulation hypothesis is a perfectly meaningful hypothesis about the world. There is a long tradition in philosophy of saying claims like this might be meaningless if, for example, they’re untestable or unfalsifiable. To some extent, you might say the simulation hypothesis is potentially testable: maybe the simulators could reveal evidence that we’re in a simulation, like show us the source code for the world.

They could move planets around; they could break all kinds of laws of nature. They could offer us a red pill and we get to see the simulation from the outside. So arguably we could get evidence that we’re in a simulation. But then all we need to do is move to the perfect simulation hypothesis.

The hypothesis that we’re in a perfect simulation, one that completely simulates a world like ours, such that we’ll never get positive evidence that we’re in a simulation. And now the proponents of testability might say that hypothesis is meaningless. I’m inclined to think that even if that hypothesis is untestable and unfalsifiable, it’s still perfectly meaningful.

And the best way to make that case is to note that we can, in principle, create beings in simulations. Right now, the simulations that we can create are very simple, but it looks like there’s no obvious obstacle, in principle, to creating whole Universe-level simulations, including beings whose conscious experiences are determined by those simulations, and whose conscious experiences will be indistinguishable in principle from those of people who are outside the simulation.

And once we do that, those beings will be in precisely the situation we talked about. Their Universe will be a simulated Universe, even though they will not be in a position to test this for sure. Now, there may be simulated beings who are going to say, “Ah, the simulation hypothesis is meaningless. There’s no way to test it.” But we’ll be here looking down at them saying, “Ah, but you are in a simulation”. And the people who are saying, “Hey, maybe we’re in a simulation”, they are in fact correct. And the people who are saying, “No, you are not in a simulation”, they are incorrect.

So taking a bird’s eye perspective on the situation, I think we can tell that it’s a meaningful hypothesis. The people who say, in that case, “We’re in a simulation”, are correct and others are incorrect. And then, all we need to do now is to undergo a perspective shift and say, “Well, maybe that situation could be our situation.”

And then I think it’s very hard to resist the thesis that it’s at least a meaningful hypothesis. Maybe it’s not a scientific hypothesis at that point, if you think science requires testability or falsifiability, but I think there are a lot of meaningful hypotheses about the nature of our world that are not scientific hypotheses.

If you want to call it a philosophical hypothesis, then fine. There is, of course, the added wrinkle that many versions of the simulation hypothesis could actually be something we could get some evidence for. Actually, it’s very hard to see how you could get definitive evidence against the simulation hypothesis.

So yeah, there is this question as to whether the general version of the simulation hypothesis is truly falsifiable. It’s easier to see how you get evidence for it than against it. And the general worry here is that any evidence you might get that we’re not in a simulation could itself be simulated by good enough simulators. So it’s hard to see how any evidence could constitute definitive evidence that we’re not in a simulation. You might say that’s grist for the mill that this is not a fully scientific hypothesis, because it’s not falsifiable. But I think it’s nonetheless very clearly meaningful, for the reasons I was giving: we can have meaningful hypotheses that go beyond what’s scientifically testable.

Arden Koehler: Yeah. So I guess one way that the simulation hypothesis could be meaningful is if it meant that everything was fake, or like, “Oh, we don’t live in a real world”. But you think that’s not true. You think if we live in a simulation, it’s not the case that everything is fake.

Do you think though that we could conclude anything else from the fact that we live in a simulation? Anything else that’s philosophical or practical or quasi-religious which I know sort of comes up in various places.

David Chalmers: Yeah. This is what I’ve been most interested in in thinking about the simulation hypothesis: not so much the question of whether we are in a simulation, but what follows if we’re in a simulation. And the traditional attitude is, “If we’re in a simulation, then nothing is real and everything is fake. Most of our experience is an illusion.” I’ve tried to argue in response that maybe that’s wrong. If I’m in a simulation, all the objects around me are still real and they still exist. But one interesting thing I think would follow is a conclusion about the metaphysics of our world.

I’ve tried to make the case that the simulation hypothesis should best be seen as a hypothesis about what things in our world are made of at a relatively fundamental level. And I have suggested there’s an interesting connection here to what people call the it from bit hypothesis.

And in physics, in the foundations of physics, the idea is that everything is made of information at an underlying level. So I think if we’re in a simulation, there really are objects like chairs and tables. They’re made of molecules which are made of atoms, which are made of quarks, which are at some level made of bits.

There’s a level of bits, an algorithmic level, underlying the familiar levels of physics. This is a version of what sometimes gets called the it from bit hypothesis. It’s not that the chair isn’t real; it’s just that the chair is made, at some level, of information or of bits. And it may turn out that in the next level up, in the next Universe, those bits are realized by something else.

So then you get something like the it from bit from it hypothesis, and maybe the levels chain further still. But I think you get an interesting metaphysics of information out of the simulation hypothesis. With respect to religion, yeah, this is another interesting consequence. Under some ways of understanding it, by the very definition of a simulation, if we’re in a simulation, there’s a simulator: someone who set up the simulation. And that being can be viewed as a creator of our Universe, responsible for making this Universe come into existence.

So that’s at least a creator of our simulation. Furthermore, this creator may have properties like being, in many cases, all-powerful with respect to our simulation, and all-knowing with respect to our simulation. So you’re getting a few of the properties of a traditional God.

Arden Koehler: You pointed out at some point that if the simulator was really all knowing, then like if they were able to predict what was going to happen because they knew the future, then it’d be like, why would they make that simulation? Like maybe–

David Chalmers: Yeah, total omniscience would kind of undermine the point of it, except maybe as entertainment.

We do watch TV shows twice, but it probably works better when the god merely knows a lot. There will be many simulations where the simulator is not necessarily all-powerful with respect to us, but they know a lot and they’re very powerful. They’re probably not going to be all-good.

There’s no particular reason to think that simulators are going to be all good, and they’re also not going to be the creator of the whole Universe. They’re not going to be a cosmic God. They’ll merely be a local god. So I’d say yeah, they’re halfway to being godlike on various dimensions, which is interesting.

So in the book, I actually make the case that we should regard the simulation hypothesis as equivalent to what I call the “it from bit creation hypothesis”: the idea that our Universe was created by somehow arranging bits the right way. God started the Universe by saying, “Let there be bits so arranged”.

Should one erect a religion on this? No, I don’t think so, because I don’t think anything about this indicates that this creator is in any way worthy of worship. It could just be another hacker in the next Universe up.

But it has had the effect of making me at least a little bit more sympathetic to the possibility that our Universe might’ve been created, a possibility I was not terribly sympathetic with before.

Robert Wiblin: Maybe if they’re not worthy of worship, they’re at least worthy of groveling to and asking for favors and things like that.

David Chalmers: You want to at least get them to treat you well.

Robert Wiblin: Exactly. Yeah. Get on their good side. I guess one implication that some people have suggested, if you’re really bought into the idea that we’re in a simulation, is that it could change our expectations about what kinds of things we’re going to observe.

Because you can at least do some probabilistic reasoning about why they would be simulating things and what sorts of things they would want to simulate. In particular, you might think they’re more likely to simulate interesting times in history, just as we have a lot of crime procedural stories, but not a lot of hour-long TV shows where people just sit at their desk doing work and don’t do anything interesting.

So I’d say they’re interested in simulating times that are perhaps particularly unpredictable, or that have important consequences in the long term, either for entertainment or research purposes. So if we thought we’re probably in a simulation, maybe we should expect to see really big events in our lifetime with a greater probability than we did before. Do you buy that?

David Chalmers: Yeah, all this requires a whole lot of speculation about the motives of simulators in building simulations, which I think is probably extremely difficult for us to do, so I don’t put too much credence in speculation of that kind. But certainly entertainment is one possible reason for building a simulation, though you might think that that’s only gonna require a relatively limited number of simulations.

After all, people tend to read one book or watch one movie at a time. Now, our superintelligent successors, maybe they want to watch all the possible movies simultaneously, I don’t know. But I’d be inclined to think, at least modeling the simulators on us, that it’s quite likely the great majority of simulations will be something like simulations for scientific or research purposes. Why? Because when you do things for scientific or research purposes, you don’t just make one at a time. Then you’ve got to worry about the replicability crisis.

We’ve got to make n as high as possible. So I think, for research purposes, people will be running a million Universe simulations overnight and seeing what happens. And statistically, maybe it’s going to be overwhelmingly likely that we’re in one of those simulations where actually nobody’s paying much attention to the simulation while it’s going on; they’re just coming back and gathering statistics in the morning. For that purpose, it may not be particularly important that it be an entertaining or interesting simulation. People will want to do historical simulations too. For example, rerun the election of 2016 a million times over and see what happens. And maybe people will sometimes tweak the parameters just to let it run with an outrageous counterfactual event, like, let’s suppose Trump won the election and see what happens there. So maybe there could be some statistical bias in favor of occasional outrageous things happening for historical purposes.

Arden Koehler: Yeah. Do you think that… What’s the most educational thing? It’s normalcy, right?

Robert Wiblin: So maybe they want representativeness.

Arden Koehler: So like you might think that then it will be likely that normal things will happen because that really tells them what life is like.

Robert Wiblin: But I guess, inasmuch as there's not a lot of… what's the term for this? When we were hunter-gatherers or something, and we're all just hunting bison and eating them and so on, there's just not a lot of different ways that it can play out. And so you run a hundred of them and you're like, "Wow, this is the same every time".

So for things where there's not a lot of randomness in the outcome, where history can't fly off in importantly different directions, it seems like a smaller sample might do.

David Chalmers: Maybe they'll want to run some mild counterfactuals too. They'll simulate worlds roughly as ours is. If you're doing a historical simulation, I think historians are very interested in counterfactuals, but often they're interested in relatively mild counterfactuals: "What would have happened if Hitler had not tried to invade the Soviet Union?" and so on.

Arden Koehler: What’s a more dramatic counterfactual?

David Chalmers: More dramatic is: what if a total weirdo won the presidential election? Yeah.

Arden Koehler: I thought you were thinking about the laws of nature or something.

David Chalmers: Yeah. But running counterfactual laws of nature is a very natural thing to want to do. Physicists will be running simulations all the time of different laws of nature: they set up these laws of nature and see, okay, what happens? Does the Big Bang lead to a Big Crunch? Biologists will be running simulations of how often life develops. If you tweak the parameters, how do you get to life?

How do you get to intelligence? It’s very easy to see scientists running all kinds of variations on laws of nature just for research purposes.

Robert Wiblin: Another implication that some people have drawn, which I guess is potentially more decision relevant, is that we might expect that the Universe is not going to last as long.

So if we're currently in the fundamental, real world and not in a simulation, then there's every reason to think that the Universe is going to continue to play out for billions and trillions of years into the future, so we have a lot of time to play with. But if you think that you're in some kind of research simulation, it seems like there's a decent chance that it will be shut down before we reach a billion years into the future.

That might give someone a bit more reason for urgency. You might even think it could be shut down in a hundred years, because they'll have figured out the thing they wanted to learn about the 21st century and so we'll be done. And this gives people a reason to try to do more to improve the world right now, rather than to think about these very long timescales. Do you think that's a sound inference to draw?

David Chalmers: Again, it all turns on this massive speculation about the motives of simulators. And there could be so many shutdown conditions. There is, of course, the one shutdown condition which is, "Ah, shut things down when they figure out they might be in a simulation."

On that way of thinking: okay, we should stop talking about this now. But I don't know. I think there are so many possible termination conditions that I'm not sure I'd get particularly worried about it happening in the next hundred years. There is the Doomsday-style argument that, in general, whether we're in a simulation or not, we should think that we are very typical beings, so possibly the Universe will end soon. That would also apply if we're in a simulation. How likely is it that we'd be this early on in the simulation if it goes on forever and ever? You might want to update on that towards the world ending soon. But I think that applies equally whether you're in a simulation or not.

Robert Wiblin: Yeah. The Doomsday Hypothesis is a huge can of worms, so I’ll provide a link to the paper for listeners who want to learn more about that one.

David Chalmers: I’m not endorsing it.

The problem of consciousness [1:13:01]

Arden Koehler: Let's turn now to the thing that you're most famous for talking about: the nature of consciousness. We'd like to focus on the implications for practical ethics of ideas in philosophy of mind, and the uncertainties surrounding these ideas. But first we want to make sure that we, and all of our listeners, are on the same page about what we're discussing when we talk about consciousness, because the word can mean a lot of different things to different people.

So when you talk about consciousness, what are you talking about and how is it related to intelligence and self-consciousness and how is it not related?

David Chalmers: Yeah, so people mean a lot of things by consciousness, but what I mean is roughly the subjective experience of the mind and the world. Roughly how it is from the first person point of view to think, to feel and so on.

My colleague Tom Nagel wrote this wonderful paper called "What is it like to be a bat?". We don't know what it's like to be a bat, but presumably there is something it is like to be a bat. Anyway, whatever it's like to be a bat, that's the bat's consciousness. It's how things are, or how things feel, from the first-person perspective of the bat.

You look at a brain and you’ll see it processes information in various ways. It responds to stimuli, processing information leads to a behavioral response. That’s how a brain looks objectively. But there’s also how it is subjectively. I’m seeing you and having a visual experience with certain images in my mind, I’ve got certain sounds.

I might be experiencing thoughts. So consciousness is basically this stream of first-person experience. To distinguish this from other kinds of consciousness, philosophers often use the term 'phenomenal consciousness', as opposed to, say, 'access consciousness', which is a matter of objectively having access to some information. Self-consciousness, which you mentioned, is about being conscious of yourself.

I think that is one aspect of phenomenal consciousness. Broadly, we have this sense of being conscious of ourselves, but that’s just one very specific aspect of consciousness. We’re conscious of things in the world. When I look at an object and I see a red square, that’s just vision, that’s perception; I’m conscious of the object, but that has a subjective experiential quality to me. So consciousness is much more than just consciousness of the self. You asked about intelligence, and I think about intelligence as, roughly speaking, a measure of behavior, of functional capacity, of your ability to do certain things, to solve certain problems, to achieve your ends by taking appropriate means and so on. And I mean, intelligence itself is complicated, but I think of that as very much on the objective and behavioral side, whereas consciousness is very much on the subjective side. So maybe you could have a system which is really quite intelligent but has no subjective experience at all.

And likewise, there may be systems with subjective experience that are not terribly intelligent. Fundamentally, one is subjective, the other is objective.

Arden Koehler: So the same thing goes for self-consciousness in your view? Like, you could have something that was phenomenally conscious that wasn't self-conscious, or maybe something that was self-conscious but not phenomenally conscious. Maybe we wouldn't use that term in that case: it has a model of itself, but isn't phenomenally conscious.

David Chalmers: Yeah. Self-consciousness itself kind of decomposes. There's phenomenal self-consciousness, which is being phenomenally conscious of yourself, having an experience of yourself, and that can happen. That's one aspect of phenomenal consciousness.

But then you can have a system which is conscious of itself in a non-experiential sense, maybe: one which has access to information about itself and can report information about itself. You have AI systems that can monitor their own states and talk about them. You might think of that as a form of self-consciousness, but it's not phenomenal self-consciousness; that would be on the objective side of self-consciousness. One could have that kind of self-consciousness in principle without being phenomenally conscious. And likewise, I think you could probably be phenomenally conscious without having any… actually, it's arguable whether you could be phenomenally conscious without having any kind of self-consciousness.

But generally there are at least states of phenomenal consciousness that don’t seem to have terribly much to do with being conscious of oneself. Like when you’re conscious of the people around you and of the world and of a problem you’re thinking about.

Robert Wiblin: So you’re famous for drawing attention to what you call the “Hard problem of consciousness”, which is this question, “Why does it feel like anything to be a person, or why does it feel like anything to be anything?” It does seem like we could just be going around like robots, taking all of the actions that we’re taking, but have no first-person perspective. Like it would feel like nothing to eat an apple. But I guess there’s a lot of people who kind of want to deny that there is a hard problem here.

That there is anything to explain, or anything mysterious about there being a first-person perspective. I'm sure many of them are potentially among our listeners. There seems to be a bit of a streak of this among rationalists, and I often find with natural scientists that I just can't get them to accept that there's anything strange about consciousness existing. Have you found any way of getting through to people who are inclined to deny that there's anything interesting going on here?

David Chalmers: That’s interesting. I think we need some sociological data here. My experience is that most people can at least get a sense of the problem. So, when I’ve taken surveys on this in various contexts, not terribly rigorously for the most part, but it typically seems to come out that the majority of people see that there’s a hard problem of consciousness, although it’s certainly not universal. So if your experience is that most people have a dominant reaction to deny it, I’d be surprised, but okay, we need surveys on that.

Robert Wiblin: I wouldn't say it's a majority of people, but it does just seem like there's something about the ideology of natural science which wants to deny that there's something going on here.

It's almost the kind of thing where you need a PhD in a particular field to believe something so crazy as to think that there's nothing strange about–

David Chalmers: Yeah, but I also think there are these sociological effects. We got this on the PhilPapers survey: most people think that most people think a certain thing, even though most people think the opposite. Maybe part of the ideology of science is that there are no hard problems, so most people think that most people deny it, when in fact most people accept it. In my experience, and I may be wrong because I'm biased in my exposure, even your average, say, neuroscientist or AI researcher can pretty much appreciate the problem.

Now certainly there's a substantial minority who reject the problem. But even among those who reject the problem, probably at least half think that intuitively there's a problem, but that we should reject the intuitions. I would count that as at least being on board with the problem.

Maybe I should actually say something about the problem, which is basically this: the question is, "How do you get from physical processing in the brain and its environment to phenomenal consciousness?" Why should there actually be first-person experience at all? Looking at the brain from the objective point of view, you can say, "Okay, you can see where there would be this processing, these responses, these high-level capacities." But on the face of it, it looks like all that could go on in the dark, in a robot, let's say, without any first-person experience of it. So the hard problem is just to explain why all that physical processing should give you subjective experience.

I contrast these with the easy problems, which are roughly the problems of explaining behavioral capacities and associated functions like language and learning and response and integrated information and global reports. And we may not be able to explain how it is that humans do those things, but we’ve got a straightforward paradigm for doing it.

Find a neural mechanism or a computational mechanism and show how it can perform the function: producing the report, doing the integration. Find the right mechanism that performs the function, and you've explained the phenomenon. But whereas that works so well throughout the sciences, it doesn't seem to work for phenomenal consciousness.

Even once you explain how the system performs those functions, how it does things, learns, reports, integrates and so on, it seems prima facie that all that could go on in the absence of consciousness. Why is it accompanied by consciousness? That's the hard problem. Now, among people who reject this, I think there are different things going on with different people.

One certainly legitimate move is to say, "I at least accept there's an intuitive gap here, but somehow we should reject the intuitions". This can then be spelled out in various ways, the most interesting of which, I think, is that this whole idea of consciousness is an illusion: our cognitive systems are built to make us believe, introspectively, that we have these special properties of consciousness, even though we don't. That's a move I respect, but I think it's got very strong costs: you have to deny that we have these experiences that seem basically undeniable. Still, it's at least an interesting move. On the other hand, if someone comes and says, "I just don't have the intuitions. I'm not even sure I have the phenomenon that you're talking about", then, I don't know. I haven't gotten that reaction terribly often. There are people who claim to be zombies, but I think that's a fairly unusual reaction. You said you've talked about this with people in the rationalist community.

Which of those reactions do you think is the most common?

Robert Wiblin: I mean, I think I've maybe slightly misrepresented the view. I guess there's some people who are drawn to this kind of materialist reductionist view, or to illusionism, and seem to view it as much more intuitive to say that there's nothing odd about the fact that we feel that there's something there.

Whereas to me that just seems like a huge cost to pay: to say, "Well, actually it's all just an illusion, this thing that you think is your phenomenal experience".

David Chalmers: I think that's certainly reasonable. It's entirely reasonable to be drawn towards a materialist and reductionist point of view and to think all this has to be reductively explainable one way or another, so it's all going to be physical in the end. Maybe I disagree with that in the end, but I think wanting it all to be reducible is an entirely reasonable point of view. And that's at least consistent with saying there's a problem here that we have to solve.

I think maybe the dominant view that I've come across, say from your average scientist, is to think, "Yes, we want to be materialists. There's got to be a materialist explanation at the end of the day, but we don't have it yet. Hopefully someone will figure it out one of these days", and that's an entirely reasonable point of view.

Another point of view is to say, "Yes, I see the intuitions, but I think we ought to dismiss them as delusions". Okay, that's also a respectable point of view, but then you do have to bite a very strong bullet by saying, "You're not actually having these experiences that you seem to be having".

The point of view that says, "I just don't find the phenomenon in the first place": well, that's very rare and unusual. So I guess what's left would be a kind of opponent who says, "Yes, I find subjective experiences, but I don't see any issue about explaining them, because all you need to do to explain them is to explain responses: say, verbal reports, certain behaviors, integration, perceptual discrimination, and so on". And if someone says, "Okay, I've explained perceptual discrimination, integration and report", then I'd say, "Well, why does that feel like something?" What does that person say now? They might say, "Well, that was all I meant by feeling like something: discrimination, integration and report". Then I'm not sure they have the phenomenon, because there are all these things that need explaining: there's discrimination, integration, report, and experience. And it's just a datum that we have the experience too. Or you could deny the intuition, which leads you to illusionism.

So anyway, I think you really need to explore why it is that people reject the problem. But I actually find that, with a bit of work, it's not too hard to bring most people on board with at least the sense of a problem. What I would like to be able to do is to push people towards the idea that if you want to reject the hard problem of consciousness, you ought to be some kind of illusionist who says, "Okay, yes, we have these intuitions, but no, they are not to be trusted, and maybe the brain is making us believe things about ourselves that are not true". That's a view that I can engage with.

Arden Koehler: So this is an anecdote, and I don't know how representative it is, or whether it really relates to what's going on with other people who deny that there's a hard problem.

But I remember that it took me a really long time when I was an undergraduate in philosophy to latch on to the thing that was being referred to by 'conscious experience'. I remember staring at these blue curtains when I was 20 or something and being like, "Oh, we're talking about the blueness!"

And so I feel like that's why philosophers make up these words, like 'what-it's-likeness' and 'qualia', in order to try to point at the thing that we're talking about, because, weirdly, it was hard for me at least to figure out what it is that people were puzzled about. That might be what's going on with some of the people who deny that there's a problem.

They're like, "Oh, I know what experience is", but they're not really thinking about experience the way you are.

David Chalmers: Yeah. That’s interesting. And I actually find that in explaining the problem, one of the best tools is Frank Jackson’s thought experiment of ‘Mary in the black and white room’, which he used to actually give an argument against materialism about consciousness, but which I think you can just use while staying neutral on that just to introduce the phenomenon.

So Mary is a neuroscientist, maybe sometime in the future when we know all about the brain, who knows all about the physical processes associated with color processing in the brain and so on, but she’s lived her whole life in a black and white room, or maybe she’s colorblind and so she’s never had the experience of seeing red things or experiencing red for herself. And the intuition is, okay, what does Mary know? She knows all about the physical processes, the wavelength, the brain processes, the behavior associated with color processing. She knows all the objective stuff, but there’s something really crucial she doesn’t know.

And that’s, “What is it like to actually experience red from the inside?” Now Jackson uses this to go on and argue that that shows that experiencing red can’t be a physical process. We could argue about that, but I think at the very least, let’s just look at what is the thing about color that Mary doesn’t seem to know about? The first person experience of what it’s like to experience redness from the first person point of view.

That’s what I mean by the conscious experience. I think introducing the topic that way does actually help to get people to focus on what’s at issue here.

Arden Koehler: I think that we may have talked about that in the very classroom where I had this experience.

David Chalmers: There’s a great music video by the way, called “What Mary didn’t know” by Dorian Electra and the Electrodes that your listeners should seek out.

Dualism and panpsychism [1:26:52]

Arden Koehler: So there's a lot of different views about the nature of consciousness and ways of answering the mind-body problem. Are there any that you're particularly sympathetic to? Views that have come up include materialism; there's also functionalism, and various forms of idealism, which say it's all consciousness all the way down.

There’s various types of dualism. You talked about illusionism and panpsychism, which is getting more popular it seems like these days. Do you want to just explain a few that you think are particularly interesting or worth talking about?

David Chalmers: Sure. In my work, in thinking about the hard problem, I’ve tended to focus on non reductionist approaches.

I've argued that consciousness can't, in fact, be fully explained using the standard reductionist resources of, say, neuroscience and psychology. Basically, explanations in terms of physical processes or computational processes are always cast in terms of causal structure and dynamics.

And they're great for explaining objective matters of causal structure and dynamics; that's basically the easy problems. But when it comes to consciousness, those methods don't fully get a grip on the hard problem. I've argued you need something new in the story, and the kind of view I've been drawn towards takes consciousness as something fundamental and irreducible in our picture of the natural world, in the same way that we take space and time and mass and charge as fundamental. We're used to the idea that some things are fundamental. If you can't explain electromagnetic phenomena in terms of your old properties and laws (spacetime, mass, Newtonian dynamics), you bring in something else: electric charge, Maxwell's laws.

Likewise, I think, for consciousness. So I've been drawn towards views that take consciousness as fundamental, and what that comes to in practice in philosophy is a choice between two options. One is a dualist view, where you've got the physical world, which is one thing, and then you've got the mind, consciousness, which is another thing.

They're both fundamental properties, distinct from each other, and then there are laws that connect them. That's one view. The other view is panpsychism, which says consciousness is somehow present at the very basis of the physical world, and maybe the physics that we know and love fundamentally involves consciousness.

Physics describes consciousness structurally, but the panpsychist says that consciousness itself may underlie physics: some element of consciousness in every physical system. So in recent years I've been interested in both panpsychism and dualism. They both have their attractions, and they both have their big problems.

The biggest problem for dualism is the interaction problem. How do mind and body interact? And in particular, how could this non-physical consciousness play any role in the physical world? I mean, you could say that it doesn’t. That’s what philosophers call epiphenomenalism. That leads to weird things like, “My consciousness is playing no role in my talking about consciousness right now.” That seems bizarre. Or you can go for an interactionist view where consciousness affects the physical world and then the question is, “How can you reconcile that with physics”? The one place where you might want to look, which looks at least consistent with known physics is quantum mechanics and the collapse of the wave function.

So lately, with a coauthor, Kelvin McQueen, who used to be a student of mine at ANU and is now at Chapman University in California, we've been exploring views where consciousness actually plays a role in collapsing the quantum wave function. That's an old idea that goes back to Wigner and further, but interestingly, no one has really tried to work out the details and the dynamics formally. We've just been trying to see if we can make that work. That's been one aspect of my work on the dualist side. Do you want to jump in, Rob?

Robert Wiblin: I was going to jump in and say, obviously I don’t understand quantum physics well enough to like comment substantially–

Arden Koehler: You don’t understand quantum physics, Rob? That’s so embarrassing for you…

Robert Wiblin: I have this concern with trying to tie together consciousness and quantum physics. It goes: well, consciousness, that's mysterious and I don't understand it; and quantum physics, that's mysterious and we don't understand it; so maybe these two weird things are actually just one weird thing. I mean, maybe there's some force to that argument, but do you worry that there's a bit of that going on?

David Chalmers: I think I may have introduced this years ago, actually. I call it "the law of minimization of mystery". I was making fun of some people who wanted to tie consciousness and quantum mechanics together. But I do think there are interesting potential links between the two. I mean, the problem in quantum mechanics is not just that it's mysterious.

It's a very specific problem. The standard dynamics of quantum mechanics has these two processes: Schrödinger evolution and wave function collapse. Mostly the wave function evolves by the Schrödinger equation, except at certain times, when you make a measurement, the wave function changes in this special way. And that's totally standard quantum mechanics.

Then it raises all these questions. What on earth is a measurement? What is an observation? Some people think that alone forces you to bring in consciousness, 'cause what could a measurement be but a conscious observation? So you've got a role for consciousness right there. Now, I think that's too strong.

There are ways that you could understand quantum mechanics where observation and measurement are understood independently of consciousness. There are alternative interpretations of quantum mechanics, like many worlds and hidden variables, that don't postulate a collapse process at all. Nonetheless, I would say more weakly that it's extremely natural to at least explore a role for consciousness here, because there is this prima facie mysterious process, collapse, which happens precisely on measurement, which then raises the question: what on Earth makes measurement different from anything else in the world?

If you also think on independent grounds we have reasons to believe that consciousness is especially distinct from things elsewhere in the world and is a fundamental entity, it’s then extremely natural to at least explore the idea that consciousness is what’s doing this. So let’s just say that quantum mechanics doesn’t force you to give a role for consciousness, but it leaves a giant door open that is worth exploring.

I once said that if God had wanted to design laws of nature that gave a role for consciousness, they couldn't have done a much better job than the kind of setup we find in quantum mechanics. That said, it turns out that once you try to spell out the details, it gets very tricky.

That's what I found in the work with Kelvin McQueen. For one way of spelling this out, we came up with a big problem, involving the quantum Zeno effect, that I don't think anyone else had noticed before. And it turns out that the framework we've gotten to is inelegant in certain ways.

So I think the results of this have actually been mixed. I don’t want to say that there’s a clear, very natural dualist picture here where consciousness plays all the roles we want it to play. I think the jury is still out and actually going through the process has made me a bit less confident in the view than I was before.

But I find this in general by the way about any view of consciousness. That is, the more I think about a specific view, the less confident I become in that view because they all have such serious problems. So the view I’m most likely to accept is the one I’ve not been thinking about. So that’s the dualist half.

The other kind of view I take seriously is the panpsychist half where consciousness underlies all matter somehow. There’s this familiar point that physics just describes the structure of matter. It doesn’t really describe what it’s made of or its intrinsic nature.

So this is another place where you might want to appeal to consciousness and say, maybe physics or matter somehow intrinsically involves consciousness, whose structure is what the physics characterizes. But every time there's an interaction, say between two particles, it's actually some bit of consciousness doing the work.

It's a way-out, speculative worldview that sounds like the kind of thing people are going to be into after taking some psychedelics, but it does have some very attractive features. It's also got a big cost, which is: how do you get from consciousness at the fundamental level to the consciousness that we have?

That's the combination problem for panpsychism, which was introduced by William James. In recent years, as Arden mentioned, there's been this huge resurgence of work on panpsychism; a lot of young people these days have been pursuing panpsychist ideas. The big challenge is the combination problem.

If you can make panpsychism solve the combination problem, it becomes a lead contender. I think it’s fair to say no one has solved that one yet. So I guess I’d say for both panpsychism and for dualism, big attractions, but big problems.

Is consciousness an illusion? [1:34:52]

David Chalmers: The other view I've been taking very seriously lately is at the other extreme: illusionism.

The idea that consciousness involves some very deep-seated illusion that might be built into our brains, making us believe we have these special properties of consciousness even if we don't. I think it's not too hard to motivate the idea that there could be such illusions in a cognitive system. If you wanted to design an AI which knew things about itself, it'd be very natural to give it introspective access to its own states. And it'd be very natural for this to work in a way that made it believe, "Oh my God, my states feel so special from the inside", even if they don't.

Arden Koehler: Why would that be natural?

David Chalmers: I guess there’s a few different reasons.

One is: say you've got a system which can make perceptual distinctions from the inside, say between seeing red and seeing green. You could give it knowledge of the underlying physical structure of its sensors, so it says, "Ah, okay, my camera's receiving blah-blah-blah pixel values", but that's going to require highly theoretical knowledge of itself. So it might just make these brute distinctions and say, that's red there and that's green there. And how does it know there's red there or green there? Well, if it had a full model of itself, it might say, "Oh, because process blah, blah, blah". But it might be very natural just to give it a kind of direct access to this without knowledge of the underlying mechanisms, so it says, "Well, I don't know, that one's just red. It just seems red. And that one's just green."

Robert Wiblin: And what is greenness? It’s like greenness is just greenness, and you can’t push back behind that layer.

David Chalmers: Yeah.

Arden Koehler: So on this view, it’s sort of a shortcut, and the feeling that there’s something it’s like to be the system somehow comes from the fact that its self-access is simplified?

David Chalmers: Yeah. Well, at least the judgements that there’s some special quality here might somehow come from some architectural features.

A version of this has been put forward by the philosopher Wolfgang Schwarz, who’s tried to combine it with Bayesian epistemology. To be a Bayesian, where you update your credences in light of evidence, you need there to be some layer of evidence that you’re completely certain of, which gets assigned a probability of one. Traditionally that’s thought of as something like your perceptual observations, your experience. Now with an ordinary AI system, I don’t know, there are going to be questions: what is going to be that Bayesian level of evidence you update on?

It could just be something like inputs to your fundamental camera sensors or something. But the system might not have that level of detail in a model of itself. So Wolfgang Schwarz has argued that it makes sense from the inside to have this level of what he calls ‘imaginary foundations’.

“I’m just experiencing red. I’m just experiencing green”, corresponding to these different sensor variables, which the system will treat as if they were a separate realm, even though they’re in fact grounded in processes in the brain itself. And he’s argued that this need for a level of foundations would lead one to postulate a separate realm that would seem, to the subject, not to be reconcilable with its physical processes. Anyway, I’m fascinated by stories like this, which are tied to what I call the ‘meta-problem of consciousness’: the problem of why we believe there’s a problem of consciousness. Lately I’ve been trying to work through stories like that to see what might work.
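For readers less familiar with the Bayesian framework being referenced: a minimal sketch of a single Bayesian update, in which the evidence E itself is treated as certain (assigned probability one), which is the “foundational” layer of evidence under discussion. The function and numbers are illustrative assumptions, not anything from the episode.

```python
# Editorial sketch (not from the episode): one Bayesian update.
# The evidence E is conditioned on as certain -- probability 1 --
# which is the foundational evidential layer discussed above.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) by Bayes' rule, taking E itself as given."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# With an arbitrary prior of 0.5 and likelihoods 0.9 vs 0.2,
# the posterior is 0.45 / 0.55, or about 0.82.
posterior = bayes_update(0.5, 0.9, 0.2)
```

Schwarz’s point, roughly, is that a real system has nothing it can treat as certain in this way at the level of its own detailed mechanisms, hence the appeal of ‘imaginary foundations’.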

Then the question is, at the end of the day, just say you do have a story like that. What does that tell us about consciousness? Some people like, say Dan Dennett, go on to draw the conclusion that, “Okay, once you’ve got an explanation like that, then consciousness has been explained away as an illusion”.

Most people, or at least many people, find illusionism impossible to believe, because it just seems a datum that we’re conscious, and all you’ve done is explain some of the things I say about consciousness. You actually haven’t explained why I have the experience. To really get into that illusionist framework, you have to reject the idea that there is this datum of consciousness: all we have to do is explain why we think there’s this datum, not why there is one. Well, I guess I would say I find this a fascinating view. I don’t think it can be right, but I’m nonetheless compelled by it. So if I’m giving my overall credences, I’m going to give 10% to illusionism, 30% to panpsychism, 30% to dualism, and maybe the other 30% to... I don’t know what else could be true, but maybe there’s something else out there.

Robert Wiblin: So I guess… I know that among listeners there are advocates of lots of the different ideas we’ve mentioned, like panpsychism; there are probably lots of people who are into illusionism, or materialist reductionism of some kind.

There are definitely some dualists. Could you maybe go through and explain why each of these people should perhaps be more skeptical of their preferred view? What’s the big cost that comes with each of the dominant ideas?

David Chalmers: Okay, so for each of the ones I mentioned, I mentioned a cost. For panpsychism, one big cost is that it’s massively counterintuitive. I’m not sure I feel that one so strongly; the world is a weird place, after all. I think the big cost is the combination problem: how do the little bits of consciousness yield our consciousness? That’s, in a way, a bit like the original hard problem for panpsychists, and it’s not clear there’s a good solution. For the dualist, the problem is the interaction problem: how does consciousness play a role in the physical world, and how do we reconcile that with physics? For the illusionist, the problem is basically, “How on Earth could this be an illusion?” If the world ran as the illusionist says, then basically I wouldn’t be having any of this conscious experience. How on Earth could it possibly be like this?

Like what it manifestly is like to be a human being? It’s the problem of basically seeming to deny the data. So those are those three views. But then you mentioned other views, ones which I’m less sympathetic with, like materialist views and reductionist views. And here, I mean, illusionism is one materialist view, and you can see panpsychism as a materialist view, but what about boring reductionism without illusionism?

So that’s just the view that consciousness is perfectly real, it’s not an illusion, and we can explain it in physical terms. Well, I guess I kind of want to see that spelled out. Sometimes people just say things like, “Oh, well, it’s an emergent property. Consciousness emerges in the brain”. Emergence there is basically used as a kind of magic word for “somehow it happens and we have no idea how”. I don’t think emergence is much good unless you’ve got an explanation. It’s easy to see how, from the brain level, you can get explanations of functions, behaviors, integration, report, discrimination and so on. The question is: why, from all that, consciousness? If you’re an illusionist, you say there’s nothing else to explain; the sense that there’s something else to explain, that there’s an experience, is itself an illusion. If you’re not an illusionist, then you’re left saying, “Well, it just happens”. Now the trouble is that if you say it just happens, you might as well just be a dualist; there’s an explanatory gap there. So I guess the basic problem for any materialist view is the apparent explanatory gap between stuff about structure and dynamics and experience. Then there’s the move that denies there’s anything here to explain in the first place.

That’s not so much illusionism as saying, “I don’t even have the illusion of consciousness”. To that person, I don’t know what to say. Maybe in that case, there really are some zombies. But I’m suspicious that there are many people like that.

Arden Koehler: Just to make clear for our listeners, when you say zombie–

David Chalmers: Oh yeah. Zombies are beings who are basically a lot like us, but not conscious at all. And in the extreme case, the philosopher’s zombie is a being that’s physically, functionally, and behaviorally exactly like us. But not conscious at all, and philosophers like me use zombies for various argumentative purposes as a kind of thought experiment.

Here I guess the zombie I just alluded to wouldn’t quite be the classic case of a philosopher’s zombie, because the zombie I mentioned would be a little bit behaviorally different. It would go around denying the existence of consciousness. The classic philosopher’s zombie actually doesn’t have any consciousness, but it still behaves as if it did. But I have heard occasionally philosophers speculate that other people are actually zombies. There was a philosopher I met in Trinity College, Dublin one time who suspected many philosophers of being zombies. He was actually worried that I might be a zombie. He took me to lunch, asked me many questions, and at the end of lunch he said, “Okay, I think you’re not a zombie”.

I passed the test. I was glad. But anyway, the philosopher’s zombie… No one thinks those ones really exist. But to many people, they at least seem to be a coherent idea and they are one way to pose the problem of consciousness as the problem of why, as a matter of fact, aren’t we zombies? Why didn’t evolution produce zombies? Why did it produce conscious creatures?

Idealism [1:43:13]

Robert Wiblin: Yeah. So just going down the list of different possible positions, I guess there’s idealism. We haven’t talked about that, but it has a very long pedigree.

David Chalmers: I’m interested in idealism. Idealism can be understood as the view that the world is fundamentally mental. So physicalism says the world is fundamentally physical; that’s all there is at the fundamental level. Dualism says the world is physical and mental: there’s fundamental physics and fundamental mentality. Idealism says the world is fundamentally mental, and everything else is built up from that. Maybe the physical world is built up from the mental.

Standard panpsychism can be understood as a form of idealism, at least if you apply panpsychism to every physical property: space, time, mass, charge. If you say they’re all fundamentally mental, then maybe you could see the world as interactions among all these mental things at the bottom level. But there’s another version of idealism which says something like: the world is grounded in a cosmic mind, a single mind.

For Berkeley, one of the great idealists, it might’ve been the mind of God. But there’s also a version of this which says, “Take the world as described by quantum mechanics, as a giant wave function of the universe”, and now go panpsychist about that and say the wave function of the universe is actually fundamentally something mental: a mental state in a single cosmic entity. That leads to what I call cosmic idealism, and I think that’s an extremely way-out view. It’s also one worth taking seriously, but it suffers from its own version of the combination problem, what some people have called the decombination problem.

How do you get from that cosmic mind to our minds? Why should the existence of a giant mind necessitate the existence of our minds? That seems to be at least as hard as the original combination problem.

Robert Wiblin: Yeah. I guess a criticism I’ve heard of idealism is that it sure seems like there’s this separate material stuff that isn’t related to us and that seems to follow very predictable rules.

And wouldn’t it be a hell of a coincidence if none of it were real, and all that’s real is our minds? And yet it seems to have its own internal system that’s so coherent.

David Chalmers: Well, the classic version of idealism which some people associate with the term, including the one you’re discussing here, is something like observer-based idealism, where the whole world exists inside the mind of an observer, say inside my mind or your mind. Berkeley put this forward with the slogan ‘esse est percipi’: to be is to be perceived.

And then this version of idealism goes along with the idea that if it looks like a table to you, the table exists simply in virtue of someone having an experience as of a table. That’s not the form of idealism I like. I recently wrote an article called ‘Idealism and the mind-body problem’ saying that there’s that classical route to idealism, seeing everything as existing inside the mind of an observer. That view, I think, is very much subject to the problem you mentioned: what about all the regularities in your experiences? I look away from you and look back; I come back to the same place the next day and it’s still there. Don’t we need a world outside experience to make all that true? I mean, Berkeley appealed to the mind of God, but once you’ve got the mind of God, why not do it with an external world instead? So that’s not the route to idealism that I like. That’s what people sometimes call phenomenalist idealism.

I like a different route, on which mind underlies everything. The external world is real; physics is all real; it’s just all grounded in minds, maybe at the bottom level. For the panpsychist, for example, it might be that particles have very simple mental properties, and interactions among those minds ground the interactions in physics. Here, mind doesn’t ground reality through the ‘to be is to be perceived’ slogan at all. Rather, it’s interactions among minds and mental properties that ground the real interactions of physics. It’s a very different kind of idealism from that classical kind associated with Berkeley, and I think maybe it’s subject to its own objections, but different objections.

Robert Wiblin: So some listeners might have a kind of negative intuitive reaction to panpsychism. It seems like a bit of a crazy idea, at least in our culture, that there’s some fundamental consciousness at the atomic level, or that there’s consciousness all the way down.

It’s not only our brains that produce consciousness. I guess to give some of the intuition that I’ve seen you give in a TED Talk: we don’t regard it as super mysterious to suppose that there is mass, or charge, or movement, or space, and panpsychism just supposes that consciousness is another fundamental, primitive part of how the universe is composed, just like those other things that we’re more inclined to accept.

David Chalmers: I think this is one thing which is very culturally relative. It seems there are cultures where panpsychism is very natural and very easy to accept. Right now in our culture, at least, it seems to be something which is quite difficult to accept.

Maybe it’s that we’ve all absorbed this scientific view of things being built up from a very simple reductionist level of physics, where minds only come in fairly late in the day. I think part of what makes it so difficult is that some people associate panpsychism with the idea that very simple systems might be intelligent or be thinking, which seems like a crazy view. But once you start thinking of consciousness as something very simple, rather than something so complicated, then this becomes more attractive and more tenable. I guess we’re going to talk about consciousness in nonhuman animals sometime before too long, but I think there’s been a gradual trend towards expanding the circle of conscious creatures and conscious systems as we come to see consciousness as simpler and simpler. So it wouldn’t entirely surprise me if at some point in the coming years, panpsychism came to seem culturally more intuitive.

But it is certainly true that to the average person, the average science-based person, it sounds pretty wacko the moment you hear it, and that’s certainly an initial obstacle to overcome. Philosophers call this the incredulous stare. David Lewis said this about his theory that every possible world actually exists.

He said, “I can refute all the objections to this theory, but I cannot refute an incredulous stare”. You’ve got to find some other way to get past the incredulous stare.

Arden Koehler: Just to press back a little bit on the idea that panpsychism can seem more intuitive if we sort of analogize consciousness to mass or charge.

I mean, it seems like part of where people’s incredulity might be coming from is, “Well, those other things are used all the time in explanations of things besides themselves”. But so far, unless some of these other theories get worked out well, we don’t use consciousness to explain non-consciousness in the same sort of way. It does seem like there’s a disanalogy there.

David Chalmers: Yeah. This is a general worry for any theory involving consciousness: what role does it play? We do use consciousness to explain things naturally. We say people withdrew from the flame because they were in pain, or they saw colors and that led them to react in different ways. But it’s kind of hard to tell a theoretical story where consciousness explains things. In particular, at the level of physics, it sure looks like you don’t need consciousness to explain anything distinctive; in the physics, you can do all that structurally and mathematically. So some people think, “Okay, I could go for panpsychism if you found me some special force of consciousness at the bottom level”, but I don’t think that’s going to happen.

David Chalmers: If consciousness were to play that role anywhere, maybe it would be in quantum mechanics, but then that’s dualism, and I don’t think we’re going to find that. Rather, the idea is that consciousness is grounding all that mathematical structure. Stephen Hawking put this by asking: what’s the fire in the equations? You’ve got some equations describing some structure, but what is all that structure in? The panpsychist says it’s fundamentally grounded in mental properties at the base of it. There is this worry that if there are all these mental properties at the base, wouldn’t you expect them to have some more distinctive role than just grounding the equations of physics? Wouldn’t you expect more positive evidence for them?

Arden Koehler: That could just be because we’re used to the sort of structural and relational properties of physics. The entire point, I suppose, is that consciousness is filling in that gap of what the structural properties are-

David Chalmers: Yeah. Consciousness is giving it all reality in the first place. Although the structure and dynamics are something we’re very used to in a scientific context and can deal with, maybe consciousness is what it was about all along, and actually gives it all its sort of meaning or reality or whatever. Maybe that’s just a different kind of role, where we have to get over our mathematics-first conception of what matters in the physical world.

Integrated information theory [1:51:08]

Robert Wiblin: What about integrated information theory? I think I first heard about that 10 or 15 years ago. It was this up-and-coming idea about how mental properties are produced, something about integrating lots of different sensory inputs; I don’t know exactly how the math worked out. Then I saw some blog posts that seemed to be a devastating critique of it, by Scott Aaronson I think, or someone else. But I hear integrated information theory marches on. What’s the story there?

David Chalmers: So Giulio Tononi, an Italian neuroscientist, introduced integrated information theory maybe 15-odd years ago as a mathematical theory of consciousness, in which he associates consciousness with informational properties of systems. The centerpiece of it is a mathematical quantity he calls phi, a very complex measure of the integrated information in a given system, where a system is split up into units and connections. The idea is that the more phi you have, the more conscious you are, and any system with nonzero phi has some degree of consciousness. This has proved very popular, I think, especially among mathematicians and physicists interested in consciousness, because it’s like, “Finally, some math! Finally something substantial and formal that we know how to deal with”, and it does give you interesting mathematical approaches.

David Chalmers: People like, say, Max Tegmark, who’s a physicist and cosmologist, have really gotten into integrated information theory precisely because it gives you something mathematically substantial to deal with. And it’s been, to a certain extent, popular in neuroscience as well. It’s complicated, though, because it turns out that phi is almost impossible to measure in any physical system. It’s also computationally intractable, even in, say, a competent AI system: we can’t calculate phi for any system with more than about 15 units, so it’s very, very difficult to test directly and empirically.
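To give a rough sense of why exact phi is intractable: computing it involves a search over ways of partitioning the system, and even the smallest such search space, the set of bipartitions, grows exponentially with the number of units. The sketch below is just this piece of combinatorics, not IIT’s actual phi formula.

```python
# Editorial sketch (not IIT's actual formula): counting the bipartitions
# a phi calculation must consider. Splitting n units into two non-empty
# parts can be done in 2**(n-1) - 1 ways, so the search space roughly
# doubles with every added unit -- hence the ~15-unit practical limit.

def num_bipartitions(n):
    """Number of ways to split n units into two non-empty parts."""
    return 2 ** (n - 1) - 1

for n in (5, 10, 15, 20, 30):
    print(n, num_bipartitions(n))  # 15 units already means 16,383 splits
```

And the full phi computation is harder still, since each candidate partition requires evaluating the system’s cause-effect structure, not just enumerating splits.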

David Chalmers: Some neuroscientists are very skeptical about it for that reason. It is basically more of a philosophical theory than an empirically grounded theory, although there are things about it that are empirically suggestive. People have tried to use approximations to phi, and they’ve argued these do correlate with things like being conscious versus asleep, and with what’s going on in various patients with disorders of consciousness. So there’s some connection to the neuroscience, but it’s tenuous.

David Chalmers: That said, there has been a bit of a backlash against IIT in recent years. Part of it is that it just got so popular, and like any theory, there’s a backlash, especially in neuroscience, where people think it’s not well grounded in the science. But yeah, the other thing you mentioned is this blog post by Scott Aaronson, which pointed out that you can have systems with extremely high phi that nonetheless look extremely simple. Basically, a certain kind of complex matrix multiplier that multiplies two matrices in a natural way will have a certain kind of integration, and it looks like if the matrices are big enough, say a thousand-by-a-thousand matrix or a million-by-a-million matrix, simply multiplying two matrices will have phi as high as you like, which seems to have the consequence that this matrix multiplier could be as conscious as a human being.

David Chalmers: Which to Aaronson seemed like a reductio, and which to many people seems like a reductio. Interestingly, Tononi, responding to Aaronson, bit the bullet and said, “No, I think this big matrix multiplier is really conscious.” He said, “It’s not like thinking or anything, but it might be having a complicated perceptual state, like looking at a wall and having full visual consciousness of that wall.” Many people took that to be a reductio of IIT. For other people, who are sympathetic with the broad approach of IIT, maybe another possible moral is that it’s missing something. Integrated information theory has five axioms of consciousness, which it tries to turn into math to arrive at this mathematical quantity phi. I think you could easily make a case that there are at least a couple, two or three, central properties of consciousness that it’s missing, that ought to be built in as axioms.

David Chalmers: Once those were somehow translated into math, that would give us a refined measure, phi star, which maybe wouldn’t be nearly so liberal with consciousness and wouldn’t have this result. I think Aaronson’s result is certainly a big problem for the very specific formula that Tononi gives. My own view is that people shouldn’t be taking that specific formula so seriously in the first place. But it is interesting: there is a project of trying to come up with mathematical formulations, criteria, and measures of consciousness, where a certain mathematical quantity which we can, in principle, compute in a physical system will have some connection to consciousness.

David Chalmers: Another problem is that it just measures degree of consciousness, but consciousness is not unidimensional; it’s multidimensional in many ways. There is a version of the theory that addresses that. Anyway, I guess I would say the upshot of this is that it’s extremely early days for precise theories of consciousness. You should not take anyone’s precise theory too seriously at the moment, but it’s nice to see that, in principle, there is actually a project here. It’d be nice, over time, to see many more different mathematical theories of consciousness developed.

David Chalmers: There was recently a conference, actually, at Oxford in September, on mathematical models of consciousness, getting together 20 or 30 people with different mathematical approaches to consciousness to see what they could come up with. I couldn’t make it because I was teaching at NYU at the time, but the videos are online. I’m curious to see what progress came out of that conference.

Robert Wiblin: Integrated information… Is that a materialist theory of consciousness? Which category does it fall into?

David Chalmers: I guess I would call it a “scientific theory of consciousness”, in the sense that it’s neutral on the underlying metaphysics. Right now, if you look at the theories of consciousness coming out of the sciences, mostly they’re kind of neutral on materialism versus dualism versus whatever. They’re theories of correlations. They say, for example, consciousness goes along with such and such a process in the prefrontal cortex. Maybe there’s a neuronal global workspace in the prefrontal cortex, and that goes along with consciousness. Maybe it’s processes in the sensory cortex which give rise to certain re-entrant processes. These are basically theories of the physical correlates of consciousness.

David Chalmers: They don’t try to solve the hard problem. People don’t say, “Yeah, well, it’s obvious how this global workspace explains consciousness.” They’re rather theories of what kind of physical processes you have, at least in a human, when you’re conscious. IIT is like that, but just more general: a theory of the physical correlates of consciousness, of what kinds of physical systems will be conscious and how much consciousness they will have. If you want to, you can understand it materialistically, saying that’s all there is to consciousness, this information integration, and that it fully explains consciousness.

David Chalmers: Now Tononi himself doesn’t say that. He says he’s not trying to solve the hard problem. He takes consciousness as a primitive. He takes it for granted, and he wants to explore its properties. What would you need to have to get consciousness? You’d need this integrated information. You could understand it as a dualist theory. There’s integrated information, there’s consciousness, and what he’s proposing is a law that connects the two. Fundamental law. High phi, high consciousness. Maybe you can even understand that as a panpsychist theory. Tononi himself is neutral on the metaphysics. We had a workshop where we tried to press him on what’s the best underlying philosophical story, and he said, “It is what it is.” That was a bit unsatisfying.

David Chalmers: The best way to see it is as a theory of correlations, which is, at the moment, neutral on the philosophy. There’s been a lot of progress in the science of consciousness over the last, say, 25 to 30 years, but it’s almost all been at that level: correlations between physical processes and consciousness that are somewhat neutral on the underlying metaphysics, and on the kind of explanatory connection one would ultimately want in order to solve the hard problem.

Robert Wiblin: I’m just champing at the bit here to get onto the meta-problem of consciousness, but I have one more question on this topic. I know there are a lot of people in the audience who are very sympathetic to Daniel Dennett and his brand of illusionism, of denying that there is any fundamental consciousness, any substance that is consciousness. Maybe you can explain this longstanding disagreement between you and Daniel Dennett: what Dennett thinks, and why, even after all these years, he has not managed to persuade you.

David Chalmers: Yeah. Well, this is actually very closely connected to the issue of the meta-problem. The meta-problem, as I understand it, is the problem of why it is that we talk about consciousness, why we think we’re conscious, and why we think there’s a hard problem. The general idea is that even though explaining consciousness itself is a hard problem, explaining why we say these things might be an easy problem. Because, after all, that’s a matter of our behavior, our use of language: people writing down words in books, people making certain utterances. There might be a physical explanation of that.

David Chalmers: My read on Dan Dennett’s position... Dan has had a million different positions over the years, and he’s a bit of a moving target, but the core of his position, and the part I find the most interesting, is a kind of illusionism based on the idea that all we need to do is solve the meta-problem, and we will thereby dissolve the hard problem. Dan’s line is that we should look at the whole system objectively: if we can explain all the behaviors, all the things we say about consciousness, then we’ve basically explained everything there is to explain about consciousness.

David Chalmers: Dennett calls this third-person absolutism: what you need to do is explain all the properties of the system from the third-person point of view, and then you’ve explained everything. To me and to many others, this doesn’t seem fully adequate, because there seem to be first-person data about consciousness. It’s not enough to explain all the behavioral properties; you’ve also got to explain why we have these experiences. I guess Dennett’s line is to reject the idea that there are these first-person data, and to say that if you can explain why you believe and say there are those things, then that’s good enough. Dennett has pursued that line inconsistently over the years, but insofar as that’s his line, I find it a fascinating and powerful one. I do find it ultimately unbelievable, because I just don’t think it explains the data. But it does, if developed properly, have the virtue that it could actually explain why people find it unbelievable, and that would be a point in its favor.

David Chalmers: The work I’ve been doing lately on the meta-problem has been all around this vicinity: trying to look at what the best explanations are of why we say these things. Why hasn’t Dennett persuaded me? Because ultimately I think there are data here that no view like that can properly explain. But I at least find the strategy of trying to dissolve the data by explaining our beliefs about them a very attractive one. From my point of view, it just seems that it still ultimately doesn’t explain why it’s like this to be a human being, and this is why, once you really try to pursue this view, very few people actually end up being really strong illusionists.

David Chalmers: Panpsychism comes up against the incredulous stare. Illusionism comes up against arguably an even stronger incredulous stare. I think the kind of illusionism you need to dissolve the hard problem is a very strong kind. It basically requires saying you don’t really have these conscious experiences at all, you just seem to, and most people ultimately find that unbelievable. But maybe this is something where likewise people could become more open to it over the years, and especially if someone does tell a very nice, quasi-evolutionary neurobiological story of why our brains have these distorting self models. It wouldn’t surprise me at all if illusionism does come to be more widely accepted in the coming decades than it has been, at least so far.

Arden Koehler: Just to get really clear on the relationship between the hard problem and the meta-problem, is it your view that you could give a full and satisfying answer to the meta-problem? Something that just explains all of our utterances about consciousness and all of our beliefs about consciousness, and even why we believe there’s a hard problem, but still not have answered the hard problem, or even not have made progress on the hard problem? Or do you think if you were to fully answer the meta-problem, you will have answered the hard problem too?

David Chalmers: I am certain that fully answering the meta-problem will at least give us very deep insights into the hard problem. In fact, years ago, when I first started thinking about consciousness seriously, this was the number one way I was thinking about it. It was tied to the thought: “Well, I’m conscious. I think about consciousness. I talk about consciousness. There’s got to be an explanation of that, and whatever explains that has to be very strongly tied to consciousness itself”. At the very least, focusing on the meta-problem is going to give us insights into the hard problem, so I’m really all in favor of thinking substantively about it as a kind of empirical research program which is going to connect very deeply to this philosophical issue, which is a dream in philosophy. But the hard line is that coming up with some story that explains the things I say about consciousness will fully explain consciousness, and there’ll be nothing left to explain.

David Chalmers: That, I think, basically requires the illusionist view that it’s an illusion and all you have to do is explain the illusion. I think that’s a great strategy in many cases. Maybe people have tried to bring this to bear and say, in the case of God, explain why people believe in God. Maybe that ultimately then reduces our confidence that there is God and so on. With consciousness though, it’s got this status, apparent status at least, as a datum that many things don’t have. At least my view has always been that prima facie we need to do more than just explain the things we say. We need to explain why we have these experiences. To reject that line, you have to basically deny the apparent datum that we’re having those experiences.

David Chalmers: To me, merely explaining the things we say and believe about consciousness does not put one in a position to reject the datum in the way that it might with God, because there’s still the question, why should it be like this? The only response to that can be, well actually no, it’s not like that. Do you really believe it’s like this? At least at the moment, I find that too unbelievable to accept. So that’s to say that I find the hard line, that this would dissolve the hard problem, the most interesting and promising reductionist strategy on the hard problem, but it’s not one that I can accept. I’ve gradually gotten more sympathetic to it over the years, even though I think I’m committed to denying it, partly because of this kind of higher-order humility. Philosophers disagree, a lot of us are getting it wrong. What could I be fundamentally getting wrong as a starting point?

David Chalmers: Maybe it’s that. Maybe there’s some way that illusionism could turn out to be true in a way that my mind just finds it impossible to come to grips with, and maybe that’s why I give it this 10% credence, but I also think maybe if somehow I do feel like there’s more work for illusionism to do, it’s not just enough for them to explain the reports, the things we say, whatever, and then that’s it. We need to somehow explain why having a brain like that would somehow seem to be lit up in the way it is. Not what would merely make us say that it is, but in some stronger sense would produce situations like the situation that we’re actually in.

David Chalmers: Maybe there’s something that an illusionist could say that would still fall on the reductionist side, but that would go beyond what illusionists have done to date. If that could be done, maybe that’s the kind of thing I could, in principle, be open to, that would ultimately leave me open to dissolving the hard problem, but I think that would require some developments that go way beyond where we’ve gotten to so far. Maybe that would be like a meta-problem plus: A) explain all those reports and so on, and B) have some insight along the way that’s somehow connected very deeply to our sense of consciousness. Maybe something could emerge from that that could ultimately convince me to dissolve the hard problem or take a reductionist/solutionist line. But, I’m not there yet.

Moral status and consciousness [2:06:10]

Arden Koehler: Okay. We’ve been focused pretty much exclusively on theoretical, philosophical topics. I want to turn a little bit to ethics without leaving consciousness. A lot of people think that consciousness is the basis of all value and a necessary condition for a being having moral status. You can step on a leaf or something because it’s not conscious, and that means it doesn’t matter morally. But there are a lot of on-the-ground disagreements about which beings are conscious, and the answers here seem to really matter. Whether fish are conscious, or insects are conscious, seems really important for how we should treat them. Just to get a flavor of some of the controversy here: smaller and less complex animals, which some people question whether they’re conscious, like I said, fish or insects; AI, which we’ll talk about later, either in more rudimentary forms or in more advanced forms; simulated beings, so beings in simulations; us, if we are in fact in a simulation, which we’ll also talk a bit more about later; as well as other out-there ideas about things that are conscious, like current computer programs, large systems like the earth or the solar system or governments-

Robert Wiblin: Groups of people.

Arden Koehler: Groups of people, yeah. Or even just everything. Maybe that’s not so out there, if panpsychism is true, but objects, plants, so on. Before we get to thinking about the implications of particular theories of consciousness for this question, we’re obviously very uncertain about this. Do you have any sense of what we should do in the face of this uncertainty, what we should do to make somewhat informed guesses about which beings are conscious, and to what extent?

David Chalmers: Obviously for practical purposes, this is extremely important. Maybe just to underline the first thing you said, it does seem to be very widely accepted, both among people in the effective altruism community and more generally, that consciousness is very central to moral status. One very strong view is that consciousness is the sole ground of moral status and value, and all morally relevant value ultimately derives from states of consciousness, whether it’s suffering or pleasure or states of consciousness more broadly. A weaker view is that, at the very least, consciousness is required and plays a central role. Given all of that, it just seems to follow straightforwardly that, if you’re trying to make as good a world as possible, you need to think very, very hard about consciousness. Both what systems are conscious, what kinds of consciousness are conducive to the good, and how those kinds of consciousness are distributed and how we can change their distribution.

David Chalmers: I would love to see a whole bunch of people motivated by effective altruism come to the study of consciousness to help us figure out those questions. And likewise, people moving in the other direction. Specifically on the question of the distribution of consciousness, we had a conference on animal consciousness at NYU a couple of years ago where these questions were being discussed, both by people interested in the theoretical side, the theoretical philosophy and the science, and by people interested in the ethical and activist issues about the treatment of animals. Yeah, there’s certainly not any consensus about exactly which animals are conscious, but there very clearly is a trend towards being more liberal in ascribing consciousness to animals, among scientists, among philosophers, and among people generally.

David Chalmers: I remember when I first got started in this field about 30 years ago, when people talked about which animals were conscious, there tended to be, okay, for sure humans. There’s some debate about whether any nonhuman animals were conscious, but most people were prepared to extend consciousness to primates, to other mammals, sure. Dogs and cats, maybe. But once you get beyond that, big, big question mark. Mice, who knows? Over time, it seems we’ve now gotten to a point where practically almost everyone seems to take it for granted that just about every mammal is conscious, birds are conscious, that fish are very, very likely conscious, and now the debate is over insects. Ants and flies, or maybe worms, are they conscious? And there’s quite a lot of people saying they are, but there’s some debate, and then you’ve even got people saying, are plants conscious? And there’s quite a lot of people in favor of plant consciousness.

David Chalmers: That’s interesting as a trend. It’s interesting also to think about what underlies that. Is it greater scientific appreciation of the capacities of animals? Maybe in part. Is it partly to do with verbal shifts in what we mean by consciousness? Maybe in the old days people tended to use the word consciousness more for self-consciousness. Now they use it more for phenomenal consciousness. That might also help explain why it’s gotten a bit more liberal, but also I think going along with it is a gradual evolution towards more liberal views of what systems are phenomenally conscious. In the old days, people used to think, well, this would require pretty complex capacities, and now it starts to look as if for every complex capacity anyone suggests is required for consciousness, there’s pretty good reason to think you could have consciousness without it.

David Chalmers: An extreme case would be language. People used to think, oh, you need language to be conscious, but now it looks like there’s pretty good reason to think that prelinguistic humans are conscious, and that primates without language are conscious, and so on. And likewise for all kinds of complex capacities: for very few complex capacities has anyone made the case stick that they would be required for consciousness. That’s led, I think, to a much more liberal view, seeing phenomenal consciousness as something relatively simple and undemanding, and that has been a very big trend in the field. But all this is mostly sociology so far, and there are some disagreements.

Higher order views of consciousness [2:11:46]

David Chalmers: There are people who have higher order views of consciousness, where consciousness requires, for example, higher order states about your mental states. It’s a bit like self-consciousness, but directed at your own mind. It’s not just I’m seeing this, but I know that I’m seeing it. That might seem like a demanding capacity. Maybe humans or primates and some other mammals can have it, but it starts to look unclear that fish or insects have that. If you think that a higher order theory of consciousness is correct, then you’re going to maybe be less liberal in ascribing it to animals.

David Chalmers: There is an ongoing debate about whether fish feel pain, which really seems to come down to: do they subjectively experience pain? The majority seem to say yes, but there are some people who say no. Usually the people who say no say so for these broadly theoretical reasons. A lot then comes down, in this case, to settling the question between theories of consciousness, which is a tough theoretical question in philosophy. I think there are pretty good reasons to reject higher order theories of consciousness and to accept what people call first order theories of consciousness, for various reasons. It’s hard to see why these higher order states would be required, and it seems a weird thing to suppose that we have all these higher order thoughts all the time.

David Chalmers: Some people think there’s gradually empirical evidence in humans against these theories that say people are conscious despite not having activity in the brain where the higher order stuff is going on, but actually that’s still very much up in the air. Actually, in two weeks at NYU, we’re hosting a workshop to try and see if anyone can devise an experiment to decide the issue between first order and higher order theories of consciousness. The Templeton Foundation has recently allocated 20 million dollars to seeing if people can do this in general, take the leading theories of consciousness and see if people can come up with experiments to decide the issue between them. The first one was a project came out of a meeting in Seattle I went to last year, to try and decide between global workspace and integrated information theories of consciousness.

David Chalmers: 15 of us sat around a table for two days and thought about it. I was very skeptical that we’d come up with something, but we came up with something at the end of the day: an experiment which will not decisively settle the issue between the two, but which nonetheless involves some experiments about where in the brain there’s going to be activity associated with certain kinds of consciousness-involving tasks, where it looks like the theories make different predictions. The theorists were asked to get their predictions on paper. They’ve got all the predictions on paper, the report’s now being registered, and over the next year or two, people are going to perform the experiments.

David Chalmers: Okay, that’s exciting potential experimental evidence to bring to bear. In two weeks we’re going to try and do the same thing for first order and higher order theories of consciousness, and see what kinds of, for example, neurobiological experiments might begin to decide this. One thing you can try to do is to run experiments involving forms of consciousness where no one is reporting on their consciousness, and see whether the activity you get is just in the sensory areas of the brain, or in the higher, cognitive areas of the brain. That’s the kind of thing people can try to test.

Robert Wiblin: Can you give me the intuition for why people believe in higher order theories of consciousness? Because I just feel like I don’t get it.

David Chalmers: Here is one very simple intuition: any conscious mental state is one that you’re conscious of. A conscious state is one where there’s something it’s like to be in that state, and many people think, well, if there’s something it’s like to be in a state, you’re somehow aware of that state. And being aware of that state seems, on the face of it, to involve higher order awareness of that state. There’s the state itself, like being in pain, and then there’s being aware of the pain. And that gives you something higher order. Maybe it’s not yet higher order thinking, but at least some kind of higher order awareness or higher order perception, a mental state about a mental state. That’s at least what people appeal to.

Arden Koehler: I feel very unsympathetic, because it just seems like that’s how we know we’re conscious, not what it consists in to be conscious.

Robert Wiblin: And why don’t they say it’s the third level, where you have to be conscious of the consciousness of the consciousness? Is there a reason why it should be two?

David Chalmers: Well actually different people go different ways here, but the most famous version of this is probably David Rosenthal’s higher order thought view, and he’s very clear that for him, the higher order thoughts themselves don’t have to be conscious. They are in fact typically unconscious higher order thoughts. You can say you have unconscious awareness of your mental states that makes the lower order states conscious, not the higher order states.

Robert Wiblin: The top level is never conscious, but all of the ones below, they can be?

David Chalmers: On this view, yeah. There’s the alternative view that the top level is conscious too, and you avoid the regress somehow, either by having infinite levels or by having… To be aware of X is to be aware of being aware of X. Awareness has awareness of awareness somehow built into it. I’m not actually… maybe there are ways of making that one fly, but that makes higher order awareness much more deflationary than on the higher order thought view. It’s no longer clear that it’s going to require a whole separate cognitive system of higher order states. Basically you might have first order awareness, along with the claim that first order awareness somehow makes itself available to the system to be an object of awareness.

Robert Wiblin: Why don’t you buy it, in a nutshell? Why don’t you buy the higher order theories of consciousness?

David Chalmers: My intuition for a start is that consciousness involves awareness of the world. It’s a way of being aware of things. It’s not primarily states that we’re aware of, it’s things in the world that I’m aware of. I can be aware of my hand or something consciously. That’s not a matter of my being aware of my perceiving a hand. To me, that’s a special kind of consciousness, the higher order thing. I do feel there’s some force to the objection that when I consciously perceive my hand, I’m somehow aware of it. There’s some very light awareness we have of perceiving the hand, but I don’t think it’s anything nearly as robust or as rich as a thought. There’s a kind of cognition, maybe some basic form of awareness, that’s built into consciousness itself. The philosopher Brentano, back in the 19th century, said, “All awareness involves some background awareness of awareness.” I can feel myself sympathetic with a view like that, but that’s not going to place the same demands on consciousness that the higher order thought view does.

David Chalmers: The higher order thought where you need a whole separate thought about your perception seems to be very demanding. Maybe flies or mice are just going to be incapable of having such sophisticated cognitive states, whereas if consciousness merely involves some very simple background awareness of awareness that’s built in to first order awareness, maybe we could make sense of the views that even relatively simple animals have that. I’m most inclined towards a first order view where consciousness is just a matter of being aware of things in the world in the right way. And if that’s the case, then it’s not clear at all that this places strong restrictions on the class of creatures that could have it.

David Chalmers: My sense is that the evidence in the field is moving in the direction of these first order theories of consciousness, and therefore is leading us towards accepting a more liberal view of animal consciousness. I’m inclined to be pretty confident myself, for these reasons, that at the very least, say, birds and fish, and quite possibly insects, have what’s required for consciousness. But then there’s still a big broad area of uncertainty, which might finally get me to answering your question, which was about what you should do in the face of that uncertainty.

David Chalmers: There is massive uncertainty here, and I think one should be open to it. What should you do, theoretically? I don’t know. It’s easy for the theorists to say, “Leave all options open, and explore them; distribute your credences over the evidence as best you can”. We do have epistemological problems about consciousness in general, because we have no direct measures of consciousness. I once talked about a consciousness meter that you could just wave at a system and it would display its state of consciousness for you, but that would make the science of consciousness easy. We don’t have one, because consciousness is not like that. It’s relatively private.

David Chalmers: We’ve got ways of measuring it, but all those ways of measuring it require assumptions or theories. For humans, we use reports: we ask people, and we trust that by and large, they’re reporting their consciousness correctly. For nonhuman animals, we don’t have reports. We just have very indirect measures. The epistemological situation is shaky, but for theoretical purposes, I’m a philosopher. I’m used to that. That’s fine. Now for you guys, for effective altruists who are trying to change the world, interested in minimizing animal suffering and so on, then I think obviously it’s a much trickier question.

David Chalmers: I guess there was a view of, what is it? The precautionary principle, which says we ought to err on the side of avoiding harm? Maybe that’s a reason for giving, at least in determining your actions, an awful lot of weight to the possibilities where consciousness is more extensive and suffering is… If you’re 50/50 on whether fish feel pain, then be inclined to give a lot of weight to the possibility that they do feel pain in determining your action. But here’s where I hope that the effective altruists have thought about this a lot more deeply than I have.

The views of philosophers on eating meat [2:20:23]

Robert Wiblin: Well, we’ve thought about it a lot. I’m not sure whether that has borne very much fruit, from that point of view. Yeah, it’s interesting. On the play-it-safe approach, I think I saw a survey of philosophers which suggested that the fraction of philosophers who thought it was wrong to eat meat in the current-day situation was in the high 80s, which is quite funny when lined up against your survey suggesting that only 80% of philosophers have confidence in the existence of an external world. So possibly there are some people who don’t think the external world exists, but think it’s nonetheless wrong to hurt the animals in it.

David Chalmers: On the new PhilPapers survey, we’re going to have a vegetarianism question. Omnivore, vegetarian, vegan. What do you think is correct? We’re also going to have a question about what systems do you think are conscious, roughly down from humans to particles with a few stops, and we’ll be able to see how those things go together, which might be interesting.

Robert Wiblin: Yeah, I guess you would imagine that many of those philosophers are not confident that animals are conscious, or at least not all of the animals that we eat are conscious, but they’ll be saying, “Wow, if it’s 50/50, then it seems like a much better bet to stay safe and not to treat them badly”.

David Chalmers: Yeah. I think many philosophers would think that theoretically. Even I’m probably inclined to think that theoretically. Nonetheless, I eat meat, so my behavior and my theoretical judgments here are not fully aligned. It’s not the only domain where that’s true.

Arden Koehler: It sounds like the survey asks, “Do you think it’s wrong to eat meat?”, not “Do you eat meat?”

Robert Wiblin: Yeah, I guess the trickier cases are ones where the trade-offs are a bit more stark, because it’s relatively easy to not eat meat, at least in a principled, philosophical sense. But there are other cases where you’re deciding between helping one group and helping another group, and there are potentially big costs to diverting resources to one group and not another. There, I guess you would ideally have well-calibrated probabilities about whether they suffer and how much they suffer relative to one another. I’m sure you’re familiar with Luke Muehlhauser doing this very lengthy-

David Chalmers: Yeah, that’s the thing I’m the most familiar with here, because I talked with him. He wrote this big long report on animal consciousness, and I talked quite a lot with him while he was coming up with that report. We talked a lot about what the different criteria are for ascribing consciousness to animals, and which ones are well established and which are not. There’s nothing which is tremendously well established-

Robert Wiblin: But there’s things, I guess, that seem correlated somehow.

David Chalmers: Yeah, I mean that report is our best one. I guess one question is whether there’s anything which is a definitive sign of a lack of consciousness. There’s almost nothing that seems to have that status. As you say, there’s no complex capacity X such that we can be confident, I think, that X is required for consciousness. I would say that, because I think panpsychism is coherent. But I think we’ve got nothing in our evidence that clearly rules out extending consciousness to almost any system. We’re not, I think, going to be in the position anytime soon of saying there’s some capacity required for consciousness that therefore definitively

David Chalmers: … rules out that fish are conscious. We might get to the point where there are some capacities that we are pretty confident about do suffice for consciousness. Like most people think language, and report, and complex behavior, and flexible decision making are, at the very least, very good signs of consciousness. There are philosophical questions about why that’s so, but most people seem to be prepared to take those as pretty good positive signs of consciousness.

David Chalmers: But then as the capacities get simpler and the decision making gets more and more reflexive, then … I don’t know. I mean, I guess, morally, the situations which are most … the animals that get eaten the most are, what, mammals, birds and fish?

Robert Wiblin: Well, in terms of raw numbers, I think tiny fish are a lot, and then there’s, I guess, chickens among, yeah.

David Chalmers: I think right now, among scientists and philosophers, there’s a pretty strong consensus that birds like chickens, and fish are conscious. Of course, there’s a lot of variation among, say fish.

Robert Wiblin: And then there’s a second question: assuming that the amount of consciousness, or ability to feel pain and pleasure, is non-zero, it could still nonetheless be quite a bit less than what a human has, simply because their brain is smaller, say.

David Chalmers: Yeah. Then of course, this raises questions of what the morally relevant dimensions of consciousness are. Is it just pain and pleasure? And then, is it pain and pleasure of a certain kind? Is a fish feeling, what is for that fish extreme pain as morally significant as a human feeling extreme pain?

David Chalmers: One view would be that, somehow, there’s something different about the character of those pains in humans. It’s just more intense or worse for us. Another view would be that, in humans, pain has more knock-on effects, say on your thoughts, on your attitude towards life in general, some of which don’t have analogs in the case of fish. And if you’re inclined to think that the morally relevant features of consciousness include not just the pain in itself, but things like thoughts and memories and overall emotional and cognitive attitudes, then you might think that adds a whole moral dimension in the human case that’s not present in the fish case.

David Chalmers: So I think around here, one has to think fairly hard not just about which animals are conscious, but about which animals have what kinds of consciousness, and what kinds of consciousness are morally relevant.

Arden Koehler: Do you think it’s accurate to say that, although people are pretty unsure about what’s conscious and to what degree, in what way, people tend to think, “Well, the more complex, the more behaviorally similar to humans and the bigger the brain, the more likely the thing is to be conscious, and in a morally relevant way.” And do you think that’s justified?

David Chalmers: I think that’s right. And I think to some extent it’s justified. We’re, again, on epistemologically very shaky ground here, so maybe that could be wrong. But the one case of consciousness that we know about, it seems, is our own case, the human case. Or even just my own case; typically we’re prepared to extend it to other humans, because they’re so much like us.

David Chalmers: I think it’s natural to say, in thinking about, say, other primates, that they’re like us in relevant respects. Yes, they don’t have this and this and this, but there’s not much reason to think that this and this and this would be required for consciousness. So we say, “Sure, they’re conscious.” The further we get from our own case, the more epistemologically shaky it gets. It seems like, in particular, as you move down the scale of capacities and of complexity, it becomes harder and harder.

David Chalmers: There’s one argument for extending it further. Take away a certain capacity, X, and we say, “Look, it’s very plausible that X is not required for consciousness. So although these systems don’t have X, that’s probably not a barrier to their consciousness. They’re still going to be relevantly like us. Therefore they are conscious.”

David Chalmers: Now the only trouble is, of course, we could be wrong. Maybe X could be required for consciousness. So the more steps we have like that, even if for every capacity X we remove we’re 0.99 confident that we’re probably not changing consciousness, do that enough and all those 0.01 bits of doubt might start to add up. By the time you get down to some very simple system like a fly or a fish, you might think that although there was no individual capacity they lack such that we have much reason to think it’s required for consciousness, it could be that somehow we’ve missed a crucial thing. Maybe that would be one way to rationalize the kind of reasoning you’re talking about.
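The compounding-doubt argument here is just repeated multiplication: if each removed capacity independently has a 99% chance of not being required for consciousness, the probability that the conclusion survives every removal is 0.99 raised to the number of removals. A minimal sketch, where the 0.99 figure comes from the conversation but the step counts are purely illustrative:

```python
# Each step removes one capacity we are 99% confident is NOT
# required for consciousness. Assuming the doubts are independent,
# the chance the argument survives all removals is the product
# of the per-step confidences.
def survival_probability(per_step_confidence: float, steps: int) -> float:
    return per_step_confidence ** steps

for steps in (1, 10, 30, 70):
    p = survival_probability(0.99, steps)
    print(f"{steps:>2} capacities removed -> confidence {p:.2f}")
# 10 removals leaves roughly 0.90; by around 70 removals the
# accumulated 0.01 doubts drag confidence below one half.
```

The independence assumption is doing real work here; if the doubts are correlated, the erosion can be faster or slower, but the qualitative point that small doubts compound still holds.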

Arden Koehler: Yeah, I mean it seems in one way surprising, though, that we would put so much weight on capacities and complexity, especially since, as you were saying earlier, there’s been a move toward thinking of consciousness as something simple, as just phenomenal consciousness: just, well, there being something it’s like.

Arden Koehler: So is there a way of spelling out more why it seems like it is complexity in particular, and cognitive capacities, that makes us feel like that’s … “Well, nothing is great evidence for consciousness but that’s in the ballpark,” or that’s the best we can do or something like that.

David Chalmers: Well, I guess maybe there’s some reason for thinking consciousness is somehow tied to capacities at least. I mean, if you’re a total panpsychist, you’ll think everything is conscious and capacities have nothing to do with it. But if you think some systems are conscious and some are not, a very natural place to look is something about the capacities of those systems to do certain things that go along with consciousness. That doesn’t yet show that it’s a complex capacity; maybe it’s some pretty simple capacities, say ones that plants have, like abilities to adapt and to grow and to process information. But anyway, once we’re in the realm of capacities and we don’t know what the capacities are, it’s at least a view worth entertaining that complex capacities may matter. I think that it’s probably simpler capacities, rather than complex capacities, that matter for consciousness.

David Chalmers: But it’s hard to be confident of that. I do think that over time, the science and the philosophies evolve in that direction for any complex capacity that someone might say is relevant to consciousness. People come back and say, “Not much evidence for that.” But I think in general, given our epistemic humility about consciousness, there’s so much that we don’t understand that I don’t think we should rule out the idea completely that it’s a complex capacity. And that would make me at least so that I’m pretty well a hundred percent confident that humans are conscious. I’m very confident that other primates are conscious. Am I that confident, am I 90% confident even that flies are conscious? No, probably not. Even though I’m inclined to think they are, but I think even going to 90% confidence would be too high, given all the things that are lost in moving from us to flies.

Robert Wiblin: Yeah. I guess my probability would be a decent amount less than 90%, but it just seems like you should avoid either extreme in that case. It shouldn’t be close to a hundred percent, and it shouldn’t be close to 0% either, just given how uncertain we are about what makes this true.

Robert Wiblin: I think something I’m realizing is that I guess I thought the situation would just be hopeless: given that we can’t even agree what consciousness is or what the underlying basis for it is, how could we possibly think we can put probabilities on the likelihood of different creatures being conscious?

Robert Wiblin: But it seems like to some extent you can set aside that question and there could be an agreement across people who have different views on the hard question of consciousness and on the easy question. They might then agree on what are the actual factors in practice that matter. Like, is it self awareness or is it ability to see the environment and whatever else? Even if you could get agreement on that between the panpsychist and illusionist in some sense.

David Chalmers: Yeah, I think panpsychism and illusionism is a tricky case.

Robert Wiblin: Yeah, sorry. I was trying to choose two extremes there. Maybe that’s too extreme-

David Chalmers: Panpsychists do make claims about the distribution of consciousness, after all. But many broad views about the hard problem, say materialism and dualism, are to some extent orthogonal to the questions about the distribution of consciousness. Some scientific theories make predictions about the distribution of consciousness, but many of them are also somewhat neutral. It’s possible to have reasonable credences in which theories are correct. And insofar as these theories do make different predictions, you can just go through your credences in those theories: if I’m 10% confident in a higher-order thought theory, that 10% ought to weigh in to restrict my credences about, say, fish consciousness a bit.

David Chalmers: It’s certainly true that where views on the hard problem are concerned, they’re somewhat orthogonal to the questions about the distribution of consciousness. Materialists and dualists regularly get into debates about whether fish are conscious or not, with many of the same considerations arising. Basically what matters there is the correlations between consciousness and physical processes. And we can have scientific or philosophical reasons for believing in correlations here, which are somewhat independent of views on the hard problem.

David Chalmers: I do think it gets a bit tricky for some philosophical views. If you’re an illusionist who thinks that, in fact, no system is conscious and it’s merely an illusion of consciousness we have, then I think this ought to really affect how you think about the distribution of consciousness. Actually, this came up in my conversations with Luke Muehlhauser about this. Because he said at the same time he wants to be an illusionist about consciousness, and he’s got all these complicated credences about the distribution of consciousness.

David Chalmers: I think if you’re really a thoroughgoing illusionist, who thinks we’re not really conscious at all, we only have the sense that we’re conscious, then I don’t think all these intuitions about the distribution of consciousness should really count for very much. A) Strictly speaking, these animals aren’t conscious at all; nothing is conscious. B) If it’s the illusion of consciousness that matters, well, these animals may not have the illusion.

David Chalmers: So I do think that many of these questions about the moral status of consciousness may have to be rethought if you’re inclined towards a strong illusionism. Maybe that’s something we can talk about in depth.

Arden Koehler: So you’ve said that you think it’s plausible that consciousness can come in degrees. Some people think some beings are more conscious than others: maybe human beings are more conscious than many nonhuman animals, and very small animals, like insects, are less conscious, if conscious at all, than larger animals, roughly. I’m wondering if you can explain what you think it means for something like consciousness to come in degrees? You might imagine that consciousness coming in degrees is sort of like the difference between being half awake and fully awake or 75% awake. That’s an experience that we all have. Is that a misleading analogy or is that a good analogy? How would you explain it?

David Chalmers: Yeah, I think I’d want to be cautious about saying consciousness comes in degrees. Because I don’t think there’s a single uni-dimensional scale which is the canonical scale of consciousness on which we can order every possible total state of consciousness or every possible conscious creature at a time. Rather, I think consciousness is multi-dimensional, and there are many different ways to sort states of consciousness along scales, that is, to project these multi-dimensional states onto single scales. I think there are really probably a bunch of different scales here. The amount of attention you’re paying, attentiveness: at least within human consciousness, that’s a relevant dimension. The amount of information contained within an individual state of consciousness: there’s some sense in which maybe you could measure the information, say in a visual state, in terms of a measure like bits. Maybe the number of kinds of consciousness, where kinds of consciousness will have to be individuated a certain way. For example, we have visual consciousness and auditory consciousness and tactile consciousness and so on. We can certainly imagine beings with more sensory modalities than us and fewer sensory modalities than us. And we have cognitive consciousness, whereas it seems that some creatures have merely sensory consciousness but not cognitive consciousness.

David Chalmers: So I guess I’m inclined to think there are lots of different ways to put scales on states of consciousness, and no single canonical way. But nonetheless, it may well turn out that once we compare humans to mice or flies or whatever, humans may well have a higher degree on many, many of those scales. Although it’s entirely possible there are some ways of ordering states of consciousness so that mice come out having a higher degree than humans.

Artificial consciousness [2:34:25]

Arden Koehler: So you said elsewhere that if more fully autonomous artificial intelligence comes around, then we might have to start worrying about it being conscious, and therefore presumably worthy of moral concern. But you don’t think we have to worry about it too much before then. So I’m just wondering if you can say a bit about why, and whether you think it’s possible that programs or computers could become gradually more and more conscious and might … whether that process might start before they are fully autonomous.

David Chalmers: Yeah, that’s an interesting point. And I guess one would expect to get to conscious AI well before we get human level artificial general intelligence, simply because we’ve got a pretty good reason to believe there are many conscious creatures whose degree of intelligence falls well short of human level artificial general intelligence.

David Chalmers: So if fish are conscious, for example, you might think that if an AI gets to the degree of sophistication and information processing, and whatever the relevant factors are, present in fish, then that should be enough. And it does open up the question as to whether any existing AI systems may actually be conscious. I think the consensus view is that they’re not. But the more liberal you are about ascriptions of consciousness, the more we should take seriously the chance that they are.

David Chalmers: I mean, there is this website out there called ‘People for the Ethical Treatment of Reinforcement Learners’ that I quite like. The idea is that every time you give a reinforcement learning network its reward signal, then it may be experiencing pleasure or correspondingly suffering, depending on the valence of the signal. As someone who’s committed to taking panpsychism seriously, I think I should at least take that possibility seriously. I don’t know where our current deep learning networks fall on the scale of organic intelligence. Maybe they’re at least as sophisticated as worms, like C. elegans with 300 neurons. I take seriously the possibility that those are conscious. So I guess I do take seriously the possibility that AI consciousness could come along well before human level AGI, and that it may exist already.

David Chalmers: Then the question though is, I suppose, how sophisticated the state of consciousness is. If it’s about as sophisticated as, say, the consciousness of a worm, I think most of us are inclined to think, “Okay, well then that brings along, say, some moral status with it, but it doesn’t give it enormous weight in the scheme of conscious creatures compared to the weight we give humans and mammals and so on.” So I guess then the question would be whether current AIs get a truly sophisticated moral status, but I guess I should be open to them at least getting some relatively small moral status of the kind that, say, worms have.

Robert Wiblin: So maybe this is getting outside your area of expertise, but with current ML systems, how would we have any sense of whether the affective states are positive or negative? It seems like once you have a reinforcement learner, I guess on average, does it get zero reinforcement because it just has an equal balance of positive and negative reinforcements? And is there some way that you could just scale all of them up to be more or less positive? Or does that even mean anything? Like you just increase all the numbers by a hundred. How would that help? It raises this issue of the arbitrariness of the zero point on this kind of scale of goodness of the states.

David Chalmers: Yeah. This is getting to issues about value and morality that do go beyond my expertise to some extent. We’ve got absolutely no way right now to tell exactly what reinforcement learning systems might be experiencing, if anything. But if you were inclined to think they’re experiencing something and that they’re experiencing something with valence, I suppose then they’d be having a mix of positively valenced reinforcement and negatively valenced reinforcement, and therefore a mix of very simple precursors of, say, pleasure and suffering: proto-pleasure and proto-suffering. Then a lot’s going to depend on your ethical theory. If you’re feeling pleasure half the time but suffering half the time, is that net good? Is that net bad? I don’t know. If you ask me, I think that’s net bad, because all that suffering tends to outweigh the pleasure, but maybe there’s weights on the scale.

David Chalmers: At this point, though, I should say that it’s by no means obvious to me that pleasure and suffering, that is, valenced states of consciousness, are the ones that are relevant to moral status. I know people quite often take that view. I’m inclined to think that consciousness may ground moral status in some cases quite independently of its valence. Even beings with unvalenced states of consciousness could still have moral status.

The zombie and Vulcan trolley problems [2:38:43]

Arden Koehler: Yeah, so you write about a type of creature that you call a Vulcan, that is very intelligent and has many of the same sorts of conscious states that we do, except without the valence. So they’re never happy or sad, and never feel pleasure or pain, right? You say that you think they would still have moral status, in the sense that it would still be wrong, for instance, to kill a Vulcan in order to save an hour on your way to work. Can you just talk a little bit more about that intuition, why you have it, and why you think it’s important?

David Chalmers: Yeah, I guess it just seems like a fairly clear intuition to me. Maybe it’s worth stepping back a little bit and thinking about consciousness and moral status in general. One way that I’ve tried to motivate this in the work I’ve been doing is to think about various trolley problems, where there’s one being on one track and five beings on the other track, someone is going to die, and you’ve got some choice in the matter.

David Chalmers: An initial case is the zombie trolley problem, where you have a conscious human on one track and five non-conscious humanoid zombies on the other track. Maybe the trolley is heading towards the one conscious human, and the question is, should you divert it to kill the five non-conscious humanoid zombies, at least if you’re prepared to take the idea of a zombie seriously? Many, many people at this point have the intuition that it’s better to kill the five zombies than to kill the conscious human, precisely because the zombies are not conscious. That may be because you think that if they’re not conscious they have no moral status at all, or because you think that if they’re not conscious they have a much diminished moral status. But many people I’ve encountered have that intuition in the zombie trolley problem.

David Chalmers: Then the question is, which features of consciousness are responsible for our moral status? Some people at this point say it’s valenced states, like pleasure and suffering, and that’s all that matters for our moral status. I think Peter Singer, at least at some point, is on record as taking that view. It’s fairly common among people who think about animal ethics to take that view. I find that surprising, because my intuitions don’t go that way at all.

David Chalmers: One way to bring this out is to think of your paradigmatic extreme case of the Vulcan from Star Trek who, let’s say, has no affective states at all, but nonetheless has very rich sensory consciousness and cognitive consciousness. So they’re constantly thinking about the world, maybe thinking about mathematics and science and the world around them in various ways, without any valenced states of pleasure and suffering. They also have rich sensory consciousness. Then the question is, does that being lack moral status entirely? To me, it just seems bizarre and almost crazy to say that that being lacks moral status and it’s okay just to kill them. And we’re talking about a conscious being here, who senses, who thinks, who reflects.

David Chalmers: So if we had a Vulcan trolley problem with a conscious being and five Vulcans, no, it would not be okay to switch tracks to kill the five Vulcans. Okay, maybe the fact that the Vulcan is not experiencing pleasure or suffering is in some way morally relevant to assessing the amount of value their existence contributes to the world. But to me, the amount of pleasure and suffering they’re undergoing seems to be a relatively small consideration on top of the great value conferred by their consciousness. I don’t know if this is an intuition that only I share.

Robert Wiblin: I also share the intuition that it’s bad. But I think that intuition is coming up for different reasons. For example, it’s very hard to imagine a creature that has all of these conscious states but no affective states, so that nothing feels good or bad. I think it’s just very hard to intuitively grasp that idea. The sense that they must feel some pleasure and pain gets smuggled in.

Robert Wiblin: Then there’s also the fact that just creating a culture in which we can use violence against other creatures, seems like it would have bad consequences. Then there’s also the fact that if you just start engaging in violence against other agents that can retaliate, then that’s also a very bad idea, because they’re going to fight you, and you end up in conflicts. And the fact that they don’t have affective states isn’t going to change the fact that they’re potentially going to resist your attempts to thwart their preferences. So even if you don’t think preferences per se matter, then there can be a very strong reason to cooperate with the preferences of other agents, even agents that you think have no terminal moral value.

Robert Wiblin: And of course, there’s the fact that the Vulcans might have instrumental value to other creatures that might be capable of affective states that have terminal values. It’s all of these reasons why I’m nervous that this intuition is not super reliable.

David Chalmers: Sure. Yeah, I can see all those reasons. I’d like to think I can factor out the intuitions about instrumental value, and maybe about the practices of violence and so on. Maybe the first point potentially runs deepest here: that maybe we can’t imagine beings without affect, or it’s a lot harder than we think. And maybe, in fact, when I imagine these Vulcans, I’m tacitly imagining some affect in there. For example, I am imagining them reasoning about the world and choosing what to do. And at least on certain views of reasoning and action, maybe doing that always involves some kind of valence. Trying to do something and achieving it might somehow always involve some positive valence, and arriving at contradictions in one’s reasoning might somehow always have some negative valence. So I do take views like that seriously. But nonetheless, it does seem to me that I can coherently … I’m inclined to think I can imagine a being without significant affective consciousness. If it turns out that those kinds of affect matter a lot, even that would be a reconfiguration of, say, a pleasure-and-suffering-based view. But no, I mean prima facie, I’m inclined to think that I can imagine a being without much in the way of affective consciousness. And still, it just actually seems monstrous to me to think about killing such a being.

David Chalmers: It’s true that intuitions about their preferences may come in here. You might say, okay, well, if you also thought that having no affect means having no preferences, and that Vulcans were such that they had no preference about whether they live or die, then you might think the moral status questions are tricky. If they preferred to die, maybe it would be fine, I don’t know, to kill them. Even then, I’ve got some qualms. But even that requires a link between affect and preference, and then giving a certain weight to preference satisfaction, which might well start taking us beyond consciousness alone.

Arden Koehler: So I feel pretty unsure about this whole thing. But just to Rob’s points, it seems like all of the factors that you brought up also apply to the case, to the zombie trolley problem, right? So it’s like a bit hard to imagine zombies or beings who aren’t conscious but otherwise are completely like us and act like us. And also, it seems like it would be bad to just go around killing beings and would create this culture of violence and so on. And possibly they’re instrumentally valuable. But I do at least share the intuition that it’s much more … it gives me much more pause to think that it would be good to kill five Vulcans to save a person than to kill five zombies. Do you not have a difference there, Rob?

Robert Wiblin: Well, I agree that killing the p-zombies is bad for many of the reasons that I gave. It seems like there are two different framings of the problem, which give you different intuitions. In one case, you’re imagining them as objects or something, when you imagine them as zombies: you imagine there’s just no lights on inside whatsoever. In the other case, you’re imagining them, I think, as agents, which is causing you to treat them differently, even if they can’t feel any positive or negative states.

Arden Koehler: Is there anything that makes us want to treat Vulcans as agents more than we want to treat p-zombies as agents?

Robert Wiblin: Well, I share the intuition that when you think about the p-zombie case and the trolley problem, I’m more inclined to say “We’ll run over the five p-zombies.” And I also agree, when you talk about these Vulcans and you paint the picture of how they have this vivid internal life, even if it doesn’t involve positive and negative affect, then I’m much less inclined to be okay with harming them, inasmuch as they can be harmed. I agree that we should try to reconcile those two intuitions. Or actually, well, maybe not.

David Chalmers: Why not, if we can reconcile them by moving towards a view where consciousness matters to some degree independent of its valence? I mean, valence may matter too, and maybe it will get some weight in the equation. But I think that consciousness itself carries some moral status.

Robert Wiblin: Yeah, I guess it’s just when I think about it, then I have this intuition, this third intuition that then says that they should be the same answer. Because I always have this intuition that conscious experiences that don’t have positive or negative valence are not valuable. So then it’s like I can’t have all three of these different intuitions. So I’ve got to drop one of them. And it’s kind of a choice of which one you drop.

David Chalmers: I’m going to give up the third one, for sure.

Robert Wiblin: Yeah. I’m curious to know, how would you evaluate whether conscious states are good or bad if they don’t have any affect associated with them? It seems like then it becomes very tricky to evaluate. I mean, even the Vulcans might not have a view on whether they’re good or bad.

David Chalmers: I mean, first of all, I’m not sure about some atomistic view of adding up the value of your conscious states to get to the value of your existence. We’re going to have to, at the very least, evaluate something like total states or total streams of consciousness across time. I guess I’m inclined to think that there’s at least a certain baseline value that you get for being conscious. And maybe there’s also some different kinds of consciousness that may well carry different kinds of moral status. I mean, sensory consciousness may give you a primitive kind, but cognitive consciousness, thought, reflection may well send you into a different league. And here again, I’m just speculating. I don’t have a theory.

David Chalmers: But then within those realms, affect may make it more positive or more negative, but at least relative to that baseline. I guess my attitude is, even if you went from the Vulcan case to a being who suffers occasionally, so that on an atomistic view it would all add up to negative valence, it would still be monstrous to kill them, even though you’d be reducing the amount of suffering in the world. So then yeah, I guess I would just be inclined to think there’s some baseline value contributed by being conscious at all, and maybe contributed by having consciousness of certain kinds.

Arden Koehler: I think that that’s at least consistent with the idea that in order for a being to have moral status, it has to be capable of valenced conscious states, like happiness or suffering. But I guess maybe that would be a weird view, because then it’s like you can have moral status, but it can’t be the case that any of these other states are actually adding to the goodness of your existence, or something like that.

David Chalmers: I guess. But I think that even if a Vulcan were completely incapable of pleasure or suffering, my intuition says very strongly that they have moral status. So I’m not even sure I’d want that weak a link between moral status and valenced consciousness.

Robert Wiblin: Are there states for humans that don’t have any affect associated with them? How tightly are conscious states and affect linked, do you think?

David Chalmers: I think on the face of it, for example, something like visual consciousness can exist without affect. That’s not to say that we don’t very frequently attach positive valence to what we see or to experiencing it, and sometimes negative valence. But I’m inclined to think the baseline case in perception is neutral affect in some sense, and this happens relatively frequently. Now, maybe that’s getting perception all wrong, and there are theories of perception all about affordances and action and getting signals to reach for things, where you get positive affect when something goes right in perception and negative affect when it doesn’t. But those are, at the very least, controversial and speculative theories of perception. I’d be inclined to think the neutral case is affect-free.

David Chalmers: When it comes to cognition, thinking, reasoning, acting, it’s a bit more complicated. Because now there’s often the question of us having preferences and goals, which can be satisfied or otherwise. And you might think that when you act, you try to do something and you do it, then there’s automatically some positive affect from that. I think even that is going to depend a lot on what you think the connection is between affect, action, and reason. Likewise, when you engage in a proof and reasoning and you come up with a justified conclusion, maybe on some views there could be some positive affect attached to that. Maybe my view of affect is limited, but I’m inclined to think it’s also quite possible to reason without much in the way of affect there.

Arden Koehler: So we’ve been talking in a sort of science fictiony realm of Vulcans, at the moment. But it seems plausible to me that there could be creatures that have conscious experience, but not affective experiences. So maybe, I mean this is all totally speculative, but maybe insects have conscious perception, but they don’t feel happiness or suffering. Just to clarify, on your view, that would imply that insects have moral status. Whereas a lot of people would have the intuition that if insects only had conscious perception and they didn’t feel any kind of pain or pleasure or happiness, they wouldn’t have moral status. Does that sound right?

David Chalmers: Yeah, except I’m not sure that I’m committed to the claim that any state of consciousness confers moral status on you. I think in some ways that’s the most natural view around here, but it’s not the only possible view. You might think that there are kinds of consciousness that confer moral status on you and kinds that don’t. That said, I’m inclined towards the view that any degree of consciousness at least puts you in the realm, in the ballpark, of having moral status. Therefore, I’d be inclined to think that if it were the case that ants were conscious but didn’t have affective states, then we’d at least want to start thinking about giving them some weight in our moral calculations, if only a very small one.

David Chalmers: Now, as a matter of fact, I think it’s plausible that if ants are conscious, they also have states of pleasure and suffering. Pain seems to be very primitive, and there’s lots of evidence of pain behavior in insects, such that if they are conscious at all, I think it’s very plausible that they have states like pain; it’s at least as plausible as that they have states like visual experience. So in that specific case, it may well be that the worry doesn’t come up. But if we imagine some creature we discover with consciousness but not suffering, I guess I’d be inclined to think that, yeah, we should take their moral status seriously, even if we give it relatively little weight.

Robert Wiblin: I feel like imagining these other beings as the Vulcans potentially supports my debunking explanation for the intuition that killing Vulcans is bad, or my alternative explanation for why we feel that way.

Robert Wiblin: So if you imagine a computer that we create that has perceptive consciousness, so it has a camera and it can see things and hear things, but it doesn’t feel anything positive or negative. It perceives things, but has no affective states, and maybe doesn’t do anything, so it’s just a passive computer that sits there perceiving things. It seems like in that case, I lose the intuition that it’s bad to turn it off. And it seems like maybe that’s because it no longer has this agent quality where I feel like I need to interact with it and cooperate with it in the way that I do with beings that have preferences and do things.

David Chalmers: Yeah. What I’m picturing here is a computer that has mere sensory consciousness. Maybe it has states of visual experience, but no cognition, no action, no affect. I mean, I guess I share your intuition to some extent there, at least that the moral status would be relatively minimal. That, I think, goes along with the idea that sensory consciousness may bring along only a relatively minimal moral status, and that things like cognition, the consciousness associated with thinking, reasoning, reflection, action, as well as affect, may well convey much more serious moral status.

David Chalmers: I guess I’d be inclined to accommodate this intuition that way, and also be a little bit inclined to … Yeah, I’m not sure I’d want to go all the way to saying no moral status. There are interesting questions about whether it’s okay to turn such a system off, or to destroy such a system. I don’t have clear intuitions about this. I guess my intuition is, it would probably be fine to turn it off, as long as you could always turn it back on again. Turn it off once and for all? Yeah, around here the intuitions are very messy.

Robert Wiblin: Okay, so we’ll add something else here, David. The machine does reasoning. It reasons about what people will do, and makes predictions about what actions people are going to take or what they’re going to say, but feels nothing about it. So it’s like it’s doing analysis now. But I still just feel it’s like, I mean, we have AlphaGo, it tries to figure out the best Go move and tries to predict its opponent’s move. And that doesn’t seem to me in itself like it would give something like a being moral status.

David Chalmers: Well, I guess I’d need to know more about its actual conscious states at this point. Saying that it’s doing reasoning and so on, yeah, we might be imagining all that going on completely unconsciously, which is what we typically do when we imagine beings like AlphaGo. But if it actually turns out that more sophisticated versions of AlphaGo are actually undergoing complex cognitive conscious experiences of thinking and reflecting and deciding and acting, then my intuition that they don’t matter morally starts to weaken a lot.

David Chalmers: It may well be that having, for example, the sense of having projects and reflecting on one’s own existence and so on, all these things also matter a lot to morality. So it may be that you’re imagining the being lacking cognitive states like that. But again, I certainly don’t have the sense that it’s only affect that you could add that would suddenly put it in the realm of serious moral status.

Robert Wiblin: I guess I’m skeptical of this whole methodology, I guess in philosophy, of just probing intuitions like this. Because I feel like they are often just so polluted by other considerations, and your intuitions are all of these hybrids and then it’s actually very difficult to tease apart.

David Chalmers: I do agree. I wouldn’t want anyone to take much I’ve said here too seriously as a positive philosophical view. But I would say, on the other hand, that I think many people in this domain are strongly committed to the opposite view, to the very specific view that only affective consciousness conveys moral status. I think they should be very skeptical of that view, which is itself based on intuitions which I think are highly arguable. So if someone comes out of this agnostic, then good. But I think we ought to be open to a much broader range of views here.

Arden Koehler: One thing that feels like it’s influencing the way that I’m thinking about this is that even if I thought that some being had moral status that didn’t have any affective states, I wouldn’t really know how to respect that moral status. I wouldn’t know how to avoid harming it. It’s just not clear what implications it would have for ethics, even if I was convinced that it had moral status.

David Chalmers: Yeah, that’s fair. Sometimes we can ask it.

Arden Koehler: Right.

David Chalmers: So with other beings that have language and certain kinds of sophisticated behavior, it’s then going to turn on things like preferences, whose connection to affect is unclear. But yeah, no, it’s a good question. I don’t have anything like a positive moral theory of how one ought to act towards such beings. I just have the intuition that we’re morally required to think about it and somehow respect their consciousness. But exactly what that consists of is a great question.

Illusionism and moral status [2:56:12]

Arden Koehler: Okay, great. I want to move on to talking about some ethical implications of particular views of consciousness. We’ve been talking about a lot of ethical implications that might come up on many different views, but it seems like just which view we put the most credence in might have big ethical implications. This seems especially true with the two, you might call them most extreme views of consciousness, panpsychism, the view that everything is conscious or everything has consciousness in some sense, and illusionism, the view that nothing does.

Arden Koehler: On the face of it, it seems like if illusionism is true, that means we wouldn’t have any reason to reduce suffering, because suffering itself is an illusion as a conscious state. Maybe some people would say, “Well, suffering really isn’t a conscious state if illusionism is true.” Or maybe some people would say, “Yep, that’s it. Yeah, it turns out we have no reason to reduce suffering if illusionism is true.” Do you have a view on this, what the implications of a view like illusionism would be?

David Chalmers: Oh, it’s a great question. I do think it’s pretty clear that something has to give. I mean, you better not hold, number one, that consciousness is required for moral status; two, that consciousness is entirely an illusion; and, three, that some beings have moral status. That’s an inconsistent triad, as we say in philosophy. You can’t have all three of those. I’m myself inclined to reject illusionism. But let’s make all this conditional on illusionism. Then where should you go?

David Chalmers: An extreme view would then be to say, “Okay, so nothing does have moral status. Suffering, pleasure, all conscious states are an illusion. We think we have moral status, but only because we think we’re conscious, so that moral status is an illusion too.” People sometimes read people in the Buddhist tradition as saying things at least in this vicinity: that the self is an illusion, suffering is an illusion, we ought to reconfigure our whole morality in light of that. I’m not an expert, though, on that tradition.

David Chalmers: I guess I think the view that some beings have moral status is very hard to give up on. Maybe one will end up going in more of an anti-realist direction, depending on whether you thought, for example, that consciousness mattered because it was somehow the best candidate for an objective source of value. I think it’s very hard to give up on the thought that some beings have moral status and others don’t, because life basically ends up collapsing. But I’d be inclined to think the illusionist would then reconfigure their view that consciousness is required for moral status. They can say that’s false: insofar as we have the intuition that consciousness is required for moral status, we should say, “Well, that’s another false intuition that comes along with the illusion of consciousness.”

Arden Koehler: Yeah, I guess that does seem like the way they might want to go. They’d say, “Well, the thing that you call suffering is still bad. I agree, as an illusionist, that’s bad. It’s just that I don’t think it’s conscious.”

Robert Wiblin: Do they then reconceive it as having preferences thwarted or something like that? Maybe preferences don’t require consciousness, per se?

David Chalmers: Yeah, there are a few different ways you could go here. One is to say that what actually matters is the illusion of consciousness or the illusion of suffering. If you have the illusion of consciousness or the illusion of suffering, that gives you the moral status that we think goes along with consciousness or suffering. I find that view rather odd.

Robert Wiblin: As you can tell from my laughter, that does sound quite funny.

Arden Koehler: Well, it could just be correlated with being the kind of being that has-

David Chalmers: Yeah, it could be. You could say that there’s… we do believe there’s such a thing as unconscious suffering, and there appear to be valences in unconscious processing that people roughly understand by their effects on action. Well, you can say that those things carry moral status, and I think that’s a very natural thing for an illusionist to do. Of course, the question is going to be why those things, which have those effects on action, should carry a non-derivative moral status. It’s not entirely clear. And the third way to go would be to some kind of preference satisfaction view, where what matters is the satisfaction of your preferences, and when you’re suffering, you’re in states that you don’t want to be in, or something like that.

David Chalmers: Maybe that’s the most natural way to go, even if there are questions about the foundations of preference satisfaction. That’s probably the simplest view to take here. The philosopher François Kammerer has been writing about this; he’s an illusionist, but he’s strongly inclined towards the intuition that consciousness grounds value, and he’s been thinking about what our options should be in response to that. I do think, insofar as it’s true that, for example, many people in the effective altruism community are inclined towards some kind of illusionism about consciousness, this is a question you have to think very seriously about.

David Chalmers: I remember Luke Muehlhauser said he had precisely this combination that consciousness matters for moral status and consciousness is an illusion and some things have moral status, so yeah, something has to give.

Robert Wiblin: There’s just something very funny to me about the combination of views that consciousness is what would create morality but consciousness doesn’t exist. It’s like we’re going to privilege or try to place value on this concept that you’re going to argue is incoherent and doesn’t exist, and presumably couldn’t exist, and say, “But this thing that doesn’t make any sense, that is what morality is.” It’s like you’re just saying morality also doesn’t make sense, I would think.

David Chalmers: You could use this to motivate an error theory about morality: the view that morality is a giant fiction and nothing has it. If there were a special property of consciousness, then there would be morality, but there’s not, so maybe at least there’s no ‘capital M’ morality. Maybe what we’re left with is ‘small m’ morality. ‘Capital M’ morality would have been somehow objectively grounded. ‘Small m’ morality is not; it’s merely grounded, say, in our preferences, but that’s the best we have. I’m not unsympathetic with that; I think a view like that could probably be made coherent.

Robert Wiblin: Do you know any illusionists who have gone full error theory or full nihilist? Just been like, “If you tortured me it wouldn’t be bad, it would just be a confusion on my part to think that that was bad”?

David Chalmers: I don’t think I’ve encountered anyone with that view, but actually there are very few people who are on the record embracing strong illusionism: the view that there’s no such thing as conscious experience. Most illusionists, for one reason or another, end up backing off making that claim because it seems so crazy. I think there are probably only two or three or four of them. The only one I know who’s addressed the issues about morality is François Kammerer, who’s tried to reconfigure morality in light of it.

Robert Wiblin: Oh, this sounds juicy. Why do they back out at the last minute?

David Chalmers: Well, I think it’s because it sounds like a crazy view. Basically, it seems like you’re denying the manifest. Dan Dennett, back in the 1980s, used to write articles with titles like ‘On the Absence of Phenomenology’ and ‘Quining Qualia’, arguing that there’s actually no phenomenology at all, that there’s actually nothing it’s like to be us, and people looked at him funny. They gave him what David Lewis called the incredulous stare, and I think he just stuck to his guns and said, “Nope, we think we have these states and we don’t have these states.” But he ended up backing off and softening and saying, “Of course we’re conscious. We just don’t have those special high-grade theoretical phenomenal properties that philosophers go on about.”

David Chalmers: It turns out the view was more palatable that way. I end up calling this weak illusionism. We believe our consciousness has some properties that it doesn’t have. I just think that view is in the end too wishy-washy to really help solve the problem of consciousness. But I do think it’s sociologically much more acceptable, and if you say, actually no one is conscious and no one is really feeling pain, then people just look at you funny. It’s not a view that gets a lot of purchase in the marketplace of ideas. Although I’m very glad that illusionism has just in the last couple of years been taken much more seriously than it had been taken before. I do hope this leads to this strong version of illusionism actually getting a fair run for its money including people coming out and saying the crazy sounding thing and saying, “Here’s how we ought to revise our theory of the world in response.”

Robert Wiblin: Yeah, it’s this classic tricky situation where illusionism seems crazy to me, and then lots of the smartest people I know buy into it in one way or another. It’s like, some of my best friends are illusionists, and then I’m like, “How do I reconcile this?”

David Chalmers: Okay, well, tell them they’ve got to actually… they owe it to the world to actually develop their views and publish them and put them on the web. I haven’t seen these views developed in a sophisticated way, but if there are effective altruists who are illusionists, they really owe it to the world to figure out where our theories of value stand in light of it. Otherwise this whole movement could be based on a whole bunch of very dodgy intuitions about suffering and pleasure and so on that are based on an illusion.

Robert Wiblin: Yeah, I think, to be honest, the problem is with me: I haven’t taken the time to understand. At least, that’s what I’ve suspected, that I just don’t have either the intelligence or the expertise or the time to really fully grasp what they’re saying. But maybe you disagree.

David Chalmers: I think this is important enough that if they have this view and they think it’s obvious they ought to write it down, to publish it, and let us evaluate it because this is at the foundation of, as far as I can tell, everything that matters in morality.

Arden Koehler: It seems like this is also an area where there’s a lot of merely verbal disagreement about things, so people using the word consciousness differently, but also just talking past each other about what they’re saying doesn’t exist. I wouldn’t be surprised if some of the illusionists you know do believe in the thing you believe in, and it’s just that you’re using different words to talk about it.

David Chalmers: When you put it in terms of say feeling pain, it’s like, “Yeah, are they prepared to say no one ever feels pain?” Well, that’s a hard thing to say.

Arden Koehler: It seems like illusionists would say, “Yeah, of course, feeling pain, yeah. People feel pain but that’s just a neurobiological state that carries with it no phenomenology or whatever.”

David Chalmers: I think the moment you’ve gone to feeling pain, the experience of feeling pain, that’s what I’m talking about. You can say people undergo pain, where that doesn’t carry any implications of conscious experience and just refers to states that are brought on by damage and lead to a variety of responses. They can say, “Ah sure, people undergo pain,” but I say, “No, I’m just talking about the feeling, the feeling of pain,” the thing whose existence seems introspectively obvious. I think that’s what they should deny, and they should be very clear that they’re denying it. But I think very few people are really willing to be very clear about that.

Robert Wiblin: You could understand their position in practice as maybe that they think illusionism seems likely, but they’re not nearly confident enough in it to just dismiss the suffering or pain of others. That might be one way of reconciling their actual behavior with their stated analytical views. Yeah, to bring it back to what Arden was asking: okay, we talked about whether, if illusionism is true, we then lose morality. Maybe, maybe not.

Panpsychism and moral status [3:06:19]

Robert Wiblin: If panpsychism is true, then it seems like now our responsibility… We’ve got a lot of research to do to figure out how to act on that, and figure out how we can benefit or harm all of these other things that might have some degree of consciousness. Do you have any views on the practicalities of that?

David Chalmers: Yeah, not well-developed ones. You can certainly imagine ways it could go that would have extreme consequences. Say we turned out to be such robust panpsychists that we thought every elementary particle has a conscious life like ours, with sensory and cognitive and affective consciousness as rich as a human’s. Then, boy, it looks like we’d be morally required to reconfigure our practices greatly. I don’t know exactly what we could do. The question would turn on how you can affect a particle’s consciousness. Who’s to say, maybe it would be almost impossible in practice, but I think from some moral standpoint it would be required.

David Chalmers: Now, the versions of panpsychism that people currently entertain are not as robust as that. Nobody thinks that particles are thinking and reflecting and so on. Mostly, people think they have very simple states of consciousness, which may be analogous to very simple states of perceptual consciousness. But it is possible that they may be, for example, analogous to states of affective consciousness. I’ve heard people entertain the idea that, for example, things like forces of attraction and forces of repulsion might correspond to something like valence at the level of consciousness. So say it turns out that particles have some very simple valence in their consciousness. Should this be taken into account morally?

David Chalmers: Around here is where I get the intuitions… I think one can go in different directions, but around here is where I strongly get the intuitions that different types of consciousness may matter morally to a very different extent. This very simple type of consciousness may only have the most minimal moral weight. The kind we have with cognition and reflection may carry an enormously different degree of moral weight. Maybe it carries even infinitely more moral weight, so it could turn out that, yeah, even worrying about any finite number of particles will never add up to worrying about the morality of a single human being. I don’t have any settled views here, but I guess I think there are in principle versions of panpsychism that would very clearly press a worry on us. The kinds of panpsychism that people are now inclined to believe in could but maybe don’t.

David Chalmers: Around here maybe there’s also some moral conservatism that plays a role, thinking somehow, “Well, it’s just kind of baseline that humans matter and particles don’t matter, and let’s rejig our moral system to take that into account.” Whether that’s actually an okay methodology here is itself a very deep question. Some philosophers, I think, are very moved to somehow respect common sense about morality here, but insofar as we’re bringing in views about the mind that don’t respect common sense, what should we revise?

Arden Koehler: It seems worth noting that you could have a view where it’s not just consciousness but some feature of consciousness: maybe it’s affective states, maybe it’s the ability to reason, maybe it’s having your preferences satisfied. As long as it’s that feature, going along with consciousness, that grounds moral status, then you wouldn’t have to worry about this implication, because presumably particles don’t have preferences or any of these other more sophisticated things. Although, I guess you said you thought they could have affective states.

David Chalmers: Yeah. I don’t want to put forward any positive theory of particle phenomenology. But insofar as their states are in any way analogous to ours, something like affect may be one of the more plausible candidates among the kinds of consciousness we have to be present. You might think there are valences and there are potential valences; particularly, there are valences associated with action, and maybe you could find those valences at the bottom level. But you’re right that even if they did have affect, or a primitive version of affect, this wouldn’t necessarily involve preferences, where we think of preferences as something more cognitive, involving something at the level of thinking. Maybe there could be primitive goals. Particles could be said to pursue primitive goals.

David Chalmers: But around this point, I think putting things in terms of preferences may well be one way to exclude particles. Even if I did go that way, I’d still be inclined to put things in terms of conscious preferences and conscious goals somehow carrying the most moral weight, so that a zombie who had some non-conscious analog of preferences wouldn’t get the same moral status. We’d still be saying that consciousness matters, but this would be a version of the view where it’s kinds of consciousness that matter, and consciousness at the level of judgments and preferences carries at least enormously more weight than consciousness at the level of mere sensations and primitive affects.

Robert Wiblin: I feel like one way to make panpsychism sound less crazy is to say that atoms are something that has consciousness, but that doesn’t include affect or valence or preferences or anything like that. The way to benefit the consciousness of an atom is to turn it into a person that lives a good life, and the way to harm it is to turn it into a person or some other agent that has a bad life. Am I going too far trying to save the view?

Arden Koehler: I don’t like the second part, the first part makes sense to me. But the second part seems like you’d be double counting the… it’s like, “Oh, it’s good for the atom and it’s good for me, for me to have a good day.”

Robert Wiblin: But it’s one and the same thing, because then you’re changing the form of the consciousness into what a person’s consciousness is, right?

David Chalmers: You’ve got some deep questions about personal identity here, like possibly atomic identity here. Making an atom into a person, is it still the same being or have you just destroyed the atom and created an entirely new being?

Arden Koehler: This is kind of a methodological question, but we’re entertaining some pretty out there ethical implications. I’m wondering how you feel about reasoning from ethical premises to conclusions about the mind? If I say, “Well, okay, I have these reasons for thinking that a certain version of panpsychism is true”, that implies that I have to do all of these things to think about how I can benefit everything in the universe because it turns out everything is conscious. That’s absurd, so that version of panpsychism can’t be true even if there are some arguments that support it from the philosophy of mind. Is that good reasoning in your book or would you prefer to go the other direction?

David Chalmers: I’m inclined to be suspicious of reasoning like that, and my natural instincts are to go in the other direction. But I don’t want to completely dismiss it out of hand. I mentioned before a certain moral conservatism. If you have a view that has the consequence that nothing has any moral status, then I’m inclined to think you’ve probably done something wrong. Maybe what you should’ve done instead was to reconfigure your view of moral status, or somehow make moral weight more subjective. But if you had that view, then you’ve probably done something wrong. So probably there can be moral reductios ad absurdum. Whether panpsychism would be one, I’m not sure. I guess my first inclination would always be to say, “Well, it looks like panpsychism has these vast moral consequences.”

David Chalmers: Well, one part of me, the radical anti-common-sense philosopher, wants to say, “Yeah, maybe we do have these massive obligations that we can never live up to.” There’s a part of me that thinks this is just our moral situation in general: we have enormous moral obligations that there’s just no chance we’ll ever be able to live up to. Maybe what goes on with panpsychism is just somehow an extension of the situation one finds with nonhuman animals in general, or even intelligence throughout the world and throughout the universe. It wouldn’t entirely surprise me if that turned out to be the resolution here, but of course the other resolution is to look at reconfiguring the connection between the relevant empirical facts and the moral facts.

David Chalmers: Yeah, you thought that any consciousness conveys moral status that places demands on our action, and now we’re going to say, “Okay, only certain types of consciousness can carry the kind of moral status that places demands on our action.” I’m a little bit more inclined to make that move, but maybe that does rest on a certain moral conservatism that I’m not certain is correct. If you’re inclined to be a moral anti-realist and subjectivist, maybe there’s a bit more room for that kind of move here, if it’s ultimately a matter of our attitudes.

Robert Wiblin: This reminds me almost exactly of a conversation I had where I pointed out to someone that there are actually more neurons in all the ants in the world than there are in all the humans. They felt confident that they knew the conclusion: that all the humans in the world matter more than all the ants in the world. So they were going to use that as a premise to reason that either ants aren’t conscious, or maybe the empirical fact about how many neurons there are is wrong. But there’s no way you could know whether all the humans matter more than all the ants before you know these other things, because that’s downstream of them. Reasoning from something that causally or factually comes after these previous things back to the premises just seems very questionable.

David Chalmers: In this case, it sounds like there’s a rather flat-footed premise about the connection between number of neurons and moral status: that moral status is proportional to the number of neurons. Okay, in this case, that seems pretty clearly like the one to reject.

Robert Wiblin: That’s one thing you could reject, but then it’d be like, “Well, I have to reject that because I know the humans have to matter more,” and I’m like, “No, that’s bad reasoning. You shouldn’t be doing that.” Maybe it is true that the number of neurons is not at all correlated with consciousness or valence or whatever, but you should think that for a different reason.

David Chalmers: Yeah, I do think there are things we could discover about ants that would make it the case that somehow they mattered more. Like I say, if it turns out every ant has an ego and is a sophisticated thinking, reasoning being, in the same way that… what is it, the mice were supposed to be in The Hitchhiker’s Guide to the Galaxy?

Robert Wiblin: Yeah.

David Chalmers: Well, if that’s the case, then yeah, they matter more. I don’t think it can be totally out of bounds to say that ants matter more. But, of course, the less consciousness we ascribe to them, the more room there is for debating precisely what kinds of consciousness or which features are relevant to moral status.

Mind uploading [3:15:58]

Arden Koehler: Let’s return to virtual reality for a minute. If we make virtual worlds, which seems at least possible, then we might want to live in them someday by uploading ourselves into them. Some people have the concern that if you uploaded your mind, somehow the resulting mind, even if it could think and everything, wouldn’t be conscious. What do you say to that worry?

David Chalmers: I’m inclined to think an uploaded mind at least can be conscious. There are really two issues that come up when you think about uploading your mind into a computer. One is, will the result of this be conscious? And the second is, will it be me? Maybe it’ll be conscious but it won’t be me; it’ll be like creating a twin of me. I think in a way the second is the more worrying prospect, but let’s just stay with the first one for now: will it be conscious? Some people think that no silicon-based computational system could be conscious because biology is required. I’m inclined to reject views like that; there’s nothing special about the biology here. One way to think about that is to think about cases of gradual uploading, where you replace your neurons one at a time by silicon chips that play the same role.

David Chalmers: I think cases like this make it particularly hard to say that the system at the other end is not conscious, because then you have to say that consciousness either gradually fades out during this process or suddenly disappears during it. I think it’s at least difficult to maintain either of those lines. You could take the line that maybe silicon will never even be able to simulate biological neurons very well, even in terms of their effects; maybe there are some special dynamic properties that biology has that silicon could never have. I think that would be very surprising, because it looks like all the laws of physics we know about right now are computational. Roger Penrose has entertained the idea that that’s false.

David Chalmers: But if we assume that physics is computational, that one can in principle simulate the action of a physical system, then one ought to at least be able to create one of these gradual uploading processes and then someone who denies that the system on the other end could be conscious is going to have to say either it fades out in a really weird way during this process. You go through half consciousness, quarter consciousness, while your behavior stays the same, or that it suddenly disappears at some point. You replace the magic neuron and it disappears. Those are arguments I gave, well, years ago now for why I think a silicon duplicated device can be conscious in principle. Once you do that, then it looks like uploading is okay, at least where the consciousness issue is concerned.

Arden Koehler: I think I see why the sudden disappearance of consciousness in that scenario seems implausible; it’s like, “Well, what’s so special about that magic neuron?” But I don’t immediately see why the gradual fade-out of consciousness isn’t a reasonable possibility to entertain there.

David Chalmers: How are you thinking the gradual fade out would go? First, we’d lose visual consciousness, then we’d lose auditory consciousness or?

Arden Koehler: I don’t know obviously exactly how it would go, but if we assume that consciousness can come in degrees, then why can’t it disappear in degrees?

David Chalmers: Yeah, I guess I’m thinking of putting some crude measure on a state of consciousness, like the number of bits involved in your state of consciousness. One way of imagining it fading is somehow lowering in intensity, and then suddenly the intensity goes to zero and it all disappears. That to me sounds like a version of sudden disappearance, because the bits still go from being a million bits to zero bits all at once. Strange in the way that sudden disappearance is strange. For it to look more continuous, somehow the number of bits in your consciousness has to gradually decrease: you go from a million bits to 100,000 to 10,000 to whatever. And how would this work? Maybe my visual field would gradually lose distinctions, would gradually become more coarse-grained; maybe bits of it would disappear. Maybe one modality would go and then another modality.

David Chalmers: But, anyway, you’re going to have these weird intermediate states where you say the system is conscious, is saying it is fully conscious of all these things because its behavior is the same, I am visually conscious in full detail, I’m auditorily conscious. In fact, their consciousness state is going to be a very, very pale reflection of the conscious state they’re talking about with very few, say bits, of consciousness. That situation is the one that strikes me as especially strange. A conscious being that appears to be fully rational and believes it’s in this fully conscious state, in fact, it’s in a very, very limited conscious state. If you’re an illusionist, you might think this kind of thing happens to you all the time.

Robert Wiblin: This might be totally wrong, but it seems like you could have a view where the brain… so there’s information processing going on in the transmission between the neurons, and that’s what’s generating the behavior. But then there’s some other secret sauce happening in the brain that we don’t understand and that we would not then replicate on the silicon chips. As you go replacing each neuron and each synapse with the machine version of it, the information processing continues as before and the behavior remains the same. But you’ve lost the part that was generating the consciousness, you haven’t engineered that into the computer components, and so the consciousness just gradually disappears.

David Chalmers: I can imagine this is at least conceivable, but then what are you going to say about the intermediate cases? There will have to be cases where the being is conscious and just massively wrong about its own consciousness. It says it’s having experiences of red and green and orange right now, when in fact it’s having a uniform gray visual field, or something like that.

Robert Wiblin: It seems possible, right?

Arden Koehler: I guess I also don’t find it as implausible as you seem to, Dave, that we could be wrong about our conscious experience or how much conscious experience we’re having in this gradual uploading example.

David Chalmers: That’s fair, and there certainly are many cases where people are very wrong about their own conscious experiences. Certainly, there are all kinds of pathologies, like blindness denial, where people say they’re having all kinds of visual experiences when it appears that they’re blind and not having them. Maybe it could be like this, though it’s strange, because functionally the system doesn’t seem to have any pathologies. Anyway, I do allow that this is conceivable, and I certainly can’t prove that it couldn’t happen. The more open you are to beings being very, very wrong about their consciousness, maybe the more open you’ll be to this case. Here’s one thing I’ll say at the very least: if this actually happens and we go through it and our behavior is the same throughout, then we’ll have beings whose heads are first a quarter silicon, then a half silicon, and so on.

David Chalmers: They say, “Everything is fine, everything is fine, my conscious experience is exactly the way it was.” They’re telling us this, they’re talking to us. They update every week with a bit more silicon and we keep talking to them. We are going to be completely convinced, or very nearly completely convinced, that they are conscious throughout. It’s going to become impossible to deny it. So at least as a matter of sociology, I think this view is likely to become the obvious-seeming view.

Personal identity [3:22:51]

Robert Wiblin: I think it’s very likely that if we managed to replicate most of the information processing that’s going on in the brain, then it would be conscious. But then the other objection that people make is, “Would it be me?” I have the very strong view that it would be me, at least if the person was similar enough. But do you want to comment on this issue of the continuity of personal identity?

David Chalmers: Well, I find personal identity very confusing; I don’t have settled views about the matter. But I guess I’m inclined to think, again, that the safest case is gradual uploading. You gradually replace my neurons by silicon chips, say, 1% of the brain per week. I go in every Saturday and get another 1% changed over. Then I’m inclined to think that at every step I’m going to be the same person, and by the end of it I’ll still be the same person. Somehow I’m there, my consciousness is there throughout. Now the hard case, by comparison, the hardest case, is the case where you create a duplicate of me and the original is still around. Then many of us are very, very tempted to say, “Well, the original is you and the new one is a mere copy of you.” If you don’t want to say that, then you’ve got to say complicated things like, “They’re both me, but then they’re the same as each other,” or, “They’re both like descendants of me.”

Robert Wiblin: They’re both you!

David Chalmers: Are they each other?

Arden Koehler: I mean they are in different spatial locations.

Robert Wiblin: I’m going to get myself into hot water here with two people who know much more about this philosophical question than I do.

David Chalmers: On your view, you say that okay, there’s the original one, that’s Dave. He’s originally biological, and he’s got this one descendant who is bio Dave and this other descendant who is digi Dave. You say Dave is the same person as bio Dave, the one that survives biologically, and Dave is also the same person as digi Dave, the one who survives digitally. Now, from A equals B and A equals C, it follows that B equals C. So you have to say that bio Dave and digi Dave are also the same person, but it looks like these are two totally different people. They’re now existing, they’re both going in different directions, they’re in different places. They may have entirely different mental lives after a while. Do you want to say they’re the same person?

Robert Wiblin: I’m considering the question pretty differently, as a question about my preferences, or our preferences: which agents do we regard as ourselves, and care about as much as our future selves? I’m like, “Well, in the case where you duplicate me, then I care equally about both of the descendant creatures.” I agree there’s some sense in which two different atoms are two different atoms even if they have all the same properties, or something like that; I’m not super up on mereology or the philosophy of identity. But it seems to me like I just don’t care about the philosophy there from a ‘what would I actually do’ point of view. The more I thought about what properties make me regard something as myself, and care about it in the way that I do my future self, the more it seemed to be just a similarity relationship.

David Chalmers: Around here there’s a question: is it what we do care about, or what we should care about? If it’s a matter of what we do care about, then I think, “Well, we’re going to have to do some experimental philosophy and some sociology.” It may well be, as a matter of fact, that people care a whole lot more about their descendants in their biological bodies than in their digital bodies. Then things are not going to go your way. Maybe you can say that’s wrong and it’s a matter of what you should care about, but then what grounds these facts about what you should care about?

Robert Wiblin: I’m just going to say, well, it doesn’t matter whether it’s you or not; you should just regard all beings, whether they are you or not, as having equal importance, or it certainly has nothing to do with–

David Chalmers: Well, now we’re just getting rid of self interest entirely. Sure, from the point of view of morality, everyone gets equal weight. But I think–

Robert Wiblin: They should–

David Chalmers: There is such a thing as self-interest, and such a thing as self-interested rational action. I take it those are the ones that seem to be tied to personal identity, and there’s a sort of self-concern. Do you reject all such facts? Any facts about self-interest and what you should do?

Robert Wiblin: Well, I just think they’re part of morality: once you start saying ‘should’, then there’s only one should, and it’s about moral facts. Well, we could talk about this other domain of, if you thought about it more and reflected on it more, then what would you want?

David Chalmers: Yeah, okay, great. But now all facts about personal identity are going down the drain, because I shouldn’t care about my future self any more than I should care about yours or Arden’s or anybody else’s. Now we’ve just lost the whole domain of personal identity.

Robert Wiblin: Well, I’m inclined to do that.

David Chalmers: Oh, I see, okay, so we should all upload because it’s the moral thing to do and self-interest doesn’t enter into it. Yeah, good luck selling that one to the rest of the population.

Robert Wiblin: Look, I’m not going into marketing, right?

Arden Koehler: You could think that, even from a moral perspective, personal identity matters if you think that it’s better to continue existing yourself than to create a new being in your place, in which case maybe you shouldn’t upload and kill the biological being, whereas otherwise it’s fine.

David Chalmers: This is a certain form of conservatism, right? It’s like, “Yeah, there’s a certain value to beings that have history and so on, or maybe termination of lives is bad even when qualitatively identical ones come into existence.”

Arden Koehler: Now we’re getting into population ethics which I think is maybe too big of a can of worms.

Robert Wiblin: Yeah. Do you worry that you’re just like a different person at every instant?

David Chalmers: I think I could be, yeah. I take that view seriously–

Robert Wiblin: And then I’m like, “Who cares?”

David Chalmers: Well, that’s a reasonable view, and Derek Parfit, who wrote the classic treatment of personal identity issues in his book ‘Reasons and Persons’, ends up with a view which is not that far away from that. He says we imagine there are these deep further facts of continuity over time that make us the same person over time, as if there were a Cartesian ego running through our lives. But there’s no such thing; even our moment-to-moment survival doesn’t involve that. It just involves some causal and psychological relations. He says this might be viewed as a twist on the old Buddhist view, where at every moment a new self comes into existence, insofar as there are selves at all.

David Chalmers: Yeah, as with many domains of philosophy, you may well end up with a view where capital-P Personal Identity, or capital-I Identity, doesn’t actually exist, and there’s merely lowercase-i identity that does go on through time. All we have is some deflationary thing, and once we have that deflationary thing, maybe it starts to seem more likely that we could stand in that relation to our uploaded descendants, and maybe even to more than one being. I guess that’s actually the view I’m inclined to have, but I still find this very confusing and I can’t reconcile all of my reflective intuitions on the matter.

Virtual reality and the experience machine [3:28:56]

Arden Koehler: Yeah, so you’ve talked a bit, Dave, about how some people have the intuition that living in a virtual reality, assuming we’re conscious and there aren’t any issues of identity screwing us up here, would still be in some sense less valuable than living in the “real world” or the “original world” or something like that. My sense, because of your virtual realism, is that you’re not going to be super sympathetic to that idea, but is there anything that feels persuasive to you? Are there any reasons you’re sympathetic to for thinking that life in a virtual reality would somehow be a less good life?

David Chalmers: Yeah, so this resonates with the issues about whether a virtual reality is real or an illusion and so on. Insofar as you’re inclined to think that any virtual reality somehow very deeply involves illusions, you might think there’s going to be something worse about being in virtual reality. But if you’ve got a view like mine, where virtual realities involve entities that are real and need not involve illusions, then at least that reason for worrying about VR is gone. At this point, we needn’t be thinking about anything so far out as uploading our consciousness into VR. We can just think about perfectly ordinary beings, still with their biological brains, putting on VR headsets and entering into VR for extended periods of time. Some people have the intuition that this is always going to be, at best, escapism or something; not a way to live a real life.

David Chalmers: Maybe the classic version of this was put forward by Robert Nozick in his thought experiment of the experience machine, in his book ‘Anarchy, State, and Utopia’, where he says, “There are super-duper neuroscientists who offer you the chance of plugging into an experience machine that’s going to give you amazing experiences for the rest of your life, but you can never come out. You know it’s all going to be amazing and wonderful and full of pleasure and satisfaction and fulfillment. Would you do it?” Nozick says no; he said he wouldn’t do it. Some people have taken this to be an argument against entering into VR at all: that it somehow just doesn’t have the value of a normal reality. I think people’s intuitions on the experience machine actually differ, but I feel there is a fairly solid intuition against getting into the experience machine. But I do think there may be some differences.

David Chalmers: When I think about Nozick’s case, the biggest reason for worrying about getting into the experience machine is that it’s pre-programmed somehow. Everything that happens there is scripted. You’re living out a script; you don’t get any autonomous role in determining what happens to you. It’s really a passive experience, even though it might feel active. Whereas VR is not like that. In VR you enter a virtual world that still involves real people interacting with each other, not in any sense living out a script, unless it’s a very scripted video game. If you go into something like Second Life, you still get to make choices and decide your actions, at least to the extent that we do in the normal world. So those issues about autonomy, which I think are the most serious issues for Nozick’s experience machine, don’t arise for VR.

Arden Koehler: Just on the differences between the experience machine and VR: the thing that always disturbed me most about the experience machine was, well, two things. One, you don’t know you’re in it, in the canonical version or the way that people usually imagine it, and that feels bad because you’re massively fooled or something. Whereas in VR, you could presumably know that you were in it. Also, I think there’s nobody else really there with you in the experience machine, usually on those versions, and that seems really bad.

Robert Wiblin: I feel like that’s the killer: the sense of betraying everyone else who you know in normal life and just disappearing to go off and play on your computer.

Arden Koehler: I was just thinking you don’t have any true, genuine relationships, and that’s what was bad.

Robert Wiblin: I’d say because there are no other actual people in it.

Arden Koehler: Yeah.

Robert Wiblin: It seems like if everyone gets into the experience machine, and then they’re all living this great life inside the experience machine, it seems quite a bit more appealing.

David Chalmers: Yeah, all your friends and family go in. In one version, say, you go in and your mother’s not there; there’s a simulation of your mother, but it’s like, “Hey, no, well, damn, I wanted it to be my mother. This is just a mere simulation of my mother.” Maybe it’s a conscious simulation of your mother, but even so.

Arden Koehler: Right. Well, I suppose if it’s really her, because the personal identity questions go the right way, then there’s no difference.

Robert Wiblin: What if it’s a simulated replication of exactly what your mother is doing at the time in a different world? Is that good enough?

David Chalmers: I think Nozick would want to say, “I really want interactions with my mother and not with a mere simulation of her.” If it’s appropriately grounded in my mother, by actually observing her and projecting her here, then maybe in that case it’s not even a simulation anymore; you’re basically, in some sense, interacting with your mother, and that would be okay. If it’s merely set up to run in parallel, well, the questions about personal identity run deep.

David Chalmers: For VR, again, where the same people in the physical world enter the virtual world and retain their personal identity, you can go in with your friends and your family if you choose to, or you can go somewhere else without them if you choose to. That seems to me much more analogous to an ordinary physical reality. There is the worry that some people value interactions with nature; Nozick says this would just be a totally artificial reality.

David Chalmers: I live in New York, so I can’t complain about artificial realities. Artificial realities are all right. If someone prefers the country or someone prefers nature, then good for them. I think that’s a reasonable value for people to have, but it seems an optional value. I’m inclined to think that basically VR can support roughly the same range of values as ordinary physical reality. If we end up for whatever reason needing or wanting to spend an awful lot of our futures in virtual worlds, there’s no obvious reason why that would be a much less valuable future.

David Chalmers: Some people think it would be somehow a dystopia. I think it could be a dystopia in various ways. It could also be a utopia, and it could have all the kinds of value you might have in between.

Robert Wiblin: Seems like the experience machine intuition often flips if you do the reverse and get rid of the status quo bias. If you imagine that someone comes to you now and says, “Actually you’ve been in an experience machine all along. I know your life doesn’t seem that great, but in fact this was the best we could do for you within the simulation. In fact, the outside world is this terrible hellscape that we decided to send everyone into this experience machine to escape. Do you want to leave to go out and be alone with a bunch of people you don’t know in the terrible outside world?”

Arden Koehler: They made a whole movie about this, Rob.

Robert Wiblin: He made a big mistake. I mean, isn’t that the conclusion of the whole thing: that maybe it was necessary for the world to continue? It doesn’t seem like Neo made the right call to leave.

David Chalmers: How did Neo know, by the way, that he was actually escaping the Matrix? Someone gives you a red pill and it’s suddenly followed by a massive adventure in a spaceship. For all he knows, the red pill has put him into a simulation; it hasn’t taken him out of the simulation. I think Neo was completely irrational.

Robert Wiblin: Maybe he’s just taken a really good party drug or something like that.

David Chalmers: Yeah, exactly. I think you’re right about the status quo bias. Felipe De Brigard, a philosopher who’s done some experimental philosophy on this, takes the case where you’re initially in the experience machine and asks, “Would you choose to exit the experience machine?” People are reluctant to do that, maybe almost to the extent that they’re reluctant to enter it. He diagnoses all this as: you want to be where you’ve been already. You want to have relationships. You want to be in that world.

David Chalmers: I polled my students in my ‘Minds and Machines’ class just the other day about this; I was teaching the experience machine last week. Arden will remember this well from the course a year ago. They had roughly the same reaction. Relatively few of them were willing to enter the experience machine, but almost as many were reluctant to exit it. There’s a sense in which, if you exit it, maybe there are some good things: you’re somehow discovering more of the world around you, and it’s adventurous. It’s like you started off growing up in Australia, and you now discover there’s a whole planet containing it which you can explore.

Robert Wiblin: I haven’t even been to simulated Italy yet, so why do I need to go out to some fancy other universe?

David Chalmers: Well, escaping the simulation is presumably a whole new level of value and exploration.

Arden Koehler: For what it’s worth, I would not go into the experience machine because, I mean, I’m not sure about this or anything, but my sense is that there are a lot of valuable things that aren’t experiences. But I would still be, I think, happy to live in virtual reality under the right circumstances. I’m just saying those intuitions don’t have to go together.

David Chalmers: Because you think the valuable things which are not experiences are things that you want to happen, maybe achievements and so on.

Arden Koehler: Yeah. It seems like you can do that in virtual reality and you can have friends in virtual reality, and you can know things about your environment even if it’s a simulated environment in virtual reality.

David Chalmers: That seems right to me. A lot of the important values, friends, achievements, and so on, you can have if VR is real and not an illusion. Maybe if you value nature, or if you value living at level-zero reality, then you’re moving a bit further away from those things. But those seem like less deep values to me; more like optional preferences.

Robert Wiblin: Has anyone tried coming up with non-hedonist experience machines? It’s like you go into this machine and your experiences won’t be any better, but, oh by golly, your achievements will be incredible in the experience machine, or your relationships will flourish so much.

Arden Koehler: I feel like this is like you live this in everyday life. Whenever you decide to sacrifice-

Robert Wiblin: Well, some of us do.

Arden Koehler: I’m just saying whenever you decide to sacrifice pleasure or happiness for some other value, you’re basically deciding to just try to increase that value without increasing pleasure.

David Chalmers: Well, of course, if you know that you’ve achieved the achievements then these things all get confounded, because you might think there’s an awful lot of pleasure or fulfillment in knowing this. If you know in advance that you’re going to have these achievements in the experience machine, then, well, there’s something weird about that. To what extent is it really an achievement if you could have known in advance?

Robert Wiblin: The same is, I think, true in the normal world. It’s like what if you’re the very best swimmer because you just were born with these extraordinary genes, and you can totally foresee winning the Olympic gold medal because you’re just way better than everyone else? I guess I don’t care about achievement, so it just doesn’t bother me, but it seems like this shouldn’t bother someone who does.

David Chalmers: We’re going to set up the world so you’re going to be the best. Yeah. It’s going to feel …

Robert Wiblin: That’s what it is. That’s what is true for the best swimmers now.

Arden Koehler: You can always screw it up.

Robert Wiblin: They have to leave you the option to-

David Chalmers: At least for the swimmer they have to get very lucky. If it was guaranteed, then I don’t know.

Robert Wiblin: Well, you got very lucky to get this incredible offer of the experience machine. I’m making a whole thing out of missing the point of all of these thought experiments.

David Chalmers: I do think this is actually of practical relevance. People already spend a lot of time in virtual worlds. Say, 10 years ago, in ‘Second Life’, people were spending 12 hours a day in there. It’s still out there, even though fewer people are doing it. Facebook is about to introduce their own social virtual world in the next few months that people are going to be able to use with virtual reality. It’s still just beginning, but these questions are actually going to become serious questions for people.

David Chalmers: If it’s always going to be merely escapism, kind of like going to the movies or watching TV, then it’s always going to have very limited value. So I think it’s important to think through it. Whereas I’m inclined to think that life in VR can have the same kinds of value as life in ordinary physical reality, maybe eventually much greater than this. That presumably ought to enter into the calculations of people thinking about how we can construct the best world.

Robert Wiblin: Funny. Some people worry, I guess, that you’d set up the conditions in the VR world such that it’s too easy to live a good life, so you wouldn’t actually be accomplishing anything. I wonder, are they against policy changes by government that make it easier to live a good life, to find friends and get a job–

Arden Koehler: Sometimes.

Robert Wiblin: Really? They’re like, “No, it’s okay.” Would they be like, “Yeah, no, we should shut down the health system to give people a greater sense of achievement even just from surviving”? I don’t know, there’s something very odd to me about this. It’s like we want to make the world easier in this world, but then they’re saying we’ve made it too easy in this VR world.

David Chalmers: There are certain ways of making the world easier which are somehow counterproductive. We’ll just give everyone pleasure: you didn’t like working? Okay, we’ll just give you a life of leisure and pleasure. Then it turns out people are actually massively missing certain forms of fulfillment and meaning.

Robert Wiblin: Well, create the fulfillment machine that just fills you up with fulfillment.

David Chalmers: Would we really agree to a fulfillment machine in advance?

Robert Wiblin: Well, yeah. Maybe. Depends how good it is. I guess I’d have to try it.

David Chalmers: If it’s a fulfillment machine that merely somehow tells me what course of action is most likely to produce fulfillment in me given my psychology, then I think that’s all right.

Arden Koehler: That’s just a really good life coach.

David Chalmers: Yeah. You guys at 80,000 Hours are basically in the business of helping people at least figure this out. If it turned out that you guys are programmers in the experience machine, you may actually be making everyone’s life dystopic without realizing it, by reducing their autonomy in figuring out their own fulfillment. No, I don’t think that’s plausible. It does suggest that certain ways of producing your expectations of high fulfillment are at least intuitively more reasonable and good than others.

Robert Wiblin: I’m just thinking that there could be a good prank where you go and find someone who’s accomplishing amazing, I don’t know, go find Angela Merkel, and tell her, “Well, actually you’ve been in this experience machine or this fulfillment machine and accomplishment machine the whole time where we set up these conditions such that you would thrive as the leader of Germany. Do you want to reset?”

Arden Koehler: She’d be crestfallen probably, right?

Robert Wiblin: Mm-hmm (affirmative).

Arden Koehler: That seems to suggest that–

Robert Wiblin: It’s true. Well, maybe we shouldn’t tell her then.

David Chalmers: I take this seriously, occasionally. I find myself thinking so much about consciousness and simulations and so on; this would just be a very, very likely scenario for someone to have set up for me in a simulation.

Arden Koehler: You think it’s more likely because you’re thinking about simulations, like they want to see what it’s like when a human being thinks about–

Robert Wiblin: The trolls of the future.

Arden Koehler: –simulations or something.

David Chalmers: Yeah, I actually give it pretty high credence that someone’s simulating me. If people actually take seriously my thoughts about consciousness and being in a simulation, that’s a definite sign, because that happens way more often to simulated beings than to non-simulated beings. Maybe I ought to be at 90%.

Robert Wiblin: This has been fun but we should probably move on to more material or urgent issues.

Singularity [3:42:44]

Robert Wiblin: Ten years ago you wrote a fairly early paper about a singularity that might be caused by an intelligence explosion, before this idea was nearly as mainstream as it is these days. Indeed, it was treated as more of a mockable, kind of ridiculous idea then, at least as I perceive it, than it is today. I guess the rough idea comes all the way back from I.J. Good’s ideas, I think back in the ’50s, or maybe it was the ’40s.

David Chalmers: 1965 I think.

Robert Wiblin: Okay ’65.

David Chalmers: Maybe there were some antecedent pieces where he discussed a similar idea, but the classic statement was from his paper on the possibility of an ultra-intelligent machine, or a title like that, I think in 1965.

Robert Wiblin: In that paper, you seem kind of sympathetic to, or open to, the idea. I’m curious to know how people reacted to it at the time. Have your views changed over the last 10 years?

David Chalmers: It’s interesting. It certainly wasn’t my idea, but it’s an idea that a number of people seem to have hit on independently. I remember thinking about this back in the ’90s and having the sense that this was an amazing idea I’d just had. There was an interview I did back in 1995 with Australian TV; it was Lateline. You must remember Maxine McKew.

Robert Wiblin: Yes. Yes. Very well.

David Chalmers: Maxine McKew, with Rod Brooks, Doug Lenat and me talking about the future of AI. I said, “Here’s this cool idea: if we create machines smarter than us, they’ll create smarter machines still.” I suspect I had probably read about it somewhere, maybe Hans Moravec, but at that point I didn’t know any existing literature on this. I thought this was very, very important.

David Chalmers: Then in 1997, I remember, I came across the word ‘singularity’ for the first time. That was actually on Eliezer Yudkowsky’s website; maybe I was searching around on the internet for stuff. Here was this paper, ‘Staring into the Singularity’, where it seemed to me he had two ideas. One was the idea that more intelligent beings would be able to create more intelligent beings in turn, so you get an intelligence explosion. Second, the idea that this would all also happen faster and faster, so you’d get a speed explosion: under Moore’s law, speeds double at regular intervals, but when the AIs are actually running on computers with increasing speeds, the interval at which they double will get shorter and shorter.

David Chalmers: You would expect there to be a point in the future where speed goes to infinity and intelligence goes to infinity, at least in a world without physical limitations. Of course, physical limitations like the speed of light will impose some limits. I really liked this idea. I thought it was an extremely interesting idea: that maybe there’d be a point at which all of human history would somehow converge or go crazy.
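The speed-explosion arithmetic Chalmers describes can be illustrated with a toy calculation (a hypothetical sketch with made-up numbers, not anything from the conversation): if the first hardware doubling takes, say, two years, and each later doubling takes half as long as the one before because the designers themselves now run on the faster hardware, then the doubling times form a geometric series that sums to a finite limit, so unboundedly many doublings fit into a finite span of time.

```python
# Toy model of the "speed explosion" (illustrative assumptions only):
# speed doubles at each step, and each doubling takes half as long as
# the previous one, because the designers run on the faster hardware.

def speed_explosion(first_doubling_years=2.0, steps=50):
    total_time = 0.0
    interval = first_doubling_years
    speed = 1.0
    for _ in range(steps):
        total_time += interval   # wall-clock time spent on this doubling
        speed *= 2.0             # hardware is now twice as fast
        interval /= 2.0          # so the next doubling takes half as long
    return total_time, speed

total, speed = speed_explosion()
# total approaches 2 * first_doubling_years = 4 years but never exceeds it,
# while speed grows without bound: a finite-time "singularity".
print(total, speed)
```

Physical limits like the speed of light break the clean halving assumption, which is exactly the caveat Chalmers notes; the point of the sketch is only that under the idealized assumptions the divergence arrives in finite time.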

David Chalmers: Even with physical limitations imposed, the idea of a quick takeoff from human-level intelligence to superintelligence always struck me as extremely interesting and important. Then eventually, maybe 10 years later, I started hearing about the Singularity Institute and their conferences on connected themes. By this point I think the idea was fairly widespread in the AI community and the rationalist community, and it struck me that it needed a good philosophical treatment. I tried to take that big idea, which really goes back to I.J. Good’s article, and make it philosophically rigorous.

David Chalmers: Still, even though many people see this as gratuitously speculative science fiction, my own view is there’s actually a fairly solid philosophical argument, put forward by Good, which I tried to make rigorous, for the conclusion that there will eventually be much greater than human-level intelligence. In particular: first we’ll get human-level intelligence, then we’ll exceed human-level intelligence, and once we’ve exceeded human-level intelligence there will be a rapid process that leads to much greater than human-level intelligence.

David Chalmers: That’s the spiral: once you’ve got a machine which is more intelligent than me, it’ll be better than me at making machines. Therefore it’ll be able to make a machine better than the best machine I could make. Therefore it’ll be able to make a machine better than itself, and so on. Repeat this process and you spiral to superintelligence. That’s the basic idea of the intelligence explosion.

David Chalmers: In this article I mainly tried to turn that into a philosophically rigorous argument, and to look at the ways in which it could fail: if there’s not to be an intelligence explosion, it looks like there are only certain very limited ways that could happen. I’m still inclined to think it’s a good argument. I’m still inclined to think that once we get to greater than human-level intelligence, there’s at least a very significant chance that it will rapidly lead to much greater than human-level intelligence.

David Chalmers: I don’t know exactly what people’s attitude is to this argument these days. One thing that’s happened over the last 10 years or so is that people have tended to focus more on just the point where we get to greater than human-level intelligence, which already poses many of the difficult issues for thinking about society and value, and so on. Once a machine’s even somewhat smarter than us, never mind much smarter than us, that’s already a very serious issue.

David Chalmers: I think, for example, in Nick Bostrom’s book he doesn’t put so much focus on the intelligence explosion, but just on machines being smarter than us. Maybe that makes sense from a practical perspective.

Robert Wiblin: Among people that I know, the intelligence explosion idea has become less popular, I think, over the last 10 years; mostly from people pointing out that there are lots of positive feedback loops in the world, and feedback loops that feed on themselves a little bit, but that doesn’t necessarily mean it’s going to be incredibly quick. It could be that the AI reprograms itself to make itself better at reprogramming AI, but it just takes a long time each round for it to find the new, better design.

Robert Wiblin: In fact, while you do get this increase it just happens over many years or possibly even decades. Just knowing how tight that feedback loop or how strong that feedback loop is just is kind of an empirical question. Historically we usually don’t see feedback loops that are so strong that you get the world completely revolutionized overnight. It’s like there are just physical constraints among other things that tend to slow things down a bit.

Robert Wiblin: I don’t see any reason in principle to think that you couldn’t get a very rapid intelligence explosion, but I also don’t see any reason why it has to be that way. It could just be that as you program an AI to be more and more intelligent, the problems you need to surmount to make it even more intelligent get harder and harder very quickly, such that you get stuck.

David Chalmers: There are a couple of different issues here. One is just how fast the feedback loop is. It’s certainly true that some people have talked about it happening overnight; I don’t think much that’s essential to the idea depends on it happening in a few seconds or a few hours or a few days. In the article I wrote on this, I think I operationalized it in terms of something like: within a few centuries we get greater than human-level intelligence, to be conservative.

David Chalmers: Within a few decades after that to get to much greater than human-level intelligence. Again, I think aiming to be conservative. Then I think the thought was it’s not too hard to motivate something like within a few decades just by reasoning about cases where the AIs we create are then put to work on the programming and a few stages of this happening in a fairly deliberate and cautious way could easily happen. It seems prima facie a few decades would be relatively conservative.

David Chalmers: Then there are other ways it could go wrong besides just thinking about the timescale. I think you mentioned one, which is the possibility of diminishing returns. Maybe the first increase of 10% in intelligence is easy, and the next increase of 10% is harder, and the next is harder still. Maybe there are limits in intelligence space, or massive mountains to overcome. I don’t rule that out. My own view is that would be surprising.

David Chalmers: I think it’s more natural to think of intelligence space as containing many paths in many different directions which are going to lead up hills, and so on. In the article I tried to make a back-of-the-envelope case that it’s unlikely that you’re going to come up against diminishing returns quickly. It’s certainly a possibility. What I really tried to do in my analysis of this was look at the ways it could go wrong.

David Chalmers: If there’s not going to be an intelligence explosion, why won’t there be? One reason could be that we kill ourselves, and another that we decide not to create AIs; those are roughly reasons tied to our motivations. Another possibility is diminishing returns. The other one, which is interesting, is that the argument for an intelligence explosion basically turns on there being increases in some underlying capacity which correlate with the capacities we’re interested in. Maybe those correlations could fail. I don’t know if there are one or two of those you think are really worth pursuing.

Arden Koehler: I think I’m most interested in the last one that you just mentioned. I thought it was interesting that you said, “Well, you could decompose intelligence into multiple different capacities.” The argument sort of requires that the capacity that builds on itself through generations is correlated with, or produces increases in, some other capacity that we think is really important, like power or general reasoning ability or something like that.

Arden Koehler: I think you pointed out in that paper that that isn’t completely obvious. I wonder if, I don’t know, Rob, do you think some of the people that are more skeptical of the singularity idea now are thinking maybe there’s a break between the thing that builds on itself through generations and the thing we really care about?

Robert Wiblin: Actually, I haven’t heard much of a focus on that. Be interested to know whether you have.

David Chalmers: I think the slogan I had in the paper was (I just looked it up) self-amplification plus correlation plus manifestation equals singularity. You need a self-amplifying capacity that obeys what I called the proportionality thesis: a 10% increase in the capacity will produce the capacity to create beings with a further 10% increase in the capacity, and so on. That could be false if, for example, we came up against diminishing returns.

David Chalmers: Then we need correlation between that capacity and something we really care about, and then we need manifestation. It’s not enough to have the capacity. People need to act on the capacity. We might choose not to create smarter AIs even though we can. I think if you have those three things in place, self-amplifying capacities, correlations with things we care about, people acting on their capacities, then you will get explosion in the things that we care about.
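The contrast between Dave’s proportionality thesis and the diminishing-returns worry can be made concrete with a toy numerical sketch. This is purely illustrative, with made-up numbers and hypothetical function names, not anything from Chalmers’s paper: one trajectory obeys proportionality, so each generation of systems builds a successor 10% more capable; the other faces diminishing returns, with each generation’s gain half the previous one’s.

```python
# Toy model contrasting the proportionality thesis with diminishing returns.
# All numbers here are illustrative assumptions, not figures from the paper.

def proportional_growth(capacity: float, generations: int, gain: float = 0.10) -> float:
    """Proportionality thesis: each generation creates a successor whose
    capacity is a fixed 10% greater than its own."""
    for _ in range(generations):
        capacity *= 1 + gain
    return capacity

def diminishing_growth(capacity: float, generations: int,
                       gain: float = 0.10, decay: float = 0.5) -> float:
    """Diminishing returns: each generation's proportional gain is only
    half the previous generation's, so capacity approaches a ceiling."""
    for _ in range(generations):
        capacity *= 1 + gain
        gain *= decay
    return capacity

# Under proportionality, capacity compounds exponentially (an 'explosion');
# under diminishing returns it plateaus after a handful of generations.
print(round(proportional_growth(1.0, 50), 1))  # ~117.4: over a hundredfold
print(round(diminishing_growth(1.0, 50), 2))   # ~1.21: stuck near the start
```

The qualitative point is that the same per-generation mechanism yields wildly different endpoints depending on whether the gain is sustained or decays, which is why Dave treats diminishing returns as the substantive way to deny the explosion premise.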

David Chalmers: That means to oppose this, I wanted to say, “Well, you’ve got to come out here and oppose one of the premises.” It’s not enough to say this is crazy science fiction. Come out in favor of diminishing returns and against self-amplification, or come out in favor of there not being interesting correlations between our capacities.

David Chalmers: Maybe you can say that you would just get a mere explosion in, say, programming ability, but that won’t correlate with anything else like power to do important stuff in the world. That would be surprising, because you’d think that as you get improved programming abilities, you’d get improved abilities to create beings that could do important stuff in the world, and therefore you’d get corresponding improvements in that capacity. Maybe there are other ways to oppose correlations here.

Robert Wiblin: In as much as you’re just imagining that AI will get much better over many decades, in as much as that’s the vision, it just seems like a continuation of what we have now, with another positive feedback loop, like the many other positive feedback loops that we have in the economy. We get richer, so we can have more scientists. We get better information technology, which means we can have more scientists who communicate more and build more things. There are all of these virtuous circles that we already have. This might be one that’s quite powerful at some stage in the process, but it doesn’t seem completely unique.

David Chalmers: Maybe you’re right. The argument already turns on, at least the way I put forward the argument, there’s first an argument that we’ll get to human-level intelligence roughly based on human cognitive processing being simulable. That’s an equivalence premise. Then there’s an extension premise for why we should get to beyond human-level intelligence. That turns on just the regular kinds of technological amplification and improvement that we have already.

David Chalmers: Then there’s a third premise, the explosion premise, which is the one with rapid progress to an intelligence explosion. It’s true that even without the third premise, what you’ve got in the second premise, the extension premise, already involves some kind of extension and probably a positive feedback loop. It may well be that that would be enough to get you to superintelligence eventually.

David Chalmers: I guess I just thought of this as an argument for it happening potentially much faster: getting within decades to a superintelligence that stands to us as we stand, say, to a mouse. The existing positive feedback loops and existing extension processes didn’t seem to me to clearly motivate anything much stronger than, I don’t know, within centuries or within millennia. That speed-up seemed to me the interesting thing. Of course, maybe there are reasons to think that there are other positive feedback loops that could extend this to a similarly short timeframe. If so, I’m curious to hear about them.

Robert Wiblin: I guess you could get advances where AI becomes economically much more useful, and so you get way more investment; way more people and capital move into the area, or something like that. That doesn’t sound like it’s going to be completely game-changing, because we’ve seen that happen before: planes get better, so people build more plane factories, and then there are economies of scale. I guess that’s one force that would be pushing it.

Robert Wiblin: One thing I just want to ask about: there’s a lot of focus on “and then it reaches human-level intelligence,” and that’s the point where we might expect some intelligence explosion. It seems to me that reaching human-level intelligence is neither necessary nor sufficient. You can imagine we have sharply diminishing returns to more research in AI. We very gradually increase the intelligence of our ML systems until they get to about human level. Then we turn to one and say, “Right. We’ve gotten here now. How do we improve you a lot?” It says, “Well, I’m just as baffled as you. I’m only as smart as you guys.”

Robert Wiblin: It’s like we’re just going to need a lot of time: give me a lot of compute and then we’ll struggle forward gradually just like before. Alternatively, you could imagine an ML system that isn’t very intelligent in a general sense, but that we’ve trained in this very narrow skill of programming AI, designing good chips, and figuring out how to create this feedback loop, while it can’t do many other things. It’s a terrible conversationalist.

Robert Wiblin: Then, way before it reaches general human-level intelligence, you get this sudden takeoff. Do you agree with that kind of vision?

David Chalmers: I certainly agree that you could in principle get an intelligence explosion that starts before the human level. Just say we create a system with dog-level intelligence that has this one amazing ability to augment itself. This leads us from dog-level intelligence to monkey-level intelligence, to chimp-level intelligence, to human-level intelligence, and on from there. Maybe that could happen.

Robert Wiblin: We already see ML systems that are less smart than me in a general sense, but that would kick my ass at Go. It’s just a question of whether we can train one in something that’s sufficiently narrow without it knowing much more stuff.

David Chalmers: I don’t rule that out, but I think what this line of reasoning is missing is the fact that in the case of human-level intelligence there’s actually a positive reason to believe that this self-amplifying capacity will be kicked off in a very specific way. You’re also right that it requires more than human-level intelligence. It requires greater than human-level intelligence, and it doesn’t just require that. It requires greater than human-level intelligence itself created by human-level intelligence.

David Chalmers: It’s that combination that really gets the argument going. Because if you’ve got greater than human-level intelligence that humans create, then we have reasons to believe they will be better at creating intelligence than humans are. It then follows that they’ll be able to create intelligences more intelligent than the most intelligent beings we can create, therefore beings more intelligent than themselves. It’s that specific combination that gets us to the intelligence explosion.

David Chalmers: It’s true that if we got to monkeys creating beings more intelligent than monkeys and so on, maybe something similar could happen. It’s just we don’t have any particular reason to think that we’re ever going to get to monkeys creating beings more intelligent than monkeys, but we do have pretty good reason to think that eventually humans will create beings more intelligent than humans. Once you have that combination, then you have exactly what you need for the intelligence explosion.

Robert Wiblin: The reasoning goes: if you could do that previous step, where something at level 10 created something at level 11, then why wouldn’t 11 create 12? Why wouldn’t that be roughly similarly difficult?

David Chalmers: Yeah. Well, this goes along with what I again call the proportionality thesis. This involves capacities such that a certain increase in the capacity, say 10%, will increase the ability to create beings with a further 10% increase in that capacity. You could deny that, but I think it’s fairly plausible for some capacities in this vicinity.

Robert Wiblin: That’s fair enough.

David Chalmers: The mere fact that it may be running on much faster hardware, through hardware speedups, might itself give us a speed explosion that delivers an enormous efficiency gain.

Robert Wiblin: Although then it looks like the argument is that in reality we’ve made something smarter than us just because we threw tons of compute at it, and then it was able to think 100 times faster than us, and that’s why things sped up.

David Chalmers: I don’t necessarily want to say that merely being faster than us counts as an intelligence increase, but it might play a contributing role in getting us to intelligence increases much faster. What would have taken centuries on the slow hardware that we have might happen in decades or years on hardware that runs 100 times faster.

Arden Koehler: It might count as an intelligence increase in the way that people care about, because speed, even if it doesn’t fall under some definition of intelligence, is still a way that you can be more powerful with your mind and do more stuff.

David Chalmers: Yeah. Speed matters. If you had someone with exactly the capacities of other people who did things 100 times faster, they’d probably be treated as a super-genius.

Robert Wiblin: Quantity has a quality all of its own. On the difficulty of the feedback, or on the declining returns issue: it seems like we should be able to learn from other examples of technologies we’ve tried to advance where we do get pretty sharply declining returns. In many areas of science and technology and engineering, every so often you get these big leaps where things improve, and then there’s a leveling off as we eke out all of the returns within the current design. You’re shaking your head, looking skeptical at this.

David Chalmers: No. I agree. I agree completely. We see this a lot in science. I’ve been hanging around and paying attention to various areas of science over the last 30, 40 years. So many of them have slowed down. John Horgan wrote this book, The End of Science, which was maybe a bit extreme. Nevertheless, look at the advances in physics, even the advances in neuroscience and cognitive science: they’ve been kind of disappointing compared to what we might have expected given the advances from, say, the beginning of the 20th century through 1970. There’s a sense of a slowdown in almost every field except for technology.

David Chalmers: It’s true that technology is exempted here. We do see slowdowns. We do see diminishing returns in intellectual areas. It’s not entirely out of the question this could happen in AI.

Robert Wiblin: Well, even in technology it seems like maybe it’s just progressing as fast as it was before, but with way more people working on it. There are certainly declining returns for a given level of inputs. Some people have argued there are institutional, bureaucratic reasons, or that the system of research has changed and that’s slowed things down. It seems to me a big part of it is just that it’s getting harder and harder for humans to find something new, because we’ve found all the things that are easy for humans to see.

David Chalmers: No, it is interesting. There’s this idea that there are so many fields now in which there are fewer great … I mean, it applies in aesthetic domains as well. There are fewer great physicists than there were 100 years ago. There are also fewer great musicians, let’s say, by many people’s standards, than there were at certain points. It looks like there were some big ideas and some big styles, some big steps that were easy to take once you hit critical mass, and then things slowed down.

David Chalmers: It’s a really interesting question why that happens. By the way, I feel that in philosophy as much as anything else there have been fewer grand, world-revolutionizing philosophers now than there were 50 years ago, and fewer then than there were 50 years before that. I think maybe it’s, again, that the big ideas were out there waiting to be taken. There are diminishing returns after that.

David Chalmers: I guess my hope would be that with increasing intelligence somehow this would provide a way out of that particular kind of diminishing returns. If you had beings with whole new levels of intelligence, there’d be whole new spaces to explore. That’s only a hypothesis.

Robert Wiblin: I agree that might well help and make things go a bit faster, but it’s not completely different from what we have now. Because, of course, the technologies that we build potentially help us to do research better. You already have some positive feedback loop here. It’s just that maybe you think, “Well, we’ve improved all the paper and the computers and the texting and so on, and actually now we’re just super-bottlenecked by the brain, which we really can’t redesign almost at all.” That’s the limiting factor. If we could play with that, then things would really take off.

David Chalmers: I guess I think even relatively small differences in intelligence are responsible for vast differences in what they can lead to, whether it’s the difference, say, between an Einstein and an ordinary person, or between an ordinary person and, I don’t know, a chimpanzee or something. Relatively small increments lead to massive differences in the kinds of science that get done, or the kinds of ideas that get had.

David Chalmers: My thought was that moving from, say, an Einstein to someone who stands to Einstein as Einstein stands to an ordinary person would prima facie lead to a correspondingly vast increase in capacities and in available ideas. It’s possible that’s not the case. Maybe there are just jumping points in intelligence space, and the jump from chimps to humans was one of them, but it’s going to be hard to reach the next jumping point without a massive increase. I’d be surprised if it worked that way; I don’t see much reason to believe it will. Of course, it’s speculation.

Robert Wiblin: I probably sound very skeptical here. I guess I’m kind of agnostic. None of the premises required to get a very quick shift are that implausible, so it seems like we can’t rule it out. That’s worrying, even if we don’t think it’s terribly likely.

David Chalmers: Yeah. If someone just took the conclusion of the argument to be there’s a 20% chance this is going to happen, then from a practical perspective then the case for thinking hard about it is still going to be extremely strong.

Robert Wiblin: I think it’s maybe a little bit less than 20%, but it’s still something that a lot more people should be thinking about: how would we deal with it?

David Chalmers: I think it’s at least 25%.

Robert Wiblin: There I was talking about something that happens over weeks or months; I think the odds of that happening are well under 20%. If we’re talking about a big shift that happens over decades, it seems more likely than not.

David Chalmers: Okay. Well, good. Maybe we agree.

Robert Wiblin: Did anyone make fun of you for writing this paper? It seemed like maybe academics were scared to talk about this issue, because it just sounds like science fiction and they don’t want to be associated with that.

David Chalmers: Not really, as far as I recall. In fact, I wrote an article; it came out in The Journal of Consciousness Studies. There were 25 or so responses published, mostly by academics, including 10 to 12 academic philosophers. Maybe some people took it more seriously than others. No one was exactly making fun in response, but someone like Dan Dennett was relatively dismissive of the idea, for example.

David Chalmers: Actually, one of my purposes in writing the article was to get people who are skeptical about the idea to be clearer about why they’re skeptical, to say, “You have to deny this premise, or this premise, or this premise.” It was somewhat frustrating that Dan, in his article, didn’t even try to refute the argument. He just said, “This is crazy and not worth worrying about.”

David Chalmers: Then a bunch of people did take it seriously. Certainly some philosophers have followed up on those ideas, especially with all the people working on AI. There are a number of philosophers thinking pretty seriously about superintelligence now. Of course, there’s Nick Bostrom, who’s been doing this for decades. For a while it was just him and a few others. Now it’s catching on a bit more broadly. You’ve got people like, say, Susan Schneider, who’s just written a book on the future of the mind and the future of AI, taking these intelligence explosion ideas seriously.

David Chalmers: I think it’s gradually becoming a more academically respectable subject than it was, and with all the cash and energy pouring into people thinking about AI more generally, some fraction of that is going towards people thinking about superintelligence. One does also find some general resistance to thinking about superintelligence, even among people who think about AI ethics and AI safety.

David Chalmers: At NYU there is an institute called ‘The AI Now Institute’, which is all about “Let’s worry about the issues which are arising from artificial intelligence right now in society”. Some extremely important issues, obviously, involving bias, labor, autonomous weapons, and everything else under the sun. I think there is some resistance that comes from people who think about short-term issues in AI to thinking about long-term issues in AI.

David Chalmers: Then if you start talking about intelligence explosion and superintelligence, it can come to seem like science fiction. I’ve never understood why these things need to be opposed to each other. There’s more than one problem that needs to be addressed in the world, and the short-term issues and the long-term issues are both important. Maybe in practice there are all kinds of complicated dynamics there.

David Chalmers: I do have the sense there’s an increasingly serious group of academics who take these issues seriously. I guess you’re now talking about the intelligence explosion specifically, as opposed to super… Do you have the same attitude about superintelligence generally, or is this specific to the intelligence explosion?

Robert Wiblin: It seems to me that worrying about AI in general has now become pretty mainstream. Worrying about superintelligence is a bit weirder, a bit more edgy, but people are willing to write about that. Then the intelligence explosion, it coming about suddenly, is a level beyond that: people are a bit more nervous about talking about it, or they think it will come across as quite strange.

David Chalmers: Maybe that’s right. My own experience is that the difference between the first two, between one and two, worrying about AI and worrying about superintelligence is greater than the difference between two and three. Superintelligence is pretty far out and the intelligence explosion is just a bit further out.

AI alignment [4:07:39]

Arden Koehler: I think that’s actually my impression too. Are there any philosophical questions that you’d like to see people work on more that are relevant to issues around AI in the long term?

David Chalmers: I mean, there are so many issues potentially. Philosophers have just begun thinking about them. I’d love to see more thought about all of the issues we’ve talked about here. One issue which I find interesting: a lot of the time, at least when I was…

David Chalmers: …writing and thinking about this stuff 10 years ago, I thought that a lot of the people who were thinking about AI safety were basically assuming that AI would have what a philosopher might call a Humean architecture. There’s a goal that the AI is pursuing, and it’s basically always trying to figure out the best means to that goal. Maybe the AI has a utility function; it’s trying to maximize utility, or some objective function guides all of its behavior. To a philosopher, that’s one model of how action and practical reasoning can work, but it’s by no means the only model. It looks like there are models of intelligence and practical reason that don’t have that Humean form, where everything is in the service of goals or maximizing utility.

David Chalmers: And I think some of the arguments worrying about, for example, AI safety and AI alignment take on a particular form when you’ve got this very Humean model of intelligence: “Oh, you absolutely have to get the right utility function. If you just miss by a little bit, everything is going to be disastrous”. That makes total sense within the framework of this Humean approach to AI, but it’s not at all clear that every possible intelligence is going to have this form. Maybe you can have a Kantian intelligence with underlying rules and so on that it follows. Some people say you can always somehow model such a system as having some tacit utility function that it’s maximizing, but once that utility function is no longer explicitly being operated on, I think the considerations become very different. And it’s interesting to think about whether, say, Kantian AI or rule-based AI might have ways of being robust in a way in which Humean AI does not.

David Chalmers: Anyway, I think that’s a place where there’s room for a lot of interaction between philosophers who think about rationality and practical reasoning, and the relations among things like goals, actions, and rationality, and people who think about, say, AI safety. I mean, I’m not sure; maybe AI safety has gotten a lot more sophisticated over the last few years, and it’s less beholden to that specific goal-based model.

Arden Koehler: Just to clarify the distinction between the Humean and the Kantian models here. When you talk about a Kantian model, are you thinking about some sort of agent that can in some way modify or influence its own goals by reasoning about what goals it should have? Is that supposed to be the difference between it and a Humean model?

David Chalmers: That’s one thing it could do, but it might not necessarily be fundamentally oriented by goals at all. Maybe it has goals along the way, but it has rules and principles that it obeys. One case I’ve always liked here, and I know other people have made it, is the Google Maps analogy of, what do you call it, oracle-based AI: it does a whole lot of theoretical reasoning, and then what does it do? It tells you what to do. I know various people have offered this as a potentially safe route to AI, because this thing won’t have much in the way of goals of its own. Now, I think there are various problems with the safety of these systems; maybe they’re rapidly going to get taken up by other systems in ways that lead to danger. But here I’m just thinking about this as a kind of intelligence and its connection to action. Its actions don’t really involve maximizing any utility function by optimal means-end reasoning at all.

David Chalmers: What does it do? It tells you the answer to questions. Now, some people into AI safety are going to say, “Isn’t it going to figure out the best way to answer a question by means-end reasoning, and turn everyone into paperclips, as it were, in order to answer this question as well as possible?” To which the answer is no. That’s not the architecture of the system. It’s not a system which is trying to optimize answering the question by the best means available. It’s merely following the rule: come up with the answer to the question, and answer the question. There’s not this Humean connection to action that people assume in AI. That’s just an example of rule-based AI, and I think a system like that is not subject to the paperclip-style objections. It may have other limitations, but it’s a very simple example of how the connection between reasoning and action needn’t follow this Humean model.
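The architectural contrast Dave is drawing can be sketched in a few lines of code. This is purely illustrative; the agents, names, and the deliberately silly objective below are hypothetical toys, not real AI architectures:

```python
# Toy contrast between a 'Humean' agent (pick whichever action maximizes a
# utility function) and a rule-based oracle (answer the question, then stop).
# Everything here is a hypothetical illustration.

def humean_agent(actions, utility):
    """Chooses the available action that maximizes utility, so every choice,
    including extreme side effects, is filtered through the objective."""
    return max(actions, key=utility)

def oracle_agent(question, knowledge_base):
    """Follows a fixed rule: look up the best answer it has and report it.
    No objective is being maximized, so there is no architectural incentive
    to acquire resources in order to 'answer better'."""
    return knowledge_base.get(question, "I don't know")

# Usage sketch: with a crude objective, the optimizer picks the extreme
# action, while the oracle just answers and stops.
actions = ["answer politely", "seize compute to answer better"]
utility = lambda a: len(a)  # a deliberately silly stand-in objective
print(humean_agent(actions, utility))   # picks "seize compute to answer better"
print(oracle_agent("capital of France?", {"capital of France?": "Paris"}))  # "Paris"
```

The point of the sketch is structural: the paperclip-style worry attaches to the `max(..., key=utility)` step, and the oracle simply has no such step to exploit, whatever its other limitations.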

Arden Koehler: So William MacAskill has this idea of the long reflection, which is a period of time that we might want to enable, where we can think really hard about what we should be doing, and about a lot of the difficult, unsolved questions, especially philosophical questions in ethics. It seems like you’ve written about how philosophers have failed to converge on a lot of these really important questions. Do you think that suggests the long reflection is something we should be trying to make happen, because we need a lot of time to figure these things out? Or does it suggest that it could be a waste of time, because maybe we will never come to answers on these difficult philosophical questions?

David Chalmers: So is the long reflection the idea that we ought to basically wait and reflect for a very long time on things like our fundamental values before we do things like colonizing the universe?

Arden Koehler: Yeah, I think that’s basically the idea. And we should try to put ourselves in a position where we can engage in that reflection for a long time.

Robert Wiblin: I guess: don’t commit to any irreversible course of action until we’ve really tried very hard, turned over every intellectual stone, to figure out whether we could be making a mistake?

David Chalmers: I haven’t thought about this very much, so it’s likely that anything I have to say about it will not be terribly well thought out. It strikes me as a sort of ideal reflection that, in certain highly idealized circumstances, could make sense. I suspect that we’re never going to be in those idealized circumstances. Waiting to act can obviously have very significant costs: someone else may act whose values are much less aligned with the good than your own, whether that’s, in the case of colonizing the universe, someone elsewhere in the universe, or other groups of people right here. So this does strike me as such an idealized prescription that it’s too easy to imagine ways in which things could go wrong. That said, if there were a way to ensure that by waiting in this way nothing else bad was going to happen in the meantime, then maybe I could imagine at least some highly idealized circumstances where this would be a good idea.

Robert Wiblin: Yeah. So maybe Will would object to this explanation, but I think one idealization would be that we get to a point where we’re just really confident that we’ve set ourselves up such that we’re not going to have a big war, and we’re not going to destroy ourselves. So we’re very secure. Maybe we’ve spread out over a fairly decent fraction of the galaxy or something like that, so we haven’t got all our eggs in one basket. And then we’re like: well, the universe is very big, and time is very long. We could go out and grab all of those resources now, but in fact the universe is only expanding very gradually, so we’re only losing something like a billionth of the available resources every year. Why not just take a million years? We’ll only lose a thousandth of the resources. And that will potentially give us a much better idea of what we actually want to do with all of that space and all of that time.
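Rob’s figure survives a quick sanity check; this is just the arithmetic on his own illustrative assumptions (a billionth of reachable resources lost per year, a million-year wait):

```python
# Back-of-the-envelope check of Rob's figure. Both inputs are his stated,
# purely illustrative assumptions, not data.
loss_per_year = 1e-9   # fraction of available resources lost annually
years_waited = 1e6     # length of the proposed reflection period

# Compounding is negligible at these scales, so a linear estimate suffices:
fraction_lost = loss_per_year * years_waited
print(fraction_lost)   # ~0.001, i.e. about a thousandth of the resources
```
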

David Chalmers: Yeah. And after the first million, we should probably wait for another million.

Robert Wiblin: Yeah, the ever delayed splurge. Yeah.

David Chalmers: To reach that happy equilibrium point, we’d better wait for at least a quadrillion.

Robert Wiblin: Well, at some point there have to be some constraints. You’re not going to defer the benefit forever, but you can also do this kind of mixed strategy. I was going to say: you send out a colonization wave, but they’re waiting on instructions from philosophy central back at the home planet, which will send out a message at light speed about what to do when they arrive at their destination. Maybe just as they reach the furthest level of the universe (we’re losing subscribers as I speak), they’re going to get the light arriving back from where we are, where we’ve spent as long as we could, before they were no longer reachable, trying to figure out what they should do. And then they’re going to act on those instructions. What do you think?

David Chalmers: I guess I’m just a little bit skeptical that philosophical value runs that deep. Maybe this is my moral anti-realism.

Robert Wiblin: I didn’t know you were an anti-realist.

David Chalmers: I am. I don’t know. I don’t have real worked out views in metaethics, but I guess I’ve got some leanings towards anti-realism. And that might make me just a little bit suspicious of how deep moral values go and just how much gain there’s going to be in extended reflection. Of course even a moral anti-realist should allow that moral progress is possible and actual and happens a lot, but it may well be that after the first thousand years of sustained reflection, you get into some kind of diminishing returns.

Arden Koehler: You could think that if moral realism is true, then the moral truth is more likely to be some simple thing, like utilitarianism. Whereas if moral anti-realism is true, then it could be extraordinarily complicated for all we know, and it would take even longer to work out all the ins and outs of our views.

David Chalmers: Although if moral anti-realism is true and value is ultimately a function of, say, human psychology, then we’ll just build really good AI simulations that have human psychology and do all the stuff we need to figure out what their idealized preferences and so on would be. And that might make it more of a tractable research project.

Robert Wiblin: Yeah, I guess. I mean, well, one thing is just like you are kind of only so big, and your brain only has so much in it. So it seems like, yeah, you would at some point just run out of further analysis you could do of what are the values of David Chalmers as a physical object.

David Chalmers: Whereas if you’re a moral realist, there is the possibility that very, very deep facts are involved, as it might be very, very deep facts in mathematics that will require vast amounts of rational reflection to uncover.

Robert Wiblin: Yeah, and in the moral realist case, it seems like you could imagine, it’s like the philosophers are like, “Yeah, we have a consensus. It’s moral theory X: utilitarianism or whatever.” And they’re like, “Go away and check it again, because we don’t want to mess this up.” And then they come back a thousand years later. They’ve tried every thought that they could, and they’re like, “No, we still think it’s this, and check it again. Just keep on going.” Because the risk of being wrong, even if it’s very unlikely, is so serious that you gain from delaying and just constantly trying to find a way that you could be mistaken.

David Chalmers: You’re also going to want to think very hard during this long reflection process about what kinds of beings you’re producing. Presumably AI doesn’t stop. You know, we produce some AIs. Human psychology gradually drifts in various directions, which will predictably favor certain value systems over others. And then, oh boy, we want to think about what we’re doing, because the outcome of a long reflection may very much depend on those intermediate actions. At some point we’re going to get self-conscious about those intermediate actions, “Hey, are we doing things which are going to produce more utilitarians or more Kantians?”, and somehow I find reflection of that kind may tend to undermine the whole process, especially if you’re inclined toward moral anti-realism, where this is all a function of psychology.

Robert Wiblin: I think Will might find this conversation a little bit frustrating, because it’s been so abstract so far. But yeah, I’ll try to make it a little bit more concrete. You’ve written about the possibility that we could create machine learning systems, or AI, that can do philosophy better than us, and that might lead to an explosion in progress within philosophy, because they might not be blind to possibilities that humans just find very hard to see. Do we see maybe any early signs of that? Are there any AIs doing philosophy and maybe making discoveries that we’ve missed? And how likely do you think it is that that will ever happen?

David Chalmers: I mean, my sense is that we’re not really at anything close to that point yet. But once you get human-level AGI, then you’re probably going to get areas where machines exceed human capacities, and it’s not entirely impossible. Is philosophical reasoning going to be one of the last of those areas or one of the first? Various people have said, “I suspect we’ll get machines which are good at doing science quicker than machines which are good at doing philosophy”, but I could be wrong about that. Certainly there will be a point where machines exceed our philosophical abilities; I’m inclined to think so. So if the long reflection just involves waiting for an intelligence explosion to happen, then yeah, I may think that’ll be a relatively quick reflection.

David Chalmers: Okay, so a simpler version of this is: create much greater than human level AI before colonizing the universe, because they will be in a position to have philosophical and other insights that may be vital to creating the best universe. I guess I could see that. I can see a case for that. When it comes to values though, it’s very tricky. Because like I said, I’m not a moral realist, so I’m not sure that that will be a matter of getting closer to the moral truth so much as maybe just moving off in the direction of different moral judgments, depending on what happens with AI.

David Chalmers: I mean, if we do succeed in aligning AI perfectly with human values and then producing some idealized version of that, then maybe that would produce something that we regard as massive moral progress. But it’s also very, very likely that we’ll produce AIs which are not perfectly aligned with human values, complex and varied and multifarious as those are, and we’ll just end up in a certain kind of evaluative drift as AI is produced, and it’s not clear that we ought to favor that.

David Chalmers: So this project might also depend on how confident you are that we’re going to produce AIs which are aligned with our current values. Again, if you’re a moral anti-realist that thinks that realism is a matter of moral values or ultimately a matter of our idealized moral preferences.

Arden Koehler: So can I just ask something to get clear on the source of the resistance to the idea that the long reflection might be very good? It seems like at some points you were suggesting that we just don’t need that long to figure out a lot of the things that philosophers are trying to figure out. Like maybe especially if moral anti-realism is true, is that the main issue? Or is it more that you feel like philosophers thinking for all of this time still won’t come up with the answers to the questions we think are really important? Or is it something else?

David Chalmers: Probably a mixture of things. I mean, I really feel like I’m amateurishly speculating about a topic I really haven’t thought much about. But maybe, for example, there are things here which I think are factual that we need to figure out. I mentioned already, I’m inclined to think consciousness matters a lot to value. So getting a correct theory of consciousness will, to my mind, be very important in figuring out the value of various outcomes and therefore how we should act. So I guess I ought to be open to waiting until we have a really good theory of consciousness before taking irreversible actions that affect different beings differentially.

David Chalmers: So I mean, I’m certainly open to that one in principle. That one strikes me is a bit different. I had been focusing on the one about moral values. And I guess I’m inclined to think more values as just a bit more superficial. They’re more closely tied to the details of our psychology. Maybe idealized significantly, but there are limits to how deep they can run and how inaccessible to us they can be. Might suggest that there’s going to be diminishing returns in long term reflection. But insofar as these moral questions may also depend on factual questions, like, say, the distribution of consciousness. Then I guess I ought to be open to there being deep factual questions there, which are not so accessible to us. That may take longer to figure out.

David Chalmers: And yeah, maybe just straight questions of the physics of the universe, like how long do we have? Is it billions or quadrillions or infinitely many years? Maybe our right policies will depend on figuring out factual questions like that. Maybe there’s a case for figuring those out first before taking irreversible action. I still worry about the idealization and the massive costs of waiting, but I’m sure that Will and others have thought a lot about that and have things to say.

Robert Wiblin: Yeah, I think talking about the idealized case is kind of fun, but I guess I think most people would probably agree that humanity is more likely to go too fast rather than too slow, and more likely to lock itself in rather than preserve option value for too long. So it seems like on the margin, we want to be pushing towards more reflection and more keeping of options open. And then to me, the more difficult question is how on earth do you implement this, given that there are sometimes first mover advantages and competitions that prompt people to take risks and want to move quickly? So it’s more of a political science, institutional question: how do we encourage more reflection rather than less, incrementally?

David Chalmers: Maybe this will be easier in our idealized AI future than the future that depends on actual human psychology.

Robert Wiblin: Yeah, possibly. Or it could be much worse. It kind of depends on how we set it up, I guess.

Careers in academia [4:23:37]

Robert Wiblin: All right. So listeners who have stuck with us this long might themselves be considering a career in philosophy, or at least, if not that, perhaps going into academia. So I’m kind of curious to know if there’s any career advice that you’d be interested to share with people. And of course, whenever talking to someone who’s been very unusually successful within their field, there’s a big risk that the advice that seemed like it worked for them might not work for the typical person. So we have to do a bit of correction on that. So don’t just tell everyone, I guess, to put it all on red. But yeah, try to adjust for that. Is there any advice you think would be helpful to listeners in their 20s doing a PhD, hoping to have a successful research career?

David Chalmers: Yeah, I’m not sure, because I think a lot is going to depend also on what you want out of doing philosophy. Maybe your listeners are selected for being people who have certain kinds of values here, like wanting to do philosophy to help change the world, for example. I don’t think I’ve got particularly useful career advice about that. Well, put it this way: any career advice I have would pale next to that which, say, Arden has, someone who’s reflected on this much more seriously than I have. I mean, I’m mainly doing philosophy and trying to figure things out and understand the truth and understand the world, and those moral and practical concerns are not what got me into the field. My advice to people often is A), think about things you’re passionate about, that you actually find genuinely interesting, rather than what everyone else is doing.

David Chalmers: I think it’s easy to get pushed into doing what other people are doing. And then the marginal returns for doing that, both for you and for the world are, I think, relatively low. Extend that existing literature by an inch in a certain direction. So I’d rather see people doing things that, A), they’re genuinely interested in, B), are potentially novel and ambitious. Too often I find philosophers are often relatively unambitious. They’re happy just to extend things in small ways, and people with big ideas often get certain kinds of negative feedback. It’s easy to understand why this happens, because people with big ideas often have bad ideas. We’ve all run into people with big ideas that are not promising. But nonetheless, I like to encourage ambition in philosophy and going in new directions, which aren’t just the same thing that everyone else is doing.

David Chalmers: So that’s thinking about it from the point of view of maximizing the number of good ideas which are out there. Exactly how that plays into the project of, say, effective altruism and global priorities, I’m not sure. I’m still inclined to think that having big, new ideas is probably going to be good. But it also does happen that philosophy can be done collectively. So groups of people can make progress. And maybe the model in effective altruism has been more of something like this collective project. And yeah, collectives can make progress in principle greater than the progress the individuals make, even if the individual contributions are relatively small. So this is all to say I don’t have a one size fits all set of career advice, unfortunately.

Arden Koehler: Yeah, I mean I think, so people who are involved in global priorities research often think there are particular sorts of questions that seem especially valuable for people to work on, including for philosophers to work on. So I think if this was a question about how to sort of have the most impact as a philosopher, then we would talk more about that. But I’m also just curious to hear about any things you have to say about just how to build a successful and novel research program.

Arden Koehler: And so my impression is that when you started your PhD, it was pretty unusual to think about consciousness in the way that you were thinking about it, or that it was associated with Descartes. I mean, correct me if I’m wrong. And maybe that was a little bit out of fashion? And so I’m curious to just hear about your experience there and see if there’s anything you can recommend to people who would want to have a sort of ambitious research program like that.

David Chalmers: Yeah, I mean the first thing to say is of course I’ve been very lucky. And so I don’t want to give too much weight to my own experiences; there could have been 50 people in similar situations, and it wouldn’t necessarily have panned out as well for the other 49. But my own experience was that I got into philosophy because I was really passionate about one particular question, the question about consciousness. Consciousness has gone into and out of favor over the years, and at the time it wasn’t totally dead as a subject. Some people were thinking about it. But it’s true, I also worked a lot on AI and neural networks as a graduate student and published a bunch of stuff on that. And a lot of people said, “My God, you’re crazy, you’re going to work on this almost dead subject, consciousness, rather than this really exciting new subject of AI and neural networks”.

David Chalmers: Well, I was interested in that, but it just struck me as a much more limited question. The question of consciousness, big important question, that was what I was passionate about. And I stayed with it, and five years later, the latest neural network winter was in fashion. And I mean, by the mid nineties, the bottom dropped out of that particular approach. And consciousness was suddenly on the rise. So I got lucky, of course.

David Chalmers: Wait another 10 or 15 years, and suddenly neural networks came back. So one of the lessons I’ve drawn from that was don’t be too influenced by what happens to be in fashion at the moment when, say, you’re in graduate school, because fashions can change.

David Chalmers: But yeah, that does interact with my own luck here. And I experienced pursuing something I was passionate about and had views about as good for doing philosophy, because it led me to have an area where I had intuitions, an area where I had thoughts that didn’t feel as if they were just kind of driven by the literature, and that at least proved to be useful to me in making a contribution. But a lot depends on how happy you’re going to be as a philosopher, whether you want to be someone who’s… Some people love being part of a collective team on a movement and then maybe, especially if you think about the value produced by philosophy rather than your own satisfaction, then some people are going to be in a position to make great contributions as part of a team, which will change the world just as well as contributions made by, let’s say you’ve got a 10% chance of making a transformative contribution yourself and 90% chance of doing nothing.

David Chalmers: That’s if you take the individualist model. Or you can take the collective model, where you’re more or less guaranteed to make a 10% level contribution. And 10 of you between you will make the same contribution. So take 10 people on one model, one makes a transformative contribution, and the other nine do nothing. And on the other model, all 10 make a 10% contribution. Which one are you? And just say now, now make that a choice about your own future. If you’re happier being guaranteed to make a contribution as part of a team, then maybe you’ll pursue one approach to philosophy.

David Chalmers: If you’d rather kind of swing for the fences and have a 10% chance of making a transformative contribution yourself, then you’ll take a different contribution. Of course, all of that presupposes a certain selfish attitude to philosophy. It’s like, who cares if you’re just being, completely looking at, say, moral or even epistemic value, you might say, “Who cares what you did? You should just try and maximize the outcome.” But to me, choice of careers is also very much tied to your own personal satisfaction as well as the best moral outcome or even the best epistemological outcome. So I think it’s reasonable to take into account these facts about what’s going to satisfy you and your career. I think there are just different philosophical personalities here.

Arden Koehler: Sorry, this is just picking up on one of the things that you mentioned, but you talked about not being too swayed by the fashions of the time when thinking about what you’re going to work on. So this is something I worry about a little bit, because it’s probably pretty hard to tell from the inside what’s a fashion and what is intellectual progress. And so maybe nobody was into AI during the AI winter because it really was a doomed research project or whatever. Anyway, do you have any guidance for being able to tell what’s intellectual fashion and what’s intellectual progress?

David Chalmers: I mean, there’s probably an empirical project here in figuring out the signs. There are facts about which fashions go out of fashion and 10 years later are just viewed retrospectively as mere fashions, and which are viewed as having made progress. And maybe there’s an empirical project in figuring out the markers of each. But I don’t really have a quick and easy rule of thumb for making that distinction. If there was a quick and easy rule of thumb, it would probably be applied already.

David Chalmers: I mean we all have views about these things, but it probably requires going through some substantive reflection. Maybe you could find some sociological properties of literatures. Once articles started appearing at this rate and these journals with such and such characteristics they’re more likely to be in fashion, but a good project for someone to figure out, though. We could do some empirical philosophy retrospectively. We’ll ask people about a field 10 years ago. Was that a fashion or was that progress? We’ll look at some empirical markers.

Arden Koehler: Yeah, it’s interesting. I mean, it seems like there could also be some sociological clues in just the way people talk about certain topics. I mean, you could imagine when people are dismissive of something or say that it’s silly, and that’s the primary thrust of their argument against it. That could be a sign that it’s a fashion as opposed to, I don’t know.

David Chalmers: Someone will have to figure this out.

Robert Wiblin: Well, it’s very interesting when people say that something is silly, and then they can’t really state analytically why. I think that’s a red flag that something’s going wrong. Very often things that people say are silly really are silly, but then when they’re pushed, they can’t explain why.

Arden Koehler: Right, right. Yeah. Silly. And then there’s nothing underneath it.

Robert Wiblin: Yeah. Maybe some things are primitively silly. Well, I really appreciate you taking so much time to record this bumper episode of the podcast.

Arden Koehler: This has been really fun.

Having fun disagreements [4:32:54]

Robert Wiblin: Yeah. I hope that the audience enjoys it as much as I did. I guess one final question, on this very theme: you seem to really enjoy bantering about philosophical questions with other people, even those who strongly disagree with you, or where there’s a mutual relationship where both of you think the other one has crazy views, like with Daniel Dennett. Do you think having that fun-loving attitude towards disagreement is an asset in intellectual work? And is it something that you cultivated, or is it just a personality trait?

David Chalmers: I don’t know. I suspect it’s more of a personality trait. One thing I really enjoy in philosophy is the interactive and social aspects of it. You actually get to talk about stuff a lot and think about it and argue. And argument can be done in an enjoyable way, or it can be done in an unenjoyable way. And maybe I had experiences early on, occasionally, with, say, confrontational or dismissive styles of argument that made me think, “Oh, well, this is just less fun”.

David Chalmers: So from the straight out selfish point of view, there are some styles of philosophical interaction that I find more enjoyable, and then I’m inclined to project that onto other people too. So trying to maintain and cultivate that kind of attitude more generally. But I think that’s, to be honest, if I’m really honest, I think that’s more largely about me in making the experience of philosophy a positive and enjoyable experience than about trying to make philosophical progress. I mean, there have been many great philosophers who have been total jerks. You know, Wittgenstein doesn’t seem like he would have been a whole lot of fun to be around. But if you think about Wittgenstein’s ideas, he had some interesting, important ideas. And I doubt there’s much of a correlation between jerkiness versus friendliness, so to speak, and philosophical contributions.

David Chalmers: There are philosophers who make massive progress working solitarily. There are philosophers who are jerks who make massive progress. Maybe you can try and find some potential mechanisms here. Like one very clear mechanism is when philosophy gets too aggressive, it drives people out of the field. And that’s something that we’ve actually seen in practice. I think in recent years, there’s been a trend towards making philosophy less aggressive and more respectful in its attitude. And I would like to think that’s had the effect of maybe driving fewer people out of the field, although I don’t have all of the data. There are potential mechanisms there by which it can help, but to me, to be honest, it’s more like, well, this is my career. This is the career we’re all involved in. It works better if we’re happier rather than miserable.

Robert Wiblin: Because it seems to me, if there’s no correlation between being a jerk versus being fun, and intellectual output, then why not be fun? Why not be polite about it? I suppose that way we could drive Schopenhauer out of the field or something like that.

David Chalmers: I think some people have the view that being a jerk is actually positively correlated with philosophical progress. I have encountered the attitude that, “Sorry, philosophy should only be done by the very best philosophers, and if someone says something that’s mediocre or stupid, then taking them seriously is going to be a waste of time. We ought to only focus on ideas and ignore those trivial human matters, like emotions; paying attention to those will just slow things down”. I don’t find that view very plausible, and I think it’s perfectly possible that any progress that can be made by jerky philosophy can be made by non-jerky philosophy, even if that makes it an iota, a second or a minute slower. That seems to me to be a trade-off worth making.

Arden Koehler: Just in the spirit of non jerkiness, I would just like to re-invite Schopenhauer to the table. It’s an inclusive philosophy for the very sad.

Robert Wiblin: Yeah. I didn’t know Schopenhauer personally, so I shouldn’t pass judgment I guess.

David Chalmers: I gather he was A), very, very depressive and sad. And B), pretty much a jerk to everyone around him?

Robert Wiblin: Yeah, I think that’s kind of his reputation.

David Chalmers: Maybe Schopenhauer could simply not have produced his philosophy without being as depressive as he was because it runs so deeply.

Arden Koehler: Seems plausible to me. It seems like it would be difficult to produce that philosophy without feeling the things that it was about.

Robert Wiblin: That’s fair. Yeah. You’re hoisting me on my own petard here. I was saying how good it is to be nice. You’ve like called me out on being a jerk. I take back my glib criticism of Schopenhauer.

David Chalmers: How does this work for effective altruists, by the way? Do they also cultivate niceness in the field, or do they say… Actually, a few years ago during Hurricane Sandy, there was a blackout in New York. And a bunch of people were hanging out in one of the few areas where you could get food near NYU. And I went to go get food with a prominent effective altruist who will remain nameless. And I said, “Oh, here’s a long line for food.” There was an opportunity to cut into line. “I guess we shouldn’t cut into line, because that would be immoral.” I sensed the moral gravitas of the effective altruist I was with. To which the effective altruist said, “Oh no, no, actually we’re morally obliged to cut into line, because our time is going to be spent achieving much greater value for the world than these other people’s.” So yeah, there is a point of view from which effective altruism can encourage jerkiness. I don’t know where the community as a whole stands on this.

Robert Wiblin: Yeah, that’s very bad reasoning, I think. For a rebuttal to that, I can refer people to the paper ‘Considering Considerateness’ by Stefan Schubert and my episode with him from back in 2017, maybe early ’18. I mean, there are moral uncertainty reasons, and just so many practical reasons why, if you’re actually trying to do good, it seems like you should err in the other direction and be as nice to other people, just in common sense ways, as you can. Maybe it’s a little bit late in this interview to dive into that topic.

Robert Wiblin: I guess on the intellectual banter side of things, I guess, yeah, effective altruism, again, it spans the full range. Maybe like philosophy, there’s people who love to have a laugh. And there’s people who can be maybe direct and candid or perhaps a little bit rude or abrasive would be the other framing. I think one slight problem that we face is that a lot of the discussion happens online, which I think is kind of an environment is very conducive to the more abrasive way. I think in person it’s extremely rare that people are actually rude to one another. It almost basically never happens to me that I find people in person that are inconsiderate.

Arden Koehler: Yeah, I mean, one thing that is maybe very slightly different from the issue of rudeness versus niceness, but I think is related, and that I think effective altruism does well, and so does a certain style of philosophy that I feel like you engage in, Dave, is just being really interested in lots of people’s new ideas, a huge range of ideas, with a sort of tolerance for the weirdness of ideas, and not dismissing things very quickly. I feel like that’s something people try to cultivate on purpose, and it’s something that I definitely appreciate.

David Chalmers: It also makes sense from the point of view of self-interest. When I first got into philosophy, I was just interested in the mind-body problem, and all that other stuff was just boring, boring, boring. Over time I got much more interested in it. Now it means most philosophy talks I go to have something interesting to me, and that just improves my own experience of doing philosophy. So being open to new ideas is A), in your self-interest, just practically, and B), it can also help: sometimes those new ideas are useful for thinking about your own work. So I think it’s also epistemologically useful.

Arden Koehler: It’s much more pleasant to be a nice person who’s interested in what other people have to say.

Robert Wiblin: I think EAs in general are very open minded. And I love to think about all of these crazy ideas as I think is apparent. But I guess there’s also something that’s very enjoyable about just being very passionate about thinking, “No, that’s a crazy idea. Like here’s all my reasons why I think that’s outrageous”, and not being so controlled that you’re just like, “Oh, you know, I think probably this isn’t true for these things”. There’s something to be said for passionate dismissal in the spirit of a good, fun conversation, I think.

David Chalmers: They say you should be open-minded, but not so open that your brain falls out. We’re all dismissive of some things. And I think it’s fine to be dismissive passionately, as long as you do it respectfully, giving your reasons.

Arden Koehler: I think you can actually, yeah, I think you can dismiss things in the sense of rejecting them passionately without being dismissive in the sense that I am talking about, which is kind of just like, “That’s lame and dumb, and you should just stop talking now”.

Robert Wiblin: I passionately dismiss things in conversation that I suspect probably are true, because the other person knows more about it than me, and so I presumably am wrong. But I enjoy having a strong discourse, and then I’m going to find out why I’m wrong.

David Chalmers: That you passionately reject them is one thing. Passionately dismissing them is another. If you passionately dismiss them, you say, “We’re not even going to talk about this”, and then you wouldn’t get to have the strong discourse. Passionately rejecting them is saying, “That can’t be true”, and you go on to have an argument. And yeah, those are very different things.

Arden Koehler: Yeah, right.

Robert Wiblin: Are these philosophical terms of art, rejection and dismissal?

Arden Koehler: They are now. We’re coining them.

David Chalmers: Yeah, I actually wrote this long set of guidelines for respectful, inclusive and constructive philosophy: being nice in various ways, being open to other people’s points of view. But it never rules out completely rejecting someone’s point of view, making objections. And maybe part of being respectful in philosophy is leaving open the possibility that you could be wrong. P is totally false, but at least carry out the argument in a way that would somehow leave yourself open to being convinced. In principle, none of us is absolutely certain about everything, but all that is totally consistent with passionate rejection: “I think you’re wrong, and here’s why”. So being open-minded in method is one thing, and it doesn’t require being open-minded in beliefs. Let’s put it that way.

Robert Wiblin: Well, to all of the viewers that I have rejected in this conversation, know that there is a good chance that you might be right, and I might be wrong. This has been really fun, guys. My guest today has been David Chalmers, and my co-host today has been Arden Koehler. Thanks so much for coming on the 80,000 Hours podcast.

David Chalmers: Thanks. It’s been a whole lot of fun.

Arden Koehler: Yeah, it’s been great.

Rob’s outro [4:42:14]

Robert Wiblin: I hope you enjoyed that as much as I did. I really appreciated Arden’s contribution to the interview — you’ll be hearing more of her in episodes to come.

The 80,000 Hours Podcast is produced by Keiran Harris. Audio mastering by Ben Cordell, and transcriptions by Zakee Ulhaq.

Thanks for joining, talk to you in a week or two.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths - from academics and activists to entrepreneurs and policymakers - to analyse the case for working on different issues, and provide concrete ways to help.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected]
