In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.

They cover:

  • Why our intuitions seem so unreliable for answering fundamental questions about reality.
  • What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
  • Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
  • Eric’s claim that consciousness and cosmology are universally bizarre and dubious.
  • How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
  • The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.
  • Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore


Can consciousness be nested?

Eric Schwitzgebel: One of the things you might think is that the United States couldn’t be conscious, because it’s composed of a lot of people, and people are conscious, and maybe it’s not possible to create one conscious thing out of other conscious things. So a conscious thing couldn’t have conscious parts.

Now, why would we accept a principle like that, other than that it’s a tempting escape from this unappealing conclusion that the United States is conscious? What exactly would be the theoretical justification for thinking this? I don’t know, but let’s say you’re tempted to this in some way. The Antarean antheads example, another science fiction case, is meant to kind of undercut those intuitions.

Here we imagine that around Antares, there are these big, woolly mammoth-like creatures. And they engage — like the supersquids do, and like humans do — in lots of complex cognitive processes: they have memory, they talk about philosophy, they talk about psychology. They contact us. I imagine them coming to visit Earth and trading rare metals with us, and then maybe falling in love with people so that there are interspecies marital relationships and that sort of stuff.

These giant woolly mammoth-like creatures, from the outside, they’re just like intelligent woolly mammoths. Now, on the inside, what their heads and humps have are a million bugs. And these bugs may be conscious: they have their own individual sensoria and reactions and predilections. But there’s no reason, again, from an information-processing perspective, to think that you couldn’t engage in whatever kinds of cognitive processes or information processes that you want with a structure that’s composed out of a million bugs instead of 80 billion neurons. The bugs might have neurons inside them.

So again, from a standard materialist information-processing cognitive structure perspective — and also, I think, from an intuitive perspective — it seems like these things are conscious. This Antarean anthead who’s come and visited me has opinions about Shakespeare. Now, no individual bug has any opinions about Shakespeare; somehow that arises from the interactions of all these bugs.

So maybe we don’t know that these antheads have these ants or bugs inside them until we’ve already been interacting with them for 20 years. It seems plausible that such entities would be possible, and it seems plausible that such entities would be conscious, again, on standard materialist theories, and maybe also just using our science fictional intuitions, starting from a certain perspective.

And if that’s the case, then that’s some pressure against the idea of what I call the “anti-nesting principle.” According to the anti-nesting principle, you can’t have a conscious entity with conscious parts: you can’t nest conscious entities.

Luisa Rodriguez: Nested consciousness. When I imagine a bunch of ants maybe doing small bits of communicating to each other in whatever way ants communicate using the neural faculties they have — and any individual ant either not being conscious or having some form of consciousness that is more limited than the kind the woolly mammoth has as a full entity — my reaction is like, “How could they possibly create this emergent thing from these small bits of consciousness?” But I think that’s just evidence that consciousness is insane. I want to bat it down, but I can’t.

Eric Schwitzgebel: Right. I think we have to remember the materialist perspective here, which you are doing, but just to remind your listeners: anti-materialists will look at a bunch of neurons and say it’s impossible to conceive how these squishy things firing electric signals among each other could possibly give rise to consciousness. So therefore, consciousness couldn’t be a merely material thing.

So if you’re tempted by that line of reasoning, then you’re not a materialist. If you’re a materialist, you’ve got to say that somehow this does it. And then the question is, is the resistance to consciousness arising out of the ants the same kind of thing that the materialist is committed to batting down? From a certain perspective, it might seem inconceivable; it seems like consciousness would be a very different thing. But it’s maybe just inconceivable in the same way that a brain giving rise to consciousness seems inconceivable to some people.

Are our intuitions useless for thinking about these things?

Luisa Rodriguez: OK, I want to get back on track and just get more into whether the US is conscious. Before we do though, I think it would be helpful to me to understand how it’s possible or what is explaining why my intuitions are sometimes so incredibly useless for thinking about some of these questions. Like, it seems I would have had the intuition that my intuitions about philosophy should be helpful. And I guess they sometimes are, but sometimes they go so wrong that I’m like, why do I even have them? What is it about the history of the human mind that means that lots of people tend to have intuitions about certain things in some cases that seem to just be wrong across the board for lots and lots of people? Do you have thoughts on that?

Eric Schwitzgebel: Well, I think our intuitions arose, and arise, from an evolutionary history, a developmental history, and a social history that have to be well tuned to certain things, but don’t have to be so well tuned to others.

So if your intuitions were wrong about walking along cliff edges and picking berries and planning parties, then you would soon have physical or social trouble. Our judgements about things like “Don’t invite that guy to the party if you’re also inviting that person,” and “Don’t walk so close to the cliff edge,” and how you get a berry off a bush and into a basket have to be well tuned to the environment, basically. Otherwise they wouldn’t have evolved, been socially reinforced, or emerged in ordinary cognitive development.

But there’s no such pressure to have good intuitions about the origin of the universe, or the fundamental structure of matter, or what kinds of space aliens would be conscious or not conscious, or whether computers would be conscious. On those kinds of things, there’s no corrective source of pressure toward truth or accuracy, so our intuitions can kind of run wild.

And in some cases, like for consciousness, our intuitions may track superficial features better than they track the underlying things, right? So in developmental psychology, researchers have discovered that if you just put googly eyes on something, kids are much more likely to attribute mental states to it.

Luisa Rodriguez: Also me. I am also more likely to attribute mental states to it.

Eric Schwitzgebel: Right? There’s something about eyes, and in fact in our ordinary environment, a thing having eyes tracks pretty well with its having the kinds of mental states that we like to attribute. So that’s a great superficial feature to track. Children, even newborn infants, will respond immediately to eyes, and to configurations that look like an eye and a nose and a mouth together. So there’s something about eyes written deep in us. But that might have little to do with what the real basis of consciousness is.

Luisa Rodriguez: Yeah. Just as a funny little aside, I just did a little check. I was like, is it eyes? Yeah, I guess if I put a nose on a rock, I wouldn’t do nearly as much consciousness attribution as I would for eyes. It’s true. So yeah, I’m kind of sold on that.

Do small differences rule out consciousness?

Eric Schwitzgebel: The human brain processes a lot of information. It does. But also the United States has a lot of information exchange among its citizens and residents. For example, just think about the retina of the eye. That’s got millions of cells that are constantly processing information from the environment. Including right now, I can see you. I know our listeners can’t see us, but I can see you. And so I’m processing information about your face. Lots of information is just exchanged between people through the retina, and of course, through the internet. And in many other ways, we’re exchanging information.

Now, the exact structure of the information exchange between people is not going to look like the structure of the information exchange between neurons, although the United States does have lots of neurons, because it contains people who contain neurons. But I think part of the idea of the alien cases, and I think also part of standard-issue materialism — although there are some materialists who would resist this — is that the exact structure of the information processing doesn’t matter so much, as long as it’s got the right kind of overall cognitive shape, right?

And some materialists will maybe want to get off the boat here. You could kind of exit the argument in various places, but you end up with kind of unintuitive commitments, right? So you could take the commitment and say that it matters a lot that you’ve got a very specific type of processing for consciousness; that even if it engaged in very similar, sophisticated outward behaviour, if it had a different kind of internal structure, it just wouldn’t be conscious, and our intuitions otherwise are wrong. You could say that, but that’s not the standard line.

So that’s the kind of liberalism about underlying structure that is essential to the plausibility of the case for the United States being conscious. So that is a potential point of resistance, but I do think the natural mainstream materialist thought is not to put up resistance there, but to say that in alien species, and thus in other potential entities on Earth, you could have consciousness underwritten by very different types of architecture, as long as it had the right kind of cognitive sophistication and sufficient information processing and self-representation and responsiveness to its environment and long-term memory, and all that kind of stuff.

Luisa Rodriguez: Yeah. I do think this is a place where I’m at least drawn to the exit. And I think that it would really help me to understand exactly how liberal materialists have to be to count the information processing that people within the US and their subsystems are doing as close enough to the kinds of information processing happening in a human body to create that individual’s consciousness. What exactly are the disanalogies between how information is processed in the brain and how information is processed by Americans?

Eric Schwitzgebel: Well, there are lots of differences. So think about, say, visual communication and auditory communication between people as the primary way that people interact — setting aside, say, internet communications. With neurons, you’ve got calcium channels, you’ve got this release of ions and then an electrical discharge across the gap between the axon and the dendrite. And with people, what you’ve got is light reflecting off your face and going into someone else’s retina, and vibrations of the air that result from things going on in your mouth and throat that then stimulate your eardrum. And that’s a very different thing than calcium channels across axon-dendrite gaps. So that’s a big difference. But is that the kind of difference that should matter a lot?

Luisa Rodriguez: I guess consciousness just seems pretty crazy and complex, and that seems like a reason to think that if you change the ingredients in the recipe, you could easily lose the whole end product.

Eric Schwitzgebel: I think one reason to be liberal — and I’m working on this in a collaborative paper with Jeremy Pober, who completed his PhD under my direction a couple years ago — is what we call the “Copernican argument for alien consciousness.” So here’s the idea: the universe is really big. Probably there are intelligent aliens out there somewhere, even if they’re not visiting us. Maybe there are none in our galaxy, but given there’s like a trillion galaxies out there, even if one in a billion galaxies has some kind of intelligent life, there’s still at least 1,000 intelligent species out there. That’s a very conservative estimate, I suspect, of how much intelligence there is in the universe.
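The back-of-the-envelope estimate Eric describes can be sketched in a couple of lines (the figures are the round numbers he cites in conversation, not measured values):

```python
# Back-of-the-envelope version of the estimate quoted above.
# Both figures are the round numbers cited in conversation, not measured values.
num_galaxies = 1_000_000_000_000      # "like a trillion galaxies"
galaxies_per_species = 1_000_000_000  # "one in a billion galaxies" has intelligent life

intelligent_species = num_galaxies // galaxies_per_species
print(intelligent_species)  # 1000
```

Even with that deliberately pessimistic one-in-a-billion figure, the sheer number of galaxies still leaves on the order of a thousand intelligent species.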

So here’s the Copernican idea. We think about all this intelligent life. Imagine it’s intelligent enough to have technology like us. It would be really strange and surprising if we were the only ones who were conscious and all the rest were what philosophers call “zombies.” They act as if they’re conscious, but they don’t really have experiences underneath. That would make us special in something like the way we’d be special if we were at the centre of the universe.

So the Copernican principle of cosmology says: as a default assumption — it could be proven wrong — but just as a default starting place, let’s assume that we’re not in an unusual part of the universe. Similarly, what Pober and I are suggesting is that it would be a violation of some kind of Copernicanism to say, “We’re special. Our neurons give rise to consciousness. But the different kind of stuff that space aliens have in distant galaxies, there’s no reason to assume that that would give rise to consciousness.” So I think Copernican principles would lead us to think that whatever kinds of cognitive informational structures are interior to naturally evolved alien species, a lot of them — probably most of them — are going to be sufficient for consciousness, even if they’re very different from the specific architecture of human neurons.

Overlapping consciousnesses

Luisa Rodriguez: One thing that occurred to me is that you could actually make the same argument about lots of other kinds of agglomerated groups of systems of humans or systems of beings that we think are conscious. So maybe communities of mammals. Does this mean towns are conscious? Does this mean cities are conscious? You could say this maybe about whole continents. Are they all conscious?

Eric Schwitzgebel: Yeah, that’s definitely a worry. It seems like you could construct a slippery slope argument here. If the United States is conscious, then is California conscious? If California is conscious, then is the city of Riverside conscious? If the city of Riverside is conscious, is my university, UC Riverside, conscious? And at the end, it seems like you might end up with something even more counterintuitive than the idea that the United States is conscious.

Luisa Rodriguez: Yeah. And not just that there are many consciousnesses, but that they overlap. So San Francisco might be conscious, Riverside might be conscious, and then also the state of California might be conscious, which is itself part of the United States.

Eric Schwitzgebel: Right. And Google might be conscious. Some of Google’s workers are residents of San Francisco. Then you have partly overlapping cases and not just nested cases.

Luisa Rodriguez: Oh, I find this so upsetting, and it feels like it surely is an argument against. But something tells me you don’t think it is.

Eric Schwitzgebel: Well, I think it is one of those danger signs that we’re headed down some path towards something troubling and maybe absurd enough that we want to figure out how to get off this path. I mean, you could go all the way down this path.

Luisa Rodriguez: What happens if you do?

Eric Schwitzgebel: I think my favourite example of this is the philosopher Luke Roelofs. They say that every combination of things in the universe is a distinct locus of consciousness. Just like every combination of things in the universe has a combined mass, right? My shoe plus the rings of Jupiter has a certain combined mass. So likewise, my shoe plus the rings of Jupiter has a certain stream of consciousness that’s distinct from any other organism or entity.

So yeah, that’s where you go if you just follow this line all the way to its end: you end up with Roelofs’s view. That’s a pretty hard line to swallow, but I would recommend people check out Roelofs’s book on this. It’s called Combining Minds. So you might want to get off the bus here somewhere.

Now, I chose the United States as my example because I think it’s the best case for group consciousness for a couple of reasons. One is that it has a large number of entities in it; it’s the third most populous country in the world, and relative to other countries, there’s a lot of communication and information exchange amongst its citizens. It also has pretty sharp borders. The entity of the United States engages in lots of behaviours. So it’s a kind of best case, I think, for group consciousness. And as you get smaller and more diffuse entities, and entities that do less and have less information exchange and fewer members, the case gets harder and harder.

Borderline cases of consciousness

Eric Schwitzgebel: So you can kind of create the slippery slope case, where if people are conscious, then chimps probably are. And if chimps are, then probably mice are. And if mice are, probably all vertebrates are. And you keep going down. And it could be the case, but it doesn’t seem super plausible that there’ll be a moment somewhere there where, boom, consciousness suddenly flicks in. Or you could ride the slippery slope all the way down to panpsychism again, right? That’s the other possibility.

I think there’s at least some attractiveness, some initial plausibility to the idea that it’s not going to be a sharp break — that it’s going to be a continuum, like from blue to green. So you get this in animal cases, you get this in evolution. Plausibly, you also get this in foetal development. Let’s assume that babies are conscious when they’re born. I mean, you could think that the moment of birth is when consciousness flicks on, but that’s a little strange. And even birth is a temporally extended process, right? So if you narrow in… It’s kind of plausible to think that maybe a nine-month foetus has some consciousness already, at least a little bit. The light is on, so to speak. But again, in foetal development, we don’t see, like, here’s the moment where consciousness winks on, right?

So generally, if we look naturalistically, it looks like there’s a continuum. The kinds of things that we think are associated with consciousness — biological states and cognitive capacities — seem to exist on a continuum, rather than having this wink-on structure. And in general, almost everything in nature that’s large and floppy and complex — like consciousness — is not, strictly speaking, discrete. If you want to look for really discrete, sharp edges in nature, you kind of have to go down to the quantum level. Is the electron in this orbit around the hydrogen atom, or in that other orbit? You get a quantum jump: it can never be in between the two. But aside from those kinds of cases, almost everything in nature admits of borderline indeterminate cases, where it’s not quite clear where to draw the boundary.

Luisa Rodriguez: I still feel more drawn to the, there’s a flicking on, a winking on. And I think the reason is that if I try to also do something analogy-y, and still in the world of evolution and phylogenetic groups, the analogy that I come up with is let’s say an eye. It is not the case that we went from one organism to another, and there was all of a sudden eyeballs. We had less sophisticated eyes before we had sophisticated eyes. And before that we had probably photoreceptive, photosensitive cells. And before that, there maybe weren’t necessarily sensitive cells or photoreceptor cells. And that seems like it feels closer to me that there is a line. It is super early — like going from a cell that doesn’t have the capacity to respond or kind of pick up on light, to one that does. And that is a line. And then after that, it’s like the light getting stronger, so the eyes become more sophisticated. That feels like the most plausible way, or the most intuitive way that I would think about features being picked up and evolved and improved upon over time.

Eric Schwitzgebel: Well, I don’t know much about photosensitivity. But let me speculate just a little bit.

We normally don’t think of humans as having the sensory capacity to detect electric currents in the way that eels do. But if you stick your finger in a light socket, you will detect an electric current. So if a cell is bombarded with enough intense electromagnetic radiation in, say, the visible spectrum, it might have some limited response to that that’s different than the response it would have in the dark. You don’t want to say that this cell has a photosensitive sensory capacity, but it would be kind of like you sticking your finger in the light socket: with enough energy of that sort coming in, there is going to be a reaction in the cell.

So I’m not sure whether in evolutionary cases there is a kind of jump. Sometimes there are surprising evolutionary jumps. So I don’t know whether there would be. But at least it seems to me hypothetically possible that you would have in-between cases of photosensitivity like that. Where you’ve got two cells: one is a little bit more sensitive to that high level of electromagnetic energy, and one is less sensitive, and that turns out to be a little bit of an evolutionary advantage, and then you’re off down the path toward creating what we think of as a more specifically photosensitive sensory capacity.

So that would be, again, a kind of a way of thinking about borderline cases: at what point do you say that this is actually a sensory capacity, versus at what point do you say that this is just the cell reacting in a certain way to an intense energy input?

Luisa Rodriguez: OK, that is helpful. It helps me understand more what the interpretation would even sound like of this case when you’re trying to make the argument that it’s indeterminate and not discrete, but little. Maybe this is actually the perfect segue to prong two, because I’m now like, “But it doesn’t make any sense still!”

Eric Schwitzgebel: Generally speaking, most people would think that when we’re awake, we’re determinately conscious. And on some, but not all, mainstream theories of sleep, we have some periods in which we are not conscious. Now, not all sleep theorists think this. There are some sleep theorists who think that you are always to some extent conscious when you’re sleeping, even if you’re not dreaming. But I think that’s a minority view in the literature. And intuitively, we often think that there are moments of just zero consciousness when you’re sleeping, and then we suddenly transition into waking. Or alternatively, we suddenly transition from non-conscious sleep to conscious dreaming. Although this is a little bit at variance with how some people use the word “consciousness” in ordinary language: we sometimes say that when you’re sleeping, you’re not conscious. But in the sense of consciousness that we were talking about earlier, the sense in which there’s something it’s like, you’ve got an experience when you’re dreaming. Dreams are conscious experiences.

So in the human case, we have what seem to be pretty sudden transitions between non-conscious sleep and conscious dreaming, or between non-conscious sleep and conscious wakefulness. And when you’re kind of disoriented and half-awake, you might say, “I’m half-conscious.” But again, that’s not really the right way of thinking about it. It’s more like you’re determinately conscious, but you’ve got this kind of confused sense of where you are, or you’re disoriented in a certain way.

So our human experience seems to be that mostly we are in determinate states of either being conscious or non-conscious. So that makes it hard for us to think about or remember any in-between cases. Maybe they don’t even exist in the human case, or maybe they’re rare. There might be, during sleep, some borderline cases. There might be, during slow falling asleep or slow waking, some borderline cases. There might be, say, for people who are in vegetative states, some borderline cases, but we don’t really know whether that’s so.

If you look at the neurophysiological literature, there seem mostly to be sharp transitions between conscious and non-conscious states, but there are also brain states that seem to be not quite either conscious or non-conscious. And then how to interpret that is going to be super complex. But even if it’s typical for us to engage in sudden state transitions from conscious to non-conscious, it’s not established that that’s universally the case; there can be states that are intermediate between these typical states. So we have a lot of trouble imagining or conceiving of what an in-between conscious state might be. And that, I think, is the source of our sense that this is impossible or inconceivable.

Are we dreaming right now?

Eric Schwitzgebel: So here’s one way of thinking about it. What is the evidence that I’m awake? Well, I seem to be having all of these sensory experiences that are pretty rich in detail. And if I pinch my hand, I feel the pinch. And I’ve got this paper here, and I’m going to read some text. And I usually think that in dreams, text doesn’t stand still. It’s a little hard to look at it. It flutters away. A lot of people have that experience, or report having that experience in dreams.

But now, if I think about all of that evidence, it’s consistent with at least some theories of dreams. So some major, important dream researchers think that we sometimes have very realistic sensory experiences in our sleep, and that dreams are not always full of bizarreness. And some people report that they can read texts in their dreams, and that they can feel pinches. And sometimes we have false awakenings. I definitely have had experiences where I am dreaming that I’ve woken up, and dream that I’m judging that I’m awake and having ordinary experiences. And then I wake up again, and I’m like, holy crap. And then I momentarily worry, am I going to wake up still another time?

So all of the evidence that I have, I think, is some kind of support for the fact that I’m awake. Because for me to be having an experience like this during sleep, a certain kind of theory of dreams would have to be true. And maybe this kind of experience isn’t exactly a totally typical dream experience, because it’s a little more well organised and less bizarre than a lot of dream experiences are. But none of that’s really compelling in the sense of bringing me all the way to, say, zero credence or even one-in-a-trillion credence that I’m dreaming, right? Once I think that, on some theories of dreams that I can’t decisively reject, I could have experiences just like this during sleep — once I kind of get myself in that mood of recognising and realising that fact — it’s a little hard for me to feel 100% confident now that I’m awake, or to justify that confidence.

Luisa Rodriguez: Yeah, yeah. So I feel a little bit open to this. And it is interesting and compelling to me that some dream theorists, including prominent ones, think that these kinds of dreams that would feel a lot like the experience I’m having now can happen. I personally have no memory of ever having a dream that’s anything like what it feels like to be me, with a lifetime of memories that I can look at and that are all very logically consistent and coherent, with richness that feels orders of magnitude more — whether or not it even makes sense to describe it that way — than I’ve ever had in a dream. So it would at least feel surprising to me that despite the fact that me in a dream right now, dream Luisa, potentially has never had anything like a dream this vivid, that this happens to be the one dream that feels like it’s years long and has the richness of an entire life.

Eric Schwitzgebel: Right. So it feels like it’s years long. But of course, in dreams it’s plausible to think sometimes we have the experience of feeling like we’ve spent years in whatever dream reality we’re in. So that’s part of my reaction. But actually my main reaction is to kind of agree with you. My own preferred theory of dreams is an imagery theory on which dream experiences are more like images than they are like sensory experiences. There’s a debate about this in the dream literature. Some people say that dreams experientially are more like daydreams, which are images. And that’s pretty different experientially from the vivid sensory experience of actually seeing something. So I’m inclined toward that theory. But I should say that Jennifer Windt and Antti Revonsuo, two of the top theorists of dreams, disagree with that theory. So I’ve got maybe 80% credence in this theory, but I don’t think I could justify much more than 80% credence, given that some of my favourite theorists of dreams disagree with it.

So contingent upon that 80%, it would be very unlikely that this is the one exception. But I think what we need to do is say that maybe that’s not the right way of thinking about dreams. Maybe your impression, and my impression — that dreams kind of lack this vividness of sensory detail — maybe that’s an error on our part, a memory error; we don’t really remember the experience of dreams maybe as well as we think we do.

There’s this whole interesting literature on dream memory and how accurate or inaccurate it is. I think it might be pretty inaccurate. One interesting sign of the inaccuracy (and I don’t know if we want to get into this, but I did a whole research project on it for a while) is the fact that people used to think they dreamed in black and white. In the 1950s and 1960s, in the United States, the majority of people said that almost all of their dreams were black and white. And that was not the case before the 19th century, and it was not the case after, say, the ’70s or ’80s.

It corresponds with the rise and fall of black-and-white filmmaking. So my theory here is that people don’t remember their dream experiences very well, and they were over-analogising their dreams to black-and-white movies. The thought goes: my dreams are like movies; movies are black and white; and I don’t seem to remember the colour of particular objects in my dreams, so I guess my dreams are black and white too.

Luisa Rodriguez: Sure, yeah. So then another thought I have is just, does it even matter? I guess I’m interested in this question because there’s a whole can of worms that I sometimes look into and then close back up that’s like, should I actually care a lot morally about the experiences that I and other people have in their dreams? Because they’re really terrible for me a lot of the time, and I think some people have wonderful ones, and that’s great. But if we were to put more moral weight on the experiences people have when they’re dreaming than we do — I think we basically put none on it now — then that would be pretty crazy. But if we just assume that that is a reasonable thing to do for now, would that imply that the experience I’m having right now is not morally relevant? Or maybe we should go the other way, and say this is all pretty morally relevant, because it seems like we are having an experience and there is something it is like to be me, even if I’m in a dream.

Eric Schwitzgebel: That’s a complicated, interesting question. There is a recent paper on this that came out in Pacific Philosophical Quarterly. I’m forgetting the author’s name, unfortunately, but the idea is this. Suppose we accept a utilitarian ethics: a view on which what you try to do is maximise the balance of happiness or positive experiences minus the balance of negative experiences or pain in the universe. And suppose you accept, as seems plausible, that during dream experiences you can have positive or negative affect: even though maybe you don’t feel the pain of the pinch, it’s agonisingly terrible to feel like you’re being chased by a monster or whatever, or wonderful to have an experience of flying. If you accept all that, then on a utilitarian ethics in which the ethical imperative is basically to maximise pleasure, we ought to be investing a lot of work into improving the quality of our dreams.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].
