Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, “I solved your problem.” What I’m trying to do often is give them other ways of thinking about what they’re doing, or give them different framings.

A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, “Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be in exactly the state it is now. Would you join?” And they’d be like, “Hell no!” It’s a reframe. It doesn’t mean you definitely should quit, but it’s a reframe that gives you a new way of looking at it.

Spencer Greenberg

In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.

They cover:

  • How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
  • The importance of hype in making valuable things happen.
  • How to recognise warning signs that someone is untrustworthy or likely to hurt you.
  • Whether Registered Reports are successfully solving reproducibility issues in science.
  • The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
  • The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
  • The potential harms of lightgassing, which is the opposite of gaslighting.
  • How Spencer’s team used non-statistical methods to test whether astrology works.
  • Whether there’s any social value in retaliation.
  • And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Highlights

Does money make you happy?

Spencer Greenberg: So these kinds of methods that ask about overall life evaluation — I’ll call them “life satisfaction measures” — people have long found that there tends to be a logarithmic relationship between that and income. And what that means is that essentially every time you double your income, you get the same number of points of increase on this life satisfaction measure.
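To make that concrete: a logarithmic relationship amounts to a model like score = a + b × log2(income), where each doubling of income adds the same constant b to the score. Here is a minimal sketch in Python, with coefficients invented purely for illustration (they are not taken from any of the studies discussed):

```python
import math

# Toy life satisfaction model: score = a + b * log2(income).
# The coefficients below are invented for illustration only.
a, b = 2.0, 0.5

def life_satisfaction(income: float) -> float:
    """Each doubling of income adds the same constant b to the score."""
    return a + b * math.log2(income)

for income in [25_000, 50_000, 100_000, 200_000]:
    print(f"${income:>7,}: {life_satisfaction(income):.2f}")

# Each doubling ($25k -> $50k -> $100k -> $200k) adds the same 0.5
# points, even though the absolute dollar jump keeps growing.
```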

So then there’s this question of, OK, people’s life satisfaction goes up, but what about how good they feel in the moment? Let’s call that “wellbeing” instead of life satisfaction. Wellbeing would be like, you ping someone at a random point of the day, and you say something like, “How good do you feel right now?” Or you could ask other questions about their emotional state, like whether they’re feeling happy or whether they laughed today, things like that, right? But it’s the distinction between their evaluation of how good their life is, which is life satisfaction, and how good they feel in the moment, which is wellbeing.

And a lot of people assume that the relationship should be similar: the wealthier you are, the more your wellbeing tends to go up. And of course, these are associations, so we’re talking about on average. But Daniel Kahneman — Nobel Prize winner, former guest on my podcast, Clearer Thinking, who’s someone I tremendously respect — published a paper with a really interesting finding: that as people got wealthier, yes, indeed, their wellbeing did go up, but it kind of flattened off. I think it was at about $75,000 a year or something like that that it flatlined, so on average people stopped getting any benefit to their wellbeing as they got wealthier.

This was big news. It was quite surprising to many people, and kind of a big story at the time. They were working with pretty large datasets, and they actually saw this flattening effect.

So that’s where the interesting twist comes into the story. This other researcher, Killingsworth, using a significantly higher quality dataset than Kahneman had access to, basically does his own analysis and finds, lo and behold, that actually wellbeing, much like life satisfaction, continues going up logarithmically. So if you double your income, you get the same unit increase in wellbeing. And this was kind of a shock because, well, what the heck was going on with the Kahneman paper? You know, everyone greatly respects Daniel Kahneman. Why would the two analyses get such different results?

Now Kahneman, much to his credit, ends up talking to Killingsworth, and they team up for what’s called an adversarial collaboration. This is something I think is incredibly valuable for science, and I hope will happen a lot more: researchers who disagree actually write a paper together to try to explore their disagreement and see if they can come to an agreement, or at least figure out the source of their disagreement. So they worked together on a paper — along with Barbara Mellers, who does a lot of work on adversarial collaborations — and they ended up finding that Killingsworth was correct. Indeed, as you go up in income, there’s a logarithmic relationship with wellbeing.

And they tried to figure out why Kahneman hadn’t found this in his data. What they end up concluding is that the way Kahneman measured happiness was not ideal. It was three binary variables that get combined together, basically variables asking “Did you feel good?” The problem is that almost everyone said yes on these variables, which meant the measure could really only distinguish different levels of unhappiness: if you felt anywhere from “OK” to “good,” you would just agree to all three variables, and therefore all it was really measuring was the unhappy side.

Rob Wiblin: Do people call this top censoring or top… like the measure caps out where, if you’re feeling like kind of content, then you already have the maximum score, so it can’t detect you becoming happier beyond that, going from happy to really happy?

Spencer Greenberg: Yeah, it basically was that. It was just detecting unhappiness. And then in the new paper from the adversarial collaboration, which is really fascinating — it’s called “Income and emotional well-being: A conflict resolved” — they find a fairly strange effect: for unhappy people, you do get this capping-out effect. So if you look at a certain percentile of unhappiness — let’s say the 15% most unhappy people — as they get more and more income, it does actually cap out. It stops benefiting them. We don’t know for sure why that is. Possibly it’s because when you’re in the bottom 15th percentile of happiness, the things making you unhappy at that point are things that income has only a limited ability to change. We don’t really know.

But basically they found that it was something about that measure. And indeed, now they both agree that wellbeing goes up logarithmically with income.
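A toy simulation makes it easy to see the top-censoring idea Rob raised: a near-ceiling binary measure can make wellbeing appear to flatten with income even when the underlying relationship stays logarithmic. The sketch below is purely illustrative; the latent model, noise, and threshold are all invented, and it is not a reproduction of either paper’s analysis.

```python
import math
import random
import statistics

random.seed(0)

def latent_wellbeing(income: float) -> float:
    # Invented latent model: wellbeing rises logarithmically with
    # income, plus individual noise.
    return 2.0 + 0.5 * math.log2(income) + random.gauss(0, 1.0)

def said_yes(wellbeing: float, threshold: float = 8.5) -> int:
    # A coarse yes/no item ("Did you feel good?"): almost everyone
    # above a modest threshold answers yes, so the item separates
    # degrees of unhappiness but can't tell "content" from "elated".
    return 1 if wellbeing > threshold else 0

for income in [25_000, 50_000, 100_000, 200_000, 400_000]:
    sample = [latent_wellbeing(income) for _ in range(50_000)]
    yes_share = statistics.mean(said_yes(w) for w in sample)
    print(f"${income:>7,}: latent mean {statistics.mean(sample):5.2f}, "
          f"'yes' share {yes_share:.3f}")

# The latent mean keeps climbing by a constant 0.5 per doubling of
# income, but the share answering 'yes' saturates near 1.0. Measured
# this way, the benefit of extra income appears to flatten out.
```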

Hype vs value

Spencer Greenberg: By “value,” I’m referring to intrinsic values — so the things that people fundamentally care about for their own sake, not as a means to other ends. So it could be like your own happiness, the flourishing of your loved ones, being honest, believing true things, learning — things like that. These are, to me, the things that we should, as a society and individually, be trying to create more of.

Then we have the category of what I’m going to call “hype.” Hype refers to something being cool, exciting, having a buzz around it. I’m also going to put one value in the hype category, which is social status — which is, I think, a genuine human value: people want other people to look up to them and think they are high status. But I’m putting it in the hype category because, for this analysis, I think it fits better under hype than under values.

OK, so now we have these two kinds of things. We’ve got hype and we have value. And we could imagine a coordinate system: you’ve got value on one axis, hype on the other axis. And you could start plotting things in this system.

So let’s start with pure hype. We’ve got something like art NFTs that nobody even likes looking at. They’re just ugly art NFTs, right? These really aren’t getting anyone any value, except maybe social status — which, remember, we put in the hype category. But, at least for a while, they had a big buzz around them. They were considered cool by a certain crowd. So they were just pure hype: pure hype, no value.

Then we have things that are pure value, no hype. Let’s say doorknobs. Doorknobs are just really good at what they do. Like, so good at what they do, you don’t even think about it. When you need a doorknob, you buy a doorknob, you’re satisfied. You never think about it again. No hype whatsoever. I’ve never heard anyone rave about doorknobs.

Then we have things in between. I think Tesla would be a really good example. Tesla definitely produces some value. It makes cars that people really enjoy driving. It has some positive environmental impact. It also has incredible hype. Elon Musk is really good at building a sense of excitement and coolness and social status around the things he does. And obviously, Tesla is extremely successful.

And the reason I’ve been thinking about this is I think there are some things that succeed on pure value, and some things that succeed on pure hype. But in reality, most of the time, when things succeed, it’s by getting a combination of hype and value. And hype is something that I don’t like, and I have a negative feeling around it. I think because of that, I’ve underestimated its importance for accomplishing things in the real world. And, of course, if you’re trying to create hype, you should do it in an ethical way; you shouldn’t be lying or manipulating people. But I think there are ethical ways to help get people excited.

So I view it as a kind of blind spot for me. I also think it’s a blind spot for the effective altruism community, because I think, like me, many effective altruists are like, hype, ick. Yuck. Stay away. But the reality is it often is hype that gets people to do things together. It gets people involved. It gets people excited to actually carry out changes in the world or to get a product to succeed.

Warning signs that someone is bad news

Spencer Greenberg: So the idea is not that if someone ever shows one of these patterns, they’re bad news. It’s more like a continuum: if someone repeatedly shows these patterns to a strong degree, you might question whether they’re a safe person, or whether they might be untrustworthy or hurt you.

So let’s dig into the specific things. The first set of patterns are around things you might call dangerous psychopathy or malignant narcissism. These are things like: you notice that the person seems to be manipulating you or other people. You notice that they’re inconsistent; like, they’ll say one thing one time and a different thing at another time. Or you catch them being dishonest — and again, it could be to you, or maybe you just see them being dishonest with other people. A self-centredness, where they seem much more interested in their own interests than in other people’s. Quick, very intense anger, where they suddenly become enraged. And then finally, lack of empathy.

And I think what this cluster is really getting at are two personality disorders: antisocial personality disorder and narcissistic personality disorder. I will say not everyone with these disorders should be avoided. There can be people who are good, ethical people who have these disorders — especially if they understand that they have them, they’re seeking treatment, they’re working on themselves, and they have other compensating factors that help them avoid some of the dangers. But someone who has these disorders to a strong degree, is in total denial, and isn’t working on it at all can pose quite a bit of danger.

The second cluster is around immaturity. And so this would be things like extreme emotionality. Like the person gets extremely upset over very minor-seeming things. The person seems to avoid topics when they’re upset. So instead of telling you, “That bothered me,” they just won’t talk about it; they’ll shut down. They have really poor communication. They’re lacking responsibility or accountability: maybe they mess up, but they refuse to apologise, or they just won’t take any accountability for what they did. And general poor handling of relationships. Like, if you see they have a bad relationship with everyone else in their life, that’s not a great sign.

And this immaturity category, maybe it’s not as potentially serious, but I think it really can be a red flag in relationships. You could get in a really bad pickle where someone does something harmful but then doesn’t take responsibility for it. Or they’ll be really angry at you about something: maybe you made a minor mistake that wasn’t that serious, according to relatively objective third-party observers, but this person is extremely upset about it, and then they don’t even tell you; they’re just simmering with rage at you. So there’s a lot that can come out here, and I do think it’s a pretty important cluster.

The third and final cluster is a pettiness cluster. This would be things like they talk negatively about a lot of people, like saying negative things about their other friends to you; gossiping in a way that’s harmful, where they’re spreading information that could hurt people; and extreme judgmentalness, where they’re like, that person sucks because of this little minor defect.

So this pettiness category: I don’t think I would have thought of it myself, but I do see why it can be kind of insidious, where someone can be causing harm in a social group through these kinds of behaviours.

Important life decisions

Spencer Greenberg: One of the really common things that happens is that when we have a problem, we’re very aware of it when it first starts or when there’s a big change in it, but then we get acclimated to it very fast. So one thing that I try to think about is: what are the problems in your life that you’re so used to, you don’t even view them as problems anymore, but if you stepped back and looked at them fresh, you’d be like, “Oh wait, that’s a problem”?

We all see examples of this in really little things. Maybe there’s a hole in your counter or whatever, and at first it’s annoying, but then you get so used to it that you just don’t fix it for years, right? Really, you should have fixed it right away. But the second best thing would be to notice it today and get it fixed, instead of just working around it.

And I think we do that for much more serious things. Like someone who has significant depression and they’re just used to being depressed, and they kind of forget that there’s any other way they could be, because it’s been so many years. And it’s like, maybe you should be making a decision around your depression, and actively engaging with what you want to do about it.

Rob Wiblin: Yeah. So not realising that there’s a decision at all is a big potential failure. What are some other ones?

Spencer Greenberg: I think another big one is that people will accept one framing of a problem. I feel like when a friend comes to me with a decision, and they want to discuss it and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, “I solved your problem.” What I’m trying to do often is give them other ways of thinking about what they’re doing, or give them different framings.

And I think this is a powerful thing we can do for ourselves. Sometimes the framings are more about… We make it too binary — like “I either quit my job or I stick with my job” — and we don’t think about, “Maybe I could switch roles at the same job, or I could renegotiate details of my role” or other things like that.

So sometimes that’s where we’re stuck on framing. But sometimes it’s just coming at the problem differently. A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, “Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be in exactly the state it is now. Would you join?” And they’d be like, “Hell no!” It’s a reframe. It doesn’t mean you definitely should quit, but it’s a reframe that gives you a new way of looking at it.

So I think this can be quite powerful: we get stuck in these frames on decisions, so it’s worth asking ourselves, “Is there another way of looking at this?” Sometimes talking to other people can be a really helpful way to get those reframes, but sometimes we can generate them ourselves.

Personal principles

Spencer Greenberg: So I think of “values” as the intrinsic values, the things you fundamentally care about, that you value for their own sake. A “principle,” to me, is a decision-making heuristic. So instead of having to rethink every decision from scratch, you’re like, “I have a principle, and it helps me make my decisions quickly. It gives me a guideline of how to make my decision.”

And a good principle not only makes your decisions more efficient — so it speeds you up — but actually makes it more reliable that you achieve your values than if you tried to rethink things from scratch every time. So a good principle can help orient you in cases where maybe your willpower wouldn’t be there, or where maybe you’d second-guess yourself and not do the thing that’s most valuable.

Just to give you some examples, one of my principles is “Aim not to avoid anything valuable just because it makes you feel awkward, anxious, or afraid.” Because I have that principle, when I’m in a situation where something valuable to do is making me feel awkward or anxious, I go immediately to: yeah, I have to do that thing. The fact that it’s awkward or anxiety-provoking is not an excuse to me, because that’s one of my deep principles. And the thing is, if I tried to think about it from scratch every time, not only would it be slower, but it would also be easy to talk myself out of the thing.

Another one of my principles is “Aim to have opinions on most topics that are important to you, but view your beliefs probabilistically. Be quick to update your views as you get new evidence.” Here, if I think something is really important for society or for my own life, I want to form an active opinion on it. So if someone said, “What do you think about this?” I would say, “Here’s what I think” — but simultaneously, I want to be very flexible to new evidence and be ready to adjust my view at the drop of a hat if strong evidence comes in. Not adjust at the drop of a hat with weak evidence, but adjust at the drop of a hat with strong evidence.

So that’s something I aspire to, and I think that’s helpful when someone challenges me. I put a lot of my opinions on the internet, and if someone’s like, “What about this counterevidence?”, that principle helps orient me towards not being so reactive, not “Ahh, I’m being attacked!”, but instead, “If they gave me strong evidence, my principle says I have to change my view. So did they give me strong evidence?”

A simpler principle can be more action-guiding and give you less room for making excuses or second-guessing yourself. A more complex principle can take into account more aspects of the world, so that you miss fewer edge cases. Because it’s not that a principle will be right every single time; it’s that it will be right most of the time, and it will help you be more efficient and help you avoid second-guessing yourself too much, or running into willpower issues and things like that.

Let me read you my principle about lying: “Try never to tell lies. White lies are OK only when they’re what the recipient would prefer.” So I’m trying to say there is some wiggle room. Like, if you go to your friend’s art performance, and they come up to you excitedly, like, “What did you think?” and you actually thought it sucked, that’s a tough one. I’m going to give myself some leeway to be like, if I think this person would rather I express appreciation for their art — they’d rather I lie — then maybe it’s OK.

Lightgassing

Spencer Greenberg: Lightgassing is a phrase I came up with to describe a phenomenon that I kept encountering but I’d never had a word for.

It’s kind of the opposite of gaslighting, so why don’t we start with what gaslighting is? It comes from an old film, Gaslight, in which a man messes with the gaslights in the house but then tricks his partner into thinking that he hadn’t done it, so she starts to doubt her own senses and her own sanity. So gaslighting is when you deny someone’s senses or deny their reality, so that they start doubting their own senses or sense of reality.

Lightgassing, on the other hand, is kind of the opposite of this. The way it works is that sometimes when we’re dealing with someone who, let’s say, is upset, they might say something that we really don’t believe is true, but they want us to reinforce that thing because it’s really deeply important to them.

The most classic example of this would be someone who just had a breakup, and they’re talking about what an asshole their partner was. But maybe you don’t think their partner is an asshole at all. Still, they’re putting all this social pressure on you to tell them that, yes, their partner was an asshole. And this is kind of the opposite of gaslighting: whereas gaslighting is getting someone to doubt perceptions that are actually true, lightgassing is reinforcing perceptions that are actually false.

To me, this came about most dramatically in a situation where one of my loved ones was dealing with very severe mental health challenges and was experiencing actual delusions, like actual straight-up delusions about what was true. I realised I was in a very strange situation, where there was a lot of pressure to reinforce delusions that I knew were false. I still wanted to be supportive, but I started feeling very uncomfortable at the idea of reinforcing their delusions. And yet if I didn’t reinforce them, I felt like it was going to make them upset or angry.

Rob Wiblin: So you got into thinking about this because a friend of yours was having delusions, I guess. And what did you conclude about how one ought to deal with these situations?

Spencer Greenberg: First of all, I’ll say we’ve written an essay on our website, clearerthinking.org, if you want to dive deeper into this. But I will say, fortunately, the kinds of strategies you use are actually similar whether it’s a really extreme case — like someone experiencing delusions — or a milder case, where maybe someone’s just really angry at their ex-partner or something like that.

What I try to do is validate the person’s feelings without validating false perceptions they have. And that doesn’t mean you tell them they’re wrong. If someone’s upset, it’s usually not appropriate to be like, “You’re wrong about X, Y, and Z.” That’s probably not the right time. But you can still be there for them. You can show them compassion, you can tell them you care about them, and you can validate the feelings they’re feeling without agreeing to the specific factual errors they’re making.

So, an example if the person is delusional: let’s say they think someone’s coming after them, which is not true. You don’t have to be like, “Oh no, someone’s coming after you. That’s so scary!” You can say, “That sounds like a really frightening experience.” So you’re kind of saying, “Given that you think someone’s coming after you, it makes sense that you’re really scared. I’m here for you. I want to help you.”

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.