#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more
By Robert Wiblin and Keiran Harris · Published March 14th, 2024
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Cold open [00:00:00]
- 3.2 Rob's intro [00:01:01]
- 3.3 The interview begins [00:02:31]
- 3.4 Does money make you happy? [00:05:54]
- 3.5 Hype vs value [00:31:27]
- 3.6 Warning signs that someone is bad news [00:41:25]
- 3.7 Integrity and reproducibility in social science research [00:57:54]
- 3.8 Personal principles [01:16:22]
- 3.9 Decision-making errors [01:25:56]
- 3.10 Lightgassing [01:49:23]
- 3.11 Astrology [02:02:26]
- 3.12 Game theory, tit for tat, and retaliation [02:20:51]
- 3.13 Parenting [02:30:00]
- 3.14 Rob's outro [02:34:10]
- 4 Learn more
- 5 Related episodes
In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.
They cover:
- How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
- The importance of hype in making valuable things happen.
- How to recognise warning signs that someone is untrustworthy or likely to hurt you.
- Whether Registered Reports are successfully solving reproducibility issues in science.
- The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
- The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
- The potential harms of lightgassing, which is the opposite of gaslighting.
- How Spencer’s team used non-statistical methods to test whether astrology works.
- Whether there’s any social value in retaliation.
- And much more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
Highlights
Does money make you happy?
Spencer Greenberg: So these kinds of methods that are asking about overall life evaluation — I’ll call them “life satisfaction measures” — people have long found that there tends to be a logarithmic relationship between that and income. And what that means is that essentially every time you double your income, you get the same number of points increase from this life satisfaction measure.
So then there’s this question of, OK, people’s life satisfaction goes up, but what about how good they feel in the moment? Let’s call that “wellbeing” instead of life satisfaction. Wellbeing would be like, you ping someone at a random point of the day, and you say something like, “How good do you feel right now?” Or you could ask other questions about their emotional state, like whether they’re feeling happy or whether they laughed today, or things like that, right? But it’s the distinction between their evaluation of how good their life is, which is life satisfaction, and how good they feel in the moment, which is wellbeing.
And a lot of people assume that the relationship should be similar: the wealthier you are, the more your wellbeing tends to go up. And of course, these are associations. So we’re talking about on average. But Daniel Kahneman — Nobel Prize winner, former guest on my podcast, Clearer Thinking, who’s someone I tremendously respect — published a paper with a really interesting finding: that as people got wealthier, yes, indeed, their wellbeing did go up, but it kind of flattened off. I think it was at about $75,000 a year or something like that that it flatlined, so people stopped getting any benefit to their wellbeing on average as they got wealthier.
This was big news. It was quite surprising to many people, and kind of a big story at the time. They’re working with pretty large datasets, and they actually saw this flattening effect.
So that’s where the interesting twist comes into the story. This other researcher, Killingsworth, using a significantly higher quality dataset than Kahneman had access to, basically does his own analysis and finds, lo and behold, that actually wellbeing, much like life satisfaction, continues going up logarithmically. So if you double your income, you get the same unit increase in wellbeing. And this was kind of a shock because, well, what the heck was going on with the Kahneman paper? You know, everyone greatly respects Daniel Kahneman. Why would they find such a difference of opinion?
Now Kahneman, much to his credit, ends up talking to Killingsworth, and they team up for what’s called an adversarial collaboration. This is something I think is incredibly valuable for science, and I hope will happen a lot more, where researchers who disagree actually will write a paper together to try to explore their disagreement and see if they can come to an agreement, or at least figure out the source of their disagreement. So they worked together on a paper — along with Barbara Mellers, who they collaborated with; she does a lot of work on adversarial collaborations — and they ended up finding that actually Killingsworth was correct. Indeed, as you go up in income, there’s a logarithmic relationship with wellbeing.
And they try to figure out why it was that Kahneman didn’t find this in his data. What they end up concluding is that the way Kahneman measured happiness was not ideal. It was three binary variables that kind of get combined together — basically variables around “Did you feel good?” The problem with it is that almost everyone said yes on these variables, and that meant it only really had the ability to distinguish different levels of unhappiness: if you were anywhere from “OK” to “good,” you would just agree to all three variables, and therefore all it was really measuring was the unhappy side.
Rob Wiblin: Do people call this top censoring or top… like the measure caps out where, if you’re feeling like kind of content, then you already have the maximum score, so it can’t detect you becoming happier beyond that, going from happy to really happy?
Spencer Greenberg: Yeah. It basically was that. It was just detecting sort of unhappiness. And then in the new paper, in the adversarial collaboration, which is really fascinating — it’s called “Income and emotional well-being: A conflict resolved” — they find that there’s this fairly strange effect where, for unhappy people, you do get this capping-out effect. So if you just look at a certain percentile of unhappiness — let’s say the 15th percentile of most unhappy people — as they get more and more income, it does actually cap out. It stops benefiting them. We don’t know for sure why that is. Possibly it’s because when you’re in the bottom 15th percentile of happiness, the things that are making you unhappy at that point are things that income has only a limited ability to change. We don’t really know.
But basically they found that it was something about that measure. And indeed, now they both agree that wellbeing goes up logarithmically with income.
Hype vs value
Spencer Greenberg: “Value,” I’m referring to intrinsic values — so the things that people fundamentally care about for their own sake, not as a means to other ends. So it could be like your own happiness, the flourishing of your loved ones, being honest, believing true things, learning — things like that. These are, to me, the things that we should as a society and individually be trying to create more of.
Then we have the category of what I’m going to call “hype.” Hype refers to something being cool, exciting, having a buzz around it. I’m also going to put one value in the hype category, which is social status — which is, I think, a genuine human value: people want other people to look up to them and think they are high status. But I’m putting it in the hype category because I think for this analysis, it better fits in hype rather than values.
OK, so now we have these two kinds of things. We’ve got hype and we have value. And we could imagine a coordinate system: you’ve got value on one axis, hype on the other axis. And you could start plotting things in this system.
So let’s start with pure hype. We’ve got something like art NFTs that nobody even likes looking at. They’re just like ugly art NFTs, right? These are really not getting anyone any value of theirs, except maybe social status — which, remember, we put in the hype category. But they do have, at least for a while, they had a big buzz around them. They were considered cool by a certain crowd. So they were just pure hype. It’s pure hype, no value.
Then we have things that are pure value, no hype. Let’s say doorknobs. Doorknobs are just really good at what they do. Like, so good at what they do, you don’t even think about it. When you need a doorknob, you buy a doorknob, you’re satisfied. You never think about it again. No hype whatsoever. I’ve never heard anyone rave about doorknobs.
Then we have things in between. I think Tesla would be a really good example. Tesla definitely produces some value. It makes cars that people really enjoy driving. It has some positive environmental impact. It also has incredible hype. Elon Musk is really good at building a sense of excitement and coolness and social status for doing a thing. And obviously, Tesla is extremely successful.
And the reason I’ve been thinking about this is I think there are some things that succeed on pure value, and there’s some things that succeed on pure hype. But I think in reality, most of the time, when things succeed, it’s by getting a combination of hype and value. And I think hype is something that I don’t like, and I have a negative feeling around it. And I think because of that, I’ve underestimated the importance of it to accomplish things in the real world. And, of course, if you’re trying to create hype, you should do it in an ethical way; you shouldn’t be lying or manipulating people. But I think there are ethical ways to help get people excited.
So I view it as a kind of blind spot for me. I also think it’s a blind spot for the effective altruism community, because I think, like me, many effective altruists are like, hype, ick. Yuck. Stay away. But the reality is it often is hype that gets people to do things together. It gets people involved. It gets people excited to actually carry out changes in the world or to get a product to succeed.
Warning signs that someone is bad news
Spencer Greenberg: So the idea is not if someone ever shows one of these patterns, they’re bad news. It’s more like, think of it as a continuum: if someone repeatedly shows these patterns to a strong degree, you might question whether they’re a safe person, or whether they might be untrustworthy or hurt you.
So let’s dig into the specific things. The first set of patterns are around things you might call dangerous psychopathy or malignant narcissism. And so the things, you notice that the person seems to be manipulating you or other people. You notice that they’re inconsistent; like, they’ll say one thing one time and a different thing at another time. Or you catch them being dishonest — and again, it could be to you, or maybe you just see them being dishonest with other people. A self-centredness where they seem much more interested in their own interests than in other people’s interests. Quick, very intense anger. So they suddenly become enraged. And then finally, lack of empathy.
And I think what this cluster is really getting at are two personality disorders: antisocial personality disorder and narcissistic personality disorder. I will say not everyone with these disorders should be avoided. Like, there can be people who are good, ethical people who have these disorders — especially if they understand that they have these disorders; they’re seeking treatment, they’re working on themselves, and they have other compensating factors that help them avoid some of the dangers of having these disorders. But when you have someone who has these disorders to a strong degree, they’re in total denial, and they’re not working on it at all, it can pose quite a bit of danger.
The second cluster is around immaturity. And so this would be things like extreme emotionality. Like the person gets extremely upset over very minor-seeming things. The person seems to avoid topics when they’re upset. So instead of telling you, “That bothered me,” they just won’t talk about it; they’ll shut down. They have really poor communication. They’re lacking responsibility or accountability: maybe they mess up, but they refuse to apologise, or they just won’t take any accountability for what they did. And general poor handling of relationships. Like, if you see they have a bad relationship with everyone else in their life, that’s not a great sign.
And I think this immaturity category, maybe it’s not as potentially serious, but I think it really can be a red flag in relationships. You could get in a really bad pickle, where someone will do something harmful, but then they don’t take responsibility for it. Or they’ll be really angry at you about something: maybe you made a really minor mistake that wasn’t that serious, according to relatively objective third-party observers, but this person’s extremely upset about it — and then they don’t even tell you, and they’re just simmering with rage at you. So there’s a lot of things that can come out here, and actually, I do think it’s a pretty important cluster.
The third and final cluster is a pettiness cluster. This would be things like they talk negatively about a lot of people, like saying negative things about their other friends to you; gossiping in a way that’s harmful, where they’re spreading information that could hurt people; and extreme judgmentalness, where they’re like, that person sucks because of this little minor defect.
So this category, the pettiness, I don’t think I would have thought of this category, but I do see why it can kind of be insidious, where someone can be causing harm in a social group through these kinds of behaviours.
Important life decisions
Spencer Greenberg: One of the really common things that happens is that when we have a problem, we’re very aware of it when it first starts or when there’s a big change in it. But then we get very acclimated to it very fast. So one thing that I just try to think about is: what are the problems that are happening in your life that maybe you’re so used to, you don’t even view them as a problem anymore, but if you stepped back and looked at them fresh, you’d be like, “Oh wait, that’s a problem”?
We all see examples of this in really little things. Maybe there’s a hole in your counter or whatever, and at first it’s annoying, but then you’re so used to it and you just don’t fix it for years, right? Really, you should have fixed it right away. But the second best thing would be to just notice it today and just get it fixed, instead of just working around it.
And I think we do that for much more serious things. Like someone who has significant depression and they’re just used to being depressed, and they kind of forget that there’s anything else you could be, because it’s been so many years. And it’s like, maybe you should be making a decision around your depression, and actively engaging with what you want to do about it.
Rob Wiblin: Yeah. So not realising that there’s a decision at all, a big potential failure. What are some other ones?
Spencer Greenberg: I think another big one is that people will accept one framing of a problem. I feel like when a friend comes to me with a decision, and they want to discuss it and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, “I solved your problem.” What I’m trying to do often is give them other ways of thinking about what they’re doing, or give them different framings.
And I think this is a powerful thing we can do for ourselves. Sometimes the framings are more about… We make it too binary — like “I either quit my job or I stick with my job” — and we don’t think about, “Maybe I could switch roles at the same job, or I could renegotiate details of my role” or other things like that.
So sometimes that’s where we’re stuck on framing. But sometimes it’s just coming at the problem differently. A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, “Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?” And they’d be like, “Hell no!” It’s a reframe. It doesn’t mean you definitely shouldn’t join, but it’s a reframe that gives you a new way of looking at it.
So I think this can be quite powerful: we get stuck in these frames on decisions, and asking ourselves, “Is there another way of looking at this?” And sometimes talking to other people can be a really helpful way to get those reframes, but sometimes we can generate them ourselves.
Personal principles
Spencer Greenberg: So I think of “values” as the intrinsic values, the things you fundamentally care about, that you value for their own sake. A “principle,” to me, is a decision-making heuristic. So instead of having to rethink every decision from scratch, you’re like, “I have a principle, and it helps me make my decisions quickly. It gives me a guideline of how to make my decision.”
And a good principle, not only does it make your decisions more efficient at getting you to your values — so it speeds you up — but it actually makes it more reliable that you get to your values than if you tried to rethink things from scratch every time. So a good principle can help orient you in cases where maybe your willpower wouldn’t be there, or where maybe you might second-guess yourself and actually not do the thing that’s most valuable.
Just to give you some examples, one of my principles is “Aim not to avoid anything valuable just because it makes you feel awkward, anxious, or afraid.” Because I have that principle, when I’m in a situation where there’s something valuable to do that’s making me feel awkward or anxious, I go immediately to: yeah, I have to do that thing. The fact that it’s awkward or anxiety-provoking is not an excuse to me, because that’s one of my deep principles. And the thing is, if I tried to think about it from scratch every time, not only would it be slower, but it would also be easy to talk myself out of that thing.
Another one of my principles is “Aim to have opinions on most topics that are important to you, but view your beliefs probabilistically. Be quick to update your views as you get new evidence.” Here, if something I think is really important in society or for my own life, I want to form an active opinion on it. So if someone said, “What do you think about this?” I would say, “Here’s what I think” — but simultaneously, I want to be very flexible to new evidence and be ready to adjust my view at the drop of a hat if strong evidence comes in. Not adjust at the drop of a hat with weak evidence, but adjust at the drop of a hat with strong evidence.
So that’s something I aspire to, and I think that’s helpful when someone challenges me. I put a lot of my opinions on the internet, and if someone’s like, “What about this counterevidence?,” that principle helps orient me towards not being so reactive and being like, “Ahh, I’m being attacked!,” but being like, “If they gave me strong evidence, my principle says I have to change my view. So did they give me strong evidence?”
A simpler principle can be more action-guiding and give you less room for making excuses or second-guessing yourself. A more complex principle can take into account more aspects of the world, so that you miss fewer edge cases. Because it’s not that a principle will be right every single time; it’s that it will be right most of the time, and it will help you be more efficient and help you avoid second-guessing yourself too much, or willpower issues and things like that.
Let me read you my principle about lying. I say, “Try never to tell lies. White lies are OK only when they’re what the recipient would prefer.” So I’m trying to say there is some wiggle room. Like, if you go to your friend’s art performance, and they come up to you excitedly, like, “What did you think?” and you actually thought it sucked, that’s a tough one. I’m going to give myself some leeway to be like, if I think this person would rather I express appreciation for their art — they’d rather I lie — then maybe it’s OK.
Lightgassing
Spencer Greenberg: Lightgassing is a phrase I came up with to describe a phenomenon that I kept encountering but I’d never had a word for.
It’s kind of the opposite of gaslighting, so why don’t we start with talking about what gaslighting is? It comes from an old film in which a man would mess with the gaslights in his house, but then trick his partner into thinking that he hadn’t done it, so she started to doubt her own senses and her own sanity. So gaslighting is when you deny someone’s senses or deny their reality, so that they start doubting their own senses or sense of reality.
Lightgassing, on the other hand, is kind of the opposite of this. The way it works is that sometimes when we’re dealing with someone who, let’s say, is upset, they might say something that we really don’t believe is true, but they want us to reinforce that thing because it’s really deeply important to them.
The most classic example of this would be with someone who just had a breakup, and they’re talking about what an asshole their partner was. But maybe you don’t think their partner is an asshole at all. But they’re giving all this social pressure for you to tell them that, yes, their partner was an asshole. And this is kind of the opposite of gaslighting. Because whereas gaslighting is getting someone to doubt their real sensory perceptions that are actually true, lightgassing is when you’re actually reinforcing false sensory perceptions.
To me, this came about most dramatically in a situation where one of my loved ones was dealing with very severe mental health challenges and was experiencing actual delusions, like actual straight-up delusions about what was true. I realised I was in a very strange situation, where there was a lot of pressure to reinforce their delusions that I knew were false, and I still wanted to be supportive to them, but I started feeling very uncomfortable at this idea of reinforcing their delusions. But if I didn’t reinforce it, I felt like it was going to make them upset or angry.
Rob Wiblin: So you got into thinking about this because a friend of yours was having delusions, I guess. And what did you conclude about how one ought to deal with these situations?
Spencer Greenberg: First of all, I’ll say we’ve written an essay on our website, clearerthinking.org, if you want to check it out and dive deeper into this. But I will say, I think fortunately, the kinds of strategies you use are actually similar, whether it’s a really extreme case — like someone experiencing delusions — or a more mild case, where maybe someone’s just really angry at their ex-partner or something like that.
What I try to do is validate the person’s feelings without validating false perceptions they have. And that doesn’t mean you tell them they’re wrong. If someone’s upset, it’s usually not appropriate to be like, “You’re wrong about X, Y, and Z.” That’s probably not the right time. But you can still be there for them. You can show them compassion, you can tell them you care about them, and you can validate the feelings they’re feeling without agreeing to the specific factual errors they’re making.
So an example if the person is delusional: let’s say they think someone’s coming after them, which is not true. You don’t have to be like, “Oh no, someone’s coming after you. That’s so scary!” You can say, “That sounds like a really frightening experience.” So you’re kind of saying, “Given that you think someone’s coming after you, that makes sense that you’re really scared. I’m here for you. I want to help you.”
Articles, books, and other media discussed in the show
Spencer’s work:
- Spark Wave
- Clearer Thinking, including:
- The Clearer Thinking with Spencer Greenberg podcast — including episodes with Daniel Kahneman and Rob
- A set of tools on decision making
- Transparent Replications
- Optimize Everything — Spencer’s website
- Does money buy happiness, according to science?
- It can be shockingly hard just to understand three variables
- Can you have causation without correlation? (Surprisingly, yes)
- Your intrinsic values: Why they matter and how to find them
- Some signs of harmful or untrustworthy relationships
- On emotionally reactive traits: a hidden cause of drama and ruined relationships
- Importance hacking: A major (yet rarely discussed) problem in science
- Reflecting on your life principles — plus a tool to uncover your guiding principles
- A practical roadmap for rational decision-making
- Tools and strategies for making hard decisions
- Lightgassing — Spencer’s explanation of the phenomenon on X
- How to offer emotional validation (and how not to)
- How can you help friends or family members who are struggling with a mental health challenge?
- Can astrology predict life outcomes? We tested it.
- Can gossip be good?
Integrity and reproducibility in social science research:
- Income and emotional well-being: A conflict resolved by Matthew A. Killingsworth, Daniel Kahneman, and Barbara Mellers
- Lots of bad science still gets published. Here’s how we can change that by Sigal Samuel
- Does history have a reproducibility problem? by Anton Howes
- The past, present and future of Registered Reports by Christopher D. Chambers and Loukia Tzavella
- Preregistering, transparency, and large samples boost psychology studies’ replication rate to nearly 90% by Cathleen O’Grady
Other 80,000 Hours podcast episodes:
- Spencer’s previous appearances on the show:
- Spencer Greenberg on stopping valueless papers from getting into top journals (March 2023)
- How much should you change your beliefs based on new evidence? Spencer Greenberg on the scientific approach to solving difficult everyday questions (August 2018)
- Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm (October 2017)
- Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad
- Lucia Coulter on preventing lead poisoning for $1.66 per child
- Bryan Caplan on why you should stop reading the news
- Vitalik Buterin on effective altruism, better ways to fund public goods, the blockchain’s problems so far, and how it could yet change the world
- Audrey Tang on what we can learn from Taiwan’s experiments with how to do democracy
Transcript
Table of Contents
- 1 Cold open [00:00:00]
- 2 Rob’s intro [00:01:01]
- 3 The interview begins [00:02:31]
- 4 Does money make you happy? [00:05:54]
- 5 Hype vs value [00:31:27]
- 6 Warning signs that someone is bad news [00:41:25]
- 7 Integrity and reproducibility in social science research [00:57:54]
- 8 Personal principles [01:16:22]
- 9 Decision-making errors [01:25:56]
- 10 Lightgassing [01:49:23]
- 11 Astrology [02:02:26]
- 12 Game theory, tit for tat, and retaliation [02:20:51]
- 13 Parenting [02:30:00]
- 14 Rob’s outro [02:34:10]
Cold open [00:00:00]
Spencer Greenberg: So we could imagine a coordinate system: you’ve got value on one axis, hype on the other axis.
So let’s start with pure hype. We’ve got something like art NFTs that nobody even likes looking at. They’re just like ugly art NFTs, right?
Then we have things that are pure value, no hype. Let’s say doorknobs. Doorknobs are just really good at what they do. Like, so good at what they do, you don’t even think about it. When you need a doorknob, you buy a doorknob, you’re satisfied. You never think about it again. No hype whatsoever. I’ve never heard anyone rave about doorknobs.
Then we have things in between. I think Tesla would be a really good example.
And the reason I’ve been thinking about this is I think there are some things that succeed on pure value, and there’s some things that succeed on pure hype. But I think in reality, most of the time, when things succeed, it’s by getting a combination of hype and value. And I think hype is something that I don’t like, and I have a negative feeling around it. And I think because of that, I’ve underestimated the importance of it to accomplish things in the real world. And, of course, if you’re trying to create hype, you should do it in an ethical way; you shouldn’t be lying or manipulating people. But I think there are ethical ways to help get people excited.
Rob’s intro [00:01:01]
Rob Wiblin: Hey listeners, Rob here, head of research at 80,000 Hours.
Today we’re back with the fourth appearance of listener favourite and jack-of-all-trades Spencer Greenberg.
We cover a lot of fun topics, including:
- How much money makes you happy, and tricky methodological issues that come up trying to answer that question.
- The importance of hype.
- The most accurate warning signs that someone is untrustworthy or likely to hurt you.
- The claim that Registered Reports are successfully solving reproducibility issues in science.
- The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
- The biggest and most harmful systemic mistakes we commit when making decisions.
- Lightgassing, which is the opposite of gaslighting.
- Using non-statistical methods to test whether astrology works.
- And the social value of retaliation.
There’s a lot in there.
Just quickly before that, 80,000 Hours is currently hiring for a range of roles in business operations and people operations, which you can find at jobs.80000hours.org, and which I’ll say more about in the outro.
But now I bring you Spencer Greenberg!
The interview begins [00:02:31]
Rob Wiblin: Today I’m again speaking with Spencer Greenberg. Spencer remains a serial entrepreneur who, among other things, has founded Spark Wave, an organisation that conducts research on psychology and builds software products with a psychology focus — such as apps for mental health and technology for speeding up social science research. He has also founded clearerthinking.org, which offers more than 80 free tools and training programmes that have been used by over a million people and are designed to help improve decision making and reduce biases in people’s thinking.
Spencer also hosts the Clearer Thinking podcast, as many of you will know, where he interviews an eclectic range of people about all sorts of things related to rationality and important social problems. His background is in mathematics: he has a PhD in applied math from NYU with a specialty in machine learning.
Thanks for coming back on the podcast, Spencer.
Spencer Greenberg: Great to be back.
Rob Wiblin: So this episode is not going to have a super deep, comprehensive, uniting theme behind it. I think the theme in my mind is that Spencer Greenberg was incredibly prolific last year, even by the standards of Spencer. I was looking at your website, and you’ve written dozens of articles and been involved in a number of really interesting research projects, as usual, but maybe even exceeding your usual standards. And we’re basically going to go through a sample platter of ideas that you came up with last year that either seem important or interesting to me, or hopefully both.
One thing I’ve been meaning to ask you for a while is, I think of you as one of the top thinkers who I associate with the US rationality community, basically along with maybe Julia Galef, Zvi Mowshowitz, Gwern, I guess there’s a couple of others. I won’t ask you whether you think you’re one of the top, but do you also conceive of yourself as kind of trying to spearhead and make workable and practical and useful that tradition of thought?
Spencer Greenberg: Yeah, I really care about trying to be rational. I think it’s just a very deeply rooted value for me. In particular, I think a lot of societal problems stem from what you might call a lack of emphasis on rationality, whether it’s things about the way that we plan our society or political tribes or the decisions that companies make or decisions that nonprofits make, et cetera. I really, really care about it. And I also really care about it in my own mind. I care about how do I find ways to believe the truth more reliably, to not self-deceive. And yes, from that point of view, it’s just been something I’ve always cared about.
And then when I learned about the rationality community many years ago, I was like, cool. There’s other people that really, really care about this. And so there’s a natural sense of feeling like we’re working on a project together, even though we don’t just agree on everything.
Rob Wiblin: Yeah. The rationality community, in my mind, is united on methodology and trying to figure out the truth, but extremely varied on conclusions. But I guess that makes sense if you’re someone who’s really interested in staking out what you personally think is right, rather than just going along. Then you could end up actually with more disagreement internally than you might if you had a group that didn’t really prioritise truth and was mostly prioritising social harmony.
Spencer Greenberg: Yeah. And when people are very truth-oriented and also try to resist social pressure to believe certain things, it naturally can lead to a wide range of weird beliefs. And especially if there’s not a social punishment, like, “That was a weird thing you said,” then those beliefs can get reinforced — which I think is mostly a good thing, but can also be a bad thing in certain cases.
Does money make you happy? [00:05:54]
Rob Wiblin: Yeah. All right, let’s dive in. Does money make you happy?
Spencer Greenberg: So this is a fascinating tale that I’ve been kind of investigating for a while. Let’s start with some of the earlier research, where people found a logarithmic relationship between income and what you might call “life satisfaction.” So let me unpack that a little bit. You ask someone a question like, “Overall, how satisfied are you with your life?” and you have them rate it on a 0–10 scale. Or similarly, you say, “Imagine a ladder with 10 rungs, and each rung of the ladder is about how good your life is. The top rung is the best possible life for you. What rung would you put yourself at?”
So these kinds of methods that are asking about overall life evaluation — I’ll call them “life satisfaction measures” — people have long found that there tends to be a logarithmic relationship between that and income. And what that means is that essentially every time you double your income, you get the same number of points increase from this life satisfaction measure.
And this has been studied within countries. Maybe it’s not this way in every single country, but in many countries they’ve found that within the country you get this kind of logarithmic relationship: the wealthier you are, the happier you are — but you have to double your income to get the same benefit each time. And then they’ve also found this across countries. So if you plot the wealth of specific countries on a big plot, you find a similar kind of relationship. That’s a longstanding historical finding. So that’s the first part of the story.
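To make the logarithmic claim concrete, here is a minimal Python sketch. The intercept and slope are invented purely for illustration; real fitted values vary by dataset and country.

```python
import numpy as np

# Hypothetical log2 model of 0-10 life satisfaction as a function of income.
# The intercept and slope below are made-up illustration values.
a, b = 5.0, 0.5  # baseline score, and points gained per doubling of income

def life_satisfaction(income):
    return a + b * np.log2(income / 10_000)  # normalised to a $10k baseline

for income in [10_000, 20_000, 40_000, 80_000, 160_000]:
    print(f"${income:>7,}: {life_satisfaction(income):.2f}")
# Each doubling adds the same 0.5 points, even though the dollar gaps widen.
```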
Rob Wiblin: OK, so more income makes you happier, but you get declining returns to it. That makes a bunch of sense. What’s the next stage of this debate?
Spencer Greenberg: So then there’s this question of, OK, people’s life satisfaction goes up, but what about how good they feel in the moment? Let’s call that “wellbeing” instead of life satisfaction. Wellbeing would be like, you ping someone at a random point of the day, and you say something like, “How good do you feel right now?” Or you could ask other questions about their emotional state, like whether they’re feeling happy or whether they laughed today, or things like that, right? But it’s the distinction between their evaluation of how good their life is, which is life satisfaction, and how good they feel in the moment, which is wellbeing.
And a lot of people assume that the relationship should be similar: the wealthier you are, the more your wellbeing tends to go up. And of course, these are associations. So we’re talking about on average. But Daniel Kahneman — Nobel Prize winner, former guest on my podcast, Clearer Thinking, who’s someone I tremendously respect — published a paper with a really interesting finding: that as people got wealthier, yes, indeed, their wellbeing did go up, but it kind of flattened off. I think it was at about $75,000 a year or something like that that it flatlined, so people stopped getting any benefit to their wellbeing on average as they got wealthier.
This was big news. It was quite surprising to many people, and kind of a big story at the time.
Rob Wiblin: OK, so now the story is: yes, money does make you feel better in any given instant, but it kind of caps out above some particular level; it seems like it’s not helpful anymore. I mean, the thing that might immediately jump to mind is, as you get to higher and higher incomes, there are fewer and fewer people earning that amount of money. And also you’re expecting declining returns — so you’re thinking the differences between $75,000 and $100,000 might be quite small, because both of those are comfortable incomes. So maybe it’s just that the study didn’t have a sufficiently large sample size to pick up the differences statistically. Is that what was going on?
Spencer Greenberg: No. Actually, they’re working with pretty large datasets, and they actually saw this flattening effect.
So that’s where the interesting twist comes into the story. This other researcher, Killingsworth, using a significantly higher quality dataset than Kahneman had access to, basically does his own analysis and finds, lo and behold, that actually wellbeing, much like life satisfaction, continues going up logarithmically. So if you double your income, you get the same unit increase in wellbeing. And this was kind of a shock because, well, what the heck was going on with the Kahneman paper? You know, everyone greatly respects Daniel Kahneman. Why would they find such a difference of opinion?
Rob Wiblin: And what was the reason?
Spencer Greenberg: Well, now Kahneman, much to his credit, ends up talking to Killingsworth, and they team up for what’s called an adversarial collaboration. This is something I think is incredibly valuable for science, and I hope will happen a lot more, where researchers who disagree actually will write a paper together to try to explore their disagreement and see if they can come to an agreement, or at least figure out the source of their disagreement. So they worked together on a paper — along with Barbara Mellers, who they collaborated with; she does a lot of work on adversarial collaborations — and they ended up finding that actually Killingsworth was correct. Indeed, as you go up in income, there’s a logarithmic relationship with wellbeing.
And they try to figure out why it was that Kahneman didn’t find this in his data. What they end up concluding is that the way Kahneman measured happiness was not ideal. It was three binary variables that kind of get combined together — basically variables around “Did you feel good?” The problem with it is that almost everyone said yes on these variables, and that meant it only really had the ability to distinguish different levels of unhappiness: if you were anywhere from “OK” to “good,” you would just agree to all three variables, and therefore all it was really measuring was the unhappy side.
Rob Wiblin: I see. Hold on. Do people call this top censoring or top… like the measure caps out where, if you’re feeling like kind of content, then you already have the maximum score, so it can’t detect you becoming happier beyond that, going from happy to really happy?
Spencer Greenberg: Yeah. It basically was that. It was just detecting sort of unhappiness. And then in the new paper, in the adversarial collaboration, which is really fascinating — it’s called “Income and emotional well-being: A conflict resolved” — they find that there’s this fairly strange effect where, for unhappy people, you do get this capping-out effect. So if you just look at a certain percentile of unhappiness — let’s say the 15th percentile of most unhappy people — as they get more and more income, it does actually cap out. It stops benefiting them. We don’t know for sure why that is. Possibly it’s because when you’re in the bottom 15th percentile of happiness, the things that are making you unhappy at that point are things that income has only a limited ability to change. We don’t really know.
But basically they found that it was something about that measure. And indeed, now they both agree that wellbeing goes up logarithmically with income.
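A small simulation can illustrate the ceiling effect being described. All the numbers below are invented, not taken from the actual papers; the point is just that when nearly everyone answers yes to all three binary items, the combined score stops tracking income at the happy end.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical latent wellbeing that rises with log income, plus noise.
log_income = rng.uniform(np.log2(15_000), np.log2(600_000), n)
latent = 0.85 * log_income + rng.normal(0, 4, n)

# A measure in the style described above: three yes/no "did you feel good?"
# items with low thresholds, so about 85% of respondents say yes to all three.
thresholds = np.quantile(latent, [0.05, 0.10, 0.15])
score = sum((latent > t).astype(int) for t in thresholds)  # 0, 1, 2, or 3

print("share at ceiling:", round((score == 3).mean(), 2))
print("corr(latent, log income):", round(np.corrcoef(latent, log_income)[0, 1], 2))
print("corr(score,  log income):", round(np.corrcoef(score, log_income)[0, 1], 2))
# The three-item score correlates with income far more weakly than the latent
# variable does: above the thresholds, the score simply cannot move.
```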
Rob Wiblin: I see. So they found that for people who are unhappy, maybe in the bottom third, at some point earning additional money just doesn’t help. I guess we could speculate maybe those are people who are in unhappy marriages, or they hate their job, or they’re just…
Spencer Greenberg: Or they have genetic things that make them unhappy.
Rob Wiblin: Oh I see. They could just be morose by nature. And unfortunately, beyond some point, money just isn’t able to help with the typical problems that people in that group have. Whereas if you’re someone who’s cheerful, then as you get more money, you just find more ways to enjoy your life. That’s the basic story?
Spencer Greenberg: Yeah. And actually, funnily enough, I thought there was a problem with this analysis, because I thought, if you’re studying a relationship between wellbeing and income, can you really then condition on wellbeing? Can you say, let’s look at the relationship between wellbeing and income for people who have low wellbeing? Isn’t that going to kind of destroy the relationship? And I spent an entire day analysing this, thinking that Kahneman had made a mistake. I even emailed with him about it. Turns out he was totally right. He had done that for very subtle reasons: yes, there is a problem with conditioning in general, but the way he did the conditioning handled it exactly properly. So anyway, that was a waste of a day of my life. But interesting.
Rob Wiblin: I’m sure you learned something.
Spencer Greenberg: Yeah, for sure. But the story doesn’t end here. And this is the funny thing. So you read this paper and you’re like, wow, actually there is this logarithmic relationship. It applies both to life satisfaction and wellbeing. Cool.
But being the person I am, I plotted the data, because they released their data, which is super cool. I actually plotted it and I was shocked by what you see when you plot it: if you actually plot income versus wellbeing, it looks completely flat. Like you’re squinting at it and you’re like, wait, I thought it was going to go up? What on Earth is happening? Why is this flat?
And the answer is, it is a logarithmic relationship, but it’s such a flat one that to me, the real takeaway of what you want to understand is not this technical thing about the relationship, but if you want to understand how does wellbeing change with income, the actual answer is it doesn’t, basically. So just to give you some numbers here, it was about a 4.5 point difference out of 100 on wellbeing — 4.5 points out of 100 on wellbeing, going from the lowest income to the highest income.
Rob Wiblin: OK. Wow.
Spencer Greenberg: Yeah. So think about that for a second. How little that is.
Rob Wiblin: What are the typical scores? Because I guess if you were going from 10 out of 100 to 14.5 out of 100, that would look more impressive. But I’m guessing it’s not that?
Spencer Greenberg: That’s a good question. I don’t have the typical scores offhand. But it’s really shocking; you could look at the chart of it and you just can’t even see it going up at all. Basically, visually, you squint at it and you’re like, is it even going up? And that’s actually a really large income increase. I think the top income bracket is something like $600,000. So we’re talking about a ridiculously large increase in income for 4.5 points out of 100 on wellbeing.
So to me, the top line story is, holy crap, why is wellbeing not going up when people are getting wealthier? And then sort of second, if you want to dig in further, it’s like, and by the way, that tiny increase, that’s logarithmic. That little 4.5 point increase is a logarithmic effect.
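As rough arithmetic (the roughly $600,000 top bracket is stated above, but the bottom bracket is an assumption here, taken as $15,000 purely for illustration):

```latex
% Back-of-the-envelope only; the $15{,}000 bottom bracket is an assumed figure.
\log_2\!\left(\tfrac{600{,}000}{15{,}000}\right) = \log_2(40) \approx 5.3 \ \text{doublings},
\qquad
\tfrac{4.5 \ \text{points}}{5.3 \ \text{doublings}} \approx 0.85 \ \text{points (out of 100) per doubling}
```

So even the logarithmic gains amount to less than one point per doubling of income on a 100-point scale.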
Rob Wiblin: It’s the sort of point that only an academic would care about when they’re very deep in a debate with their colleagues on some technical statistical points.
Spencer Greenberg: Yeah. To be fair, the authors absolutely pointed this out. They do have a paragraph about this. It’s just not what the paper is about. So I think most people reading the paper, that’s not their takeaway. And that plot was not plotted, so they’re just going to come away with the logarithmic relationship.
So I went back and I compared this to life satisfaction. And actually life satisfaction — the overall evaluation of your life, instead of how good you feel at a moment — has a much stronger relationship to income. It depends on how you measure it, the strength of it. But when I was looking at different countries, it looks like it was maybe about 2 points on a 10-point scale, as you go from the lower-income to higher-income brackets.
Rob Wiblin: So almost four times the impact on life satisfaction as opposed to moment-to-moment experience?
Spencer Greenberg: Yeah, if you think about it from the point of view of what percentage of the scale you traverse. If you think about it in a normalised way, like in Z-scores, I think it was something like twice the strength of the effect.
Rob Wiblin: I see. So do you conclude from all of this that probably earning more money doesn’t make you more cheerful, doesn’t make you have more pleasurable moments?
Spencer Greenberg: Here’s the really weird thing. If you think about this, you’re like, what are we really showing? We’re showing that your moment-to-moment wellbeing doesn’t seem to go up very much at all as people get wealthier, on average. Does that mean that we can conclude something like wealth won’t make you happy, or won’t make you feel good in the moment? I think it does provide evidence for that at the individual level. I think it should make you question, will you feel happier, moment-to-moment, if you make more money?
But it doesn’t mean that it’s not going to. For example, there might be really specific situations you’re in where you can be confident that more money will make you happier. For example, if lack of money is causing distress for you day-to-day, that’s probably making you less happy. Or if it’s blocking a really important quality-of-life factor for you, something you could fix if you could buy it — let’s say treatment for chronic back pain or something like that.
So I think there’s a lot of ways where we could say money actually will make people feel better moment-to-moment. But I think, interestingly enough, on average, we don’t see much of an effect — which is quite surprising. However, if what you care about is not the moment-to-moment feeling, but your overall evaluation of your life, there we do see a stronger effect.
Rob Wiblin: Yeah, a reasonably strong effect. I feel like this issue crops up among economists and on Twitter, at least in my feed, roughly every two years, and it has for my entire adult life. There’s some new update, some exciting claim about this.
Something that I find bizarre about it is that normally economists and people who talk about this sort of research on Twitter are up in arms about the difference between correlation and causation. And if you ever just tried showing a straight correlation between two things, people would laugh you out of the room and say, “This is completely lacking credibility. Why would you think that just plotting a straight correlation between this and that shows you anything about whether one causes the other?”
But almost all of this research, as far as I can tell, is just raw correlation across surveys of how much people earn or how much their household earns and how happy they are. But this never gets pointed out. I find it absolutely strange, and I don’t put much stock in it — or I think it requires enormous carefulness in the interpretation — because there’s all kinds of ways that money could plausibly make you happy that wouldn’t necessarily show up in this sort of pure raw correlation.
Spencer Greenberg: Yeah, it’s a really interesting question. So let’s say you’re studying two variables, A and B, and you find a substantial correlation. Do you know the causal structure? Knowing that they’re correlated, does that mean that A causes B? Obviously not, right? It could be that B causes A. It could be that C causes both A and B. There’s a lot of ways out of that. It could be that A causes B, which causes A again, in a cycle, right?
In fact, I once tried to do this for three variables. I said, let’s say you’ve got variable A, you’ve got variable B, and your goal is to predict variable Y. So you got A and B, and you want to predict Y, and A and B are both correlated with Y. What are all the causal relationships? And like two hours later, I was still figuring out new causal relationships. It blew my mind how many different causal relationships you could have. It was just three variables. I have a blog post on this on my website, spencergreenberg.com, where I map out all the different causal relationships. It’s really nuts.
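As a sketch of the combinatorics Spencer is describing, the short script below enumerates every directed acyclic graph over three variables. This is a standard counting exercise, not his original analysis, and the variable labels are just placeholders.

```python
from itertools import product

# Count the possible causal structures (DAGs) over three variables A, B, Y.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]  # 6 possible arrows

def is_acyclic(edges):
    """Kahn-style check: repeatedly peel off nodes with no incoming edges."""
    remaining, es = set(range(3)), set(edges)
    while remaining:
        sources = {n for n in remaining if all(j != n for (_, j) in es)}
        if not sources:
            return False  # every remaining node has an incoming edge: a cycle
        remaining -= sources
        es = {(i, j) for (i, j) in es if i not in sources}
    return True

dags = [
    edges
    for bits in product([0, 1], repeat=len(pairs))
    if is_acyclic(edges := tuple(p for p, b in zip(pairs, bits) if b))
]
print(len(dags))  # 25 distinct causal structures (DAGs) on three labelled variables
```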
So there’s a lot of reason to think that just because things are correlated doesn’t mean that the causal relationship is what you think. However, it is suggestive. Like, if you find a moderate correlation, you could start asking, well, maybe there’s a causal relationship. And can we rule out that maybe B can’t cause A? And if we can rule that out, we’re a little closer to proving a causal relationship, right? But often correlation is confused for causation.
But then there’s a completely different thing, which is: suppose you find little to no correlational relationship — can you conclude that there’s no causal relationship? I also tried to work this out. I have an essay called “Can you have causation without correlation?” It turns out there are some interesting cases where you can have little to no correlation, but still have strong causation. But there aren’t that many such cases. And you can rule them out to some extent in some situations. In other words, you can kind of work your way through the list and be like, well, could it be this, could it be that?
I’ll give you an example. Sometimes there are systems that are in equilibrium on purpose. Let’s say a thermostat that’s heating your apartment: it’s in equilibrium on purpose. When things are in equilibrium on purpose, you can have causal effects that don’t lead to any correlation, where, for example, because it’s always held in equilibrium, any change in parameter has no effect. Even if it actually causally changes what’s happening underneath, inside the machine, the temperature still stays the same, because it’s an equilibrium, right? So that’s an example.
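Here is a toy version of that thermostat story, with all numbers invented. The outside temperature causally drives the heater, and both causally drive the room temperature, yet outside and inside come out essentially uncorrelated because the control loop cancels the effect.

```python
import numpy as np

rng = np.random.default_rng(1)
outside = rng.normal(5, 10, 10_000)        # outside temperature, degrees C

# The thermostat targets 20 degrees: heater output responds almost perfectly
# to the outside temperature, apart from a little control error.
heater = (20.0 - outside) + rng.normal(0, 0.1, 10_000)
inside = outside + heater                  # outside causally affects inside...

print(round(np.corrcoef(outside, inside)[0, 1], 3))
# ...but the printed correlation is ~0: the equilibrium hides the causation.
```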
Another example is you can have zero correlation with causation when you have certain types of nonlinearities. For example, a bowl shape, like a perfect parabola shape, what happens is that as you go up the parabola, you have a positive correlation; as you come down, you have a negative correlation — and they cancel, because correlation is sort of an average measure, so you end up with zero correlation. You could have perfect causation with zero correlation, right?
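And the parabola case takes only a few lines of numpy. A spread of x values symmetric around the peak is assumed, which is exactly what makes the rising and falling arms cancel.

```python
import numpy as np

x = np.linspace(-1, 1, 1_001)   # causal variable, spread symmetrically around 0
y = -(x ** 2)                   # y is fully determined by x: pure causation

print(round(np.corrcoef(x, y)[0, 1], 6))  # ~0: the two arms of the parabola cancel
```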
Rob Wiblin: That could show up in nutrition, right? If you had, say, how much saturated fat do you eat, what fraction of all your calories comes from saturated fat? And you could have some golden middle level that is ideal. But if you just did a linear correlation, you’d find no relationship, because, as people have less, it’s bad, and as people have above that level, it’s bad — but on average, people are spread across that and it has no effect.
Spencer Greenberg: Exactly. Although to really have it cancel perfectly, you’d have to have it like it’s just as good on the upside as bad on the downside. But it can happen, or it could mostly cancel.
But what I would say overall, if you find no relationship, I actually think it’s stronger evidence for no causation than if you find a moderate relationship being evidence for there being causation in the way that you think that A causes B.
Rob Wiblin: So you would say, if we had found that there was a strong correlation between income and happiness, then we could say maybe income causes happiness? But if we find that there’s no relationship, no raw correlation between them, then we could say probably there’s not a causal relationship between them?
Spencer Greenberg: It’s a little stronger. It’s a little bit stronger, I would say. It doesn’t mean it proves it. You can have these weird effects like we talked about, and there’s some other weird effects. You could have a weird perfect cancellation. Like you could have a confounding variable that’s some other variable, that’s both making —
Rob Wiblin: I think I want to offer one of these, because there’s a really obvious way in which you could have no correlation here, even though all else equal, money does make you happier. Which is that people face this lifestyle choice between going and getting jobs where often they work long hours in unpleasant work in order to make more money, or do they take an easy job that doesn’t pay very much and work fewer hours and enjoy more leisure?
And it could be the case that people really get a lot of value out of spending money on themselves, but in order to get the money, they have to make the sacrifice of working more in a job that they don’t like. That is a very plausible situation. And in that case, you could just see that people, they choose a different point along this tradeoff, but on average, they’re not really happier one way or another. It just cancels out.
Spencer Greenberg: Right. So if it were the case that people, when they go get money, they take on jobs that are so unpleasant that they nullify the income increase, that would also lead to the same effect. Again, it would have to balance very perfectly. And that’s why you might think these cases are not as likely, because you’d have to get that very lucky, perfect cancellation, you know what I mean? Where it’s like the amount of extra income just is offset by the amount of extra stress. But it’s possible.
Rob Wiblin: Yeah, but if it was really the case that people just did not get anything out of spending more money, you might expect that there’d be a negative correlation, because the people who have more income are working more. There’s surely a correlation between income and hours worked. And if money was truly so useless, then they would be less happy, as you might expect of the person who’s working 80 hours a week in some soulless corporate job. But evidently the data suggests that those folks are at least doing OK, and there must be some reason for that.
Spencer Greenberg: Yeah, it gets tricky. And I think it depends also on what do you actually care about? Do you care about whether, if you were going to randomly give someone money, they’d be happier? Or do you care about the average effects of going and seeking more money? Because if we think of this as the average effects of going and seeking more money, or let’s say successfully seeking more money, that’s a little bit more true to what the research is showing than it is true to if you injected someone with random money they didn’t have to do anything for.
Rob Wiblin: Right. Yeah, I suppose people apply it to different things. And I guess this does suggest that if you’re a typical person, going and changing your lifestyle in order to try to earn more money — to become the kind of person who earns more money — probably delivers gains to life satisfaction, but not really that much gain in moment-to-moment experience.
But I think some people look at this and they say it’s not valuable to increase productivity or GDP per capita, because, look, money doesn’t make people happy. But in fact, if we could, as a society, just produce much more without having to work harder, it might be that that would actually deliver pretty substantial wellbeing gains, because then we wouldn’t have this offsetting effect of needing to work longer hours to get the money.
Spencer Greenberg: Yeah. I think on the GDP question, you get into a huge mess of positive correlations that are hard to interpret — because GDP per capita is correlated with almost every positive thing you can imagine about a country, and we don’t know how that web untangles. To what extent is it the GDP per capita going up that’s causing all these positive things? To what extent is it a consequence of some of these other things? Is it that good governance leads to both GDP per capita increase and wellbeing increases? Yeah, that’s a very complicated one.
Rob Wiblin: Yeah. OK, so that’s pointing in one direction. Actually, there’s something that jumps out to me that points in the exact opposite direction, which is: in general, across people, positive things tend to come together, and that’s because positive things often cause one another in a very complicated web. So even if it was the case that money didn’t really make people happier, being healthy, I think we know does make people happier relative to being in chronic pain or being stuck in bed because you’re dealing with long COVID or something like that. So health makes people happy, and health surely also makes it possible to earn more money, because you have the energy just to go out and get a job. And ill health surely interferes with your career, at least if it’s serious ill health.
And there’s going to be lots of other examples like this — like having had a stable household with loving parents and so on: probably that causes you to be happy through one channel; probably it’s also good for your career, because you’ll have greater confidence, you’ll have had more opportunities to learn as a child, and on and on. So given that you’ve probably got all of these different causal mechanisms that you would think would create a spurious relationship between happiness and income, the more shocking mystery is actually why the relationship isn’t larger.
Spencer Greenberg: It’s a great observation. And honestly, now I think that my best guess is just that there isn’t that strong a relationship between successfully seeking more money and having higher wellbeing. I don’t really understand why that is. I think it’s probably true, but it’s a kind of strange mystery. And it’s also something that we didn’t really know much about as a society until recently. We actually have more good information than we did a little while ago, so it feels like there’s some progress being made, even though there’s a lot of mystery there.
Rob Wiblin: Another thing that could be going on, that might help to unravel why there’s not as strong a correlation, might be that there are personality differences between the kinds of people who care about money, place value on money, and seek money, and those who don’t. These may be broad stereotypes, but it wouldn’t surprise me if the kind of people who feel like they need money in order to be happy might be less happy to start with. They might be different personality types — someone who is more materially focused — and they might indeed need money to be happier, but they’re starting from a lower baseline, because that mentality puts you in a deficit to begin with compared to someone who’s primarily focused on relationships. I don’t know of any evidence for that, but imagining that there are these different types of people, and that desire for money correlates with happiness, I think is imaginable.
Spencer Greenberg: Absolutely. And I think you’re doing a really great job pointing out how freaking complicated this is to actually make sense of. So we have this interesting finding of this quite low relationship, and then it’s like, OK, what does it mean exactly? But I think it at least should update us somewhat, that if you’re going and seeking money as a way to have good wellbeing, we should be a little bit more sceptical of that approach. That doesn’t mean it’s not going to work for you. You might have a situation where it actually will make you have higher wellbeing. But we’ve learned something, I think.
Rob Wiblin: We learned bit by bit. OK, so my training is in economics. I feel like I want to criticise the economics profession here for not doing a better job, because economists have been developing this whole suite of methods that we call the credibility revolution for the last 30 or 40 years — where, in order to actually figure out causal relationships, because we can’t do experiments in a lab that will really pin this down, we have to find natural experiments. And something that would truly be more convincing about this would be to find a case where some people, for random reasons, end up getting a salary increase, versus other people in the same job who, for basically random reasons, don’t.
Spencer Greenberg: Could be like a tax credit or something.
Rob Wiblin: Exactly. Some policy cutoff, like their birthday: if you’re born on the first of February, you get more money; if it’s a day earlier, then not. Or you’re just above the cutoff for a salary increase on some performance metric versus someone who was just below. There, you’ve got randomly allocated additional income, basically. Does that increase your happiness? I feel like I would find that persuasive in a way that I don’t find any of this correlational, longitudinal, cross-country stuff to be very compelling, personally. But someone’s gotta go do it.
Spencer Greenberg: Yeah, that’d be really great. I’ll tell you why it’s so hard to do: because you need experience sampling data. You need to ping people and be like, “How good do you feel right now?” Now, you could use other methods. They’re not going to be quite as good as that. Like, you could ask people, “Think back to your whole day today: How good did you feel?” or something like that. But usually, I would say the most robust way to do this is you ping people at random points in the day, and that is a really big pain in the ass, and probably not the kind of thing economists are doing.
Rob Wiblin: I see. Yeah, I guess it’s the case that most of the natural experiments often involve policy changes or policy cutoffs, and then they look at tax records to get the outcome. But here you’d really have to plan it ahead, because people are not doing random experience sampling throughout the day and reporting that to the IRS through the normal course of events.
Spencer Greenberg: Well, if we just assume that money equals happiness, then it’s easy to study.
Rob Wiblin: Makes it a lot more straightforward.
Spencer Greenberg: Isn’t that what economists love to do?
Rob Wiblin: Maybe there’d be a logarithmic relationship between the two.
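[To make the natural experiment Rob is gesturing at concrete, here is a minimal sketch — not anything from the episode or the literature — of the cutoff comparison combined with experience sampling. Everything is simulated: the cutoff at a performance score of 50, the 0–10 wellbeing scale, the number of pings, and the effect size are all assumptions of the sketch.]

```python
# Hypothetical sketch: people just over a performance cutoff get a raise,
# people just under don't, so near the cutoff the extra income is close to
# randomly assigned. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
score = rng.uniform(0, 100, n)   # performance metric; raise granted at >= 50
got_raise = score >= 50
true_effect = 0.5                # assumed effect of the raise on wellbeing

# Experience-sampled wellbeing: the average of five random "pings" per person
# asking "How good do you feel right now?" on a 0-10 scale.
baseline = rng.normal(6.0, 1.5, n)
pings = baseline[:, None] + rng.normal(0.0, 1.0, (n, 5))
wellbeing = pings.mean(axis=1) + true_effect * got_raise

# Compare people in a narrow band on either side of the cutoff.
band = np.abs(score - 50) < 5
above = wellbeing[band & got_raise].mean()
below = wellbeing[band & ~got_raise].mean()
print(f"just above cutoff: {above:.2f}")
print(f"just below cutoff: {below:.2f}")
print(f"estimated effect:  {above - below:.2f}")   # should recover ~0.5
```

[The comparison of people just above versus just below the cutoff is what makes this a natural experiment; the pain Spencer describes is that the wellbeing column has to come from experience sampling you set up in advance, not from administrative records.]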
Hype vs value [00:31:27]
Rob Wiblin: OK, new topic. You have this concept of hype versus value. Tell us about that.
Spencer Greenberg: Yes. So this is something I’ve been thinking a lot about lately, and it’s partly because it’s a personal failing of my own. So let me lay out this idea of hype versus value. “Value,” I’m referring to intrinsic values — so the things that people fundamentally care about for their own sake, not as a means to other ends. So it could be like your own happiness, the flourishing of your loved ones, being honest, believing true things, learning — things like that. These are, to me, the things that we should as a society and individually be trying to create more of.
Then we have the category of what I’m going to call “hype.” Hype refers to something being cool, exciting, having a buzz around it. I’m also going to put one value in the hype category, which is social status — which is, I think, a genuine human value: people want other people to look up to them and think they are high status. But I’m putting it in the hype category because I think for this analysis, it better fits in hype rather than values.
OK, so now we have these two kinds of things. We’ve got hype and we have value. And we could imagine a coordinate system: you’ve got value on one axis, hype on the other axis. And you could start plotting things in this system.
So let’s start with pure hype. We’ve got something like art NFTs that nobody even likes looking at — just ugly art NFTs, right? These are really not getting anyone any of the things they intrinsically value, except maybe social status — which, remember, we put in the hype category. But they did have, at least for a while, a big buzz around them. They were considered cool by a certain crowd. So they were just pure hype: pure hype, no value.
Then we have things that are pure value, no hype. Let’s say doorknobs. Doorknobs are just really good at what they do. Like, so good at what they do, you don’t even think about it. When you need a doorknob, you buy a doorknob, you’re satisfied. You never think about it again. No hype whatsoever. I’ve never heard anyone rave about doorknobs.
Then we have things in between. I think Tesla would be a really good example. Tesla definitely produces some value. It makes cars that people really enjoy driving. It has some positive environmental impact. It also has incredible hype. Elon Musk is really good at building a sense of excitement and coolness and social status for doing a thing. And obviously, Tesla is extremely successful.
And the reason I’ve been thinking about this is I think there are some things that succeed on pure value, and there’s some things that succeed on pure hype. But I think in reality, most of the time, when things succeed, it’s by getting a combination of hype and value. And I think hype is something that I don’t like, and I have a negative feeling around it. And I think because of that, I’ve underestimated the importance of it to accomplish things in the real world. And, of course, if you’re trying to create hype, you should do it in an ethical way; you shouldn’t be lying or manipulating people. But I think there are ethical ways to help get people excited.
So I view it as a kind of blind spot for me. I also think it’s a blind spot for the effective altruism community, because I think, like me, many effective altruists are like, “Hype, ick. Yuck. Stay away.” But the reality is it often is hype that gets people to do things together. It gets people involved. It gets people excited to actually carry out changes in the world or to get a product to succeed. So anyway, this has been on my mind lately.
Rob Wiblin: But wouldn’t you say doorknobs are succeeding? Doorknobs are kicking ass!
Spencer Greenberg: Yes, the doorknobs are doing well, but now we’re just in the 1-to-n situation, where you get lots and lots of companies that make doorknobs, and they’re kind of undifferentiated. Imagine you make the first ever doorknob, and you have to get it to take off. That’s where you’re in trouble. You’re like, “Look how useful it is!” They’d be like, “What the fuck is that thing? Why do I need a doorknob? I’ve been opening my doors fine without a doorknob forever.” Actually, I know nothing about the origin story of doorknobs. I’m just making that up.
But basically, you imagine you’re in a situation where you’re trying to do something novel. You’re trying to create a change in the world. It’s very, very hard to do that through pure value. It sometimes happens, but I think often you need a certain amount of hype to help get you there. If you’re doing pure hype, then you’re basically like a con man, right? Where it’s a fraud. But often even high levels of value need some hype along the way to help catalyse action.
Rob Wiblin: That resonates with me. Recently we did this episode with Lucia Coulter on the Lead Exposure Elimination Project, and it feels like that’s something that has incredible value generation; preventing people from being exposed to lead is so important. And it’s also now finally generating some hype — people are talking about it more and more; it’s becoming a bit fashionable in the EA community. And presumably that’s going to make it easier to get this stuff done, even holding the level of value generation constant.
What implications does this have? The thing that I’m familiar with, that I’ve kind of always heard, is inasmuch as you need to collaborate and coordinate with other people, there’s a reason to bandwagon on stuff that is not as cost effective, not as value generating, but is easier to get people to take up, because it’s easier to excite them about it. And then you’ll be able to move more resources. You won’t have to… The expression is “shovelling shit uphill.” I think that’s the one that I grew up with. It’s not going to be quite as difficult to get people to be interested in what you’re doing. Is that basically just the bottom line from this?
Spencer Greenberg: Well, I think there’s a lot of pieces to this. One piece is that you can market things in an ethical way to help create hype. And I think a lot of the people who work on effective altruism-type causes, because they find marketing so icky, don’t think about the fact that actually marketing is really important — and that there are ethical, non-manipulative ways to make things exciting through marketing. So I think that is just a blind spot. It’s a blind spot for myself that I’m trying to get better at, and I think a blind spot for people in these kinds of communities.
So an example: let’s say you’re trying to sell a product. You could exaggerate how good the product is. That’s obviously unethical. That’s one way to create hype. But another way is you think about all the accurate ways to describe the product, and then you’ve got this huge set of different accurate ways to describe it. And then among those, you pick the ones that get people most excited. That is a way of ethically building hype.
Or another thing to think about is, do you need to create a movement around this thing to actually create the change that you want? And then what drives people to be so excited that they want to join a movement? And that’s a very different set of forces than if you’re just trying to do the thing itself. If you’re just trying to do the thing, you plug away, you try to eliminate lead. That’s great. But what if you needed a movement to create the change to eliminate lead? Then you have to really think about what causes people to get excited and come together.
Rob Wiblin: OK. The lesson I was talking about was coming from the framework in which hype is somewhat exogenous. It’s just this random thing; people are sometimes excited about things and sometimes not excited about them. But of course, you can also generate hype to some extent by doing good marketing or being a good communicator, explaining things well, and actively taking steps to get people excited.
Spencer Greenberg: Well, I think a good example is Will MacAskill’s recent book, where I think he was able to create a lot of hype around a really good book that had a lot of interesting ideas in it. And he didn’t do that in any way that was manipulative or unethical; he did it through getting journalists interested in writing about it, getting reviews from top people. So he managed to create this buzz where a lot of people were talking about his book in a way that I think was very positive. And if he hadn’t created the hype, I think he would have had much less impact with that book.
Rob Wiblin: Someone who was sceptical of this, how would they push back? I think you’re right to point out many people have this aesthetic distaste for hype.
Spencer Greenberg: Exactly. Which I do too, honestly.
Rob Wiblin: I do too. Yeah. And I think it’s not for no reason, because I guess once you start letting that attitude in where you’re just going to go along with whatever is fashionable at the time, then it’s possible to really lose your focus on value and just start saying things because people are going to like them. I guess you’d have some way to go before you were just creating NFTs.
But maybe, at least if you’re a researcher who’s thinking about cost effectiveness a lot, then maybe you think, “Hype might be important, but it’s not my department. I try to figure out what things in development deliver the most welfare gain per dollar, and I don’t want to then be polluting that by also asking, ‘What’s hot right now? What are people chatting about on Twitter?’ Send that to someone else.”
Spencer Greenberg: Exactly, exactly. And I think when you start thinking about this idea of how do I ethically create hype, there are actually a lot of very specific things you can do.
For example, let’s say you’re writing about your work. Well, what title do you use? A lot of times, the title is what determines whether someone interacts with your work or not. And if you look at marketers, they will often spend a huge amount of time on the title of a thing or the headline, because they know that’s a critical step for someone looking at what you’re doing. So there are marketers that say that they’ll spend 30% of their time just working on the headline — which seems insane, right? But it can be that important for getting something to spread. So that’s just one example: without being dishonest, let me get the perfect headline that gets people excited about what I’m doing.
Another example is, when you present the information, are you going to present a 50-page report, or are you going to present a 50-page report but the first page is a really nice summary that’s really easily shareable and has the most interesting things right up front? So if someone only has one minute, they could still learn about your thing and get excited about it. They don’t have to go read through so many pages to start getting excited about what you’re doing.
Rob Wiblin: Yeah, going with the lead exposure example, our title for that interview was “Preventing lead poisoning for $1.66 per child” — which from a research point of view is insanely oversimplified and probably would irritate some people. I don’t know whether that was a good title, but it puts very front and centre what the value proposition is meant to be.
Spencer Greenberg: Well, at least you didn’t say, “You won’t believe this one weird trick to prevent children from dying!” Look, I think people have a negative reaction to hype for good reason, often — because there’s many things in our world which are pure hype, and they’re really icky and disgusting. They’re essentially tricks. They get people to buy into things that don’t work. They allocate money to people that are lying. And that’s really bad. So the ick reaction is not unjustified, it’s just that there’s a way to do hype ethically and well. And I think it’s important. It’s important when you’re doing something that involves getting people excited — which not everything does, but many things do.
Warning signs that someone is bad news [00:41:25]
Rob Wiblin: OK, new topic. What are some warning signs that someone you meet or someone you know is bad news?
Spencer Greenberg: So we did a little qualitative study where we had 100 people answer the following question just in an open-ended format — they could say anything they wanted — and we asked them, “What signs do you look for that help you identify people who are likely to be untrustworthy, or who are likely to hurt you if they become a close friend or partner?” We then took all their responses and we kind of synthesised them to look at what are the patterns of what they’re saying, where multiple people are saying the same thing. And we broke down each of the things that were patterns into kind of discrete signs to look out for.
I thought it was pretty interesting because I wasn’t sure if I was going to agree with what people said, but I found out that really I did agree to a very large extent with what people ended up producing. And they also got me, through their answers, to think about things that I may not have thought about, but I’m like, yeah, that actually is a pretty good thing to look out for.
Rob Wiblin: What were the headline findings?
Spencer Greenberg: So here are the kind of patterns that emerged. And before we get into them, I will say that almost everyone will sometimes show these patterns. So the idea is not if someone ever shows one of these patterns, they’re bad news. It’s more like, think of it as a continuum: if someone repeatedly shows these patterns to a strong degree, you might question whether they’re a safe person, or whether they might be untrustworthy or hurt you.
So let’s dig into the specific things. The first set of patterns are around things you might call dangerous psychopathy or malignant narcissism. So the things to look out for: you notice that the person seems to be manipulating you or other people. You notice that they’re inconsistent — like, they’ll say one thing one time and a different thing at another time. Or you catch them being dishonest — and again, it could be to you, or maybe you just see them being dishonest with other people. A self-centredness, where they seem much more interested in their own interests than in other people’s. Quick, very intense anger, where they suddenly become enraged. And then finally, lack of empathy.
And I think what this cluster is really getting at are two personality disorders: antisocial personality disorder and narcissistic personality disorder. I will say not everyone with these disorders should be avoided. Like, there can be people who are good, ethical people who have these disorders — especially if they understand that they have these disorders; they’re seeking treatment, they’re working on themselves, and they have other compensating factors that help them avoid some of the dangers of having these disorders. But when you have someone who has these disorders to a strong degree, they’re in total denial, and they’re not working on it at all, it can pose quite a bit of danger.
Rob Wiblin: You should be on your guard.
Spencer Greenberg: You should be on your guard. Just be careful and know what you’re getting yourself into.
Rob Wiblin: There’s a classic idea that on a first date, you should see whether the other person is nice to the waiter at the restaurant. And if they’re a jerk to them, that’s a sign that they maybe don’t care about people who aren’t in as strong a position of power as them. I guess maybe this falls into another cluster you’re about to mention, but it seems like it might fall into that one.
Spencer Greenberg: Yeah. Well, you could see that being lack of empathy, for example. It could be a sign of lack of empathy. That’s just one set of things to look at.
The second cluster is around immaturity. And so this would be things like extreme emotionality. Like the person gets extremely upset over very minor-seeming things. The person seems to avoid topics when they’re upset. So instead of telling you, “That bothered me,” they just won’t talk about it; they’ll shut down. They have really poor communication. They’re lacking responsibility or accountability: maybe they mess up, but they refuse to apologise, or they just won’t take any accountability for what they did. And general poor handling of relationships. Like, if you see they have a bad relationship with everyone else in their life, that’s not a great sign.
And I think this immaturity category, maybe it’s not as potentially serious, but I think it really can be a red flag in relationships. You could get in a really bad pickle, where someone will do something harmful, but then they don’t take responsibility for it. Or they’ll be really angry at you about something: maybe you made a really minor mistake that wasn’t that serious, according to relatively objective third-party observers. But this person’s extremely upset about it, and then they don’t even tell you, and they’re just simmering with rage at you. So there’s a lot of things that can come out here, and actually, I do think it’s a pretty important cluster.
Rob Wiblin: It could make someone a difficult colleague as well, I imagine, if you can’t have frank conversations about what things have gone well and badly.
So that was the second cluster. There’s a third one?
Spencer Greenberg: The third and final cluster is a pettiness cluster. This would be things like they talk negatively about a lot of people, like saying negative things about their other friends to you; gossiping in a way that’s harmful, where they’re spreading information that could hurt people; and extreme judgmentalness, where they’re like, that person sucks because of this little minor defect.
So this category, the pettiness, I don’t think I would have thought of this category, but I do see why it can kind of be insidious, where someone can be causing harm in a social group through these kinds of behaviours.
Rob Wiblin: Yeah, I wonder whether it indicates that people might do that as part of their social positioning. So maybe they’re trying to undermine the status of other people in a group in order to big themselves up in relative terms. I mean, of course everyone has done that at some point in their life, at least once, but it’s maybe not a good thing to be doing very regularly, trying to raise your own position by dragging other people down, rather than delivering value to people around you.
Spencer Greenberg: Yeah, it’s almost like a negative-sum kind of behaviour, where you’re damaging other people’s reputations in a way that clearly we don’t want everyone in society doing that. It seems like that would lead to very bad outcomes.
Those are just three categories to think about. Again, none of them are hard and fast rules; they’re all on a spectrum. But if someone is showing these kinds of manipulative or very self-centred or very sudden rage kinds of behaviours, that’s in the first category. If they’re showing a lot of immaturity — like failure to acknowledge their mistakes, really bad communication, seem to be fighting with everyone in their life — that’s the immaturity category. And then finally the pettiness category.
Just things to be on the lookout for, to help you avoid people that might hurt you.
Rob Wiblin: So I kind of had the reaction of, this is just a survey, right? Is this really a good research methodology in order to figure out what are good red flags that people are looking out for? Maybe we should be seeing more sophisticated papers. What would you say to that? Do you think this is a useful research methodology?
Spencer Greenberg: It’s a good question. I really didn’t know how it was going to turn out. I didn’t know if I was going to be like, “This is junk.” But for me, it actually seems like sound and wise advice when I read it. So that convinces me that maybe there’s something to this.
I suppose you could argue people have a lot of personal experience, and what we’re looking at here is not things where just one person said it; we’re looking for cases where a bunch of people are noticing the same pattern and the same signs and then raising that. So we’re kind of trying to tap the wisdom of the crowd.
Now, I think this kind of method will work in cases where, on average, there is a lot of wisdom that people develop. It wouldn’t work in cases where there’s a lot of systematic bias in people’s perceptions. In systematic bias situations, you’re just going to reflect the systematic bias. But I don’t think any of these categories are that. I don’t think any of these categories are things where you’re like, “No, people are wrong. That actually isn’t a problem. You should just not worry about it.”
Rob Wiblin: “Pettiness is actually great.” Well, I’ll actually go out and say, I think this is an outstanding research methodology for this question, because I think the important thing to notice is that here we’re not even trying to discern causal relationships; we’re not trying to figure out the deep thing that is causing things to go wrong. We’re just asking: things went wrong — what things did you see before that? What would have helped you to predict this outcome? Or what things correlate with it? And if you’re just trying to predict, you don’t need to ask any deeper questions; simply looking at the raw correlation is sufficient. Because even if it’s the case that pettiness itself isn’t a bad quality — that it merely correlates with something else that suggests someone’s going to have a toxic effect on your life — it’s still a great predictor, right? So yeah, I think I’m fully bought in.
Spencer Greenberg: With the predictions, it’s so much easier than figuring out causality. It’s amazingly easier.
Rob Wiblin: Yeah. Do you think of this as a key life skill? On the show before, in previous interviews, I think we’ve sometimes listed the key ways that your life can go super wrong, and said, like, sometimes people really go off the rails, and there’s really only a few ways that happens, and you should be aware of those and steer very clear of them. I guess one of them was to commit a crime, or some other severe wrongdoing against others, even if it’s not technically a crime. Drug addiction. I guess not all drugs are created equal on that; there’s some that have a pretty bad track record. Not treating severe mental health problems seriously, or having a severe health problem and just not getting that addressed at all. Maybe because you’re in denial.
And I would say, actually another category that we haven’t talked about on the show before, but I think is another way to really mess up your life in a big way, is to get very close to someone who was cruel or exploitative, allowing them to become a really close friend or a colleague or a partner, like starting a business with someone, who is really bad news. This can have a very negative effect on your life, and it’s something that people should go out of their way to not have happen.
Spencer Greenberg: I 100% agree. I think a really common reason people’s lives get really messed up is that they kind of attach themselves — through marriage, through working relationships, sometimes even through deep friendship — to people that are very harmful. And someone could be very harmful without being a bad person, so I would draw that distinction: there are some people that are good people, but are still very harmful. But I think learning to notice the signs…
And it doesn’t mean you can’t interact with the person at all. Maybe you could still hang out with them casually, but keeping a level of distance where they’re not so involved in your life that they can ruin your life, I think that’s the key thing. And honestly, I’ve been burned incredibly badly by this, where there have been people in my life that I think are quite harmful people and that have hurt me tremendously.
Rob Wiblin: Yeah, I was going to say I feel like kind of everyone who lives a normal life, or I suppose anyone who’s not extremely lucky, as you become a teenager, as you become an adult, you learn through experience — through bitter experience, often — about these warning signs, and that not everyone is maybe as nice as the people you knew when you were a child. And that some people really are quite toxic. But it does feel like it’s something that it’s very hard to teach people. It’s very hard to find a 17-year-old and sit them down and say, “Here’s a list of things. If someone has these traits, you should really be wary.” It feels like people don’t take that as seriously as maybe they should until they’ve had negative experiences.
And I think there’s a similar dynamic, actually, in business: that people who are early in their careers often don’t appreciate how harmful it is to make a bad hire.
Spencer Greenberg: Yeah, if one of your first employees is a harmful person, that could absolutely be devastating — can ruin your whole business, actually.
Rob Wiblin: Yeah. I feel most people have a story of, at some point they made a bad hire and then they realised how important hiring was, and how important it was to actually go and call references and things like that.
Spencer Greenberg: But it’s also tricky, because you might have a really bad relationship, get burned by someone who’s very harmful, but then you might not update on what are all the signs of being harmful, right? You just over-anchor on the details of that person without seeing the more general pattern. So hopefully a list like this could help accelerate people a little bit. Just getting them thinking about what are the different signs you might want to look out for.
I want to mention one other thing here, which is that I think there’s a type of person that is often actually quite a good person — they’re often altruistic — that can also be extremely harmful. And I wrote an essay about this. I call it “reactive personality.” It doesn’t have, to my knowledge, a real name, so that’s why I gave it a name. The way it works, in my experience — and I’ve known quite a few people like this — is that there are people that tend to get extremely upset by things that wouldn’t upset most people, or would only upset them a very small amount. And there’s nothing wrong with that in itself. I mean, it’s unpleasant for them, but there’s nothing wrong with that.
But it’s the next part that’s the bad part. The next part is when they’re extremely upset by something that is actually fairly benign, they then will distort reality to be in line with their false perception. So they’ll be like, “I feel so upset; this person must have done this incredibly horrible thing,” and then they might go spread rumours that that person did a horrible thing. Or, “I feel so angry, that you must really hate me,” and then they’ll be convinced that you hate them, or things like this.
So this is something that I would just add to this list. If you notice a pattern of someone getting extremely emotional about things that almost nobody would get extremely emotional about, and secondarily they then distort reality to make reality fit this emotion, that would be another thing that I would point to that’s not captured in this qualitative list.
Rob Wiblin: And you think that might be correlated with people actually being nice?
Spencer Greenberg: Well, the people I’ve known who’ve been like this have mainly been nice, altruistic people. As far as I can tell, they weren’t trying to hurt anyone. They actually had good intentions.
Rob Wiblin: It’s not because they were cruel.
Spencer Greenberg: Yeah, exactly. And I think that people like that can learn to be better. Either they can learn to get more control over those initial emotional reactions — maybe they can’t control feeling that initial emotional reaction, but they can step back and use methods like dialectical behavioural therapy or cognitive behavioural therapy or mindfulness-based stress reduction. Or they can learn, when they’re in the throes of that emotion, not to distort reality so much — maybe by waiting until they’ve calmed down to judge what’s happening.
This is what’s called emotional reasoning: when you feel something, you become convinced that it’s reality. “I feel angry, therefore you must have hurt me; I feel sad, therefore you must be going to abandon me.”
Rob Wiblin: When I was making this comparison between the personal life case — where you get close to someone in a personal capacity who turns out to have these traits — versus a colleague, I was thinking, if I make a bad hire and I hire someone who has these personality traits and it works out poorly, I mostly think I’ve screwed up. That’s kind of on me. I mean, maybe they’re also responsible in a sense, but I would largely blame myself.
I think, in a personal capacity, sometimes it’s harder to… That’s not the natural reaction. If someone behaves very badly in your social circle, you mostly think they suck, or that it’s their responsibility that they did this stuff, rather than blaming yourself for having them in your life. And I wonder, because we can externalise the responsibility for the bad actions — and indeed legitimately, if someone’s done something wrong and cruel, then it is their responsibility — whether we can have more effect… We can’t change people very much at all, basically. It’s very hard. The thing you can influence is who I am around, just as you can influence who you hire or don’t hire in a company.
So I wonder whether it’ll be useful, whenever someone is toxic, to say, “Well, it’s bad that that’s the case. But the key thing here is that I am responsible for myself, and I shouldn’t have them in my life, and I need to figure out how to get them out of my life.”
Spencer Greenberg: Yeah, it’s interesting. I wouldn’t advocate people blame themselves if someone mistreats them. I think that can be a bad dynamic. It’s not your fault someone mistreats you. However, there are steps you can take: the wise way to behave is to avoid those situations.
I think a good example of this would be like if someone goes to an extremely dangerous neighbourhood and they’re walking around solo at night where a lot of crime happens, it’s not their fault that they get mugged. However, it would have been wise to take precautions, right? It’s the same kind of thing. It’s not your fault that someone mistreats you, but the wise thing to do is to try to notice these signs.
And I think an exercise that you can do that could be very valuable is to think back in your life to people that really hurt you — people where you kind of regret that you ever became close to them — and think about: were there signs? And what were the signs? When were you aware of them? And then reflect on what the generalisation of those signs is, so that you can protect yourself in the future. My guess is that they will tend to fall into some of the buckets we mentioned today, but there may be other ones that you pick up on as well.
Integrity and reproducibility in social science research [00:57:54]
Rob Wiblin: OK, new topic: integrity and reproducibility in science research, which I think is actually something we’ve touched on in all three interviews that we’ve done before — and I think justifiably so, because it’s super important.
I heard you mention, I think on your own show a while back, that studies for which the methodology is preregistered are now being replicated. I guess there’s been enough of them now that we can do replication studies to see whether preregistration of methodology is working. And it seems like it is working, because preregistered studies are replicating at about the rate that you would expect if everything were going smoothly, which is to say at reasonably high rates.
Spencer Greenberg: Are you talking about preregistration or Registered Reports?
Rob Wiblin: Preregistration.
Spencer Greenberg: OK, so just to clarify for the listener: preregistration is where, before you do a study, you declare in a document exactly what you’re going to do. Like, “I’m going to do this exact thing, and here’s how I’m going to analyse my data.” The idea is to try to bind yourself, so that you can’t do fishy stuff later to get the result you want. However, it’s OK to do different analyses than you planned; you should just acknowledge that in the paper. You should say, “We thought we were going to do it this way. Here’s why we decided not to do that.” That’s fine. So it’s not that you can’t change your method; it’s just that you’re binding yourself to having to say if you did change it.
Registered Reports are a different thing. A Registered Report is where you submit to a journal before you run your study. You submit to them saying, “Here’s what I’m going to do. Please evaluate my paper and accept or reject it. And then if you accept it I will then go collect my data and analyse it.” You’ve already kind of preconfirmed you’re going to publish it. What’s really nice about a Registered Report is it removes the weird incentive to have the results turn out a certain way. You’re going to get your publication whether it comes out positive or negative or whatever you find, so now you don’t have that weird incentive pushing on you. You can just focus on whatever the results really say.
Rob Wiblin: Yeah. Now I realise I’ve been conflating these two things in my mind. I guess they are somewhat related.
Spencer Greenberg: They’re related. Like, Registered Reports involve preregistration — but they’re more than preregistration, right?
Rob Wiblin: Yeah. Tell us, how are they panning out? Have I understood right that it’s making a big difference?
Spencer Greenberg: I can’t recall the exact number I cited. I don’t know what that was. My vague recollection is that Registered Reports actually replicate at pretty good rates. That does seem to substantially improve replication rates.
I don’t know about preregistration. I don’t have a recollection of the effect of that. One thing I will say that’s kind of tricky about preregistration is that the reality is nobody holds you to them. So people preregister and then they don’t do what’s in their preregistration plan sometimes. And then nobody mentions it or says anything. So that’s a little bit tricky.
And we know that in part because we sometimes find that in our replication project, called Transparent Replications, where we replicate new papers coming out in top journals with the goal of shifting incentives — so when a new psychology paper comes out in the journal Nature or the journal Science, we’ll go do a quick replication to see if it holds up, and to make sure everyone knows whether it holds up or not. We find sometimes people don’t follow the preregistration plan. That’s something we ding them for. We’ve got three different ratings we evaluate, and one of them is our transparency rating. So if they didn’t stick to their preregistration plan, they’re going to get penalised for that.
Rob Wiblin: OK, so with Registered Reports, you can’t remember the specific number, but it seems like they’re moving the needle a lot. And this is in psychology or social science in particular, which I guess has had a relatively poor track record of reproducibility of research.
Spencer Greenberg: Well, more and more journals are accepting Registered Reports — in social science in particular, but other fields are starting to adopt these things as well. And I will say social science did, and does, have a big replication problem, but that doesn’t mean that other fields don’t. We just may know less, right? So social science is going through this atoning process. Not every field has gone through that.
Rob Wiblin: Yeah, it could be that actually now it’s maybe above some other disciplines. I saw something recently that I actually haven’t read yet, but it was asking the question, “Does history have a reproducibility problem?” You might ask, how can that be the case? And the argument was most history papers are constantly citing original sources, things in the archives, like things that are generally known, that supposedly happened in the past. But then if you go and look very carefully and see, do the archives, do the logs, actually show what was claimed in this paper, often people are just kind of recycling made-up claims or distorted claims. That would be the reproducibility concern there.
If it does turn out that Registered Reports are replicating at roughly the correct rate, does that mean that we found a big solution, or a massive part of the solution, to the credibility of science? And this is the direction that we should go and this is something that people should be campaigning for?
Spencer Greenberg: I think it is going to be part of the solution. However, I don’t think it’s all the solution for a few reasons.
One is that replication is necessary but not sufficient for good research. So for our Transparent Replications project, we have these three ratings I mentioned: one is on transparency; one is on did it replicate or how well it replicated; the third, though — which I think is really important — I think people don’t really talk about, and we call it clarity. And it’s essentially, did the paper claim things that it didn’t show? To what extent are their claims the things that they actually proved?
And you might think, isn’t that a rather minor detail? Like, of course they’re going to say the things that they proved. But no, not so. Often a paper claims things that it didn’t actually show. And this is, in my view, a huge problem. So even if something replicates, it may not mean what it claims on the tin, and there’s a big incentive to make it seem like you showed something really interesting and important, even if you showed something kind of boring and trivial.
Rob Wiblin: Yeah, I think you mentioned this in our last interview, and you called it “importance hacking.” This is an analogy to p-hacking, where people might do the statistics and the experiments all correctly, but then they’ll find some way of presenting what they’ve done as having much more practical, real-life implications than it possibly does. And this is ubiquitous.
Spencer Greenberg: Yeah, it’s quite common. And I would say Registered Reports doesn’t deal with that issue.
The other thing that I think is important to acknowledge about Registered Reports is that not all research should be working this way, where you can just upfront say, “Here’s exactly what I’m doing; here’s exactly how I’m going to analyse it.” I think we really need to distinguish between exploratory research and confirmatory research. And Registered Reports are really good for confirmatory research, where you can lay out exactly what you’re trying to confirm. For exploratory stuff, it just doesn’t work that way. You want to collect a whole bunch of data, you want to analyse a bunch of ways, you develop hypotheses. You might not even realise what your hypotheses are. You just know that there’s something interesting in that vicinity and you want to explore it.
Rob Wiblin: You mentioned your Transparent Replications project, which you’d just launched around the time that we did our last interview. And I think back then you’d tried to replicate three psychology papers that had appeared in Science and Nature, two of which you did manage to replicate and one you didn’t. How’s it gone in the last year?
Spencer Greenberg: Yeah, it’s good. We have a whole bunch of replications in progress that we haven’t released yet, but on our site we’ve got seven so far. And one of our big priorities has been going faster and faster, with the goal of eventually being able to really exert a strong influence on incentives, ultimately to really say, “This is really great research; everyone should trust this,” and, “Hey, this research didn’t hold up” — and try to make it so that you have an incentive to publish in a different way.
But yeah, so we found a mix. Interestingly enough, a bunch of papers are being dinged substantially for clarity. Like we had one paper that had perfect replicability — five out of five stars on our system — but only one out of five stars on clarity. That’s a kind of a pattern we’re seeing: that yeah, we’re getting a bunch of stuff replicating, but it doesn’t necessarily mean that what they’re claiming about it is actually true.
Rob Wiblin: What was the clarity failure in that case?
Spencer Greenberg: It depends. Obviously, we’re finding different kinds of clarity failures. The one that I’m mentioning is called “Relational diversity in social portfolios predicts well-being.” It’s this interesting idea of: does it matter for happiness how many connections you have of different types, on top of just how much time you spend with people? So some people might have lots of social time, but it’s just with friends. Other people might have an equal amount of social time, but it’s with friends and family and colleagues at work. So this idea of a portfolio of different types of social relationships — this relational diversity — they found that it predicts wellbeing.
And we actually replicated that. We totally replicated their result, which is fascinating. We also found that they did their analysis wrong. But fascinatingly enough, when we corrected everything and kind of worked out all the kinks in it, we did find their effect. So that was the really fascinating thing about it. So we had to give them pretty bad clarity, because what they were saying about their work wasn’t quite correct, but it turned out to replicate.
Rob Wiblin: They were right in the end anyway.
Spencer Greenberg: Yeah.
Rob Wiblin: Were they engaging in some sort of p-hacking situation, where they were changing the analysis to get the paper published in a good journal?
Spencer Greenberg: No, because if they did it correctly, they would have gotten the result.
Rob Wiblin: I see, right.
Spencer Greenberg: I think it was just that they got lost in the weeds on some complex analyses, and luckily enough, we caught it and corrected it. And hey, the result holds up, so good for them. I don’t know.
But there are other kinds of these clarity issues. That one is not typical. That one is like, they messed up the analysis. But there are other kinds. For example, we’ve had cases where they use a really complex analysis, and if you do it exactly the way they do it, you get their result. Great. But then we ask ourselves, “Well, this is a really complicated method. We don’t have a good intuition about what it’s really doing. What’s the simplest way you could validly analyse this hypothesis? What’s the simplest valid analysis?” And we do it that way. And we were like, “Holy shit, all of this important stuff was hidden by that complex analysis.” The meaning of the result is not what you think it is.
Rob Wiblin: Is it possible to explain quickly enough what the difference is?
Spencer Greenberg: Well, I’ll give you an example of a real result we found. Basically it was kind of a fascinating result. They found that there’s three different views about where wealth comes from. So off the top of my head, I think they were like, does wealth come about from corruption? Does wealth come about from hard work? Or does wealth come about from just pure luck? Like, it’s just the lucky who get wealthy. And then they found an interesting relationship, where these three views on where wealth comes from mapped onto three views on policy positions. So they had three different policy positions and they were like, each view is more associated with one of the policy positions than the other.
Pretty cool. Pretty nice, clean result. And in fact, if you do their fancy statistical analysis, you find that that’s the effect. So then we thought, well, what’s the simplest way to validly analyse this? And I’m like, well, what I would do is make a simple correlation table: correlate each of the views about where wealth comes from with each of the policy positions. And what I expected to find, based on their paper, was that each of the views on where wealth comes from would be strongly positively correlated with its corresponding policy position, and not strongly correlated with the other two.
And that is not what you find. That’s not what you find. In fact, one of the views on where wealth comes from has little to no relationship with the policy position that it’s supposed to be associated with, but it has a negative relationship with the other two policy positions. So technically they weren’t lying: it is more correlated with that policy position than with the others. But it’s not actually correlated with that policy position. The big fancy analysis that wrapped it all together meant you just couldn’t see what was going on.
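[For concreteness, here is a minimal sketch of the “simplest valid analysis” Spencer describes — a plain correlation table. The variable names and data are made up for illustration, simulated to mimic the pattern he reports (one view has roughly zero correlation with its matched policy position, but negative correlations with the other two); this is not the paper’s actual data or analysis.]

```python
# Hypothetical sketch: a plain correlation table between each view of where
# wealth comes from and each policy position. All names and data are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
corruption = rng.normal(size=n)   # "wealth comes from corruption"
hard_work = rng.normal(size=n)    # "wealth comes from hard work"
luck = rng.normal(size=n)         # "wealth comes from pure luck"

df = pd.DataFrame({
    "view_corruption": corruption,
    "view_hard_work": hard_work,
    "view_luck": luck,
    # Simulated so that the "luck" view is roughly uncorrelated with its
    # matched position (policy_c) but negatively related to the other two.
    "policy_a": 0.5 * corruption - 0.3 * luck + rng.normal(size=n),
    "policy_b": 0.5 * hard_work - 0.3 * luck + rng.normal(size=n),
    "policy_c": rng.normal(size=n),
})

views = ["view_corruption", "view_hard_work", "view_luck"]
policies = ["policy_a", "policy_b", "policy_c"]

# Each cell is the Pearson correlation between one view and one policy
# position -- the table makes visible what a fancier model can hide.
table = df.corr().loc[views, policies]
print(table.round(2))
```

[In the simulated table, `view_luck` really is “more correlated” with `policy_c` than with the other two positions, even though the actual correlation is near zero — the relative claim holds while the absolute picture is very different, which is exactly the distinction the fancy analysis obscured.]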
Rob Wiblin: I see. Yeah. Do you think, is this a case of academics disappearing up their own rear ends, or does it just make the paper more interesting to use some cutting-edge, very complicated statistical method?
Spencer Greenberg: I think there’s a lot going on here, and I never want to assume someone is doing something dishonest unless I have strong evidence. I think what happens is people learn fancy techniques. Because they’re fancy, they think they’re better. Because they’re fancy, they also get more credit for them, and other people just sort of trust it more and also judge it less — because people reading these papers are experts, but they’re not experts necessarily in the fancy technique. So they read it and they’re like, “Yeah, it seems reasonable. I don’t know.”
You know what I mean? I’m a mathematician, so I’m just incredibly unimpressed by fancy math. When someone shows me fancy math, I’m like, why didn’t you do simple math? Why would you waste your time with fancy math? You know what I mean? I have no respect for fancy math, whereas I think a lot of people are like, “Ooh, fancy math.”
The most extreme example of this I’ve ever seen is this paper you may have heard of. This wasn’t one from our replication project; it’s just an old paper that found that the ratio of positive… I can’t remember if it was positive conversation topics or positive comments to negative comments, but it had this really powerful predictive effect on, for example, how a relationship would go, or who would get divorced. And I can’t remember, it was like —
Rob Wiblin: Oh, yeah. You’ve got to have, like, five positive remarks to your partner for every one negative one. Otherwise you’ll get divorced. I think that was the theme of it.
Spencer Greenberg: Exactly. And I believe it was a master’s student who was looking at this paper, and he’s like, “This doesn’t really make sense to me.” And it actually had fluid dynamics equations in it — in a psychology paper. And so he was like, what? But everyone who reads this who’s a social scientist is going to be like, “I don’t know. I mean, it’s fancy math.” You know what I mean? It’s just completely outside of what they’ve been taught, right?
Rob Wiblin: When people verbally make that claim, you would think that what would appear in the paper is a graph of the ratio of positive to negative comments against divorce probability or breakup probability or whatever, right? It seems like it should be very simple to test, especially as you’re just making a correlational claim here.
Spencer Greenberg: Exactly. And so they did this incredibly fancy math. So this grad student, much to his credit, is like, “I don’t know. This kind of looks like BS to me,” shows it to a physicist, and the physicist is like, “What the fuck? This is a fluid dynamics equation. Why are you plugging this data into it? It makes no goddamn sense.” But everyone bought it and it got a lot of attention.
So this is, I think, a thing that people do not necessarily for bad reasons. But I think it actually obscures reality to use fancy things that we don’t have a good intuition for. It’s hard enough to have a good intuition for linear regression, which is one of the simpler techniques in statistics — and the same with machine learning: it’s hard to really wrap your mind around it and really get an intuition. Once you go beyond that, it’s outside of most people’s intuition.
Rob Wiblin: Yeah, I definitely have the rule of thumb that more complex methods are a bad sign — a very bad sign about the credibility of a paper, and the plausibility that it’s right. But that’s an extraordinary case there, because the claim is so intuitively plausible and so simple to test: that couples that bicker more have worse relationships.
Spencer Greenberg: The crazy thing is, it’s probably right. It’s probably true. The exact number is obviously bullshit.
Rob Wiblin: But what would be remarkable is if the opposite were true, if that were not true.
Spencer Greenberg: The more negative comments you make, the happier your relationship is? Probably not.
Rob Wiblin: Yeah. Well, I have actually heard the claim that bickering, or like people who have the same arguments for many decades, that that’s not actually predictive of divorce — because often you just are in some kind of equilibrium where people disagree and it’s fine, and over time, people make it work out. But that’s almost more interesting than the reverse.
Speaking of honesty, there were a few notable scandals last year where researchers seemed to have completely made up their data and then published very famous, notable, well-covered papers on the back of it. Have you updated at all on how common outright fraud might be in academia from any of this experience? Or do you think these are just events that you would expect, even if fraud was relatively uncommon?
Spencer Greenberg: First of all, there’s this kind of continuum between fraud and just doing fishy statistics. I think the vast, vast majority of scientists have a line they’re not willing to cross. Like, they’re not willing to make up a data point that wasn’t in the study, you know what I mean? They might be willing to throw out an outlier — but it’s a genuine outlier, and it can be valid to throw out outliers, so it’s not ridiculous to do. But I think very few scientists are willing to make up a data point, which is kind of the stuff we’re talking about here.
I defer to the Data Colada folks, who have done incredible research on this. On the podcast, I heard one of them say that he estimates maybe 5% of papers are fraudulent. That’s from memory — I hope I’m remembering it correctly — but that would be something around my guess. My guess would be 3% to 6% of papers, maybe. So I don’t think it’s the biggest problem in science, honestly. It’s very disturbing, and the rate should be much lower than that. But I don’t think it’s the biggest problem by a long shot.
Rob Wiblin: I mean, compared to papers that use fishy methods, that’s very small. But on the other hand, people who are able to just completely make up data points can produce outrageous findings that could… And presumably you don’t make up your data from whole cloth in order to show something as boring as people who argue are less likely to stay friends. You are presumably doing that because you want to show something more interesting. So the more interesting, influential claims might be disproportionately likely to be fraudulent, I would guess.
Spencer Greenberg: You know, it’s funny, because one of the people who has been proven to have committed fraud in social science said that they would first come up with the hypothesis they thought was true and then make up the data. Which is interesting, because it wouldn’t be surprising if some of their effects are actually real; they could just produce the data faster by making it up. If you have no scruples and you’re willing to game the system maximally, you’re just like…
But yeah, it’s an interesting question. I do think that the more a finding surprises our intuition, the higher the probability that the finding is not true. However, the weird offsetting effect is that the surprising findings are the useful ones. So if everything just completely agrees with your intuition, what have you really learned? It’s funny: in that fringe of really surprising stuff, that’s where the most is to be learned, but also where there’s the highest probability that the result is BS.
Personal principles [01:16:22]
Rob Wiblin: OK, let’s push on. One of your approaches to life is to maintain a list of principles that guide your behaviour. Why do that?
Spencer Greenberg: First I want to define what I mean by principles. I think there’s a lot of different words people use that can kind of get mixed together, and I try to be precise about them just to avoid confusion.
So I think of “values” as the intrinsic values, the things you fundamentally care about, that you value for their own sake. A “principle,” to me, is a decision-making heuristic. So instead of having to rethink every decision from scratch, you’re like, “I have a principle, and it helps me make my decisions quickly. It gives me a guideline of how to make my decision.”
And a good principle not only makes your decisions more efficient at getting you to your values — so it speeds you up — but actually makes it more reliable that you reach your values than if you tried to rethink things from scratch every time. So a good principle can help orient you in cases where maybe your willpower wouldn’t be there, or where maybe you might second-guess yourself and not do the thing that’s most valuable.
Just to give you some examples, one of my principles is “Aim not to avoid anything valuable just because it makes you feel awkward, anxious, or afraid.” Because I have that principle, when I’m in a situation where something valuable to do is making me feel awkward or anxious, I go immediately to: yeah, I have to do that thing. The fact that it’s awkward or anxiety-provoking is not an excuse to me, because that’s one of my deep principles. And the thing is, if I try to think about it from scratch every time, not only is it slower, but it’s also easy to talk myself out of that thing.
Rob Wiblin: Can you give us some more examples of the principles?
Spencer Greenberg: Another one of my principles is “Aim to have opinions on most topics that are important to you, but view your beliefs probabilistically. Be quick to update your views as you get new evidence.” Here, if something I think is really important in society or for my own life, I want to form an active opinion on it. So if someone said, “What do you think about this?” I would say, “Here’s what I think” — but simultaneously, I want to be very flexible to new evidence and be ready to adjust my view at the drop of a hat if strong evidence comes in. Not adjust at the drop of a hat with weak evidence, but adjust at the drop of a hat with strong evidence.
So that’s something I aspire to, and I think that’s helpful when someone challenges me. I put a lot of my opinions on the internet, and if someone’s like, “What about this counterevidence?,” that principle helps orient me towards not being so reactive and being like, “Ahh, I’m being attacked!,” but being like, “If they gave me strong evidence, my principle says I have to change my view. So did they give me strong evidence?”
Rob Wiblin: So at clearerthinking.org, you’ve got a tool where people can go through and try to figure out what principles they want to adopt as heuristics for their life. I tried doing that, and by and large, I felt like I wanted to reject the ones that the app was suggesting because the typical principle feels quite extreme. Something like, “Never tell a lie,” “Always treat people with kindness,” “Family comes first” — they tend to be quite strong in one direction.
And it’s interesting that there’s something about the nature of setting principles that kind of pushes you to say, I want to take a strong stand on a difficult tradeoff, rather than saying, “You should balance X against Y. It’s hard to say what’s the right golden middle point to do between them. Everything in moderation.” But maybe I’m making a mistake in being too pragmatic and thinking through too many decisions individually, rather than just saying, generally X is more important than Y, so X.
Spencer Greenberg: I think there’s a tradeoff, right? A simpler principle can be more action-guiding and give you less room for making excuses or second-guessing yourself. A more complex principle can take into account more aspects of the world, so that you miss fewer edge cases. Because it’s not that a principle will be right every single time; it’s that it will be right most of the time, and it will help you be more efficient and help you avoid second-guessing yourself too much, or willpower issues and things like that.
Let me read you my principle about lying, because I try to adapt to the kind of thing you’re saying. I say, “Try never to tell lies. White lies are OK only when they’re what the recipient would prefer.” So I’m trying to say there is some wiggle room. Like, if you go to your friend’s art performance, and they come up to you excitedly, like, “What did you think?” and you actually thought it sucked, that’s a tough one. I’m going to give myself some leeway to be like, if I think this person would rather I express appreciation for their art — they’d rather I lie — then maybe it’s OK.
Rob Wiblin: I guess you’re imagining the hypothetical where you could ask them, “What would you have wanted me to say?” And then wipe their memory of it or something, and then go back and act it out.
Spencer Greenberg: If, in the wiped-memory scenario, they’d rather I lie, then maybe it’s OK. So yeah, I try to add a little nuance there.
I get your point though. The tool gives you some default principles to start with. If you want to find the tool, by the way, it’s called Uncover Your Guiding Principles, and you can find it on clearerthinking.org. It gives you starting principles, but you can refine them: you can make them more specific, you can give them more detail. We start with simple ones because people differ in how much detail they want to add. But the tool also helps you write your own, and it’ll make a really cool visualisation of your principles for you, that you can hang on your wall if you want. That kind of thing.
Rob Wiblin: Yeah. I tried thinking, what principles do I actually already have? And the ones in my mind that I’d say are actually principles for me were things like: 90 minutes of intense exercise every week — ideally more, but 90 minutes is the minimum; go outside every day; no junk food; get enough sleep. I guess you had that under “Take care of your physical and mental health,” which is a more general principle. Is something as specific as 90 minutes of intense exercise a week too specific to be a principle?
Spencer Greenberg: No. It’s maybe not phrased in the way a typical principle is, but I think it is a good principle. Because you’re like, “I only worked out once this week, should I work out?” Yes, you should work out. Your principle says it. It helps reduce that “I’m tired today and I’m not going to do it,” right? And you’re just like, “No, my principle says I do it, so I do it.”
If we were perfectly rational agents that could rethink things through every single time, then maybe we wouldn’t need it. But because of the ways our brains work, I think this actually can be a pretty powerful way to help us make better decisions, despite oversimplifying. That’s the funny thing. We do oversimplify. Maybe this week actually it would be better to do 85 minutes of exercise instead of 90, right? But you should probably just still do 90 every week.
Rob Wiblin: Keep it simple, yeah. I guess you had “Take care of your physical and mental health,” which for me would be a little bit vague, because that would give me too much wiggle room to fall off the wagon on some stuff, I think.
Spencer Greenberg: Yeah, totally. I think for me, maybe I don’t need something as specific as “exercise exactly 90 minutes,” because it’s something that I’m pretty good about doing. It’s just a reminder that you should always be making that one of your priorities in life.
Rob Wiblin: Yeah. And a recent one that I’ve talked about on the show before, especially in discussing how I consume less news now, is the principle: “Don’t consume content that makes you unhappy, unless it’s meaningfully contributing to your ability to make the world a better place, and you’re going to follow through and take the actions that the information consumption is enabling you to take.”
Spencer Greenberg: Ah, because it might help you make the world a better place, but there’s no way you’ll act on it.
Rob Wiblin: It would in principle, but I’m not going to.
Spencer Greenberg: I think it’s a great principle. I mean, it’s too detailed for most people, but for someone like yourself, I think it’s a fantastic principle.
Rob Wiblin: Yeah. Other than that, I guess I’m not sure. Do you always want to be kind to people? I’m not sure.
Spencer Greenberg: Yeah. So mine is “Try to always be kind.” I do try to always be kind. Even when someone’s a bully to me, I try to point out what they’re doing in a kind way, these days. Maybe in high school I was a little more intense. But these days I don’t actually want to hurt the person. I just want them to not bully people.
Rob Wiblin: Yeah. I mean, I suppose pointing out someone’s bad behaviour is kind in the sense that you’re enabling them to potentially improve.
What principles have you thrown out and replaced over the years, if any?
Spencer Greenberg: I think it’s more that they have coalesced to be more specific, and that I’ve gotten more of them. I don’t know that there are any that I actually thought were good that I’ve gotten rid of, which is interesting. If I do get rid of any, that’ll be interesting to note. By the way, there’s one on there which is, “Don’t let anyone of low moral character be a recurring or substantial part of your life,” and that really relates to our earlier conversation.
And I have revised some principles. With that one, someone really called me out on the first version, and I revised it: I added the “recurring or substantial” part. It’s because I actually think it is important to be able to interact with people of low moral character, and to be able to be at an event and have a one-hour conversation with them, et cetera. Making it a rule that you always immediately step out of the conversation the moment you’re convinced of someone’s character is actually not a very good life strategy. So I kind of revised that.
Rob Wiblin: Yeah. How did you decide what principles to include? Is it kind of, you put down candidates that sound intuitive and then you think, “If I did this, intuitively, would this make my life better? If I’d applied this principle in the past, would my life have gone better?” Is there more to it?
Spencer Greenberg: Yeah, I think that’s a really good approach: would my life have gone better? I think what our tool does is it gives you a lot of ideas for principles, so you can read through them and see if any of them resonate. Or even if they don’t resonate, they might inspire you to write a version that resonates more with you. So yeah, I looked through lots and lots of different potential principles. I thought about what heuristics have worked well for me in the past and have benefited me.
Decision-making errors [01:25:56]
Rob Wiblin: As part of Clearer Thinking, you’ve been involved in making quite a lot of tools that are aimed at helping people make better decisions. And I think that’s cut across individual decisions, but also group decisions — like families deciding what to do, or a shared house figuring out how to manage things, or a work team, or an entire organisation. I’m curious to know: what do you think are the biggest or most important classic pitfalls that people or groups stumble into that cause them to make bad calls?
Spencer Greenberg: Yeah, I think it’s really interesting the way group decision making is so different than individual decision making, and I think it speaks to the social nature of our species. A lot of group decision making is people trying to guess what other people want and adjusting for it.
And they do it for different reasons. Sometimes they’re doing it because there’s a high-status group member, and everyone has to look good to that person, or at least not challenge that person, so they’re trying to guess what the high-status person wants and not go against it. You can get this at companies, for example, where everyone’s trying to appease the higher-up person. It can also happen just because people care about each other: everyone’s trying to guess what the other people want, because they want the other people to be happier, or don’t want to upset anyone.
And it really changes the dynamics of group decisions so much. I think the right way to set up group decisions actually depends a lot on the type of decision and what you want to optimise for.
For example, let’s suppose that what you’re trying to do, as a group of a few friends, let’s say five of you, is decide where to go to dinner. For that kind of decision, it’s pretty reasonable to say, “Let’s go somewhere where nobody’s super unhappy.” It would kind of suck to go somewhere where someone has no menu options they can eat, right? So you want a kind of minimum bar. In that kind of scenario, you want a veto system: you want people to honestly say if they’d be unhappy somewhere, and then just pick from the things that nobody would be substantially unhappy with.
That’s going to be very different than, let’s say, you have a committee that’s evaluating where to give grants. In that kind of scenario, if you have a veto system where anyone can veto it, you’re going to end up with lowest-common-denominator risk aversion, where any kind of out-there idea gets rejected by someone, and then you stifle innovation in grants.
So it depends a lot on the kind of setup, and you can kind of tweak the parameters to help optimise a group to be better at the kind of decision.
Rob Wiblin: Do you think people make better decisions as individuals or as groups?
Spencer Greenberg: I think it depends a lot on the type of decision and also the type of group. So obviously, if you’re trying to decide where you want to go to dinner alone, you’re going to be better at that than the group will be at maximising the average happiness of the group, because it’s a much harder problem and there’s an information-communication problem. So it’s a little hard to compare, because they’re not even trying to make the same types of decisions usually.
But let’s suppose it was something like, should you fund this grant? On the one hand, the group will have social biases where maybe everyone’s trying to appease this high-status person. You could also have weird false consensus effects: nobody wants to speak out against the thing, so everyone thinks that everyone thinks the thing is a good idea. And then the group picks a thing, but it’s actually nobody’s first choice, right?
So a group could be worse for those reasons than an individual. On the other hand, one major advantage that groups have is that people in the group catch each other’s biases — and if those biases are idiosyncratic, rather than ones the whole group shares, they cancel out to some extent. People are much better at critiquing other people’s ideas than they are their own. So I think in that way the group can excel.
So it’s complicated, yeah.
Rob Wiblin: We’ve talked on the show with a couple of different people… Vitalik jumps to mind, on interesting mathematical breakthroughs and crypto approaches to trying to get better decision making in groups, and having decentralised collaboration and things like that.
Do you think there’s potential for that sort of stuff to be adopted in organisations? Do you think we’re going to get much mileage out of that research agenda?
Spencer Greenberg: Yeah, it’s really interesting. There are certain kinds of situations, like trying to get all of society to make a decision, where you come up with a kind of voting system. But the reality is, when you have smaller groups at organisations and things like that, the problems are so much simpler than the ones these complex solutions are targeting. They’re things like everyone wanting to kiss up to the boss; or the loudest person talking the most and influencing the group, or the charismatic person — even if they don’t have the highest status.
Whereas these fancy methods are sort of… I mean, they’re targeting some theoretical optimal, but the reality is we’re dealing with a bunch of civilised apes trying to make a decision, right?
Rob Wiblin: Right: the problems are at a very basic, obvious level. Yeah, we don’t need fancy math.
So you mentioned everyone kissing up to the boss, and I suppose the boss not discouraging that. That’s a good one. Are there other very simple failures that you think are common and important?
Spencer Greenberg: Yeah. People talk really different amounts. Extroverts talk a lot more in these kinds of meetings and have a way outsized influence compared to a very introverted person. But that doesn’t mean the extrovert has better ideas at all, right?
I think another really simple, common problem is actually a game theory problem, which is: suppose that you’re in a small group and you have to make repeated decisions. Let’s say you’re on the board of an organisation or something like that. And let’s say it’s a majority vote. If you think the group is going to vote a certain way, opposing the group loses you a bunch of stuff: it loses you some political capital, it makes people feel less aligned with you, it makes people feel like maybe you’re a loose cannon.
So you actually have a game theoretic incentive to just vote with the group, except when you think the group is completely divided, and that’s when you can exert your influence. But this creates an extremely strange effect, where the group almost always seems to agree on everything, even though actually maybe a lot of times people disagree, but nobody wants to show it.
Rob Wiblin: It would also make it very difficult to tell when the group is like 50/50 split, because everyone’s trying to guess what other people think.
Spencer Greenberg: You have a sense of consensus when it’s not really there, and then you think the group has a consensus, so you don’t vote against it, but then also nobody else votes against it, so it’s just really fucked.
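This dynamic is simple enough to simulate. Below is a minimal toy model in Python — purely illustrative, not anything from the conversation: the group size, the coin-flip preferences, the “sincerity margin,” and the assumption that members can read the room perfectly are all invented — showing how strategic conformity can make a genuinely split group look almost unanimous:

```python
import random

def run_meetings(n_members=7, n_votes=10_000, margin=1):
    """Each member privately favours yes or no at random, but only votes
    sincerely when the room looks nearly split; otherwise they vote with
    the expected majority to preserve political capital."""
    conforming = sincere = 0
    for _ in range(n_votes):
        prefs = [random.random() < 0.5 for _ in range(n_members)]
        expected_yes = sum(prefs)  # toy assumption: everyone reads the room
        if abs(expected_yes - n_members / 2) <= margin:
            votes = prefs  # close call: worth spending capital, vote sincerely
        else:
            votes = [expected_yes > n_members / 2] * n_members  # conform
        conforming += all(votes) or not any(votes)
        sincere += all(prefs) or not any(prefs)
    return conforming / n_votes, sincere / n_votes

with_conformity, without = run_meetings()
print(f"unanimous with conformity: {with_conformity:.0%}")  # roughly 45%
print(f"unanimous if all voted sincerely: {without:.0%}")   # roughly 2%
```

Even with coin-flip private preferences, the recorded votes come out unanimous a large fraction of the time — exactly the illusory consensus being described.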
Rob Wiblin: Yeah, I’m not sure how big that dynamic is in reality, at least… I don’t know. Intuitively, to me, maybe 80,000 Hours is —
Spencer Greenberg: I’ve seen it really strongly in board meetings and things like that.
Rob Wiblin: OK, yeah. The thing is, on the one hand, if it seems like 80% of people agree with a given decision, then what’s the point? If you don’t think that you can get past 50/50, then why bother even having the argument? But on the other hand, people enjoy expressing their opinions, and often many people enjoy disagreeing with other people, and they enjoy seeming smart for having insights that might have been missed. Even if they’re on the wrong side of a debate or on the losing side, they might feel like —
Spencer Greenberg: It seems like you hang out with a bunch of effective altruists.
Rob Wiblin: [laughs] You don’t think that this is an important motivator for people, that people get something? It’s not an inherent value, but they get instrumental enjoyment or career benefit out of having things to say, having something to add to a debate?
Spencer Greenberg: Yeah. I think savvy people will test the waters by expressing, let’s say, a minor concern about the thing, but not tipping their hand that they’re actually against the thing. That’s a savvy way to do that. But I think that there are communities — like the effective altruist community, or the rationalist community — where you get much more rewarded for coming up with a clever objection or having a contrary opinion. And I think in a lot of social spaces, you don’t get as much benefit from that: you get more benefit from social cohesion, and there’s more cost to breaking it by bringing up objections.
Rob Wiblin: What would you suggest for an organisation? How could 80,000 Hours make better decisions? Are there any practices that we should adopt that possibly we don’t have?
Spencer Greenberg: I think it depends a lot on the type of decision. When it’s a decision where, let’s say, all the stakeholders will have equally good information and be equally equipped to make the decision, and you expect no one’s gaming the system — like it’s a high-trust environment, and nobody’s going to try to cheat to get their thing through — a really good method can just be to have everyone independently score the thing.
We’ve done this in certain cases at Clearer Thinking. Let’s say we have to evaluate projects, and it’s a situation where we think everyone’s opinion is going to be equally good. Each person independently does the evaluation and puts a score on it, and we only see people’s scores afterwards. We unveil them all at the same time, right? Nobody knows what everyone else did.
What’s really nice about this is it avoids groupthink: we all do our thing independently, so we’re not being too influenced by each other. You could even blind who said what. So if you worry about people being sheepish about being identified with a certain opinion, you can make it so the scores all get pooled together and you don’t even know who gave which. And then that can lead to a discussion of: it looks like on average people thought this was the best thing.
Now, this strategy is going to be less good in a few situations. One, where some people have much more information than others, or are in a better position to predict a thing. It’s also not going to be good in game theoretical situations, where some people are trying to win and are willing to kind of manipulate the system in order to win. It’s not going to be robust to that. Someone could just give a 10 score to the thing that they wanted to win, and a 0 to everything else, and that will kind of game the system, right?
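For the independent-scoring setup Spencer describes, a minimal sketch might look like the following (the rater names, projects, and scores are all invented; pooling and sorting the scores per project is just one simple way to blind who said what):

```python
from statistics import mean

def blind_reveal(scores_by_rater):
    """Collect every rater's scores before showing anyone anything, then
    unveil the scores pooled and sorted per project, so nobody can tell
    who gave which score."""
    projects = {p for scores in scores_by_rater.values() for p in scores}
    pooled = {p: sorted(s[p] for s in scores_by_rater.values() if p in s)
              for p in projects}
    ranking = sorted(projects, key=lambda p: mean(pooled[p]), reverse=True)
    return pooled, ranking

scores = {
    "rater_1": {"project_a": 7, "project_b": 4},
    "rater_2": {"project_a": 6, "project_b": 9},
    "rater_3": {"project_a": 8, "project_b": 5},
}
pooled, ranking = blind_reveal(scores)
print(pooled)   # e.g. {'project_a': [6, 7, 8], 'project_b': [4, 5, 9]}
print(ranking)  # ['project_a', 'project_b'] -- discussion starts from averages
```

Swapping the mean for a median would blunt, though not eliminate, the strategic-scoring problem he mentions.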
Rob Wiblin: Yeah. If people have different levels of information, and I guess we could also expand it to different levels of competence. So some people, we can imagine, have better judgement within an organisation, or more experience. They’ve been around for longer, they know more. It seems like the math would end up saying that you should vote, but then weight some people’s votes higher than other people’s.
But I think it’s very rare to do that. I guess famously the investment fund Bridgewater does something like this, although I’ve heard a lot of scepticism about how good the Bridgewater system actually is, or whether it’s portrayed as it actually functions. But I guess people have this egalitarian instinct, where they don’t like the idea of making it very explicit that this person’s judgement is weighted three times as much as theirs is, and that maybe prevents that from happening, even though that might be optimal.
Spencer Greenberg: I think it’s a good observation. It can be very uncomfortable to give different weights. A solution that we’ve used in some cases is to give different teams the ability to vote on their own area of expertise, but not on other people’s. So maybe engineering is estimating how hard this would be to implement, and maybe UX people are saying how much value it would have for the user. That’s one nice way to handle it. And then there’s a nice justification, because those are the people best suited to do that.
Another approach — and I think this is one that startups often should use — is that there’s someone who’s in charge of the project, and everyone understands it’s that person’s job to decide, but it’s also that person’s job to collect the information from all the other people. Like what does design think, what does engineering think, et cetera. Then they’re going to take all that information, think about what everyone thinks, but it’s their job, and everyone knows it’s their call at the end of the day. But if they’re doing a good job, they’re going to be influenced by different people’s opinions.
Rob Wiblin: Yeah. I actually think of that as kind of a voting laundering system. Where a distributed voting system would be optimal, but there’s something about that that humans find uncomfortable or it doesn’t work for organisations in practice. And it’s really quite uncommon to just bring in everyone who’s concerned and get them to vote and then weight it; it just doesn’t exist.
But you can effectively do that by getting someone to go and talk to the different people, sussing out their opinion, and then they, in their own mind, average across all of the different opinions — and you’ve effectively voted without the voting actually happening and the votes being public and people being able to see whether the vote was accepted or not. I think that works reasonably well, at least if the person doing the collation is good.
Spencer Greenberg: Exactly. It all hinges on the skill of the person who’s in charge of the project. They have to avoid bringing their own biases in too strongly. They have to seek people’s opinions and get people to be honest about their opinions. They have to do the weighting of the information based on who actually knows what and how much expertise the different people have.
Also, often the project leader will have considerations that the individuals don’t have. The project leader might be thinking about the cost of this: maybe they have a budget, and there are going to be tradeoffs between doing this and doing other things that the individual team members are not thinking about, nor should they be thinking about. There could also be political implications. If they’re part of a larger organisation, maybe you have to do a certain amount to prove to the organisation that you’re doing a good job. Proving you’re doing a good job is not exactly the same as just doing a good job: you might have to write some glossy presentations that you otherwise wouldn’t. The team leader realises that has to be part of what they do. But that is part of what being a good team leader is, right?
Rob Wiblin: Yeah. OK, let’s think about individuals. What are the biggest, classic, most consequential blunders that individuals make when making calls?
Spencer Greenberg: I mean, there’s so many, it’s hard to even know where to begin. One that I would point to, because I think it just doesn’t get as much airtime as it should, is that we’re almost never making decisions about things. Like, we’re just doing stuff. By the time you think you’re making a decision, you’ve already raised something to attention — as like, there’s two options here, or three options, right? And this makes sense most of the time, because most of the things are not worth investing your mental energy in. If you’re trying to decide every moment what to do, think about how insane that would be. I definitely don’t advocate that.
However, what I’ve noticed is that people often have a big problem in their life, and they’re not making a decision about it that they probably should be making. And I’ve been guilty of this myself for sure. It’s about trying to notice: wait, there’s something not optimal here, something not going so well, and raise it to the level of a decision. So try to make more decisions, but not about everything: decide about the important things that maybe you aren’t making a decision about.
Rob Wiblin: Yeah, my wife and I play backgammon quite a bit. I suppose many people will be familiar with this, with chess or other games as well, but almost always the biggest, most consequential mistakes you make are cases where there was just an obviously way better move and you didn’t see it; you didn’t consider it whatsoever.
I guess possibly there’s some analogy here that very often it’s because it’s an unnatural move. It’s a move that’s good in this case, but is not how the checkers typically get moved around the board, so you just simply don’t consider it. I guess that would be the case potentially in life decisions as well: a move that could be very good for you, but would be abnormal, you’re just not going to have it as part of your option set.
Spencer Greenberg: That’s a really good observation. And abnormal can mean different things. It could mean it’s not the thing you would typically do. It could also mean it’s not a cultural thing that’s typical in your society. Maybe everyone in your society around you and your culture does the same kind of stuff, but it’s actually horrible for you. So you’re just doing the same kind of stuff, and not realising that you’re even making a decision, right?
Rob Wiblin: Yeah. OK, so one bottleneck is just not even realising that a decision is being made. Tricky though, because obviously we can’t think about most things most of the time, or we would just constantly be analysing and deliberating. So how do you have a good process for even figuring out what are the decisions that you need to be making that would be important? It seems like maybe you almost need to have some explicit stop, where you say, “What are the most important decisions in my life right now?”
Spencer Greenberg: Yeah. To me, one of the really common things that happens is that when we have a problem, we’re very aware of it when it first starts or when there’s a big change in it. But then we get acclimated to it very fast. So one thing that I try to think about is: what are the problems happening in your life that maybe you’re so used to, you don’t even view them as problems anymore, but if you stepped back and looked at them fresh, you’d be like, “Oh wait, that’s a problem”?
We all see examples of this in really little things. Maybe there’s a hole in your counter or whatever, and at first it’s annoying, but then you get so used to it that you just don’t fix it for years, right? Really, you should have fixed it right away. But the second-best thing would be to notice it today and get it fixed, instead of just working around it.
And I think we do that for much more serious things. Like someone who has significant depression and they’re just used to being depressed, and they kind of forget that there’s anything else you could be, because it’s been so many years. And it’s like, maybe you should be making a decision around your depression, and actively engaging with what you want to do about it.
Rob Wiblin: Yeah. So not realising that there’s a decision at all, a big potential failure. What are some other ones?
Spencer Greenberg: I think another big one is that people will accept one framing of a problem. When a friend comes to me with a decision, and they want to discuss it and want my thoughts on it, very rarely am I trying to give them a really specific answer, like, “I solved your problem.” What I’m often trying to do is give them other ways of thinking about what they’re doing, or give them different framings.
And I think this is a powerful thing we can do for ourselves. Sometimes the framings are more about… We make it too binary — like “I either quit my job or I stick with my job” — and we don’t think about, “Maybe I could switch roles at the same job, or I could renegotiate details of my role” or other things like that.
So sometimes that’s where we’re stuck on framing. But sometimes it’s just coming at the problem differently. A classic example of this would be someone who’s been working on a project for a long time and feels really trapped by it. And someone says, “Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be in exactly the state it’s in now. Would you join?” And they’d be like, “Hell no!” It’s a reframe. It doesn’t mean you should definitely quit, but it’s a reframe that gives you a new way of looking at it.
So I think this can be quite powerful: we get stuck in these frames on decisions, and asking ourselves, “Is there another way of looking at this?” And sometimes talking to other people can be a really helpful way to get those reframes, but sometimes we can generate them ourselves.
Rob Wiblin: Yeah. Any other big ones?
Spencer Greenberg: There’s so many that I literally have this massive file of decision-making failures. I’ve been documenting them for years. So when you say, “Are there any others?” it just makes me laugh.
Rob Wiblin: Yes. That’s a bad question. I was expecting that you might say just considering too few options, which is kind of a variant on the previous ones.
Spencer Greenberg: Well, I worry about being redundant, because I feel like… How many episodes have we recorded together? Is this our fourth?
Rob Wiblin: This is the fourth one.
Spencer Greenberg: I have this feeling of déjà vu that I might have said that before. So I’m trying to be mindful.
Rob Wiblin: We have mentioned it on the show before, but I think there is research suggesting that probably the single most useful thing you can do with an individual decision is to spend more time brainstorming options, because so often we consider only a very narrow set. It’s one of the things we put prominently on our website, at least.
Spencer Greenberg: Yeah, it’s something called “narrow framing” — where you frame it as like, “I either do A or B,” but maybe there’s a C and D. This is another thing that I try to do when I’m talking to someone else who’s struggling with a decision. I’m like, “Are you sure you can’t just get everything? Are you absolutely sure that this is a tradeoff?” And really thinking through with them if there’s some other option that is just strictly better than the ones they’re considering. First, get that out of the way. Make sure you’ve actually… Because this is a truism — but I think it’s a really powerful truism — that you can’t do better than the best option you consider. It’s just a bound on how well you’re going to do.
Rob Wiblin: Yeah. So I feel in my own personal decision making, I’m impatient, and maybe lazy to some extent. And very often with decisions, I just really want to do things quite quickly. And I’ll start writing a doc about it, but I find myself wanting to… The idea of doing a very exhaustive, like, dozens of pages analysis of what I should do as my next career move is just so painful to me that I’m never going to do that.
So in practice, what I do is something very brief, where it’s like I write down what I think are the important facts, then maybe I’ll do a bit of dot-point analysis of what I think of as the key arguments and considerations, like back and forth. Then maybe I’ll talk with someone about that, come back to it, sleep on it, and then make a decision.
Do you think I’m leaving much value on the table by doing that, even with things that are reasonably important?
Spencer Greenberg: I think it’s tricky, because there’s definitely such a thing as overanalysing or getting stuck in the details too much. And you know yourself, right? Maybe for you there’s a big cost to going too deep in it, and maybe you’re avoiding some bad thing by cutting it off. So I don’t know, for your personal situation.
I do think that there are some decisions in life that are both so important and so tricky that they’re worth taking real caution with and really taking our time. There may be ways you can do that deeper analysis that feel less unpleasant to you. For example, I don’t know for your particular case, but some people find it a lot easier to pair up with someone and do some of those things. Like instead of doing a big spreadsheet yourself, maybe you have a friend you talk it through with, and maybe that has a similar purpose, but feels more palatable. So there might be ways to get some of those benefits.
Rob Wiblin: Yeah, I guess just talking with people is a lot more pleasant. And I feel like that’s in practice how I end up making a lot of decisions, is just through conversation. Things implicitly emerge without you having to do necessarily such a formal thing.
Do you think groups spend too much time deliberating on decisions? Because on the one hand you’ve got issues with maybe the boss just makes a big call without talking with anyone. On the other hand, folks hate meetings, and they hate decision paralysis where they can’t take action because decisions haven’t been made. Do you think that there’s an average bias?
Spencer Greenberg: Well, it’s interesting, because in a typical startup environment where there’s someone in charge that’s going to make the decision ultimately, that tends to be an efficient decision-making methodology, right? Like ultimately it’s in the hands of one person, and when they say we’re done with the decision, they’re done with it.
I think where groups tend to get hung up is when they use other mechanisms of decision making that are like, let’s discuss it until we reach a consensus. Then groups can go around endlessly. Look at juries: if a jury deliberates and comes up with an answer in three hours, that’s amazing, right? That’s really fast. When you really need “we all agree,” that can be an incredibly slow process. In some cases it’s warranted. Like if it’s deciding when someone gets a death penalty, OK, that’s good. It should take a long time. But in smaller decisions, that can be a huge pain in the ass.
Rob Wiblin: Yeah. So maybe the key thing is making sure that you’re spending the most amount of time on the decisions that are most consequential, and trying to move more quickly on things where the stakes are not nearly so high.
Spencer Greenberg: Absolutely. And then when it comes to group decision making, really think about what are the pitfalls of this type of decision: Are we worried that we’re going to be too risk averse or too risk taking? Are we worried that we’re going to make the decision too slowly, or that we’re going to not consider enough angles? Are we worried that individual people will be biassed, or are we worried that everyone’s going to just copy each other? And then from the dynamics you’re worried about, you design the decision-making setup.
Rob Wiblin: To address that particular failure.
Spencer Greenberg: Exactly. Do you blind everyone so nobody knows what everyone else said? Do you put one person in charge? Do you use a formal voting system? They’re all solving different problems, essentially.
Rob Wiblin: For people who are interested in this topic of decision making, are there any particular tools or things that people should go read on Clearer Thinking?
Spencer Greenberg: Yeah. So if you go to clearerthinking.org and you click on “All Tools,” there’s a checkbox where you can limit it by types of tools, and there’s a set of tools on decision making. So you can click that box and you can see all our tools on decision making.
Lightgassing [01:49:23]
Rob Wiblin: OK, new topic. What is “lightgassing”?
Spencer Greenberg: Lightgassing is a phrase I came up with to describe a phenomenon that I kept encountering but I’d never had a word for.
It’s kind of the opposite of gaslighting, so why don’t we start with what gaslighting is? It comes from an old film where a man would mess with the gaslights in the house, but then trick his partner into thinking that he hadn’t done it, so she started to doubt her own senses and her own sanity. So gaslighting is when you deny someone’s senses or deny their reality, so that they start doubting their own senses or sense of reality.
Lightgassing, on the other hand, is kind of the opposite of this. The way it works is that sometimes when we’re dealing with someone who, let’s say, is upset, they might say something that we really don’t believe is true, but they want us to reinforce that thing because it’s really deeply important to them.
The most classic example of this would be with someone who just had a breakup, and they’re talking about what an asshole their partner was. But maybe you don’t think their partner is an asshole at all. But they’re giving all this social pressure for you to tell them that, yes, their partner was an asshole. And this is kind of the opposite of gaslighting. Because whereas gaslighting is getting someone to doubt their real sensory perceptions that are actually true, lightgassing is when you’re actually reinforcing false sensory perceptions.
To me, this came about most dramatically in a situation where one of my loved ones was dealing with very severe mental health challenges and was experiencing actual delusions, like actual straight-up delusions about what was true. I realised I was in a very strange situation, where there was a lot of pressure to reinforce their delusions that I knew were false, and I still wanted to be supportive to them, but I started feeling very uncomfortable at this idea of reinforcing their delusions. But if I didn’t reinforce it, I felt like it was going to make them upset or angry.
Rob Wiblin: Yeah. It’s been interesting to me to see how this term gaslighting has become more and more common. As originally construed in the movie, it’s a really extreme behaviour, where you’re a Machiavellian actor contriving crazy situations in order to get someone to think they’re going mad. Which I would say, to a close approximation, almost never happens; I’m sure some people do that, some really evil folks. But these days people seem to use gaslighting to describe situations where you disagree with someone in a way that causes them to question whether they’re right, regardless of whether you have any intention to make them think they’re crazy.
But either way, gaslighting seems probably unusual. But I would say, by contrast, lightgassing is ubiquitous. It’s almost like the default in a conversation is, if someone says something and you disagree with them, you just kind of go along with it. Would you agree that this is basically the bread and butter of human conversation much of the time?
Spencer Greenberg: Well, I think it’s really useful to think of both these things on a spectrum. So for gaslighting, there’s an extreme form where you’re literally purposely manipulating a situation in order to make someone feel crazy. And I actually have seen cases of this. Someone I knew growing up really mistreated their girlfriend. I asked them about this once, and they told me that there were certain things they were doing to try to make their girlfriend think she could never be loved by someone else, so she would never leave them. And I was like, wow, that’s completely insane. But how knowingly they were doing it really blew me away. So it’s not that it never happens, but I’d agree it’s quite rare.
Rob Wiblin: What sort of person is so conniving that they do that, but so unconniving that they would just tell you that they’re doing that?
Spencer Greenberg: I think it’s interesting how people who are not very good people sometimes don’t really have a very good sense of how their behaviours will be perceived by others.
Rob Wiblin: Maybe they think that their behaviour is more typical, or they overestimate how much everyone else is thinking the same way they are?
Spencer Greenberg: Yeah, I think that’s right. And we all kind of overestimate how much everyone else thinks like us. I think also there’s this very strange thing where people sometimes are proud of their manipulations, especially people who lack empathy. And they view it as sort of like, “Look how clever I am.”
Rob Wiblin: “Look how clever I am.” Yeah. All right. Yeah, I interrupted you, but maybe this sort of really evil gaslighting is more common than I imagine. But anyway: lightgassing.
Spencer Greenberg: And I will also say, I think the phrase gaslighting gets overused. It gets used for things that really aren’t gaslighting, where really it’s just that someone disagrees with you. It’s fine for someone to disagree with you; it doesn’t mean they’re invalidating your sensory perceptions. But sometimes there are certain kinds of disagreement that are an invalidation. For example, let’s say I tell you, “Rob, I feel really angry about the thing you said to me,” and you were like, “No, you don’t.” That would be a form of gaslighting, because what you’re saying is that I don’t feel angry.
Rob Wiblin: Yeah, OK.
Spencer Greenberg: For lightgassing, again, it’s a spectrum. And I agree with you: very mild lightgassing is probably the bread and butter of conversation. People often say things we disagree with, and many people just kind of nod along or say “uh-huh, uh-huh” as though they agree. Sometimes they even go further and pretend more actively to agree, not just nod along.
But then there’s a much more extreme form of it, where someone is really trying to get you to agree to something specific that’s maybe even harmful for them to believe. It’s like a false belief they have that might be harming them, and you feel pressured into agreeing with it.
Rob Wiblin: So you got into thinking about this because a friend of yours was having delusions, I guess. Were they suffering psychosis or something similar to that?
Spencer Greenberg: Yeah. Something in the psychosis spectrum.
Rob Wiblin: And what did you conclude about how one ought to deal with these situations? I suppose maybe we should distinguish the cases. There’s the case where someone is having active delusions because of a mental health problem, and I guess also people with dementia. I think that’s a very common situation that carers have to deal with, although the dynamics are somewhat different at that stage, I guess.
And then there’s the case where someone has had an interpersonal conflict that they feel incredibly strongly about, and they really want people to agree with their interpretation. Breakups would be one example, but there are others as well — conflict at work or something like that. And they’re going to find it very difficult if people disagree, but it might be quite important for them to find out if they’re misunderstanding the situation, because they could end up taking actions that are quite harmful if they’ve got the wrong end of the stick.
What advice would you have for people about how to actually act in these cases?
Spencer Greenberg: First of all, I’ll say we’ve written an essay on our website, clearerthinking.org, if you want to check it out and dive deeper into this. But I will say, fortunately, the kinds of strategies you use are actually similar whether it’s a really extreme case — like someone experiencing delusions — or a more mild case, where maybe someone’s just really angry at their ex-partner or something like that.
What I try to do is validate the person’s feelings without validating false perceptions they have. And that doesn’t mean you tell them they’re wrong. If someone’s upset, it’s usually not appropriate to be like, “You’re wrong about X, Y, and Z.” That’s probably not the right time. But you can still be there for them. You can show them compassion, you can tell them you care about them, and you can validate the feelings they’re feeling without agreeing to the specific factual errors they’re making.
So an example if the person is delusional: let’s say they think someone’s coming after them, which is not true. You don’t have to be like, “Oh no, someone’s coming after you. That’s so scary!” You can say, “That sounds like a really frightening experience.” So you’re kind of saying, “Given that you think someone’s coming after you, that makes sense that you’re really scared. I’m here for you. I want to help you.”
Rob Wiblin: So in your experience, if you validate people about how they’re feeling about the situation, but you don’t necessarily agree with their factual interpretation of what happened, people find that sufficient? They don’t really actively try to pin you down on whether you think they’re right?
Spencer Greenberg: Well, I think partly if you’re giving people what they want in the scenario — you’re giving them empathy, you’re showing that you care about them, you’re listening openly and curiously — a lot of times they feel satisfied, and they don’t necessarily care about you literally agreeing to everything they’re saying.
But if they do pin you down, I think a useful thing you can do there is become curious, and say, “I don’t know that much about this. Could you tell me more about this? Tell me more about why you feel this person’s an asshole. I really want to understand it.” And be open minded that you could be wrong. Maybe the boyfriend really was a total asshole. And maybe this person who is experiencing delusions is actually right, and there is someone following them. You never know.
Rob Wiblin: Yeah. And if you end up concluding that they’re mistaken, that their interpretation of things is really off base after asking all these questions, it seems like you could face a tricky decision, basically. Because the most interesting lightgassing cases are those where, unlike many bread-and-butter white lie situations, there could be a big gain to the person if they were able to realise their mistake — because they might be about to quit a job or do something quite hostile because they’ve interpreted events one way.
But on the other hand, there could be a very large cost for you personally, or it could be very upsetting to the person, or it could cause them to cut you out of their life because you’re challenging them on something that, because it’s around an emotionally charged issue, it’s extremely important to them. And they really maybe feel alienated having friends who are telling them that they’re wrong about something that’s so central to their identity and life at that point.
Maybe this is just a very difficult tradeoff to navigate in general, but do you have any advice?
Spencer Greenberg: I think at the end of the day, it comes down to a conflict between your values. So on the one hand, what’s wrong with lightgassing, first of all? Because we didn’t really talk about that.
I think from my point of view, there are two big things wrong with it. One is it actually can be a disservice to the person that you’re doing it to: you can be reinforcing false beliefs that actually are going to cause harm for them. I think you have to be really careful about that. Like reinforcing someone’s delusions or even reinforcing someone’s misperceptions about their social life actually can have a real cost for that person later.
The second thing is that it’s inauthentic, where it’s essentially a form of lying. And if you have a value around honesty, you’re kind of violating your own values. Now, if you are genuinely in a situation where this person is going to be really hurt unless you lie to them essentially about what you think, we’re in a values conflict, and you have to think about, “There’s values at stake that I care about. One value is like helping my friends see things clearly to help them in the future. Another value is being honest. Another value is not causing harm to them or not causing them pain.” And you’re just going to have to think about how much you value each of those things in that moment, and do the best to take the action that produces the most value according to your value system.
Rob Wiblin: Yeah. Do you think on average, people probably err on the side of agreeing too much or too little?
Spencer Greenberg: Too much. Too much. I think people do it without even thinking about it. It’s such a natural, automatic behaviour. And I think that people get trained that it’s dangerous to disagree — in part because often the people who they see disagree are the disagreeable people who don’t give a shit, right?
But if you’re just talking about everyday, ordinary conversation, there’s very nice ways to disagree, where you say, “Oh, I’m not sure I believe that. Can you tell me more about your thoughts on that?” Or, “Tell me why you think that. Because I thought that things were different.” Or even, if you want to be very gentle, you can say, “Some people say this other thing. What do you think about that?” You know what I mean? You don’t even have to identify yourself as the person who believes the opposite.
I had a funny experience. I was at a party a few months ago, and someone who I’d just met was very excitedly telling me about their astrology practice, and how they love astrology and all this stuff. And at first I was just nodding along, because I was just being polite, sort of in an automatic way. Then I had this thought cross my mind: “I don’t believe what this person is telling me. And I’m just nodding, and it’s not authentic.”
So I said to them, “I have a question for you. If it turned out astrology doesn’t work, would you want to know that? Or would you want to believe it works, even if it doesn’t work?” And they responded really nicely to the question. You could imagine someone not liking that question, but they responded really well to the question. They thought about it for a moment and they said, “No, if it didn’t work, I would really want to know.” And that, to me, it was a really positive shift in the conversation. Then we got to talking about, how do you know if things are real? And could astrology be scientifically tested? What would that look like? And to me, it was a much more fulfilling conversation, and hopefully for them as well.
So anyway, there are ways to do this. Obviously, there’s some skill involved, some social skill involved. And if you don’t have that social skill, maybe you start just by agreeing, but you can kind of begin to push the envelope in little ways.
Astrology [02:02:26]
Rob Wiblin: You bring up astrology. You actually ran an enormous test to see whether astrology works or not, right? Was that the result of this interaction, or just coincidental?
Spencer Greenberg: It was a bit coincidental. We were doing a series of studies on personality, and I thought, hey, you know what would fit into the study perfectly? If we just threw in astrology as well. I will say it’s just one form of astrology: the simple sun sign astrology, also known as zodiac sign astrology, where everyone’s assigned one of the signs, like Pisces, Aries, et cetera. That’s much simpler than the full system of astrology that some people practice. But because we were already doing the study, I was like, why don’t we collect people’s zodiac signs? That will enable us to test some things about astrology along the way. And so that’s kind of how it originated.
Rob Wiblin: We’ll keep people in suspense about the result. So basically, you had all this personality data, and then you got people to tell you their star signs. And then did you collect other data as well, and then see whether any of these things lined up with astrological star signs?
Spencer Greenberg: Exactly. So we collected 37 different “life outcomes” about people. So things like, how many close personal friends do you have? What’s your education level? What’s your income? Have you been arrested? All these different facts about a person. And we chose them because we thought they were things that people might care about predicting. Like, they could be interesting to know about a person, right? And we wanted to really cast a wide net.
So we picked 37 of these things, and then we said, let's see how predictable these things are. And we can do it using personality. So we did it using a Big Five personality test we developed (the Big Five being the gold standard framework in academic personality testing), where you give each person five scores, with the acronym OCEAN: O stands for openness, C for conscientiousness, and then you have extroversion, agreeableness, and neuroticism.
So we get five scores for each person, and we try to predict, using these five scores, each of these 37 life outcomes. And then we also try to do it with astrology, where we take their zodiac sign. We represent it as a 1 if they have that zodiac sign and a 0 if they don't, so we have this vector of 1s and 0s with a single 1 for each person, and we try to predict each of these 37 life outcomes. So the method we used for testing personality and testing zodiac signs was exactly the same: we ran a linear regression, trying to predict each of these 37 life outcomes using each of the two methods.
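To make the setup concrete, here's a minimal sketch of the kind of pipeline described, assuming Python with scikit-learn and entirely made-up data; it is not the study's actual code. The Big Five version would simply swap the one-hot sign matrix for each person's five OCEAN scores.

```python
# Illustrative sketch only: one-hot encode sun signs, fit a linear model,
# and score it on held-out people. The data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
signs = rng.choice(
    ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo", "Libra",
     "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"], size=n)
income = rng.normal(50_000, 15_000, size=n)  # a stand-in "life outcome"

# Each person becomes a vector of 0s with a single 1 in their sign's slot.
X = pd.get_dummies(pd.Series(signs)).to_numpy(dtype=float)

X_train, X_test, y_train, y_test = train_test_split(
    X, income, test_size=0.3, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print(r2_score(y_test, model.predict(X_test)))  # near or below 0: no signal
```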
Rob Wiblin: You mentioned that you were using star sign astrology —
Spencer Greenberg: Sun sign.
Rob Wiblin: OK, sun sign. I must admit I’m not across all the different branches of astrology. But how did you make a decision about which type of astrology to use? Is that regarded as the most legitimate astrology going?
Spencer Greenberg: No, it’s not. But I would say it’s the most common one that’s referenced. And quite amazingly, in surveys of Americans, about one in three Americans say that they at least somewhat agree — somewhat agree or higher — that someone’s astrological sign, like Gemini or Pisces, accurately describes their character and personality traits. So it’s not that it’s the one that professional astrologers most like. It’s definitely not — they use more complicated ones — but it’s sort of the common denominator that most people think of, and that a lot of Americans believe in.
Rob Wiblin: So what did you find?
Spencer Greenberg: So first, to see that the method works, what we can do is run this linear regression on fake zodiac signs. So we assign each person a random zodiac sign that's not their real zodiac sign, we run this linear regression, and we try to predict these 37 life outcomes. And what we find is we're able to predict one out of the 37 using this method. Now, of course, that's a false positive — because we used fake zodiac signs — so it gives you a sense of the false positive rate: about one in 37.
Then we run it on real zodiac signs, and we find we’re able to predict zero out of 37, which was just a fluke. Like, it could have been one, it could have been zero. But basically, yeah, none of these 37 life outcomes were we able to predict using people’s zodiac signs.
Finally, we run it using the Big Five personality scores, and we find that we're able to predict about 22 out of the 37 life outcomes with a decent level of predictive accuracy.
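A useful property of this design, as a sketch: if you plant a real sign effect in synthetic data, the same pipeline finds it, and shuffling the sign assignments (the "fake zodiac" control) destroys it. This is illustrative code, not the study's, and the planted effect is an assumption made purely for the demo.

```python
# Demonstrates the fake-signs control on synthetic data: real labels recover
# a planted effect; shuffled labels give roughly zero out-of-sample R^2.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, n_signs = 2000, 12
X = np.eye(n_signs)[rng.integers(0, n_signs, size=n)]  # one-hot "signs"
y = X @ rng.normal(size=n_signs) + rng.normal(size=n)  # planted sign effect

# Real assignments: cross-validated R^2 comes out clearly positive.
print(cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean())

# Fake assignments: permuting rows breaks the person-sign link, so R^2 ~ 0.
X_fake = rng.permutation(X)
print(cross_val_score(Ridge(alpha=1.0), X_fake, y, cv=5, scoring="r2").mean())
```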
Rob Wiblin: Yeah. How did astrology fans react to this?
Spencer Greenberg: Well, I have to say, from memory, this is probably in the top three most hate-filled responses that I've gotten to a post.
Rob Wiblin: I’m shocked to hear that.
Spencer Greenberg: The first kind of critique we got was from people who hate astrology. And they were like, “Why on Earth would you waste so much time studying astrology?” I’m like, I’ve done much more wasteful things than this, and nobody seems to get mad at me about them.
And then there were a whole bunch of astrology people that were angry about it. Some of them just pointed out that this is not the kind of astrology that they, or more sophisticated consumers of astrology, believe in. But we acknowledged that in the post. We said that all along.
The other thing that they would critique is the methodology. And I think our method was quite unusual, because it's really not a standard method from statistics. We're not using p-values. We're really doing a method from machine learning, and we're saying, let's see what we can predict. Can we use astrological signs to predict things? And our answer: no, we can't.
Rob Wiblin: Right. This is the main methodological question that I had after reading it, which is: you’ve got this enormous table of all of the star signs and all these different outcomes, and it’s just all 0s, basically, through the entire thing. And I imagine because you’re testing many hypotheses… Because it was every star sign against every one of these outcomes, right?
Spencer Greenberg: Well, not quite. So what we did is we trained a linear regression model on a binary vector — where there's a 1 for when someone has a given star sign and a 0 when they don't, right? So if someone is a Pisces, they'll have a 1 in the Pisces slot and a 0 in all the other slots. If they're an Aries, they'll have a 1 in the Aries slot and 0s in all the others. And then we do a linear regression on that to predict these 37 life outcomes one by one.
And we do this to make it exactly analogous to the Big Five personality prediction, where, instead of 1s or 0s for the star sign, we get their scores on each of the Big Five personality traits. So each person gets five numbers, and we do the exact same method.
Rob Wiblin: I see. But I guess you're using some kind of cutting-edge regression analysis here. It sounds like this is not a standard method, and you'd have to be using some approach to avoid the multiple testing problem — where you're testing so many hypotheses that you have to raise the bar for considering a result significant, because otherwise you'll get lots of false positives just by virtue of having asked so many questions, basically.
And the fact that you got basically 0s everywhere across the entire board made me wonder: if this method is novel, is it possible that you've screwed it up? And actually, it would not be possible for it to find positive results, at least for modestly sized effects?
Spencer Greenberg: Yeah. So there’s always a possibility of screwing it up. But I think what confused a lot of people is that they’re thinking in statistics, and this is not statistics. So if you’re doing this statistics, you’re like, we did a whole bunch of these correlations, and then you’ve got p-values. But then you’re going to have too many false positives because you have so many different correlations, so then you need to do some kind of correction, like a Bonferroni correction, to correct for all the different correlations you’re computing.
Rob Wiblin: Yeah, that’s how I’m imagining it.
Spencer Greenberg: Exactly. This is just totally different. It's a predictive paradigm. So this is why we tested it: when we assigned zodiac signs at random, fake ones, we find about one out of 37 false positives. So we showed that if you do it on random data, where we know there's no predictive relationship, you find about one in 37 outcomes positive. Then we do it on astrology, and we got zero. It could have easily been one; it was just a fluke that it happened to be zero instead of one. And then we do it on the Big Five, and we find 22 out of 37 predictive. So it's just a different methodology.
And you wonder, why are there so many 0s in our table? This also confused people. The reason there are so many 0s in the table is that the 0 doesn't mean literally that there's no relationship. The thing is, in a predictive paradigm, you never want to predict on the data you trained on, right? Often in regular statistics, you will predict on the data you trained on: you train your model and you're predicting on the same data. In a machine learning paradigm, that's an insane thing to do. What you care about is predicting things you've never seen. So what we do is we train on some of the data, and then predict on data from people we haven't looked at yet.
And what the 0 means is that we were not more accurate than just predicting the average for everyone. So if we're trying to predict people's age or their income or their education using astrological signs, we were better off just predicting every person at the average age, or every person at the average income, than using their astrological signs.
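In code, the comparison behind a "0" might look like the following sketch: the sign-based model has to beat the trivial model that predicts the average for everyone, on people it hasn't seen. Again, the data and settings are assumptions for illustration, not the study's own.

```python
# Sketch: does a sign-based model beat "predict the mean for everyone"
# on held-out people? With an outcome unrelated to sign, it shouldn't.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, n_signs = 2000, 12
X = np.eye(n_signs)[rng.integers(0, n_signs, size=n)]
y = rng.normal(40, 12, size=n)  # e.g. "age", unrelated to sign here

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

sign_model = Ridge(alpha=1.0).fit(X_tr, y_tr)
mean_model = DummyRegressor(strategy="mean").fit(X_tr, y_tr)

sign_err = mean_squared_error(y_te, sign_model.predict(X_te))
mean_err = mean_squared_error(y_te, mean_model.predict(X_te))
print(sign_err, mean_err)  # typically sign_err >= mean_err: score it as 0
```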
Rob Wiblin: So it’s a machine learning algorithm that is trying to use this data to figure out what formula would I use if I wanted to predict out of sample? If I wanted to predict with this data on new people?
Spencer Greenberg: On new people. Yeah.
Rob Wiblin: And presumably, there must be some penalty for including parameters in the model, because otherwise… Well, actually, maybe you don't even need that: when you do the in-sample model generation and then test it out of sample, you don't need to say it's better to have a smaller model?
Spencer Greenberg: Yeah, that’s a good question. So we actually used a method known as L2 regularisation. We’re really getting in the technical weeds here, but I’m happy to talk about it. It’s a very interesting topic.
So there’s a danger of a model overfitting, right? If you have lots and lots of variables in your model and you train it on some data, there’s a danger that it fits the noise and not just the signal. Now, if you do the thing where you train it on some data and then you predict on the same data — which often people do in statistics in various ways — there’s a really bad danger that you’ve overfit your sample: you’ve learned about that specific data point, and of course you can predict it because you learned about that specific data point. What you really want to know is: can you predict data you haven’t looked at yet? Does your model generalise to new data?
So by applying it to new data only, we avoid the issue that we might have accidentally overfit. In other words, if we can predict accurately on the new data, then we know we didn’t just overfit. However, there’s still a danger that we overfit the original data, and actually that’s why we’re getting no predictions that are accurate in the new data.
So what you do is this method called regularisation, where basically what you're doing is putting a prior on your model, saying, "I prefer smaller coefficients to larger coefficients." And you can set the strength of this prior, and you can prove mathematically that as you make the prior stronger and stronger, overfitting becomes less and less likely.
Then, what we did is we tested it for all different values of the strength of this prior, ranging from zero prior — where you get something very much like the regular linear regression or logistic regression we're used to — all the way to a really strong prior. And we basically show it doesn't matter which prior you use: you still get no results for zodiac signs, no matter what prior you use. That's just kind of a robustness check.
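As a sketch of that robustness check (again on made-up data): ridge regression's alpha parameter plays the role of the prior strength, and the conclusion shouldn't depend on where you set it.

```python
# Sweep the L2 penalty strength from ~0 (plain linear regression) to very
# strong, and check the "no predictive signal" conclusion is stable.
# Ridge minimises ||y - Xw||^2 + alpha * ||w||^2, i.e. a prior for small w.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, n_signs = 2000, 12
X = np.eye(n_signs)[rng.integers(0, n_signs, size=n)]
y = rng.normal(size=n)  # outcome with no sign relationship

for alpha in [1e-6, 1e-3, 1.0, 10.0, 100.0, 1000.0]:
    score = cross_val_score(Ridge(alpha=alpha), X, y, cv=5, scoring="r2").mean()
    print(f"alpha={alpha:g}\tmean out-of-sample R^2 = {score:+.3f}")
# Every strength gives R^2 around or below 0: no result, whatever the prior.
```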
Rob Wiblin: OK, so…
Spencer Greenberg: Now that we’ve lost all of our audience and nobody’s listening, what should we talk about? Just kidding, I think you’ve got a sophisticated audience.
Rob Wiblin: I mean, I actually think this is quite interesting. This isn't where I was going to go with this, but I studied classical statistics, and I'm used to the p-values and the tables with the coefficients and hypothesis testing and so on.
This is a different paradigm. Should I be expecting this to take over? Do I need to learn how all of this stuff works, because this is what the future of statistics is going to be?
Spencer Greenberg: Well, I don’t think there’s much danger in it taking over. I think p-values are alive and well, and even the people who are pushing against p-values, usually they’re pushing for a Bayesian approach, not a machine learning approach. I did my PhD in math, but I specialise in machine learning. So often when I approach problems, machine learning is a natural approach to me. Sometimes I approach them statistically and sometimes with machine learning.
And I think this is actually a very common point of confusion, because often they'll use methods that seem really similar. Like you have linear regression in machine learning and linear regression in statistics. So what's the difference? Is it just the same paradigm? The answer is no. Statistics is about testing a hypothesis: you have a hypothesis and you want to test it. Machine learning is about making the most accurate predictions you can; it's not about testing a hypothesis. So it's a fundamentally different way of looking at things. We want to say: how much can we predict from zodiac signs? Which is a different question from testing a hypothesis about zodiac signs.
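One way to see the difference in miniature, on simulated data: a hypothesis test can flag a tiny effect as highly "significant" given enough data, while the predictive framing shows the same effect is nearly useless for prediction. The numbers below are hypothetical, chosen just to contrast the two questions.

```python
# Same data, two paradigms: statistics asks "is the slope nonzero?" (p-value);
# machine learning asks "how well do we predict new people?" (out-of-sample R^2).
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
x = rng.normal(size=5000)
y = 0.05 * x + rng.normal(size=5000)  # tiny but genuinely real effect

# Hypothesis-testing view: with n=5000, even this slope is typically "significant".
print(stats.pearsonr(x, y))  # correlation ~0.05, small p-value

# Predictive view: the same effect explains almost none of the variance.
print(cross_val_score(LinearRegression(), x.reshape(-1, 1), y, cv=5).mean())
```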
Rob Wiblin: From a communications point of view, even if this method is better, do you think… I mean, obviously some people were sceptical because they’re like, “What the hell is this? L2? What is this table? So many 0s? That’s really sus. Did you just fill a table with 0s and then publish? How do I know any of this data even exists?”
I guess it does speak to the tradeoff, potentially, between using cutting-edge approaches that you think might be actually sounder intellectually, and actually being able to persuade people — where familiarity is helpful, and the sense that you didn’t have a lot of discretion about how you did things is helpful.
Spencer Greenberg: Yeah. So one of the challenges that comes up for me in research is that if I'm in a mode of figuring out the truth, that actually can lead to different behaviours than if I'm in a mode of communicating to other people. And that creates this tension. I much prefer to start in the mode of figuring out the truth — I think that's a better way to start — and then you can come back to: how do I communicate this? So this is how I, in my "let's figure out the truth" mode, approached this, and then maybe I should have done more work to redo everything in terms that people are familiar with.
But I do worry that if I’d used a really typical statistical approach, that the result might have just been underpowered to find things. And so I’m not positive, but I believe this method gives us the best chance of actually figuring out the truth about this.
Rob Wiblin: Maybe you don’t know enough about astrology to answer this, but one response was, this is sun sign astrology. And of course, you got all the 0s for this, but we use a more sophisticated model. But if you have a more sophisticated model that includes sun signs as well as like, moon signs or other suns — I don’t know, whatever they use — if the sun thing is a part of it, then they still have some explaining to do. Because why is it that you wouldn’t get some predictive value out of the sun component? It suggests that they should drop that component out of their more complicated astrology approach, right?
Spencer Greenberg: Yeah. I think it depends on the way they use sun signs. Because some methods that are more complicated still use sun signs, and then if they use them in a way where we should expect them to still correlate with things — like your sun sign is part of what they use to make predictions, even if it’s not all of it — this is still evidence against their approach. If they use sun signs in a way that’s highly nonlinear — where, let’s say, the interpretation of the sun sign depends on all these other factors, and it could be interpreted totally differently based on the other factors — then it’s not clear that this is really even providing evidence against their approach.
But what I will say is, to most people, this is what astrology means. The thing we tested, right?
Rob Wiblin: The thing that’s in the newspaper. This is the thing that you’re getting in your inbox if you subscribe to an astrology thing.
Spencer Greenberg: Yeah, for a lot of the basic astrology stuff. That’s right. And then another thing I think that’s pretty subtle is that people will say, “But you only tested these 37 things and you didn’t find the ability to predict. What if it predicts these other things?” And yeah, it’s always possible it predicts something obscure, but if it predicts things that are even correlated substantially with any of the things we’re predicting — so we have things like the number of friends you have and your employment status or whatever — if you’re claiming astrology predicts things that are substantially correlated with any of them, we should have still been able to predict these things through that correlation.
Rob Wiblin: Yeah. So if you say that it works, and just none of these 37 things was the right thing to look at, you have to argue that there's something else that was important that not only isn't on this list, but isn't correlated meaningfully with anything on this list — which includes education, and income, and lots of really quite central life outcomes that you'd think would be at least related to other stuff that's important. So in fact, it provides a more comprehensive test than might be initially apparent.
Spencer Greenberg: Exactly. The last thing I want to say about this, a caveat of this research, is that it’s not powered to find really small effects. So let’s say Pisces were like 3% more likely to be employed. I don’t think we would have been able to find that.
So an objection some people have raised to this research is: but aren't there well-established effects about when in the year you're born, having to do with, for example, whether you're the youngest in your class or the oldest in your class? And there's things about athletes, like maybe if you're one of the oldest in your class as an athlete, you actually excel more, and that's like a self-fulfilling prophecy? Where if you're one of the youngest, you tend to be smaller?
And it’s totally possible those effects are real. I wouldn’t be surprised if those effects are real, but I would be surprised if those effects were really strong. I suspect that if they’re real, they’re pretty small. And especially when you average over a whole adult population, it’s a pretty big difference to look at professional athletes, where maybe you could have this compounding effect, versus just averaging over everyone. Does it really matter that much if you’re one of the younger in the class versus one of the older? Maybe a little bit.
Rob Wiblin: Yeah. Well, sun sign astrology probably doesn’t have large effects. We break the big news stories on this show.
Spencer Greenberg: The last thing I’ll say is, OkCupid did this amazing analysis many years ago. They have a match algorithm that tries to predict your compatibility, and they’ve shown that it’s actually pretty good at predicting compatibility between people. So they ran their algorithm on every pair of sun signs — like Pisces to Aries, Aries to Capricorn, whatever — and it’s just a hilarious chart, because it’s just all the same number in every cell except for one that was, just due to noise, off by one. But yeah, so it’s the same result that we found, basically, but through a different methodology.
Rob Wiblin: I don’t want to get stuck up on astrology here, but with so many superstitious things, the thing that just immediately pops into my head is, what is the causal mechanism? And I’m like, I don’t understand what the causal mechanism could conceivably be for stars, where the sun was affecting details of your employment history or your relationship compatibility. So you kind of dismiss it out of hand. And I imagine that many people who don’t buy into astrology, that’s the basic reason. But it seems like many people don’t have that filter, or that’s not a question that they immediately ask when presented with a claim. Have you ever looked into this?
Spencer Greenberg: Well, you have to both think that it doesn't have a plausible causal mechanism, and you have to be pretty confident that science has found most plausible causal mechanisms, right? And so if you're maybe not so bought into the idea that science has figured all this stuff out, maybe the lack of a causal mechanism is not as disturbing, because you're like, well, scientists don't understand consciousness, and they don't understand what makes something alive.
Rob Wiblin: I see. Yeah. That makes sense.
Game theory, tit for tat, and retaliation [02:20:51]
Rob Wiblin: OK, new topic. This is a totally random one. I was recently watching this video on YouTube about the history of game theory. And in particular, it got interesting when it started talking about these ecological studies, where you have different programs that have different approaches to the prisoner’s dilemma, and you kind of pit them against one another. Then maybe you put them on a grid and you get them to interact with one another, and adopt their different cooperation-versus-defect strategies against their neighbours, and then see what strategies tend to flourish.
And you can get very interesting dynamics where a strategy will flourish when it's rare, but then it won't flourish once it becomes common. And some strategies that are very vulnerable to being destroyed on their own, if you can get multiple agents near one another all using the same cooperative strategy, will tend to grow and outcompete others. So somewhat intuitive in a way.
But a very interesting thing is, kind of consistently, this line of research finds that tit for tat with forgiveness is a very strong approach. This is: if someone cooperates with you, you cooperate back; if someone defects on you, if someone treats you poorly, you treat them poorly back. But you're also willing to forgive, so every so often you might test the water again and treat them well, and see whether they start reciprocating. And that avoids you getting locked into a negative state, where someone defects once, or there's a misunderstanding between people, and then they just treat one another poorly forever. There's opportunity for forgiveness. Lots of intuitive sense here.
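For readers who want the mechanics, here's a minimal sketch of "tit for tat with forgiveness" in an iterated prisoner's dilemma. The payoff numbers and the 10% forgiveness rate are illustrative assumptions, not values taken from the specific tournaments mentioned.

```python
# Generous tit for tat vs always-defect in an iterated prisoner's dilemma.
import random

random.seed(0)

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def generous_tit_for_tat(their_last, forgiveness=0.1):
    """Copy the opponent's last move, but occasionally forgive a defection."""
    if their_last is None or their_last == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def always_defect(their_last):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

print(play(generous_tit_for_tat, generous_tit_for_tat))  # mutual cooperation
print(play(generous_tit_for_tat, always_defect))  # retaliation caps the loss
```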
But my impression is that in society, we like the forgiveness and we like the tit, but we don’t necessarily like the tat. I think that retaliation against people who’ve wronged you is not encouraged. Indeed, if you started saying, “I have a retaliatory mindset: when people wrong me, when people punch me, I punch back,” I don’t think that would get a positive reaction if I said that. But these studies would suggest that that is a prosocial thing to do to an extent: that you need people who are willing to meet nastiness with nastiness, in order to ensure that people are incentivised to be cooperative, and that bad behaviour is punished and gets weeded out of the ecosystem.
So is society missing an opportunity here in our social norms, to not say that actually righteous retaliation is a positive thing? Or maybe it’s the case that people are so likely to perceive themselves being wronged when they actually haven’t been wronged, that we’re right to not allow it, to say retaliation is totally unacceptable, because you basically would get escalation of misunderstandings all the time into constant fighting back and forth? Did you have a take on this?
Spencer Greenberg: It’s a great question. I’ve never thought about it before, but I’ll riff off the top of my head. I think something important to note is that in these simulations, they’re talking about what works for the group in the long term. In the long-term equilibriums, they’re not talking about what’s best for the individual, right? And that’s also true even if you think about evolutionary history. Evolution isn’t optimising for one person to survive; they’re optimising for genes to survive over the long term. So what is the optimal strategy for you to achieve your own life goals may not be what leads to the best equilibrium.
And I think we see this with bullying, right? Let’s say there’s a bully in your life. A lot of times the best thing you can do, if it’s low cost, is just get them out of your life. Just stop hanging around with them. But if everyone plays that strategy, then basically the bully can keep going around bullying people.
Rob Wiblin: They’re just a hot potato.
Spencer Greenberg: Yeah, they’re a hot potato. They don’t necessarily get socially punished too much. Maybe minor, like you disconnect from them, but there’s not a further social punishment.
Now, this is I think where positive gossip comes in. So gossip has a negative connotation, and there are lots of kinds of negative gossip. Negative gossip would include spreading rumours about people that are not substantiated. Or sharing private information that nobody really has a right to know.
Rob Wiblin: And it’s not important or valuable.
Spencer Greenberg: It’s not important, but it makes someone look bad or harms someone’s reputation. But positive gossip, as I’ll call it, is where you’re giving important information about other people, and you’re spreading it around in a way that helps people make better decisions that are accurate. I think that that is one way that, in practice, we respond to bad behaviour in a way that reduces for example bullying. If someone bullies us, maybe the optimal thing for us to do right now is just stop hanging out with that person. But then maybe we mention to our friends, you know, “I feel like that person really mistreated me,” right?
And I think in a community setting, that kind of positive gossip, factual information that helps people make better decisions… And it also is not vague — it’s not like, “This person wronged me”; well, what does that mean? — it’s like really specific about what exactly happened, trying to be fair to the person. That ends up creating an actual incentive over time, where people like that can actually get kicked out of the community and get serious repercussions for their behaviour.
Rob Wiblin: And I guess that’s stronger, because it’s more evidence based and maybe less likely to lead to retaliation. Because hypothetically, if you just resorted to violence, then the other person can respond with violence as well. But if you’re just saying, “This person did X,” well, they can respond with at least telling true facts about you. But in that case, actually things are kind of fine. It doesn’t tend to escalate that much.
Spencer Greenberg: I will say that I think there’s an interesting exception to this. I will say this is based on my personal experience; I don’t have data on this, so it may not be true. But in my life, as long as I can remember, if someone tries to bully me, I immediately turn things up to 11. I think it’s because I grew up with brothers, where there’s a lot of brothers picking on brothers, and you kind of learn that you’ve got to defend yourself. You can’t get taken advantage of, right?
And so in high school, people would try to bully me, and they would just immediately regret it. Like instantaneous regret. For example, outside of school one day, this bully — who was actually a really scary guy; he had all these self-inflicted cigar burns along his arm — he really hated me for reasons that I didn't understand. And one day he's like, "I'm going to fight you. Let's fight after school today." And so I just immediately swing my arm back, go like that. I don't actually punch him in the face, but I put my knuckle right near his face. So he leaps backwards thinking I'm going to punch him. And then I just walk away.
What that creates is this instantaneous, like, “Wow, I tried to bully this guy and suddenly had a deep fear reaction,” right? And that was always the way that I dealt with bullying. Not necessarily through a physical threat like that, but through an immediate escalation. And it worked extremely well for me, and a bunch of people tried to bully me, but nobody ever did it successfully. Now, that being said, it’s a bit of a risky strategy, right?
Rob Wiblin: It’s a risky strategy indeed. I think one thing that people do go for at least a little bit is turnabout is fair play. Like, if someone is flagrantly violating the prohibition against X on the regular, then if people do X back to them, then they can’t complain too much. Because if they’re not going along with the social norm, then how can they appeal to it? I’m not sure whether that is good or not. I haven’t thought about that, but it’s an interesting one.
Spencer Greenberg: Yeah. I would just say that I think there's a version of this that's not for the high school courtyard, but for everyday encounters. Let's say someone you just met is humiliating you in front of a group. And I've seen scenarios like this too. I think there's a version of this that you can also do. What I've found works for me — again, I don't necessarily recommend this; there's some danger here — is I just point out what they're doing, and I find that it's very embarrassing for them.
Rob Wiblin: What are some examples?
Spencer Greenberg: Well, here’s a ridiculous example. I was at a dinner, and the person sitting next to me is extremely wealthy, like multi-hundred millionaire, very narcissistic. And he found out that I was vegetarian, and he announced to the whole table, “Hey, this guy’s a vegetarian. His penis is going to shrivel up.” My immediate response was to laugh and say, “OK, so you believe that people who are vegetarian, their penises shrivel up. Is that what you really think happens?”
This actually is just an example of a more general strategy I use. Like online, if people make really obnoxious comments on something I write, I just take what they're doing at face value and I just ask questions about what they're doing. And there's something about that where it both doesn't give them what they want, because it doesn't show you're rattled, but also it's very embarrassing, because their own behaviour is actually embarrassing, and you're just letting onlookers see how embarrassing their behaviour was. So, anyway, that's worked pretty well for me.
Rob Wiblin: Yeah. Valuable lifestyle advice. I’d be very interested to hear stories from listeners who apply that.
Parenting [02:30:00]
Rob Wiblin: OK, you’ve got to go. You’ve got many projects on the boil beyond just talking to me. But final question: I think you’ve mentioned that at some point you were considering having kids, or at least you were investigating what is it like to have kids? I’m going to have a kid in my life pretty soon, fingers crossed. What did you learn in the course of looking into that? Anything that I should know?
Spencer Greenberg: First of all, huge congratulations. Very exciting.
Rob Wiblin: Thank you.
Spencer Greenberg: Second of all, I don't have a child, so I'm going to be hopelessly naive in the sense of firsthand experience. But I did learn some interesting things. So I went and talked to a bunch of parents to ask them about their experience having kids, and I found it to be fascinating.
One of the really interesting things that I ended up concluding from it is that I think, on average, having children reduces people's pleasure, but increases their sense of meaning and purpose. So that's kind of how I think about it now: just as a tradeoff for your own life. It's deeply, deeply meaningful having children; there's also a lot of ways it's not pleasant, and it reduces other forms of pleasure, because you're stressed, tired, busy, and you're looking out for another person. You're sacrificing yourself constantly for this other person, right?
However, I will add one caveat. I think there are some people that just love being around kids; they just get so much joy out of it. And if you're that kind of person, you might actually increase your pleasure too, if you just get this high from being around kids. So there are a few people like that.
Rob Wiblin: Yeah. Don’t I remember you saying, when you spoke with parents, you asked them, “What’s your favourite part of the day?” And they said, “After the kids have gone to sleep.”
Spencer Greenberg: Yeah, that was a really funny one. I was talking to this couple, very power couple-y: one’s a lawyer, one’s a doctor, they work really hard. They’ve got a nearly full-time nanny. I started asking, “What’s the best time with your children?” And then they kind of looked at each other and discussed it, and then they ended up saying, “It’s like right after we put the kids to bed, and we’re looking at their sweet, smiling faces.” And I was like, “Wait, but they’re asleep. That’s the best time with your children, when they’re asleep?”
But I think that’s a nice illustration of this meaning-versus-pleasure thing, right? It’s like deeply meaningful, seeing their sweet kids’ faces, sleeping —
Rob Wiblin: But without the difficulty of the child.
Spencer Greenberg: Exactly. Another interesting thing: the man from that couple was saying he used to arrange it so that when he got home to be with his kids, he would play with them. It would be like playtime. So the nanny would have done all the logistics stuff. But he actually changed it on purpose, so that when he got home, he would feed them and bathe them and stuff like that. And I thought that was really interesting. It again speaks to the meaning and pleasure. It wasn't for him about, like, "Let me have fun with my kid"; it was, "Let me invest in my kid. Let me take care of my kid. That is actually what I want to be doing."
Rob Wiblin: Yeah. Did this influence your decision whether to have kids or not, or were you just doing this out of more curiosity?
Spencer Greenberg: It was a time when I was really thinking about what I want in life. I think it influenced me a little bit. You never know for sure — people can always change their mind — but I think there are a number of reasons why having children is not the most appealing thing for me.
Rob Wiblin: OK, yeah. Well, I’ll get back to you.
Spencer Greenberg: Let me know how it goes. You should start tracking your meaning and pleasure right now, so you can get some good, high-quality data. And hopefully you should have like 10 or 12 kids so you can get a decent sample size.
Rob Wiblin: Yeah. I’ll be able to give you a very rationalised explanation for why this was a great decision, despite the incredible sleep deprivation and so on, very soon.
Spencer Greenberg: Well, lucky for you, you’ll never be able to admit to yourself that it wasn’t a good decision. So you’ll be happy with the decision no matter how it turns out.
Rob Wiblin: Humans have a lot of flaws, but we’re well designed in some ways. My guest today has been Spencer Greenberg. Thanks so much for coming on The 80,000 Hours Podcast, Spencer.
Spencer Greenberg: Thank you so much, Rob. I really enjoyed this. And if people enjoyed this conversation, I would just love it if you checked out my podcast, Clearer Thinking with Spencer Greenberg. I have on lots of really interesting guests talking about interesting things. And I’ve had Rob on too.
Rob Wiblin: Yeah. Can recommend.
Rob’s outro [02:34:10]
If you enjoyed that episode, then do go check out Spencer’s previous appearances on the show, which are similarly great. Those are:
- #147 – Spencer Greenberg on stopping valueless papers from getting into top journals
- #39 – How much should you change your beliefs based on new evidence? Spencer Greenberg on the scientific approach to solving difficult everyday questions
- #11 – Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
Also as I mentioned in the intro, we’re growing the team here at 80,000 Hours and right now that means we’re hiring for multiple roles on our people operations and business operations teams.
So if you’re excited about building and running the systems that help 80,000 Hours run effectively, you should take a look at and maybe apply for those positions.
Being able to work in London in the UK is preferred, but we're open to remote candidates whose working hours overlap by at least four hours with 9AM–6PM UK time.
Salaries are roughly £50,000 to £75,000 depending on the role and your experience.
And applications are closing pretty soon on March 24.
You can find out more by going to 80000hours.org and clicking the link that says "We're hiring!" All our roles are also listed on our job board at jobs.80000hours.org. At the moment, they're listed alongside 891 other positions you might want to look at and consider applying to, across a wide range of organisations, locations, skill levels, skill types, and problem focuses.
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire, Simon Monsour, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.
Related episodes
About the show
The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.
Get in touch with feedback or guest suggestions by emailing [email protected].