Transcript
Cold open [00:00:00]
Hugo Mercier: If you take someone who intuitively believes in a conspiracy — for instance, someone who works in a company or in a government, and they’ve seen that their boss was shredding documents or was doing something really fishy, and they have good evidence that something really bad is going on — their reaction is going to be to shut up. They’re going to be afraid for their jobs and, in some places, for their lives.
If you can contrast that to the behaviour of conspiracy theorists who don’t have actual perceptual or first-hand evidence of a conspiracy going on, then these people tend not to be afraid. They can say, “I believe the CIA orchestrated 9/11 and they’re this all-powerful, evil institution” — and yet they’re not going to kill me if I say this.
And ironically, that means that the more vocal and widespread a conspiracy theory is, the less likely it is to be true, in a way — at least when it comes to an actor that still has a lot of power now. The bigger the claims are, the less likely they are to be correct; otherwise you would not be out there saying them.
Rob’s intro [00:01:07]
Rob Wiblin: Hey listeners, Rob here, head of research at 80,000 Hours.
I feel like I’ve been hearing more and more about misinformation and disinformation in recent years, and I just checked Google Trends, which confirmed those search terms are about five times as popular now as they were back in 2017.
In 2023, the discussion around misinformation and disinformation shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world, or alternatively, extremely easy to mislead people into believing convenient lies.
I just saw that the World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years, ahead of war, environmental problems, and other threats from AI.
I worry these fears are a bit exaggerated, and misunderstand how misinformation does and doesn’t work.
So I was delighted to interview cognitive scientist Hugo Mercier, whose research on how people form beliefs and figure out who to trust has led him to a very different worldview — in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon.
As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.
At this point, you might be thinking: if people are so smart, how come so many fall for lies about vaccines, or financial scams, or buy into astrology or bogus medical treatments, or vote for such daft policies?
And that’s exactly what I put to Hugo — the toughest cases of people falling for bad ideas I can think of, to test whether they really indicate widespread gullibility, and if not, how Hugo would make sense of them.
We then probe the different stories people have offered for how AI could lead misinformation to get radically worse, and try to see how well they hold up. Hugo thinks they are mostly nonsense, while I’d just say I’m sceptical.
Personally, I think this interview is a very useful corrective to some ideas that are very popular and mostly going unchallenged at the moment.
So without further ado, I bring you Hugo Mercier.
The interview begins [00:03:26]
Rob Wiblin: Today I’m speaking with Hugo Mercier. Hugo is a cognitive scientist and research director at the CNRS at Institut Jean Nicod, Paris, where he works with the Evolution and Social Cognition team. He’s published over 100 papers and book chapters, mostly focused on two key topics: firstly, the function and workings of reasoning; and secondly, how we evaluate communicated information. On the second, he’s also the author of Not Born Yesterday: The Science of Who We Trust and What We Believe, which came out in 2020.
Not Born Yesterday attempts to explain how we decide who we can trust and what we should believe, and Hugo argues that we actually don’t credulously accept whatever we’re told — even if those views are supported by the majority of people, or by prestigious, charismatic individuals. He also argues that mass persuasion — be it by religious leaders or politicians or advertisers — is very difficult to pull off, and almost never leads to big changes in public opinion. On the contrary, he thinks that we are pretty skilled at figuring out who to trust and what to believe — and if anything, we’re much too hard rather than too easy to influence.
Thanks for coming on the podcast, Hugo.
Hugo Mercier: Thank you for having me, Rob.
The view that humans are really gullible [00:04:26]
Rob Wiblin: I hope to talk about how people decide who to trust, and where people really do go wrong in forming their beliefs. But first, I want this to be a subtle interview, because I feel like the truth on this topic kind of has to be somewhere in the middle. Human beings can be surprisingly savvy, but also screw up to their detriment in meaningful ways — if only just because being right about everything in this messy, complex world is an exceedingly hard thing to do. And I actually think the book, if you read it carefully, has a very subtle message, a much more subtle message than just a one-paragraph summary would allow. It admits a lot of complexity in how we form beliefs in different situations and our performance in different domains.
Maybe just to start, what are some sources that have advocated the view that human beings are really gullible — the view that you wanted the book to challenge?
Hugo Mercier: Historically, this has mostly been a view upheld by more right-wing thinkers. That goes back to ancient Greece, to people who wanted to have reasons for rejecting popular opinion, for not letting a lot of people vote, and for ignoring the people. That was one of the reasons these people would give: that people were stupid and gullible; that if they could vote, they would vote for the first demagogue who came around, and that would lead to disaster.
More recently, that view has also been advocated on the left, for sort of opposite reasons: the left would have claimed that people are being oppressed by the bourgeoisie and by the dominant classes, and the only reason they’re not rising up against this domination is that they have sort of absorbed and imbibed the dominant ideology that is put forward by the dominant classes.
Rob Wiblin: So that’s one common stereotype of people as being not very savvy, not very engaged, easy to lead by the nose. I think there’s also a stereotype of human beings as really stubborn about their beliefs: sticking to the beliefs they’ve already committed to and said publicly, and throwing out evidence that conflicts with them because it’s inconvenient. This sort of confirmation bias idea.
I think you want to say that, on balance, that stereotype is actually probably closer to the truth than the other one, although people are in many areas actually pretty savvy about taking in new information as well. Would you say you kind of agree with the stubbornness stereotype as more sound?
Hugo Mercier: Yes, especially in the really bizarre informational environment we live in — in which we are bombarded with information, and it’s often hard for us to tell where the information is coming from, and we can’t really exchange arguments with the people trying to convince us of things. In that kind of setting, it makes sense to be kind of rationally sceptical, so you’re just going to mostly ignore things that disagree with your point of view.
But that’s mostly a stance that is dominant because we live in this weird environment. When it comes to everyday life — when you talk to your colleagues, to your friends, to your family — there you’re just mostly rational. So you change your mind when you should, and you don’t when you shouldn’t.
Rob Wiblin: It’s interesting: there is this stereotype that people are stubborn about their beliefs and kind of unreasonable about them. But it is possible to steelman that approach, or to defend it: in an information environment that is very challenging and reasonably hostile, it could actually just be the optimal strategy — as we’ll explain a little bit throughout the interview.
The evolutionary argument against humans being gullible [00:07:46]
Rob Wiblin: So there were three reasons why I was really drawn to doing this interview. The first is just that reading the book, I found it was generating a lot of new thoughts for me; even in places where I wasn’t convinced, I was finding it super generative.
The second is that I’ve long had the opinion that, in terms of persuading other people, simply making good, sound arguments as clearly as you can is an underrated, underappreciated approach. There’s this idea out there that the best way to persuade people of things is to engage in tricky behaviour, where you try to influence people subconsciously using the right kind of words, or talking to them in the right-coloured room. My view is that if the people you’re talking to are worth their salt, and if you think your views are actually sound and true, that sort of tricky approach is overrated and rarely convinces people.
And the third reason is because these topics are getting a lot of renewed discussion this year because of advances in AI and LLMs, which have made a lot of people worry that we could see a breakdown in the ability of ordinary people to distinguish truth from fiction — because so much of the content that they might be exposed to could be this cleverly produced propaganda or marketing, or otherwise stuff made by LLMs or other generative visual models, with the goal of convincing people to believe some or other falsehood that the LLM operator is keen on.
And my take, and I think yours as well, is that this is a legitimate worry, and I’m glad that people are talking and thinking about it, and trying to find ways of tackling the new problems that are going to come up, but that the risk is actually a little bit more limited than some people think. And maybe we don’t expect to see the worst-case scenarios that people would describe actually come to pass.
So with that bit of framing out of the way, what’s the fundamental evolutionary argument that it’s not really possible for humans to evolve to be gullible or foolish, or easily persuaded of things by other people?
Hugo Mercier: The basic argument is one that was actually laid out by Richard Dawkins and a colleague in the ’70s and ’80s. It goes something like this: within any species, or across species, if you have two individuals that communicate, you have senders of information and receivers of information. Both have to benefit from communication. The sender has to, on average, benefit from communication, otherwise they would evolve to stop sending messages. But the receiver also has to, on average, benefit from communication, otherwise they would simply evolve to stop receiving messages. In the same way that cave-dwelling animals might lose their vision because vision is pointless there, if most of the signals you were getting from others were noise — or even worse, harmful — then you would just evolve to stop receiving these signals.
Rob Wiblin: Yeah. So in the book you point out that, having said that, a lot of people will be thinking about persuasion and gullibility and convincingness and perceptiveness as a sort of evolutionary arms race between people’s ability to trick one another and people’s ability to detect trickery from others and not be deceived. But your take is that this is sort of the wrong way to think about it. Can you explain why?
Hugo Mercier: Yes. I think this view of an arms race is tempting, but it’s mistaken in that it starts from a place of great gullibility. It’s as if people started being gullible and then they evolved to be increasingly sceptical and so to increasingly reject messages. Whereas in fact, what we are seeing — hypothetically, because we don’t exactly know how our ancestors used to communicate — but if we extrapolate based on other great apes, for instance, we can see that they have a communication system that is much more limited than ours, and they’re much more sceptical.
For instance, if you take a chimpanzee and you try pointing to help the chimpanzee figure out where something is, as a rule the chimpanzee is not going to pay attention to you. They’re very sceptical, very distrustful in a way, because they live in an environment in which they have little reason to trust each other. By contrast, if you take humans as the opposite endpoint — I mean, obviously chimpanzees are not our ancestors, but assuming our last common ancestor was more similar to the chimps than to humans — we usually rely on communication for everything we do in our everyday lives. And that has likely been true for most of our recent evolution, so that means we have become able to accept more information; we take in vastly more information from others than any other great ape.
Rob Wiblin: So one take would be that you need discernment in order to make the information useful to the recipient, so that they can dismiss messages that are bad. And there’s a kind of truth to that. But the other framing is just that you need discernment in order to make communication take place at all, because if you were undiscerning, you would simply close your ears and, like many other species, basically just not listen, pay no attention, or not process the information coming from other members of your species. So it’s only because we were able to evolve the ability to tell truth from fiction that communication evolved as a human habit at all. Is that basically it?
Hugo Mercier: Yes, exactly. It’s really striking in the domain of the evolution of communication how you can find communication that works — even between species, for instance, that you would think would be very adversarial. If you take a typical example of gazelles and some of their predators, like packs of wild dogs, you’d think they’re pure adversaries: the dogs want to eat the gazelle; the gazelle doesn’t want to be eaten.
But in fact, some gazelles have evolved this behaviour of stotting, where they jump without going anywhere — they just jump in the same place. And by doing that, they’re signalling to the dogs that they’re really fit, and that they would likely outrun the dogs. And this signalling is possible only because stotting is a reliable indicator that you would outrun the dogs: it’s impossible if you’re a sick gazelle, if you’re an old gazelle, if you’re a young gazelle, if you’re a gazelle with a broken leg. You can’t do it. So the dogs can believe the gazelles, so to speak, because the gazelles are sending an honest signal.
By contrast, an example that is a bit sad in a way is that in most mammal species, and in humans too, there’s a conflict between the mother and the embryo or the foetus. The foetus is trying to grab a lot of resources, maybe more than the mother would be willing to share. And the two of them have a kind of tug-of-war in which both produce vast amounts of hormones: to try to get as many resources as possible for the foetus, and to give only a moderate, reasonable amount from the point of view of the mother.
Because the foetus doesn’t have an honest way of signalling “this is how many resources I need,” there is a complete lack of communication. So the amount of resources that the mother gives is the same, even though the amount of hormones produced on both sides is massive. You have no progress, no communication, even in a system in which things should work really well, because it’s a mother and her unborn child — yet, because they have no way of honestly signalling, there is no good communication going on.
Rob Wiblin: Yeah. I think this might have come up on the show before. There’s this issue between a mother and her unborn child: the mother has more interest in her own future reproductive potential than the baby has in the mother’s future reproductive potential, because the unborn baby will not share 100% of its genes with its future siblings or half-siblings. For complicated reasons that we won’t go into, the baby may want to absorb more resources than is optimal from the mother’s point of view, and the mother might want to deny it resources that it wants. And this creates a conflict in which the baby cannot credibly communicate what level of resources it actually needs to be healthy, because it always has this incentive to lie.
Hugo Mercier: Exactly. And you see the same thing with baby birds trying to get food from the mother bird: when the mother bird comes back to the nest, she has to decide which baby bird to give the worm to, and they’re all trying to shout as loudly as possible — even though that is actually bad, because it increases the predation risk. But there’s no way for the babies to honestly signal, like, “I’m the one who really needs the food,” or “I’m the one who’s going to make the most of the food.”
Rob Wiblin: Yeah. So a very natural response to this sort of evolutionary reasoning is that it would be right if we were living in an information environment that resembled the one our ancestors existed in. But the modern world, as you were alluding to earlier, is so different from the world our preindustrial ancestors lived in, be they farmers or hunter-gatherers, because today we’re bombarded with more claims about more things by more people than we ever have been before. And unlike in the ancestral environment, most of the people who are making these claims at us we’ve never met, and we’re not necessarily going to meet them again.
So plenty of the people who are trying to communicate at us these days might be trying to take advantage of us, or take advantage of the enormous amount of research that they’ve been able to do in order to figure out how to better persuade other people of whatever it is they want them to believe — marketing experts and so on.
This explanation, that the way things worked in the past might not be the way things work today, is often called “evolutionary mismatch.” I think a famous example of this is with diet: it’s true that our ancestors were very good at figuring out what a healthy diet was in the environment in which they existed. But now, in the modern world, we have to contend with nacho-cheese-flavoured Doritos invented by thousands of food scientists, and we can’t necessarily just rely on our instincts about what is tasty to guide us towards a healthy diet.
What would you say to someone who raised this objection: you might be accurately describing history, but are you describing the world of today?
Hugo Mercier: I completely agree. And because you’re giving this example, it makes me think that the mismatch isn’t that dramatic. The environment we live in is crazy in nutritional terms, in that you can have an essentially unlimited number of calories every day for a relatively small amount of money. And in spite of that, even people who are overweight, unless you literally weigh a tonne, are only missing their calorie target by like 10% or something. So it’s not that bad in a way, given how much temptation there is and how easy it is to get food that’s not great for you. Even in that domain, where clearly we could be doing better, we’re doing much less badly than one might have expected.
And I think to some extent the same is true for information. So clearly we’re making bad decisions in informational terms. Sometimes we’re accepting bad information that we shouldn’t accept. I don’t think that’s very common, but it happens. A lot of the time what happens is we reject information that we should have accepted. Essentially, if you take all the bad things that people are accused of believing — like being vaccine hesitant, believing that the last elections in the US were fraudulent, believing in conspiracy theories — all of these people have heard the correct version somewhere. So in all of these cases, the problem is also that they have rejected something that was accurate. I think that’s the main problem with the informational environment we live in now.
And at any rate, if we want to understand and to make sense of how we are reacting to the informational environment we live in now, we have to understand how our cognition evolved, and our cognition evolved for a different environment. So if we understand how our cognition works now, we can understand what it gets right and what it can sometimes get wrong in the current environment.
Open vigilance [00:18:56]
Rob Wiblin: OK, we’ll come back to a bunch of those themes throughout the conversation, and see how we have adapted to the modern, more challenging junk information environment in which we find ourselves.
But let’s push on to the core descriptive information in the book. You spend a lot of time laying out the main principles by which we decide how to incorporate new information into our beliefs. And you call the general approach that we take “mechanisms of open vigilance.” Can you break down this concept of “open vigilance” for us?
Hugo Mercier: Yes. It comes from the concept of epistemic vigilance, and it’s really the same thing; it’s just kind of rebranding.
The “open” comes from the fact that the main function of all of these mechanisms, as we were hinting at earlier, really is to help us be more open: to be more accepting of information, to be influenced by others when they’re right and we’re wrong. That is the ultimate function. We start from a place where we just have our own beliefs, formed through perception and inference, and then the more open we are, the more we’ll be able to benefit from the fact that other people have different knowledge and different information from what we have, and we can use that.
And the “vigilance” comes from the fact that this openness, as we were also kind of saying earlier, is only made possible by the fact that we are vigilant. So it is because we check what people tell us, and we check whether we can trust them or not, that we can afford to be open to what they’re telling us.
Rob Wiblin: In what ways would you say that we are open to new information?
Hugo Mercier: For instance, when someone gives you a good argument, people tend to change their minds. This is something that’s been studied a lot in the lab: we give people small logical or mathematical problems to which there is a perfect argument — like, you can just demonstrate the correct answer in a relatively easy manner, even though most people get it wrong originally, because it’s one of these kind of trick questions and people have a very strong intuition that they got it right. And in spite of that, when you give them a good argument, they change their minds quite easily. So in a way, argumentation is the way you can produce the most dramatic changes of mind, when you have arguments that are strong enough.
Rob Wiblin: Yeah. Many people will have the idea that giving people good arguments for beliefs is not always that effective, and often people will throw out good arguments on spurious grounds, or just because they conflict with what they already believe. You have this example in the book of people who’ve studied whether people sensibly incorporate new information in changing their beliefs. I think many listeners will, like me, have heard of this experiment where people who supported the Iraq War were told later on that WMDs were never found in Iraq, and in fact they didn’t exist, and that this shockingly caused them to become more in favour of the Iraq War rather than less, as you might expect.
You point out that there’s been a whole slate of experiments of this type done, and that was actually the only case in which people’s beliefs updated in the wrong direction relative to what you might expect. In some cases they didn’t move so much, but that was the one case out of dozens in which it went in the wrong direction. Do you think to some extent people are maybe cherry-picking cases where folks are resistant to arguments, and they ignore the familiar everyday cases when arguments persuade us sensibly all the time?
Hugo Mercier: Yes, I think there are at least two documented cases of this backfire effect. There is another one with vaccine hesitancy, I think for one specific vaccine and one specific segment of the population. But yes, there are now dozens, if not hundreds, of experiments showing that in the overwhelming majority, indeed the quasi-entirety, of cases, when you give people a good argument for something, something that is based in fact or comes from some authority that they trust, then they are going to change their mind. Maybe not enough, not as much as we’d like them to, but the change will be in the direction that you would expect. In a way, that’s the sensible thing to do.
And you’re right that both laypeople and professional psychologists have been, and still are, very much attracted to demonstrations that human adults are irrational and a bit silly, because it’s more interesting. Look, people can speak: having language is maybe the most amazing thing in the biological world. But it’s like: well, sure, obviously we have language; but sometimes, maybe one time every 50,000 words, there’s a word you can’t remember, that’s on the tip of your tongue, and it’s “Oh my god, it’s amazing how poorly our brains have been working.” So yeah, we are attracted by mistakes, by errors, by kind of silly behaviour, but that doesn’t mean this is representative at all.
Rob Wiblin: You have the example in the book of if someone asks you to estimate the length of the Nile, and then you give a length in metres, and then you’re told, “Actually, this other person, who’s just as informed as you, estimated this other number. Would you like to update your guess then?,” people tend to move about half the way towards the view of the other person, which is actually very sensible.
Hugo Mercier: They only move one-third of the way on average, so people are a bit conservative: they don’t, by default, treat the other person as being as knowledgeable as they are. But if you let people talk to each other, then the one who actually was the most knowledgeable will tend to exert more influence. So when people can actually say, “I know because I’ve been to Egypt, because I studied Egypt, because I’ve read about this recently,” then the other person will be convinced.
Rob Wiblin: Yeah. So that’s a somewhat easy case, because it’s not bringing in more complicated social factors, and people are unlikely to try to trick you about the length of the Nile. So it’s a more straightforward one, but it suggests that in a simple case, where we don’t have to worry about these other manipulative effects, our behaviour and our updating is actually pretty on point.
Hugo Mercier: Yeah. When I present these experiments I was briefly describing earlier, with these small logical or mathematical problems, people will say, “But people don’t care.” That’s kind of what you were saying earlier as well: you have no emotional attachment to the answer to these problems; it really doesn’t matter to you whether it’s five or 10 or whatever.
My favourite counterexample is that, early in the 20th century, some of the greatest minds in Europe — like Bertrand Russell and David Hilbert and Whitehead and others — were trying to develop kind of simple, logical foundations for mathematics, and they were consumed by that. It was their life’s project. They had devoted years and years to the project. They had written thousands of pages. These people were really intense. And then at some point they read Gödel’s incompleteness theorems showing that you can’t do it, and immediately they said, “Oh yeah, we can’t. We give up.” They read it and straight away realised, “He’s right. What we’ve been spending most of our lives doing is not doable, and we’re giving up.”
So if the arguments are strong enough, even if the issue is as emotionally salient as you can possibly imagine, people are still going to accept the arguments.
Rob Wiblin: OK, so that’s openness. In what ways would you say that we are vigilant?
Hugo Mercier: Well, in a way, this openness also demonstrates vigilance, because if you had given Russell a really crap argument in favour of the incompleteness theorem, he would not have accepted it. So people are only open to the extent that they are vigilant. And there are many ways this vigilance is exerted in terms of how we decide who we trust, how we decide what information is more likely to be plausible or not.
I think the most basic mechanism is what I’ve called “plausibility checking.” Essentially, whenever you encounter communicated information — when you hear something, when you read something — the first thing your brain does is compare that to what you already know. And in a way, in order to understand what you’re told, you need to have some background beliefs; you need to compare that to what you already know. So if you tell me there’s an elephant in my backyard, I have to form a representation of that, and I have to realise that it kind of clashes with most of what I know about elephants and backyards.
And this first layer is already a very strong kind of defence mechanism in a way, because by rejecting anything that doesn’t fit with our beliefs, it makes it really hard for other people to make us believe things that will be bad for us. Like sometimes our own beliefs will be mistaken, and that’s a problem because we’ve made a mistake on our own, but then the issue is not going to be communication per se.
Fortunately, there are ways of overcoming this plausibility checking, such that you can end up accepting information that would challenge your prior beliefs, challenge things that you already thought to be true. And broadly, there are two main things. One is argumentation, as we were mentioning: so even if someone tells you something that you disagree with, if they have a good enough argument, you might change your mind. And the other is trust: if I believe that a given computer is better than another, but then a friend of mine who I know to be a computer expert tells me that I have it wrong, I might trust in that person.
Rob Wiblin: So you might think that there’s this paradox that if we encounter information that conflicts with our current beliefs, then we’re more inclined to be sceptical of it. Potentially one of the options on the table is just to throw it out and say, “I’m ignoring this new information because it doesn’t pass the plausibility-checking stage, because it’s too in conflict with other things that I already believe” — and that creates potentially this loop where you could get stuck, because you’re just unwilling to update on the basis of any information that conflicts. So why doesn’t that create this sort of pigheaded unwillingness to ever change your ideas?
Hugo Mercier: So if things work well enough, it shouldn’t make you change your mind, but it shouldn’t reinforce your beliefs either. To go back to your example earlier, if I believe the Nile is 5,000 miles long, and you tell me it’s 3,000 miles long, and there’s a bit of a clash between our two beliefs, maybe at worst I’m going to ignore what you’re telling me, but I’m not going to believe now that the Nile is 6,000 miles long. So at worst I’m going to get stuck. But it’s not going to make things worse; it’s just going to fail to make things better.
Rob Wiblin: So our defensive posture is that if something conflicts with our existing beliefs, so it doesn’t pass that initial plausibility check of just being consistent with what we think, then it has to have something else going for it that allows it to pass through and be incorporated into our ideas. So it could be something like it comes from an authority that we have trust in, and then we might take it very seriously. Or it could be an argument that we feel ourselves qualified to check and to see whether the reasoning holds up. But if it’s just an assertion from something that we don’t trust and where we don’t feel qualified to pass judgement on the soundness of the argument ourselves, then the default thing is just to not try to incorporate it into our beliefs. Is that right?
Hugo Mercier: Exactly, yeah. And that’s what we see. If you consider mass persuasion attempts — advertising, propaganda, religious proselytising, all of these things — you’re typically in one of these situations: when you see an ad on the subway or on TV, you know who is sending you the message, but you don’t have any information about their real competence, and you tend to suspect that there’s a conflict of interest. They don’t really have time to give you any arguments that might change your mind, not at any length. So in most of these mass persuasion situations, people are mostly going to react on the basis of whether the message they’re hearing jibes with what they already believed or not.
Rob Wiblin: Yeah. An interesting example of this that you have in the book: there’s this idea, I think, that people can be more easily persuaded of stuff if they’re tired or distracted or not paying attention. That would be a model where ideas go in by default, but you have active defences that can reject them if you’re able to think about them and keep them out of your mind, because you’re checking them and rejecting them.
But you say that research suggests actually the opposite: that when people are distracted, not paying that much attention, or they’re tired, or they don’t feel in a position to judge ideas, what happens is they just stop changing their mind at all. Which is, of course, a very sensible thing to do, because those are the points at which, if you tried to evaluate the arguments, you would be most likely to make a mistake. And so you simply close your ears, more or less, or you simply become unwilling to shift your opinions.
Hugo Mercier: Yes. Completely. In a way, that’s the idea that led to both the myth of brainwashing and the myth of subliminal influence, both from mid-century America.
You have the idea that if you go to the movie theatre and in the middle of the movie, they’re going to show very quickly words like “Coca-Cola” or something, then it will make you drink more Coca-Cola. And the idea is that precisely because your brain can’t process the information on any conscious level, then you’re unaware of the influence attempt and you’re completely falling prey to it. And there is no data showing that at all. The original claims were just completely made up by someone who wanted to sell books. There is no evidence that any of this works at all.
And the other thing, which has had much more dramatic consequences, is the idea of brainwashing: the idea that if you take prisoners of war and you submit them to really harsh treatment — you give them no food, you stop them from sleeping, you’re beating them up — so you make them, as you were describing, extremely tired and very foggy, and then you get them to read Mao for hours and hours on end. Are they going to become communists? Well, we know the answer, because unfortunately the North Koreans and the Chinese tried it during the Korean War, and it just doesn’t work at all. They managed to kill a lot of POWs, and they managed to get I think two of them to go back to China and to claim that they had converted to communism. But afterwards it was revealed that these people had just converted because they wanted to stop being beaten and starved to death, and that as soon as they could go back to the US, they did so.
Intuitive and reflective beliefs [00:32:25]
Rob Wiblin: Let’s push on to a really important distinction here that I think is necessary to understand the broader picture, and that’s the distinction that you draw between intuitive and reflective beliefs.
So you’re happy to concede that people are willing to adopt kind of crazy, potentially wrong reflective beliefs, but you point out that these are often not especially consequential. It’s intuitive beliefs that do the heavy lifting in our lives, and we’re much more careful about what we believe intuitively and much more resistant to changing our intuitive beliefs. Can you explain the distinction between intuitive and reflective beliefs?
Hugo Mercier: Yeah. Intuitive beliefs are beliefs that are formed usually through perception. If I see there’s a desk in front of me, I have an intuitive belief that there’s a desk in front of me, and I’m not going to try walking through it; I know I can put my laptop on it. And also beliefs that are formed through some simple forms of testimony. So if my wife tells me she’s at home tonight, then I’m going to intuitively believe she’s at home tonight. So I will base my behaviour on that, and I will act as if I had perceived that she was at home tonight, for instance. And that’s the vast majority of our beliefs, and things work really well, and these beliefs tend to be consequential and to have behavioural impact.
By contrast, reflective beliefs are beliefs that we can hold equally strongly as intuitive beliefs, so it’s not just a matter of confidence, but they tend to be largely divorced from our behaviour. So you can believe something, but either because you don’t really know how to act on the basis of that belief or for some other reasons, it doesn’t really translate into the kind of behaviour that one would expect if you held the same belief intuitively.
So an example that is really striking is conspiracy theories. If you take someone who intuitively believes in a conspiracy — for instance, someone who works in a company or in a government, and they’ve seen that their boss was shredding documents or was doing something really fishy, and they have good evidence that something really bad is going on — their reaction is going to be to shut up. They’re going to be afraid for their jobs and, in some places, for their lives. They have a strong emotional component, and their behaviour will be one of really not wanting to say anything, or if they say anything, they won’t want to shout it from the rooftops — they’ll contact a journalist anonymously or something like this.
If you can contrast that to the behaviour of conspiracy theorists who don’t have actual perceptual or first-hand evidence of a conspiracy going on, then these people tend not to be afraid. They can say, “I believe the CIA orchestrated 9/11 and they’re this all-powerful, evil institution” — and yet they’re not going to kill me if I say this. So at worst they’re going to say things, but their emotional and behavioural reactions are really stunted, or really different from what you would expect from someone who would have a similar intuitive belief.
Rob Wiblin: Yeah. You give the contrast of Pakistan, where the intelligence services are known for engaging in all kinds of conspiracies all the time, and for basically committing crimes regularly in order to pursue their agenda. And everyone in Pakistan believes that this is the case, and they know intuitively that it’s the case, and they don’t go out and organise a conference talking about how the security services orchestrated a terrorist attack, because they think that they would be killed.
Hugo Mercier: Yeah, people who have tried would be dead.
Rob Wiblin: Right. And by contrast, in other places where people claim to believe that the security services are equally evil and organising terrorist attacks all the time, they don’t seem to have much fear that there’ll be any repercussion to saying this. And that’s the difference between intuitive and reflective claims.
Hugo Mercier: Yes. And ironically, that means that the more vocal and widespread a conspiracy theory is, the less likely it is to be true, in a way — at least when it comes to an actor that still has a lot of power now. If it concerns old things in the past, then fair enough. But if it concerns an actor, like an institution that is supposed to be really powerful now, then the bigger the claims are, the less likely they are to be correct; otherwise you would not be out there saying them.
Rob Wiblin: Yeah. Can you give some other examples of crazy, or at least not intuitive, reflective beliefs that people claim to believe? I think that there’s got to be some related to religion, where people say that they believe X, but it’s not really apparent that their behaviour is fully following through on the kind of religious dogma that they claim they buy into.
Hugo Mercier: Yes, obviously there are many examples from the Christian faith, for instance, in the sense that if you believe in the theologically correct version of hell, committing any kind of sin that is likely to land you in hell is completely irrational. Because essentially you’re saying, “Well, if I jerk off now, which is a small amount of pleasure, then maybe I will spend eternity in hell” — which doesn’t seem like a good tradeoff. So either people are really stupid, or they’re not fully believing, not intuitively believing, that they’re going to go to hell if they do the slightest little thing.
People also reflectively hold the theologically correct belief that the Christian God is omniscient and omnipresent and omnipotent. And yet when they think about God in their everyday lives, they will think about him as an agent, and then they assume that, say, two people can’t really pray to him at once, because he would have to pay attention to the first person first and then to the second person, which contradicts being omniscient. So there are really lots of examples. I mean, you can believe that God is omniscient, you can be very confident — you’re not lying to yourself; you genuinely believe it, and you’re not lying to anybody — but the fact is that it’s really hard to turn that into behaviour.
To take examples of things that are presumably real: I have a very strong belief that the Earth is currently rotating on its axis at I don’t know how many miles per hour, and going around the sun, and the sun is going around the centre of the galaxy, et cetera. And I don’t feel carsick. I just know that it is the case, but it doesn’t affect me in an emotional manner. Or I think that time and space are related to each other, but that’s not going to make me not want to take the plane because time would go slower. There are things that, even if you accept them and even if they are true, just don’t have a lot of behavioural consequences.
Rob Wiblin: Yeah. So a listener might think there’s a little bit of a bait-and-switch going on here: exempting these false reflective beliefs, and saying that professing some silly idea you don’t actually act on doesn’t indicate actual underlying gullibility or poor judgement. What would you say to that?
Hugo Mercier: That’s true. So the point is that these mechanisms that I think we have of open vigilance that help us evaluate information, to some extent they work more or less intensely based on how important the belief is to us, and what would be the consequences of actually holding a belief. And for beliefs that don’t really matter to us in the slightest, we tend to be less vigilant, and that makes sense.
Another thing is that most of these reflective beliefs, I would think, not only are they not costly for the person who holds them — again, as we were saying, if you’re a conspiracy theorist, the CIA is not going to kill you — but they also tend to possibly have advantages. People who defend conspiracy theories, maybe they’re trying to claim that they know more than others, that they actually have more knowledge than others, that they’re more competent — they can score points within the circle of fellow conspiracy theorists.
So I think that a lot of these reflective beliefs — even though they’re false, and even though they can have really dramatic social consequences; like if a lot of people believe one of these things that is mistaken, that can be bad for society as a whole — again, they don’t have to be costly for the individuals themselves. And on the contrary, they might actually be beneficial for the individuals who hold them.
Rob Wiblin: I suppose the example of someone who doesn’t, on an intuitive basis, believe all of the claims of the religion that’s dominant in their area might nonetheless benefit from, on a reflective basis, saying that they agree with the propositions — because that allows them to get along with everyone else and to fit into the social group. So it’s very natural to just go along with those claims, even if you don’t actually then take the step of thinking, “What should this imply about everything that I ought to be doing?” and truly integrating it into your core intuitions.
Hugo Mercier: Yes, completely. Sociologists of religion who have studied conversion experiences, for instance, have noted that usually you find a religious group that suits your more practical needs. Maybe you’re someone who likes being in a group, you like going to church, you like participating in common activities, you like doing things to help other people, or you just like going along with your family and friends who have already converted. And then you’re going to convert for these reasons. It’s only later on that you will sort of adopt the beliefs that go along with the behaviour, but the behaviour is the root cause.
How people decide who to trust [00:41:15]
Rob Wiblin: OK, pushing on: what rules of thumb do people use to decide who to trust?
Hugo Mercier: There are two main dimensions of trust, really. One has to do with competence — essentially, how likely is it that what you’re telling me is true? And that depends on how well informed you are, how much of an expert you are, whether you’re someone who is very knowledgeable in a given area. And for this, we keep track of informational access, for instance. So let’s say we have a friend in common, and I know that you’ve seen her recently. If you tell me something about her, I will tend to believe you, because presumably you’re better informed because you’ve seen her more recently.
More generally, we are pretty good at figuring out who is an expert in a given area, sometimes on the basis of relatively subtle cues. Like if you have a friend who manages to fix your computer, you’re going to think they’re a good computer person, and maybe you’ll turn to them the next time you have a computer problem.
So that’s the competence dimension: Does that person know the truth? Do they themselves have accurate beliefs? And the other dimension, which is maybe what we really call trust in everyday life, is: Are they going to tell us that? Because even if I can believe that you’re the most expert person in the world in a given area, if I don’t trust you, if I don’t believe that you will share with me the accurate beliefs that you hold, then it’s no use to me.
That second dimension, of trust per se really, depends broadly on two things. One is your short-term incentives. So even if you’re my brother, or you’re a very good friend, if we play poker together, I’m not going to believe you — because I know that if you tell me to fold, you have no incentive to be truthful in the context of poker; we have purely opposite incentives. So there’s this kind of short-term question: what can you get from me with that specific message?
And then there’s the long-term incentives, like: Are you someone whose interests are kind of intermeshed with mine, and someone who would benefit from me doing well? And is that something that’s going to be true moving forward? So if you’re a family member, if you’re a good friend, I know that you don’t have any incentive, or very small incentives, to mislead me — because then that will jeopardise our relationship, and the cost to you as well as to me would be quite high.
Rob Wiblin: Would you generally say that we have good judgement about who to trust?
Hugo Mercier: Yes, on the whole. We make mistakes, but on the whole, I think we’re pretty good. And I think most of the mistakes we make are mistakes of the type that we don’t trust people enough, rather than trusting them too much.
Rob Wiblin: Why do you think we err in that direction?
Hugo Mercier: Because it’s the more cautious direction. There are two reasons: one is that it’s often less costly, or it seems less costly, to not trust someone — in that you’re just losing out on the potential benefit of the cooperation, but you’re not risking something you already have; you’re risking a potential gain in the future instead.
Let’s say you have a neighbour who wants to borrow your drill. If you don’t trust them, no one’s going to take your drill: your drill is safe. What you lose is that if you had trusted them, and it turned out that they were a good neighbour who gave your drill back, then next time you could borrow something from them, so you might have gained something in the long term. That’s what you lose by not trusting. But that cost of not trusting tends to be in the longer term, and it might be harder to imagine, versus the cost of trusting: when someone asks you to trust them, you have to pay a cost now to help them or to believe them, and that cost you have to pay immediately, which makes it very salient.
Rob Wiblin: Yeah. On this point of trust, one way in which people might be credulous is if we just follow instructions from authorities. And the very famous experiments that people frequently cite as demonstrating that kind of tendency are the Milgram experiments. These are super famous; probably most people have heard of them. But just to recap: people were instructed to give increasingly intense electric shocks to another person when that person gave the wrong answer to some quiz or something. And supposedly many of them went along with this, even to the point where the other supposed participants, who were in fact actors, appeared to pass out or even die from the shocks.
Many people will have heard of this, but I think the reality is a little bit different than what is often portrayed. What do people misunderstand often about the Milgram experiments?
Hugo Mercier: On the whole, these are really dramatic experiments that are really interesting, and it’s still quite informative about human nature that some people went along. The figures that are usually put forward, of about 60% going along, are probably quite inflated, in the sense that after the experiment, Milgram and his colleagues asked the participants, “Do you think that there was something fishy going on? Do you think that there was anything that might not have been kind of truthful?” Quite a lot of the participants actually said that they thought there was something maybe a bit fishy, and the participants who said that were those who were the most likely to go all the way. So one way of reading that is that if you suspect that it’s not real, then you’re more likely to comply, because why not?
Another thing that’s important is that it’s not obedience to any authority. There was a lot of argumentation going on. The experimenter who was convincing the participants had to do a lot of work; they had to really exchange with them, say, “Look, this is for science. It really matters for science that you do this. We take all the responsibility.” So it was not just saying, “You do this,” and then participants say, “OK, I’m going to do that,” no questions asked. People were really, really resistant to doing anything like this.
And also, it only works if it’s a scientist from Yale, essentially. That was very prestigious. It still is, but arguably it was even more prestigious then. The participants were mostly kind of lower-class people who might have been really awed by the prestige of the institution and by the fact that it was science. It was not just any random bloke telling you to do this; it was a scientist in a white coat in the basement of Yale telling you to do this.
So people had a lot of cues that are very reliable as a rule, that these people know what they’re doing; I’m not going to be sued for anything because they’re the ones that are taking responsibility. Obviously it’s still interesting that people do that. It’s not nothing, but there’s a lot of caveats that should be kept in mind.
Rob Wiblin: Yeah. I think you point out that among people who didn’t have any doubts about whether or not the experiment was real, only about a quarter of them went all the way to the highest voltage, unlike people who thought that maybe it was a setup from the beginning. And also, when people were commanded to just raise the voltage, then they tended to reject that and refuse. They had to be given what at least seemed like plausible arguments in favour of how this was important for science.
The interesting thing here is they were, I think, deliberately bringing in less-educated people to a fancy-looking lab at the most prestigious university they could get in order to amp up this authority effect and to try to see whether in that circumstance, people would have trust in the authority figures. And the crazy thing is, they were actually right to trust the authority figures, because it was a setup. In fact, there was no damage being done and the university would not have approved an experiment in which the participants were being killed by high-voltage shocks. So in a perverse way, they were actually being rational to think that actually it’s kind of fine to go along with it, because otherwise this wouldn’t be happening.
Hugo Mercier: Yes. You want to believe that if you were in a kind of low-trust society, or if it had been another institution in which people had placed less trust, they would have been less likely to go along — partly because, indeed, these institutions might have been more likely to do the real thing. Then again, these are still really interesting results. And you can tell that, because people’s emotional reactions were very strong; the participants were really distraught, so something was going on. Some of them really felt they were doing something that was potentially really not great, and some of them still went along. So there’s something interesting there. But the conditions that are necessary for that to happen are very specific — and on the whole, quite rational.
Rob Wiblin: Yeah. What’s the other famous experiment from this era of wild psychology research? The Zimbardo one?
Hugo Mercier: Oh, yeah. The Zimbardo prison experiment. But that’s really bogus, this one.
Rob Wiblin: Yeah. I was just going to say, for people who haven’t heard: the prison experiment, which remarkably is also still in many psychology textbooks, is much closer to just being an outright scam. You should basically just remove that one from your memory, and Google it if you haven’t heard what was dodgy about it.
OK, incidentally, in the book you discuss the French, who, it turns out, have a much stronger belief in homoeopathy than people in many other countries. And that’s one of the very strange ideas that you struggle to explain, and maybe think could be a legitimate example of people being fooled into believing something that harms them and doesn’t really have any intuitive justification either. You also point out a belief you found from 13th-century France that preserved umbilical cords could help you win lawsuits.
And you actually begged people in the book to help explain how either of those things came to be widespread beliefs. Did you get any good answers to that appeal?
Hugo Mercier: No. For the umbilical cord one, I’m not really surprised. I guess there are few medievalists among my readers.
For homoeopathy, I’ve never really encountered a great explanation either. Some of the explanations have to do with the mechanism of homoeopathy, but I think few users of homoeopathy are really aware of how it’s supposed to work. I think the main principle that underlies most misguided medical treatments is that when people are sick, or when someone close to them is sick, they want to do something. It’s really counterintuitive somehow to say you’re just going to rest and eat soup and that’s it — even though, for most common diseases, that’s the best thing you can do.
And people want to do something for reasons that are probably quite interesting in terms of showing that you’re not faking it, for instance — like, “No, look: I’m taking a medicine.” And homoeopathy has the advantage of being completely painless; it’s not very expensive.
Rob Wiblin: It’s harmless as well.
Hugo Mercier: Yeah, it’s completely harmless. You’d have to take a lot of homoeopathy to become diabetic — so that’s the main risk. The only real risk, obviously, is if you substitute homoeopathy for treatment when you have a sickness that actually requires one. Like if you have an infection, in many cases you want to take antibiotics. Then again, most people who take alternative medicines, in the West at least, also use conventional medicine when it’s required. So it’s more for when you have a cold, or when your kid bumps their head and has a bruise: these are everyday occurrences for which the cost of mistakenly relying on homoeopathy is very small.
Redefining beliefs [00:51:57]
Rob Wiblin: So yeah, as mentioned above, you want to say that when people profess silly beliefs that they don’t actually act on, or intuitively incorporate into their world model, that doesn’t show deep, real gullibility.
And there are also various other cases of seemingly really daft behaviour that you want to defend and explain in the book as motivated by really understandable, pragmatic, selfish concerns. Basically, if people are persuaded that their self-interest requires them to say that they believe some stupid thing, typically they are willing to do it. But that doesn’t necessarily mean that they’ve been persuaded of the belief on a deep level. So it’s less, in these cases, an epistemic error, and more a matter that they’re kind of being bribed; they’re being paid to claim that they believe in magic or whatever else.
One of the examples of this that you alluded to earlier is that a nontrivial number of people say they believe the Earth is flat. How could you explain that? Typically the reason is that people really enjoy the social group, the kind of social dynamic that comes along with these flat-earther groups. Is there much more to say about that, other than that people who are kind of lonely, and maybe don’t feel like they have many allies in life, often look for unusual beliefs they can all profess that bind a group together? And then that increases the loyalty between them, and allows them to hang out and feel like they have something special?
Hugo Mercier: Yes, I think that’s a potential explanation. It seems as if people who turn towards conspiracy theories are people who maybe don’t have the status that they think they should have. In the sense that instead of being people who influence others, in terms of having strong opinions about current events and these sort of things, they’re mostly down to just, you have to accept what’s in the newspaper, you have to accept what the authorities say — and that might not be fully satisfying.
A lot of people want to contribute to creating their epistemic environment. And if you can’t do that professionally, like if you’re a journalist or researcher or something like this, then it’s tempting to do it in a way that will make up for it — but because you’re not in a nurturing institutional context, it’s likely to go astray. So people do their own research, and they create these sometimes very elaborate and quite knowledgeable theories about vaccination, or the fact that the Earth is flat, or that whoever is killed…
In a way, I can really understand their motivation. I was talking to a journalist who has studied a lot of QAnon people, and he was describing how the work these people were doing, and the feelings they had when they felt they were uncovering new evidence, were not very different from what he felt as a journalist when he was figuring out how a story fit together. So he really understood, in a way, their motivation. Unfortunately, the outcome isn’t great, but the motivation isn’t intrinsically bad.
Rob Wiblin: So the important thing maybe to realise here is that if people are forming these beliefs not because of an error in their ability to incorporate new evidence, but rather because there are selfish motives or reasons why it’s personally beneficial for them to think they believe something, or to act as if they believe it, then it’s not an epistemic error. So giving good arguments is not necessarily going to change people’s minds; instead, you have to make it good for their wellbeing to believe the different thing that you think is more true. It suggests quite a different kind of intervention might be needed to change people’s minds or to help them.
Hugo Mercier: Yes, exactly. If the beliefs haven’t been acquired because you’ve been convinced by careful arguments, then that’s also not how you’re going to get people out of it. And that’s true for religious cults or those sorts of things as well. Usually people, as we were mentioning earlier for religious conversions, join a new religious group because they have practical reasons — like they get along well with the people, they get stuff in the short term that they enjoy. So convincing them that the doctrine is ridiculous is not going to do so much. What you have to do is to provide them with an environment in which they’re going to get what the other environment is able to provide in terms of status, in terms of brotherhood, these sorts of things.
Just to come back to conspiracy theorists and maybe flat-earthers in particular, when you have a really good idea, and you think you’re the first person in the world to have that idea, even if it’s not something massive, it feels really awesome. You feel as if you know something, that you figured out something that no one else really has figured out. Imagine if you had that belief about the Earth being flat. It’s like, “All the scientists, everybody in the world is getting this thing completely wrong. And I know this, and I have this truth that is better than what everybody else is thinking.” This might be quite a high. If you’re able to convince yourself of that, I can see how it would be quite pleasant, in a way.
Rob Wiblin: One phenomenon that you regularly point to in the book is that in repeated interactions, when people have their reputation on the line with other people, they tend to caveat what they’re saying and to be quite careful about what they assert is true — because they know that if they assert X confidently and then X turns out to be false, that they’ll be discredited in future in the eyes of other people. And you could similarly think, shouldn’t people be really careful about going out and saying on Facebook that they think the world is flat, because that’s going to discredit them in everyone’s eyes going forward, if they’re wrong? And maybe on some intuitive level, they realise that they might be wrong.
And you actually have an explanation here for why sometimes it can be useful to say things that are outrageous to the broader population. It can be in your selfish interest to say things that alienate people, and maybe discredit you in their eyes, because it shows your deep commitment to the ingroup that you’re trying to affiliate with — because these are the few people that you think you can potentially trust and make really strong allies. I think you could call this “burning bridges.”
Hugo Mercier: Burning bridges, yeah. I think it was Pascal Boyer who coined that term in that context, yes. The idea is that if you’re part of a small group of people that doesn’t necessarily trust outsiders that easily, one way of showing that they can trust you is to show that you can’t really be part of any other group anymore. So if you say something that’s really offensive… If you look at new recruits in kind of radicalised movements, they will say things that are really awful. Presumably one of the reasons is that they know they are burning their bridges with everybody else — with their family, with their former colleagues, with the rest of society. And once you’ve done that, then you have no other choice but to be faithful to the group that shares these beliefs. Even if the beliefs don’t really matter that much in the end, it’s just a way of displaying your allegiance to a group.
I mean, it’s hypothetical, but it would kind of make sense of why people would express views that so many people are going to find aberrant or plain silly.
Rob Wiblin: Another maybe darker example of this general phenomenon that you mentioned in the book is an official in the North Korean government who announced that they believed Kim Jong Un could teleport magically from place to place. Now, someone might be cynical about a North Korean official saying this, but maybe it’s also conceivable that having lived your entire life inside the North Korean regime, you have been brainwashed into believing that Kim Jong Un has crazy magical powers. But what do you think is the explanation for what’s going on there?
Hugo Mercier: For this, I rely on a nice paper by Xavier Marquez, who’s a political scientist who has coined the term, or maybe inherited it, of “flattery inflation” — which is something that he documents in a number of cases in which you have a dictator, and the people around the dictator want to signal that they would be faithful to the dictator, that they have his back, so they can then benefit from the dictator’s largesse. But it’s hard because the dictator knows that everybody has that incentive, and everybody is going to try to ingratiate themselves with him.
So one possible solution for that is to flatter the dictator in a way that is going to make you look ridiculous, even vis-à-vis the other kind of sycophants that are surrounding the dictator. And so you say things that are increasingly over the top, so that you can say, “Look, I’m the most sycophantic of all the sycophants. I’m the one you can trust, because I’m saying things that everybody else in society is going to think I’m a loon for saying.”
So again, it’s kind of hypothetical, but it fits with the behaviour. And again, these are — to go back to the intuitive/reflective belief distinction — all of this is very much reflective. If people have seen Kim Jong Un control the weather or teleport, I’m pretty sure they would have been quite shocked.
Rob Wiblin: Yeah. Or if Kim Jong Un was going to a conference in Beijing and said he didn’t need a plane, didn’t need transportation. “Why are you organising transport for me? There’s no need.”
Hugo Mercier: “Beam me up, Scotty.”
Bloodletting [01:00:38]
Rob Wiblin: OK, another thread of the book is the following. In some of the most striking cases, where many people reach false conclusions, you want to say the issue isn’t our processing of incoming evidence or our being easily persuaded by other people; rather, it’s a matter of human beings, in some cases, having natural preconceptions that they will tend to arrive at on their own almost every time, unless there’s super compelling evidence provided to the contrary. And the three cases you discuss the most in the book are excessive vaccine hesitancy, belief in creationism, and bloodletting.
Maybe the most striking one, with new facts that I was totally not familiar with, is the discussion of bloodletting. The story I was familiar with here was that the humoral theory of medicine was popular among a particular set of ancient Roman and Greek philosophers and physicians. And the physician-philosopher Galen famously wrote about the humours and bloodletting and medicine in general, and it’s kind of that argument from authority that made bloodletting a generally accepted practice for some 1,900 years. And of course, bloodletting went on to kill an inordinate number of people, because as we now know, when people are sick, draining you of your blood is kind of the last thing you want to do.
Hugo Mercier: It is not a great idea.
Rob Wiblin: You need all of the blood that you have. But you argue in the book that that’s not really why bloodletting was common practice in the Western world for all that time. Can you explain?
Hugo Mercier: Yeah, so I just can’t help but mention a couple of anecdotes. When George Washington fell ill with a throat infection in the winter of 1799, obviously, he was like a semi-god in the US at the time, and so the best doctors were brought to his bedside. And they decided to, over the course of several days, not in one go, bleed him of two and a half litres of blood — which is about half of the amount of blood that a normal adult has. And after that he died somehow. So people are still discussing whether it was the throat infection or the bleeding out, or probably a mix of both, but that didn’t help.
And another example from the same period: Benjamin Rush, one of the founders, was the best-recognised physician in the revolutionary US. And he was not the only one, either. At the time, when there was an epidemic — for instance, the yellow fever epidemic in Philadelphia — they would bring all the sick people together, put them in a big tent, and then bleed them all just a little bit. Like, you don’t bleed them a lot, just a little. The issue is that they bled them all with the same scalpel without being too thorough about washing it, because they didn’t have the germ theory of disease.
So to be fair, in most cases when people get bled, you cut the arm a little, you let a little bit of blood come out, and you often do that as a prophylactic when you have a cold or something. And it’s not great, but it’s not a big deal in the vast majority of cases. In some cases it has gone really wrong, but usually it’s not that bad. It’s never useful, except if you have too much iron in your blood, but that’s very rare.
So as you were saying, the main story we could tell is that these very influential physicians from Greece and Rome had influenced whole centuries and millennia of Western physicians. But as it turns out, maybe about a third of cultures in the world practice bloodletting, and they’ve done that even though they have never heard of Galen, they’ve never heard of the Hippocratic writers, they’ve never heard of any of these influential physicians. So that shows that it’s not these influential physicians who explain why bloodletting was practised.
The practice of bloodletting itself is intuitive, and most cultures will spontaneously stumble upon something like it. It’s going to be bloodletting, or using emetics to make people throw up, or laxatives, or even sudation, making people sweat. The idea is to make something come out of you one way or another when you’re sick. And that’s just an intuitive practice that is bad. Like, just don’t do that. But for a bunch of reasons, it seems to have happened nearly everywhere.
So that shows that, if anything, the theories these physicians created — these very complex, elaborate theories, such as the humoral theory of disease — were created after the fact to justify a preexisting practice, one whose success was due to its intuitive nature, the fact that people intuitively find it compelling, and not to the theories. The theories are really secondary: they go along with the practice, because people like to do things they can justify, but they don’t really cause the practice.
Rob Wiblin: Yeah. You point out that in a very large number of hunter-gatherer tribes, it was common to engage in some form of bloodletting. And obviously these people had not heard of the Greek classics.
Hugo Mercier: Exactly.
Rob Wiblin: If this was all the result of some error on the part of a handful of philosophers, then it would be pretty surprising that it comes up so frequently. And also, you point out that in the hunter-gatherer tribes, they don’t tend to have some elaborate theory for why bloodletting is good. Nothing about biliousness or any of these other humours. Rather, if people are asked, they’ll say there’s something bad in you that’s making you sick, so we’ve got to get the bad thing out, right? Something very intuitive like that.
And it’s only actually when you have a more cosmopolitan, more educated environment — like ancient Rome — that you need to come up with some elaborate theory to justify how it is that bloodletting is a good idea. Because in that more competitive cultural environment, simple intuition might not be regarded as dispositive; it’s not sufficient. So there, you need to write a book, you need to write a treatise to explain why you need to engage in bloodletting.
Hugo Mercier: Yes, exactly. And there’s going to be more competition as well among physicians. So you don’t have just the village healer; you have several physicians, and the physician you’re going to see might be the one who is best able to explain — even if the explanations are mostly bogus — the one who is not just going to tell you, “You need to be bled,” but “You need to be bled, because such and such.” And given that you yourself kind of agree with the therapy in the first place, it looks better if you can understand what’s going on, or if you feel you understand what’s going on.
Vaccine hesitancy and creationism [01:06:38]
Rob Wiblin: Yeah. So similarly, lots of people are reluctant to get vaccinations. I guess people who are reluctant to get vaccinations have been pretty vilified in recent years. One explanation for this belief is generally bad judgement; another might be that people are gullible and that they don’t know who to trust, so they’re trusting these quack doctors. What’s your explanation for why so many people are scared to get vaccinated?
Hugo Mercier: In a way, we can tell that it’s not sheer gullibility, because we find the same pattern everywhere, and we have found it since the beginning of mandatory vaccination or inoculation in Britain about two centuries ago. In every society, there will be a few percent of people who will quite staunchly oppose vaccination, really kind of anti-vax people. And you’ll have 10%, 15%, 20% or more who are more vaccine hesitant. The worst country probably is actually France, which I’m a bit ashamed about. So you have this, and you find that just about everywhere in the world. Again, you find that in England, as soon as vaccination was introduced. So it just seems to be a fact of human nature that some people will find vaccination to be a bad thing.
And I think it resonates with a lot of people. Even people who are mostly pro-vax can at least sort of understand the intuition that injecting something that is related to a disease into a baby that is perfectly healthy doesn’t seem like the most straightforward thing to do. Imagine if you didn’t know anything about vaccination and you encountered a tribe that takes a bit of blood from a sick cow and puts it in a perfectly healthy baby. You’re going to think that they’re nuts. I’m taking the example of the cow because that’s how vaccination against smallpox started in the UK.
So obviously, given everything we know about vaccination, you should do it for all the vaccines that are recommended by the health system. But I can see how it’s not the most intuitive therapy. It’s not like if you have a broken arm and someone said, “Probably we should put the bone right.” “OK, sure. Yeah, let’s do it.” They say, “Oh, your kid is perfectly fine. We should take this thing from that sick person and transform it and then put it in your kid.” It doesn’t sound great. So there’s an intuition I think that many people share, that vaccination isn’t the best therapy.
And we know that this is the prime driver, and not the stories about vaccination causing autism, for instance. Because while in every culture there are people who are going to doubt vaccination, the reasons they offer to justify that doubt vary tremendously from one culture to the next. So in the West, recently it has been a lot about vaccines, the MMR vaccine in particular, causing autism. It used to be that the smallpox vaccine would turn you into a cow. There are many cultures in which it’s going to make you sterile, it’s going to give you AIDS, it’s going to give you all sorts of bad things. The justifications vary a lot, because these are the ones you get from your environment. But the underlying motivation to dislike vaccines is pretty much universal — not universal in the sense that everybody shares it, but in the sense that in every population you’ll have people who are very keen on being anti-vax.
Rob Wiblin: Yeah. It does make a lot of sense. At most points in history, if doctors had said, “What we should do is take the thing that makes someone else sick and put it on you,” then you actually would have been pretty justified in saying, “I don’t know, I think I’m just going to go get the homoeopathy, take just the water,” because that would be a much safer option.
Hugo Mercier: Exactly.
Rob Wiblin: It is quite counterintuitive that you should take someone who’s healthy and then give them a transformed disease, basically. So it’s just an unfortunate fact of reality that that actually is the best treatment.
Hugo Mercier: That is really bad luck.
Rob Wiblin: Yeah, really bad luck for us. It makes me more sympathetic to anti-vaxxers. And I guess it suggests that for every generation, the burden of proof is on vaccination. Every generation has to be persuaded anew that, despite what you might think, this is safe. And when the diseases that people are getting vaccinated against are common, it’s easier to demonstrate to people that it’s a good idea, because you can see that the past generation all had smallpox and we don’t, and we’re not dying. So that’s compelling evidence. But when nobody you know has ever had any of these diseases, it’s a lot harder to provide really compelling information. Especially if you’re generally sceptical of authorities or of doctors, how are you going to come to credibly believe that this is safe? It’s quite a deep epistemic challenge.
Hugo Mercier: Yes. And it’s also a moral challenge, because for a lot of people, vaccination is going to protect you, but against something that is unlikely to be really severe. But if enough people vaccinate, then you stop the spread of the disease in the first place, and then the disease can’t affect those people who are more vulnerable and who can be really sick. So it can be rational in a very kind of narrow, egoistic manner, to not vaccinate in some cases — but then it becomes, in many cases, a morally very dubious choice.
Rob Wiblin: OK, a third example that I think we’ll just skip over here, because it has a kind of similar structure, is creationism. In general, people have this intuition that things that exist in the environment were created for a purpose. Which is a pretty sound intuition, because if I look around the room here, the table looks like a thing that was made for sitting at, the light looks like it’s a thing that was made to produce light. So it’s very natural to engage in this kind of teleological reasoning of, what were humans made for? What were trees made for?
So this gives creationism a very easy time. It’s very intuitive to the human mind to imagine that the natural world was made for a purpose, rather than to believe that it was made completely by random chance variations. Evolution is not a simple thing to understand necessarily, compared to the idea that things are made for a good purpose.
Hugo Mercier: Yes. And even when it comes to things that were created by natural selection, not artefacts, it still helps to have this teleological stance — in the sense that if you want to understand an eye, you have to understand it as something that was “designed” to see; if you don’t see it as a kind of artefact, in a way you’re never going to make sense of how it works. So even if you have to remind yourself that something was obviously not created by someone, it still has nearly all the features of something that would have been created by someone.
False beliefs without skin in the game [01:12:36]
Rob Wiblin: OK, pushing on. A few interviews ago, I spoke with the economist Bryan Caplan. Have you heard of him?
Hugo Mercier: Yeah. Obviously, yeah.
Rob Wiblin: One of his key ideas, which he lays out in the book The Myth of the Rational Voter, is that while voters often have very wise and prudent beliefs when it comes to practical decisions that they have to make in their own personal lives — like what car to buy, say — when it comes to big-picture policy issues, they often hold completely daft or demonstrably false beliefs.
And the main reason for this, as he explains it, is pretty simple: when voters have dumb beliefs about policy issues, the practical effect on them is virtually nothing, because the chance that their vote is going to change an election is roughly zero. And they know that perfectly well, so they understandably choose what Bryan calls “rational irrationality” — basically just expressing views that feel good or that make them look good, or just confidently believing kind of random things, because at the end of the day, it doesn’t really matter one jot what they think.
So inasmuch as you think voters and democracies do have some foolish or harmful beliefs, do you agree that that’s an important driving reason?
Hugo Mercier: Broadly, yes, but several caveats. One is that people are not that bad at having beliefs about the economy as a whole, for instance. So at the moment in the US, there’s a lot of talk about a disconnect between the underlying economic conditions in terms of inflation and unemployment rate, for instance, versus what people are saying that they feel that things aren’t going well. They say, “My personal finances are going well, and I have a good job and I’m getting paid more. But on the whole, I think the country is doing poorly.” And that seems to be one of the disconnects that Bryan Caplan is talking about.
However, that’s only exceptional because until recently, public sentiment was actually tracking economic conditions extraordinarily closely. It’s still the case, actually, in most countries in which this data is generated — and in the US, it’s plausible that this is a temporary lapse, and that once people have figured out that the salaries have increased by just about as much as inflation by now, things will converge again. So people aren’t quite as bad as they are often portrayed in having an impression of how the system as a whole is doing.
And it’s also important to keep in mind that when people make voting decisions, for instance, they will decide to vote for the person who best aligns with what they believe on the things they care about. So take someone who is really focused on an issue they care a lot about — abortion, for instance, or something that’s really salient to them. Maybe they won’t know the candidates’ economic positions, but they will know their stance on abortion, and that is going to help that person decide.
So obviously people can’t be experts in every domain, but that’s fine. I agree that people are going to get a lot of things wrong because it doesn’t really matter to them that much, but it doesn’t have to be really bad for democracies.
And another kind of rejoinder — not really to Caplan, who would not advocate for dictatorship — is that things get really bad in a society when dictators get things wrong. Like maybe the most dramatic thing that ever happened in history is Mao’s Great Famine, when he killed, or his policies killed, tens of millions of Chinese people. And that was the case because he had really misguided beliefs, and he had a lot of power to implement them. So when a random citizen gets something wrong, the consequences of that are really, really small, because there are so many checks and balances. Whereas in a dictatorship, when the dictator gets something wrong, things can go really, really badly.
And then again, the logic is in a way the same, because the dictator is themselves largely insulated from the consequences: they’re not the ones that are starving. So the logic is similar, but the consequences are much worse.
Rob Wiblin: Right. Yes. Hard to disagree with that. I think Caplan might say that an example of people getting their engineering, or their ideas about public policy, wrong in a way that’s quite harmful to society is that many people are very scared of nuclear energy, and think that nuclear power is not a good way of generating electricity; it’s not safe or whatever — even though almost all experts in the area, and indeed I, think that nuclear power would have been a great thing to invest in in the ’70s and ’80s, and to double down on and try to make safer and better and cheaper.
I think that is an example where a person who is wary of nuclear power and doesn’t really want to see any more nuclear power plants built, they don’t really suffer any negative consequence themselves because of their own false beliefs; maybe they suffer because the whole of society has this incorrect idea, but there’s not a big reward that they would themselves get by taking time away from taking care of their children to look into the engineering behind nuclear power. It doesn’t make a whole tonne of sense. So it’s easy to explain, from a self-interested or the normal inertia-of-life point of view, how people could end up thinking something intuitive like nuclear power isn’t safe, even if it’s not true.
Hugo Mercier: I agree. That’s one of those cases in which the misperceptions themselves aren’t well understood — I’ve worked a bit on this — we don’t really understand why people seem to have such negative preconceptions about nuclear power nearly everywhere in the world.
But whatever the cause, these misperceptions have had dramatic policy consequences, and it is indeed one of the domains in which you can make a plausible case that it is public opinion to some extent that led countries like Germany or Belgium or other countries to dismantle their nuclear fleet. And studies have shown that thousands of people have died because of this, because of the coal plants that had to be used instead of nuclear power. So it is a case in which it’s not irrational for people to get this wrong, but you could make the case that it is bordering on being a bit immoral, to the extent that they are inflicting costs on others.
The only thing I would say is that in most of these cases — as far as I can tell, but I don’t know enough — it’s more like a public sentiment. If there were a vote in which people had to make a conscious decision of “I’m going to vote on this issue,” then you would have some responsibility to inform yourself before actually voting, so that at least it’s the most informed people who vote. If it’s just, “Well, public opinion doesn’t like this,” then asking people to be held liable for their opinions, as you are saying, is a very high bar. You can’t do everything; you can’t become an expert in everything. So if it’s just that there was a bad general feeling, it’s hard to hold people really morally responsible for it.
Rob Wiblin: Yeah, I think this might be another case where there’s an intuitive human reaction to the idea of radiation, which for some reason really strikes fear into the heart of human beings. So fear of nuclear stuff has an easy road to persuading people. The funny thing, actually, is that people don’t have a similarly fearful reaction to particulate pollution coming from coal — they’re a bit wary of that, but not nearly as wary as they should be. Whereas the nuclear stuff, they get so scared of it, even though the evidence of harm is so weak.
Hugo Mercier: I completely agree. We have some evidence, though it’s not completely clear, that one of the things that’s going on is that nuclear power is tapping into people’s disgust mechanisms. We have this psychology that evolved to help us avoid things that are going to make us sick, so we have an intuition that we shouldn’t touch, much less eat, faeces and urine and anything that comes out of people’s bodies, or rotten flesh — anything that smells really horrendous. And the way these mechanisms work is they tell us that these things contain small things that you can’t really perceive but that are going to make people sick, and that the amount of the thing doesn’t matter — which is mostly true; you can get sick from a very small amount of viruses or bacteria, obviously. And then that’s contagious; it can be transmitted from one person to the next.
And I think it is that template that people apply to nuclear energy, because they think radiation is this invisible thing, a bit like germs and viruses are invisible to the naked eye. It’s this invisible thing that makes people sick, even if they’re exposed to a very small amount of it. And then the people who have gotten sick can make others sick. You see that, for instance, with people who were hurt by radiation, mostly after the Hiroshima and Nagasaki bombings — it was sometimes hard to find people willing to treat them, because they were perceived as being contagious themselves, which by and large was not the case. Some of their clothes might have carried some radiation, but they themselves were not radioactive anymore. So we have this image of nuclear energy that is, again, overwhelmingly misguided. And I think that gives us this bad feeling about it.
And it’s true. It’s funny, but I think particulate matter would work in the same way, because people have the intuition that smoking is going to make you sick. When data started coming out that smoking causes lung cancer, I think intuitively it was like, yeah, I can see that happening. And if there was more media discussion of the effects of particulate matter, even if you can’t really see it — in the same way you can’t really see the particles in smoke — I think people would get it. It’s still less scary, because it’s only if you’re exposed to a lot of it that you’re more likely to get sick; you don’t have this very insidious feeling you have with things that are contagious.
Rob Wiblin: I would have thought that one of the differences between particulate pollution and radiation is that the former has a very simple physical mechanism that I think I intuitively understand, which is that you burn coal or wood, and it produces smog, and then I breathe it in. And that sounds bad, but also comprehensible. Whereas with nuclear power, you’re like, “Where is the radiation? I can’t see it. How much is it? How much is bad? Are there different types?” I’m very educated, and I still find it very confusing. So you can imagine someone who just doesn’t understand — like, what is radiation? It’s super weird.
And I think that confusion means that you just have to take the belief that it’s safe on trust, because you don’t understand. You’d have to have done graduate physics. And then if you don’t trust people, if you don’t trust engineers to that degree, then, you’re just screwed. You’re kind of always going to be suspicious of it.
Hugo Mercier: I agree, but still another small caveat is that at least before Hiroshima and Nagasaki, for a long time in the early 20th century — before, essentially, the harms of radiation were better understood — people would put radioactive elements in makeup and all sorts of products, because they glowed.
Rob Wiblin: Clocks and stuff.
Hugo Mercier: Which is a bad idea. So you still need something to associate the idea of nuclear power with something bad. It’s not completely intuitive. But once people figure out that radiation actually is bad, the way in which it is bad makes it feel much worse than other types of potentially harmful substances.
One consistent weakness in human judgement [01:22:57]
Rob Wiblin: OK, pushing on. In a second, I want to throw some of the trickiest cases I can think of at you, possible counterexamples to the set of explanations that you present in the book. But before that, what’s an important way in which you think our mechanisms of open vigilance and plausibility checking and credibility detection systematically fail? What’s an important, consistent weakness in human judgement and our ability to figure things out?
Hugo Mercier: One potential example is that in some cases we get the same information from different people, but fail to notice that all of these people got their information from the same source. So it feels as if we have a lot of people telling us the same thing, as if they have each vetted it and each carries independent weight, when in fact it all comes from the same person.
In some sense, that’s what you get with intergenerational transmission. You have one generation that somehow forms a given belief, and then they pass it on to the next generation. When you grow up in a society — if you live, say, in a small-scale, relatively homogeneous society — it feels as if your parents and your siblings and your uncles and your aunts and everybody in society has similar beliefs in the ancestors, in what traditions have to be upheld, in taboos, and these sorts of things. And it can feel as if everyone in the group has independently vetted this.
So maybe they have some firsthand knowledge, maybe they have good reasons to believe this. And so you have this incredibly strong evidence: we have all of these people who are otherwise reliable and trustworthy, and they all appear to have come to the same conclusion independently of each other. It would be really foolish not to believe them. Obviously, they have to be right. You fail to take into account the fact that they themselves have all been influenced by the same process, by the previous generation, which means that their opinions are actually not that independent of each other.
Rob Wiblin: Yeah, I think I’ve seen this play out in my social circle sometimes. It’s hard to avoid, because you can have a situation where you notice that a lot of people all seem to agree with some research conclusion — something that’s not that obvious, but that maybe everyone has independently figured out is true. Then you might dig into it and realise that they’ve all just had conversations with this one person who persuaded them of it, and maybe wrote a paper about it.
And that’s all well and good; it’s good to know that one person looked into it and believes X, but you shouldn’t then count it as 10 independent people all having figured it out just because they all agree. You need to know the provenance of the original assertion in order to tell how overwhelming the evidence is just from it being conventional wisdom.
Hugo Mercier: Yes. Although you’re not always warranted in completely discounting the fact that 10 people have agreed with it, because if your friends aren’t completely foolish, the fact that they trust that one person tells you that maybe they’re right to trust that person and to believe what they’re saying. So assuming they’re not just blindly following what that person is saying, which I don’t think happens very often, it’s still some kind of signal that all of them agree that person is right. But it’s still good to know that, at the end of the day, it’s better to just go back to that person and examine for yourself what arguments they are putting forward.
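To make the independence point concrete, here is a minimal Bayesian sketch. It is not drawn from the book or the conversation; the 50/50 prior and the assumption that each person is right 80% of the time are made up purely for illustration. It compares ten people who each checked a claim independently with ten people who are all repeating one original source.

```python
from math import prod

def posterior_true(prior, reliabilities):
    """Posterior P(claim true | every source affirms it), treating each
    source as independent evidence. A reliability of r means the source
    affirms a true claim with probability r and a false one with 1 - r."""
    p_affirm_if_true = prod(reliabilities)
    p_affirm_if_false = prod(1 - r for r in reliabilities)
    return (prior * p_affirm_if_true) / (
        prior * p_affirm_if_true + (1 - prior) * p_affirm_if_false
    )

prior = 0.5   # start agnostic about the claim (assumption)
r = 0.8       # assume each person is right 80% of the time (assumption)

# Ten genuinely independent checks: ten likelihood ratios multiply together.
independent = posterior_true(prior, [r] * 10)

# Ten people who all repeat one original source: only that one source
# actually carries evidence, however many mouths repeat it.
echoed = posterior_true(prior, [r])

print(f"10 independent sources: P(true) ~ {independent:.6f}")
print(f"10 echoes of 1 source:  P(true) ~ {echoed:.2f}")
```

Under these toy assumptions, ten independent confirmations push you to near-certainty, while ten echoes of a single source leave you exactly where that one source alone would.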
Rob Wiblin: Yeah, one of the terms for this is “information cascade,” where you can get beliefs becoming more and more… You look like you recognise that term?
Hugo Mercier: Yes, it is somewhat similar. In an information cascade, the way it’s been usually designed is people are influenced by the people before them, and it looks to you as if each new person has made up their mind independently of others, when in fact they themselves had been influenced by the people before them. So it looks as if you have a lot of confirmatory evidence, when in fact it just so happens that at the beginning of the chain you had a few people who thought so, and then everybody was overwhelmingly influenced by them. And that’s supposed to get increasingly worse, because if you have five and then 10 and then 50 people who all agree, then obviously the weight of the evidence that they’re right is increasingly large.
For the record, that doesn’t happen when you try to do these experiments. Economists have these nice models of how that should happen, in a way, if people are rational to some extent. But in fact, in every group you have enough people who are pigheaded and just going to say, “No, I’m going to ignore what everybody else is saying, I’m just going to go my own way.” And these people completely break these cascades. So they can be annoying, but at least they play that kind of useful role sometimes.
Rob Wiblin: Yeah, that’s good to know. That’ll be my excuse next time.
Trying to explain harmful financial decisions [01:27:15]
Rob Wiblin: OK, so back to the trickiest cases. I asked around for counterexamples, and here are the ones I found hardest to explain on my own, using the tools you gave me in Not Born Yesterday. So the things I was looking for are the following:
- Beliefs that a meaningful number of people are persuaded by — so it’s not just one weird person.
- Beliefs where the person takes actions as if they really intuitively believe that those things are true, and where those actions are costly to the person themselves — like making their lives worse.
- And also, it would be possible or practical for someone to figure out that the beliefs are probably false if they did some sensible research — the sort of thing that might take me a couple of hours.
Does that sound like a good set of criteria?
Hugo Mercier: Yeah. Cool.
Rob Wiblin: OK, so the biggest cluster of things that I think might qualify are really harmful financial decisions. There are a couple of these that we could go through, but two of the top ones for me are multilevel marketing scams and day trading.
Just to briefly explain what those are: day trading is the phenomenon of amateurs or semi-professionals trading shares at home: buying and selling Google stock or whatever, maybe holding it for just a couple of hours or days, then buying and selling again. This is basically just an incredibly dumb investment strategy. I’ve seen studies suggesting that over the medium run, 90% or more of people lose money using it. Compared to just buying an index fund of the whole stock market and not selling anything until you retire, the index fund comes out ahead almost always.
But this activity is more popular than ever, or at least it was really popular during the pandemic, where people were stuck at home, and it’s not uncommon for people to lose tens or hundreds of thousands of dollars in their retirement savings just due to their overconfidence or whatever it is that’s driving them to do this crazy thing.
And then you’ve got multilevel marketing scams, which are a little bit more complicated to explain, but Americans might be more familiar with them because I think they’re most popular there. These are companies like Amway or Avon or Herbalife or Vorwerk or Mary Kay, and millions of people have been tricked by these things. They’re companies that hook you on the idea that you’ll make money by buying products from them and then selling them onto your friends or people you know, be it Tupperware or nutritional supplements or whatever. And they say you’ll make money because you’ll recruit your friends to also become salespeople, and then you’ll get a cut of all of the sales that they make.
To cut a long story short, the result of geometric growth in these schemes, as people attract more and more salespeople and just the limited market size that exists for nutritional supplements and Tupperware, is that only the first few percent of people who join these programmes can hope to turn a profit, and everyone else has to lose out. The mystery is that, for decades, large numbers of people have continued to be tricked into losing large amounts of money and time participating in these dodgy businesses, and that with a bit of Googling, they could have found out that these are just a total trap.
So I appreciate you’re not an expert in these particular case studies, and I’m throwing the most difficult ones I can at you, but do you have any thoughts on what’s going on when people fairly often make these poor and quite large financial decisions?
Hugo Mercier: In the first case, the case of day trading, my intuition would be that obviously it’s related to the same reasons that drive people towards gambling. Essentially, people are gambling in a legal manner, with the stakes much higher, which is funny when you think about it. So I don’t exactly know why people are attracted to gambling, but in relation to the point of Not Born Yesterday that people are quite good at discriminating information, I think in most cases, it’s not something that they have been persuaded by other people to do; it’s something that they intuitively want to do. They don’t need someone to tell them, “You should do this” — on the contrary, actually, maybe if people were to tell them that, they would be more sceptical.
It’s something like intuitively, you hear stories of stocks going up and people making a lot of money. It’s like, why not me? And these stories are true, obviously. They’re kind of selectively reported, because people lose a lot of money as well. But if you talk about not statistics, but about stories, there should be an asymmetry in that you can only lose so much money, but there’s no ceiling to how much money you can win. So it’s possible that a story of someone who loses $10,000, it’s really not interesting at all. But a story of someone who makes millions because they’ve “invested wisely” — because they got lucky, really — is more likely to make the headlines and for people to talk about it. So there might be a bias there in what kind of stories people hear.
But then many people — not everybody, but a lot of people — are attracted to gambling. They feel as if they can make a lot and they don’t fully realise the losses that can ensue. So I don’t think a lot of persuasion is going on there. I think it’s something that a lot of people are just intuitively drawn towards. I don’t know exactly why that’s the case, but I don’t think it’s a problem with people being gullible in particular.
Rob Wiblin: OK, yeah. Let’s stick with day trading for a minute. And I suppose similar stuff is crypto: “number go up” is the term people use for investing in cryptocurrency and just thinking it’s always going to go up. And again, there is this mystery of why people gamble. I mean, people go and play these poker machines where they just get into a zombie-like state, putting money into the machines and losing it. It does seem like a case to me where engineers have found some bug in the human brain and they are exploiting it hard. There are some people who are vulnerable to this hack, basically, and they’re not realising how much money they’re losing, or it’s pleasing them in some way, and so the engineers are able to just extract enormous amounts of money. That does seem like a case where, if it’s not persuasion, it’s manipulation of a sort.
Hugo Mercier: Yes, I agree. I mean, it is manipulating your mechanisms, but they’re not the ones I’m really talking about in the book, the mechanisms that allow you to evaluate communication, because no one has to talk you into going to the casino. It’s just that you hear about it — maybe even from someone who says you shouldn’t go there because blah, blah — but somehow you end up there.
So I don’t know what is the proportion of people who have really kind of problematic gambling behaviour out of all the people who gamble, because obviously people who gamble get something out of it. They enjoy it. So in the same way as most people who drink are not problematic drinkers, just gambling per se is not irrational to the extent that you don’t lose everything and that you enjoy gambling; it’s like any other activity that you can pay to engage in.
And then, like every behaviour, there’s going to be a bell curve of how attractive, how rewarding any activity is. For most people, gambling is moderately attractive or moderately rewarding, and people engage in it moderately, and things are kind of fine. You have people who really don’t like it, and then you have people who have an issue, but this is just going to be the normal distribution for any trait, like people who are overeating, over drinking, taking drugs — everything that you can think of that is rewarding. It’s impossible to be perfectly calibrated, especially, as you mentioned earlier, since we live in such a crazy environment in which people are sort of out to get you to some extent. So you’re going to have people at the end of the bell curve who are going to have problems, but that doesn’t mean that the underlying mechanism is itself problematic.
Rob Wiblin: Yeah. In terms of people having intuitive beliefs that are wrong, economists and finance experts, if you ask them, will tell you that movements of prices on the stock market are basically random. They follow a trend, but the movements around that trend are completely random, and what happened yesterday doesn’t affect what will happen tomorrow. I think the term for this is a martingale — which is to say that at every moment it’s reset, the past doesn’t matter, and it’s moving up and down in these random patterns.
I think that is not intuitive to people. When I try persuading people of this, people who I know in my life, they’re like, “Really? Is that right?” I guess they’re right that if you’re a super professional or you’re an algorithm writer, maybe you could find some predictability in there, and that’s why some people do manage to make money on it — usually people who are really at the cutting edge of finance, not randoms at home. But the idea that there’s no pattern that you could exploit is not intuitive.
And I also meet people who I think many have the strong intuition that if there’s a change in the exchange rate between pounds and euros, and it goes in one direction, then it will tend to go back in the other direction in future. So they’re inclined to wait and expect it to return to where it was before. That feels more normal than the idea that the past is irrelevant, it’s just always going to move in a random motion.
So I wonder if that could be doing something. In the ancestral environment, there weren’t really these systems that were set up where it was so competitive that there was no predictable pattern that you could exploit. So people do have the sense that, “If I pay attention to the line going up and down, I’ll eventually be able to figure out the way to win this.” In fact, that’s just not possible in this case, and so they’re getting a bit tricked.
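As a sanity check on the “no exploitable pattern” idea, here is a minimal simulation sketch. It simply assumes, as Rob describes, that daily price moves are independent random draws (a synthetic series, not market data), and then shows two things: yesterday’s move doesn’t predict today’s, and a naive “bet on it reverting” rule earns roughly nothing on average.

```python
import random

random.seed(0)

# Assumption for illustration: daily moves are independent draws with no
# exploitable pattern -- a toy random walk, not real market data.
n_days = 100_000
moves = [random.gauss(0, 1) for _ in range(n_days)]

# 1) Does yesterday's move predict today's? Check the lag-1 autocorrelation.
mean = sum(moves) / n_days
var = sum((m - mean) ** 2 for m in moves) / n_days
lag1_cov = sum((moves[i] - mean) * (moves[i + 1] - mean)
               for i in range(n_days - 1)) / (n_days - 1)
print(f"lag-1 autocorrelation: {lag1_cov / var:+.4f}   (should be ~0)")

# 2) A 'wait for it to revert' strategy: bet against yesterday's move.
revert_pnl = sum(-(1 if moves[i] > 0 else -1) * moves[i + 1]
                 for i in range(n_days - 1))
print(f"mean daily profit of the reversion bet: "
      f"{revert_pnl / (n_days - 1):+.4f}   (should be ~0)")
```

The point is only that if moves really are independent, no rule based on yesterday’s move can have an edge; whether real markets are exactly like this is an empirical question, but it matches the intuition being discussed here.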
Hugo Mercier: I agree. I mean, the stock market is maybe particularly counterintuitive for the reason you’re sketching. On the one hand, it is true that you have a lot of very clever and technically well-equipped people who are looking into this and doing their best to make money. Intuitively it feels like, if that’s the case, then you should be able to predict something, because these are intelligent agents and you should be able to predict their behaviour. And it’s not intuitive that precisely because you have all of these people competing with each other to try to make the best of it, whatever remains is going to be noise. I don’t think it’s intuitive that you get noise out of the combined behaviour of a lot of intelligent agents competing with each other.
Rob Wiblin: Yeah, because I can’t think of an example in the premodern era where that would have been true. Maybe there were some, but it’s a peculiar scenario.
Hugo Mercier: Yeah, well, no. Not for manmade things. Obviously there’s a lot of randomness in the weather or in these sorts of things.
Rob Wiblin: People think they can predict that too.
Hugo Mercier: Yeah, people think they can predict that to some extent. But I guess also for behaviours, in most cases it was probably better to try to figure out what was going on and to think you could anticipate things instead of just giving up. Yet these are probably heuristics that work in most cases, but that just happen to be mistaken in that very, very weird context.
Rob Wiblin: Another mistake that people I think make is they might make money, but then they’ve done worse than what they would have done if they’d invested in an index fund or if they’d invested in something more boring. So they compare it to nothing, whereas they should compare it to a consistent average return. Though I guess in the day trading case, it’s not going to save day trading, because day traders lose money in absolute terms.
Hugo Mercier: They just lose money, yeah.
Rob Wiblin: OK, what about the other one, the multilevel marketing scams? I’m not sure if they’re a thing in France?
Hugo Mercier: I guess they have to be. People don’t talk about it that much, so I assume it’s less of a big thing than in the US. So you are saying that people waste hours doing this? I don’t know to what extent that time is wasted. I mean, to some extent, there are a lot of people who like selling things. And maybe a lot of these people, if they’re homemakers or people who have some time anyway, and they’d be happy to try to do something with that time to try to make some money, and you’re spending time with friends and with other people that may be interesting to talk to, and you’re trying to sell them this product that you yourself maybe think is not such a bad product, I can sort of see how that would not be necessarily unpleasant.
I’d be surprised if a lot of people gave up a well-paying job to do this, or if deeply introverted people would take that up. So I think there might be some advantages that people perceive, and I just don’t know the data about in how many cases is it really bad, is it really disruptive to your personal finances that you took that up? It’s possible that in many cases you lose a bit of money, but it’s kind of entertaining, and maybe you can make a bit of money if you’re a bit lucky. I just don’t know how bad it is for most people. I’d be surprised if it was really terrible for millions of people. It seems as if people would know about that.
Rob Wiblin: I think the more typical case is that people fizzle out after losing a small amount of money or a modest amount of time. And then there are other people who do get really convinced, and then they lose more substantial amounts. So of course there’s a range.
I think the standard story for why people fall for this is that the mathematics of why it doesn’t work, the mathematics of why it’s a scam where only the first few percent can come out ahead, is not so straightforward. It’s not intuitive, because it involves geometric growth rather than linear growth, and it just requires an ability to dive into spreadsheets that… I remember trying to figure this out myself, and it took me a minute to figure out how to model it, and eventually I did because I knew the result.
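Rob’s point about geometric growth is easy to check with a toy model. All the numbers below (the buy-in, the per-recruit override, the recruiting fan-out, the market size) are invented for illustration and don’t describe any real company’s compensation plan; the point is only the shape of the result: once recruitment saturates, only the small fraction of participants near the top of the tree can recover their costs.

```python
# Toy multilevel-marketing model (illustrative parameters only).
BUY_IN = 500        # what each participant pays to join / stock up
OVERRIDE = 50       # what a recruiter earns per person anywhere in their downline
FAN_OUT = 5         # each participant manages to recruit up to 5 others
MARKET = 10_000     # total people who can ever be persuaded to join

# Build the recruitment tree breadth-first until the market is exhausted.
parents = [None]                      # participant 0 is the founder
frontier = [0]
while len(parents) < MARKET and frontier:
    next_frontier = []
    for recruiter in frontier:
        for _ in range(FAN_OUT):
            if len(parents) >= MARKET:
                break
            parents.append(recruiter)
            next_frontier.append(len(parents) - 1)
    frontier = next_frontier

# Count each participant's total downline by crediting every ancestor.
downline = [0] * len(parents)
for person in range(1, len(parents)):
    ancestor = parents[person]
    while ancestor is not None:
        downline[ancestor] += 1
        ancestor = parents[ancestor]

profits = [OVERRIDE * d - BUY_IN for d in downline]
winners = sum(p >= 0 for p in profits)
print(f"{len(parents):,} participants, "
      f"{winners:,} ({winners / len(parents):.1%}) break even or better")
```

Changing the parameters moves the exact percentage around, but as long as each participant needs several recruits’ worth of commission to cover their buy-in and the pool of potential recruits is finite, the large majority at the bottom of the tree has to lose.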
Hugo Mercier: So you didn’t start selling Tupperware?
Rob Wiblin: Not yet. But by the way, you should definitely get Pyrex Tupperware. Do not get plastic Tupperware. I’m not selling it, but Pyrex Tupperware is the way to go.
On the other hand, if people Google these schemes they can find the results written up. But I wonder whether there’s another social thing going on here, where people are convinced to do it by their friends. It’s set up in quite a clever way, where you recruit other people who trust you, and when the whole thing falls apart, it’s often the destruction of trust between friends that does the real damage. So I think there might be a bit of clever design here, where either the people who set up these businesses, or the way the businesses have evolved over time, ends up hijacking intuitions that were sound in another environment but aren’t working as well here.
Hugo Mercier: That sounds right. But it would be interesting to look at comparative data of if other countries are more vulnerable to this, and if yes, what is the sociology in a way of these places that might explain why people in these countries are more vulnerable?
Astrology [01:40:40]
Rob Wiblin: OK, so another simpler example, I think, is belief in astrology, which is pretty widespread, and I think I’ve heard it’s actually growing. I think many people just treat it like entertainment; they just enjoy it for the sake of it, like the same way that someone enjoys a crossword. But other people do seem to actually base sometimes business or personal relationship decisions on what they think astrology is telling them is going to happen.
How can you explain people intuitively believing, and acting on, the idea that stars many lightyears away are affecting and taking an interest in their lives and changing their dating prospects?
Hugo Mercier: This is also one that I’m not really sure about. It’s interesting that, cross-culturally, it’s not as common as something like bloodletting, but we find it in a number of different cultures — maybe because the movements of the planets were one of the first things that people were able to predict accurately, one of the first things in the natural environment that people were able to figure out. We can tell that Mars is going to be in that place in the sky at such-and-such time. And so, at least before modern science, if you run into someone who is able to tell you, “You see that bright thing in the sky? In two weeks, it’s going to be over there,” it looks very impressive.
Obviously it shouldn’t be the case anymore, but that might help explain how astrology gets started. It’s plausible as well — I don’t know if it’s true or not — that early astrology was a bit like applying for grants if you’re an early astronomer, because no one is going to pay you just to look at the stars for the sake of it. But if you can go to people and say, “Give me money, because I’ll be able to predict when you’ll become king” or whatever, then they’ll give you money to do what you really want, which is just figuring out how the stars and planets work.
So astrology is one specific case, but broadly, divination practices are quasi-universal. In just about every culture, people will engage in some form of divination, whether it’s reading tea leaves, entrails, whatever. So we know that that exists. There is a desire, obviously, to know the future, which is very understandable.
And it’s possible that one of the functions of divination more generally was as a way of coordinating people. If you’re in a group, and you have to make a decision, and it’s hard to figure out what the best decision is and there will be conflicts of interest, then divination is really a way of randomising things. We’re just going to do something, and then either the person doing the divination is going to tilt the balance in the way that they think is best, or it’s just going to be random. But at least we all agree that we’re just going to do whatever the divination says.
So I think that helps explain why, at least in some contexts, you have these sorts of practices. When it comes to people in modern societies who really base their behaviour on astrology, again, I don’t know the data. As you were saying, most people treat that as mostly a kind of entertainment, but I just don’t know how many people would base really consequential personal decisions on that.
And also, what are the alternatives? There are many decisions for which we have conflicting intuitions. The odds of us making the right call are about 50/50. So it’s not going to make things worse. What would be really shocking, I guess, would be people making really counterintuitive decisions. Like, let’s say you’re with the perfect partner and you love them, and everything about them is like, she’s going to be perfect for the rest of my life, or he’s going to be perfect for the rest of my life. And then you read your horoscope, it’s like, “You should break up with your partner.” I mean, how many people do that? I hope it’s not many. And I really think it’s not many.
Rob Wiblin: That might be one reason why the horoscopes are usually very vague, so you can kind of read into them whatever you want to hear.
Hugo Mercier: Yeah. I mean, a lot of it is going to be post hoc.
Rob Wiblin: I do remember reading — I think they were looking back on divination practices in ancient China — that they detected some pattern in the results which suggested that the dice were loaded, rigged, basically. I think they were doing bone divination, and they identified that the outcomes tended to swing in favour of the people who were in power at the time — whatever conclusion those people wanted.
So that might make some sense. We’re slightly struggling with this one. My take is that the great majority of people do just treat it like entertainment, and probably aren’t actually disadvantaged — because, like you say, they’re not going to do anything that stupid. And maybe this kind of random influence over your life is fine, where sometimes it encourages you, gives you the energy and enthusiasm to go and do something that maybe was a good idea anyway. And if something’s clearly a bad idea, you’re just going to ignore it, so you’re never really led that astray.
For 95% of people who pay attention to astrology, it’s a harmless and possibly accidentally beneficial practice. And then maybe you have some people out on the tail who do take it too seriously, they believe the con a little bit too much and it does end up harming them. But it’s not so obvious that that’s the case that it necessarily puts off everyone else. So the practice can kind of continue. Yeah, I don’t know. It’s a tough one.
Hugo Mercier: Yeah, it is an interesting example.
Medical treatments that don’t work [01:45:47]
Rob Wiblin: OK, a third cluster is medical treatments that don’t work. So some cases of medical treatments that don’t function are subtle and it would be legitimately challenging for even professionals to stay up with the research and figure out what is legitimately a really good treatment for some condition and what’s not. Medicine is a tricky business. But other examples do feel much more clear cut; I think even a moderately bright and attentive person shouldn’t be falling for some of these schemes.
We mentioned homoeopathy earlier, which has no conceivable way of working, and indeed has been proven not to work, but it’s still widely sold. So we talked a little bit then about how that might be sustained in France. Similarly, you’ve got pseudoscientific parts of medicine — like chiropractic or osteopathy that just have no evidence behind them, and a cursory Google search would turn up their extremely suspicious history. But still, people spend a lot of money on this stuff and sometimes it’s not so harmless, because chiropractic treatment can often harm people. It can give them back problems that are even worse than what they came in with.
There are nutritional supplements peddled by pharmacies and nutrition shops that don’t help you, and a number of them — I guess the fat-soluble ones that stick around in your body — can even be actively harmful. But it’s a big business; lots of money is made selling this junk. I could probably come up with a bunch of dubious weight loss or nutrition or dietary fads that achieved massive popularity despite not having much going for them.
So I just threw a bunch of cases at you there. But how might you approach modelling and explaining people using medicines that don’t work, where it’s possible for a reasonable person to figure out that they don’t?
Hugo Mercier: I think you have to think of the demand dimension: why do people demand these treatments? I don’t think the success is due to the supply, in the sense that people develop these therapies and then try to push them on people. I think instead there’s a demand for them. And what makes me say that is that essentially they’re universal. Every culture has this, and that’s in a way more understandable in cultures that don’t have access to modern science and modern medicine. But even in cultures in which essentially very few or none of the treatments they use will be effective, these treatments still exist. And as you were saying, these things persist even when you have modern medicine and modern science to go by.
So that suggests that somehow, for some reason, people want that. And if there is an influence of marketing and these sorts of things, it’s going to be on which treatment people seek, rather than on whether they seek a treatment in the first place — whether they seek a way of stopping their back pain or getting more energy.
So the question then is: why do people want to do something when there is really not that much to be done? And I guess in most cases, it’s a reasonable heuristic that there aren’t that many problems for which doing nothing is going to work. It just so happens that health is one of these things, in which most diseases are going to get better, most ailments are going to improve of their own accord. But that’s not true for most things in our lives. Like if your computer is broken, it’s very unlikely that it’s going to fix itself. So the posture of “When something is not going great, I’m going to try to do something about it” is not completely unreasonable, obviously.
So if you’re in a situation in which it feels as if the official options haven’t done anything for you, then you’re going to turn to something else. So you’ve been to the physiotherapist, and the physiotherapist didn’t really manage to solve your back problems, so you’re going to turn to someone else. And then there will be people who are going to fill that void, because there are problems that are either going to get better on their own, or for which nothing really can be done, unfortunately — or at least not easily.
For chiropractors, my intuition is that obviously the theory is completely bogus, but I think in practice a lot of them are just quite good physiotherapists. They just happen to be, and then they have this whole image because it’s easier to sell if you’re not just a physiotherapist who’s supposed to do basic things, but you have something that’s kind of deeper — you understand things on another level, and you can talk to people. But they’re just good physiotherapists, essentially, and most of them have training as a physiotherapist, and they’re just going to do the thing that any good physiotherapist would do.
Rob Wiblin: OK, so that’s one way that the thing could start out as rubbish, but then they adopt the good methods and then maybe it becomes more reasonable over time. It seems like there’s this cluster of cases where things that don’t work seem like they work. In the health case, as you were pointing out, people typically go and get treatment, especially treatment where the evidence for it working is not so great, when they’re kind of desperate and when things are worse than they usually are — and then when things are worse than they usually are, then they tend to get better on average.
I wonder whether there’s this illusion that anything that you attempt to do when your condition is abnormally bad will seem, after the fact, like it was helpful, because you probably got better in most cases, and there’ll be more people saying, “I tried this, and then I got better” than people saying, “I tried this, and then I got worse” — because generally they just regressed to their long-term-average level of health.
Hugo Mercier: I completely agree. And then again, it’s one of the reasons why we have more superstitious beliefs about health than about your computer. Your computer is not going to fix itself. So if you spray holy water on your computer, it’s not going to fix itself, so you’re not going to believe holy water fixed my computer. But if you have a cold and someone sprays you with holy water, then you will actually get better from the cold, so it’s easier to believe that it works in that case. So you’re completely right. I mean, health is weird in that sense.
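Here is a minimal simulation — using made-up numbers rather than anything from the conversation — of the regression-to-the-mean effect Rob and Hugo describe: if people only try a useless remedy on their worst days, most of them will feel better afterwards even though the remedy does nothing.

```python
import random

random.seed(0)

N_PEOPLE = 100_000
# Symptom severity on the day someone considers the remedy, and a week later.
# The two are independent draws, so the "remedy" can't possibly do anything.
severity_now = [random.gauss(0, 1) for _ in range(N_PEOPLE)]
severity_later = [random.gauss(0, 1) for _ in range(N_PEOPLE)]

# People only bother with the remedy when they feel unusually bad (top ~16%).
treated = [(now, later) for now, later in zip(severity_now, severity_later)
           if now > 1.0]
improved = sum(1 for now, later in treated if later < now)

print(f"{len(treated):,} people felt bad enough to try the remedy")
print(f"{improved / len(treated):.0%} of them felt better a week later, "
      "even though the remedy did nothing")
```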
Rob Wiblin: Yeah. I suppose what’s going on there is that there’s this internal, invisible process fixing the problem that you can’t see. People were not necessarily aware of the immune system and other repair mechanisms — something that occurs in the bodies of animals but doesn’t necessarily occur elsewhere.
OK, then in the finance case, with day trading, it’s harder to see — because we know that most people who try it are losing money in absolute terms: they walk away with less money than they started with. And gambling even more so, maybe. I wonder, though, whether you can have a reporting bias, where people will talk about how they won at day trading if they make money in the occasional… But they don’t tend to go out on Twitter and say, “I’m a moron who invested $100,000 trying to day trade, and then I came away with $50,000.” They tend to obscure that. They might not even tell their partners, let alone strangers. So really, if you’re someone who just looks at the field of people talking about it, it will always seem as if it’s working.
Hugo Mercier: I agree. There’s likely a reporting bias, both in terms of you’re more likely to report positive than negative stories, and the amplitude of positive stories can be much larger than the amplitude of negative stories.
In terms of health — it’s not just health, but particularly maybe in terms of health — another factor is possibly there are people who’ve suggested that one of the reasons why we really want to do something when we get sick is we want to show others that we are really sick. Because when you’re sick, people help you. Your family is going to look after you, maybe your friends, your colleagues are going to pick up the slack, and these sorts of things.
And because of that, malingering is always a risk, and people fake diseases in professional contexts in substantial numbers. So possibly one way of making sure that people are not malingering too much is to tell them, “OK, well, if you’re sick, then you have to take or do this” — and “this” is pretty unpleasant. So if you’re really sick, you say, “OK sure, I’m going to do this, and then you’re going to help me and look after me, even if what I’m doing is completely useless” — whereas if you’re not sick, it’s like, “Well, I’m not going to do that.”
So if you think of many therapies that are somewhat unpleasant — we were talking about bloodletting earlier, sudations, laxatives, emetics — it seems as if a lot of therapies have this unpleasant dimension that, even if the therapy itself is useless, doing it, and imposing it on others, might make sense. And then accepting it when you’re sick might make sense, in that you’re showing others — like drinking kind of castor oil or whatever disgusting thing, it feels more efficient because it’s more costly — but in fact, maybe what you’re doing is signalling to people who might help you that you’re deserving of help, because you’re really sick.
Rob Wiblin: You’re not pretending. Yeah, I never thought of that one.
I guess the story in which humans just had really exceptional judgement and were really hard for people to trick, or indeed just for circumstances to trick, would be that we could do this intuitive adjustment on the selection effect of people over-reporting successes and under-reporting failures, and see through that, and just not update in favour of day trading being effective.
And I guess, to be honest, most people probably do. I don’t know that many people who are fooled by financial things that they see online. I don’t know, maybe I have a nonrepresentative sample, but I think a typical person hears that stuff and is like, “Yeah, I’m not going to do your stupid finance thing.” But then the one in 20 people who is persuaded can get into a lot of trouble, and then you have all of the people who you were saying just intuitively want to do it anyway; they feel like it makes sense.
Hugo Mercier: Yes, I think there’s quite a demand. I mean, obviously gambling is a big, successful industry.
Generative AI, LLMs, and persuasion [01:54:50]
Rob Wiblin: OK, let’s turn now to the question of AI and LLMs and generative models, and whether we should worry about them being able to convince people to believe things that are not true.
I want to be careful here, and I want to be subtle here, because of course there’s going to be some issues here. There has to be. Any new technology for producing and spreading content has got to lead to some misinformation and confusion, at least to some nonzero extent. And the thing I want to push back on is that we should expect a really dire situation with respect to AI, and that we should expect it to lead into big swings in public opinion, or a big increase in the extent to which society is more out of touch with reality than it already is.
As I mentioned earlier, the modern world, with all of its weird, delicious foods, has caused people to eat an unhealthy diet, because their evolved intuitions that previously connected what was tasty with what was healthy are kind of exploitable. So we could imagine that AI-generated content could play a similar role in gunking up our information environment and making it more challenging for us to figure things out, in the same way that… You would disagree with the nacho cheese Doritos analogy — that there are some important differences there — but either way, the environment could shift in a way that makes it yet more challenging for us to discern what’s really going on.
One piece of framing that might be useful is that I’ve found four different ways that people worry AI is going to lead to mass persuasion, and I’ve tried to give them names to help clarify the conversation:
- The first one I call inundation AIs, which are just AIs that write massive numbers of articles, all arguing for some conclusion X.
- Then you’ve got silver-tongued AIs, which produce extremely persuasively written content — somehow they’re like the best lawyer, the best advocate for the position that you could imagine, more persuasive than any human being is now, and so they’re going to be able to persuade people of X.
- Then you’ve got scalpel AIs, which are AIs that write extremely personalised persuasion — these models are going to harvest everything you’ve ever written or posted, and then write an article that’s most designed to convince you of X by figuring out exactly what will convince that individual person.
- And then you’ve got a spam approach, which is just to produce an enormous amount of junk articles and misleading content that causes people to give up on figuring out what is true. So they just confuse people by causing them to reject the entire enterprise.
I think it’s worth evaluating each of them a little bit separately, because the reasoning behind each is not at all the same. So what’s your overall view on whether we should worry about LLMs or AIs making public discourse about important topics worse?
Hugo Mercier: Yeah, I don’t think we should worry.
Rob Wiblin: OK, so you’re a little bit more extreme than me, maybe.
Hugo Mercier: Yeah, I’m happy. I mean, I’m not happy. I would rather not be persuaded of the opposite, because I’m happy not to be worried. I’ve talked to people about this, and then again, my colleagues tend to be on the same side of the issue as I am, to some extent. But I haven’t seen a scenario that I deemed plausible in which AI or LLMs were making things really worse. Obviously I should specify this is not an area where I’m really knowledgeable. Misinformation I know about, but LLMs in particular I don’t know much about, so there might be things I’m underestimating due to that ignorance. But people who know more about these things haven’t been able to convince me otherwise, let’s say.
Rob Wiblin: OK, so maybe let’s go through the four different approaches then, one by one. How useful would it be to take the inundation approach — to generate just enormous numbers of articles arguing for a given conclusion using all kinds of different arguments?
Hugo Mercier: First of all, there’s already an essentially infinite amount of information on the internet. So the bottleneck is not how many articles there are on any given topic, because there is already way more than anybody will ever read; the bottleneck is people’s individual attention — and that bottleneck is largely controlled by, to some extent peers and colleagues and social networks, but otherwise mostly by the big actors in the field: by cable news, by big newspapers. And there’s no reason to believe these things are going to change dramatically. So having another 1,000 articles on a given issue, just no one is going to read them.
Rob Wiblin: Yeah. This is the thing that made me sceptical of this when I really thought about it, if I imagined myself running a propaganda campaign — especially one that’s already financed, and has enough supporters that you could write a meaningful number of articles arguing for something already. I’m like, don’t you hit pretty declining returns just on the sheer volume of them? I could imagine it helping. I suppose if you didn’t have many resources, it could make it a little bit cheaper, because you could potentially write opinion pieces, where otherwise — perhaps if you weren’t very educated or you didn’t speak the language that you were focusing on very well — then I suppose these things could make it cheaper to do that. You could have an assistant.
But the idea that it would be helpful to produce very large numbers doesn’t seem like the key thing, because the question is how do you get people to read them and take them seriously? And there the bottleneck is just at a different stage. It’s not the production stage, which is relatively cheap, I would imagine, in the scheme of things; it’s how do you get anyone to give a damn about what you’ve made?
Hugo Mercier: Yes. Let’s say you wanted to write an op-ed and to push it to people on Facebook or something. You could hire someone to write the op-ed. It’s going to cost you a few thousand dollars. Then getting more than a few hundred people to read it on Facebook is going to cost you a lot of money. And that’s just showing it to people: you still have to get them to click on the thing, and maybe one out of 100 of the people who click will actually read the whole thing. So that’s the bottleneck. It’s not the number of things that are written; it’s how many things people read.
Rob Wiblin: Yeah. I guess it could allow you to come up with more iterations and test more different messages. And some of them will be a bit better than others, so you can get a bit of a gain there.
Hugo Mercier: But even then, that assumes that people will read them. Otherwise you’re going to get no feedback.
Rob Wiblin: I see. Yeah.
Hugo Mercier: People don’t read the news in the first place.
Rob Wiblin: You’re saying that people read the headlines, right?
Hugo Mercier: Some people read some headlines, yeah. Some people do read the news, obviously, but it’s much less than we believe.
Rob Wiblin: Not with great care. Is there anything else that we can say for this? I suppose if I was the Russian propaganda ministry, it would allow me to quickly churn out more opinion pieces about more topics, and perhaps some of them might take off because by chance they turned out well. But yeah, it’s not targeting what I would feel was my key bottleneck, and that’s the thing that limits its impact.
Hugo Mercier: But where you’re right, I guess, is that you can imagine that for this, the impact may be more negative than positive. Because if you’re a respectable newspaper, you’re not going to get your articles written by ChatGPT — whereas if you are a Russian propagandist, you might. So the reduction in cost may be greater for the bad guys than for the good guys. Maybe at the margin it’s going to make a very small difference, because the cost of writing articles, as you were saying, is not the major cost anyway. But if it decreases the cost of doing that a little bit, then they might have more money to spend on disseminating the content. But then again, it’s going to make a very, very small difference.
And also, obviously, LLMs might also help the good guys in other ways. They’re not going to just say, “Write an article about this” and the article is written. But obviously journalists might find it tremendously helpful to ask questions to LLMs to help them do other parts of their jobs, short of writing the final piece.
Rob Wiblin: Yeah. I should maybe say that in the book you discuss fake news and misinformation in general, and to cut a long story short, the evidence that fake news and deliberate misinformation is causing large numbers of people to change their minds about things is not very good. At least in the US, there are a reasonable number of people who do consume fake news, but overwhelmingly they’re the people who are most extremely partisan to start with. So it’s that they want to read fake news that endorses their preconceptions, because they really enjoy it as a sort of recreation, really, and it doesn’t cause them to change their views that much. It’s more that their views cause them to want to consume the information.
Is there anything else you want to say about that, at a high level? People can go away and read the book, of course, if they would like to know more.
Hugo Mercier: No, that’s exactly right. Essentially, if you’re in a democracy — and to some extent even in dictatorships, but clearly in democracies — the informational environment is going to be driven by demand. Overwhelmingly, the things that are there are there because people want to read about them, they want to hear about them, and not because someone is trying to push them. Obviously, journalists and editors also have some agency; I’m not denying that. They’re going to work on some stories rather than others. But the selection bias operated by the population as a whole is going to be so massive, and journalists themselves want to write something that people will read. So mostly if you see a lot of fake news for something, it’s because people wanted to hear this, so presumably they already agreed with it.
And indeed, as you are saying, it’s been quite well shown by now that, first of all, the amount of fake news that circulates is very small, like a few percent of the information that circulates on social networks is fake news — really like 2%, 3% at most. And that very small percent is overwhelmingly consumed by people who are politically extreme and whose views fit with that. So that’s going to have no effect.
Rob Wiblin: OK, to some extent we’re forced to speculate about this second one, the idea of a silver-tongued AI that somehow learns through training to be more persuasive than human beings currently are by just being a superbly persuasive writer, a superbly good advocate. But if LLMs continue advancing far beyond the level that they’re at today, and also maybe they’re fine-tuned for the persuasiveness characteristic in particular, do you think that they could do much to change people’s minds about important topics by just writing really good books or op-eds about this or that?
Hugo Mercier: Again, the bottleneck is how much are people paying attention? As we were saying earlier, on most issues, people are not.
Rob Wiblin: Maybe the most important thing here would be that they could be very entertaining — extremely engaging while pushing a message. So maybe you could at least get an audience in that sense?
Hugo Mercier: Yeah, possibly. But then why that would favour, so to speak, the bad guys rather than the good guys is not clear to me. If an LLM has managed to get very good at this, then you might imagine that, as long as journalists do their fact checking and make sure that everything that is said is correct, why not use an LLM to help make your story a bit more interesting, if they get that good at it?
Given that people are already exposed overwhelmingly more to reliable news than to fake news, any change in how appealing true news is compared to fake news is going to be vastly more influential. So you’d have to imagine that the impact this has on the spread of misinformation is orders of magnitude larger than the impact it has on the spread of reliable information for this to make a difference in the wrong direction.
Rob Wiblin: Yeah. OK, so that’s just on the engagingness. And you’re saying the engagingness of all articles would rise in this situation, so it would be competed to a draw, roughly. I suppose you might have a thing where currently it’s easier for bigger, more resourced institutions to make engaging content because it’s quite expensive, and then in future, this might just become a commodity thing, where it’s very cheap to make something engaging, and that would make it easier for a less resourced group to compete for attention. Does that sound plausible?
Hugo Mercier: Sure. Then again, it’s not saying it would be a bad thing, right?
Rob Wiblin: Yeah, I suppose. Inasmuch as you thought that smaller groups were more likely to be misleading. It depends maybe on how much you trust The New York Times versus others.
Hugo Mercier: I mean, I do trust The New York Times, but there are issues they will get wrong, or issues that are actually important that they will not report on. And maybe a very local newspaper or a blogger might not have the means or the time to create a very compelling story — so if that can help them make a story that is genuinely compelling, and show others that it is compelling, that could be a good thing.
But at the end of the day, people are not stupid. They will get feedback. So if you manage to make a story that is completely a nonstory very compelling, people are going to pick up on that and they’re going to stop listening to you — especially if you have to lie to make the story compelling, or if you have to exaggerate to a degree that’s not good.
Rob Wiblin: So we can imagine making LLMs more and more convincing — they get smarter and smarter, and they become more and more effective advocates for a given view. And on some people’s view, you could imagine that by the time that tops out, in 10 or 100 years’ time, whenever we have the most amazing technology there, you could kind of persuade someone of anything — at least a normal human of today — because your persuasive powers would be so great.
Your model is that it caps out at not necessarily that far above the level of persuasiveness of the most charismatic, the most persuasive human beings today, because we just are not influenceable by language most of the time, or it’s so hard to get people to care what you say.
Hugo Mercier: Yes. For things that are complicated and that would take very long arguments, the bottleneck is going to be attention. I don’t think there is a way of making a good or complicated moral case for an issue that is not completely obvious in a paragraph. And if you have to read a book, I’m not sure if that will ever happen, but let’s say that an AI can write a book that is more persuasive than any human author: it’s still going to be the case that five people are going to read the book, and these people probably have very strong views to start with.
Rob Wiblin: Part of your model of the world is also just that people, when they realise that there’s a risk that they’re going to be tricked, they just shut down. So you can imagine in a world where it’s very easy to produce this compelling, slick content, people learn the lesson that anyone can make slick content and so they just stop paying [attention]. They might engage with it for entertainment value, but they won’t necessarily regard it as very strong evidence for any particular conclusion.
Hugo Mercier: Yeah, that’s a good question. I think people are already very good at this. I mean, people who write smart political books, it’s quite doable to write up an argument that seems persuasive, but because you’re very selectively reporting evidence, is actually really misleading. And people are already very good at this. LLMs will make it easier, no doubt, I would imagine.
But also I want to be careful. What I’m saying on the whole is that I don’t want to make people believe that it’s impossible to change people’s minds, if you do spend the time. What seems to work at scale is really the accumulation of evidence — in particular, evidence or arguments conveyed by people who are kind of near you.
For instance, if you look at how opinions change over long periods of time, we know that opinions have changed dramatically on some things — like gay marriage, trans rights, or earlier, interracial marriage. Many things are changing relatively quickly. Most of that is generational change — young people being different from older people. But for some issues, there are changes within individuals. So for instance, on average, people have become more pro-gay marriage over the past 40 years in most Western cultures. It’s not just that young people are more pro-gay marriage; individual people have changed.
So that kind of change is possible, but it’s only possible when an issue is the big thing that everybody talks about for a long time, and when you have a lot of people in your surroundings who are making arguments. It’s not just reading things in the media. It’s like you talk to people in your family, your friends and your colleagues. And that works. Like it really can have a dramatic impact on society.
But it’s hard to imagine how LLMs… Maybe they can grease the wheels a little bit by providing better arguments. But then again, the main bottleneck is attention. That’s not going to happen for every issue that there is, because every issue can’t be in the headlines all the time.
Rob Wiblin: Let’s talk a little bit more about what does work. I think you’re saying that direct experience of things is persuasive to people. I guess the classic example is they meet someone who has come out as gay and they’re like, “This person seems totally fine to me.” They meet them with their partner and they’re like, “I don’t really see anything wrong with this relationship, and I hung out with them.” And that is just enormously compelling to people, more compelling than any opinion piece.
And then you also have people trust their friends and family and colleagues, people they know well. So if several of them start making an argument at them over the dinner table, or at the lunch table at the office, then that gets a lot of weight relative to something that you might read in a newspaper. Is that broadly the picture?
Hugo Mercier: Yes. Honestly, the answer is we don’t really know. For this sort of interpersonal influence, there is some indirect evidence that it plays a role. For instance, people have shown that people who talk about climate change are more likely to come to believe that climate change is real and manmade, and they’re more likely to want to talk to other people about it. There’s also evidence from canvassing, like when you go to people’s houses. In a nice experiment, people went to people’s houses and talked about trans rights for 20 minutes, and that had a small but discernible influence on people’s attitudes.
So you can imagine if these kinds of conversations are repeated many, many times, then that can explain how things change on a large scale. But yeah, you need to have a meaningful discussion from someone who you have no reason to distrust, and they can exchange arguments, they can share their experiences, they can ask you to empathise with these people by imagining experiences you had that are similar to theirs. So it’s possible, but it’s something that is just hard to imagine how you can scale it up.
Rob Wiblin: OK, what about the approach that I called scalpel, in which everyone can be delivered an individualised pitch for X, given their existing views and their personality — so they could be given the arguments that are most convincing, given their preconceptions. Do you think that can meaningfully increase the impact of an effort at persuasion?
Hugo Mercier: I think it’s essentially the same as the other one, because probably, given how good people already are at making arguments, the only thing stopping a book by Peter Singer or some brilliant philosopher or thinker from persuading more of the people who read it is that he can’t personalise the arguments. Like, if Peter Singer could talk to every individual reader, I would assume that he would be way more persuasive, because I’m sure there are many counterarguments that he hasn’t been able to put into his books. And obviously he’s extremely clever, and so presumably he would be way more persuasive in person.
So I think that if there is going to be a delta in persuasion compared to what’s already out there, it’s going to be in that personalisation. But as we were saying, even that, it’s not clear how you would scale it up, because people just don’t have that much time.
Rob Wiblin: I see. So the bottleneck becomes, you could come up with a personalised argument for them, but how do you get people to pay attention to you? And there, just as everything becomes more entertaining — maybe because LLMs are able to make things more entertaining; they’re able to do a mashup between, I don’t know, an argument for nuclear power and Hamilton or some hip-hop thing — but then they’re competing with everyone else who’s trying to do the same thing. So it’s quite hard to get an edge for one particular view over other things in such a competitive information environment.
Hugo Mercier: Yes, personalisation can do great things. If you could watch the exact series that would appeal to exactly your taste, in a way that would be really awesome.
But on the other side, we also consume content — whether it’s news or fiction or anything else — to some extent because we want to be able to talk to others about it. So if you watch a movie that was made just for you, but that no one else can watch or enjoy because it’s not their taste, it’s going to spoil some of the fun. Likewise, if you read some news or if you hear some arguments that are only compelling for you, and that if you try sharing them with others it’s not going to appeal to them at all, it reduces the interest you have in having the thing in the first place. And it reduces what political scientists call “two-step flow” — that you can’t convince other people in turn, because the thing has been so personalised to you that the buck kind of stops there — and we know that a lot of persuasion comes from people being convinced by media or by government, and then passing on that knowledge or those beliefs to others.
Rob Wiblin: Finally, I want to come to this other worry, which I called spam. To me, this is the most plausible way that AI could make the information environment worse. This is just helping to produce and disseminate just vast amounts of low-quality or misleading content, even more than exists currently. And now we don’t really expect that it’s going to persuade anyone to buy any particular conclusions, but it’s an alternative effect where it’s just increasing the noise that’s out there; it’s just cluttering up the internet with lots of untrustworthy information, where it’s kind of effortful to figure out that it’s untrustworthy because it looks like a paper, it looks like a real study, it looks like a real blog post.
So people realise that this is the case and they just begin to mistrust everything that they see a little bit more, because a lot of the cues that they might use to judge the credibility of things are no longer as reliable as they used to be, because they’re too easy to fake. So the end result is just that, for practical reasons, they give up on trying to form strong views on most topics, and they end up feeling that it’s just not really worth the effort to learn about what’s going on, because it’s so easy to generate some fake video of events that never happened, or fake papers purporting to show some conclusion, or fake accounts creating the false impression about what people believe.
I think one reason I worry about this is that, as I understand it, this has been an approach that many governments have used with their own populations when they’re worried about them rebelling against them: just to produce an enormous amount of noise and confusion. Where it’s not that people believe that the regime is good, but they no longer trust any particular thing that they’re observing, so they just kind of opt out of public discourse.
And I think it’s kind of already the case for me to a great extent in many areas, because basically, I don’t trust anything I read about the Russian invasion of Ukraine on social media, because it just seems overrun with misleading propaganda from both sides, because almost everyone who talks about it is an advocate for one view or another. So the end result of me observing this was that I invested less effort in understanding things, because it seemed too hard to separate truth from lies.
But this could be worse, and it could be widespread across more issues. What do you think of that risk? Can you imagine that being something that plays out over time?
Hugo Mercier: My intuition would be that most people still rely on curation to a large extent. So if you’re going to trust a piece of news to some extent, you make up your own mind based on the content of the news — and if it’s something that’s too implausible, then you’ll be sceptical. But for all the things that are within the range of things that are broadly plausible, the main moderator is going to be the source. So if you read that in a reliable newspaper, or if it’s tweeted by a colleague that you trust, at least within a given area of expertise, then that’s how you know that the information is reliable, or at least kind of worth considering.
And the fact that there’s a lot of junk out there shouldn’t change that fundamentally. The only problem would be if these curators of information — these people who relay information to you, or create it themselves — become less reliable; if their job becomes so hard that they stop being reliable, then everything stops working. But I’m not sure that LLMs are going to make the jobs of journalists that different, in terms of figuring out what’s true or not. I mean, you still have to talk to people, you still have to check your sources. And in many ways, LLMs can help them as well. So on balance, it’s not clear it’s going to make things harder.
You’re right when you were saying that obviously the strategy of many governments that already have a not-very-trustworthy political system is to increase that mistrust, so that at least potentially more trustworthy agents can’t gain a foothold. That’s why the Germans were trying, in the Second World War, to discredit the BBC: because they knew it was impossible to get the Germans to believe German propaganda anymore, but at least they could try to discredit the other side. And you have the same thing in Russia and China, et cetera. But that can only work if there is not much trust to start with. If you have some actors that are trusted, it’s not obvious how you’re going to make that trust go away.
Rob Wiblin: I guess one difference, or one thing that’s distinctive about the case of an authoritarian government in a low-trust society just flooding the zone with noise and misleading information in order to cause people to give up on figuring out what their political views should be, is they have the ability, the power to push things out to people. They can grab this misinformation and force it into people by influencing newspapers and so on, so the attention bottleneck is less of a problem. And they’re not competing with private, already credible actors that people can turn to for curated, credible advice — because they’ll just shut down those newspapers, for example, or they’ll kill or lock up the people who are too trusted already and actually could serve this useful function.
So I guess if you get into a situation like that, then now you are in trouble, and maybe it will be helpful for the Russian government to kind of pull the wool over the eyes of their own people, or at least cause them to give up on understanding things, to be able to produce enormous amounts of content all the time, videos of anything that they feel like. But hopefully, in a free society, where you already have credible authorities where people can kind of hope that they’ve done their homework to figure out whether something really happened or not, hopefully that effect should not be so severe.
Hugo Mercier: Yeah. Then again, you know Russia is doing that, and with some effectiveness. People say that Russia has been really good at brainwashing the Russian people into supporting the invasion; I think mostly they’ve been really good at stopping non-Russian media from changing Russian people’s minds about the invasion — so people stuck with their priors that Ukraine wasn’t great, and that if the Russian government decided to invade, they had good reasons. Which is already really bad.
Most people don’t really care about the news and these sorts of things that much. And for these people, you don’t need flooding. I mean, you just need to have one official channel that they’re going to watch anyway, and that tells whatever you want. So for these people, it doesn’t matter much.
And then for the people for whom it really matters, it’s not that hard to find reliable information on most things. Some issues are really kind of urgent, and in the few hours after an event it may be hard to get information. But looking back, even after a few days or weeks, it’s not that hard to find reliable information about most things. And so if you’re one of these people who really wants to figure out what’s going on, stopping you from doing that — except with really outright, very strong censorship — is going to be a bit difficult, I think.
But because you’re mentioning flooding, a government that does flooding a lot is China. When something goes wrong and the people might be upset with something, one of the things they do to stop too many people from talking about it and potentially from organising for things to change is they flood social networks with celebrity news, gossip, and other things that people are by default more interested in than in hard news. So they don’t even need misinformation.
Again, you can see how LLMs could generate more of that content, but it doesn’t seem that hard to generate, and they’re already kind of doing it. So maybe at the margin it will cost them a little bit less to do it, but I don’t think it’s going to be vastly more efficient.
Rob Wiblin: A question mark for me, when I was thinking about this in prepping for the interview, is: you and I, we’re adults, and we’ve grown up in a pre-generative-AI era, where we already have a lot of established understanding of the world and opinions. So I think it does get quite difficult to create any radical shift in your or my worldview. Not easy at all.
But imagine that you were born today, and so you grow up your entire life in a world where, by the time you’re paying attention to any of this, in five or 10 years, the majority of articles might be primarily written by generative models, the majority of video might be generated by AI, the majority of images might be AI generated. I just don’t know what cumulative effect that has when you don’t have a pre-generative-AI understanding of the world — one where you could trust what you saw, you could trust that videos were real, you could trust that articles were at least written by a human being who was putting in some effort.
I’m not necessarily saying it will be bad, but the way that this could play out over decades, as generations turn over, I feel less confident about that.
Hugo Mercier: I don’t know. I mean, if you think about photography, essentially we’re both born in a world in which you can make fake photographs that are really hard to discern from the truth, because it can be done with very low-tech methods, and then there’s Photoshop and these other things. And yet when we see a picture in The New York Times, you still believe that picture broadly happened.
I’m not sure why it would be different for other things. People are still going to be interested in what’s true. Even now, people don’t watch the news that much, but they watch the news a bit, even though they could only consume fiction if they wanted. So if it’s a matter of just consuming things that are not true, we could already max out on that now without any problem, and many people indeed do that. But if you’re interested in figuring out what’s true, you won’t want to see something that is made up; you’ll want to see what you think is the actual thing. And as long as there are these institutions that curate things and that vouch for what they’re spreading, things are not going to change all that much.
And I think to some extent the same is true for writing. In a way, you can see LLMs as continuous with what already happens: someone who just writes — who’s not a professional writer — goes to school, writes for 10 years, and gets extremely good at writing. The difference between a naive person’s writing and an experienced journalist’s writing is larger than the difference between a journalist’s writing and even the best LLM in the world ever, because the difference between a completely naive person and a professional journalist is just so massive. So we’ve already had most of that gain in terms of making articles that are well written and persuasive and all of that.
What matters at the end of the day is who is vouching for it — like whose name, whose institution, whose reputation is going to be damaged if the thing turns out to be false. That’s what matters: that there is a system in place of reputation, and that if things go wrong, if the information turns out to have been mistaken, then there’s someone that you can punish — and therefore there’s someone who has an incentive to keep things reliable.
Rob Wiblin: Let’s accept, for the sake of argument, that it does become easier to produce misleading content in future. I think some people envisage that the outcome of this would be people’s opinions being changed in all sorts of random directions all the time. Whereas I think your model — and now my model, having read the book — is that it’s not that people would start changing their mind more; it’s that people would start changing their mind less. Because when people make really complex arguments in some field that I don’t understand, and I don’t trust the person, I don’t believe that they are an authority really, and I can’t check the argument that they’re making for myself because I don’t understand it well enough, I simply don’t change my mind.
And likewise, in future, if people start noticing that it’s possible to trick them into believing stuff all the time because they’re incapable of noticing that a video is doctored, for example, then they just stop changing their mind at all in response to these inputs, because they always just have the option of keeping their current views.
Do you agree that that would be where things would, in a bad case, potentially bottom out?
Hugo Mercier: Yes, it’s increasingly easy to say, “That video has been made up, so…” et cetera, et cetera. I’m not sure how much the technological impossibility of doing something ever was such a strong argument. If 40 years ago someone were to tell you, “Look, there’s this picture that appeared in The New York Times; I think it’s a fake picture,” would your argument really have been, “Well, it’s impossible to doctor a photo”? Or would your argument have been, “Well, it’s in The New York Times and also in The Washington Post and also everywhere else”?
I think the argument always rests really ultimately on reputation, and not on the technical possibility of doing such-and-such trick. There are always people who want to say, “I don’t believe in that,” and they’ll have more excuses, but I don’t think it’s going to make a big difference.
Rob Wiblin: So here I’m picturing a scenario where, let’s say, we end up in a worse case than that — where The New York Times doesn’t exist, or The New York Times is no longer credible, and so there’s no particular authority that you trust to determine the provenance of an image or a video, and to determine whether it’s real or not. In that case, I think what happens is you just stop paying attention, and you stop changing your mind.
Hugo Mercier: Yes.
Rob Wiblin: So hopefully we can solve it by having trustworthy sources and institutions that people believe have done the legwork to figure out if things are true. But if they don’t, it won’t be mass persuasion; it’ll be mass indifference and mass stubbornness, I think.
Hugo Mercier: I agree, but I’m kind of, I guess, an optimist by nature. As long as there is a demand for truth — as long as some people, at least for some issues, or most people for some issues, or some people for every issue, really care what’s true — and if there’s no government intervention to stop institutions from meeting that demand, that demand is going to be met somehow. If there are people who really care about the truth, it makes sense for newspapers to appear that are mostly going to report the truth.
And that’s what we see in the evolution of newspapers in the US: if you only have one newspaper in a given place, things are not really stable, because if they say something wrong, no one is going to call them out. But as long as you have more than one source of information, then it’s really hard for one of them to get away with something that’s really false, because all the other ones are going to call them out on it. If you have enough, there’s enough demand to pay for people to do that job, and for not just one institution, but for several institutions to do that job. And if the government is not stopping that from happening, it will happen. Like, why not?
Rob Wiblin: Yeah. On this topic of trustworthiness, one comment I want to make is I think people underestimate the importance of official misinformation, which is the term that I use for lies or misleading claims that are just reported as fact by governments or officials or academics or major newspapers. Personally, I don’t think that happens especially often. But when it does, it packs a much bigger punch than any kind of Russian botnet on social media.
Hugo Mercier: Obviously, yes.
Rob Wiblin: Because a Harvard academic going on PBS and asserting some incorrect thing about their specialty area is naturally very convincing to people, understandably. And the reach is vastly larger as well.
So even though I think those cases are atypical, I think calling them out and trying to stamp them out is really important, because each instance does a lot to spread confusion and misunderstanding. And in the long run, it undermines trust that there are any authorities that we can turn to and be reasonably confident that they’re doing their best to tell us the truth and not just pushing some agenda that we don’t share. So yeah, I really don’t like official misinformation. We’ve got to get rid of it.
Hugo Mercier: I completely agree. That’s the only way of keeping these institutions reliable, and therefore of keeping people’s trust in them. For instance, one of the best predictors of how prevalent conspiracy theories are in any given country is the degree of corruption and mistrust there. So in countries in which there is less trust in institutions, conspiracy theories flourish, even if they’re not correct conspiracy theories. But this general atmosphere of “I’m not going to trust what the government is telling me” is really very deleterious, obviously. So yeah, they have to be absolutely called out.
Ways AI could improve the information environment [02:29:59]
Rob Wiblin: Finally, I’m curious to quickly consider ways that AI or LLMs could improve our ability to figure out the truth and have productive conversations online, or even in real life maybe.
Three ideas occurred to me in prepping for this interview. Firstly, you could imagine that in future, every social media post on an important, controversial political topic could have an individualised fact check next to it — or even a reasoning check, where the LLM tries to point out ways that the argument being presented might not hold together, or might contradict known claims on Wikipedia or something. That might not have to be mandatory, but social networks could give a boost to posts that opt in to that kind of fact-checking or reasoning-checking system.
Another option would be having the opposite of misinformation bots on social media. So you could counter misinformation with information bots that are just as cheap and just as numerous and prolific, but programmed to do the opposite: to try to make useful points as politely as they can. In principle, I don’t see why fighting fire with fire in this way should be impossible. And the information bots might attract more followers, inasmuch as people care about the truth and can distinguish it at all, because they would be good sources of information. It does require that someone put in the effort to do the legwork, but it seems like a promising idea.
And then finally you could have LLMs working to identify misleading AI-generated accounts, and flagging them for removal or reduced exposure. Again, I don’t see why the same tools that work for offence can’t work for defence here. It seems like you could, I don’t know, set a thief to catch a thief.
What do you make of those ideas?
Hugo Mercier: I completely agree. I think that a lot of people have speculated about the negative potential consequences of LLMs for the informational environment. Even if those exist, I would guess that they would be small by comparison with the positive consequences. As a rule, when you have a technology that makes transmitting information easier, there are going to be bad actors who use it for bad purposes. But I can’t think of a single case in which they have made things worse in the aggregate. And I think, as you were saying, there’s a lot of potential for these technologies to help spread information. If they could automatically suggest community notes on Twitter, for instance, that might be helpful.
What would be interesting would be to see how the informational environment segregates then, on that basis. Because you might imagine that some people are not there necessarily to just have the most accurate information, to put things nicely, and so they might not want every one of their posts to be flagged, saying, “Ah, actually, that is not true.” They might find that somewhat annoying, and so they might just go somewhere else. So then that’s the question: is it a good thing or a bad thing? But at least it’s plausible now to imagine a context in which people could have a social network like Twitter, in which you have a relatively efficient and automatic fact-checking mechanism, which has never been possible so far, because it’s so demanding in man hours.
Rob Wiblin: Too expensive, yeah. I’m a little bit more nervous than you, and I’m glad that there are people trying to predict ways that this could go wrong, like what negative effects it could have. Because I think this is a case where there are some ways it will be harmful, but they can be addressed as long as people stay abreast of how things are going wrong, look for defences, and look for ways for society to adjust in order to make this stuff better.
For example, it is true that now you can potentially fake video or images, and so people are trying to come up with responses to that, where they track the provenance of images and video better in order to more easily distinguish what comes from a trustworthy source and what doesn’t. But that does require someone to do it. Someone actually has to notice that this is an issue and address it. So that’s why I’m always glad to see people working to fix problems, even the ones I think are fixable, because someone has to do it.
Hugo Mercier: Someone has to do it.
Rob Wiblin: Exactly, right. And likewise, you could imagine a world in which people try to do the things that I just listed, and other better ideas that they might come up with, or we could be lazy and not do it, and then maybe the negative effects would outweigh the positive effects.
Hugo Mercier: But then again, looking back, there have been people who’ve done the right things. So let’s just hope that that trend keeps going.
Rob Wiblin: Yeah. All right, final question. This is slightly predictable, but what’s the dumbest thing you’ve ever been persuaded of? And why did you let it happen?
Hugo Mercier: I hope that most listeners are not going to make it this far, because if they hear that, then they’re going to discount everything that precedes it. So when I was in my early 20s, we were watching the news on TV with a friend, and they said that in football, in soccer, they had an issue that not enough goals were being scored. Which is kind of true, I guess. And so they were going to fix that by attaching an elastic band between the bar of the goalpost and the goalie to make it harder for the goalie to move, so that more goals would be scored.
And yeah, it was an April fool’s joke, but for a short time, we were really annoyed, like, “What? But that’s insane!” And the excuse, I guess, is I was probably quite high. So I don’t know what the counterfactual was.
Rob Wiblin: So we can still trust you, as long as you’re not high right now.
Hugo Mercier: I am not. You have to trust me on this.
Rob Wiblin: All right. My guest today has been Hugo Mercier. Thanks so much for coming on The 80,000 Hours Podcast, Hugo.
Hugo Mercier: Thank you. That was great.
Rob’s outro [02:35:19]
Rob Wiblin: If you found that episode interesting, here are some others related to one theme or another:
All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire and Simon Monsour.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.