Enjoyed the episode? Want to listen later? Subscribe by searching 80,000 Hours wherever you get your podcasts, or click one of the buttons below:

I think what I’m most concerned about is the shredding of a shared meaning-making environment and joint attention into a series of micro realities – 3 billion Truman Shows.

Tristan Harris

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.

Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it’s hard to remember how recently it was a fringe view.

It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we have less trust in our fellow citizens, due to having less of a shared basis of reality.

But while it all feels plausible, how strong is the evidence that it’s true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.

At the same time, historians think that the printing press probably generated religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them and them alone the ability to spread propaganda across the whole of Germany and the USSR. And a jury trial — an Athenian innovation — ended up condemning Socrates to death. Fears about new technologies aren’t always misguided.

Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.

Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.

Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what’s in our interests as users and citizens?

One way is to encourage a shift to a subscription model. Presumably, that would get Facebook’s engineers thinking more about how to make users truly happy, and less about how to make advertisers happy.

One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.

But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.

One possibility is for Congress to say: it’s unacceptable for large social media platforms to influence the behaviour of users through hyper-targeted advertising. Once you reach a certain size, you are required to shift over into a subscription model.

That runs into the problem that some people would be able to afford a subscription and others would not. But Tristan points out that during COVID, US electricity companies weren’t allowed to disconnect you even if you were behind on your bills. Maybe we can find a way to classify social media as an ‘essential service’ and subsidize a basic version for everyone.

Of course, getting governments more involved in social media could itself be dangerous. Politicians aren’t experts in internet services, and could simply mismanage them — and they have their own perverse motivation as well: to shift communication technology in ways that will advance their political views.

Another way to shift the incentives is to make it hard for social media companies to hire the very best people unless they act in the interests of society at large. There’s already been some success here — as people got more concerned about the negative effects of social media, Facebook had to raise salaries for new hires to attract the talent they wanted.

But Tristan asks us to consider what would happen if everyone who’s offered a role by Facebook didn’t just refuse to take the job, but instead took the interview in order to ask them directly, “what are you doing to fix your core business model?”

Engineers can ‘vote with their feet’, refusing to build services that don’t put the interests of users front and centre. Tristan says that if governments are unable, unwilling, or too untrustworthy to set up healthy incentives, we might need a makeshift solution like this.

Despite all the negatives, Tristan doesn’t want us to abandon the technologies he’s concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.

Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we’ve ever had — tools that could educate and organise people better than anything that has come before.

The tricky and open question is how to get there — Rob and Tristan agree that a lot more needs to be done to develop a reform agenda that has some chance of actually happening, and that generates as few unforeseen downsides as possible. Rob and Tristan also discuss:

  • Justified concerns vs. moral panics
  • The effect of social media on US politics
  • Facebook’s influence on developing countries
  • Win-win policy proposals
  • Big wins over the last 5 or 10 years
  • Tips for individuals
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

Key points

*The Social Dilemma*

In climate change, if you went back maybe 50 years, or I don’t know, 40 years, you might have different groups of people studying different phenomena. There are some people studying the deadening of the coral reefs, some people studying insect loss in certain populations, some people studying melting of the glaciers, some people studying species loss in the Amazon, some people studying ocean acidification. There wasn’t yet a unified theory about something that’s driving all of those interconnected effects, but when you really think about existential threats, what we want to be concerned about is: If there’s a direction that is an existential one from these technologies or from these systems, what is it?

In the case of social media, we see a connected set of effects — shortening of attention spans, more isolation and individual time by ourselves in front of screens, more addiction, more distraction, less productivity, more focusing on the present, less on history and on the future, more powerlessness, more polarization, more radicalization, and more breakdown of trust and truth because we have less and less of a shared basis of reality to agree upon. When you really zoom way out, I think what I’m most concerned about is the shredding of a shared meaning-making environment and joint attention into a series of micro realities. As we say in the film, 3 billion Truman Shows.

Each of us is algorithmically given something that increasingly confirms some way in which we see the world, whether that is a partisan way or a rabbit hole sort of YouTube way. And I know some of that is contested, but we’ll get into that. If you zoom out, that collection of effects is what we call human downgrading, or ‘the climate change of culture’, because essentially, the machines profit by making human beings more and more predictable. I show you a piece of content, and I know, with greater and greater likelihood, what you’re going to do. Are you going to comment on it? Are you going to click on it? Are you going to share it? Because I have 3 billion voodoo dolls or predictive models of each other human, and I can know with increasing precision what you’re going to do when I show it to you. That’s really what the film and our work is trying to prevent, and not really litigating whether the formal addiction score of TikTok in this month versus that month is this much for exactly this demographic.

Conspiracy theories

Conspiracy theories are like trust bombs. They erode the core trust that you have in any narrative. In fact, one of the best and strongest pieces of research on this says that the best predictor of whether you’ll believe in a new conspiracy theory is whether you already believe in another one, because you’re daisy-chaining your general view of skepticism of the overall narrative and applying it with confirmation bias to the next things you see.

One of the best examples of this is that a Soviet disinformation campaign in 1983 seeded the idea that the HIV virus raging around the world was a bioweapon released by the United States. And this was based on an anonymous letter published in an Indian newspaper, and it ended up becoming widely believed among those predisposed to distrust the Reagan administration. And as my colleague, Renee DiResta, who used to be at the CIA, and studied this for a long time, said, “As late as 2005, a study showed that 27% of African Americans still believe that HIV was created in a government lab”. And so these things have staying power and re-framing power on the way that we view information.

Facebook

Do you trust Mark Zuckerberg to make the best decisions on our behalf, or to try to satisfy the current regulators? Do you trust the regulators and the government that we happen to have elected? And as you said, there’s a strong incentive for Facebook to say, “Hmm, which of the upcoming politicians have the most pro-tech policies?” And then just invisibly tilt the scales towards all those politicians.

I think people need to get that Facebook is a voting machine, and voting machines are regulated for a reason. It’s just that it’s an indirect voting machine, because it controls the information supply that goes into what everyone will vote on. If I’m a private entrepreneur, I can’t just create an LLC for a new voting machine company and just place them around society. We actually have rules and regulations about how voting machines need to work, so that they’re fair and honest and so on.

Obviously we’ve entered into another paradox where, if we want Facebook to really be trustworthy, it should probably have completely transparent algorithms in which everyone can see that there’s no bias. But once they make those algorithms transparent, there’ll be maximum incentive to game those algorithms and point AIs at literally simulating every possible way to game that system. Which is why we have to be really careful and thoughtful.

I think the heart of this conversation is: What is the new basis of what makes something in this position — a technology company with all of this asymmetric knowledge, data, and collective understanding of 3 billion people’s identities, beliefs, and behaviours — what would make anyone in that position a trustworthy actor? Would you trust a single human being with the knowledge of the psychological vulnerabilities and automated predictability of 3 billion human social animals? On what conditions would someone be trustworthy? I think that’s a very interesting philosophical question. Usually answers like transparency, accountability, and oversight are at least pieces of the puzzle.

The subscription model

We’re already seeing a trend towards more subscription-oriented business relationships. I mean the success of Patreon, where people are directly funded by their audience…more recently Substack…you have many more journalists who are leaving their mainstream publications and having a direct relationship with their readers, being paid directly through subscription. And you also have, by the way, more humane features in Substack. They let you actually, for example, as a writer, pause and say, “Hey, I’m not going to write for the next two weeks”. And it’ll actually proportionally discount the subscription fees, letting the author live in a more humane way and take these breaks. So we’re not creating these inhumane systems that infinitely commoditize people and treat them as transactional artifacts. Those are some really exciting trends. And I actually have heard that Twitter might be looking into a subscription-based business model in reaction to The Social Dilemma.

I think what we need, though, is a public movement for that. And you can imagine categorically — and this would be a very aggressive act — what if Congress said we are not allowing a micro-targeting behavioural advertising model for any large social media platform? That once you reach a certain size, you are required to shift over into a subscription. Now, people don’t like that, again, because you end up with inequality issues — that some people can afford it and others cannot. But we can also, just as we’ve done during COVID, treat some of these things as essential services. So that much like during COVID, PG&E and your electricity and basic services are forced to remain on, even if you can’t pay your bills. And I think we could ask how we subsidize the basic version for the masses, and then have paid versions where the incentives are directly aligned.

Tips for individuals

Robert Wiblin: Personally, I basically never look at the newsfeed on Twitter or Facebook. I’ve blocked them on my laptop, I don’t know my passwords for these services, and I don’t have the apps on my phone, so I can’t log into them on my phone. So I can only access them on my computer, and then I’ve got these extensions — one of them is designed to reward you with the newsfeed once you finish a task. But I just never click to finish any tasks, so it just blocks it always.

I’ve also got this app called Freedom which can block internet access to particular websites if you need to break an addiction that you’ve got to a website at a particular time. As a result, well on Facebook I basically only engage with the posts that I myself write — which is a bit of an unusual way of using it — as a result I basically never see ads. On Twitter, because I can’t use the newsfeed, I have to say, “I really want to read Matthew Yglesias’ tweets right now”, and then I go to Matthew’s page and read through them. So it’s a bit more of an intentional thing, and it means that they run out because I get to the bottom and I’m like, “Well I’ve read all of those tweets”.

Tristan Harris: Yeah. I love these examples that you’re mentioning and I think also what it highlights obviously is that we don’t want a world where only the micro few know how to download the exact Chrome extensions and set up the password-protecting hacks. It’s sort of like saying we’re going to build a nuclear power plant in your town and if there’s a problem you have to get your own hazmat suit. We don’t want a world where the Chrome extensions we add are our own personal hazmat suits.

We have a page on our website called Take Control, on humanetech.com, where I really recommend people check out some of those tools. You know, it starts with, first of all, an awareness that all of this is happening. Which might sound like a throwaway statement to make, but you can’t change something if you don’t care about changing it, and I think people need to make a real commitment to themselves, asking, “What am I really committed to changing about my use of technology?” And I think once you make that commitment, then it means something when you say you’re going to turn off notifications.

And what I mean by that is really radically turning off all notifications, except when a human being wants your attention. Because most of the notifications on our phone seem like they’re from human beings who want to reach us — it says ‘these three people commented on your post’ — but in fact those notifications are generated by machines, by AIs, to try to lure you back into another addictive spiral.


Transcript

Rob’s intro [00:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems and what we should do to solve them. I’m Rob Wiblin, Head of Research at 80,000 Hours.

Today’s guest Tristan Harris is likely to be known to many of you because he’s been advocating for us to take the downsides created by social media more seriously for many years, and because he’s featured heavily in the recent and popular Netflix documentary The Social Dilemma.

I was really sympathetic to the conclusions of the movie, but not totally convinced by the arguments they presented.

I think the producers didn’t give enough room for counterarguments, so I was excited to get to test the strength of the case with Tristan, as well as explore a bunch of possible concrete solutions – a topic which was almost entirely left out of the documentary.

Even if you’ve listened to Tristan on a bunch of other podcasts, I think that the lengthy section on possible solutions will still be fairly novel. If you want to go straight to that it’s about 1hr20m into the episode, or you can skip to the chapter called ‘Possible solutions’.

We also talk about moral panics, US politics, the influence of Facebook in the developing world, big wins over the last 5 or 10 years, and tips for individuals like me who’ve struggled with social media addiction in the past – we go through all the tools I personally use there, so if you find yourself wishing you used your laptop differently I can recommend sticking around for that.

Alright, without further ado, here’s Tristan Harris.

The interview begins [00:01:36]

Robert Wiblin: Today, I’m speaking with Tristan Harris. Tristan is an American computer scientist and the founder and leader of the Center for Humane Technology, a nonprofit organisation that works to prevent technology contributing to internet addiction, wasted attention, political extremism, and misinformation. He’s become one of the most visible voices, or I guess audible voices, questioning whether the effect that email, social media, and YouTube have is positive overall. Before starting CHT, he worked as a design ethicist at Google and studied the ethics of human persuasion at Stanford. He was most recently prominently featured in the Netflix documentary The Social Dilemma, which covered the possible harms being caused by social media and other technology products that distract us or contribute to shaping our opinions. Thanks for coming on the podcast, Tristan.

Tristan Harris: Absolutely. My pleasure, Rob. Good to be here.

Robert Wiblin: All right. I hope to get to talking about how strong the evidence is for online services having negative effects and how those effects might be reduced. But first, starting at the big picture, what are you doing at the moment and why do you think it’s really important?

Tristan Harris: One of the reasons for our conversation now is this recent film, The Social Dilemma, which I think really brought these issues into the mainstream. We see ourselves as a social change organisation that is trying to overall change the core incentives and dangers of the new digital habitats that we really are inhabiting. They’re not just products we use. Facebook is not just a tool or a casual application. They’ve really come to take over the meaning-making and sense-making of entire worlds, and that’s never been more true than in a COVID world where we’re more isolated than ever at home and we rely on social media to figure out, is Portland a war zone right now? Are there guns and rioters everywhere, or is it actually a beautiful peaceful day? If there are distortions in the way that technology is shaping the meaning-making that 3 billion people are doing in hundreds of languages and in hundreds of countries, we ought to really know about that.

Tristan Harris: This film, The Social Dilemma, is a Netflix documentary that came out on September 9th, and I think it’s one of the reasons why we’re talking today. I think much of the appropriate skepticism that some people have had is…is it a salacious overselling documentary, or is it incredibly accurate and subtle in the points that it’s trying to make? My goal today would be to convince you that overall it is very accurate in how it is describing the mechanisms of the overall system. Just to say a couple quick words about The Social Dilemma, in the first 28 days, it was seen by 38 million households in about 190 countries and in 30 languages. So, I really feel like it’s become a kind of “Inconvenient Truth” for the tech industry. And like An Inconvenient Truth, which drew some skepticism because the timescales Al Gore predicted were not exactly correct, it’s really about the broader trend: are the mechanisms here leading to a gradual ecological breakdown of the environmental life support systems that we rely on?

Tristan Harris: I think the similar argument that I hope to make with you today is that technology really is creating a kind of climate change of culture, where there are predictable interconnected effects that overall trend in directions that erode the core life support systems of a social fabric, from our trust mechanisms to the basis of our attention, our productivity, our ability to not be distracted, all of these kinds of phenomena. I’m sure we’ll get into all of that.

Center for Humane Technology [00:04:53]

Robert Wiblin: Okay. Yeah, we’ll come back to The Social Dilemma in just a minute. But I’m curious to know, what is the Center for Humane Technology’s strategy? Are you aiming to convince people in the tech companies that you would like to change gears, or are you thinking about policymakers, or are you just trying to convince the public as a whole that there’s an issue here that needs to be grappled with at a really broad level?

Tristan Harris: Yeah. The Center for Humane Technology is a nonprofit vehicle that we founded to support the overall work of systems change. This might sound very abstract to people, but systems change is incredibly complex. Climate change is an externality of an infinite growth-based economic system. Every time the Fed prints trillions of dollars, it’s printing IOUs for materials, extraction, pollution, etc. So there’s a direct connection between our economic system and climate change, in a similar sense that there’s a direct connection between the core financial incentives of technology companies and the social impacts. So, you can’t just raise awareness by itself. You need to work on policy, you need to work on cultural awareness from all sides, because you can’t convince policymakers to do something unless there’s a public with a majority view that something needs to change, so that these different problems aren’t dismissed as conspiracy theories.

Tristan Harris: You need to convince mental health professionals, educators, parents, students. We work on full-scale systems change. I don’t know if you know the work of Donella Meadows, who was a systems change theorist, and she wrote a great essay called, I think, the 12 Leverage Points to Intervene in a System. You can change the parameters of a system, the taxes, subsidies, or you can change the fundamental mindset or paradigm on which all the assumptions in that system rest. I think a lot of our work actually goes to that level. I’ve been working on this topic for probably close to eight or nine years now, starting a little bit before Google, even, as a design ethicist, and one of the core things we were trying to overturn was the way these problems were framed as an issue.

Tristan Harris: Is technology addictive? Yes or no. You could approach that question with kind of clinical thresholds of, well, how do we measure addiction? Is someone really not sleeping? Do they have withdrawal symptoms if they stop using the product? That really wasn’t the kind of overall framing that was accurate to all the phenomena that we were seeing, which included a multitude of effects from using it for longer than you intended, getting distracted way more often, finding yourself habitually doing something that you didn’t want to do, finding that the things that were, let’s say negative behaviours, were intertwined with core infrastructure you had to use on a daily basis.

Tristan Harris: For example, for many people — say if you’re a social media marketer, you can’t do your work if you decouple from these systems, so to the extent that they manipulate your use, or you don’t feel good when you use them, you’re sort of forced into it, in the same way that many high school students are forced to use Facebook groups to do homework. What makes these systems pernicious is the fact that their overall structure is infused with the way our social fabric works. It’s no longer a question of whether I use the product or not, because a teenager who says, hey, I saw The Social Dilemma the film and I deleted my Instagram account, I don’t want to use it anymore, they will suffer from social exclusion because all of their friends at their high school still use Instagram, and that’s where sexual opportunities and homework and everything else is still discussed.

Tristan Harris: I think that’s one of the things we’re going to have to look at is, how is this operating at a systemic level? To be a little bit more clear, yeah, we work on working with policymakers, we speak with former heads of state, governments, we speak with people, researchers who work on the harms. This film, it’s important to note, was made over the course of three to four years. When we get into some of the things that it talks about, a lot of those interviews were actually filmed many years ago before people had any notion that there was a problem at all with social media.

Critics [00:08:19]

Robert Wiblin: Yeah, in doing some background research for this episode, I found there are a lot of supporters of your view, and there are also a lot of detractors. I guess, by really taking up the mantle on this question and becoming a figurehead for it, you’ve kind of drawn a target on your head for people who don’t agree with your message. How have you found it on a personal level, being talked about on these social networks and sometimes criticized quite readily?

Tristan Harris: That’s very kind of you to point that out. One of the funny aspects of this is that part of our critique is that, in the attention economy, hate has a home field advantage. The more extreme and salacious and incendiary the comment, the more likely it is rewarded, and many of our critics participate in that by saying fairly rude or ad hominem type things sometimes. You have to grow a thick skin, I think, to do this work. What’s interesting, and I wish people could see more of, is the overall trajectory over eight years, which I’m hoping to get into because when you’ve been at this as long as we have, and you’ve been trying to convince people of something, you see this whole train wreck that’s emerging, where you see Facebook group recommendations recommending extremism, or conspiracy theories. You see YouTube having a highly partisan, highly polarizing effect, stretching back for a decade.

Tristan Harris: When they reduced those effects over the last two years, which they’ve done, and I’m sure we’ll get into that, I think people can take our rhetoric as hyperbole because they say, well, it’s really not that bad in 2021, now everyone’s talking about these issues. Compare that to the fact that over a 10 year period when no one was talking about them, we had to yell at the top of our lungs that there was really an issue here that led them to now make some of those changes.

Tristan Harris: I will say that the core lens that we use to examine these effects is through the persuadability and vulnerability of the human mind. That’s really my background; I think it’s the thing I’m most expert in. As a kid growing up, as talked about in the film, I was a magician, and magic is really about an asymmetry of knowledge: what the magician knows about your mind that you do not know about your mind. If you knew the thing that the magician knows about your mind, then the trick would not work. Although there are still persistent effects around attention: even if you know that I’m manipulating your attention, your attentional instincts are so strong that the magician can continually manipulate them. But I’m hoping we’ll get into that. I actually have a lifelong sort of exploration of the persuadability of the human mind. I actually went into cults for several years of my life and studied how cults manipulate people, really going into some of them, where you’d find doctors or lawyers or PhDs, incredibly intelligent people, who would get sucked into this rhetoric.

Tristan Harris: It’s sort of like what people say when you find these astrophysicists who are simultaneously at the top of their field, but then might be highly religious, and their religious views contradict their knowledge in physics. In that contradiction, their kind of core beliefs and identity tends to win. I think these are all really interesting. I’ve studied neuro-linguistic programming, a little bit of hypnosis, pickpocketing. Actually, one of my favorite experiences was going to a pick-pocketing hypnosis and magic retreat in Bali and hanging out with magicians there for weeks. Anyway, that’s to say, I think that when we get into this conversation, we should be thinking about this in terms of the asymmetry of knowledge and power of technology manipulating aspects of human nature that we are not aware of in ourselves. To sort of loop it all the way back around to your question about how it feels to be criticized, one of the vulnerabilities of the human mind is negativity bias.

Tristan Harris: Rob, if you made a post and you said something, and you got 99 positive comments on that post, and one of those comments was negative, where does your attention go? If you were ‘a rational actor’, you’d be weighing the information 99 to one, and really holding onto and remembering the positive information, in a 99 to one ratio. But of course that’s not how our mind works. Not only does our mind focus on the negative information, but it holds onto and loops on that negative information, because it’s actually an ego attack. I think that when we talk about not just my own experience of critiques that people may have of The Social Dilemma or our work, it’s just to say that we’re all vulnerable to this, and I face that kind of effect just like anybody else.

Robert Wiblin: Yeah. It’s interesting. I think I used to be more bothered by that. I feel like over, well, it’s maybe been a 10 or 15 year period, but gradually, I think our personalities can adapt somewhat to the jungle that is social media, and we can learn to maybe…Well, I think, to some extent, I’ve learned to kind of enjoy people being really vicious and to find it entertaining, but that has been a long journey, and I’m not sure it’s entirely good because then you can end up with a contrarian desire, where it’s like, you almost feed off of the negative attention. That’s bad in its own way.

Tristan Harris: That’s right. Yeah, it’s interesting. That’s right. It’s very interesting. I think that when people become accustomed to saying things that they get enough negative feedback about, they just surrender to the fact that they’re living in a negativity machine. It can lead people to dig into a politics of grievance, where they actually say things just to provoke their adversaries and critics. I worry that’s actually one of the core effects driving polarization: people start to become accustomed to living in a world where everything you say has its context collapsed, because you’re saying one thing in one context, and there’s going to be a whole bunch of other people living in a different context who say, I can’t believe you said that, that’s horrible, and their counter-critique comments are going to be rated at the top of the comment threads. One of the things I worry about is how that effect is going to play out in the long term.

The Social Dilemma [00:13:20]

Robert Wiblin: Let’s push on and talk about the argument you’re making in The Social Dilemma. What is the big-picture nature of the problem you’re working on? As you said, there’s so many different things that are kind of interlocking, that it can be a little bit hard, I think, to conceptually get a grip on exactly…Yeah, what are all of the different failure modes that these products can have, from a user’s perspective or society’s perspective?

Tristan Harris: Yeah, absolutely. We really look at this through the lens of, if you zoom out and blur your eyes, what is the trajectory of incentives and of power dynamics and influence that are driving certain sociological trends as a result of social media’s influence and impact? Again, you blur your eyes. You’re not asking what YouTube’s algorithm was doing in February 2016 versus November 2020, because you actually have a very different algorithm, and we’ll get into that, I’m sure. But if you zoom in and ask, how addictive is social media for teen girls, or how distracting is email? You’re going to see that litigated in studies in the academic literature.

Tristan Harris: But if you zoom out and say, are there certain broad, consistent, predictable effects that we will see? We sort of think about those as the climate change of culture, coming from an extractive business model based on harvesting and monetizing human attention. That’s really the meat of what The Social Dilemma as a film is criticizing: the business model that depends on harvesting and modifying people’s behaviour through monetizing their attention. And that asymmetry of power that enables the machines to know something about you that you don’t know about yourself — leading to more and more predictable outcomes about how you use the product in more influenced ways — will have certain effects.

Tristan Harris: Let’s get into them specifically. Much like in climate change, if you went back maybe 50 years, or I don’t know, 40 years, you might have different groups of people studying different phenomena. There are some people studying the deadening of the coral reefs, some people studying insect loss in certain populations, some people studying melting of the glaciers, some people studying species loss in the Amazon, some people studying ocean acidification, and there wasn’t yet, maybe, a unified theory about something that’s driving all of those interconnected effects. When you really think about existential threats, which your community is familiar with and studying, I think we want to ask: if there’s a direction from these technologies, from these systems, that is an existential one, what is it?

Tristan Harris: With our case, we see a connected set of effects between shortening of attention spans, more isolation and individual time by ourselves in front of screens, more addiction, more distraction, more polarization, less productivity, more focusing on the present, less on history and on the future, more powerlessness, more radicalization, and more breakdown of trust and truth, because we have less and less of a shared basis of reality to agree upon. When you really zoom way out, I think what I’m most concerned about is the net shredding of a shared meaning-making environment and joint attention into a series of micro realities, as we say in the film, 3 billion Truman Shows.

Tristan Harris: Where each of us is algorithmically given something that is increasingly confirming some way in which we see the world, whether that is a partisan way or a rabbit-hole sort of YouTube way, and I know some of that is contested, but we’ll get into that. If you zoom out, that collection of effects is what we call human downgrading, or the climate change of culture, because essentially the machines profit by making human beings more and more predictable. I show you a piece of content, and I know, with greater and greater likelihood, what you’re going to do. Are you going to comment on it? Are you going to click on it? Are you going to share it? Because I have 3 billion voodoo dolls, or predictive models, of each other human, and I can know with increasing precision what you’re going to do when I show it to you. That’s really what the film and our work is trying to prevent, not litigating whether the formal addiction score of TikTok in this month versus that month is this much for exactly this demographic.

Robert Wiblin: Interesting. So you’re kind of focusing on the idea that perceiving the world through online services causes us to just have completely different perceptions of reality. Do you think that is kind of the #1 issue that’s been created?

Tristan Harris: I think on a ranked order of harms to the life support systems of society, I think that is really one of the strongest. Yeah. I think that’s one of the most concerning.

Robert Wiblin: I guess I approach this, and a lot of things, with my economics training, because that’s what I did in undergrad. That sometimes brings a bit of skepticism about how products could be bad for users. I’m always kind of grasping for: is there some market failure here that’s causing this product to be bad, even though people are choosing to use it? I wonder, is it just that, when people all have a different reality and all form different views about the world, in fact that doesn’t really harm them individually, it’s only harmful at some systemic social level? So people don’t have a reason to stop using Facebook just because it’s causing them to have opinions that are divorced from reality, or divorced from the opinions of other people. Do you think that’s kind of the market failure issue?

Tristan Harris: Yeah. I appreciate you bringing this up, because at the individual level, you could have optimizing benefits: infinite scroll, for example, makes scrolling more efficient. You can see more news in a shorter period of time. So you could actually interpret that positively, even though the film talks about it in a negative way, and we can talk about that. But if you zoom out, and that creates a world in which more and more of people’s time is basically spent scrolling by themselves between critical hours when otherwise they may need (and this may not apply in the COVID world) to schedule time being with other people, in general, that’s crowding out other fundamental life support systems of a society. Which is to say, we need touch, we need eye contact, we need human connection, we need to be with our friends.

Tristan Harris: If you have a world in which people are increasingly — and this is really not a screen time versus no screen time or smartphones versus not smartphones thing, it’s really about how the business model creates a specific set of effects — but overall, a world in which people are more isolated, more trapped in their own reality has a social balance sheet, like you said: a sort of market failure for the commons, where our attention commons and our information sense-making commons are being gradually polluted. In the case of infinite scroll, we have less and less available shared attention to be with each other, and more and more time being with ourselves. In the case of personalization, you said, we may each be getting what we ‘want’, but I also want to question that in our conversation. Because I think, philosophically, one of the things I’m excited about talking with you today is breaking down the authority of human choice, because human choice comes from the authority of human feelings.

Tristan Harris: If human feelings can be sufficiently manipulated, it really isn’t a choice. In the same way that, according to Facebook and YouTube, if they ask, what are people spending the most attention on? And then if they say that’s what people like… Attention is not a rational act. It’s not a choice. When you drive on the freeway on the 101 in San Francisco and everyone stares to the right and they see a car crash, according to the social networking platforms, car crashes are what the entire world wants. So I actually think there’s a philosophical bug in the mental paradigm of Silicon Valley, which I’ve seen really play out over the last eight years, because people really didn’t see it this way. I think we’re gradually getting to a world where people are able to recognize self-reinforcing feedback loops, and the technology that doesn’t make a distinction between what we want versus what we’ll watch is going to lead to many long-term harms to the social balance sheets.

Three categories of harm [00:20:31]

Robert Wiblin: Yeah. Okay. Maybe I’ll give my categorisation of things, or the way that I kind of structure it in my head, and then maybe you can connect to that. Because I think the last one maybe corresponds to this attention issue, or misreading what people want. I guess I kind of structure it as three broad issues that I see people raising in The Social Dilemma and similar discussions. The first one I think of is negative effects on politics. There, I think of the main market failure being that, when people engage in political debate, they’re often doing it for kind of entertainment, or just because they find it interesting, or because they want to flatter their own views. And if people form inaccurate beliefs about policy issues, or they end up forming beliefs that cause them, say, to vote for a candidate that on reflection they actually wouldn’t prefer, that doesn’t actually harm them personally. They don’t internalize the cost of those negative beliefs, because in reality, the chance of affecting the election is negligible.

Robert Wiblin: From a self-interested point of view, it’s rational to form beliefs that are enjoyable to have, because the impact on you of voting incorrectly, in some sense, is not really there. Because people don’t really have strong incentives to form accurate beliefs within politics and policy, their beliefs just kind of tend to drift around to whatever happens to be most grabbing or most appealing for them to believe. So there’s not a strong feedback loop that drives people to believe the truth, or believe what they really would if they were kind of wise and had a lot of time to reflect. Then there’s the second category, which I think of as negative effects on users. That’s a little bit harder for an economist; maybe there’s a little bit more resistance to saying there’s a market failure here.

Robert Wiblin: But I think there is something going on where it’s like people seem to be able to be given a compulsion to use products that harm them in the bigger picture, that maybe they enjoy in the immediate term, but it actually isn’t improving their life all things considered. I guess, here, you’ve got these products getting paid in proportion to how much time they can get you to spend on the product, which is kind of a perverse way of companies being reimbursed because you don’t necessarily want to be giving them your time. You want to be receiving benefits, but they’re paid in proportion to how much time they can grab from you, and so they designed it to be as grabby as possible and to get you to use it, even when on reflection, maybe you wouldn’t want to.

Robert Wiblin: Then there’s maybe a third category, which I find a little bit harder to put my finger on, but it’s something about the idea of companies and advertising having a lot of influence over us, like being able to predict what we’re going to think too much and being able to just move our beliefs around a bit too easily. Potentially, we can be puppeted around and caused to do things that again, on reflection by our wiser self, we wouldn’t really endorse.

Tristan Harris: Right. Even in the third case, there’s a question of authenticity, where even if you were to ‘enjoy it’, it’s not clear that, on reflection, you would endorse that choice. It’s sort of a retrospective ethics: not just what was good in the moment, but, on a minimization-of-regret framework, would people feel good about it? But really, it’s just that technology is getting increasingly good at being able to shape and steer human behaviour, and I hope to be able to convince your audience of that, because I know a lot of people think that advertising is not effective. But it’s really not about advertising; we’ll get into that. On the first one, about politics, I think it’s pretty simple, which is to say that the societies that can communicate and coordinate best, and reach agreement and consensus, are going to perform best as democratic societies. Meaning they’re going to feel high agency, they’re going to feel accountable; the people will feel like there’s a feedback loop between their concerns, their ability to reach consensus about those concerns, and what we want to do about them.

Tristan Harris: Let alone the fact that, at an existential level, we have several problems in front of us, whether it’s AI safety, or nuclear technologies, or climate change, that have shrinking timelines, where if we do not reach consensus about what we want to do about them, then through that stalemate we are already living in kind of an existential world, because we are simply not taking action on the things we need to. I actually view this as a global national security and competition issue, where I think in the future, we’ll look back and say the societies that were able to cohere and coordinate best were the ones that outperformed other societies.

Tristan Harris: Right now, social media gives each of us a micro reality, where every time you flick your finger, the incentive is to show you something that agrees with your perspective of the world, as opposed to something that challenges your perspective and says, actually, here, it’s more complicated. In fact, let’s quickly go into that. Imagine Facebook had two versions of the newsfeed. One was called the challenge feed: every time you flicked your finger, it presented you with something that challenged your view of the world. You can imagine this being possible. Then imagine there’s another newsfeed called the confirmation feed, or the moral righteousness feed, where every time you flicked your finger, it gave you even further evidence of why the other side is abominable, not someone we should sympathize with, and not worthy of our love or attention. That’s exactly where we find ourselves: the confirmation feed massively outcompetes the challenge feed.

Tristan Harris: It’s important for your listeners to know that when we make diagnoses about what the effect of Facebook is on polarization or on personalization, this is changing literally week to week and month to month, because they’re always tweaking the algorithm. Which is why I say, if we zoom out: broadly speaking, what are the broad incentives? Within that there’s micro-variation toward slightly more or less polarizing. To give you an example: two years ago, Facebook, actually as a result of some of the critiques that I and other colleagues had been making, changed their core metric from maximizing engagement and addiction (well, it was really engagement, which was a proxy for addiction) to something they called meaningful social interactions, which they defined as rewarding downstream engagement. I don’t know all the details of this, but basically what I’m told is: you make a post, and then how high it gets ranked in the newsfeed for others is based on its predicted likelihood that friends of friends would highly engage with it.

Tristan Harris: That actually led to more politicization of news. In fact, there were several political parties in Europe who, after this change was made, said, “Did you change the newsfeed? Because we’re getting many fewer views, likes, and comments” — and this was on the policy PDFs that they were posting — and instead they were getting many more views of their highly polarizing content, because that was better for what was called downstream engagement.

Tristan Harris: Now, again, this is changing all the time; every month it’s different. But one of the most profound studies that we know of, in terms of politics, is Facebook’s own research, which was leaked to the Wall Street Journal in May of 2020, showing that 64% of extremist groups that people joined on Facebook were due to Facebook’s own recommendation system. Meaning, it’s not that people said, hey, I want to type in QAnon from scratch in a search box, making a rational choice out of a vacuum; people were instead actually being recommended these extremist groups by the recommendation system. My colleague Renee DiResta, who’s actually one of the most brilliant people that I know on this topic, gives the example of when she was a new mother and joined a Facebook group for Do It Yourself Baby Food, so organic, make-your-own baby food. You’d say, oh, that sounds like a great use of Facebook groups. It’s connecting people, connecting mothers, getting peer support. Sounds awesome. But then Facebook, in 2018, had made it a priority to get people to join more Facebook groups. They literally had a company mandate to do that, because, if you remember, they changed their mission statement from let’s make the world more open and connected, to let’s bring the world closer together. Our mission is to bring the world closer together. The way they were going to do that was with Facebook group recommendations.

Robert Wiblin: I think they changed the setting such that someone could invite you to a group and you didn’t even have to opt in. You were just automatically added to the group. I don’t know whether they’ve changed that, but I find it infuriating, because I’d just be added to all of these random groups, and it’s like, no, I don’t want to be in this group, but I guess this is what happens when they’re just maximizing some metric that’s coming from on high. They didn’t think about it.

Tristan Harris: That’s exactly right. You’re bringing up a great example: another of the recommendations, one that I’d forgotten until now. This one wasn’t groups that you should join; instead, on the right-hand sidebar, they say, here are suggested people that you should invite into this group, because they actively wanted to get friends to invite other friends into Facebook groups, thinking that would also be a mechanism for increasing social cohesion and groups.

Tristan Harris: Now, if we go back specifically to the example of the mom’s baby food group: when it put up the recommended Facebook groups on the right-hand side, saying here are other engaging groups you’re likely to join…What do you think were the most recommended groups? Meaning, Facebook is making a calculation: if you were to join one of these groups, which one would get you coming back often, posting a lot, clicking a lot, etc.? Because that’s what they’re optimizing for: groups that would be engaging for you. If you’re starting with an avatar voodoo doll, a predictive model of a user who’s in a mom’s baby food group, which Facebook group would be the most recommended?

Robert Wiblin: I think I might know this one. Is it anti-vaccination?

Tristan Harris: That’s right. Anti-vaccine groups for moms. Because of course, if you’re going your own way as a new mom, you’re probably also going to go your own way on vaccines. Then, when you join that group, what do you think the Facebook group recommendations were after that?

Robert Wiblin: Well, I guess we’re getting into more hardcore conspiracy theories.

Tristan Harris: Yeah. It’s the slow slippery slope towards QAnon and Pizzagate and Flat Earth and chemtrails and things like that, and it really happened. You have to imagine this is happening at the scale of millions of people. Once you’re in one of those groups, you’re in sort of a closed echo chamber where everyone is resonating with the same kind of core ideas and the same core ways of seeing the world. As you pointed out, there have always been times when people would gather and only speak to like-minded people. But it’s one thing for a group of people to meet in a local YMCA with a box of donuts, sitting on fold-out chairs talking about white nationalism or something like that, and it’s another when you have a virtual ability to grow groups to millions of people and have hundreds of people commenting and saying the most incendiary things, which we know were also weaponized by different adversaries, whether it’s Russia or China, to try to actually instigate more conflict.

Tristan Harris: Because unlike that YMCA with the fold-out chairs and the box of donuts, where people would just talk as a group of 12, you actually have Russian agents entering into that group and stoking further conflict. I think when you really zoom out and realize that this has happened in every country all around the world — even if Facebook took the whack-a-mole stick and whacked, let’s say, the anti-vaccine groups and dampened their recommendation rate, which they’ve done, by the way; for many of the examples that I give in interviews, whether it’s this one or others, they’ve actually taken the whack-a-mole stick and reduced many of the problems that we’ve talked about — I bring them up as examples because, if you were to zoom out, the broad behaviour of that system, left unchecked, is to put people into more radicalizing pathways.

Justified concerns vs. moral panics [00:30:23]

Robert Wiblin: Yeah. We’ll come back to that in a second, but just taking a step back: I think a lot of people feel a certain skepticism about all of this, because they know that people have worried about new technologies in the past, whether it was violent computer games in the ’90s, or arcade games driving kids to violence in the ’70s and ’80s (I saw this great video about that), literally Pac-Man. Bicycles were supposedly allowing people to go and fraternize in other cities and damage their core values. Now, all through history, people have worried about the downsides of new technologies. It’s not to say there weren’t downsides; often there are. But if you look back on what people were saying, often it felt like it was blown out of proportion then.

Robert Wiblin: People are inclined to say, well, maybe we’re blowing it out of proportion now. Maybe there are these downsides, but they’re manageable. We want to be careful not to fall into a moral panic, as we think may have happened in the past. Do you have any general reaction to that kind of skeptical prior, or skeptical attitude, that people might have about worrying over the downsides of new things?

Tristan Harris: Yeah, I think, first of all, it’s very important to distinguish between moral panics and what is authentically new and different here. I first just want to endorse the critical-mindedness that we should have when examining new technologies. We’ve had moral panics about many different forms of media (radio, television, ‘yellow journalism’) for a long time, and we seem to still be here, and we have adapted to some of them. But I think we have to also recognize that many of those critiques and moral panics were actually accurate. They just may not have played out in the totally catastrophic way that we imagined. But television did have a zombie effect on the population and led to the mass isolation of people, as Robert Putnam talks about in Bowling Alone, and so on.

Robert Wiblin: How do you distinguish between justified concerns about new things and unjustified moral panics? Because when things are new, we don’t know how they’re going to play out. Maybe they’ll end badly and maybe they won’t. To some extent, we have to speculate, but it does seem like sometimes we’re inclined to speculate in a very negative way that goes further than is warranted.

Tristan Harris: Yeah. Well, you can always over-speculate and just assume negative effects. I think that in the case of social media, to the extent people make the critique of our work or the film that it concentrates entirely on the dark side of these things, it’s because people are already obviously familiar with the benefits: people are able to reconnect with long-lost loved ones, high school sweethearts, or friends that they haven’t seen in years; to find blood donors for rare diseases; to find support groups for cancer. There are lots and lots of wonderful benefits, which, by the way, you could retain with Facebook operating more as a utility that didn’t have an agenda or goals of its own, as opposed to being a manipulation-based environment.

Tristan Harris: That’s the main thing, again, that we’re trying to change here: the business model incentive. You could still have those benefits of a cancer support group without, for example, needing to recommend people into lots and lots of groups. You could take away all recommendations and still have a system in which cancer support groups function well. But in terms of the differences here, I think we always want to remember, per Marshall McLuhan, that the medium is the message and all technology is non-neutral. Books were not neutral: they made room for sort of long-form argumentation, which had certain phenomenological impacts and related to the human mind in a certain way.

Tristan Harris: They also operate at slow timescales, slow clock cycles, built on focus and flow and things like this. Television and radio have different phenomenological relationships and different externalizing impacts. I mean, television really did lead to a mass individualization and atomization of people, spending a lot of time at home, compared to the shared community spaces that we typically had before. Zoom is non-neutral. One of the examples given to me recently: there was a study showing that, since the start of coronavirus, with people constantly being fed a mirror image of their own face in every meeting every single day, there’s actually been a rise in the desire for cosmetic surgery to look better in your Zoom image.

Tristan Harris: Now, again, you could say that everyone has wanted to improve their self-image all the time, but imagine we could rewind history to the beginning of COVID: there’s a difference between Zoom not including a prominent view of yourself, versus including it by default for millions and millions of people. When you’re about to scale up to millions and millions and millions of people, these tiny differences add up to big changes. One of the new things with social media is that we have 3 billion people, which is about the size of two Christianities in psychological influence and footprint, being influenced daily by a set of parameters that basically highlight and privilege some human experiences, some phenomenological experiences, and suppress others.

Tristan Harris: Basically, we should be concerned, given the exponential scale of the impact, if we don’t have an exponential sensitivity to which aspects of the life support systems of a society it’s impacting, and how. It’s going to include some improvements to certain life support systems of society, but it’s also going to be negative. I think the clearest example of the negative is, again, Facebook group recommendations systematically steering people towards extremism and conspiracy-minded stuff, which, again, Facebook’s own research (their own leaked presentation from 2018, in that Wall Street Journal article) demonstrated to be true.

Tristan Harris: One of the last things I’ll say is on the children’s aspect of this, because I know you have some of that research that you’ve cited and looked into… I think one of the best ways to test our intuitions about whether something is good or bad is, for example, to ask a doctor who’s about to tell you that you should really get this surgery, “Well, if it were your child, would you give your own child that surgery?” If they say, “Oh no, I would definitely not do that,” it wouldn’t be very trustworthy advice from that doctor. It’s the same with a lawyer or a real estate agent; these are all fields where there’s an asymmetry of knowledge between that actor and us. If they wouldn’t be willing to do it for themselves, then you know there’s a problem. One of the clearest pieces of evidence is the very end of the film, when we say that many tech executives who work at social media companies do not let their own children use social media. In the same way that the CEO of Lunchables foods did not let his own children eat Lunchables, which is, by the way, one of the most popular billion-dollar food product lines in the United States (I know you grew up in Australia). I think that the ethics of symmetry, doing unto others as we would do to ourselves, is one of the clearest ways that we can adjudicate. Again, if you zoom out and blur your eyes, that’s how we can know kind of what we’re looking at.

Robert Wiblin: Yeah. Do they use it themselves, the tech executives? Or is it just that they think kids aren’t up to it?

Tristan Harris: Yeah. I think it differs. I think tech executives do use social media themselves, but they generally don’t use it very much. When it comes to kids’ use of social media, most tech executives I know do not let their own children use it. That really should tell you everything.

Robert Wiblin: Are there any purported problems that people talk about that you actually think we shouldn’t worry about so much, or maybe where people’s attention is being misspent?

Tristan Harris: Yeah. That’s a really good question. I mean, I think that, again, one thing we haven’t really talked about is just that these systems are always in motion. How much polarization or political extremism is Facebook causing? The answer differs between October or November 2020, when they were doing some of the most clamping down that they’ve ever done leading into the US election, and even just one year before that, or three years before that. It’s hard to be pinned down and make an accusation like ‘this specific effect is happening’, because, again, it’s always changing. The algorithm is always changing. Frankly, to the extent it’s changing for the better, it’s often actually because of the critiques that we made in the film.

Tristan Harris: I should make this point, actually: when people look at the film The Social Dilemma, they say, oh, Tristan and Guillaume, the ex-YouTube recommendations engineer, just said that if you search for a moon landing video, YouTube will recommend flat Earth conspiracy theories. If you actually do that search right now, while listening to this podcast, you will find that it does not recommend flat Earth conspiracy theories.

Robert Wiblin: Not anymore.

Tristan Harris: Exactly. The reason for that is because we were frankly finally successful after, again, six or eight years of screaming at the top of our lungs that this is happening. I think it was in October of 2020 that YouTube released a blog post saying that they had reduced recommendations of conspiracy theories and borderline content by something like 70% to 80%. Again, if there was not a problem with recommendations of conspiracy theories, why would YouTube have made all those changes?

The messy real world vs. an imagined idealised world [00:38:20]

Robert Wiblin: All right. Yeah, just zooming out a little bit: when I was watching The Social Dilemma, a style of argument that kept coming into my head is that what came before all this was also pretty bad. It’s true that the people I talk to on social media often share my politics, to a pretty significant degree, but it’s also true that my social networks, the people I live with, the people I work with, also kind of agree with my politics. People used to say in the ’70s, “I didn’t know anyone who voted for Nixon”, and this was a known phenomenon, that people really bubble themselves. Likewise, I read newspapers and think, wow, this stuff has really low-quality reporting, and I’m not necessarily reaching it through social media; I’d just be seeing it on the newsstand.

Robert Wiblin: Tabloids have been selling sensationalist news for hundreds of years. TV also makes money in proportion to how much time we spend on it. They do all kinds of these tricky things just to keep us watching, like having a cliffhanger just before the ads and having the ads be an irregular length of time so you don’t know when to come back. People have compared themselves to idealized lives of celebrities long before Instagram. Advertisers have been trying to persuade us of stuff since time immemorial and so on. I guess, sometimes I wonder whether, at least a risk that one might have in thinking about this issue is comparing the messy, grim reality of social media or online services to an ideal world, where our views were formed through thoughtful reflection and reading Wikipedia only, but we should compare it, I guess, to what was there before, what would be there otherwise, which is also going to be non-ideal and it’s going to have its own problems and its own filter bubbles and so on. Do you have any thoughts on that?

Tristan Harris: Yeah, and again, I really appreciate all these questions to really demarcate what’s genuinely worth being concerned about here and what’s different from past moral panics. One example that comes to mind: text-based mediums are uniquely vulnerable to the ability to falsify credibility. One of the things that we know about persuadability is social proof: the more that other people say something is true, or like something, or share something, the more we assume that it must be real. It’s sort of like the old saying, if you’re so smart, why aren’t you rich? If that thing isn’t true, then why does everyone I know believe it?

Tristan Harris: If you want to take it to the extreme, you can look at major world religions that are actually incompatible with each other, which have huge followings of hundreds of millions of people. Many people who believe in a different religion would say, well, it doesn’t matter that there are hundreds of millions of people believing that, because it’s still not true, and this other belief is true. So we can have situations where the vast majority of people in a society believe something simply because it was persuasively spread. In fact, that’s actually what makes religions spread; we leverage that. Many religions have missions where you go abroad, and people who are dressed up in a nice suit and look very nice will come to you and talk about all the other people that are in the religion, and it’s very convincing. The ability to hack what is credible to us through social proof has existed for a long time.

Tristan Harris: But I will say: imagine that you rewound the clock, and instead of any kind of text-based social media (I’m not saying this would be practical), Facebook was actually more like Zoom, and your daily experience of Facebook was more like live video conversations with that same broad social network of, say, 2,000 people. If people were telling you about the deep state cabal and the QAnon pedophile theories, and you saw someone in their bedroom, literally in their pajamas in this case, saying this with the tonality and franticness one would have if they were spreading these conspiracy theories, it would not be nearly as credible coming through the medium of live video as it is coming through text mediums, in which you can reference sites that look somewhat authoritative, because they say the Denver Post, or some local-sounding news website.

Tristan Harris: One of the techniques, by the way, for those who do propagate this information, is to hijack the trust that we place in local news. We actually know that local news is one of the sources we most trust, and you can invent local publications that sound fairly legitimate. I think, wasn’t it in The Simpsons, when Homer Simpson says something like, “Oh, Sorny, I love that brand. That’s the best brand of music”? This is what we do, right? We come up with brands that sound nearly adjacent to the brands that we trust. It’s much easier to hack those credibility signals in a text-based medium with text-based URLs and create a false impression of how credible something is. Again, when examining these questions, we have to rewind the clock by about 10 years and recognize that we’ve been in this ‘mind warp’ of politicizing and extremist kinds of effects for about 10 years now.

Tristan Harris: I would say that they’ve been radically reduced in the last two years. Again, based on some of the responses to these critiques that we’ve been pushing for a long time, but needless to say, that’s where we find ourselves.

Robert Wiblin: Yeah. It’s interesting. You think that text is actually among the more dangerous media, or at least, I guess you think text is more dangerous than one-on-one interaction. What about video or audio? What determines whether these media are more or less risky?

Tristan Harris: I want to make sure I’m clear, because people could misinterpret what I just said. It’s not that text is more persuasive; I’m saying that with text it’s easier to hack the cues and heuristics that the human mind uses to determine whether something is credible. For those who don’t know my background, one of my brief stints was in this class at Stanford, the Stanford Persuasive Technology and Behavior Design class, where you actually study how to apply the principles of persuasion to technology. Can technology persuade our thoughts, beliefs, habits, and behaviours? One of the topics that we studied was the persuadability of credibility: what makes something seem credible? Obviously, the production value of something: the fonts, the colors, the look of high-production-value aesthetics. The more something appears to have those high-production signals, the more we’re likely to trust it.

Tristan Harris: As an example, Medium, as a website, actually grants credibility to anyone, because the typography and the style and the colors and fonts are so credible. One of the things they ran into was people posting ISIS propaganda on Medium. Because Medium does not take as its mission statement being a neutral platform where everyone can write anything (they talk about it as a community for specific kinds of content, specific kinds of material), they realized that if they were not acting, they were lending out credibility to actors who were actually not credible. I think that’s one of the ways the text form is vulnerable.

Robert Wiblin: Yeah. Your background is in persuasion, I guess, I’m not sure how persuadable people are by advertising, or I guess I don’t personally feel like I’m super influenced by the advertisements that I see online or elsewhere. I find them annoying, but I don’t feel like it’s influencing my behaviour a whole lot. I hear a lot of views online, and sometimes they persuade me, but sometimes they don’t. I don’t feel like I’m so gullible that I just believe all the things that I hear. Am I unusual or what’s the situation with human manipulability?

Tristan Harris: There are multiple things going on here. One is, I obviously love the Dunning-Kruger effect: the vast majority of people believe that they’re better-than-average drivers, which is mathematically incompatible. I think if you were to ask people, everyone believes that they’re not influenced by advertising at all; everyone believes that’s just for those gullible people. I think that advertising is not that persuasive overall. One of the mistakes people have made in interpreting the critique in the film, The Social Dilemma, is that when we say the business model of social media is the problem, and the business model is advertising, they assume that what we’re critiquing is the persuasiveness of advertising. But that’s not actually what we’re critiquing at all. What we’re critiquing is a business model that depends on technology getting more persuasive at influencing our behaviour every year.

Tristan Harris: For example, you go to Facebook to look up one thing, and you think you’re just going to look up that thing and then be done, and you find yourself scrolling through Facebook for another hour in a trance. Then you say, “Oh man, what happened there? I should have had more self control.” What that explanation hides is an asymmetry of power: behind the glass screen, there’s a supercomputer that has an avatar, a voodoo doll-like version of Rob. Based on all the clicks you’ve ever made, it adds a little hair to the voodoo doll, so now it looks and acts just a little bit more like you; based on the videos you’ve watched on Facebook, it adds a little clothing to the voodoo doll, so now it’s even a little bit more like you. That’s what happens in the film: you’ll see the model and avatar of the main character, Ben, get more and more accurate over time.

Tristan Harris: The premise is that the system gets better and better at predicting your next move. My friend Yuval Harari, who wrote the book Sapiens, talks about the fact that there are going to be things that technology can know about us that we don’t know about ourselves. He gives the example of the fact that he’s gay: there was a time in his life when he was gay, but he didn’t know that he was gay, which he finds fascinating, because how could something so core to how he sees the world be hidden from him? How could he not see that? Do you think that, prior to him discovering that on his own, the technology would not have been able to pick up that signal from eye movement patterns and click patterns before he did? Of course it could; it can definitely know these features.

Tristan Harris: One of the dangers is that technology knows increasing amounts about us. It’s getting more persuasive, not less; it’s getting better at building predictive models of us, not worse; and it has more compute power to predict us, not less. At the same time, we’re becoming habituated into more and more predictable, simple behaviours. For example, if I want to pass the Turing test by faking text that looks like it’s coming from a real human, that’s getting easier on two fronts. One is that we have GPT-3 now instead of GPT-2, so we’re better at manipulating and creating fake, deceptive text. The second is that human beings are dumbing down their grammatical styles in comments, because we’re operating with simpler and simpler pidgin languages, in which it’s actually easier to fake the authority of human language and thought, because we’re simplifying our language and thought through the kind of collective downgrading of the mechanisms we’ve been describing here.

The persuasion apocalypse [00:47:46]

Robert Wiblin: What’s the persuasion apocalypse? Where do you think this ultimately leads?

Tristan Harris: Well, again, the persuasion apocalypse is not about the ability to advertise, although we could talk a little bit about the danger there, specifically micro-targeting. We’re used to the idea that if I saw something, then you saw it too. If I’m living in the physical reality of a neighborhood of Los Angeles, and I see people walking around LA, I assume they’re people who are also broadly familiar with LA and are seeing the same kind of raw experience that I am. Our minds evolved for that. We’re not evolved to think that this other person next to me has been fed a micro reality that is 100% different from mine. But you can say, in a certain way, that reality is getting more and more virtual, because each of us is increasingly living in a personalized virtual reality in which we didn’t see the things other people saw.

Tristan Harris: Let’s say for example that I’m Donald Trump and I’m running an ad to someone. I have data about exactly, are you more of an anti-immigration person? Are you more of an anti-China person? Are you more of an end-these-endless-wars person? Based on what I know about you, I could micro-target a message that is exactly tuned to the thing that I know that you care about. It’s socially subliminal because we assume that other people saw that same ad that we did, but no one else did see that ad.

Tristan Harris: One of the problems is that we have a completely unaccountable system, because we’re each being whispered to with a thousand different tongues into a thousand different ears. We talk about this on our podcast (which, by the way, is called Your Undivided Attention) with Brittany Kaiser, the Cambridge Analytica whistleblower. She talks about the power of micro-targeting, and how neurotic people, in terms of the Big Five personality traits, always respond to fear-based messages. Increasingly, the technology is able to know our Big Five personality traits without even having an explicit voter file on us. One of the areas of research we cite, from professor Gloria Mark at UC Irvine, is the ability to discern the Big Five ‘OCEAN’ traits (openness, conscientiousness, extroversion, agreeableness, and neuroticism) from people’s click patterns alone: just time series data about how you click and how you move your mouse, your behavioural footprint.

Tristan Harris: In the same way, in China they can detect who you are based on your walking gait with 94% accuracy, because our actual style of kinetic movement, of walking in our body, is so identifiable. It’s the end of the poker face. In this case, we can identify your personality based on how you’re ‘walking’ online. A world where I can predict that you’re neurotic, and hit you with fear-based messages, and it’s completely unaccountable and invisible to everyone else: that’s the more dangerous world that we’re heading into.

Robert Wiblin: Yeah. So when I’m having a conversation with one person, I often know a lot about them and their beliefs and their interests and their personality and so on. And then I kind of craft the way I’m going to say things and the arguments that I’d use to try to make it maximally persuasive to them. And then I don’t necessarily broadcast it to everyone else. I’m a poor example of that because I might well broadcast it to other people. But by and large, one-on-one conversation also has this property that people can perceive quite different realities because they’re being told different things because it’s been crafted for them. Why is it so different if I’m being fed advertisements from a political campaign, and to one group they’re saying one thing about their view on abortion, and to another group, they’re saying a different thing?

Tristan Harris: Well, they can be completely inconsistent, right? They can literally say things that are 100% oppositional and inconsistent with each other, but no one would know, because I’m whispering one thing into your ear to agree with you, and whispering something opposite into another ear to agree with them. And I can continue to hit them with that same information and surround them with advertising everywhere they go. And by the way, there are the mainstream versions of these arguments, and then there are the dark arts communities. I actually know people who do really dark, manipulative political advertising, and they’ve shared with me things that unfortunately can’t be shared in public settings, all about the way you can create a complete surround-sound echo chamber around a person, and use look-alike modeling.

Tristan Harris: That’s one of the things we talk about in the film. For a while, using Facebook look-alike modeling, I could go into a Facebook group of people who believe the flat Earth conspiracy theory, or chemtrails, so they’re really, really anti-government, and then feed that list of user IDs into the Facebook advertising system and say, “Give me 10,000 users who look just like that”. So Facebook has actually allowed me to navigate to and find the most suspicious, cynical, government-distrusting people in your society, and now I can hit them with all sorts of other messages. And that’s completely unregulated and unaccountable, because you can do this at micro scales, where each person gets their own micro-message about why they should distrust government.

Tristan Harris: And I really think you see this operating at scale. Nation states like Russia, China, Iran, and North Korea are spending tens of thousands to millions of dollars to do this, and it introduces completely new asymmetries of power, which we haven’t gone into, because I think one of the issues here is a massive national security issue: while we spend a trillion dollars building the F-35 and a trillion dollars revitalizing our nuclear arsenal, protecting our physical borders, we leave our digital borders completely wide open. In a recent podcast episode, we spoke to someone from the Institute for Strategic Dialogue, who showed that you could actually reach the entire country of Kenya on Facebook or Twitter for $10,000, about the cost of a used car. It’s incredibly cheap when you allow those economic asymmetries to exist across national borders.

Robert Wiblin: Yeah. It’s interesting. When I think about this kind of persuasive technology, I suppose one way I could see it running is that people do just get massively persuaded of all kinds of crazy beliefs. Another way it could run is that people just become incredibly jaded and cynical and kind of stop believing anything because they know they’re just being targeted by all these crazy messages. And over time, kind of culture adapts and people learn that like, things on Facebook aren’t to be trusted at all. I guess maybe we’re seeing a little bit of both because you also see people just having less trust in all kinds of sources, or at least that’s the way that some people are going — just ceasing to engage or ceasing to really try to figure out the truth because they think it’s somewhat hopeless.

Tristan Harris: Which is actually, by the way, the goal of many of our adversaries. I mean, Russia’s goal is to sow discord and epistemic nihilism, where people are not even interested in, or no longer know, what might be true, and they give up. And when people give up, that’s when it’s easy to take over and do whatever else you want to do. So I think we have to realize that actual global distrust, global cynicism, and global apathy for what is true, and that confusion, which is exhausting (and that’s exactly where we are now), lead to a society that stagnates, because you cannot get anything done when everyone is operating from a different, distrusting, or apathetic place. And again, if you view it as a geopolitical competition, Western democracies are the most vulnerable to this phenomenon, because we depend on the ability to reach consensus and agreement, and then act on what we want to do. We’re going to be really, really in for it.

Robert Wiblin: I guess we’re also the most disinclined to regulate speech.

Tristan Harris: Which is exactly what is also being exploited. Right?

Robert Wiblin: Yeah, yeah. Sure.

Tristan Harris: The fact that we are free speech absolutists… In fact, if I were Russia, I would want to magnify the free speech absolutists, and they do that. Whenever the platforms take moderation action on amplified content, Russia can micro-target the news stories that cover tech censorship, and point those stories right at conservatives, so the conservatives get even more angry about free speech absolutism, which then keeps the doorway open for Russia to use its favorite weapon. I would say that if I were Russia, Facebook would be the most powerful political weapon I have ever had against my adversary. And I would actually want to amplify the voices of people who are pro free speech, because it keeps my mechanism for manipulating people in place.

Robert Wiblin: So my impression is that there’s been an increase in belief in conspiracy theories, or at least a fragmentation into more and more odd conspiracy theories. I slightly worry that the blame may be getting pinned on the wrong thing here. It could be that Facebook and YouTube are pushing people towards this. But couldn’t it also just be that the internet now allows different people to communicate so many different things? Even if YouTube didn’t exist, and even if social media didn’t exist, people would just be starting up all kinds of websites to promote their odd ideas. And it’s not only conspiracy theories, but all kinds of niche interests that now benefit from the fact that we don’t just have five television stations, we have millions of websites that we can all go and engage with to try to find people who are like-minded and share our unusual beliefs.

Robert Wiblin: Take effective altruism, which I’m so into: I don’t think it could’ve existed before the internet, because you would never get a critical mass of people into one place. But now, if you have 10,000 people, they can get together and do a whole lot. And likewise, you could have people who believe that the moon is made of peanut butter. They could all have their website, and even if Facebook didn’t exist, they would develop their own forum on some website that one of them would host, and we might see a similar dynamic playing out.

Tristan Harris: Well, that’s such a great point, because I think if you go back to the early 2000s, there were websites that hosted conspiracy theories or whatever. But again, let’s look at the choice architecture. How much does it cost to set up a new website and a new domain, post your crazy ideas about the world, and get people there, compared to starting a Facebook group? Facebook does all the automated recommendation and automated invitations, suggesting what you should join, and then puts you in a list of recommended groups that are associated with other groups and are likely to get people to join. They’ve instrumented a process in which there’s a self-reinforcing, runaway feedback loop. And I know that in the research you cited before this call about YouTube’s recommendation systems, there’s also a paper by DeepMind in which they cite what they call ‘degenerate feedback loops’: they actually admit that YouTube has a self-reinforcing effect, funneling people into more and more of these echo chamber views. And I think that’s really, really the dangerous thing here.

Revolt of the Public [00:56:48]

Robert Wiblin: Have you read or heard of the book The Revolt of the Public and the Crisis of Authority in the 21st Century by Martin Gurri?

Tristan Harris: No, but I skimmed it because you put it in your notes, so I saw. I didn’t have a chance to listen to the interview with the author, but go on, yeah.

Robert Wiblin: I think you should have a read. It’s very interesting. I think it’s a somewhat reasonably even-handed look at how culture is being shifted by some of these internet technologies. I guess one point he makes is that the internet, social media but also other aspects of the internet, have allowed ordinary people to challenge authority and challenge respected institutions and say, “No, what you’re saying isn’t true”, in a way that previously they were just shut down, and they really didn’t have any avenue to do that. And sometimes that leads to crazy conspiracy theories, and sometimes that leads to nonsense. Other times, it’s allowed people who previously might’ve been very smart and informed, but didn’t have a huge audience, to call bullshit on organisations that were claiming that they knew far more than they did, or claiming they were capable of doing things that they really weren’t.

Robert Wiblin: And so it’s kind of a double-edged sword, this situation that we have. In a sense, democratizing the ability to spread messages has significant pluses, because it allows good ideas to be surfaced where otherwise they might have been hidden. But I suppose, inasmuch as people aren’t very good at judging what is true and what is not, it also allows bad ideas to spread incredibly quickly. I guess the challenge is going to be, and maybe we’ll come back to this later, that ideally what we’d like is for Facebook, and I guess whoever the domain host is that decides who gets a website, to give websites only to people who have good ideas, and not to people who are promoting bad ideas. But unfortunately, that’s not an option, so we have to live in a very much second-best world.

Tristan Harris: Yeah. This is really, really good. So we’ve always had situations where the gatekeepers who ran the public narratives and moral consensus held what we would later determine to be really incomplete and regressive moral values. So for example, the civil rights movement disagreed with the moral consensus of its time. And you obviously need a system, and a democracy, in which ideas and consensus can constantly evolve. That’s what we’re really designing for: the ability for ideas to be questioned and challenged, and for the moral values we have in a society to evolve. That’s how we’ve gotten many of the progressive outcomes that have taken place over the last 50 years.

Tristan Harris: However, the question we should be asking is: What is the basis for what ends up winning in that contest? Because when you have conspiracy theories… Just some stats for people: YouTube recommended Alex Jones’ Infowars videos. For those who don’t know, he’s the conspiracy theorist who said the Sandy Hook shooting victims were crisis actors. Kids at the school were shot, and the parents were accused of making it all up, of being hired by the Obama administration to do this. These parents, who lost real children, are still to this day hounded by conspiracy theorists who followed Alex Jones. They’ve emailed me saying what a huge cost this has been to their lives, because they’ve had to move houses multiple times after being repeatedly doxed. Alex Jones’ Infowars videos were recommended by YouTube 15 billion times, which is more than the combined traffic of Fox News, the Guardian, the BBC, and the New York Times. Right?

Robert Wiblin: Do you know what the total number of recommendations is? You can say, “Well, 15 billion”, but is that a lot in the scheme of all the recommendations it’s making, given how many users they have? It’s a bit hard to judge intuitively.

Tristan Harris: Well, actually, you’re making such a good point here, because one of the problems overall is that the scale and dimensionality of the impacts of, say, YouTube is so massive that .001% of something can sound incredibly small, but .001% of a billion hours watched daily is actually huge. And that, I think, is the real meta issue here when you look at the dangers of exponential technologies: we’re having greater and greater exponential impacts without greater and greater sensitivity, prudence, or wisdom about what sorts of complexity domains we are impacting. If you have the God-like powers of Zeus and you bump your elbow by accident, you’re going to scorch half of Earth. You can’t be Zeus and have a blind spot in your eye, because then you might actually cause real damage.
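The arithmetic behind this point is worth making concrete. A minimal back-of-the-envelope sketch, assuming the oft-cited figure of roughly one billion hours of YouTube watched per day (the figure is an assumption here, not from the interview):

```python
# Rough arithmetic: why ".001%" of YouTube's watch time is still huge.
# Assumes ~1 billion hours watched per day (a commonly cited figure).
daily_hours = 1_000_000_000
tiny_percent = 0.001  # the ".001%" that sounds negligible

hours_per_day = daily_hours * tiny_percent / 100
print(f"{hours_per_day:,.0f} hours of viewing per day")        # about 10,000 hours
print(f"{hours_per_day * 365:,.0f} hours of viewing per year")  # about 3.65 million hours
```

So a share of recommendations that rounds to zero in a headline still corresponds to roughly 10,000 hours of daily exposure at that scale.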

Robert Wiblin: Rob here. Tristan is about to refer to a paper that we’d talked about before we started recording, but which we hadn’t actually mentioned in the interview itself. That paper, ‘Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization’, claims, and I quote: “YouTube’s late 2019 algorithm is not a radicalization pipeline, but in fact removes almost all recommendations for conspiracy theorists, provocateurs and white Identitarians; benefits mainstream partisan channels such as Fox News and Last Week Tonight; and disadvantages almost everyone else.” You can find a link in the show notes. Back to the interview.

Tristan Harris: And I think that’s the overall paradox that we find ourselves in. As you said, one of the problems with these papers that have said YouTube is not radicalizing people (which is true in the sense that it’s radicalizing people much, much less now than it did two years ago) is that they only study anonymized viewing of YouTube. You just click around, with headless Mozilla browsers doing the analysis, clicking through videos and seeing what it tends to recommend to people. But that’s different from going into your account, Rob, saying, “Hey, there are these specific rabbit holes that you’ve gone into in the past”, and seeing what it recommends for a personalized user.

Tristan Harris: And again, we don’t know the answer to these questions; we don’t have monitoring tools. I think of this much like if Exxon, one of the creators of climate change, also had a monopoly on all the monitoring stations for how much CO2 and methane was being released in the world. One of the biggest problems is that we actually don’t know how big these problems are, because external researchers don’t have the data or compute power to do independent research that’s validated. Everyone is operating from some skewed perspective. But you can look at something like Alex Jones and get a rough count, as we did, of the minimum number of recommendations. It recommended Holocaust denial tens of millions of times. It recommended flat Earth videos hundreds of millions of times. This is far greater, in terms of reach and traffic, than you could’ve gotten by starting your YMCA meeting with fold-out chairs and donuts, or even a standalone website domain. Back in the 2000s, you had standalone websites for aliens and UFOs and flat Earth, but they didn’t get anywhere near the traffic of hundreds of millions of people.

Global effects [01:02:44]

Robert Wiblin: Yeah. It’s a very interesting point that you make, that it’s so hard to know the scale of these problems. The more I looked into this, the more I often felt very unsure, because it’s so hard to prove it one way or the other. Earlier, we were talking about the fact that Russia is trying to destabilize countries like the US by promoting messages that they think will lead to social discord, which I completely believe. But how much money are they spending on it? How effective is it? It could be that a lot of it is driven by the FSB, or the Internet Research Agency, or whatever it is that they have, with large budgets to do this kind of thing.

Robert Wiblin: But maybe it also is just human weakness, that humans are inclined to tribalism and often aren’t that thoughtful about the things that they say. Trying to tell how much of it is being driven by bad actors, we often just don’t have the data to really settle it. So we’re left with all of these troubling concerns, but then we can’t really settle the question unless, I guess, Google lets you run the analysis. And likewise with these conspiracy websites. Well, how big a problem is it? Probably only Google can say, and it might be difficult for them to figure it out anyway.

Tristan Harris: That’s right. I think one of the things that’s also challenging here is, again, that even at small scales, you can have lasting effects. I want to briefly go into this: conspiracy theories are like trust bombs. They’re like a nuclear bomb for trust. They erode the core trust that you have in any narrative. In fact, one of the strongest pieces of research shows that the best predictor of whether you’ll believe in a new conspiracy theory is whether you already believe in one, because you’re daisy-chaining your general skepticism of the overall narrative and applying it with confirmation bias to the next things you see.

Tristan Harris: One of the best examples of this is that a Soviet disinformation campaign in 1983 seeded the idea that the HIV virus raging around the world was a bioweapon released by the United States. And this was based on an anonymous letter published in an Indian newspaper, and it ended up becoming widely believed among those predisposed to distrust the Reagan administration. And as my colleague, Renee DiResta, who used to be at the CIA, and studied this for a long time, said, “As late as 2005, a study showed that 27% of African Americans still believe that HIV was created in a government lab”. And so these things have staying power and re-frame power on the way that we further view information.

Tristan Harris: Another example of this, and this is by the way from my Senate testimony, or my Congressional testimony in January 2020. People can look it up online. I think it was called Americans at Risk: Deception in the Digital Age. There’s also a bit here about how Russian trolls operated. The quote is, this is from a former Russian troll, “We did it by dividing into teams of three”, he said. “One of us would be the villain, the person who disagrees with the forum and criticizes the authorities in order to bring about a feeling of authenticity to what we’re doing. Then the other two enter into a debate with him. No, you’re not right. Everything here is totally correct. And one of them should provide some kind of graphic or image that fits the context. And the other has to post a link to some content that supports his argument. You see, villain, picture, link.”

Tristan Harris: And so I think what people don’t really see, because it sounds like this is just a conspiracy theory, is: Why would foreign actors be spending all this money and developing all these techniques if it weren’t valuable? And I recommend, for people learning more about this, checking out the Stanford Cyber Policy Center, which released a report, I believe in October 2019, about Russia’s various attempts to influence many countries in Africa right now, actually. Because one of the issues we haven’t gotten into, Rob, is the fact that this is a global problem. And actually, if we look at the problems that we’ve been mentioning, the U.S. and Western democracies probably have the best of the kind of social outcomes that we’re concerned about. In the countries in the Global South, you don’t have content moderators actually checking somewhere like Ethiopia, where there are actually 200 languages or dialects, and six major languages that are online. What’s happening in Ethiopia right now is currently being thought of as the second major genocide that’s going to occur from Facebook.

Tristan Harris: I think of it this way: Facebook monitors something like 80 elections around the world. And obviously, the war rooms and the amount of attention and resources that they put into the United States are far greater than in the dozens of other countries around the world in which you have these same phenomena with much less oversight and, again, more manipulation due to foreign actors, and also the organic problems of polarization.

Tristan Harris: And so I think one way to think of this is that, much like the ‘flatten the curve’ rhetoric of the COVID era, where there are only so many hospital beds and we’ve got to make sure we keep the intake low, Facebook, in managing hundreds of countries, only has so many hospital beds, or election war rooms. And while they recently dedicated most of the resources and ICU beds to the United States because of our election, there’s also an election coming up in Myanmar, where we know some of the worst of these things have happened. And I think of this like Facebook sitting on top of a stovetop of all these boiling pots and pans with water in them. They literally have inside their company an ‘at risk’ score. So for each country, they have a general way of ranking: How likely is it that civil unrest is coming up? And they have to prioritise, much like EA does, between the size of the country, the size of the impact, and how bad, how far into genocide territory, we’re talking in terms of social cohesion and risk.

Tristan Harris: My understanding is that the civic integrity team that does this work at Facebook is actually funded by the antitrust budget, as a way of demonstrating that they’re actually doing work to manage the global commons. But again, they don’t have nearly enough ICU beds, content moderators, or people who speak the languages of the hundreds of countries that they operate in. And that’s really, I think, what I want people to get: much like climate change affects the Global South the worst, in this case, the Global South is impacted by these phenomena the worst as well.

Robert Wiblin: Yeah. It’s interesting. We can kind of use WhatsApp, like I said, as a control group for Facebook, because it doesn’t have the algorithmic issues, and maybe it doesn’t have the targeted advertising. It’s just a one-on-one messaging system, or it has groups, but it lacks that kind of abuse. But nonetheless, I guess we see conspiracy theories spreading on there. And many bad ideas have been spread on there that potentially have led to serious political problems and murder, I guess in Myanmar, and I think also India and Brazil sometimes.

Tristan Harris: India.

Robert Wiblin: Yeah.

Tristan Harris: Can we actually address that really quick?

Robert Wiblin: Yeah.

Tristan Harris: Because this is one of the critiques that came up of the film. They say, “Oh, the film focuses all on Facebook, and not on, for example, WhatsApp”, where around the world, WhatsApp is much more prominent. One thing people should know is that Facebook has a program called Facebook Operator Solutions, or Free Basics, in which they partner with telecommunications companies in developing markets where people are getting a cell phone for the first time. And the deal is you get a cell phone, and it comes preloaded with a new Facebook account. And this is what happened in Myanmar. You had an entire country where, as of something like 2005, no one was on the internet. And by 2015, I think you had 13 million people or something on the internet. And you only had, I think, three people inside of Facebook or the content moderation contractors who actually knew or spoke the language.

Tristan Harris: And one of the issues here is not just the recommendation systems that we’ve been talking about in AI, but also, again, the human vulnerabilities of social proof. So on WhatsApp, you go into a country like India or Myanmar, where there’s low digital literacy. You have to keep in mind, these people did not grow up with ‘http://’ and know what ‘www’ is, and know what ‘TCP/IP’ is, or learn about the chain letters that we learned to ignore from our grandparents and our aunts. These are people who are going online for the first time. And for them, if they see a video or a photo of, say, a Muslim who is purported to have killed a cow: this is one phenomenon going on in India. They’re called flesh killings. The number I got from a friend who was at Facebook is that there used to be five of these incidents, these flesh killings, per year. And now there’s something like 100, as a result of the fact that WhatsApp makes it so it’s never been easier to say, “Let’s go lynch those Muslims over there because they actually killed our sacred cows, quite literally”.

Robert Wiblin: Yeah. I think I have some degree of optimism that some of these problems will be solved naturally through people learning from experience. I’ve been addicted to social media in the past, and I guess to some extent I am now, but I’ve learned to manage it. I know friends who have come to believe quite foolish things, I think, because of stuff they’ve read online. But I see that less and less among people I know, because I think over a decade or two, people learn how to manage this, and they’ve learned from previous negative experiences.

Robert Wiblin: And I guess also, I’ve read some research suggesting that it’s seniors who are most vulnerable I think to some of the worst misinformation. And perhaps it’s because they haven’t really been trained since the crib to have a good sense of what sources are reliable and which aren’t online. So that’s at least one ray of light that…I suppose it could take a while, but in countries where people have been getting internet in the last five or 10 years for the first time, maybe there’ll be some improvement as they learn to be more skeptical of messages that they’re being forwarded.

Tristan Harris: Well, I mean, there’s certainly a partial truth in that. Here in the United States, we are much more skeptical and aware of social media now than we ever were. And I think The Social Dilemma has actually done a lot around the world to increase a sense of digital literacy and skepticism. But I still think we’re far away from genuine, comprehensive understanding. And what I worry about is the incentive of a Facebook or a TikTok, again, trapped in a game-theoretic race to capture a new emerging market or a new telecommunications partner in Africa: caught in this race to capture that market in its private walled garden of social media, and to define its social media infrastructure in Facebook’s terms versus in TikTok’s terms, while having these problems and having the maximum incentive to get in there quickly without actually having the protections in place.

Tristan Harris: I was told a story that Discord, which is one of the places where a lot of the neo-Nazi and white nationalist groups are, was actually funded by a venture capitalist who knew there was a neo-Nazi problem on the social network, but gave them $15 million or something in funding anyway, on the assumption that they’d figure it out somewhere down the road. And this is again where the financial incentives to grow at all costs, because of the VC model and the expectation of instant growth, I think create these problems. And one thing people should keep in mind when projecting out into the future, and why I worry about these systems, is that from a venture capitalist’s perspective, what you’re funding is a tighter and tighter trajectory from zero to 100 million users.

Tristan Harris: Social networks are competing to get to 100 million users faster than they ever could before; that was the kind of benchmark. I think it took Facebook something like five years to get to 100 million users. Then Instagram was, I think, faster than that. Snapchat was faster than that. TikTok, I think, did it in something like one year. And so it’s not just that the services gain widespread adoption really quickly, and are competing to do so; it’s also that they’re competing to give each of us exponential, stadium-sized audiences faster and faster. Because, Rob, the way I convince you to use the next version of TikTok is to say: hey, I know you reach maybe 50,000 people on Instagram now, but I’ll give you a bigger audience even faster, so I’ll give you that digital drug hit of being even more loved, more comments, more likes. And this is exactly how TikTok has accomplished its success.

US politics [01:13:32]

Robert Wiblin: One thing that we haven’t talked about is that the structure of these websites can influence the way that people talk to one another, and arguably makes people more hostile, less constructive. I heard more about it a couple of years ago, that the shortness of tweets, for example, causes people to say things that are firstly, slightly wrong, so you can almost always object because they’ve only had 120 characters, but also to be incredibly curt with one another and not really to flesh out their view, and not really to add in any politeness modifiers. On Facebook, because people are engaging in so many of these discussions, they often type things out incredibly quickly without really…And they often don’t know the people they’re talking to. There’s no body language, no feedback.

Robert Wiblin: I guess I find that very plausible. I guess I also know that in the pre-internet era, I saw lots of people arguing and being very rude to one another. People don’t need the internet in order to not be kind. What is the best research out there on how maybe the design of these products shapes the way that people interact and whether those interactions are hostile or friendly?

Tristan Harris: There’s so much on this. I used to be deeper into the academic research on this, but again, our work is focused on the high-level trends of the last few years. There’s one study showing that for every word of negative moral-emotional language you add to a tweet, you increase the retweet rate by about 17%. This is a retrospective study of tweets, but that’s what was found. I think you’re already pointing out the core thing here, which is: imagine a conversational environment in which the way you participated was that before you responded to the person you’re responding to, you opened the sentence with “you suck”, and then you said what you wanted to say. How constructive would that social environment be? And in the classic medium-is-the-message sense, focusing on form instead of content, I think that’s the fundamental form of threaded conversations, in which the reply that appears at the top got there by being the most incendiary.

Tristan Harris: And so what you end up with, if you’re following the U.S. elections right now, is the most extreme, one-sided, grievance-based tweet about why the election was stolen or not stolen, followed by the strongest possible incendiary reply that speaks to a completely opposite balance sheet of grievances. For example, I’ve been really looking at this recently. If you go online and you’re on the Trump side of things, you will find infinite evidence of Biden supporters who theoretically are for unity and healing, but you’ll find video after video of cars driving by these Trump caravans with the huge flags, flipping them off and saying, “Neener, neener, neener, you’re a white supremacist and you’re a bigot”. And if I’m on the Trump side, my newsfeed is filled with evidence of that. So from where I’m sitting, the Biden camp is the furthest away from unity and healing, and the left is living a complete contradiction.

Tristan Harris: Now if I’m on the left side of things, I see the opposite stream: video after video after video of overwhelming evidence of huge Trump cars and huge trucks running up alongside a car and then someone punching the guy in the face. And each side has this overwhelming evidence of why the other side does not deserve our sympathy, and why we should not treat them as human, because those are the things that are going to get the most traction.

Tristan Harris: And imagine that we were able, through the conversation we’re having now, to convince people to be way more compassionate. So let’s say, as a thought experiment, we snap our fingers and 50% of people suddenly became way more compassionate and stopped posting these incendiary things. It wouldn’t change the fact that among the remaining people, there’s still going to be some small percentage who are operating in this incendiary way. And they will still be rewarded. And so we will continue to have the false perception that the other side does not deserve our sympathy.

Tristan Harris: And this is really, Rob, to jump way ahead: in this country, we have fractured the national psyche into watching two completely different movies of reality, where we’re each 100% convinced that the other side has been nasty to us, evil to us, and doesn’t deserve our sympathy. And I think that’s really the key issue: we have to have a mass healing. I’ve actually been talking with couples’ therapists, Esther Perel and people who study marriage counseling, to figure out: How do we heal the nation from a hostile relationship in which each side is watching two different movies about what’s happening in the relationship?

Robert Wiblin: Yeah. It’s interesting. The U.S. seems particularly bad in this respect, at least among developed countries. I know we’ve seen a bunch of polarization and maybe more inter-political group hatred in other countries. But it doesn’t seem to be universal; it does seem like there are some countries that are resisting this, at least so far. I’m originally from Australia, and my impression is that people in Australia aren’t that much more heated than they were 10 or 15 years ago. There’s political conflict, but it doesn’t look like the United States and the way that’s transformed over the last five years. Which then raises the question: Is it the internet, which has been everywhere? Or is it something about these countries specifically? Or at least maybe some countries are vulnerable to this kind of thing and some countries aren’t. It seems like it’s a bit more complicated than just ‘social media necessarily leads to polarization and conflict’.

Tristan Harris: Yeah, totally. This is such a great point. I think what we have to look at is self-amplifying feedback loops because I think that’s what’s here. And so if you start with a culture in which there is already high polarization, and you run an amplification loop on that, it’s going to take a smaller number of cycles to see that polarization get to the place where we now see ourselves, where you have maximum grievance and hatred for each other’s side. You’re not going to see as much of that effect in countries that start off with much lower polarization, or maybe with more political parties than two.

Tristan Harris: We also have to add in the variable that there has been maximum incentive. I would say the U.S. is the central target for driving up polarization, including from external actors, which have successfully done this. Again, we’re about 10 years into this mind-warping process. And I think that the U.S. was caught completely off guard by what was going on in 2016. You have to really go back and remember what YouTube looked like. We actually did a study with Guillaume Chaslot, a former YouTube recommendations engineer, looking at: What were the most common words, the most common verbs, that showed up in the right-hand sidebar on YouTube videos? And the 15 most popular were something like ‘obliterates’, ‘hates’, ‘shreds’, ‘destroys’: ‘Jordan Peterson destroys social justice warrior’ in a video title, all caps. Right?

Tristan Harris: And even if you got rid of all the ‘fake news’, the background radiation of us-versus-them, tribal, in-group/out-group hatred was the set of background assumptions being fed to everyone for a long time. And so it’s really, like you said, a hard thing to measure. And I think the claim that social media companies go with is that they’re simply holding up a mirror to your society: if you already have those conspiracy theorists, we’re just showing you a mirror, and we’re really sorry that we’ve made them visible, because you already have them. If you already had those teen girls with depression and isolation, we’re just showing you a mirror and showing that, yep, they’re right there. But what’s really going on is that they’re reflecting back not a mirror, but a funhouse mirror, in which the dimensions that get amplified are the things that were profitable. And as they say in the film The Social Dilemma, so long as our attention is the thing that’s being mined and commodified, then in the same way that in industrial capitalism a whale is worth more dead than alive, and a tree is worth more as $10,000 of 2x4s than as a tree, in this model, we’re the whale, we are the tree. And we are worth more when we are addicted, distracted, polarized, outraged, and disinformed than when we’re thriving citizens, or growing children who healthily have kids to play with outside in the park.

Robert Wiblin: Yeah.

Tristan Harris: I know that sounds more like a political stump speech, but I think it’s critical that people see the fundamental economic incentive that I think is domesticating us into a certain kind of habitually-trained attention-economy-friendly human that participates in a way that is profitable for the overall system.

Potential solutions [01:20:59]

Robert Wiblin: Yeah. Okay. I had a whole lot of other questions about other arguments for and against these products having downsides. I think I want to skip forward to the solutions section on what should be done, in part because, well, The Social Dilemma didn’t have a lot to say about that, so it’s potentially something that we can add. Maybe also because I think many people may be resisting these arguments in part because they sense that the next step is going to be arguments for policy changes, or changes to the products, that they don’t support or that they’re very nervous about: in particular, having more government control, or putting experts in control of what ideas can be promoted.

Robert Wiblin: Another reason is that, actually, I have no idea what I want done. I’m concerned about these things, but I don’t understand them through and through. And I really don’t know what I want these companies to do, because it seems like a lot of the things that they might try could backfire, and likewise, getting the government involved could turn out to be even worse than what we have now. So I guess I had two sections on this. One is: What do we want the government to do, at the policy level? The other is: If you’re speaking to staff inside these companies, to the software engineers, to the managers, what would you recommend that they do? Is there one that you’d rather take first?

Tristan Harris: Yeah. Well, if you zoom out, I think what we need here is a whole-of-society response, spanning both private industry and the public sector. What I mean by that is, much like with climate change, you probably want some more extreme cultural reactions like Extinction Rebellion, and Greta Thunbergs out there, to motivate faster, more urgent change. You probably also want as many people as possible inside of Exxon investing in carbon sequestration and carbon removal technologies. You probably also want governments investing in carbon taxes and transition plans. You probably also want Jeff Bezos and billionaires dedicating hundreds of billions of dollars to climate-friendly policies. And you probably also want the public to have the kind of common global awareness that gets each person participating in a slightly better way, because we know from climate change that it only works if you really have everyone participating in a more climate-friendly way. That’s the only way we’re going to get there.

Tristan Harris: And I think with this problem of the human-downgrading climate change of culture, it’s going to require a whole-of-society response in the same way. We have worked extensively with people at the tech companies, and hosted conversations. We generally find that the more radical solutions are not attempted, and you get things like Instagram choosing to hide like counts, so that now kids aren’t quite so addicted to the number of likes. But there’s still the fundamental problem of: I pulled the slot machine lever, and I see that I got more likes than I did last time, and more comments than I did last time, and it still acts in that same fundamental way. So we’re only seeing cosmetic changes from the platforms, as opposed to deep structural changes.

Robert Wiblin: Yeah. I feel like with climate change though, I kind of know what technologies need to be promoted, and I have a good sense of what government policies would help to roll them out. I feel much less sure here about actually, how would I like these products to look differently? And maybe it’s just unreasonable to expect us to have a full vision of that at this stage because it’s going to be an iterative thing. But yeah, I’m still curious. If you became Mark Zuckerberg, suddenly you had Facebook, or you had a lot of discretion over how Facebook should be run, what kind of changes would you like to see made? Or at least what changes would you like to research to see if they help?

Tristan Harris: I actually really appreciate you bringing up what changes we would like to see researched, because one of the problems is that they have a monopoly on experimentation about what would help. I think one of the challenges here (and people have criticized us for not offering concrete solutions) is that this is a Frankenstein, which is having exponentially complex sets of impacts: positive effects on certain balance sheets, negative effects on some other balance sheets. And the only people, as we said, who know how large those positive and negative impacts are, and who could even put metrics in place (many of the effects we’re talking about are even harder to measure), are the companies themselves. And they have maximal disincentive to actually know what those harms are, because once they know what those harms are, they’re responsible for them.

Tristan Harris: And just so people know, this has actually led Facebook recently to want to encrypt private group communication. By leveraging the cultural momentum around Cambridge Analytica and privacy, under the guise of that supposedly beneficial framework, they’re using encryption to avoid and abdicate responsibility for even being able to moderate, or to know how bad certain problems are: if the neo-Nazis are talking and it’s all encrypted, they’ve thrown away the lock and key. So I’m not meaning here to disparage the technology. I think what we actually need is whole university disciplines dedicated to simulating complex environments, almost like we had with SimCity: agent-based modeling of how different behaviours result in different kinds of social outcomes. We need that kind of rapid experimentation, because there’s really only a handful of people at these companies who’ve been there long enough, with the institutional knowledge, to know: oh yeah, we’ve run that experiment. We tried changing maximizing time spent to maximizing meaningful social interactions and downstream engagement, and we saw that it had these five unintended negative consequences. So that institutional knowledge is not widely known, and I think we need whole university programs where there is more experimentation.

Tristan Harris: I’ve actually been talking to people who are inside some of the big companies. They wish that there was something like a Peace Corps, or a civilian corps, where people who are engineering-trained or sociology-trained are able to come into Facebook for a sort of tour of duty of positive social experimentation. Because one of the problems, Rob, that people don’t talk about is the problem these companies have had hiring talented people, largely in fact as a result of the success of these critiques, I think, in which people feel less positive about wanting to work at Facebook overall. And that is actually one of the economic levers that will cause these companies to change: when they find that they cannot hire and retain the best talent because the talent is skeptical of the fundamental business model, that puts pressure on them to ultimately change. But in this case, the chemotherapy can also kill the patient, because the patient is the underlying society that Facebook has, like a parasite, kind of grafted itself onto.

Tristan Harris: And one of the issues is that there’s actually a high turnover rate of talented employees at some of these companies. At Twitter for a while, I think the average tenure of an employee was only a year, which means that you can’t make big, radical, sweeping changes, because there aren’t enough people around who know how it works. So I think what we need is algorithmic governance. We need more universities and programs where people can actually run simulations of what kinds of things would help. We need far more experimentation, and funding for experimentation, on alternative social networks.

Tristan Harris: One of the exciting trends after the film The Social Dilemma is seeing that there are many more attempts at new forms of social media, in the form of Clubhouse, or Telepath, or even Burning Man talking about making a new social network. And I think it’s exciting that people finally feel like the public might come along to an alternative if there’s enough of a better design to make that work, because up until now, the network-effect monopolies of Facebook and TikTok and so on make it very hard to leave the network that you’re in. So there’s actually a very big conversation here about solutions, but we should probably dig through it in a more structured way.

Robert Wiblin: Yeah, yeah. What kinds of experiments, or what things would you like them to experiment with? I guess one obvious option, for example, would be changing the kinds of things that they show in the newsfeed: say, not showing you so many things that aggravate you, and showing you more things that make you happy. Is there a list of experiments out there, or proposals for reform, that Facebook could pick up and run with if they suddenly were super on board with trying to shift how they make money, or to shift the effects they’re having on politics?

Tristan Harris: I think… I don’t know if you’re familiar with Drawdown?

Robert Wiblin: Carbon drawdown?

Tristan Harris: Yeah, carbon drawdown. So there’s a book called Drawdown, by Paul Hawken, from a nonprofit group that mapped, modeled, and measured the top 100 most powerful solutions to reverse global warming. And I think that we need to crowdsource from a community. And frankly, this is the work that we would like to be accelerating and hosting: providing those design prompts and invitations to the community, and frankly to your community, Rob, with 80,000 Hours and the effective altruism community, because one of the things is we need way more people working on this problem. And that’s really what The Social Dilemma was in many ways also trying to accomplish: awakening the world so that there are many more policymakers taking this seriously, and many more designers actually thinking about how to design technology in a way that doesn’t cause these problems.

Tristan Harris: So I want to first say that the people who’ve created some of these problems, myself included (though unlike some of the other people in the film, who are regretting some of their inventions, I didn’t actually create the problems it covers), are not going to be the ones to tell us all the solutions. I think this is going to take a more diverse group of minds. In the same way that if Facebook doesn’t have anyone who grew up in Soviet Russia, they’re not going to be familiar with the disinformation tactics that Russia uses; if the people at Instagram are not mothers, they’re not going to be as familiar with the problems that mothers face with teenage daughters who have body-image issues or self-harm issues. So I think we’re going to need a more diverse group of people, which is why I think we need to crowdsource the solutions.

Tristan Harris: But if you zoom out, one of the core questions I’m asking is whether a model of social media that’s based on user-generated content, and the promise that your thing can go viral to a hundred million people, has a core logic compatible with a well-functioning democracy. And I worry that the answer is no. That doesn’t mean “let’s moderate and pick which things can go viral”; it means recognizing that a bottom-up approach that seduces people into the ‘fame lottery’ and the ‘attention casino’ if they post the right thing, and especially one that rewards whatever gets clicks and attention, just brings out the worst in us. And I think we need to think about different metrics and different virality mechanisms: on what basis, and with what kind of wisdom, would things go viral?

Tristan Harris: Let me give you one clear example. Our attention is agency-blind. This is actually why in The Serenity Prayer, the notion that…”God give me the wisdom…” What is it? “To…”

Robert Wiblin: “…know the things that I can’t change?”

Tristan Harris: “…know the things that I can’t change, and have the wisdom to tell the difference.” Right? And that’s often from the wisdom traditions. I’m not trying to do a kumbaya, Pollyanna thing, drawing from all ways and traditions, which have many flaws. But in general, I think to the extent there’s a baby that we threw out with the bathwater, there’s usually some kind of core rule, almost an operating manual, for wise sense-making and choice-making and well-being and meaning, that we are losing. And one of the reasons I mention that particular prayer, about knowing what we have the agency to change and what we don’t, is that our current social media systems are agency-blind.

Tristan Harris: The attention economy generally rewards things that are actually not within our agency. And that mismatch hasn’t always been with us, because we evolved in tribes of about 150 people, where generally the things you paid attention to were things you had agency over. But what would a social media environment look like in which the things that got rewarded or went viral were the things that were most agentic? The things that were most encouraging and empowering, showing us the things that we could do in our community?

Tristan Harris: For example, if Facebook had made it possible for every city and small town to have that list of the top 10 climate interventions in their town and how many people they would need to get on board to pass those little micro-climate policies in their local town or state…And then if it had a global map showing how every town around the world was coordinating to make those bigger changes in climate policies and focusing our attention on agency, as opposed to grievances and complaining and posting…

Tristan Harris: Because again, I think it’s not just about having social media with positive posts or healthy posts or things that make us happy; it’s also about what gives us a sense of agency. I think you were also alluding to this in referencing that other book, was it the…The Revolt of the Public: that one of the reasons there is this revolt is that people don’t feel a connection to, participation in, or sense of agency with regard to their government. Their government is not actually responsive to their real needs, and for decades things have not changed. Imagine an intentional environment that really refocused all of our global attention, on a daily basis, on things that got better.

Tristan Harris: I don’t know about you, but when I’ve done that in my own life, when I have a treadmill of positive, subtle improvements on a daily basis and things just keep getting better, that is a very powerful feedback loop. And I think we need a whole bunch of design prompts like that for people to think about social media that enables that kind of shift to happen. When I think about climate change, and about social media’s potential: in the same way that we could get angry about nuclear weapons, or get very excited about nuclear power, we could look at social media as the largest mass-coordination engine we’ve ever had. And maybe it’s the one we need to defeat a problem like climate change or AI x-risk, or some of the other existential threats that we are now facing.

Robert Wiblin: So it seems like you’ve got a pretty hard challenge, because these companies have chosen the business model that, at least in their view, makes them the most money. They think this is the thing that’s going to be most profitable. And for various cultural reasons, and sometimes legal reasons, they feel like they have to do the thing that’s going to make the most money. That’s what they have decided to do. And then I guess you could try to change that through government policy, but the policymakers in this area…I don’t feel they’re super informed or have brilliant ideas for how to shift things. I haven’t really seen any viable policies on the table that explain how we would get a company like Facebook to have a goal function other than maximizing its advertising revenue. So I suppose financial pressure can be applied, for example by making it hard for them to hire the best people, because people feel pessimistic about the social impact of the companies. So maybe that’s one leverage point.

Tristan Harris: And I want to say that one has been very successful. I mean, again, we’ve been working on this for eight years, and I think Facebook has had to triple the starting salary it pays incoming employees over that period, if not more now, directly as a result of people’s moral conscience coming up: “so we need to push it down by paying you more.” But I think there’s also a different way of applying that lever. I know you have a lot of students who listen to your podcast, and people at universities. If everyone who saw The Social Dilemma and was offered a job at Facebook didn’t just say “I’m not going to take the job”, but actually took the interview and challenged them in that interview: “What are you doing to change your core business model?” And if Zuckerberg and Sheryl heard, in the monthly meeting where they sit down over the recruiting reports, “Everyone is asking us when we’re going to change our business model, because we’re not actually doing that.” That’s actually one of the most direct economic levers, absent the fact that in an ideal world our government institutions would obviously be generating those incentives in a democratic way. This is sort of a hack around the brokenness of our democratic environment.

Tristan Harris: But I think that there are government actions here. It’s definitely not having whoever is sitting in the White House regulate which speech can and can’t be said. To speak to critiques we get of the film, people assume that we have what you could call a more left agenda of regulating what speech can and can’t be said, and it’s really not about that. It’s about asking what core game dynamics, what core virality logic, would result in an attention commons that strengthens human sense-making and human choice-making, so that on a daily basis we’re making better sense of the world together and making better and better choices. And the core insight in how to do that is to be aware of human vulnerabilities and sensitivities, in the same way that a magician knows about a person, and to use that knowledge in society’s and the user’s best interest.

Tristan Harris: One of the core things you have to change to do that is to operate in more of a fiduciary model. What I mean by that is, when there are asymmetries of knowledge in a discipline…Like again, a lawyer knows all about the law and you don’t, and you hand that lawyer all your personal details and information, and you do so assuming that they’re going to use all those personal details to help you the most. With that asymmetric knowledge about you and about the law, they could use that information to exploit you. And because of that, we have the bar exams and everything else, to try to make sure that lawyers operate in our best interests. We have the same thing with doctors, we have the same thing with therapists. The condition by which we apply fiduciary laws like that is based on the asymmetric ability to influence the outcome and asymmetric knowledge involved in that relationship.

Tristan Harris: In this case, social media probably knows more about you than any doctor, lawyer, or psychotherapist, and it operates on an extractive business model: using that information to make the maximum money from steering your behaviour one way or the other on behalf of any anonymous party, whether it’s Iran, or Russia, or a regular advertiser. So one of the things that we can do is reassign that fiduciary variable, which some governments are actually proposing under the name of a ‘duty of care’: saying that companies should operate with a duty of care that puts the interests of society first.

Tristan Harris: Now, how do you do that when they’re a public corporation and the fiduciary variable has already been assigned to shareholders, where they’re primarily responsible to shareholders and not to users? But that’s one of the core things that we probably need to reform, frankly. And there have actually been conversations going on after The Social Dilemma about, what if instead of reporting to a board of directors, they have to report to a board of the people? Because as social utilities that make up our social infrastructure for the entire world, they have to be able to put people’s interests first, they’re too damaging to not do that. That would be an unprecedented global act that as far as I am aware has never happened before. But I think we have to think more radically if we want to get to the kind of world where social media is a positive contributing force and a high-agency force in our societies.

Robert Wiblin: Yeah. So the fiduciary model is interesting. I think smart people might not be keen on that, because how are you going to define whether they’re acting in your interest? It seems a bit clearer with a doctor or a lawyer, although even there you sometimes struggle. How would we say whether Facebook has designed its product in a way that maximally benefits users, or that benefits users enough to pass this threshold? Often these questions of how you would legally define something can be a dealbreaker for a framework, and this seems like one where I’m just not sure how it would work.

Tristan Harris: I agree. It is very difficult to say in what way it would be optimizing for not just the user (which is an individualistic frame) but society’s interest. But one thing that we do know is that reassigning that fiduciary variable knocks out the advertising business model in one step. Because it’s like saying a doctor or a lawyer cannot operate with a business model in which they sell all the private information they learn about their client to the highest bidder, and recommend the next surgery or medicine or legal procedure or strategy based on whoever pays them the most money. You would never allow doctors or lawyers to operate in that way. So what it does categorically is nuke the advertising model: again, not the ad itself, but the relationship of using asymmetric knowledge in a manipulative or extractive fashion.

Tristan Harris: And that would at least pave the road. I think of this like climate change, where we need a ten-year wind-down plan from the extractive fossil fuel model. It’s not an instant snap-our-fingers thing; it’s a “what’s the transition plan?” thing. We’re going to have hybrid vehicles before we have fully electric vehicles, and a mix of solar and wind before we have all nuclear. In the same way, I think we need, in the long run, to wind down the advertising model. And we can also have game-theoretic norms, just as we have a Geneva Convention or a START Treaty on nuclear weapons. I think what we need is a set of humane rules of engagement for an attention economy. So, for example: no auto-play on any kind of service; no personalized recommendations for any political categories; always have transparency for any recommendation system.

Tristan Harris: So one of the problems, as you spoke about, is that we don’t know how often YouTube actively recommended certain pieces of content to certain users, because they are essentially both the Exxon and the satellite company that monitors how much pollution there is. So we should have transparency; transparency is sort of ‘table stakes’. For a more comprehensive list, by the way, the Humane Tech website has a page called Policy Reforms that includes some of this, dealing with it structurally through the lens of asymmetries of power, which is the lens we apply here. But when it comes to the design, you can also add things like circuit breakers, or other things that limit virality.

Tristan Harris: We’ve seen that recently with Twitter introducing a change to the retweet button: you can no longer instantly retweet or instantly reshare; you have to actually say something about what you’re sharing. And my current understanding is that that change reduced the amount of resharing by something like 25%, or in the 20-something percent range, which is a great micro-benefit, because now people are not mindlessly resharing, they’re only consciously resharing. We’ve seen the same with WhatsApp, where one of the things they did to reduce the spread of misinformation (before, you could share one message with 200 people instantly) was to limit forwarding to five chats. So I think there are things like that which ask: how do we take a more humane approach to an unaccountable, non-human-curated attention environment? Another example: we can say we don’t drill in national parks in Alaska (although Trump is apparently overturning that right now). In the same way, we can say that there are certain areas of the attention economy that we need to protect, like a national park.
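The forwarding cap WhatsApp introduced can be sketched as a simple check at send time. This is an illustrative toy only; the names (`MAX_FORWARD_CHATS`, `forward_message`) are invented for this sketch and don’t reflect any real platform’s code:

```python
# Illustrative sketch of a virality "circuit breaker": cap how many
# chats one message can be forwarded to in a single action.

MAX_FORWARD_CHATS = 5  # cap per forward action, in the spirit of WhatsApp's change

def forward_message(message: str, chats: list[str]) -> list[str]:
    """Forward a message, refusing bulk forwards over the cap."""
    if len(chats) > MAX_FORWARD_CHATS:
        # Refuse rather than silently truncating: the user has to
        # forward deliberately, in smaller batches.
        raise ValueError(
            f"A message can be forwarded to at most {MAX_FORWARD_CHATS} chats"
        )
    return [f"{chat}: {message}" for chat in chats]
```

The design choice here mirrors Twitter’s retweet friction: the point isn’t to block speech, just to insert a small deliberate step before mass amplification.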

Tristan Harris: An example of this could be that the attention of underage users — you can say 13 and under or 18 and under — you should not be able to monetize the attention of people aged 18 and under from the hours of 10:00pm until 7:00am, because what you’re essentially doing is monetizing sleeplessness and isolation and anxiety and depression.

Robert Wiblin: Why would that stop the kids from using it? It seems like, if anything, they’d use it more, because there wouldn’t be any ads in the feed; but presumably they’re not going to actually lock them out. That feels pretty aggressive.

Tristan Harris: Yeah. So I’m not saying…That’s correct, that it wouldn’t stop the kids from using it, but it removes the financial incentives of companies.

Robert Wiblin: So maybe they might not send notifications at that time or something like that.

Tristan Harris: That’s right. Yeah. One of the simplest things, by the way, that we’ve said from the very beginning would demonstrate a good-faith response from the companies is to batch and digest notifications by default. I think it was actually Peter Hartree, who brought us together on this podcast, who makes the Chrome extension Inbox When Ready. The idea is that you batch your email-checking, because if your inbox is constantly open and messages arrive drip by drip by drip, it’s much more addictive and distracting than if you get one big chunk at a set time. One of the simplest things we could do in a set of humane rules of engagement is to batch and digest all notifications, except when an individual person actually wants your attention. Because right now the default settings on all the social products are the most addictive drip-by-drip: you’ve got five new followers, five new likes, and every time you check again, you’ll see another micro-burst of new things.
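The batch-and-digest idea can be sketched in a few lines. This is a toy illustration with invented names, not any product’s actual design: events queue up silently and are delivered together at scheduled times, rather than pushed one by one:

```python
# Minimal sketch of "batch and digest" notifications: events accumulate
# silently and are delivered as one combined message per window.
from dataclasses import dataclass, field

@dataclass
class NotificationDigest:
    """Queue notifications and deliver them as a single batch."""
    pending: list[str] = field(default_factory=list)

    def notify(self, event: str) -> None:
        # No immediate push: the event just joins the queue.
        self.pending.append(event)

    def flush(self) -> str:
        """Deliver one combined digest (e.g. twice a day) and reset."""
        digest = f"{len(self.pending)} updates: " + "; ".join(self.pending)
        self.pending.clear()
        return digest
```

In a real system `flush` would run on a schedule the user controls, with an override path for direct person-to-person messages, which is the exception Tristan carves out.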

Tristan Harris: I think in the future, we’re going to need to audit technology products in terms of how they actually stimulate our nervous system. This might sound more aggressive, but in the same way we have an FDA for monitoring drugs, you might have some small sample of users who go into a brain scanner and you have a sense of — overall in the use of a day — what aspects of the human nervous system is this product activating, and how is that lower-agency for individuals and what are the negative impacts there? That might sound aggressive but when you realize that technology is getting better and better at rewarding these kinds of behaviours, and more persuasive, not less, and more predictive of what will influence us, not worse, you’re going to need more aggressive actions that ask, what does it mean for a person to be making mindful, conscious choices? And how can we guarantee that technology is designed for that kind of outcome?

Robert Wiblin: Yeah, unfortunately I think the FDA…I worry that the drug-approval side actually does harm by slowing down access to products.

Unintended consequences [01:42:57]

Robert Wiblin: I do in general worry that there could be significant downsides or unintended consequences from some of these policy ideas. We have to speculate a bit here, because we don’t know exactly what those policies might look like. But say you did shift Facebook’s charter away from “make lots of money” (or at least “do whatever Zuckerberg says”) and toward “do what’s good for society”. It really then does matter, and would be a huge political deal, who judges that: who are the regulators who say whether Facebook has acted in the best interest of society, or of its users, or of whatever other interest group.

Robert Wiblin: And you could easily imagine then that this would be used as an excuse for censorship or for manipulation of people in a way that we wouldn’t like. I mean, so far, I guess Facebook and YouTube and Twitter, they’ve been banning a bunch of accounts recently, closing down conspiracy groups or at least not promoting them. And all of that has seemed kind of good to me. I mean, they’re private companies, they don’t have to be hosting crazy conspiracy groups if they don’t want to, it seems reasonable.

Robert Wiblin: But at the same time, I do worry that if they keep going further and further with this, isn’t there a temptation for them to start pushing their own agenda, or at least slanting things in a way that isn’t necessarily in the interest of society as a whole? It might be in their interest to please the regulators, or to do whatever helps them politically, and then do those regulators have our interests at heart? The goal of making the maximum amount of money certainly isn’t totally in line with what I want, but trying to please whoever’s in the White House, or the regulators, or Congress…I’m almost more nervous about that, because then you’re creating a combination of government power and corporate power that could collude to manipulate us, potentially yielding even worse outcomes than if they were just trying to make money.

Tristan Harris: Yeah, these are really great points. And supposedly Zuckerberg has on his bedside table The Master Switch, the history of telecommunications regulation. You can interpret all of Facebook’s interactions with the current government as pleasing whoever the current regulators are, obviously because they need to maintain their freedom. Their overriding goal is protecting Facebook’s self-interest and Zuckerberg’s lifelong legacy of having something to pass down to his daughters as the social network for the world. That means minimizing regulation, which means that if the conservatives who currently hold power in the U.S. government complain of censorship or bias, you can bet that Facebook is going to dedicate its engineering talent to satisfying those conservative voices, even when that might be, in part, a bad-faith political strategy from conservatives, as you’ve said, to make the platform more amenable to them. And I say that aware of the fact that many conservatives have rightly felt that some of the things getting taken down and banned tend to be more on the right-hand side of the spectrum.

Tristan Harris: But I will say, counter to that, that if you look at the top 10 most engaged-with posts on Facebook on a daily basis, about nine out of 10 tend to be from very right-leaning publications: Ben Shapiro, Dan Bongino, Donald Trump for President, et cetera.

Robert Wiblin: Yeah. I was really surprised to learn that, it’s quite interesting that it seems like right-wing content is extremely popular on Facebook. And I guess that might help to explain some of the behaviour that otherwise seems a little bit odd to me. That they’re making a whole lot of money out of it and there’s an interest group that’s ensuring that that stuff doesn’t get downgraded in the newsfeed.

Tristan Harris: That’s right. But as you said, it puts us in a weird double bind because it actually comes down to a crisis of trust. Do you trust Mark Zuckerberg to make the best decisions on behalf of us, or to try to satisfy the current regulators? Do you trust the regulators and the government that we happen to have elected? And as you said, there’s a strong incentive for Facebook to say, “Hmm, which of the upcoming politicians have the most pro-tech policies?” And then just invisibly tilt the scales towards all those politicians.

Tristan Harris: Which, I think people need to get that Facebook is a voting machine and voting machines are regulated. It’s just that it’s an indirect voting machine because it controls the information supply that goes into what everyone will vote on. And just like, if I’m a private entrepreneur, I can’t just create an LLC for a new voting machine company and just place them around society. We actually have rules and regulations about how voting machines need to work. So that they’re fair and honest and so on.

Tristan Harris: And obviously we’ve entered into another paradox where, if we want Facebook to really be trustworthy, it should probably have completely transparent algorithms in which everyone can see that there’s no bias. But once they make those algorithms transparent, there’ll be maximum incentive to game those algorithms and point AIs at literally simulating every possible way to game that system. Which is why we have to be really careful and thoughtful about what is really…I think the heart of this conversation is: What is the new basis of what makes something in this position — a technology company with all of this asymmetric knowledge, data, and collective understanding of 3 billion people’s identities, beliefs, and behaviours — what would make anyone in that position a trustworthy actor? Would you trust a single human being with the knowledge of the psychological vulnerabilities and automated predictability of 3 billion human social animals? On what conditions would someone be trustworthy? I think that’s a very interesting philosophical question. Usually answers like transparency, accountability, and oversight are at least pieces of the puzzle.

Robert Wiblin: Yeah. It seems like we’re in a very difficult spot here; all of the solutions seem tricky. But how much discretion does Zuckerberg have over what Facebook does? Because as I understand it, he owns a majority of the voting shares. So if he says, “we’re actually going to sacrifice some revenue here for some other goal”, can he do that?

Tristan Harris: Yes. That’s actually one of the unique things about Facebook’s stock structure: he has…I forgot what they call it, but he basically has control over the company, even if there’s a majority shareholder view that he should be replaced. Obviously, like anything, it comes down to trust, because someone in that position could be a benevolent dictator and, upon seeing all these things, say, “My goal and my legacy is to transform Facebook away from all the problems that we now know exist”. And he could actually do that, even if it resulted in far less money and revenue, because he has this unique unilateral control. The problem is that, so long as Zuckerberg is in denial about some of these core issues, he will not make that choice.

Tristan Harris: You asked about theories of change and other strategies beyond employee pressure and recruiting pressure. One of the other strategies that has been tried is shareholder pressure. And there have been shareholder resolutions attempting to get Facebook to take much more radical action, but they’ve failed exactly because of the fact that Zuckerberg has this special voting influence.

Robert Wiblin: So one argument that Zuckerberg has made in the past is that we shouldn’t be trusting the management of Facebook to decide what ideas get promoted. I’m sympathetic to that; I obviously feel very nervous about that too. I guess one middle ground might be to say: you can say whatever you want, even if Facebook thinks it’s a conspiracy theory and a bad idea; however, Facebook does have the discretion to decide which ideas get promoted into the newsfeed. Do you think you can get some mileage there? That might appease the free speech people, because in practise they’re not being prevented from saying anything; it’s just that Facebook is putting a thumb on the scale of what content it thinks warrants promotion. And in fact, there’s no way out of that: they have to choose among stuff, and they’re choosing on some basis, so why not choose based on, as one factor, whether they think it’s true? At least rather than just on whether it makes people angry or causes people to click, and so on.

Tristan Harris: Yeah, that’s exactly right. And that’s exactly what we’re seeing right now with regard to the U.S. elections where Facebook and Twitter took very aggressive strategies in saying that the results of the election are not yet called. And they basically coordinated, not in the sense of who they wanted to win the election, but they coordinated to make sure that there were no articles definitively claiming a result or victory of the election until a majority of news outlets had done so.

Tristan Harris: The problem again is: who do you trust? One of the challenges we’ve faced is a president who asserts that entire elections are illegitimate, and typically the president carries incredible, indeed maximal, authority; we’re not used to presidents who assert things without any factual basis. It may be the case, as it has been here, that you can find evidence of small amounts of voter fraud or illegitimacy after the fact. But you have to ask: is this a good-faith attempt to discover the truth, or simply a political claim?

Win-win changes [01:50:47]

Robert Wiblin: Can you think of any changes that they could make that would be good for politics or good for users, good for the world that wouldn’t cost them anything? Like wouldn’t actually cost them any revenue or might even make them more money? Are there any win-wins here that wouldn’t require a deeper structural change?

Tristan Harris: You know, Rob, that’s such a good question. I want to say, to give the folks at Facebook credit, that they have made many changes that actually decreased revenue and decreased engagement, for the good of lowering addiction or decreasing political polarization. Even at Twitter, for example, Jack Dorsey decided in October 2019 to ban political advertising. You can choose to make a decision like that, and yes, you’re going to hurt revenue, but it’s actually a small percentage of revenue, which Facebook is keen to point out; I think it’s something like only 1% of Facebook’s revenue. However, 1% is in the billions and billions of dollars. And there are arguments that political advertising actually enables the opposite of the incumbents (the newcomers) to win an election, whereas without political advertising you just reward the incumbents through the asymmetry of existing power structures.

Tristan Harris: When it comes to what they could do, this is really hard. I think we have to ask ourselves what would be a safe set of rules for all of us to play by, which is why I go back to the question: is an unchecked virality model, where anyone can say anything, safe? In practice, good-faith and bad-faith actors cannot be distinguished, and bad-faith actors will generally out-compete good-faith actors, because the space of possible false utterances is infinite while the space of true utterances is very limited. An unconstrained actor is going to out-compete constrained actors in putting speech out there. And the thing you pointed out, Rob, is that we have to distinguish between freedom of speech and freedom of reach. Because there’s a difference between being able to post any piece of text or content versus having some kind of God-given right to a football-stadium-size audience.

Tristan Harris: And typically, with any technology, the greater the power, the greater the responsibility and accountability we couple with that power. So I can go buy a pair of kitchen knives from the kitchen store without getting a background check or showing an ID. But if I want to buy a gun, I usually have to do a background check, show my driver’s license, show that I’m a valid U.S. citizen, and possibly even get training in how to use it. Whereas in this case, I think we have not treated social media broadcast as a form of dangerous weapon with exponential psychological influence capacities. A 15-year-old with her Instagram account can reach 10 million people. And that 15-year-old is not operating on journalistic ethics, fact-checking, accountability, or issuing corrections when she makes a mistake.

Tristan Harris: So again, I think part of what we need here is almost a new broadcasting standard, which we used to have, by the way: Saturday-morning cartoon protections, and a fairness doctrine for political speech. I think one research question for the audience of people listening to you here is: what would a fairness doctrine look like for a networked information ecology? How could these platforms prove there’s more fairness? Not just “once I go a little bit to the right, it sends me further right; a little to the left, further left”, but “when I go to the right, it shows me a constructive counter-argument to what I’ve just seen, not a cynical, deconstructive one”. That’s one of the problems with the idea that the best solution to bad speech is counter-speech: in a limited attention economy, you cannot depend on people looking at an infinite set of counter-speech. So we also need to enrich context.

Tristan Harris: I think one of my favorite phrases is from Brewster Kahle, who runs the Internet Archive: the best solution to free speech is better context. Because in an environment where attention and conscious energy are limited — meaning people are not going to spend days and weeks reading books and relitigating every single topic, like whether the climate CO2 hypothesis is actually fully legitimate — we need better ways of standing on the shoulders of giants, coming to established truths, and building on them, as opposed to relitigating everything. And that's one of my concerns: how do we have a sense-making environment more like Wikipedia, which I would call the more humane technology, because it provides more of a ground to stand on instead of trying to relitigate every possible view?

Robert Wiblin: Yeah. Let me put two things to you. One thing that was very interesting is that Twitter just basically said, "No political advertising, sorry." Given that it's only 1% of their advertising revenue, maybe that does make a whole lot of sense, because political ads are the thing that brings them so much grief in the public debate and that people have the most policy concerns about. So they're like, "Well, this is a threat to our whole business and it's only 1% of the money anyway, so let's just sell cars and socks and stuff" — that would kind of make sense. So it's possible that Facebook should follow that lead…although, as you point out, is that good? Is that bad? Maybe this gets complicated, because you also want people to be able to share political views that others don't already believe.

Robert Wiblin: And another one is that I know plenty of people who've largely just stopped using Facebook. One of the reasons is that they didn't think it was enhancing their life, all things considered. So you could imagine a world where Facebook designs this thing to be extremely engaging in the short term, and that does boost engagement in the short term. But then people are reflecting on New Year's Day, or just before New Year's, and they're like, "You know, I don't think Facebook is really benefiting me. I'm going to start using this a whole lot less." And then they successfully do it — because it's addictive, but not so addictive that you can't break the habit if you're really committed to it. And so maybe it is in Facebook's interest to develop a product that does cause people, on reflection, to think, "Yeah, this has made my life better and it's informed me. It hasn't just caused me to get angry." Do you have any thoughts on that?

Tristan Harris: Yeah. I mean, those are great examples, and I love you pointing out that when Twitter chose to ban political advertising, however much lost revenue that might have amounted to on their balance sheet, it probably resulted in a net gain by avoiding that much money's worth of political and PR damage to the company, because of the controversy that's intrinsic to enabling political advertising. And I do think we need to ask questions about which categories are safe to enable this kind of thing for. For example, small- and medium-sized business advertising is probably actually a fuel for the economy. This is one of the things, if I'm making Facebook's points for them about why advertising is so necessary — which I'm willing to do — that actually, even after COVID, one of the best ways to reboot an economy is for small- and medium-sized businesses to be able to retarget their customers with email micro-targeting and get them to buy things that they had forgotten about. And Facebook is the infrastructure for rebooting an economy. Now, I'm making a very strong, positive argument for Facebook there.

Tristan Harris: Now, the thing that people have to get is that what makes these companies so profitable is the black-box automation of the entire system. Any advertiser can micro-target any message, and you can split test any variation without human oversight — without having to hire advertising standards people or editors or curators or journalists. Because they don't have to pay people, that's what makes the model so profitable. So if we limit the categories in which we enable them to do that and say, "Hey, you can only do that for non-political ads and for small- and medium-sized business merchandise" — so you're only allowed to advertise physical products — that would make for a much safer ecosystem.

Tristan Harris: And the second example you gave — of Facebook benefiting in the long run from not burning out users with addiction, who then have these negative thoughts on New Year's Day — they've actually responded to that too. That's been part of the argument; our work originally came from this concept called 'time well spent'. The notion is that time is the denominator upon which all of these goods and bads are taking place. And a world in which they maximize time, but it's not time well spent — it's time that feels empty or regretful afterwards — is not the same as a 'time well spent' environment, in which everything is competing to be lastingly fulfilling, lastingly informative, and lastingly beneficial. And that's the whole premise of what that change was about. So I think they've already made those arguments.

Tristan Harris: I think the problem is when it comes to these sorts of social benefits, where you talk about social cohesion, joint attention, unlikely consensus. I want to point people to the work of Taiwan's digital minister, Audrey Tang, who actually has the best working example that I know of of a digital democracy, in which Taiwan bolts digital infrastructure on top of government for digital deliberation about what policy changes they want to see as a country. They have really fast 24-hour cycles that give people the real sense that the government is listening to them, that it's accountable to them, and that they can see real examples of change. And they had one of the best responses to COVID, under the threat of the massive disinformation campaigns that China throws at Taiwan. So I think it really is one of the best examples to hold up, and I recommend people check out our interview with Audrey Tang on our podcast, Your Undivided Attention, as well.

Robert Wiblin: Yeah. They were also on Conversations with Tyler — it was a fascinating interview. I'll have to look more into their work.

Big wins over the last 5 or 10 years [01:59:10]

Robert Wiblin: Just to give listeners a sense of the structure of the conversation: I'm still trying to find out what's to be done, because I'm so confused about it — I'm just fumbling around trying to consider all the options. You just mentioned that there have been successes, that these companies have changed in a bunch of different ways. Maybe one way we can figure out a path ahead is to look at the past successes and say, let's do more of that. So what have been the big wins over the last five or 10 years?

Tristan Harris: Yeah. I mean, there’s many of them I think, and they’re happening across all the different aspects. So they’re also happening culturally. You know, for example, when we made the film The Social Dilemma, there were not many people actually speaking out saying, there’s a problem here, not many insiders. I was one of, probably five people. And they’re mostly in that film. Now there’s many, many, many more insiders that are speaking out about the problem. And that also is accelerating. And I would say that’s a positive change because it forces more change to happen.

Tristan Harris: One of the best product examples is the one that everyone now has in their own hand: the Apple Screen Time and Digital Wellbeing features. Now, this is a tiny, tiny, tiny baby step in the right direction, because this is really not the kind of solution that we ultimately need. But we proved that a cultural movement called 'time well spent', by raising awareness about the problem, could convince, you know, the 50 or so designers at Apple to ship on more than a billion phones and Macs — because now Screen Time is built into every Mac — a system that lets you see how much time you're spending on various products. And that cut against the usual business interest of maximizing engagement — though Apple's business model is actually not based on advertising or engagement.

Tristan Harris: One of the other positive developments on the Apple side is actually decreasing the profitability of personalized advertising, because one of the things that Apple's been able to do just recently in iOS 14 is remove the personalization cues. They now make it an option: when you open an app, it will say — I think starting in January, actually, it's shipping a little bit late — "Do you want to let this app track you?" And of course everyone's going to say no. What that does is reduce the profitability of the advertising-based business model by something like 30%. So that's like a carbon tax on the kind of perverse, extractive business model that we don't want, and it's being implemented at the Apple level.

Tristan Harris: So if I want to give people some hope, I think we should look to Apple and Google as regulators of the attention economy. What people are not going to like about that is that we're relying on the moral compass of two more private corporations that are maximizing shareholder value, and trying to get them to do positive things — but they do respond, I think, to public pressure. And I think we should ask: is there a way that Apple and Google can provide incentives — in the form of taxes and subsidies for more humane technology — that are sensitive to all these psychological vulnerabilities, and tax applications based on some of the balance sheet of harms that we've been talking about over the last two hours?

Tristan Harris: And you can imagine the app store — currently, you know, the revenue flows are like the Federal Reserve of this huge multi-billion dollar app store economy. As you know, Rob, there's a 70/30 split: the app developer keeps 70% of the revenue, Apple keeps 30%. And you can imagine them actually adjusting that, or including an additional tax based on, let's say, how much distraction or how much polarization or how much mental health harm you are adding to the balance sheet of society. What you then get into again is the issue of incommensurability — how do we really measure those effects in a concrete way? — but you could have subjective assessments that would amount to some of those incentives being accelerated. And the cool thing about Apple and Google is that they can make those changes on a yearly basis. So as soon as next year, September or October of 2021, we could have some of these more radical changes built directly into their incentive scheme.

The subscription model [02:02:28]

Robert Wiblin: Yeah. I'm glad I'm not the bureaucrat trying to figure out which apps to subsidize and which ones to tax — that seems like a tricky job. But that raises the general issue that some companies, because they run on subscriptions, have more reason to try to convince you that the service is worth paying for and that it's good for your life, all things considered — and to minimize the amount of time you use it rather than maximize it, in Apple's case potentially. Netflix — I guess some people get addicted to Netflix, it has its issues. But I think also, because it runs on a subscription model, it's less exploitative of its users; I feel like they face fewer cases where there's a trade-off between what's good for their revenue and what their users are going to enjoy.

Robert Wiblin: Is there any hope for a subscription-based model for, I guess, more Google products like YouTube, or Facebook or Twitter, that would realign their incentives? Because, I mean, we just don't know how these products might have evolved in quite a different direction if their goal had been to convince people that Facebook was worth paying a hundred dollars a year for.

Tristan Harris: This is exactly right. I mean, we're already seeing a trend towards more subscription-oriented business relationships: the success of Patreon, where people are directly funded by their audience; more recently Substack, where many more journalists are leaving their mainstream publications and having a direct relationship with their readers, being paid directly through subscription. And you also have, by the way, more humane features in Substack. They let you, for example, as a writer, pause and say, "Hey, I'm not going to write for the next two weeks," and it'll actually proportionally discount the subscription fees — letting the author live in a more humane way and take these breaks. So we're not creating these inhumane systems that infinitely commoditize and treat people as transactional artifacts. Those are some really exciting trends. And I've actually heard that Twitter might be looking into a subscription-based business model as a result of reacting to The Social Dilemma.

Tristan Harris: I think what we need, though, is a public movement for that. And you can imagine, categorically — and this would be a very aggressive act — what if Congress said we are not allowing a micro-targeted, behavioural-targeting-based advertising model for any large social media platform? That once you reach a certain size, you are required to shift over to subscription. Now, people don't like that, again, because you end up with inequality issues — some people can afford it and others cannot. But we can also, just as we've done during COVID, treat some of these things as essential services. Much like during COVID, PG&E and your electricity and basic services are forced to remain on even if you can't pay your bills. And I think we could ask how we subsidize a basic version for the masses, and then have paid versions where the incentives are directly aligned.

Tristan Harris: And I want to say one more thing about this, which is what a change in culture this would create for engineers working at the tech companies. I think there's a demoralizing effect in people saying, "I'm going to work to get people to click on ads and clickbait every day" — that's not a very exciting mission. Whereas if you really turn the table around and say the customer is the user, and society — and improving their lives and giving them agency over meaningful things that they can change — that's an exciting thing to go and do if you work at Facebook, right? Or, if we want to improve social outcomes, one other scheme, Rob, is to have governments become the customers of social media. They say, "Hey, Facebook and Google and YouTube, we're going to pay you to advance certain social outcomes. Let's say climate change: not just content or awareness, but to the extent that you can help enable shifts towards climate-friendly policies, we're going to subsidize and pay you for that benefit."

Tristan Harris: And I think what we have to see is that these platforms are really the social organs for governments, or for an entire society. That means that the customer isn't just the individual user paying with their credit card — it's actually the entire society and government that wants to see their society improved. And I want to say that I've never been more hopeful that a Biden administration — as opposed to the current Trump administration, which I don't think would tackle any of these issues, or would do so in a way that people would trust — may provide some opening for how we tackle these issues at a systemic level. And I will say that Andrew Yang, when he ran for president, actually said he would create a department of the attention economy at the White House level, to engage with these issues in an ongoing way, convene conversations about how we accelerate these changes, and reduce the harms that we've been talking about.

Robert Wiblin: Yeah, there's something very odd about the advertising model that just strikes me as perverse, which is, if you look at how much money they're making from users per hour of engagement, it's really a pittance. We're talking tens of cents for an hour that you spend on Facebook, if you're a U.S. user. And so you think, "Wouldn't I rather buy them out, so they'd have my interests at heart and try to make the product match me — and all I'd have to do is pay them 10 or 20 cents an hour?" Even someone making minimum wage might be interested. You face this with TV as well: I once tried to calculate how much people are effectively paid, in the form of programming, per hour of ads that they watch — how much revenue are they generating? I can't remember exactly, but again, I think it was tens of cents per hour of watching ads. And there's something so odd there: you would never let someone boss you around or tell you what to do for so little money. And yet, because you're not quite aware of the way your time is being shaped, and because you haven't paid for the service, you're not given the option to opt out. Yeah. There's something just fundamentally odd.

Tristan Harris: Totally. Well, I think you've just hit the nail on the head on a couple of fronts, because you also mentioned just how invisible the coercion is. We started this conversation by talking about whether people can really be persuaded — but again, it's not persuasion by the advertisement, it's the constant micro-nudges and behavioural coercion that drive people into these addictive and distracting kinds of behaviours. And if you said, "Hey, you're going to get 50% of your life back, because when you enter into these environments, these digital habitats, they're not going to be adversarially designed at every step of the way"…Again, I think that would be so much more inspiring as a technologist, to go to work and say, "How do we improve people's lives?"

Tristan Harris: You know, I just want to say, I grew up in the age of the Macintosh. I was born in 1984 and I grew up thinking that technology would be an empowering tool, a bicycle for the mind, something that was a creative canvas to invent. I did lots of programming. And in those days, when you made software, you made software to empower people. You made software to give people some new capacity, some new tool, and you could go home saying, look at all the people’s lives that are now doing something creative and productive. And I think that YouTube, for example, if I want to be kind and charitable to them is like the best resource for how-to videos and do-it-yourself, anything that you would ever need.

Robert Wiblin: Extraordinary. I was learning about how thorium molten salt reactors work on the weekend and like YouTube, just incredible resources there for nothing. Yeah.

Tristan Harris: Right. Exactly. And you can imagine YouTube could also instrument economies. So for the yoga teachers who don't get to have physical studios anymore, whose classes are online, YouTube could instrument ways for them to be paid — say, a dollar for every person who watches that yoga class. Would you donate a dollar as a viewer to enable a new economy where people's lives are improved? So now, if I work at YouTube, I get to measure my success in terms of net positive new hours created in people's lives, because I can look at how-to videos, learning videos, educational videos, yoga classes, do-it-yourself projects. I can literally measure my success in terms of life impact. And again, what you would need for that is to not be caught in this race to the bottom of the brainstem of auto-playing broadcast, like television — because YouTube's biggest competitor is now TikTok.

Tristan Harris: I think TikTok has been consuming a lot of young people's attention. When you open up TikTok, it doesn't even wait for you to click on a video — it starts auto-playing the second you open the app, and then shows you the next video, and the next, and the next. Again, what we need to make this change is to really change the fundamental paradigm: none of these social media services that live on a smartphone should be based on broadcast, or at least there should be a different choice architecture for entering into a broadcast mode. Then our computers and our devices would feel much less distracting and much less adversarial to our daily interests.

Robert Wiblin: Okay. Let me push back for a minute. So I guess, whenever I hear about a Ministry of Attention I always just think like what are we going to do when we have a really bad president or a really bad government? Involving government in how people spend their time at that level, or what information they’re exposed to always gives me the creeps. Maybe we’ve already talked about that a bit.

Robert Wiblin: In terms of the subscription model — while it might be good for many users, perhaps including me, it probably isn't going to happen, because they're caught on a bunch of different sides, at least Facebook is. One is that the people with high incomes are the ones who would be most willing to pay for the subscription, but they're also the most valuable to advertise to. So you would then want to have different prices for people based on how rich they are and how valuable they are as advertising targets, and people hate differential pricing. Maybe they'll accept differential pricing between countries with very different income levels, but differential pricing for different individuals seems tough.

Robert Wiblin: And then also, if I'm, say, one of the 10% that decide I want the subscription model — I want the product that serves my interests rather than tilting toward advertisers' — are they really going to redesign Facebook as a whole, the whole ecosystem, to make it more humane just for the minority of people who subscribe? You kind of need everyone to switch over, but many people don't want to pay for that, and you'd worry about the network effect breaking, because lots of people just opt out because they can't be bothered paying the $100 a year. There's something there, but I worry that it's not actually viable as a business model — it's not going to happen.

Tristan Harris: You're bringing up such great points. One is that obviously each user is worth a different amount to Facebook, and in fact the ones who, as you said, would be most likely to pay are the ones who have the most money — but they also cost them more, because you or I are probably worth many times more than the average user. You also have the problem — and this is actually important — that we were talking about earlier, of the Global South, where they have heightened versions of these problems because there are fewer content moderators who speak those languages.

Tristan Harris: One of the things people should know is that the value of a Myanmar user to Facebook is very little — it's like pennies. Yet the cost of actually supporting that user, and any of these new users — this is really important, even according to Facebook's own logic. Think about the cost of bringing on a new American user to Facebook: it's negligible. They already have the English-language content moderators, they already built the classifiers to detect hate speech, et cetera — they already have all that infrastructure. But if they're bringing on a new user in Ethiopia speaking one of the minority languages, the cost per user of bringing on the rest of Facebook's user base that has not yet come online is going to get higher, not lower. So what that means is we're going to have more chaos for the next set of users who come online than for the users in the West. That's a really important point if you care about a sort of John Rawlsian view of how Facebook treats the worst off in the meta-society of its digital infrastructure.

Tristan Harris: But I think, getting back to your point, you could imagine some kind of fair pricing scheme — tiered pricing — or trying to equalize the distribution so you don't have massive variability in how much people are paying for the product. I think those are all really, really good areas to explore. My colleague James Williams, who was my collaborator on the first 'time well spent' work and was at the University of Oxford, wrote a paper called 'Why It's OK to Block Ads', and there's this notion of: should using an ad blocker be a human right, where we can simply choose not to see the ads? Or is that fundamentally, essentially stealing, because the companies actually need the ads and they're worth money to them?

Tristan Harris: So should we be paying our way out of our attentional serfdom and our attentional slavery? Just like — what did they call them in the movement against slavery, self-purchase agreements? — where we're purchasing our attention and our agency back. Or should we have a standard where it works that way for everyone?

Tristan Harris: One comparable example I was going to mention is Diet Coke, because in that case you have the race to the bottom on high fructose corn syrup amongst all the soda providers — there's an incentive to produce the cheapest thing. But if you can create enough collective awareness and people care enough, then you can have a diet version that sells for more. But as you said, you really want the majority of Facebook's incentives to be on that side of the equation, not some tiny minority. And I believe that even as of a few years ago, the organic food market was something like 3% of the overall market for food. So we face this problem of a values-blind economic system that is not privileging the kinds of values that we need, and thus you need a bit of oomph — a little nudge and support from, maybe, government policy.

Tips for individuals [02:14:05]

Robert Wiblin: Yeah. I feel like I shouldn't be allowed to talk about ads, because I've managed to almost completely cut them out of my life. I went back and looked — I think Facebook keeps a record of every ad you've clicked — and I hadn't clicked a single Facebook ad going back to the time they started keeping records.

Robert Wiblin: Maybe this is a good way to talk about what people can do on a personal level before we get a more systematic solution to this. Personally, I basically never look at the newsfeed on Twitter or Facebook: I've blocked them on my laptop, I don't know my passwords for these services, and I don't have the apps on my phone, so I can't log into them there. So I can only access them on my computer, and then I've got these extensions that block the…(laughs) I can say, "No, it's not addictive — I've managed to work on it just fine using all of these crazy systems".

Tristan Harris: What do you use to hide the newsfeed on Twitter? I'm not aware of that.

Robert Wiblin: I think it's something where the app is designed to reward you with the newsfeed once you finish a task. But I just never click to finish any tasks, so it always blocks it.

Tristan Harris: Got it.

Robert Wiblin: I'll stick up a link to all of this stuff — maybe I should write up a few notes on this, and listeners can go and look at it if they like. I've also got this app called Freedom, which can block internet access to particular websites if you need to break an addiction to a website at a particular time. On Facebook I basically only engage with the posts that I myself write, which is a bit of an unusual way of using it; as a result I basically never see ads. On Twitter, because I can't use the newsfeed, I have to say, "I really want to read Matthew Yglesias' tweets right now", and then go to Matthew's page and read through them. So it's a bit more of an intentional thing, and it means they run out — I get to the bottom and I'm like, "Well, I've read all of those tweets".

Robert Wiblin: Another thing is I stopped…Back in 2016, it was a difficult political time, and I got very hooked on reading politics stuff that would make me very angry, and I realized it was really depressing me. So I started basically having a rule that I don't follow people who write too much, certainly not those who write a lot of angry political content.

Robert Wiblin: So, yeah. While I feel like in a sense I’m now okay with social media, I have done a lot of work over time to find a way of making it manageable and not too bad. I’m curious to know if you have any advice for users on things that they could do that they might not know that could help them use their time better in a similar way.

Tristan Harris: Yeah. I love these examples that you’re mentioning and I think also what it highlights obviously is we don’t want a world where only the micro few know how to download the exact Chrome extensions and set up the password protecting hacks and…it’s sort of like saying we’re going to build a nuclear power plant in your town and if there’s a problem you have to get your own hazmat suit. We don’t want a world where the Chrome extensions we add are our own personal hazmat suits.

Tristan Harris: It's sort of like with COVID — I don't understand why, for example, vitamin D, zinc, vitamin C, quercetin, and bromelain are not just common recommendations for everyone. Why don't we have common precautions, standard recommendations for the world, and make the thing that's best for everyone the default setting? This is a kind of Rawlsian view of how we make this work for everyone, as opposed to just whoever knows the secret to being a little bit less addicted. Increasingly there's a cognitive inequality too, where those who can afford it or have the inside knowledge will have greater competitive capacity in society, because they're able to hold and manage their attention better than those who are left with the default settings.

Tristan Harris: I just want to say that first, before we get into a more egalitarian view of what people can do. We have a page on our website, humanetech.com, called Take Control, and I really recommend people check out some of those tools. You know, it starts with, first of all, an awareness that all of this is happening. That might sound like a throwaway statement, but you can't change something if you don't care about changing it, and I think people need to make a real commitment to themselves by asking, "What am I really committed to changing about my use of technology?" Once you make that commitment, then it means something when you say, "I'm going to turn off notifications."

Tristan Harris: And what I mean by that is really radically turning off all notifications, except when a human being wants your attention. Because most of the notifications on our phones look like human beings trying to reach us — it says these three people commented on your post — but in fact those notifications are generated by the machines to try to lure you back into another addictive spiral.

Tristan Harris: So that's one thing. You already mentioned some of the others: Chrome extensions, deleting the apps off your phone, changing the login information so you don't know the password. I think those are great. And when I think about the sense-making environment, I'd add reducing the outrage-ification of who we're following.

Tristan Harris: Most news sources have devolved into outrage media, on the left and the right. MSNBC, FOX News, Gateway Pundit — these things are mostly designed to generate outrage. It would be nice if there was almost a social shaming mechanism, so that built into the top of everyone's profile there was some notion of how much of the news they're looking at is outrage media, and a sense that we don't want to be seen consuming that stuff.

Tristan Harris: I think we also need to ask who the most transcendent, dialectic-minded thinkers on the internet are. Ironically, I probably wouldn’t show up that way, because people view me as a one-sided figurehead for the problems of technology, but I think each of us could point to people who are constructively working through argument, counterargument, and synthesis. If we had some mechanism for identifying who those voices are and rewarding them with more of our attention, that would be great. The list of things people can do goes on forever, and again I recommend people check out our website, humanetech.com/take-control.

Robert Wiblin: Yeah. Interesting. I feel like, to some extent, I can get away with this because other people aren’t doing it. If other people started doing it, then I suppose the logic would be that the companies would have to find a way to stop you. We’ve seen this a little bit with Reddit, which I don’t have the app for; I just use it in a browser with an ad blocker. They’re now trying to force you to use the app, making it almost impossible to access any of the content on your phone without it, because then of course you’re within their system.

Tristan Harris: Totally. We’re seeing the same thing with ad blockers. It used to be that you could just run an ad blocker, but now, because enough people are doing it, you go to a website and it detects that you’re using one and asks whether you want to pay a subscription fee to access the content.

Robert Wiblin: I actually feel that’s fair game on their part. If you’re a writer and you say, “Well, I make my money through advertising, and you haven’t bought a subscription, so if you don’t want to see the ads then you should buy a subscription, and otherwise you can’t see the stuff”…it’s inconvenient, but maybe that’s a nudge towards the subscription system being the way we want to go.

Tristan Harris: In the long run I do think that. I also think that actors like Apple or Google, which are central traffic points, could do more to facilitate the subscription-based business model. We didn’t talk about that, but they could make it much easier to have a monthly budget of, say, $25, which gets allocated in a subscription-oriented way to the applications that you use the most and that help you the most in retrospect. That way, money flows based on retrospective judgments of what was helpful and helped us spend our time in ways that were fulfilling, constructive, and high-agency, as opposed to being coupled directly with time spent or engagement.

Robert Wiblin: Yeah. What do you worry most about being wrong about?

Tristan Harris: That’s a really good question. I’m certainly self-critical about…I’ll say this in the abstract first. One of my worries about the attention economy is that it’s very easy to get a false sense that you’re right, no matter where you’re coming from. Because the audience of people listening is so vast, everyone can get captured by their audience for whatever perspective they end up taking. I already know there are critics of mine who are very pro-technology, techno-utopian, and techno-libertarian, and they get rewarded by their audience the more they say that blockchain is going to save the world, AI is great, and we just need to build more and faster and deregulate everything. They get infinite positive social feedback for those statements, just as I get positive social feedback from those who are drawn to what I believe.

Tristan Harris: So what I worry about, to answer your question, is any kind of self-delusion. I think we need to always be aware and ask, with a beginner’s mind, “How would I know that I’m wrong?” And it’s tricky, especially because in our work the issues are changing day to day and week to week. YouTube, by the way, has for the most part taken significant action, at least in English, to reduce some of the borderline content and the conspiracy-driven radicalization issues that have come up.

Tristan Harris: And just to make sure I defend my reputation a bit here: most of the things that we said and filmed in The Social Dilemma were filmed at the beginning of 2018, when almost no one was talking about these issues and before they had been adequately dealt with. So what people might see as hyperbole may actually be a misunderstanding of the film production process and how long it takes to get these issues out there. And again, the level of concern comes from just how few people were honestly appraising these issues, minimizing them with, “Oh, addiction doesn’t seem that bad,” or, “Oh, distraction doesn’t seem that bad.” But it’s the sort of death-by-a-thousand-cuts breakdown of society’s organs that I think people needed to grasp.

Tristan Harris: So, I don’t have a specific thing that I’m worried about getting wrong, it’s just more broadly, I always try to figure out what’s really true here and make sure I’m not deluding myself.

The current state of the research [02:22:37]

Robert Wiblin: Yeah. One thing we haven’t talked about: back in 2015 and 2016, when I was feeling like using social media was damaging my wellbeing, there was a bunch of preliminary research suggesting that heavy social media use was associated with unhappiness, depression, anxiety, or possibly suicide, and I shared a bunch of those studies. It seems like over the years, as more evidence has come in and the quality of the research has gotten a bit better, the effect sizes have perhaps gotten closer to zero, or the studies have become more equivocal: some suggest it has an effect on wellbeing and some suggest it doesn’t. I’m slightly worried that I jumped the gun based on my personal experience, and that perhaps Facebook doesn’t make a huge difference to most people’s experience of life. Do you have any sense of where the research stands? I know this isn’t your area of expertise in particular, but…

Tristan Harris: I really point to Jonathan Haidt’s research, because he’s the one who did the meta-analysis, especially of those counterviews by Amy Orben and Andrew Przybylski, I think are the names. He’s better equipped to speak to some of the limits of those studies, especially on adolescent mental health. The thing that seems to be jumping out of the data is that teenage girls aged 10 to 14 tend to have the most trouble. Essentially, when who you are is reduced to how you look, and you’re only liked if you look different than you actually do, and you have a system of media that privileges the visual form of identity…for teenage girls this is a damaging thing, to be constantly under that sort of surveillance. It’s already hard enough being a kid, but everyone is talking now about their daughters. Even Joe Rogan was speaking about this on one of his podcasts I listened to: he showed his guest a picture of his daughter, I think 14 years old, and said, “My daughter looks like she’s 20 years old in this photo.” And it’s because social media has not just created these beautification filters, but has created an incentive structure, a social modeling function, in which the identities and looks we aspire to are completely unrealistic standards, and we’re exposing younger and younger children to them.

Tristan Harris: And I care a lot about how this affects kids. I grew up in what felt like a balanced environment, with Saturday morning cartoons, and I use that example a lot because of figures like Mister Rogers. If you haven’t seen the film Won’t You Be My Neighbor?, about Mister Rogers, you get a sense of how differently we treated the development of children’s thinking and brains back then. The shot lengths were incredibly long; you have these images of children in awe, in slow motion, really responding and hanging on every word he says, and the lessons are about basic compassion and goodness, not about who’s the most famous starting at age 10.

Tristan Harris: And I think the incentive for the companies is to go earlier and earlier into the lives of children and to colonize them. Just like the soft drink companies wanted to get you hooked on Gatorade or Coca-Cola early so they could get you later on the other things, or just like Camel cigarettes invented the cartoon camel avatar to get kids early into the pipeline of their later adulthood, we’re now seeing TikTok and Facebook and Snapchat in a race to the bottom for an earlier and earlier entry point into this fundamentally misaligned environment.

Tristan Harris: So again, zooming out, I look at what the long-term effect of this is overall, and why people’s own intuition is that this doesn’t make them feel good. A documentary film on Netflix does not reach 40 million people in 28 days by accident; it does that because people recognize their own experience in it. And as much as many people might critique particular things they disagree with, I think the broad strokes of what’s being outlined are really urgent and important.

Robert Wiblin: Yeah. The question of what effect the internet is having on kids’ development and on wellbeing would be a three-hour conversation in itself. I’ll link to some of the best papers I found when I was looking into this. The social scientists are duking it out in the papers and on the blogs at the moment, so it’s quite an interesting debate to follow. The research is coming along and we have much better data sets than we did five years ago, so hopefully we’ll get some clarity in time.

Careers [02:26:36]

Robert Wiblin: I guess we’ve only got a couple of minutes left, so maybe we can finish with any advice you have for listeners who would like to work on this general problem. There are so many different angles one could take, and many different fields one might go into. Are there any particularly valuable fields for people to study, or research you think would be especially useful for people to do? And also: do you think people should go and work for these companies and try to influence them from the inside, or do we need more advocacy? It’s a bit of a mangled question there, but…

Tristan Harris: Yeah, the seemingly cop-out answer is, unfortunately, all of the above. Just like you want as many conscious people as possible inside of Exxon shifting the investment portfolio toward carbon capture, sequestration, and removal, and reducing the number of new oil drilling sites in their long-term investments, we also want these changes to happen inside the tech companies as much as possible. So if you’re going to work at one of them, ask in the interview process about the business model misalignments and what they’re doing to fundamentally change their business model, so that they hear it, and it comes up in the reviews of their HR hiring meetings, and they can see there’s a trend and a long-term risk to them. I think there’s work to be done in all these different areas. We need sociological research, simulations, and agent-based modeling of alternative ways that social networks could fundamentally work. Is user-generated content with unchecked virality a sustainable model for a democracy?

Tristan Harris: I think in the long run a question we have to ask is how a western, 21st-century digital democracy outcompetes these closed, digital-authoritarian systems. Because there’s a question of whether the future is going to run on Chinese digital-authoritarian infrastructure or on a pluralistic democratic infrastructure. And I think we need to update even the language and concepts we hold on to as our guiding moral principles, because free speech in and of itself says nothing specific about the kind of information environment in which that speech goes viral, and it cannot distinguish between good-faith and bad-faith actors, or uninformed, irresponsible, or unaccountable actors, and those who are genuinely aspiring to constructively make the information ecology net better.

Tristan Harris: Some projects I recommend people check out: The Consilience Project by Daniel Schmachtenberger, and there are other social networks trying to tackle some of these issues too. But I really hope this talk inspires many more people to go into this field, to participate as advocates and as potential employees putting pressure on the companies, and that the effective altruism community comes to see this as one of the invisible areas of existential risk. Because if a society cannot cohere, communicate, or coordinate, or agree on and act on its existential threats, then we’re not going to get anywhere with the rest of the issues on the EA agenda. I see this as the issue underneath all issues, because it gates our capacity to make reasonable progress on the rest of them.

Robert Wiblin: Yeah. My impression is that we need a lot more analysis of potential solutions and what effects they might have, and it’s very hard to…I mean we were talking about it there for an hour and we’ve just barely scratched the surface of the analysis that one might do on these different approaches the companies could take, and that the governments could take, and it cuts across business and management and social psychology and individual psychology and economics and political science and public choice theory. It seems like we really need generalists who have an understanding of the big picture and can think through the implications of — I mean it’s very hard to predict even for the best of people — but can think through the implications of moving things in one direction or another.

Robert Wiblin: Another angle: we’ve had a couple of episodes on the show before about incentive design and how you align the incentives of creators and users, especially when it comes to the provision of information or of public goods. We’ve got the interview with Vitalik Buterin and Glen Weyl on that topic. It seems a difficult area to make progress in, but if you can change those fundamental incentives, then a lot of the other stuff comes naturally, because you’ve aligned the incentives of the people managing the system or providing the product with those of the users.

Robert Wiblin: There’s a lot more to say on this in the future. And on a positive note, it’s not all doom and gloom: there are downsides to some of these new technologies, but there are a lot of ways they really are helping people make more sense of the world. People have lots of crazy beliefs today, but people were misinformed about lots of things back in the 70s and 80s as well. Long-form podcasts like this barely existed 20 years ago, and neither did Wikipedia. People can use these technologies to make more sense of the world than they ever could before, if we can keep the downsides manageable.

Robert Wiblin: Are there any other things you’re optimistic about? Not only ways that we can make sure that we don’t understand the world less well than before, but ways that we could potentially flourish intellectually using these services?

Tristan Harris: Yeah, you’ve already pointed out many positive trends. Though I want to make sure we don’t discount how difficult and troubling a trend it is when cult-like beliefs and groupthink gradually spread to a huge majority of one of the major political parties, as with QAnon. Because one of the things about trust is that once you lose it, it’s very hard to get it back.

Tristan Harris: But what has me inspired is, for the first time, feeling like tens of millions of people really now understand this problem, and are aware that if we don’t change it, it is existential. While people might feel alone in how dark a vision you and I may have painted over some of this conversation, I think they get the false impression that they’re alone in feeling we’ll never fix it. But the silent majority basically all feels the same way. We joke internally at the Center for Humane Technology that everyone is on ‘team human’; they may just not know it yet. Because this is an omni-win-win or omni-lose-lose kind of game: the downgrading of our critical thinking capacities and cohesion is not to the benefit of one group or another. It’s really something we should all be concerned about and all fix.

Tristan Harris: I will say that I wish I could turn my email and other inboxes inside out so that more people could see how deeply it’s resonating in countries around the world. People in Brazil saying this is how we got Bolsonaro and now we’re waking up and seeing that this is happening. Or people in Indonesia responding to the film. So I think one thing that has me really inspired is that so many people are aware of this and see it as a thing that we fundamentally have to fix.

Tristan Harris: I will say, with Thanksgiving coming up and people going home to their families, one thing you might want to do is suggest watching the film with family members you can’t talk to politically anymore. And after seeing the film, do a reality swap: open both your phones to Facebook or TikTok or whatever you use, then swap phones and actually scroll through the other person’s feed for about 10 minutes, asking, how would I be seeing the world if this were the world I woke up to on a daily basis? The visceral, felt sense of seeing someone else’s different micro-reality gives us a lot more empathy for how we might reclaim some common ground.

Robert Wiblin: My guest today has been Tristan Harris. Thanks for coming on the 80,000 Hours podcast, Tristan.

Tristan Harris: Thanks, Rob. Really great conversation, I hope people get a lot from it.

Rob’s outro [02:33:28]

Robert Wiblin: I just thought I would add that it would be fantastic to see someone, I guess perhaps some of you listeners, exploring some of the loose ends that we left in this conversation, such as whether it’s really the case that conspiracy theories are having a larger impact now than they used to in the past. And if so whether we have good reason to think that that’s being driven by these algorithmic recommendation services, or perhaps it’s due to something else.

I think there may well be opportunities to do an awful lot of good by working on some of these topics such as how to prevent nonsense conspiracy theories from getting a lot of traction.

But I guess if I was going to pursue that career path, I’d really want to do some investigation to try to figure out what really is the state of the evidence on what impact Facebook, Twitter and YouTube and other services like that are having on our political culture and our ability to flourish in our lives.

Fortunately, if you want to do that, we have an especially extensive links section for this episode. That’s because we did quite a bit of background research for this episode to try to figure out where we stand on these topics and come up with good questions for Tristan. If you go to the page associated with this episode, on the 80,000 Hours website, you can go through and find all of the best resources, you know, papers, articles, blog posts, tweets, and so on, in order to try to get to grips with this topic as quickly as possible. And then perhaps run with that and do some of your own research.

If you do do that, it would be fantastic to post what you learn on the Effective Altruism Forum – so you can share it with other people and they can benefit from what you’ve learned, and perhaps use that information to decide if this is a problem that they’d like to work on in their own career.

If you haven’t filled it out already, the 2020 Effective Altruism Survey is open for a little longer. If you’re a regular listener to this podcast, the survey may well be aimed at you.

If you’d like to make sure that the survey counts your opinions on what is most effective, your experiences with the community, and what you’re working on, click through the link in the show notes.

If you found out about effective altruism because of this show, it’s especially valuable for you to register that so we can quantify our impact relative to other resources.

This year’s survey will close on the 10th of December at midnight GMT.

The 80,000 Hours Podcast is produced by Keiran Harris.

Audio mastering by Ben Cordell.

Full transcripts are available on our site and made by Sofia Davis-Fogel.

Thanks for joining, talk to you again soon.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths - from academics and activists to entrepreneurs and policymakers - to analyse the case for working on different issues, and provide concrete ways to help.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected]

