I think what I’m most concerned about is the shredding of a shared meaning-making environment and joint attention into a series of micro realities – 3 billion Truman Shows.

Tristan Harris

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.

Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it’s hard to remember how recently it was a fringe view.

It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we have less trust in our fellow citizens, due to having less of a shared basis of reality.

But while it all feels plausible, how strong is the evidence that it’s true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.

At the same time, historians think that the printing press probably helped fuel religious wars across Europe, and that radio helped Hitler and Stalin maintain power by giving them, and them alone, the ability to spread propaganda across the whole of Germany and the USSR. And a jury trial — an Athenian innovation — ended up condemning Socrates to death. Fears about new technologies aren’t always misguided.

Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.

Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.

Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what’s in our interests as users and citizens?

One way is to encourage a shift to a subscription model. Presumably, that would get Facebook’s engineers thinking more about how to make users truly happy, and less about how to make advertisers happy.

One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.

But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.

One possibility is for Congress to say: it’s unacceptable for large social media platforms to influence the behaviour of users through hyper-targeted advertising. Once you reach a certain size, you are required to shift over into a subscription model.

That runs into the problem that some people would be able to afford a subscription and others would not. But Tristan points out that during COVID, US electricity companies weren’t allowed to disconnect you even if you were behind on your bills. Maybe we can find a way to classify social media as an ‘essential service’ and subsidize a basic version for everyone.

Of course, getting governments more involved in social media could itself be dangerous. Politicians aren’t experts in internet services and could simply mismanage them — and they have a perverse incentive of their own: to shape communication technology in ways that advance their political interests.

Another way to shift the incentives is to make it hard for social media companies to hire the very best people unless they act in the interests of society at large. There’s already been some success here — as people got more concerned about the negative effects of social media, Facebook had to raise salaries for new hires to attract the talent they wanted.

But Tristan asks us to consider what would happen if everyone Facebook tried to recruit didn’t just refuse the job, but took the interview in order to ask directly, “What are you doing to fix your core business model?”

Engineers can ‘vote with their feet’, refusing to build services that don’t put the interests of users front and centre. Tristan says that if governments are unable, unwilling, or too untrustworthy to set up healthy incentives, we might need a makeshift solution like this.

Despite all the negatives, Tristan doesn’t want us to abandon the technologies he’s concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.

Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we’ve ever had — tools that could educate and organise people better than anything that has come before.

The tricky and open question is how to get there — Rob and Tristan agree that a lot more needs to be done to develop a reform agenda that has some chance of actually happening, and that generates as few unforeseen downsides as possible. Rob and Tristan also discuss:

  • Justified concerns vs. moral panics
  • The effect of social media on US politics
  • Facebook’s influence on developing countries
  • Win-win policy proposals
  • Big wins over the last 5 or 10 years
  • Tips for individuals
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

Highlights

The Social Dilemma

In climate change, if you went back maybe 50 years, or I don’t know, 40 years, you might have had different groups of people studying different phenomena. There were some people studying the deadening of the coral reefs, some people studying insect loss in certain populations, some people studying melting of the glaciers, some people studying species loss in the Amazon, some people studying ocean acidification. There wasn’t yet a unified theory about something that’s driving all of those interconnected effects. But when you really think about existential threats, what we want to be concerned about is: if there’s a direction from these technologies or these systems that is an existential one, what is it?

In the case of social media, we see a connected set of effects — shortening of attention spans, more isolation and individual time by ourselves in front of screens, more addiction, more distraction, more polarization, more radicalization, less productivity, more focusing on the present and less on history and the future, more powerlessness, and more breakdown of trust and truth because we have less and less of a shared basis of reality to agree upon. When you really zoom way out, I think what I’m most concerned about is the shredding of a shared meaning-making environment and joint attention into a series of micro realities. As we say in the film, 3 billion Truman Shows.

Each of us is algorithmically given something that increasingly confirms some way in which we see the world, whether that’s a partisan way or a rabbit-hole sort of YouTube way. And I know some of that is contested, but we’ll get into that. If you zoom out, that collection of effects is what we call human downgrading, or ‘the climate change of culture’, because essentially, the machines profit by making human beings more and more predictable. I show you a piece of content, and I know, with greater and greater likelihood, what you’re going to do. Are you going to comment on it? Are you going to click on it? Are you going to share it? Because I have 3 billion voodoo dolls, predictive models of each human, and I can know with increasing precision what you’re going to do when I show it to you. That’s really what the film and our work is trying to prevent, rather than litigating whether the formal addiction score of TikTok in this month versus that month is this much for exactly this demographic.
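
To make the ‘predictive model’ language concrete: at its core, this is ordinary supervised learning. The toy sketch below (a hypothetical illustration, not any platform’s actual system; all features and numbers are invented) shows what it means for a machine to predict engagement and rank content by it:

```python
# Illustrative sketch only: a toy engagement-prediction model of the kind
# Tristan describes. Real recommender systems are vastly larger, but the
# core loop is the same: predict the probability that a given user engages
# with a given piece of content, then rank content by that probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: each row pairs (invented) user features, such as
# past engagement rates by topic, with (invented) content features.
n_samples, n_features = 10_000, 20
X = rng.normal(size=(n_samples, n_features))

# Invented ground truth: engagement driven by a hidden preference vector.
hidden_prefs = rng.normal(size=n_features)
p_engage = 1 / (1 + np.exp(-X @ hidden_prefs))
y = rng.random(n_samples) < p_engage  # True = user clicked/shared/commented

# The "voodoo doll": a model fit to the user's past behaviour.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score five candidate posts and show the likeliest-to-engage one first.
candidates = rng.normal(size=(5, n_features))
scores = model.predict_proba(candidates)[:, 1]
for rank, i in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"slot {rank}: post {i}, P(engage) = {scores[i]:.2f}")
```

The more behavioural data a model like this is trained on, the more predictable the user becomes, which is exactly the dynamic Tristan is worried about.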

Conspiracy theories

Conspiracy theories are like trust bombs. They erode the core trust that you have in any narrative. In fact, one of the best and strongest pieces of research on this says that the best predictor of whether you’ll believe in a new conspiracy theory is whether you already believe in another one, because you’re daisy-chaining your general view of skepticism of the overall narrative and applying it with confirmation bias to the next things you see.

One of the best examples of this is a Soviet disinformation campaign that in 1983 seeded the idea that the HIV virus raging around the world was a bioweapon released by the United States. It was based on an anonymous letter published in an Indian newspaper, and it ended up becoming widely believed among those predisposed to distrust the Reagan administration. And as my colleague Renee DiResta, who used to be at the CIA and studied this for a long time, said, “As late as 2005, a study showed that 27% of African Americans still believe that HIV was created in a government lab”. So these things have staying power and re-framing power over the way that we view information.

Facebook

Do you trust Mark Zuckerberg to make the best decisions on our behalf, or to try to satisfy the current regulators? Do you trust the regulators and the government that we happen to have elected? And as you said, there’s a strong incentive for Facebook to say, “Hmm, which of the upcoming politicians have the most pro-tech policies?” And then just invisibly tilt the scales towards all those politicians.

I think people need to get that Facebook is a voting machine, and voting machines are regulated for a reason. It’s just that it’s an indirect voting machine, because it controls the information supply that goes into what everyone will vote on. If I’m a private entrepreneur, I can’t just create an LLC for a new voting machine company and just place them around society. We actually have rules and regulations about how voting machines need to work, so that they’re fair and honest and so on.

Obviously we’ve entered into another paradox where, if we want Facebook to really be trustworthy, it should probably have completely transparent algorithms in which everyone can see that there’s no bias. But once they make those algorithms transparent, there’ll be maximum incentive to game those algorithms and point AIs at literally simulating every possible way to game that system. Which is why we have to be really careful and thoughtful.

I think the heart of this conversation is: What is the new basis of what makes something in this position — a technology company with all of this asymmetric knowledge, data, and collective understanding of 3 billion people’s identities, beliefs, and behaviours — what would make anyone in that position a trustworthy actor? Would you trust a single human being with the knowledge of the psychological vulnerabilities and automated predictability of 3 billion human social animals? On what conditions would someone be trustworthy? I think that’s a very interesting philosophical question. Usually answers like transparency, accountability, and oversight are at least pieces of the puzzle.

The subscription model

We’re already seeing a trend towards more subscription-oriented business relationships. I mean the success of Patreon, where people are directly funded by their audience, and more recently Substack. You have many more journalists who are leaving their mainstream publications and having a direct relationship with their readers, being paid directly through subscription. And you also have, by the way, more humane features in Substack. They let you, for example, as a writer, pause and say, “Hey, I’m not going to write for the next two weeks”, and it’ll proportionally discount subscription fees accordingly, letting the author live in a more humane way and take those breaks. So we’re not creating these inhumane systems that infinitely commoditize people and treat them as transactional artifacts. Those are some really exciting trends. And I’ve actually heard that Twitter might be looking into a subscription-based business model in reaction to The Social Dilemma.

I think what we need, though, is a public movement for that. And you can imagine categorically — and this would be a very aggressive act — what if Congress said we are not allowing a micro-targeting behavioural advertising model for any large social media platform? That once you reach a certain size, you are required to shift over to a subscription. Now, people don’t like that, again, because you end up with inequality issues — some people can afford it and others cannot. But we can also, just as we’ve done during COVID, treat some of these things as essential services. During COVID, PG&E and your electricity and basic services were forced to remain on even if you couldn’t pay your bills. And I think we could ask how we subsidize a basic version for the masses, and then have paid versions where the incentives are directly aligned.

Tips for individuals

Robert Wiblin: Personally, I basically never look at the newsfeed on Twitter or Facebook. I’ve blocked them on my laptop, I don’t know my passwords for these services, and I don’t have the apps on my phone, so I can’t log into them there. So I can only access them on my computer, and then I’ve got these extensions — one is designed to reward you with the newsfeed once you finish a task, but I just never click to finish any tasks, so it always blocks it.

I’ve also got this app called Freedom, which can block internet access to particular websites if you need to break an addiction to a website at a particular time. On Facebook I basically only engage with the posts that I myself write — which is a bit of an unusual way of using it — and as a result I basically never see ads. On Twitter, because I can’t use the newsfeed, I have to say, “I really want to read Matthew Yglesias’ tweets right now”, and then I go to Matthew’s page and read through them. So it’s a bit more of an intentional thing, and it means that they run out, because I get to the bottom and I’m like, “Well, I’ve read all of those tweets”.

Tristan Harris: Yeah. I love these examples that you’re mentioning and I think also what it highlights obviously is that we don’t want a world where only the micro few know how to download the exact Chrome extensions and set up the password-protecting hacks. It’s sort of like saying we’re going to build a nuclear power plant in your town and if there’s a problem you have to get your own hazmat suit. We don’t want a world where the Chrome extensions we add are our own personal hazmat suits.

We have a page on our website called Take Control, on humanetech.com, where I really recommend people check out some of those tools. You know, it starts with, first of all, an awareness that all of this is happening. Which might sound like a throwaway statement to make, but you can’t change something if you don’t care about changing it, and I think people need to make a real commitment to themselves in saying, “What am I really committed to changing about my use of technology?” And I think once you make that commitment, then it means something when you say, “I’m going to turn off notifications”.

And what I mean by that is really radically turning off all notifications, except when a human being wants your attention. Because most of the notifications on our phones seem like they’re from human beings who want to reach us, since it says ‘these three people commented on your post’, but in fact those notifications are generated by machines to try to lure you back into another addictive spiral.
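
For readers who want the kind of scheduled site blocking Rob describes without an off-the-shelf app, here is a minimal sketch of the underlying idea. It is an assumption-laden illustration, not how Freedom is actually implemented: it assumes a Unix-style hosts file, needs admin rights, and uses placeholder domains and hours.

```python
# Minimal sketch (not how Freedom actually works): during a scheduled focus
# window, point distracting domains at localhost via the hosts file so the
# browser can't reach them. Requires admin rights; run periodically, e.g.
# from cron. Paths, domains, and hours below are placeholders.
from datetime import datetime

HOSTS_PATH = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts
BLOCKED = ["facebook.com", "www.facebook.com", "twitter.com", "www.twitter.com"]
MARKER = "# focus-blocker"   # tag our lines so we can find and remove them later
FOCUS_HOURS = range(9, 17)   # block from 09:00 to 16:59

def set_blocked(blocked: bool) -> None:
    """Rewrite the hosts file with our entries added or removed."""
    with open(HOSTS_PATH) as f:
        lines = [line for line in f if MARKER not in line]
    if blocked:
        lines += [f"127.0.0.1 {domain} {MARKER}\n" for domain in BLOCKED]
    with open(HOSTS_PATH, "w") as f:
        f.writelines(lines)

if __name__ == "__main__":
    set_blocked(datetime.now().hour in FOCUS_HOURS)
```

One simple design choice, used here, is to re-assert the block on a timer rather than try to intercept traffic live, which is why the script is meant to be run repeatedly by a scheduler.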

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.