#88 – Tristan Harris on the need to change the incentives of social media companies
In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.
Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it’s hard to remember how recently it was a fringe view.
It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we trust our fellow citizens less because we have less of a shared basis of reality.
But while it all feels plausible, how strong is the evidence that it’s true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.
At the same time, historians think that the printing press probably helped fuel religious wars across Europe, and that radio helped Hitler and Stalin maintain power by giving them, and them alone, the ability to spread propaganda across the whole of Germany and the USSR. And a jury trial — an Athenian innovation — ended up condemning Socrates to death. Fears about new technologies aren’t always misguided.
Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.
Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.
Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what’s in our interests as users and citizens?
One way is to encourage a shift to a subscription model. Presumably, that would get Facebook’s engineers thinking more about how to make users truly happy, and less about how to make advertisers happy.
One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.
But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.
One possibility is for Congress to say: it’s unacceptable for large social media platforms to influence the behaviour of users through hyper-targeted advertising. Once you reach a certain size, you are required to shift over into a subscription model.
That runs into the problem that some people would be able to afford a subscription and others would not. But Tristan points out that during COVID, US electricity companies weren’t allowed to disconnect you even if you were behind on your bills. Maybe we can find a way to classify social media as an ‘essential service’ and subsidize a basic version for everyone.
Of course, getting governments more involved in social media could itself be dangerous. Politicians aren’t experts in internet services and could simply mismanage them — and they have a perverse incentive of their own: to shape communication technology in ways that advance their political interests.
Another way to shift the incentives is to make it hard for social media companies to hire the very best people unless they act in the interests of society at large. There’s already been some success here — as people got more concerned about the negative effects of social media, Facebook had to raise salaries for new hires to attract the talent they wanted.
But Tristan asks us to consider what would happen if everyone who’s offered a role by Facebook didn’t just refuse to take the job, but instead took the interview in order to ask them directly, “what are you doing to fix your core business model?”
Engineers can ‘vote with their feet’, refusing to build services that don’t put the interests of users front and centre. Tristan says that if governments are unable, unwilling, or too untrustworthy to set up healthy incentives, we might need makeshift solutions like this.
Despite all the negatives, Tristan doesn’t want us to abandon the technologies he’s concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.
Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we’ve ever had — tools that could educate and organise people better than anything that has come before.
The tricky and open question is how to get there — Rob and Tristan agree that a lot more needs to be done to develop a reform agenda that has some chance of actually happening, and that generates as few unforeseen downsides as possible. Rob and Tristan also discuss:
- Justified concerns vs. moral panics
- The effect of social media on US politics
- Facebook’s influence on developing countries
- Win-win policy proposals
- Big wins over the last 5 or 10 years
- Tips for individuals
- And much more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.