Our overall view

Sometimes recommended

Working on this problem could be among the best ways of improving the long-term future, but we know of fewer high-impact opportunities to work on this issue than on our top priority problems.

Profile depth

Exploratory 

Why might the risk of stable totalitarianism be an especially pressing problem?

Economist Bryan Caplan has written about the worry that “stable totalitarianism” — i.e. a global totalitarian regime that lasts for an extremely long period of time — could arise in the future,1 especially if we move toward a more unified world government or if certain technologies make it possible for totalitarian leaders to rule for longer.

Stable global totalitarianism would be an example of what Toby Ord calls an “enforced unrecoverable dystopia.”2 Ord categorises this as a form of existential risk: although humanity would not be extinct, an unrecoverable dystopia would mean losing out on the possibility of a good future for future generations.3 In this way, a truly perpetual dystopia could be as bad as (or possibly even worse than) outright extinction.

Caplan argues that totalitarian regimes in the past have ended either because of defeat by foreign military powers (as in World War II), or because of a problem with succession — where new leaders reduce the degree of totalitarianism after the death (or stepping down) of an authoritarian dictator (e.g. Khrushchev’s programme of de-Stalinisation). In particular, he argues that these reductions occurred because news of disparities with non-totalitarian regimes spread, increasing both awareness of other forms of government and the incentive to adopt them.

According to Caplan, a few things could make totalitarianism far more stable in the future — perhaps perpetually so:

  • We could be in a situation where there is only a single totalitarian world government, or all governments are totalitarian — meaning that there are no non-totalitarian regimes against which the totalitarian regimes could be compared.
  • Advances in technology could increase a regime’s surveillance capabilities, allow for behavioural engineering to control the actions of citizens, or significantly extend the lifespans of authoritarian leaders (reducing the chances of a problem with succession).4

We discuss both of these possibilities in more detail just below.

Overall, Caplan thinks there is a 5% risk of perpetual totalitarianism arising within the next 1,000 years. We think the risk is much lower, in part because most forms of long-lasting totalitarianism won’t be truly perpetual. But any long-lasting totalitarianism would cause an immense amount of suffering and would be worth substantial effort to avoid.
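To get a rough sense of what an estimate over such a long horizon implies, here is a minimal back-of-the-envelope conversion. It assumes a constant per-century probability p of the outcome, which is a simplifying assumption of ours rather than anything Caplan claims:

\[
1 - (1 - p)^{10} = 0.05 \quad \Rightarrow \quad p = 1 - 0.95^{1/10} \approx 0.0051
\]

That is, a 5% chance over 1,000 years corresponds to roughly a 0.5% chance per century under this assumption.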

Risks of totalitarianism through the formation of a single world government

Caplan’s article focuses on the dangers of a single world government. He is particularly concerned that dangers to humanity — like climate change or new technology allowing for global catastrophic biological risks — might encourage the formation of pro-surveillance unions of states.

This is because he characterises fascism and communism (the ideologies behind past totalitarian states) as large movements united by the belief that the world faces a grave danger that can only be overcome through great sacrifices — and that these sacrifices require totalitarian control. For example, we might develop technology that allows rogue individuals or groups (e.g. terrorist groups) to create a catastrophic engineered pandemic. Caplan worries that threats like this could motivate leaders to band together and use surveillance to control the population, potentially to the point of becoming totalitarian.

To mitigate the risks of totalitarianism, Caplan argues for protecting individual liberties and publicising facts about past totalitarian regimes, and against the political integration of nation-states into larger unions.

We think that greater global unity might actually be useful for addressing global coordination problems and catastrophic risks, so some increases in global cooperation are likely to be worth it. Nevertheless, we agree that stable totalitarianism is a risk of such an approach that should be kept in mind — and we aren’t sure exactly how to trade these considerations off.

Risks of totalitarianism through technological changes

We think that more attention should be paid to the technological changes that could make a perpetual totalitarian regime possible:

  • More sophisticated surveillance techniques — whether developed to monitor the possession of potentially dangerous technologies or for other, less laudable ends — could greatly enhance the ability of totalitarian regimes to persist.
  • Lie detection technology may soon see large improvements due to advances in machine learning or brain imaging. Better lie detection could improve cooperation and trust between groups by allowing people to prove they are being honest in high-stakes scenarios. On the other hand, it might increase the stability of totalitarian regimes by helping them screen out, or remove, anyone who isn’t a ‘true believer’ in their ideology.

We’re also concerned about advances in AI making robust totalitarianism more likely to be possible.5

AI-enabled totalitarianism would be even more concerning if we believe that we should have some concern for artificial sentience. This is because it seems plausible that it would be even easier to enforce a totalitarian regime over simulated beings — for example, by resetting a simulation to a point before the regime’s leaders lose control. (For more, see Holden Karnofsky’s article on how digital people would change the world, and his later, more detailed article on weak points of this argument.)

What can you do to help?

We think more research in this area would be valuable. We’re not currently aware of anyone working full-time on risks from stable totalitarianism, so we believe this area is highly neglected.

For instance, we’d be excited to see further analysis and testing of Caplan’s argument, as well as people working on how to limit the potential risks from these technologies and political changes if they do come about. Listen to our podcast with Caplan for some discussion.

We’re unsure where the overall balance of risks from surveillance lies: it’s hard to say whether the increase in safety from potentially existential catastrophes (like those from advanced bioweapons) is worth the risks to political freedom. As a result, it may be especially useful to develop ways of making surveillance more compatible with privacy and public oversight.

We’d also be excited about people working specifically on reducing the risks of stable totalitarianism that could arise as a result of the development of AI. We’re not sure exactly what this kind of work would look like, but these notes gesture at one possible direction.


Notes and references

  1. Caplan’s essay “The totalitarian threat” is published in Global Catastrophic Risks (2008), edited by Nick Bostrom and Milan M. Ćirković.

  2. In Chapter 5 of The Precipice: Existential Risk and the Future of Humanity (2020), Ord considers dystopian scenarios as a form of existential risk. In particular, on enforced dystopian scenarios, Ord writes:

    We can divide the unrecoverable dystopias we might face into three types, on the basis of whether they are desired by the people who live in them…
    The most familiar type is the enforced dystopia. The rise of expansionist totalitarianism in the mid-twentieth century caused intellectuals such as George Orwell to raise the possibility of a totalitarian state achieving global dominance and absolute control, locking the world into a miserable condition. The regimes of Hitler and Stalin serve as a proof of principle, each scaling up to become imperial superpowers while maintaining extreme control over their citizens. However, it is unclear whether Hitler or Stalin had the expansionist aims to control the entire world, or the technical and social means to create truly lasting regimes.
    This may change. Technological progress has offered many new tools that could be used to detect and undermine dissent and there is every reason to believe that this will continue over the next century. Advances in AI seem especially relevant, allowing automated, detailed monitoring of everything that happens in public places — both physical and online. Such advances may make it possible to have regimes that are far more stable than those of old.
    That said, technology is also providing new tools for rebellion against authority, such as the internet and encrypted messages. Perhaps the forces will remain in balance, or shift in favour of freedom, but there is a credible chance that they will shift towards greater control over the populace, making enforced dystopias a realistic possibility.

  3. Ord notes, in particular, that:

    To count as existential catastrophes, these outcomes don’t need to be impossible to break out of, nor to last millions of years. Instead, the defining feature is that entering that regime was a crucial negative turning point in the history of human potential, locking off almost all our potential for a worthy future. One way to look at this is that when they end (as they eventually must), we are much more likely than we were before to fall down to extinction or collapse than to rise up to fulfil our potential. For example, a dystopian society that lasted all the way until humanity was destroyed by external forces would be an existential catastrophe. However, if a dystopian outcome does not have this property — if it leaves open all our chances for success once it ends — it is a dark age in our story, but not a true existential catastrophe.

  4. That said, some technological changes might make totalitarianism less likely. For example, technology like mobile phones makes organising against totalitarian regimes easier.

  5. In AI governance: A research agenda, Allan Dafoe categorises robust totalitarianism as one of four sources of catastrophic risk from AI. He argues:

    Robust totalitarianism could be enabled by advanced lie detection, social manipulation, autonomous weapons, and ubiquitous physical sensors and digital footprints. Power and control could radically shift away from publics, towards elites and especially leaders, making democratic regimes vulnerable to totalitarian backsliding, capture, and consolidation.