Why might the risk of stable totalitarianism be an especially pressing problem?
Economist Bryan Caplan has written about the worry that “stable totalitarianism” — i.e. a global totalitarian regime that lasts for an extremely long period of time — could arise in the future, especially if we move toward a more unified world government or if certain technologies make it possible for totalitarian leaders to rule for longer.
Stable global totalitarianism would be an example of what Toby Ord calls an “enforced unrecoverable dystopia.” Ord categorises this as a form of existential risk: although humanity would not be extinct, an unrecoverable dystopia would mean losing out on the possibility of a good future for future generations. In this way, a truly perpetual dystopia could be as bad as (or possibly even worse than) outright extinction.
Caplan argues that past totalitarian regimes have ended either through defeat by foreign military powers (as in World War II), or through problems with succession — where new leaders reduce the degree of totalitarianism after an authoritarian dictator dies or steps down (e.g. Khrushchev’s programme of de-Stalinisation). In particular, he argues that these relaxations have historically been driven by news of disparities with non-totalitarian regimes — increasing both knowledge of and incentives for other forms of government.
According to Caplan, a few things could make totalitarianism far more stable in the future — perhaps perpetually so:
- We could be in a situation where there is only a single totalitarian world government, or all governments are totalitarian — meaning that there are no non-totalitarian regimes against which the totalitarian regimes could be compared.
- Advances in technology could increase a regime’s surveillance capabilities, allow for behavioural engineering to control the actions of citizens, or significantly extend the lifespans of authoritarian leaders (reducing the chances of a problem with succession).
We discuss both of these possibilities in more detail just below.
Overall, Caplan thinks that there is a 5% risk of perpetual totalitarianism within the next 1,000 years. We think the risk of this is much lower — most forms of long-lasting totalitarianism won’t be truly perpetual. But any long-lasting totalitarianism would cause an immense amount of suffering and would be worth substantial effort to avoid.
Risks of totalitarianism through the formation of a single world government
Caplan’s article focuses on the dangers of single world government. He is particularly concerned that dangers to humanity — like climate change or new technology allowing for global catastrophic biological risks — might encourage the formation of pro-surveillance unions of states.
This is because he categorises fascism and communism (the ideologies behind past totalitarian states) as large movements built on the idea that the world faces a grave danger that can only be averted through great sacrifices — and that these sacrifices require totalitarian control. So, for example, we might develop technology that allows rogue individuals or groups (e.g. terrorist groups) to create a catastrophic engineered pandemic. Caplan worries that threats like this could motivate leaders to band together and use surveillance to control the population, potentially to the point of becoming totalitarian.
To mitigate totalitarianism risks, Caplan argues for protecting individual liberties and publicising facts about past totalitarian regimes, and argues against the political integration of nation-states into larger unions.
We think that greater global unity actually might be useful for addressing global coordination problems and catastrophic risks, and so some increases in global cooperation are likely to be worth it. Nevertheless, we agree that stable totalitarian regimes are a risk of such an approach that should be kept in mind — and we aren’t sure how exactly to trade these things off.
Risks of totalitarianism through technological changes
We think that more attention should be paid to the technological changes that could make a perpetual totalitarian regime possible:
- More sophisticated surveillance techniques — whether developed to monitor the possession of potentially dangerous technologies, or for other less laudable ends — could greatly enhance the ability of totalitarian regimes to persist.
- Lie detection technology may soon see large improvements due to advances in machine learning or brain imaging. Better lie detection technology could improve cooperation and trust between groups by allowing people to prove they are being honest in high-stakes scenarios. On the other hand, it might increase the stability of totalitarian regimes by helping them avoid hiring, or remove, anyone who isn’t a ‘true believer’ in their ideology.
We’re also concerned that advances in AI could make robust totalitarianism possible.
AI-enabled totalitarianism would be even more concerning if we believe that we should have some concern for artificial sentience. This is because it seems plausible that it would be even easier to enforce a totalitarian regime over simulated beings, for example by resetting a simulation to a point before the leaders of the regime lose control. (For more, see this article on how digital people would change the world by Holden Karnofsky, and his later, more detailed article on weak points of this argument).
What can you do to help?
We think more research in this area would be valuable. We’re not currently aware of anyone working full-time on risks from stable totalitarianism, so we believe this area is highly neglected.
For instance, we’d be excited to see further analysis and testing of Caplan’s argument, as well as people working on how to limit the potential risks from these technologies and political changes if they do come about. Listen to our podcast with Caplan for some discussion.
We’re unsure where the overall balance of risks from surveillance lies: it’s hard to say whether the increased safety from potentially existential catastrophes, such as advanced bioweapons, outweighs the risks to political freedom. As a result, it may be especially useful to develop ways of making surveillance more compatible with privacy and public oversight.
We’d also be excited about people working specifically on reducing the risks of stable totalitarianism that could arise as a result of the development of AI. We’re not sure exactly what this kind of work would look like, but these notes gesture at one possible direction.
Learn more about risks of stable totalitarianism
Want to learn more about global issues we think are especially pressing? See our list of issues that are large in scale, solvable, and neglected, according to our research.