Risks of stable totalitarianism

The Khmer Rouge ruled Cambodia for just four years, yet in that time they murdered about one-quarter of Cambodia’s population.

Even short-lived totalitarian regimes can inflict enormous harm. In the 20th century alone, they committed some of the most horrific crimes against humanity in history. In addition to the Cambodian Genocide, these include:

  • The Holocaust, in which Nazi Germany systematically murdered six million Jews
  • The famine caused by Mao’s Great Leap Forward, which killed tens of millions in China
  • Stalin’s Great Terror and the Soviet Gulag system, which killed millions more

These events together claimed tens of millions, and perhaps over 100 million, lives. That’s comparable to the combined death count of both world wars (and World War II itself could be added to the list).

Many more people were forced to live in a state of terror and oppression, unable to express themselves and fearing that their lives could be ruined any day.

And that all occurred within a century.

Perhaps even more concerning is the possibility of totalitarian regimes that last a lot longer than that. Stable totalitarianism is the idea of a totalitarian regime lasting many millennia, or even in perpetuity.

At first glance, this may seem vanishingly unlikely. While some totalitarian regimes have managed to persist for decades, all have eventually succumbed to external forces, internal resistance, or value drift.

But future technological developments may help totalitarian leaders overcome these barriers:1

  • Autonomous weapons may put entire armies in the hands of one person.2
  • Surveillance technology could make it impossible to organise resistance.
  • Dictators could use advanced AI systems to gain unassailable economic and military advantages. Since AI systems don’t die, they could be used to lock in an unchanging, oppressive political regime for millennia to come.
  • If advanced AI systems are worthy of moral consideration themselves, a kind of digital totalitarianism could be created. An oppressive state could refuse to grant these systems appropriate rights and protections, instead using them to advance national goals in ways that cause suffering and prove deeply harmful.

We also have to contend with the possibility that efforts to mitigate other global risks open new pathways towards totalitarianism. Global cooperation, including monitoring and enforcement regimes, may be needed to stop bad actors from misusing powerful technologies. However, creating these institutions also risks centralising power, which could be captured by totalitarian actors.

We don’t think these outcomes are particularly likely. A totalitarian regime aiming to lock itself in like this would have to overcome serious technical challenges, not to mention resistance from its citizens and the rest of the world. But neither can we dismiss the possibility outright.

Influencing global historical events like the rise and fall of empires feels impossible. But there may be specific things you can do that can make such a dystopian future less likely, such as:

  • Improving the governance of advanced AI systems to stop dictators from misusing them
  • Ensuring global governance regimes prevent catastrophes while also protecting human rights and freedom
  • Inventing defensive technologies that make society more resilient to oppressive actors
  • Preventing the decay of democratic institutions

Below we talk about each of these efforts in more depth, and discuss how you can contribute to them.

Summary

Working on this problem may be very important for improving the long-term future of humanity. However, it’s probably somewhat less of a priority than our highest priority problems because the path to stable totalitarianism involves multiple unlikely steps.

Still, there are some promising career options in this area you might want to consider, particularly if you think stable totalitarianism is more likely than we do, or if you’re particularly well-placed to work on it.

Our overall view

Sometimes recommended

Working on this problem could be among the best ways of improving the long-term future, but we know of fewer high-impact opportunities to work on this issue than on our top priority problems.

Scale  

A totalitarian regime seeking to achieve stability faces some serious obstacles. First, it would have to assert dominance over a substantial fraction of the world, either through conquest or by taking control of a great power state or hypothetical world government. Second, it would have to entrench its dominance by eliminating external competition, internal resistance, and political and social change. That’s a tall order. Despite dreaming of such dominance, no previous totalitarian state has even come close.

We think there’s some chance emerging technologies — including advanced AI systems, surveillance techniques, autonomous weapons, and cyber warfare capabilities — will soon make such stability a realistic possibility for powerful totalitarian states. This could have disastrous consequences for humanity. Totalitarian regimes have caused enormous suffering in the past, committing some of the largest and most horrifying crimes against humanity ever experienced. Stable totalitarianism could last indefinitely, locking humanity into a future of suffering and oppression rather than prosperity and flourishing.

The only reason we think it might not be one of the absolute most important problems in the world is that it seems less likely to occur than some of the other potential global catastrophes we’ve considered. We’re very uncertain about this, but a highly simplified calculation puts the chance of stable totalitarianism occurring at below 1%.

Neglectedness  

This seems like a highly neglected problem. In fact, we don’t know of anybody whose work focuses entirely on stable totalitarianism.

But there’s a wider ecosystem of organisations doing related work. Many researchers, think tanks, and international organisations seek to protect human rights and freedoms against oppressive governments. Other organisations do research and advocacy to understand and resist democratic backsliding. An uncertain estimate of the amount of money spent on somewhat-related problems like these is between $10M and $100M.

Solvability  

It seems hard to make progress on this problem. Totalitarian states have typically been brought down by war, resistance, and failed succession plans — each of which seems difficult for individuals to influence. But we do think there are a few promising interventions that might help make it somewhat less likely for totalitarian states to arise and use technology to entrench their dominance.

Profile depth

Medium-depth 

This is one of many profiles we've written to help people find the most pressing problems they can solve with their careers. Learn more about how we compare different problems, see how we try to score them numerically, and see how this problem compares to the others we've considered so far.

Why might the risk of stable totalitarianism be an especially pressing problem?

Totalitarian regimes killed over 100 million people in less than 100 years in the 20th century. The pursuit of national goals with little regard for the wellbeing or rights of individuals makes these states wantonly cruel. The longer they last, the more harm they could potentially do.

Could totalitarianism be an existential risk?

Totalitarianism is a particular kind of autocracy, a form of government in which power is highly concentrated. What makes totalitarian regimes distinct is the complete, enforced subservience of the entire populace to the state.

Most people do not welcome such subservience. So totalitarian states are also characterised by mass violence, surveillance, intrusive policing, and a lack of human rights protections, as well as a state-imposed ideology to maintain control.

So far, most totalitarian regimes have only survived for a few decades.

If one of these regimes were to maintain its grip on power for centuries or millennia, we could call it stable totalitarianism. All totalitarian regimes threaten their citizens and the rest of the world with violence, oppression, and suffering. But a stable totalitarian regime would also end any hope of the situation improving in the future. Millions or billions of people would be stuck in a terrible situation with very little hope of recovery — a fate as bad as (or even worse than) extinction.

Is any of this remotely plausible?

For stable totalitarianism to ruin our entire future, three things have to happen:

  1. A totalitarian regime has to emerge.
  2. It has to dominate all, or at least a substantial part, of the world.
  3. It has to entrench itself indefinitely.

No state has even come close to achieving that kind of domination before. It’s been too difficult for them to overcome the challenges of war, revolution, and internal political changes. Step three, in particular, might seem especially far-fetched.

New technologies may make a totalitarian takeover far more plausible though.

For example:

  • Physical and digital surveillance may make it nearly impossible to build resistance movements.
  • Autonomous weapons may concentrate military power, making it harder to resist a totalitarian leader.
  • Advanced lie detection may make it easier to identify dissidents and conspirators.
  • Social manipulation technologies may be used to control the information available to people.

Many of these technologies are closely related to developments in the field of AI. AI systems are rapidly developing new capabilities. It’s difficult to predict how this will continue in the future, but we think there’s a meaningful chance that AI systems come to be truly transformative in the coming decades. In particular, AI systems that can make researchers more productive, or even replace them entirely, could lead to rapid technological progress and much faster economic growth.

A totalitarian dictator could potentially use transformative AI to overcome each of the three forces that have impeded them in the past.

  • AI could eliminate external competition: If one state controls significantly more advanced AI systems than its rivals, then it may have a decisive technological edge that allows it to dominate the world through conquest or compellence (i.e. forcing other states to do something by threatening them with violence if they refuse).
  • AI could crush internal resistance: AI could accelerate the development of multiple technologies dictators would find useful, including the surveillance, lie detection, and weaponry mentioned above. These could be used to detect and strangle resistance movements before they become a threat.
  • AI could solve the succession problem: AI systems can last much longer than dictators and don’t have to change over time. An AI system directed to maintain control of a society could keep pursuing that goal long after a dictator’s death.

Stable totalitarianism doesn’t seem like an inevitable, or even particularly probable, result of technological developments. Bids for domination from dictators would still face serious opposition. Plus, new technologies could also make it harder for a totalitarian state to entrench itself. For example, they could make it easier for people to share information to support resistance movements.

But the historical threat of totalitarianism, combined with some features of modern technology, makes stable totalitarianism seem plausible.

Below, we discuss in more depth each of the three prerequisites: emergence, domination, and entrenchment.

Will totalitarian regimes arise in future?

Totalitarianism will probably persist in the future. Such regimes have existed throughout history and still exist today. About half the countries in the world are classified as “autocratic” by V-Dem, a research institute that studies democracy. Twenty percent are closed autocracies where citizens don’t get to vote for party leaders or legislative representatives.

Democracy has seen a remarkable rise worldwide since the 1800s. Before 1849, every country in the world was classified as autocratic due to limited voting rights. Today, 91 countries — over half of V-Dem’s dataset — are democratic.

But progress has recently slowed and even reversed. The world is slightly less democratic today than it was 20 years ago. That means we should probably expect the world to contain authoritarian regimes, including totalitarian ones, for at least decades to come.

Could a totalitarian regime dominate the world?

Broadly there seem to be two main ways a totalitarian regime could come to dominate a large fraction of the world. First, it could use force or the threat of force to assert control. Second, it could take control of a large country or even a future world government.

Domination by force

Many totalitarian regimes have been expansionist.

Hitler, for example, sought to conquer “heartland” Europe to gain the resources and territory he thought he needed to achieve global domination.3 While he didn’t get far, others have had more success:

  • 20th century communist rulers wanted to create a global communist state. In the mid-1980s, about 33% of the world’s people lived under communist regimes.4
  • At its peak, the British Empire comprised about 25% of the world’s land area and population.
  • The Mongols controlled about 20% of the world’s land and 30% of its people.

In recent decades, ambitious territorial conquest has become much less common. In fact, there have been almost no explicit attempts to take over large expanses of territory for almost 50 years.5 But, as Russia’s invasion of Ukraine shows, we shouldn’t find too much comfort in this trend. Fifty years just isn’t that long in the grand sweep of history.

Technological change could make it easier for one state to control much of the world. Historically, a technological edge has often given states huge military advantages. During the Gulf War, for example, American superiority in precision-guided munitions and computing power proved overwhelming.6

Some researchers think that the first actor to obtain future superintelligent AI systems could use them to achieve world domination.7 Such systems could dramatically augment a state’s power. They could be used to coordinate and control armies and monitor external threats. They could also increase the rate of technological innovation, giving the state that first controls them a significant edge over the rest of the world in the key technologies we discussed previously, like weaponry, targeting, surveillance, and cyber warfare.

AI could provide a decisive advantage just by being integrated into military strategies and tactics. Cyberattack capabilities, for example, could disrupt enemy equipment and systems. AI systems could also help militaries process large amounts of data, react faster to enemy actions, coordinate large numbers of soldiers or autonomous weapons, and more accurately strike key targets.8

There’s even the possibility that military decision making could be turned over in part or in whole to AI systems. This idea currently faces strong resistance, but if AI systems prove far faster and more efficient than humans, competitive dynamics could push strongly in favour of more delegation.

But a state with such an advantage over the rest of the world might not even have to use deadly force. Simply threatening rivals may be enough to force them to adopt certain policies or to turn control of critical systems over to the more powerful state.

In sum, AI-powered armies, or just the threat of being attacked by one, could make the country that controls advanced AI more powerful than the rest of the world combined. If it so desired, that country could well use that advantage to achieve the global domination that past totalitarian leaders have only been able to dream of.

Controlling a powerful government

A totalitarian state could also gain global supremacy by taking control of a powerful government, such as one of the great powers or a hypothetical future world government.

Expansionist totalitarian parties like the Nazis tried to gain influence by conquering large parts of the world. But the Nazis gained a great deal of power simply by taking control of Germany itself.

If a totalitarian actor gained control of one of the world’s most powerful countries today, it could potentially control a significant fraction of humanity’s future (in expectation) by simply entrenching itself in that country and using its influence to oppress many people indefinitely and shape important issues like how space is governed. In fact, considering the prevalence of authoritarianism, this may be the most likely way totalitarianism could shape the long-term future.

There’s also the possibility that such an actor could gain even more influence by taking over a global institution.

Currently, countries coordinate many policies through international institutions like the United Nations. However, the enforcement mechanisms available to these institutions are imperfect: applied slowly and unevenly.

We don’t know for sure how international cooperation will evolve in the future. However, international institutions could have more power than they currently do. Such institutions facilitate global trade and economic growth, for example. They may also help states solve disagreements and avoid conflict. They’re often proposed as a way to manage global catastrophic risks too. States could choose to empower global institutions to realise these benefits.

If such an international framework were to form, a totalitarian actor could potentially leverage it to gain global control without using force (just as totalitarian actors have seized control of democratic countries in the past). This would be deeply worrying because a global totalitarian government would not face pressure from other states, which is one of the main ways totalitarianism has been defeated in the past.

Economist Bryan Caplan is particularly concerned that fear of catastrophic threats to humanity like climate change, pandemics, and risks from advanced AI could motivate governments to implement policies that are particularly vulnerable to totalitarian takeover, such as widespread surveillance.9

We think there are difficult tradeoffs to consider here. International institutions with strong enforcement powers might be needed to address global coordination problems and catastrophic risks. Nevertheless, we agree that there are serious risks as well, including the possibility that they could be captured by totalitarian actors. We aren’t sure how exactly to trade these things off (hence this article)!

Could a totalitarian regime last forever?

Some totalitarian leaders have attempted to stay in power indefinitely. In What We Owe the Future, William MacAskill discusses several instances of authoritarian leaders seeking to extend their lives:10

  • Multiple Chinese emperors experimented with immortality elixirs. (Some of these potions probably contained toxins like lead, making them more likely to hasten death than defeat it.)
  • Kim Il-Sung, the founder of North Korea, tried to extend his life by pouring public funds into longevity research and receiving blood transfusions from young Koreans.
  • Nursultan Nazarbayev, who ruled Kazakhstan for nearly three decades, also spent millions of state dollars on life extension, though these efforts reportedly only produced a “liquid yogurt drink” called Nar.

But of course, none have even got close to entrenching themselves permanently. The Nazis ruled Germany for just 12 years. The Soviets controlled Russia for 74. North Korea’s Kim dynasty has survived 76 years and counting.

They have inevitably fallen due to some combination of three forces:

  1. External competition: Totalitarian regimes pose a risk to the rest of the world and face violent opposition. The Nazis, Mussolini’s Italy, the Empire of Japan, and Cambodia’s Khmer Rouge were all defeated militarily.
  2. Internal resistance: Competing political groups or popular resistance can undermine the leaders.
  3. The “succession problem”: These regimes sometimes liberalise or collapse entirely after particularly oppressive leaders die or step down. For example, the USSR collapsed a few years after Mikhail Gorbachev came to power.

To date, these forces have made it impossible to entrench an oppressive regime in unchanging form for more than a century or so.

But once again, technology could change this picture. Advanced AI — and the military, surveillance, and cyberweapon technologies it could accelerate — may be used to counteract each of the three forces.

For external competition, we’ve already discussed how AI might allow leading states to build a substantial military advantage over the rest of the world.

After using that advantage to achieve dominance over the rest of the world, a totalitarian state could use surveillance technologies to monitor the technological progress of any actors — external or internal — that could threaten its dominance. With a sufficient technological edge, it could then use kinetic and cyber weapons to crush anyone who showed signs of building power.

After eliminating internal and external competition, a totalitarian actor would just have to overcome the succession problem to make long-term entrenchment a realistic possibility. This is a considerable challenge. Any kind of change in institutions or values over time would allow for the possibility of escape from totalitarian control.

But advanced AI could also help dictators solve the succession problem.

Perhaps advanced AI will help dictators invent more effective, dairy-free life extension technologies. However, totalitarian actors could also direct an advanced AI system to continue pursuing certain goals after their death. An AI could be given full control of the state’s military, surveillance, and cybersecurity resources. Meanwhile, a variety of techniques, such as digital error correction, could be used to keep the AI’s goals and methods constant over time.11

This paints a picture of truly stable totalitarianism. Long after the dictator’s death, the AI could live on, executing the same goals, with complete control in its area of influence.

The chance of stable totalitarianism

So, how likely is stable totalitarianism?

This is clearly a difficult question. One complication is that there are multiple ways stable totalitarianism could come to pass, including:

  • Global domination: A totalitarian government could become so powerful that it has a decisive advantage over the rest of the world. For example, it could develop an AI system so powerful it can prevent anyone else from obtaining a similar system. It could then use this system to dominate any rivals and oppress any opposition, achieving global supremacy.
  • Centralised power drifts toward totalitarianism: International institutions could become more robust and powerful, perhaps as a result of efforts to increase coordination, reduce conflict, and mitigate global risks. National governments may even peacefully and democratically cede more control to the international institutions. But efforts to support cooperation and prevent new technologies from being misused to cause massive harm could, slowly or suddenly, empower totalitarian actors. They may use these very tools to centralise and cement their power.
  • Collapse of democracy: An advanced AI system could centralise power to the point where someone in a non-totalitarian state, or perhaps a global institution, could use it to undermine democratic institutions, disempower rivals, and install themselves as a newly minted totalitarian leader.
  • One country is lost: A totalitarian government in one large country could use surveillance tools, AI, and other technologies to entrench its rule over its population indefinitely. It wouldn’t even have to be the first to invent the technology: it could re-invent, buy, copy, or steal it after it’s been invented elsewhere in the world. Although this wouldn’t destroy all the value of our future, a substantial fraction of humanity could be condemned to indefinite oppression.

The key takeaway from the preceding sections is that there does seem to be a significant chance powerful AI systems will give someone the technical capacity to entrench their rule in this way. The key question is whether someone will try to do so — and whether they’ll succeed.

Here’s a rough back-of-the-envelope calculation, estimating the risk over roughly the next century:

  • Chance that future technologies, particularly AI, make entrenchment technically possible: 25%
  • Chance that a leader or group tries to use the technology to entrench their rule: 25%
  • Chance that they will achieve a decisive advantage over their rivals and successfully entrench their rule: 5%
  • Overall risk: 0.3%, or about 1 in 330

We’re pretty uncertain about all of these numbers, and some may strike you as too low or too high. If you plug in your own numbers, as in the sketch below, you can see how much the overall risk changes.
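Here’s a minimal sketch of that calculation in Python. The three inputs are just the rough guesses listed above; substitute your own to produce your own estimate:

```python
# Back-of-the-envelope estimate from the bullet points above.
# All three inputs are rough guesses -- substitute your own.
p_possible = 0.25   # future tech (especially AI) makes entrenchment technically possible
p_attempted = 0.25  # a leader or group tries to use it to entrench their rule
p_succeeds = 0.05   # they gain a decisive advantage and successfully entrench

overall_risk = p_possible * p_attempted * p_succeeds
print(f"Overall risk: {overall_risk:.2%}, or about 1 in {round(1 / overall_risk)}")
# Prints: Overall risk: 0.31%, or about 1 in 320 (rounded to roughly 0.3% above)
```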

Some experts have given other estimates of the risk. Caplan, in particular, has estimated that there’s a 5% chance that “a world totalitarian government will emerge during the next one thousand years and last for a thousand years or more.”12

But another key takeaway from the preceding sections is that, while stable totalitarianism seems possible, it also seems difficult to realise — especially in a truly long-term sense. A wannabe eternal dictator would have to solve technical challenges, overcome fierce resistance, and preempt a myriad of future social and technical changes that could threaten their rule.

That’s why we think the chance of a dictator succeeding, assuming it’s possible and they try, is probably low. We’ve put it at 5%. However, it could be much higher or lower. There’s currently a lot of scope for disagreement, and we’d love to see more research into this question. The most extensive discussion we’ve seen of how feasible it would be for a ruler to entrench long-term control with AI is in a report on Artificial General Intelligence and Lock-In by Lukas Finnveden, C. Jess Riedel, and Carl Shulman.

It’s also worth noting that our estimate is low in part because we expect the rest of the world to resist attempts at entrenchment. You might choose to work on this problem partly to ensure that resistance materialises.

Bottom line: we think that stable totalitarianism is far from the most likely future outcome. But we’re very unsure about this, the risk doesn’t seem negligible, and part of the reason it seems low at all is that stable totalitarianism would clearly be so awful that we expect people would make a big effort to stop it.

Preventing long-term totalitarianism in particular seems pretty neglected

The core of the argument sketched above is that the future will likely contain totalitarian states, one of which could obtain very powerful AI systems that give it the power to eliminate competition and extend its rule long into the future.

Even the impermanent totalitarianism humanity has experienced so far has been horrendous. So the prospect that our descendants could find themselves living under such regimes for millennia to come is distressing.

Yet we don’t know of anyone working directly on the problem of stable totalitarianism.

If we count indirect efforts, the field starts to seem more crowded. As we recount below, there are many think tanks and research institutes working to protect democratic institutions, which implicitly work against stable totalitarianism by trying to reduce the number of countries that become totalitarian in the first place. Their combined budgets for this kind of work are probably on the order of $10M to $100M annually.

There’s also the fact that the rise of a stable totalitarian superpower would be bad for everyone else in the world. That means most other countries are strongly incentivised to work against this problem. From this perspective, perhaps we should count some large fraction of the military spending of NATO countries (almost $1.2 trillion in 2023) as part of the anti-totalitarian effort. Some portion of the diplomatic and foreign aid budgets of democratic countries is also devoted to supporting democratic institutions around the world (e.g. the US State Department employs 13,000 Foreign Service members).

One could argue that many of these resources are allocated inefficiently. Or, as we discussed above, some of that spending could raise other risks if it drives arms races and stokes international tension. But if even a small fraction of that money is spent on effective interventions, marginal efforts in this area start to seem a lot less impactful.

In addition to questions of efficiency, the relevance of this spending to the problem of stable totalitarianism specifically is still debatable. Our view is that the particular pathways which could lead to the worst outcomes — a technological breakthrough that brings about the return of large-scale conquest and potentially long-term lock-in — are not on the radar of basically any of the institutions mentioned.

Why might you choose not to work on this problem?

All that said, maybe nobody’s working on this problem for a reason.

First, it may not seem that likely, depending on your views (and if we’re wrong about the long-term possibilities of advanced AI systems, then it might even be impossible for a dictator to take and entrench their control over the world).

Second, it might not be very solvable. Influencing world-historical events like the rise and fall of totalitarian regimes seems extremely difficult!

For example, we mentioned above that the three ways totalitarian regimes have been brought down in the past are through war, resistance movements, and the deaths of dictators. Most of the people reading this article probably aren’t in a position to influence any of those forces (and even if they could, it would be seriously risky to do so, to say the least!).

What can you do to help?

To make progress on this problem, we may need to aim a little bit lower than winning wars or fomenting revolutions.

But we do think there are some things you can do to help solve this problem. These include:

  • Working on AI governance
  • Researching downside risks of global coordination
  • Helping develop defensive technologies
  • Protecting democratic institutions

AI governance

First, it’s notable that most — possibly all — plausible routes to stable totalitarianism leverage advanced AI. You could go into AI governance to help establish laws and norms that make it less likely AI systems are used for these purposes.

You could help build international frameworks that broadly shape how AI systems are developed and deployed. It’s possible that the potentially transformative benefits and global risks AI could bring will create great opportunities for international cooperation.

Eventually the world might establish shared institutions to monitor where advanced AI systems are being developed and what they may be used for. This information could be paired with remote shutdown technologies to prevent malicious actors, including rogue states and dictators, from obtaining or deploying AI systems that threaten the rest of the world. For example, there may be ways to legally or technically direct how autonomous weapons are developed to prevent one person from being able to control large armies.

It’s in everyone’s interest to ensure that no one country uses AI to dominate the future of humanity. If you want to help make this vision a reality, you could work at organisations like the Centre for the Governance of AI, the Oxford Martin AI Governance Initiative, the Institute for AI Strategy and Policy, the Institute for Law and AI, RAND’s Technology and Security Policy Center, the Simon Institute, or even large multilateral policy organisations and related think tanks.

If this path seems exciting, you might want to read our career review of AI governance and policy.

Researching risks of global coordination

Of course, concerns about the development of oppressive world governments are motivated by exactly this vision for global governance, which includes quite radical proposals such as monitoring all advanced AI development.

If such institutions are needed to tackle global catastrophic risks, we may have to accept some risk of them enabling overly intrusive governance. Still, we think we should do everything we can to mitigate this cost where possible and continue researching all kinds of global risks to ensure we’re making good tradeoffs here.

For example, you could work to design effective policies and institutions that are minimally invasive and protect human rights and freedoms. Or, you could analyse which policies to reduce existential risk need to be addressed at the global level and which can be addressed at the state level. Allowing individual states to tackle risks also seems more feasible than coordinating at the global level.

We haven’t done a deep dive in this space, but you might be able to work on this issue in academia (like at the Mercatus Center, where Bryan Caplan works), at some think tanks that work on freedom and human rights issues (like Chatham House), or in multilateral governance organisations themselves.

You can also listen to our podcast with Bryan Caplan for more discussion.

Working on defensive technologies

Another approach would be to work on technologies that protect individual freedoms without empowering bad actors. Many technologies, like global institutions, have benefits and risks: they can be used by both individuals to protect themselves and malicious actors to cause harm or seize power. If you can speed up the development of technologies that help individuals more than bad actors, then you might make the world as a whole safer and reduce the risk of totalitarian takeover.

Technologist Vitalik Buterin calls this defensive accelerationism. There’s a broad range of such technologies, but some that may be particularly relevant for resisting totalitarianism could include:

  • Tools for identifying misinformation and manipulative content
  • Cybersecurity tools
  • Some privacy-enhancing technologies like encryption protocols
  • Biosecurity policies and tools, like advanced PPE, that make it harder for malicious actors to get their way by threatening other states with biological weapons

The short length of that list reflects our uncertainty about this approach. Beyond Buterin’s essay, there’s not much existing work in this area to guide additional efforts.

It’s also very hard to predict the implications of new technologies. Some of the examples Buterin gives seem like they could also empower totalitarian states or other malicious actors. Cryptographic techniques can be used by both individuals (to protect themselves against surveillance) and criminals (to conceal their activities from law enforcement). Similarly, cybersecurity tools meant to help individuals could also be used by a totalitarian actor to thwart multilateral attempts to disrupt dangerous AI development within its borders.

That said, we think cautious, well-intentioned research efforts to identify technologies that empower defenders over attackers could be valuable.

Another related option is to research potential downsides from other technologies discussed in this article. Some researchers dedicate their time to understanding issues like risks to political freedom from advanced surveillance and the dangers of autonomous weapons.

Protecting democratic institutions

A final approach to consider is supporting democratic institutions to prevent more countries from sliding towards authoritarianism and, potentially, totalitarianism.

We mentioned that, after over a century of progress, global democratisation has recently stalled. Some researchers have claimed that we are experiencing “democratic backsliding” globally, with populists and partisans subverting democratic institutions. Although this claim is controversial, because it’s highly politicised and “democraticness” is hard to measure, backsliding does seem to be a real phenomenon.

Given what we know, it at least seems like a trend worth monitoring. If democratic institutions are under threat globally, protecting them is important: it makes it harder for more countries to become totalitarian and reduces the chance that a totalitarian state gains a decisive advantage through AI development. It also raises the chance that democratic values, such as freedom of expression and tolerance, shape humanity’s long-term future.13

There is a large ecosystem of research and policy institutes working on this problem in particular. These include think tanks like V-Dem, Freedom House, the Carnegie Endowment for International Peace, and the Center for Strategic and International Studies. There are also academic research centres like Stanford’s Center on Democracy, Development and the Rule of Law and Notre Dame’s Democracy Initiative.

(Note: These are just examples of programs in this area. We haven’t looked deeply at their work.)



Notes and references

  1. In AI governance: A research agenda, Allan Dafoe categorises robust totalitarianism as one of four sources of catastrophic risk from AI, emphasising the importance of emerging technologies. He argues:

    Robust totalitarianism could be enabled by advanced lie detection, social manipulation, autonomous weapons, and ubiquitous physical sensors and digital footprints. Power and control could radically shift away from publics, towards elites and especially leaders, making democratic regimes vulnerable to totalitarian backsliding, capture, and consolidation.

  2. Totalitarian leaders often rely on the threat of military force to control their populace. On occasion, military leaders have resisted orders they thought were unjust or tyrannical, undermining dictatorial control. Autonomous weapons would undermine this kind of resistance. (See “Rebellion of the Army” in von Nostitz, 1997.)

  3. The Führer gave expression to his unshakable conviction that the Reich will be the master of all Europe. We shall yet have to engage in many fights, but these will undoubtedly lead to most wonderful victories. From there on the way to world domination is practically certain. Whoever dominates Europe will thereby assume the leadership of the world.

    - Joseph Goebbels, Reich Minister of Propaganda, May 8, 1943

  4. According to Communism by Thomas Lansford (p. 10).

  5. See Altman (2020) for some discussion.

  6. For more information, see the discussion of offsets in the section “Overview of technology competition” in Clare and Ruhl (2024).

  7. This capability has been called a decisive strategic advantage, a term philosopher Nick Bostrom uses in Superintelligence. But it’s not just AI researchers who think this. Mark Esper, when he was the US Secretary of Defense, reportedly said that “advances in AI have the potential to change the character of warfare for generations to come. Whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years. We have to get there first.”

  8. Advanced cyber capabilities will also help defend against cyberattacks. However, even if AI-boosted cyber capabilities prove defence-dominant in the long term, devastating offensive capabilities could be advantaged during the transition period. For discussion see Garfinkel and Dafoe (2019), as well as Schneider (2021).

  9. See Caplan (2008).

    Philosopher Nick Bostrom’s “The Vulnerable World Hypothesis” (2019) is also often cited and sometimes portrayed as advocating for a global surveillance state. We think this is a mistake. The paper speculates hypothetically that, if it were the case that any technological development had some chance of destroying the world, then it could also be the case that “preventive policing and global governance” may be needed to avoid catastrophe. That is, it’s another illustration of the difficult dynamics considered in this article — the importance of maintaining individual freedom while mitigating collective risk externalities.

  10. See chapter four of William MacAskill’s What We Owe the Future (2022, Oneworld Publications). MacAskill has also discussed lock-in on the 80,000 Hours podcast. (Note that MacAskill is a co-founder of 80,000 Hours.)

  11. A totalitarian dictator in charge of such an AI could use error-correcting software, make many copies, and order the AI to transfer its code to new hardware whenever its hardware begins to wear down.
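
    As a toy illustration of one such technique (our own sketch, not taken from the sources above): a system can keep stored goals constant indefinitely by holding several redundant copies and periodically resetting every copy to the majority value, so the data survives as long as fewer than half the copies are corrupted between repair cycles.

    ```python
    from collections import Counter

    def repair(copies: list[bytes]) -> list[bytes]:
        """Majority-vote error correction: reset every replica to the most
        common value, so the stored data stays constant provided fewer than
        half the copies are corrupted between repair cycles."""
        majority, _ = Counter(copies).most_common(1)[0]
        return [majority] * len(copies)

    # Three replicas of a fixed directive; one suffers bit rot.
    copies = [b"goal-v1", b"goal-v1", b"goal-XX"]
    copies = repair(copies)
    assert all(c == b"goal-v1" for c in copies)
    ```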

  12. Caplan wrote:

    How seriously do I take the possibility that a world totalitarian government will emerge during the next one thousand years and last for a thousand years or more? Despite the complexity and guesswork inherent in answering this question, I will hazard a response. My unconditional probability — i.e., the probability I assign given all the information I now have — is 5%. I am also willing to offer conditional probabilities. For example, if genetic screening for personality traits becomes cheap and accurate, but the principle of reproductive freedom prevails, my probability falls to 3%. Given the same technology with extensive government regulation, my probability rises to 10%. Similarly, if the number of independent countries on earth does not decrease during the next thousand years, my probability falls to .1%, but if the number of countries falls to 1, my probability rises to 25%.

  13. See chapter four of What We Owe the Future for more discussion.