‘S-risks’
Summary
People working on suffering risks or s-risks attempt to reduce the risk of something causing vastly more suffering than has existed on Earth so far. We think research to work out how to mitigate these risks might be particularly important. You may also be able to do important work by building this field, which is currently highly neglected — with fewer than 50 people working on this worldwide.
Our overall view
Sometimes recommended
Working on this problem could be among the best ways of improving the long-term future, but we know of fewer high-impact opportunities to work on this issue than on our top priority problems.
Profile depth
Exploratory
Why might s-risks be an especially pressing problem?
We’re concerned about impacts on future generations, such as those posed by existential threats from pandemics or artificial intelligence.
But these are primarily risks of extinction or of humanity’s potential being permanently curtailed — they don’t put special emphasis on avoiding outcomes that involve extreme amounts of suffering.
Research into suffering risks or s-risks attempts to fill this gap.
New technology, such as artificial intelligence, improved surveillance tools, or new nuclear or biological weapons, may well concentrate power in the hands of those who develop and control it. As a result, one possible outcome worse than extinction could be a perpetual totalitarian dictatorship, in which people suffer indefinitely. But researchers on s-risks are often concerned with outcomes even worse than this.
For example, what would happen if such a dictatorship developed the technology to settle space? And if we care about nonhuman animals or even digital minds, the possible scale of future suffering seems astronomical. After all, right now humanity is almost completely insensitive to the welfare of nonhuman animals, let alone potential future digital consciousness.
We don’t know how likely s-risks are.
In large part this depends on how we define the term (we’ve seen various possible definitions). We think it’s very likely that there will be at least some suffering in the future, and potentially on very large scales — potentially vastly more suffering than has existed on Earth so far, especially if there are many, many more individuals in the future and they live a variety of lives. But often when people talk about s-risks, they are talking about the risk of outcomes so bad that they are worse than the extinction of humanity. Our guess is that the likelihood of such risks is very low, much lower than risks of human extinction — which is part of why we focus more on the latter.
However, research on s-risks is so neglected that it’s hard to know. We think there are fewer than 50 people worldwide working explicitly on reducing s-risks.
Types of s-risks
While research in this area is in its early stages, the Center for Reducing Suffering has identified three possible kinds of s-risks:
- Agential s-risks. These s-risks come from actors intentionally causing harm. This could happen because some powerful actor actively wants to cause harm, perhaps out of hatred of or indifference towards other groups (whether other ethnic groups, other species, or other forms of sentient life), or because of negative-sum strategic interactions.
- Incidental s-risks. These s-risks arise as a side effect from some other process. For example, we could see suffering result as a side effect of some kinds of economic productivity (as we currently see from factory farming), attempts to gain information (like animal testing, or simulating conscious beings), or violent entertainment (think gladiator fights).
- Natural s-risks. This is suffering that naturally occurs without intervention from any agent. It’s possible that things like wild animal suffering could someday exist on a huge scale across the universe (or might already).
How likely are these risks?
It’s plausible that the risks of such suffering are sufficiently low that we shouldn’t focus on them. For example, perhaps we should expect strong incentives to make sure these sorts of outcomes never occur. And since agents in general seem to strive towards happiness and away from suffering, we might think this deep asymmetry will keep s-risks low — although it’s unclear whether historically life has in fact continually improved over time.1
That said, there are a few reasons why we might be more concerned:
- If humans don’t go extinct, it seems pretty plausible that technological progress will continue.2 As a result, at some point it seems likely we’ll, in some sense, settle space (as discussed in our profile on space governance), meaning that our future could hold positive or negative value on astronomical scales.
- In general, more advanced technology widens the scope of what can be achieved. If there’s any motivation to create suffering, this means there’s a reasonable possibility that such suffering will in fact be created.
- There are precedents for large-scale suffering, like factory farming, wild animal suffering, and slavery.
What can you do to help?
There are two main ways of reducing s-risks:
- Narrow interventions focusing on the safe development and deployment of specific new technologies, like transformative AI, that could produce s-risks.
- Broad interventions, for example promoting international cooperation (which would reduce incentives for things like war, hostage-taking, and torture).
Since our information on s-risks is so uncertain at this stage, current work tends to focus either on research into these risks and ways to reduce them, or on movement-building to encourage others to spend their time on reducing these risks.
As a result of this uncertainty, we think it’s particularly important that people working on s-risks understand the area well before trying to achieve substantive goals.
It could also help reduce s-risks to work on related issues, such as preventing an AI-related catastrophe or improving space governance.
However, it’s important to note that only some work in these areas is likely to be among the best ways to reduce s-risks in particular (rather than achieving other goals, like reducing existential risks).
For example, out of the many ways we recommend working to reduce the risk of AI-related catastrophes, only some seem directly relevant to s-risk reduction. S-risk-related AI work often focuses on the interaction of AI systems with each other (or with humans), and on ensuring that mistakes in the design or operation of AI systems don’t cause extremely bad outcomes. This includes work to build cooperative AI (finding ways to ensure that even if individual AI systems seem safe, they don’t produce bad outcomes through interacting with other human or AI systems), as well as other work on multi-agent AI systems.
Read more about possible ways to avert s-risks here.
Key organisations in this space
- The Center for Reducing Suffering researches the ethical views that might put more weight on s-risks, and considers practical approaches to reducing s-risks.
- The Center on Long-Term Risk focuses specifically on reducing s-risks that could arise from the development of AI, alongside community-building and grantmaking to support work on the reduction of s-risks.
Learn more about s-risks
- S-risks: Why they are the worst existential risks, and how to prevent them — an introductory talk by Max Daniel at EA Global Boston in 2017
- S-risks: An introduction by Tobias Baumann
- A typology of s-risks by Tobias Baumann
- Avoiding the Worst: How to Prevent a Moral Catastrophe by Tobias Baumann
- Risks of astronomical future suffering by Brian Tomasik
- Podcast: Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe
- Our article on the case for reducing existential risks
Read next: Explore other pressing world problems
Want to learn more about global issues we think are especially pressing? See our list of issues that are large in scale, solvable, and neglected, according to our research.
Notes and references
- There are some reasons to think there is asymmetry in the opposite direction.
For example, while widespread and extreme suffering seems bad under many possible worldviews, the things needed to bring about a flourishing future may be more complex (including things beyond just happiness, such as justice or beauty). Anthony DiGiovanni discusses the idea that disvalue is not as complex as value here.↩
- It’s also possible that the sorts of technological progress required for s-risks could take place even if humans are extinct. For example, this could happen if an advanced AI system causes human extinction (a possibility we discuss in our article on preventing AI-related catastrophes), the AI system continues technological progress, and there are still things that we care about that could be involved in extremely bad outcomes (such as animals or digital minds).↩