Influencing the Far Future


Introduction

In an earlier post we reviewed the arguments in favor of the idea that we should primarily assess causes in terms of whether they help build a society that’s likely to survive and flourish in the very long term. We think this is a plausible position, but it raises the question: what activities in fact do help improve the world over the very long term, and of those, which are best? We’ve been asked this question several times in recent case studies.

First, we propose a very broad categorisation of how our actions today might affect the long-run future.

Second, as a first step to prioritising different methods, we compiled a list of approaches to improve the long-run future that are currently popular among the community of people who explicitly believe the long-run future is highly important.

The list was compiled from our knowledge of the community, which principally includes the people associated with CEA, MIRI, FHI, GCRI and related organisations. Please let us know if you think there are other important types of approach that have been neglected. Further, note that this post is not meant as an endorsement of any particular approach; just an acknowledgement that each has significant support.

Third, we comment on how existing mainstream philanthropy may or may not influence the far future.

(1) What kinds of changes can last?

If we are concerned with influencing the very far future, we might first wonder: what kind of actions could possibly have an effect extending for millions of years, even in principle?

Many possible changes seem like they will “wash out” over timescales much shorter than millions or billions of years, so that the long-term effect is highly unpredictable. One reason for this is that very fundamental characteristics of society are likely to change significantly over such long timescales. Another is that most social characteristics—especially those which we currently have an opportunity to intervene on—tend to be constantly changing, and so in most cases the long-run behavior is either convergence towards a particular stable outcome or indefinite variation which eventually washes out any individual perturbation.

So to identify changes that might last a very long time, we are interested in situations where there are multiple stable outcomes. To date, the academic community researching these issues, composed primarily of researchers at Oxford’s FHI, has identified four broad kinds of changes that may have very long-term effects:

  1. Extinction. If all life on Earth were destroyed, it is possible that it would not be replaced, either from Earth or from anywhere else. If there is any non-negligible chance that life on Earth will give rise to a healthy civilization extending beyond Earth, avoiding premature extinction could have an effect which lasts for billions of years.

  2. Changing values. Even if Earth does give rise to a healthy long-lived civilization, we are uncertain about the values of that civilization. Moreover, with sufficiently advanced technology a change in the values of a civilization might be able to persist essentially indefinitely: a civilization with certain values might work hard to ensure that its successors share important core values (and those successors may then “pay it forward”). Note that the “values” of a civilization might take many shapes, ranging from being enshrined in powerful governments to being embodied by genetic characteristics and predispositions of the population.

  3. Speedup. Speeding up events in society this year looks like it may have a lasting speedup effect on the entire future—it might make all of the future events happen slightly earlier than they otherwise would have. In some sense this changes the character of the far future, although the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.

  4. Unanticipated effects. There may be other long-lasting effects which we have not yet considered. For example, it may be possible for certain kinds of standards or norms to become entrenched and last for a very long time. Although we cannot yet tell a detailed story about how such a change could persist, we might learn more in the future, or we might be able to indirectly predict whether the unanticipated effects of particular choices will tend to be positive or negative.

Although relatively few changes seem likely to have a direct long-term effect, a much broader variety of changes could have important short-term effects, which then catalyze very long-term effects.

For example, choices about the organization of the US government in the 18th century seem unlikely to have a direct long-term effect. However, they have already had a massive influence over events of the previous centuries and could exert a similar influence over coming centuries. This influence may in turn effect other changes which can have a direct long-term influence; for example, the organization of the US government has an effect on the risk of a war which destroys all life on Earth. This influence will also lead to many other changes which themselves have indirect effects; for example, on economic prosperity or the quality of education. It seems unlikely that a cascade of indirect effects could last indefinitely—every step slightly spreads out the impact and makes it more uncertain—but it could continue for quite some time before resulting in a change that has a direct effect on the far future.

Similarly, we can imagine many “speed bumps” in the contemporary development trajectory which might not have a direct long-term influence, but which might facilitate larger catastrophes down the line. For example: war, long-term technological stagnation, catastrophic climate change, social collapse.

(2) Ways to improve the far future

Philanthropists concerned about the far future have pursued many varied interventions, and should be expected to pursue many more in the future. However, the following stand out as the most popular strategies today (note that some of these projects, in particular ACE and GiveWell, are not run by altruists focused on the far future, although they have the support of many people who take the far future seriously):

Movement building (e.g. LessWrong, us)
Managing AI impacts (e.g. MIRI), including understanding AI impacts, developing safety mechanisms, and differential technological development
Analyzing far-future effects (e.g. FHI)
Capacity building (e.g. IARPA’s ACE, GiveWell, the Good Judgement Project), including improving individual and collective decision-making and improving philanthropy

This should certainly not be considered an exhaustive list of possible approaches to improving the far future. But together with the next section, which discusses the flow-through effects of more proximal do-gooding, this inventory does capture most of the ideas that have been advanced by people so far.

Movement building

Many future-looking altruists support movement building and outreach. Organizations pursuing this approach encourage their audiences to make more altruistic decisions, and moreover encourage applying careful thought to those decisions. The aim is to build a community of people able to carry out the other most promising approaches to improving the far future, whatever those turn out to be. We see this as one of the potential benefits of 80,000 Hours.

Another example of this approach is the Center for Applied Rationality, which promotes reflectiveness, metacognition, a quantitative mindset, and careful decision-making, and is creating a broad community of “rationalists” which is relatively closely tied to effective altruism. CFAR is the conceptual successor to the internet community LessWrong, which has already drawn together a very large number of individuals interested in influencing the future, and which has a strong emphasis on futurism and taking unusual ideas seriously.

These efforts vary significantly in how directly they deal with the far future; CFAR works rather indirectly, by promoting some of the ideas that are important to shaping the future and introducing the rest through the surrounding social environment. Other efforts involve more direct appeals.

Naturally, movement building is complementary with the other strategies on this list. The existence of projects working directly on influencing the far future makes it easier to argue that this is a problem which we can and should do something about. The existence of directly useful projects that can absorb funding and human capital also increases the value of having additional people who care about the far future.

Managing AI impacts

Understanding and influencing the impacts of human-level AI is another popular cause amongst future-looking altruists. Some thinkers, for example Eliezer Yudkowsky and Nick Bostrom, think that there is a good chance that the emergence of general human-level AI will have an extremely disruptive social impact, and that there is a significant chance that this impact will be negative unless it is managed carefully. If there is any material chance of human-level AI within the next few decades, then this may be an opportunity for future-looking altruists to have a huge impact on the world. Managing a transition to AI may be a case where concern about the far future encourages us to take particularly seriously an issue that the rest of the world is willing to neglect, and thereby allows future-looking altruists to have an outsized impact.

The most promising options in this space seem to be:

Understanding AI impacts. At the moment we have a very weak understanding of what the field of AI will look like when mature, or what its likely social impacts will be. Some of this uncertainty will be very difficult to resolve because so much technical research remains to be done within AI. But exploring the consequences of particular scenarios we can currently envision, and evaluating available evidence that bears on which scenarios are likely, may help reduce our uncertainty. This could be particularly valuable for understanding what social characteristics might increase our probability of managing a transition to AI successfully, and for helping reduce the probability of surprising developments.

Developing “safety” mechanisms. Despite our uncertainty about AI, we can envision a number of concrete scenarios, and within those scenarios we can identify particular technical problems which would be helpful to solve. For example, we can imagine scenarios in which there is a discontinuous change in AI capabilities, and work to design environments in which such discontinuous changes pose a minimal risk of disruption. Work on these problems today is very unlikely to be directly relevant to managing a transition to AI, but beginning to think about them now may give future work on safety more of a head-start.

Differential technological development. Despite our uncertainty about AI, we may be able to predictably identify certain research programs as more or less conducive to disruptive transitions. For example, it seems plausible that advancing principled understanding of intelligent behavior more quickly than practical understanding of AI implementation will improve our ability to anticipate the impacts of AI and thereby avoid disruptions. If this judgment is correct, then doing work on understanding intelligence today may improve our ability to cope with AI by slightly but predictably changing what we know at the time when human-level AI arrives.

Most researchers in AI are moderately or highly skeptical of the impact of this work, primarily because they think human-level AI is quite distant and it is very likely that we will know much more about AI by the time we have realistic opportunities to influence that transition. This may reflect different views on empirical questions, or may reflect future-looking altruists’ unusual willingness to invest significant effort for modest changes in the probability of AI having a positive impact.

Several of these approaches are currently being pursued at the Machine Intelligence Research Institute, although increasingly the focus is on concrete technical problems that fall under differential technological development, via their anticipated positive effect on the development of transparent and easily understood AIs. Some of this work is also being done at the Future of Humanity Institute.

Analyzing far-future effects

Some thinkers (for example Nick Bostrom) and philanthropists believe that the most important near-term activity is clarifying our understanding of the long-term impacts of contemporary choices. Because so little attention has been paid to these effects, it seems quite plausible that our understanding will be significantly revised by further research. Given the possibility of order-of-magnitude gaps between the effectiveness of different interventions, improving our understanding may have a huge positive impact.

This kind of research might help future-looking altruists prioritize their activities or discover new strategies. The existence of improved arguments might make the case for improving the far future attractive to a much broader group of people. And understanding which traditional activities are highest-impact might influence the behavior of a much broader group of people who consider effects on the far future to be just one consideration amongst many.

Currently a small amount of this research is being done at the Future of Humanity Institute, and a small amount is being done by individuals in the broad communities surrounding the Centre for Effective Altruism and the Machine Intelligence Research Institute.

Capacity building

Probably the broadest and one of the most popular interventions is improving social capabilities that appear to be directly related to successfully navigating future problems. Since most altruists who are explicitly concerned with the far future think that the greatest human challenges will be created by humans, capacity building is only useful to the extent that it increases our ability to solve problems faster than it increases our ability to create them.

There are two basic reasons that capacity building might be more useful for resolving problems than for creating them: it may be that you build abilities that are in and of themselves more useful for resolving problems, or it may be that you focus your efforts to improve abilities within communities that seem to be resolving problems more than creating them. To the extent that most individuals have goals which are essentially aligned with our values, we should expect improvements in people’s ability to reason and plan to systematically improve the quality of the future, because bad things that happen are disproportionately likely to be the result of errors or failures.

The most popular approaches to capacity-building amongst future-looking altruists can be divided into three overlapping categories:

Individual decision-making and forecasting. There seems to be significant variation in the quality of individual decisions, based on experience, intelligence, thoughtfulness and reflectiveness, conscientiousness, etc. By changing some of these characteristics, we might be able to increase people’s ability to get what they want, hopefully with positive impacts for the future. For example, CFAR aims to encourage reflectiveness and metacognition. There is also some interest amongst altruists concerned with the far future in more speculative proposals to improve human cognition via chemical enhancement or genetic engineering.

Collective decision-making and forecasting. Most important decisions are made by groups of people, and the quality of those decisions depends not only on the characteristics of individuals but on the mechanisms they use for discussion, aggregation, decision-making, and so on. Innovations in this space might improve the quality of collective decision-making. For example, Robin Hanson works on decision markets and pushes for their adoption on the theory that they would significantly improve collective decision-making. Philip Tetlock is currently running The Good Judgement Project, which aims to improve forecasting methods.

Methodology in philanthropy. Today it seems like altruistically motivated spending could play an important role in making the world, and in particular the far future, much better. But altruistically-motivated spending faces many distinctive and significant challenges. Improving our collective ability to overcome those challenges would increase the impact of philanthropy, improve the efficiency of government and NGO budgets, help individuals make effective choices with their careers, and on balance seems likely to significantly improve the world. For example, GiveWell is pioneering a hard-nosed approach to charity impact evaluation which seems like it may significantly increase the reach and impact of future philanthropy, including big gains for the far future. Similarly, 80,000 Hours aims to significantly improve the process of choosing a career that makes the world a better place.

As we’ll discuss in the next section, these capacity-building interventions are quite similar to those pursued by many philanthropists who aren’t concerned about the far future, though for slightly different reasons. They seem to differ primarily because philanthropists concerned with the future tend to have a very quantitative mindset and contrarian bent, which leads them to focus on different kinds of capacity-building. This raises the concern that future-looking philanthropists may not be able to add as much value in this space as in more distinctively future-focused causes.

(3) Mainstream altruism and the far future

Unfortunately, very little serious thought has been applied to the question of how to build a civilization that’s robust and prosperous in the very long term. It is somewhat difficult to distinguish between a number of possible explanations for the current situation: the majority of altruists have relatively little concern for future generations, have not encountered the relevant arguments, correctly or incorrectly reject the relevance of these arguments, or some combination of these.

Even if the people involved have never thought about these considerations or don’t care about the far future at all, altruistically-motivated activities in the wider world are probably having some predictable effect on the very far future, and may be having a very significant effect. In many cases people are working on high-leverage ways to make the world better in the near-term (either out of altruism or by successfully internalizing the gains from trade); if positive long-term changes systematically involve positive short-term changes, then we might expect the best long-term opportunities to be amongst causes ordinary altruists support. This is far from obvious, but seems plausible a priori.

We can divide the bulk of the plausible positive effects of the broader world into three overlapping categories: the “ripple effects” of general human empowerment, capacity building work which is not directly concerned with the far future but may nevertheless be relevant, and efforts to mitigate near-term risks which may be directly relevant to the far future or may incidentally mitigate catastrophic risks with long-term consequences.

Note that even if ordinary approaches and strategies turned out to be the most important ways to improve the far future, because so many more people have taken an interest in them it is much less clear that future-looking philanthropists can add significant value. It is most plausible that future-looking philanthropists can contribute significantly to these causes if either (1) the same characteristics that cause people to be interested in the far future also cause them to prioritize better in general, or (2) an identifiable and relatively small subset of these causes has a much larger effect than the others, so that future-looking philanthropists can focus on those causes and still have a big impact.

Mitigating contemporary risks

There are also many intelligent altruists who focus on improving the stability and robustness of the modern world, motivated by general arguments about the significance of foreseeable risks and the moral significance of future generations. The most popular interventions in this class are coping with climate change (mitigation and adaptation), promoting sustainability / social resilience, and promoting nuclear disarmament and global peace.

Again, the real long-term effects of these interventions are not understood, and often the focus is on catastrophic risks which are not existential risks (in particular, social collapse and massive loss of life). At the moment it seems unlikely that even extremely pessimistic climate change or world war scenarios would end life on Earth, and the actual long-term effects of the probable catastrophe scenarios seem quite uncertain. Combined with the relatively low per annum probability of catastrophe, these interventions seem to have a relatively modest effect on long-term aggregate welfare, although it is nevertheless possible that they are amongst the most effective ways to have a long-term effect (it is certainly a priori plausible that the most important catastrophic risks are also the most important extinction risks, even if extinction is quite unlikely).

“Ripple effects” of human empowerment

Many intelligent altruists work on projects that alleviate poverty or resolve challenges in individuals’ lives, resolve structural problems in society, contribute to economic or technological development, or similar. This work might have a large indirect effect on the future.

The main reason for optimism is the observation that broad social interests are basically aligned with the important mechanisms for improving the far future. Extinction has negative effects not only for far future people but for existing people; a distortion in social values that caused them to no longer reflect human values would be bad in the short term as well as the long run; and similar patterns might be expected to hold for unanticipated effects on long-run welfare. Moreover, as long as few people actively work to degrade the far future, the existence of a modest number of altruists concerned with the far future would already tip the scales in favor of general empowerment.

As a result, anything which generally improves humans’ abilities to get what they want seems to be positive on balance, and may be significantly so. Improving education, reducing poverty, fixing structural problems, speeding up economic development, and so on, all boost present people’s ability to get what they want.

Although these activities may indirectly improve the far future, the nature and magnitude of these effects is highly uncertain and hasn’t been investigated very closely. So it might be possible to prioritize much more effectively if these effects were better understood. Moreover, there is some disagreement amongst future-looking philanthropists as to whether many of these effects are even positive on balance. For example, it might be the case that social change mediated by the gradual turnover of generations is a force for good which plays a smaller role in society when overall activity is faster. In that case, it may be that boosting economic growth actually has a negative effect on the world.

Capacity building

The same abilities are needed to cope with a wide range of future challenges, including both challenges that will have a negative impact immediately and those that will have a negative impact on the very far future. Contemporary philanthropists often try to increase society’s ability to cope with such challenges, by improving the quality of education, of governance, of public discourse, of media coverage, and so on.

The case for a positive impact here is quite similar to the case for general human empowerment, but is somewhat more direct and superficially more plausible. Most of these forms of capacity-building would have a direct impact on society’s ability to cope with an extinction risk, for example. Indeed, the capacity-building pursued by futurist philanthropists and the capacity-building pursued by traditional philanthropists seem to differ primarily by the quantitative mindset and contrarian bent of people who are willing to focus on the very far future.