Most people think we should have some concern for future generations, but this obvious-sounding idea leads to a surprising conclusion.
Since the future might be very long, there could be far more people in future generations than in the present generation. This means that if you want to help as many people as you can in an impartial way — i.e. without regard to people’s race, class, or where or when they’re born — your key concern should probably be to ensure that the future goes as well as it can for all generations to come. Previously, we called this the “long-term value thesis”, though it is now most commonly called ‘longtermism’.
This thesis is often confused with the claim that we shouldn’t do anything to help people in the present generation. But longtermism is about what most matters – what we should do about it is a further question. It might turn out that the best way to help those in the future is to improve the lives of people in the present, such as by providing health and education. The difference is that the biggest reason to help those in the present is to improve the long term.
The arguments for and against longtermism are a fascinating new area of research. Many of the key advances have been made by philosophers who have spent time in Oxford, like Derek Parfit, Nick Bostrom, Nick Beckstead, Hilary Greaves and Toby Ord. We’ve found it incredibly interesting to watch them deepen and refine these arguments over the last 10 or so years, and we think longtermism might well turn out to be one of the most important discoveries of effective altruism so far.
In the rest of this article, we give a popular overview of the main arguments for and against longtermism, starting by presenting the above argument for longtermism in a little more depth. Then we discuss three common objections.
Why think the future matters more than the present?
What are the things you most value in human civilization today? People being happy and free of suffering? People fulfilling their potential? Knowledge? Art?
In almost all of these cases, there’s potentially a lot more of it to come in the future:
The Earth could remain habitable for 600-800 million years,1 so there could be about 21 million future generations,2 and if we do the needed work, they could lead great lives — whatever you think “great” consists of. Even if you don’t think future generations matter as much as the present generation, since there could be so many of them, they could still be our key concern.
Civilization could also eventually reach other planets — there are 100 billion planets in the Milky Way alone.3 So, even if there’s only a small chance of this happening, there could also be dramatically more people per generation than there are today. By reaching other planets, civilization could also last even longer than if we stay on the Earth.
If you think it’s good for people to live happier and more flourishing lives, there’s a possibility that technology and social progress will let people have much better and longer lives in the future (including people in the present generation). So, putting these first three points together, there could be many more generations, with far more people, with the potential to be living much better lives. The three dimensions multiply together to give the potential scale of the future.
If one of your main values is justice and virtue, then the future could be far more just and virtuous than the world today.4
If you greatly value artistic and intellectual achievement, a far wealthier and bigger civilization could have far greater achievements than our own.
And so on.
This suggests that, insofar as you care about making the world a better place, your key concern should be to increase the chance that the future goes well rather than badly.
This isn’t to deny that you have special obligations to your friends and family, and an interest in your own life going well. We’re only talking about what matters insofar as you care about helping others impartially; philosophers often call this what matters “from the point of view of the universe.” We think everyone should care about the lives of all others in this sense to some degree, even if you care about other things as well.
People often assume the long-term value thesis is especially about the possibility of there being lots of people in the future, and so only of interest to a narrow range of ethical views (especially utilitarian totalism), but as we can see in the list above, it’s actually much broader. It just rests on the idea that if something is of value, it’s better to have more of what’s valuable rather than less, and that it’s possible to have much more of it in the future. This might include non-welfare values, such as beauty or knowledge. The arguments are also not about humans; rather, they concern whatever agents in the future might have moral value, including other species.
People also often think that the long-term value thesis assumes the future will have positive rather than negative value. Quite the opposite is true — the future could also contain far more suffering than the present, and this implies even more concern for how it unfolds. It’s important to reduce the probability of bad futures as well as increase the probability of good ones.
Now let’s consider three of the most common objections.
1. Will the future actually be big?
The argument relies on the possibility of there being much more value in the future. But you might doubt that civilization can actually survive very long, or that we will ever live on other planets, or that people’s lives can be much better or worse than those of people today.
There’s a lot of uncertainty about these claims, so let’s look at them in a little more depth.
First, what’s not up for debate is the possibility that the future could be big. It’s a widely accepted scientific position that the Earth could remain habitable for hundreds of millions of years, and that there are hundreds of billions of planets in the galaxy. What’s more, there’s no reason to think it’s impossible that civilization could discover far more powerful technology than we have today, or that people could live far better and more satisfying lives than they do today.
Rather, what’s in doubt is the likelihood that these developments come to pass. Unfortunately there is no definitive way to estimate this likelihood. The best we can do is to weigh the arguments for and against a big future, and make our best estimates.
If you think civilization is virtually guaranteed to end in the next couple of hundred years, then the future won’t have much more value than the present. However, if you think there’s a 10% chance that civilization survives 10 million generations till the end of the Earth,5 then (in expectation) there will be one million future generations. This means the future is at least 100,000 times bigger than the present. This could happen if there’s a chance civilization reaches a stable state where the risk of extinction becomes low.
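As a sanity check, the expected-value arithmetic above can be written out explicitly. Both numbers are the article’s illustrative figures, not forecasts:

```python
# Sanity-checking the expected-value arithmetic from the text.
p_survival = 0.10                      # assumed chance civilization survives to the end of the Earth
generations_if_survive = 10_000_000    # assumed number of generations in that scenario

expected_generations = p_survival * generations_if_survive
print(expected_generations)            # about one million future generations in expectation
```

Even heavily discounting the long scenario by its low probability, the expected number of future generations dwarfs the present one.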
In general, the bigger you think the future will be, and the more likely it is to happen, the greater the value.
Further, if you’re uncertain whether the future will be big, then a top priority should be to figure out whether it will be — it would be the most important moral discovery you could make. So, even if you’re not sure you should directly act on the thesis, it might still be the most important area for research. We see this kind of research as extremely important.
2. What about discounting?
Sometimes at this point people, especially those trained in economics, mention “discounting” as a reason not to care about the long term.
When economists compare benefits in the future to benefits in the present, they typically reduce the value of the future benefits by a multiplier called the “discount factor”, which shrinks each year at the “discount rate”. A typical social discount rate might be 1% per year, which means that benefits in 100 years are only worth about 37% as much as benefits today, and benefits in 1,000 years are worth almost nothing.
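For concreteness, here is a minimal sketch of standard exponential discounting, using the 1% annual rate mentioned above:

```python
# Standard exponential discounting at a 1% annual social discount rate.

def discount_factor(years, rate):
    """Present value of one unit of benefit received `years` from now."""
    return 1 / (1 + rate) ** years

print(round(discount_factor(100, 0.01), 2))   # benefits in 100 years -> 0.37
print(discount_factor(1000, 0.01))            # benefits in 1,000 years: ~0.00005, almost nothing
```

The key feature is that the factor shrinks exponentially, so even a modest annual rate makes benefits a millennium away nearly worthless on paper.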
To understand whether this is a valid response, you need to consider why the concept of discounted benefits was invented in the first place.
There are good reasons to discount economic benefits. One reason is that if you receive money now, you can invest it, and earn a return each year. This means it’s better to receive money now rather than later. People in the future might also be wealthier, which means that money is less valuable to them.
However, these reasons don’t obviously apply to welfare — people having good lives. You can’t directly “invest” welfare today and get more welfare later, like you can with money. The same seems true for other intrinsic values, such as justice.
There are other reasons to discount welfare,6 and this is a complex debate. However, the bottom line is that almost every philosopher who has worked on the issue doesn’t think we should discount the intrinsic value of welfare — i.e. from the point of view of the universe, one person’s happiness is worth just the same amount no matter when it occurs.
Indeed, if you suppose we can discount welfare, we can easily end up with conclusions that sound absurd. For instance, a 3% discount rate would imply that the suffering of one person today was equal to the suffering of roughly seven trillion people in 1,000 years.
As Derek Parfit said:
“Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?”
If we reject the discounting of welfare and other intrinsic values, then the chance that there could be a great deal of value in the future is still important. Moreover, this doesn’t stand in tension with the economic practice of discounting monetary benefits.
3. Do we have moral obligations to future generations?
A final response is that although the future might be big, we’re not obligated to help people who don’t yet exist in the same way as we’re obligated to help people alive right now.
This objection is usually associated with a “person-affecting” view of ethics, which is sometimes summed up as the view that “ethics is about making people happy, not making happy people”. In other words, we only have moral obligations to help those who are already alive, and not to enable more people to exist with good lives.
You can see where this intuition comes from if you consider the following choice: is it better to cure one person who’s 60 years old of cancer and allow them to live to 80, or to bring one new person into existence who will live a good life for 80 years? Most people think we should help the 60-year-old, even though the new person gains four times as much good life.
However, person-affecting views suffer from a number of problems. For instance, suppose you have the option to bring into existence someone who would not otherwise have existed, whose life involves severe and constant suffering from birth until death, and who wished they had never been born.
Everyone agrees this is a bad thing to do. A naive person-affecting view, however, says that, since it involves creating a new person, this lies outside of our ethical concern, and so is neither good nor bad. So, the person-affecting view conflicts with the obvious idea that we shouldn’t create the suffering life.
Person-affecting views can avoid this conflict by positing that it’s bad to create lives filled with suffering, but it’s neither good nor bad to create happy lives. Then, it’s wrong to create the suffering-filled life, but there’s no reason to enable more happy people to exist in the future.
One issue with this is that it’s unclear why this asymmetry would exist. The bigger problem though is that this asymmetry conflicts with another common sense idea.
Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life, but if creating a happy life is neither good nor bad, then we have to conclude that both options are neither good nor bad. This implies both options are equally good, which seems bizarre.
This is a complex debate, and rejecting the person-affecting view also has counterintuitive conclusions. For instance, if you agree that it’s good to create people whose lives are more good than bad, then you’ll need to accept that we could have a better world filled with a huge number of people whose lives are just barely worth living. This is called the “repugnant conclusion”.
Putting this debate to one side, even if you take a person-affecting approach, you might still think that the future is more important than the present. This is because some people who are alive today might be able to live a very long time, and have much higher levels of welfare than they do today. Rather than create more people, your key concern should be to ensure that these possibilities are realised. So, even if you have a person-affecting view, the long-term value thesis might still hold.
What’s our position? As stated, we find the criticisms of the person-affecting view persuasive, so we don’t find it a convincing response to the long-term value thesis. However, since many people hold something like the person-affecting view, we think it deserves some weight, and that means we should act as if we have somewhat greater obligations to help someone who’s already alive compared to someone who doesn’t exist yet. (This is an application of moral uncertainty.)
Likewise, we think there might be other types of ethical reasons to have additional concern for the present. For instance, maybe the unique nature of injustice means we have extra reasons to fight great injustices being perpetrated today. (Though there may also be reasons of justice to make sure the interests of future generations aren’t ignored.)
However, these reasons to place special value on the present need to be set against the potentially far greater amount of value in the future. How to do this is an extremely difficult question, and involves unsettled questions in the study of “moral uncertainty”.
Trying to weigh this up, we think we should have greater concern for the future, though we care more about the present generation than we would if we naively weighed up the numbers.
We think the original argument survives the responses, and so, when it comes to doing good, our key concern is how the long-term unfolds.
That said, we’re still highly uncertain about these arguments. There’s a good chance we’ve missed a crucial consideration and this picture is wrong. These ideas are still new and have not been heavily studied. We’re also uncertain how to weigh the value of the future against other moral concerns given moral uncertainty.
This makes us cautious about placing overwhelming value on the future, even if that’s what the raw numbers might imply. Instead, we see making the future go well as our key, but not only, moral concern.
We also place a great amount of value on learning more about these issues to refine our priorities, and we attach importance to many other moral demands.
Can we actually influence the future?
You might be persuaded by these arguments, but believe they are irrelevant because we can’t significantly impact the future. It’s natural to think that the overall effects of our actions on the future are unknowable. Instead, one could argue, we should confine our efforts to helping people in the short-term.
However, this seems wrong. There are several ways we can impact the future:
We can speed up processes that impact the future. Our economy tends to grow every year, and this suggests that if we make society wealthier today, this wealth will compound long into the future, creating a long stream of benefits for future people.
Even more importantly, we could precipitate the end of civilization, perhaps through a nuclear war, runaway climate change, or other disasters. This would foreclose the possibility of all future value. It seems like there are things the current generation can do to increase or decrease the chance of extinction, as we cover in our problem profiles.
There might be other major, irreversible changes besides extinction that we can influence, which could either be positive or negative. For instance, if a totalitarian government came into power that couldn’t be overthrown, the badness of that government would be locked in for a very long time. If we could decrease the chance of that happening, that would be very good, and vice versa. Alternatively, genetic engineering might make it possible to fundamentally change human values, and then these values would be passed down to every future generation. This could either be very good or very bad, depending on your moral views and how it was done.
Even if you’re not sure how to help the future, your key aim could be to do research to work it out. We’re uncertain about lots of ways to help people, but that doesn’t mean we shouldn’t try. This is part of global priorities research, and there are plenty of concrete questions to investigate.
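The compounding point in the first item above can be made concrete with a toy sketch. The 2% annual growth rate here is an assumption purely for illustration, not a forecast:

```python
# Toy illustration of compounding: a one-off economic benefit today keeps
# growing with the economy. The 2% annual growth rate is assumed for
# illustration only.

def future_value(initial_benefit, years, growth_rate):
    """Value of a benefit after compounding annually for `years`."""
    return initial_benefit * (1 + growth_rate) ** years

print(round(future_value(1.0, 100, 0.02), 1))  # $1 today -> about $7.2 in a century
```

A small boost today can translate into a much larger stream of wealth generations from now, which is why speed-ups are one candidate way of helping the future.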
What’s the practical upshot of these possibilities?
If you think there’s a huge amount of value in the future, and there are ways we can affect it with not-ridiculously-small probability, then these actions will be the highest-impact ones we can take. Nick Beckstead calls this the “future shaping argument”.
In fact, it turns out that many of the ways to help future generations are also highly neglected. This is exactly what you’d expect — the present generation has a much greater interest in helping itself rather than improving the future. And this means there are lots of effective ways to help.
What are the best ways to help future generations right now? This is a topic for another article, but here is a very quick outline of our views.
One area that doesn’t seem promising is speeding up progress. First, Beckstead argues in Chapter 3 of his thesis that from a long-term perspective, what matters most is where we end up, not how fast we get there, so speed-ups are less important than other types of change. Second, efforts to speed up progress are also far less neglected than other ways to help the future — the world spends about $1 trillion a year on R&D, which is what you’d expect, since these discoveries also benefit the present generation.
Instead, we should focus on “path changes” — actions that have the potential to shape the future over a very long timescale. The most pressing of these right now seems to be reducing extinction risks.
There is a small but real possibility that civilization ends in the next century; and this would not only be terrible for the present generation, it would permanently remove the possibility of a good future. We need to get these risks down before we can focus on other ways of improving the future. What’s more, there are many concrete, highly neglected proposals to reduce these risks.
However, we’re unsure about all of these suggestions, so our other focus is on global priorities research to identify the best ways to help the future. This includes the question of whether there might be positive path changes to promote (sometimes called “existential hope”), as well as what negative risks are most important to avoid.
We also want to look for crucial considerations that might overturn the future shaping argument. We think this research could have a significant impact on our priorities, and there’s also only a handful of researchers currently doing it.
The stakes facing our generation are much greater than they first seem. Our actions might have the potential to bring about a far better world, or to cut civilization short. Our key concern should be to ensure the future goes well.
Want to focus your career on the long-run future?
If you want to work on any issues essential to ensuring the future goes well, such as controlling nuclear weapons or shaping the development of artificial intelligence or biotechnology, you can speak to our team one-on-one.
We’ve helped hundreds of people choose an area to focus on, make connections, and then find jobs and funding in these areas. If you’re already in one of these areas, we can help you maximise your impact within it.
“The Milky Way Contains at Least 100 Billion Planets According to Survey”, Hubblesite, archived link, retrieved 22 October 2017.↩
Though some forms of justice and virtue focused ethics might not hold that we ought to maximise justice or virtue; instead, for instance, it may be a matter of satisfying a set of conditions.↩
Sometimes people object that this is a “Pascal’s wager” type argument, but a 10% chance is typically larger than is used in these arguments, and the supposed future value is still finite rather than infinite.↩
For instance, we might discount welfare due to uncertainty about whether it will happen, but we’ve already taken uncertainty about the future into account when estimating its future value.
We might also discount welfare in practice, since having happy people now might be thought to produce more happy people in the future.↩