The emerging school of patient longtermism

Only one century can be the most important one of all the centuries to come.

One of the parts of effective altruism I’ve found most intellectually interesting recently is ‘patient longtermism’.

This is a school of thinking that takes longtermism seriously, but combines it with the idea that we’re not facing an unusually urgent threat to the future, or any other unusually pressing opportunity to have a long-term impact. We may still be facing threats to the future, but the idea is that they’re not more pressing today than the threats we’ll face down the line. (I discuss three other forms of longtermism here.)

Broadly, patient longtermists argue that instead of focusing on reducing specific existential risks or working on AI alignment and so on today, we should expect that the crucial moment for longtermists to act lies in the future, and our main task today should be to prepare for that time.

It’s not a new idea (Benjamin Franklin was arguably a patient longtermist, and Robin Hanson was writing about it by 2011), but there has been some interesting recent research.

Three of the most prominent recent arguments relevant to patient longtermism come from three researchers in Oxford, all of whom have now been featured on our podcast (though these guests don’t all necessarily endorse patient longtermism overall):

  1. The argument that we’re not living at the most influential time ever (aka, the rejection of the ‘hinge of history hypothesis’) by Will MacAskill, written here and discussed on our podcast.

  2. The argument that we should focus on saving and growing our resources to spend in the future rather than acting now. Phil Trammell has written this up in a much more developed and quantitative way than previous efforts, and he comes down more on the side of patience. You can read the paper or hear our podcast with him. (A rough numerical sketch of the compounding logic appears after this list.)

  3. Ben Garfinkel’s arguments pushing back against the Bostrom-Yudkowsky view of AI. You can see a collection of Ben’s writings here and our interview with him. The Bostrom-Yudkowsky view is the most prominent argument that AI is not only a top priority, but also urgent to address in the next few decades. That makes it, in practice, a common ‘urgent longtermist’ argument. (Though Ben still thinks we should expand the field of AI safety.)
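
To get a rough feel for the compounding argument for patience, here is a minimal sketch. It only shows the arithmetic of invested resources growing over time; the 5% real return and 100-year horizon are illustrative assumptions of mine, not figures from Trammell’s paper.

```python
# Illustrative sketch: the core arithmetic behind 'invest now, give later'.
# The assumed numbers (5% real return, 100 years) are for illustration only.

def growth_factor(real_return: float, years: int) -> float:
    """Factor by which invested resources grow over `years` at a fixed real return."""
    return (1 + real_return) ** years

print(round(growth_factor(0.05, 100), 1))  # ~131.5x over a century at 5% real return
```

Of course, whether investing beats spending now depends on how quickly the best opportunities to do good shrink or disappear over time, which is exactly the trade-off Trammell’s paper tries to quantify.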

Taking a patient longtermist view would imply that the most pressing career and donation opportunities involve the following:

  • Global priorities research – identifying future issues and improving our effectiveness at dealing with them.

  • Building a long-lasting and steadily growing movement that will tackle these issues in the future. This could be the effective altruism movement, but people might also look to build movements around other key issues (e.g. a movement for the political representation of future generations).

  • Saving money that future longtermists can use, as Phil Trammell discusses. There is now an attempt to set up a fund to make this easier.

  • Investing in career capital that will allow you to achieve more on any of the above priorities over the course of your career.

The three researchers I list above are still unsure how much weight to give patient longtermism overall, and everyone who takes it seriously still thinks we should spend some of our resources today on whichever object-level issues seem most pressing for longtermists. They usually converge on AI safety and other efforts to reduce existential risks or risk factors.

Moreover, addressing challenges in the here and now likely helps to build an effective movement, so patient longtermists might want to see significant investment in object-level issues as a means to that end. It’s not even obvious that patient longtermists want to see much less investment in object-level challenges than many urgent longtermists do.

Furthermore, most people are not purely patient or purely urgent longtermists – rather, they put some credence in both schools of thinking, and where they land is a matter of degree. Everyone agrees that the ideal longtermist portfolio would include some of each perspective.

All this said, I’m excited to see more research into the arguments for patient longtermism and what they might imply in practice.

If you’d like to see the alternative take — that the present day is an especially important time — you could read The Precipice: Existential Risk and the Future of Humanity by Toby Ord, who works at the University of Oxford alongside the three researchers mentioned above.


Further reading