#90 – Ajeya Cotra on worldview diversification and how big the future could be

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
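For readers who want to see the arithmetic, here is a minimal sketch of one way to run the numbers, treating the "you woke up at all" step the way the so-called self-indication assumption does (weighting each world by how many observers like you it contains). The 50/50 coin, 10 boxes, and 10 billion boxes are taken straight from the setup above.

```python
# One way to run the Bayesian arithmetic for the box thought experiment.
# Assumes a 50/50 prior on the coin, 10 boxes on heads, 10 billion on tails.

N_HEADS, N_TAILS = 10, 10**10
w_heads = w_tails = 0.5  # prior weights on heads and tails

# Step 1: the "you woke up at all" update. Weight each world by the number
# of observers having an experience just like yours.
w_heads *= N_HEADS
w_tails *= N_TAILS
print(w_tails / (w_heads + w_tails))  # ~0.999999999: bet on tails

# Step 2: you see the number 3 on your box. Seeing that exact number has
# probability 1/10 on heads, but only 1 in 10 billion on tails.
w_heads *= 1 / N_HEADS
w_tails *= 1 / N_TAILS
print(w_tails / (w_heads + w_tails))  # 0.5: the two updates cancel exactly
```

On this way of counting, the two updates cancel and you are back to a coin flip. Other schools of anthropic reasoning reject the first update, and for them seeing a low number like 3 counts heavily in favour of the small world. That disagreement is what gives the doomsday argument below its bite.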

In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.
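The arithmetic behind "very low indeed" is short enough to write out, using the round numbers from the paragraph above (100 billion people so far, 1,000 trillion in the big future). This is only illustrative, not a precise demographic estimate.

```python
# Rough numbers behind the "strangely early" observation (illustrative only).
humans_so_far = 100e9          # roughly 100 billion people have ever lived
humans_big_future = 1_000e12   # 1,000 trillion people if the huge future happens

# Chance that a person drawn at random from the big future lands among
# the first 100 billion:
p_early_if_big = humans_so_far / humans_big_future
print(p_early_if_big)  # 0.0001, i.e. 1 in 10,000

# If we wipe ourselves out soon, everyone who ever lives is among the first
# 100 billion, so the same observation has probability 1.
p_early_if_small = 1.0

# On doomsday-argument reasoning, our early birth rank is therefore a
# roughly 10,000-to-1 likelihood ratio in favour of the short future.
print(p_early_if_small / p_early_if_big)  # 10000.0
```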

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

The ‘doomsday argument’ has many critics, and it may simply not work as a piece of logic. Figuring out whether it does is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces in striking a balance between taking big ideas seriously and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

  • Which worldviews Open Phil finds most plausible, and how it balances them
  • Which worldviews Ajeya doesn’t embrace but almost does
  • How hard it is to get to other solar systems
  • The famous ‘simulation argument’
  • When transformative AI might actually arrive
  • The biggest challenges involved in working on big research reports
  • What it’s like working at Open Phil
  • And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

Highlights

Worldview diversification

So, Open Phil currently splits its giving across three big buckets, or worldviews… The longtermism versus near-termism split is where the longtermism camp is trying to lean into the implication of total utilitarianism — that because it’s good to cause there to be more people living lives worth living than there were before, you should be focused on existential risk reduction, to preserve this large long-term future where most of the moral value is, on the total utilitarian view.

And then the near-termist perspective, I wouldn’t say it’s a perspective that doesn’t care about the future, or has some sort of hard-line commitment to “It’s only the people that exist today that matter, and we count it as zero if we do anything that helps the future”. I think it’s a little bit more like this perspective is sceptical of going down that rabbit hole that gets you to “The only thing that matters is existential risk reduction”, and it’s sort of regressing back to normality a little bit.

This might come from scepticism of a total-view population ethics… it might come from scepticism about the tractability of trying to affect existential risk, or about trying to do things that don’t have great feedback loops. So there’s this tangle of considerations that make you want to go “Okay, let me take a step back and let me try and be quantitative, and rigorous, and broadly utilitarian about pursuing a broader set of ends that are more recognised as charity or doing good for others, and that isn’t super strongly privileging this one philosophical argument”. That’s how I put that split, the longtermism versus near-termism split.

And then within the near-termism camp, there’s a very analogous question of: are we inclusive of animals or not? The animal-inclusive view — similar to the longtermist view — says, okay, there are many more animals in this world than there are humans, and many of them are facing conditions much worse than the conditions faced by any human, and we could potentially help them very cheaply. So even if you don’t think it’s very likely that animals have moral value roughly comparable to humans, then as long as you think they’re 1% as valuable, or 10% as valuable, or even 0.001% as valuable, the vast majority of your efforts on this near-termist worldview should be focused on helping animals.

And so this is another instance of this dynamic where the animal-inclusive worldview does care about humans but sort of ends up focusing all of its energy on this larger population of beneficiaries. It’s this same thing where there’s a claim that there’s more at stake in the animal-inclusive worldview than in the human-centric worldview, and then a further claim that there’s more at stake in the longtermist worldview than in the near-termist worldview.

And so, essentially, there are two reasonable-seeming things to do. One is to allocate according to your credence in each of these three worldviews, and potentially other worldviews. The other is to try to find some way to treat the things that each of these worldviews cares about as comparable, then multiply through to find the expected amount of moral stuff at stake on each worldview, and then allocate all your money to the worldview that has the most stuff at stake. In this case, most reasonable ways of doing that would say it’s the longtermist worldview.
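To make the contrast concrete, here is a toy sketch of the two allocation rules. The credences and "stakes" figures are invented purely for illustration; they are not Open Phil's actual numbers or methodology.

```python
# Toy illustration of the two allocation rules described above.
# All credences and "stakes" figures below are made up for the example.

budget = 100.0  # arbitrary units of money

worldviews = {
    # name: (credence in the worldview, hypothetical moral stakes if it's right)
    "longtermist": (0.4, 1e6),
    "near-termist, animal-inclusive": (0.3, 1e3),
    "near-termist, human-centric": (0.3, 1e0),
}

# Rule 1: split the budget in proportion to your credence in each worldview.
credence_split = {name: budget * cred for name, (cred, _) in worldviews.items()}

# Rule 2: treat the stakes as comparable, compute expected stakes
# (credence times stakes), and give everything to whichever worldview wins.
expected_stakes = {name: cred * stakes for name, (cred, stakes) in worldviews.items()}
winner = max(expected_stakes, key=expected_stakes.get)
winner_take_all = {name: (budget if name == winner else 0.0) for name in worldviews}

print(credence_split)    # a 40 / 30 / 30 split
print(winner_take_all)   # the whole budget goes to "longtermist"
```

The second rule is effectively winner-take-all: as long as one worldview's expected stakes dwarf the others', it absorbs the entire budget no matter how uncertain you are about it.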

Effective size of the far future

So the basic astronomical waste argument (Astronomical Waste by Nick Bostrom is the seminal paper of this longtermist worldview) essentially says that there’s a very good chance that we could colonise space and create a society that’s not only very large relative to what could be sustained on Earth, but also very robust, with a very low risk of extinction once you cross that barrier.

We actually think that’s a pretty important part of the case for longtermism. So, if we were imagining longtermism as just living in the world, where humanity will continue on Earth and things will happen, and it’ll be kind of like it is now, but it might last for a long time, so there may be many future generations… We’re not convinced that’s enough to get you to reducing existential risk as your primary priority.

Because in a world where there isn’t a period where we’re much more technologically mature, and much more able to defend against existential risks, the impact of reducing existential risk today is much more washed out, and doesn’t necessarily echo through all of the future generations, even if there are many of them on Earth.

Why Ajeya wrote her AI timelines report

In 2016, Holden wrote a blog post saying that, based on discussions with technical advisors who are AI experts — and who are also within the EA community and used to thinking about things from an EA perspective — he felt it was reasonable to expect a 10% probability of transformative AI within 20 years. That was in 2016, so that would have been by 2036. And that was an important plank in the case for making potential risks from advanced AI not only a focus area, but also a focus area that got a particular amount of attention from senior generalist staff.

And then in 2018/early 2019, we were in the middle of this question of how to expand to peak giving — consistent with Cari and Dustin’s goal of giving away their fortune within their lifetimes — and we wanted to know which broad worldviews, and which focus areas within those worldviews, would see most of that expansion. And so the question became more live again, and more something we wanted to really nail down, as opposed to relying on deference and the earlier conversations Holden had.

And so digging into AI timelines felt like basically the most urgent question on a list of empirical questions that could impact where the budget went.

Biggest challenges with writing big reports

One thing that’s really tough is that academic fields that have been around for a while have an intuition or an aesthetic that they pass on to new members about what counts as a unit of publishable work, sometimes called a ‘publon’. What kind of result is big enough? What kind of argument is compelling enough and complete enough that you can package it into a paper and publish it? And I think with the work that we’re trying to do — partly because it’s new, and partly because of the nature of the work itself — it’s much less clear what a publishable unit is, or when you’re done. And you almost always find yourself in a situation where there’s a lot more research you could do than you naively assumed going in. And it’s not always a bad thing.

It’s not always that you’re being inefficient or going down rabbit holes if you choose to do that research and just end up doing a much bigger project than you thought you were going to do. I think this was the case with all of the timelines work that we did at Open Phil, my report and then other reports. It was always the case that we came in thinking I would do a simpler evaluation of arguments made by our technical advisors, but then complications came up. And then it just became a much longer project. And I don’t regret most of that. So it’s not as simple as saying, just really force yourself to guess at the outset how much time you want to spend on it and just spend that time. But at the same time, there definitely are rabbit holes, and there definitely are things you can do that eat up a bunch of time without giving you much epistemic value. So standards for that seemed like a big, difficult issue with this work.

What it's like working at Open Phil

We started off, I would say, on a trajectory of being much more collaborative — and then COVID happened. The recent wave of hiring was a lot of generalist hires, and I think that now there’s more of a critical mass of generalists at Open Phil than there was before. Before, I think there were only a few; now there are more like 10-ish people. And it’s nice because there’s a lot more fluidity in what those people work on. And so there are a lot more opportunities for casual one-off collaboration than there are between the program staff with each other, or between the generalists and the program staff.

So a lot of the feeling of collaboration and teaminess and collegiality is partly driven by whether each part of this super siloed organisation has its own critical mass. And I feel like the answer is no for most parts of the organisation, but recently the generalist group — both the longtermist and near-termist sides together — has more people, more opportunities for ideas to bounce around, and more collaborations that make sense than before. And I’m hoping that as we get bigger, and as each part gets bigger, that’ll be more and more true.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.