Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

…there’s two parts to the problem. The first is calling someone’s attention to a place. I think that’s the harder part by far. You can’t just bury a thing, because hundreds and millions of years is long enough that the surface of the earth is no longer the surface of the earth…

Paul Christiano

Imagine that, one day, humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?

In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are.

We could tell them hard-won lessons from history; mention some research questions we wish we’d started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.

But, as Christiano points out, even if we could satisfactorily figure out what we’d like to be able to tell our ancestors, that’s just the first challenge. We’d need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth’s surface quickly gets buried far underground.

But even if we figure out a satisfactory message, and a way to ensure it’s found, a civilization this far in the future won’t speak any language like our own. And being another species, they presumably won’t share as many fundamental concepts with us as humans from 1700 would. If we knew a way to leave them thousands of books and pictures in a material that wouldn’t break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?

That’s just one of many playful questions discussed in today’s episode with Christiano — a frequent writer who’s willing to brave questions that others find too strange or hard to grapple with.

We also talk about why divesting a little bit from harmful companies might be more useful than I’d been thinking, whether creatine might make us a bit smarter, and whether carbon-dioxide-filled conference rooms make us a lot stupider.

Finally, we get a big update on progress in machine learning and efforts to make sure it’s reliably aligned with our goals, which is Paul’s main research project. He responds to the views that DeepMind’s Pushmeet Kohli espoused in a previous episode, and we discuss whether we’d be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.

Some other issues that come up along the way include:

  • Are there any supplements people can take that make them think better?
  • What implications do our views on meta-ethics have for aligning AI with our goals?
  • Is there much of a risk that the future will contain anything optimised for causing harm?
  • An outtake about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Highlights

My overall picture of alignment has changed a ton since six years ago. I would say that’s basically because six years ago, I reasoned incorrectly about lots of things. It’s a complicated area. I had a bunch of conclusions I reached. Lots of the conclusions were wrong. That was a mistake. Maybe an example of a salient update is that I used to think you really need to have an AI system that understands exactly what humans want over the very long term.

I think my perspective shifted more to something maybe more like a commonsensical perspective of, if you have a system which sort of respects short-term human preferences well enough, then you can retain this human ability to course correct down the line. You don’t need to appreciate the full complexity of what humans want, you mostly just need to have a sufficiently good understanding of what we mean by this course correction, or remaining in control, or remaining informed about the situation.

If you imagine the first time that humans could have discovered a message sent by a previous civilization, it would have been – I mean it depends a little bit on how you’re able to work this out, but probably at least a hundred years ago. At that point, the message might’ve been sent from a civilization which was much more technologically sophisticated than they were, and which had experienced the entire arc of civilization followed by extinction.

At a minimum, it seems like you could really change the path of their technological development by selectively trying to spell out for them or show them how to achieve certain goals. You could also attempt, although it seems a little bit more speculative, to help set them on a better course and be like, “Really, you should be concerned about killing everyone.” It’s like, “Here’s some guidance on how to set up institutions so they don’t kill everyone.”

I’m very concerned about AI alignment, so I’d be very interested in as much as possible being like, “Here’s a thing, which upon deliberation we thought was a problem. You probably aren’t thinking about it now, but FYI, be aware.” I do think that would put a community of people working on that problem and that future civilization into a qualitatively different place.

It’s very hard to figure out what the impact would be had we stumbled across these very detailed messages from the past civilization. I do think it could have a huge technological effect on the trajectory of development, and also reasonably likely have a reasonable effect either on deliberation and decisions about how to organize yourselves or on other intellectual projects.

I think ethical consumption is actually a really good comparison point for divestment. Where you could say, “I want to consume fewer animal products in order to decrease the number of animals that get produced.” And there you have a very similar discussion about what the relative elasticities are like. One way you could think about it is if you decrease demand by 1%, you decrease the labor force by 1% and you decrease the availability of capital by 1%. If you did all of those things then you would kind of decrease the total amount produced by 1%, roughly, under some assumptions about how natural resources work and so on.

The credit for that 1% decrease is somehow divided up across the various factors on the supply side and demand side and the elasticities determine how it is divided up. I think it’s not like 100% consumption or like 100% of labor, I think all of those factors are participating to a nontrivial extent.

I haven’t done this analysis really carefully, and I think it would be a really interesting thing to do and would be good motivation if I wanted to put together the animal welfare divestment fund. I think under pretty plausible assumptions you’re getting a lot more bang for your buck from the divestment than from the consumption choices. Probably the divestment thing would be relatively small compared to your total consumption pattern, so it wouldn’t replace your ethical consumption choices. But when you would have bought one dollar of animal agriculture companies and instead you sell $10, I think stuff like that could be justified if you thought that ethical consumption was a good thing.
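The elasticity reasoning above can be made concrete with the textbook competitive-market result: when demand shifts in by one unit, equilibrium quantity falls by a fraction determined by the relative elasticities of supply and demand. This is a minimal illustrative sketch, not an analysis from the episode; the function name and the numbers plugged in are purely hypothetical.

```python
# Sketch of how elasticities split the credit for a marginal demand
# (or supply-side) reduction in a simple competitive-market model.
# A 1-unit inward demand shift reduces equilibrium quantity by
# e_s / (e_s + e_d), where e_s and e_d are the supply and demand
# elasticities, both taken as positive magnitudes.

def quantity_effect(shift: float, e_supply: float, e_demand: float) -> float:
    """Change in equilibrium quantity caused by a marginal demand shift."""
    return shift * e_supply / (e_supply + e_demand)

# Hypothetical numbers: if supply is twice as elastic as demand,
# about two-thirds of a $1 demand reduction shows up as reduced output;
# the rest is absorbed by a price change.
print(quantity_effect(1.0, e_supply=2.0, e_demand=1.0))  # ≈ 0.667
```

The same formula with the roles of the elasticities swapped applies to a supply-side intervention such as divestment reducing the availability of capital, which is why the relative elasticities determine how the 1% decrease is divided up across the factors.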

Related episodes

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.