I think there is something that’s often extremely helpful and neglected, which is to try and find a decision boundary. […] When I think about transformative science, I think about the fact that a lot of science comes out of great scientists like Einstein or Turing. What if at some point AI was making it like there were more such scientists? […] What chance would you need to give that to be interested in AI or to want to work on AI? Is that a 1% chance in 10 years? Is that like a 10% chance in 10 years? What is the threshold?

Danny Hernandez

Companies use about 300,000 times more computation to train the best AI systems today than they did in 2012, and algorithmic innovations have also made them 25 times more efficient at the same tasks.

These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today’s episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy.
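
As a rough sanity check on what growth factors like these imply as rates, here's a back-of-the-envelope sketch in Python. This is my own illustration rather than anything from the papers, and the ~6-year window is an assumption standing in for the roughly 2012 to 2018 period the compute trend covers:

```python
import math

def doubling_time_months(growth_factor: float, years: float) -> float:
    """Months per doubling implied by a total growth factor over a period."""
    doublings = math.log2(growth_factor)  # 300,000x is about 18.2 doublings
    return years * 12 / doublings

# Training compute for the largest runs: ~300,000x growth over ~6 years.
print(f"Compute: one doubling every {doubling_time_months(300_000, 6):.1f} months")
# -> roughly 4 months per doubling. OpenAI's "AI and Compute" reports ~3.4
#    months; the difference comes down to the exact start and end dates used.

# Algorithmic efficiency: ~25x improvement at the same tasks over a similar window.
print(f"Efficiency: one doubling every {doubling_time_months(25, 6):.1f} months")
# -> roughly 16 months per doubling.
```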

Danny and I talk about how to understand his team’s results and what they mean (and don’t mean) for how we should think about progress in AI going forward.

Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field.

If this research sounds appealing, you might be interested in applying to join OpenAI’s Foresight team — they’re currently hiring research engineers.

In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including:

  • The question of which experts to believe
  • Danny’s journey to working at OpenAI
  • The usefulness of “decision boundaries”
  • The importance of Moore’s law for people who care about the long-term future
  • What OpenAI’s Foresight Team’s findings might imply for policy
  • The question of whether progress in the performance of AI systems is linear
  • The safety teams at OpenAI and who they’re looking to hire
  • One idea for finding someone to guide your learning
  • The importance of hardware expertise for making a positive impact

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Highlights

The question of which experts to believe

You can think about understanding the different experts as model uncertainty. You don't know which experts are right in the world. If you could just choose which experts to listen to, as a leader, that would solve all of your problems. If you could answer "Which experts do I listen to at different times?", you'd have solved your entire problem of leadership. And so evaluating experts is this critical problem. And if you can explain their arguments, then you've kind of internalized them and you've avoided this failure mode: you could imagine that there were some experts, they made some arguments to you, you couldn't really explain the arguments back to them, meaning you didn't really understand them, and so later you'll have regret, because you'll make a decision that you wouldn't have made if you'd actually understood their arguments.

Moore's law

The long-term trend is kind of Moore's law, whatever happens there. And so that's what I think about more often, and what longtermists should be more interested in. If you're a longtermist, then Moore's law is really big over the next 20 to 30 years, whatever happens. Even if its exponent goes down some, you really care what the new exponent is, or if there's no exponent.

And so it could be that when you just zoom out on the history of humanity a hundred years from now, our current thing is an aberration and Moore's law just goes back to its old speed, or speeds up, or whatever. But if you think about what's the most interesting compute trend, it's definitely Moore's law; it's the compute trend most interesting to longtermists, and much of what's happened in compute follows from it. If you were in the sixties and you knew Moore's law was going to go on for a long time, you were ready to predict the internet, you were ready to predict smartphones, and you were ready to make 30- and 40-year-long investments in basic science from just knowing this one fact. And you could still be in that position today, if you think you know what's going to happen with Moore's law.
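
To see why the exponent matters so much, here's a minimal sketch. The doubling times below are hypothetical illustrations of my own, not forecasts from the interview:

```python
# Hypothetical doubling times for price-performance; none of these are forecasts.
scenarios = {
    "2-year doubling (classic Moore's law pace)": 2,
    "4-year doubling (slowed exponent)": 4,
    "8-year doubling (badly slowed)": 8,
}

YEARS = 30
for label, doubling_time in scenarios.items():
    growth = 2 ** (YEARS / doubling_time)
    print(f"{label}: ~{growth:,.0f}x over {YEARS} years")

# 2-year doubling: ~32,768x
# 4-year doubling: ~181x
# 8-year doubling: ~13x
# The same functional form with a different exponent gives answers that differ
# by orders of magnitude, which is why the new exponent (or the absence of one)
# dominates any 20 to 30 year picture of what computing makes possible.
```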

The foresight team at OpenAI

The Foresight team tries to understand the science underlying machine learning and macro trends in ML, and you could think of it as trying to inform decision-making around those. It should inform research agendas. It's informative to policymakers. It's informative to people who are thinking about working on AI or not. It's informative to people in industry. But you could also think of it as just trying to be really rigorous, which is another way of thinking about it. It's mostly ex-physicists, and physicists just want to understand things.

Hardware expertise

I think that hardware expertise is worth quite a bit. […] So, for instance, the kind of person who I'd be most interested in seeing try to make good forecasts about Moore's law and other trends is somebody who has been building chips for a while, or has worked in building chips for a while. I think there aren't that many of those people. I haven't seen somebody from that background working in policy yet, but my guess is that they could be very useful at some point, and that it'd be reasonable to start now with that kind of thing in mind. But that's pretty speculative. I know less about that than the forecasting type of thing. I think hardware forecasting is very interesting.

Getting precise about your beliefs

If you believe AI progress is fast, what would progress look like that would convince you it's slow? Paint a picture of that five years from now: what does slow progress look like to you, such that you'd say, "Oh yeah, progress is actually slow"? And what could have happened that would convince you it's actually fast? You can make what would update you clear to yourself and others, and for big decisions this is generally worthwhile.
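
One way to put this into practice is the decision-boundary exercise from the quote at the top of the page: find the probability at which your decision would flip. Here's a toy sketch; every number in it is invented for illustration:

```python
# Toy decision boundary: work on AI iff expected impact beats your best
# alternative. Both values below are invented placeholders, not real estimates.
impact_if_transformative = 1000.0  # your impact if AI is transformative and you helped
impact_of_alternative = 30.0       # your impact in the best non-AI option

# The boundary is the probability p at which the two options are equal:
# p * impact_if_transformative == impact_of_alternative.
threshold = impact_of_alternative / impact_if_transformative
print(f"Decision boundary: P(transformative AI in 10 years) = {threshold:.0%}")
# -> 3%. Above that probability you'd work on AI; below it you wouldn't.
# Debating whether the true probability sits above or below the line is
# usually more decision-relevant than debating its exact value.
```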

Articles, books, and other media discussed in the show

Blog posts and papers from OpenAI

  • AI and Compute
  • AI and Efficiency


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing podcast@80000hours.org.

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.