Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

Just two years ago OpenAI didn’t exist. It’s now home to one of the most elite groups of machine learning researchers in the world, trying to make an AI that’s smarter than humans, with $1 billion at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business but a nonprofit, founded by Elon Musk and Sam Altman, among others, to ensure the benefits of AI are distributed broadly across society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

  • OpenAI’s latest plans and research progress.
  • His paper Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can act dangerously in ways their designers don’t intend – failure modes OpenAI has to work to avoid.
  • How listeners can best go about pursuing a career in machine learning and AI development themselves.

Highlights

OpenAI is focused on reinforcement learning and learning across many environments, rather than just supervised machine learning. The team’s outlook is quite similar to Google DeepMind’s, but OpenAI is a smaller team – they’re slower to hire.

OpenAI is not about open source so much as about distributing the benefits of AI broadly. AI gives you enormous leverage to improve the world, because many currently unsolvable problems will become much more straightforward with superhuman AI.

Most people at OpenAI think safety is worth considering now. They already confront how AIs can act in ways their creators don’t foresee or desire, and these problems might get worse as AI becomes more powerful. Their third safety researcher is due to start very soon, and they’re hiring – get in touch if you’d like to join. OpenAI is closely cooperating with DeepMind to ensure safety research is shared and there’s no race to deploy technologies before they’re shown to be safe.

Dario thinks we should learn how to make AI safe for the future by working on actionable problems with feedback loops today – solving practical problems in ways that can be extended as ML develops.

The most natural way to get into working on AI and AI safety is by doing an ML PhD. You can also switch in from physics, computer science and so on – Dario’s own background is in physics.

To test your fit for working in AI safety or ML, just try implementing lots of models very quickly. Find an ML model from a recent paper, implement it, and try to get it working fast. If you can do it and enjoy it, AI research might be enjoyable for you. Many people with a quantitative background can try this at home. Even if you decide not to work on safety, the exit opportunities from ML are excellent.
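To make that exercise concrete, here’s a minimal sketch of the kind of quick re-implementation Dario describes – a tiny convolutional classifier trained on MNIST. The framework (PyTorch and torchvision), the model, and every name in the code are illustrative assumptions rather than anything prescribed in the episode; the point is simply to practise turning a paper’s description of a model into working code in a single sitting.

```python
# Minimal sketch of a quick re-implementation exercise.
# Assumes PyTorch and torchvision are installed; any framework and dataset would do.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    """A tiny convolutional classifier, the sort of model you might rebuild from a paper."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(start_dim=1))

def main():
    # Download MNIST and build a simple training loop.
    train_set = datasets.MNIST("data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=128, shuffle=True)

    model = SmallCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    model.train()
    for epoch in range(1):  # one pass is enough to see the loss fall
        for images, labels in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: final batch loss {loss.item():.3f}")

if __name__ == "__main__":
    main()
```

If getting something like this running – and then extending it, swapping in a different architecture, or reproducing a number from the paper – feels satisfying rather than tedious, that’s a good sign about your fit.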

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.