#47 – Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

After dropping out of his ML PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to help shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.
He decided to apply to OpenAI, spent 6 weeks preparing for the interview, and actually landed the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others.
On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and who left her computational neuroscience PhD to become a research engineer at Google Brain. Their advice for those interested in this career path: just dive in. If you’re trying to get good at something, start doing that thing, and figure out along the way what’s necessary to do it well.
To go with this episode, Catherine has even written a simple step-by-step guide to help others copy her and Daniel’s success.
Daniel thinks the key for him was nailing the job interview.
OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he’d be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working.
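Reproducing a paper usually means re-implementing its core update rule from scratch and debugging until the numbers match. As a purely illustrative sketch (not Daniel’s actual code, and in plain NumPy rather than TensorFlow for brevity), here is the basic REINFORCE policy-gradient update — a building block of many papers on such a reading list — learning a two-armed bandit:

```python
import numpy as np

# Illustrative only: REINFORCE with a softmax policy on a two-armed
# Bernoulli bandit. Arm 1 pays off more often, so the learned policy
# should come to prefer it.
rng = np.random.default_rng(0)
true_rewards = [0.2, 0.8]   # per-arm success probabilities (assumed)
prefs = np.zeros(2)         # learnable action preferences (logits)
lr = 0.1                    # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(prefs)
    action = rng.choice(2, p=probs)
    reward = float(rng.random() < true_rewards[action])  # 0/1 reward
    # REINFORCE: prefs += lr * reward * grad(log pi(action));
    # for a softmax policy, grad(log pi(a)) = one_hot(a) - probs.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    prefs += lr * reward * grad_log_pi

print(softmax(prefs)[1])  # probability of choosing the better arm
```

The real interview-prep versions of this — full deep RL pipelines with neural network policies, environments, and careful tuning — are far larger, but the debugging loop (implement the update, run, compare against reported results) is the same.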
Daniel emphasizes that the most important thing was to practice exactly those things that he knew he needed to be able to do. He also received an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.
Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they’re right, it could greatly increase our ability to quickly get new people into ML roles in which they can make a difference.
Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.
Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover:
- What is the field of AI safety? How could your projects contribute?
- What are OpenAI and Google Brain doing?
- Why would one decide to work on AI?
- The pros and cons of ML PhDs
- Do you learn more on the job, or while doing a PhD?
- Why did Daniel think OpenAI had the best approach, and what did he mean by that?
- Controversial issues within ML
- What are some of the problems that are ready for software engineers?
- What’s required to be a good ML engineer? Is replicating papers a good way of determining suitability?
- What fraction of software developers could make similar transitions?
- How in-demand are research engineers?
- The development of Dota 2 bots
- What’s the organisational structure of ML groups? Are there similarities to an academic lab?
- The fluidity of roles in ML
- Do research scientists have more influence on the vision of an org?
- What’s the value of working in orgs not specifically focused on safety?
- Has learning more made you more or less worried about the future?
- The value of AI policy work
- Advice for people considering this career path
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours Podcast is produced by Keiran Harris.