The most overlooked roles in AI safety

[Image: A curving stairway leads underground. © Villy Fink Isaksen, Wikimedia Commons, CC BY-SA 3.0]

We recently spoke to Ryan Kidd, Co-Executive Director at MATS, a programme that trains researchers in AI safety. He shared with us some important trends in the technical AI safety ecosystem — and his thoughts line up with what we’re hearing from other leaders in this space. If you’re wondering which roles are hard to fill and where you might be able to contribute, read on!

  • Research management seems to be a substantial bottleneck: These roles can be hard to fill, as they require some familiarity with AI safety research alongside strong interpersonal skills and management experience. Plus, impact-driven people who are interested in AI safety generally want to be researchers themselves rather than manage the research of others! Crucially, you often don't need to be a great researcher to be a great research manager: people with experience as project managers, people managers, and executive coaches can all excel in the role.

  • We lack executive talent: The technical AI safety field could really benefit from more people with backgrounds in strategy, management, and operations. If you have experience managing and growing a team of 30+ people, you could make a big difference at a top-tier AI safety organisation, even if you don’t have a lot of direct experience with AI.

  • We lack founders, field-builders, and communicators: There's substantial room to start new organisations and grow the ecosystem, and plenty of funding available, especially in the for-profit AI interpretability and security space. My work on the Job Board also benefits when people start new organisations: they create new roles we can match our users to!

  • We need more mid-career professionals: As more work is delegated to AI, we’ll become increasingly reliant on experienced managers who can oversee AI-generated outputs, train others to use AI tools, and coordinate teams of humans and AIs.

  • We need people excited about ‘support’ roles: Working directly on the most prominent problems can seem more appealing, which is exactly why roles that multiply the impact of others (e.g. operations and management roles) are neglected despite being very impactful. And, speaking as somebody whose job is to help others get jobs, I find this kind of work can be quite exciting!

I’d encourage you to think about whether you could contribute to the technical AI safety field in any of these ways. You don’t have to be a perfect fit to make a big difference — especially when organisations are bottlenecked on hiring.

For roles like these, check out the operations and strategy tags on our job board, and pay particular attention to our highlighted roles.

This blog post was first released to our newsletter subscribers.

Join over 500,000 newsletter subscribers who get content like this in their inboxes weekly — and we’ll also mail you a free book!

Learn more: