Survey results: what AI safety orgs want in a hire

Hi everyone!
A few months ago, Benjamin Todd and I surveyed top figures in AI safety about the hiring needs of their organisations, including:
- What sort of people they’re hiring in the coming years
- How people can skill up to get a job
- The hardest qualities to find in applicants
In total, we heard from 38 people running top organisations working on making AGI go well for humanity. Some very interesting suggestions emerged that I’d like to share with job hunters:
What are the best steps for upskilling? A few programmes were mentioned multiple times. In order of how often respondents mentioned them:
- BlueDot courses
- The Talos Fellowship
- ML Alignment and Theory Scholars
- Fellowships from the Centre for the Governance of AI
- The Cambridge ERA:AI Fellowship
- The Pivotal Research Fellowship
Several respondents said that if you're talented or have 1–3 years of experience building a relevant skill set, a BlueDot course or a fellowship can give you enough AI context to get hired (excluding roles in technical AI research).
Startups were also recommended multiple times as a good place to gain operations experience, engineering experience, and “experience at moving fast.”
What about policy experience? If you're trying to enter an AI policy role, general policy experience is important, but luckily, it sounds like 1–3 years of it is enough to start making an impact.
This experience can include: reading widely, building relationships, understanding legislation, contributing to public consultations, and doing legal work.
Publish your outputs. Respondents strongly emphasised the importance of being public with your work and personal journey: show your projects; write blog posts and comments; make videos; build useful things (including with AI) and share them. Why? Employers need to know that you can do the work, and that you're proactive, energetic, and reasonable. Doing work in public is one of the best ways to give them that information.
Attend events and get to know people. This is especially important in Washington DC, and for organisations that host events. You can find some of these on our events board.
Conferences like EA Global can also be a great way to network with organisations. Also see our general advice on getting to know people in EA and AI safety.
What profiles are hardest for AI safety employers to find? It seems like organisations have a hard time hiring people who:
- Can be great chiefs of staff (which requires a mix of skills, probably best captured as organisation-building skills)
- Have policy experience
- Speak Mandarin
- Are European citizens with AI context
If any of those describe you, it's worth quickly ramping up your AI safety knowledge: the combination of those skills and AI safety context may be especially valuable.
Reading through all the responses, I found these patterns quite encouraging. You don't need 10+ years of experience or a niche skill set to meaningfully contribute; often you just need to be agentic and willing to skill up.
If you have the necessary skills and context, we have 300+ AI safety roles you can apply for on our job board. If you’re looking to skill up, we have a page of career development opportunities here.
Have a great weekend!
Conor
P.S. Was this newsletter helpful? We’d love to hear from you!