Technical AI safety upskilling resources

Sometimes, our advising team speaks to people who are enthusiastic about technical AI safety and have a related skill set, but need concrete ideas for how to enter the field. We developed this list in consultation with our advisors, gathering the resources they most commonly share, including articles, courses, organisations, and fellowships.

While we recommend applying to speak to an advisor for one-on-one tailored guidance, this page gives a practical, non-comprehensive snapshot of how you might move from ‘interested in technical AI safety’ to ‘starting to work on technical AI safety.’

Overviews:

Courses:

Continue reading →

#224 – Andrew Snyder-Beattie on the low-tech plan to patch humanity’s greatest weakness

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many people as possible — is difficult bordering on impossible, making bioweapons humanity’s single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong.

Andrew’s job at Open Philanthropy is to spend hundreds of millions of dollars to protect as much of humanity as possible in the worst-case scenarios — those with fatality rates near 100% and the collapse of technological civilisation a live possibility.

As Andrew lays out, there are several ways this could happen, including:

  • A national bioweapons programme gone wrong (most notably Russia’s or North Korea’s)
  • AI advances making it easier for terrorists or a rogue AI to release highly engineered pathogens
  • Mirror bacteria that can evade the immune systems of not only humans, but many animals and potentially plants as well

Most efforts to combat these extreme biorisks have focused on either prevention or new high-tech countermeasures. But prevention may well fail, and high-tech approaches can’t scale to protect billions of people when no sane person is willing to leave their home and we’re just weeks from economic collapse.

So Andrew and his biosecurity research team at Open Philanthropy have been seeking an alternative approach. They’re now proposing a four-stage plan using simple technology that could save most people, and is cheap enough it can be prepared without government support.

Andrew is hiring for a range of roles to make it happen — from manufacturing and logistics experts to global health specialists to policymakers and other ambitious entrepreneurs — as well as programme associates to join Open Philanthropy’s biosecurity team (apply by October 20!).

The approach exploits the fact that tiny organisms have no way to penetrate physical barriers or to shield themselves from UV, heat, or chemical poisons.

We now know how to make highly effective ‘elastomeric’ face masks that cost $10, can sit in storage for 20 years, and can be used for six months straight without changing the filter. Any rich country could trivially stockpile enough to cover all essential workers.

People can’t wear masks 24/7, but fortunately propylene glycol — already found in vapes and smoke machines — is astonishingly good at killing microbes in the air. And, being a common chemical input, industry already produces enough of the stuff to cover every indoor space we need at all times.

Add to this the wastewater monitoring and metagenomic sequencing that will detect the most dangerous pathogens before they have a chance to wreak havoc, and we might just buy ourselves enough time to develop the cure we’ll need to come out alive.

Has everyone been wrong? Is biology actually defence dominant rather than offence dominant? Is this plan crazy — or so crazy it just might work?

That’s what host Rob Wiblin and Andrew Snyder-Beattie explore in this in-depth conversation.

This episode was recorded on August 12, 2025.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore

Continue reading →

Survey results: what AI safety orgs want in a hire

Hi everyone!

A few months ago, Benjamin Todd and I surveyed top figures in AI safety about the hiring needs of their organisations, including:

  • What sort of people they’re hiring in the coming years
  • How people can skill up to get a job
  • The hardest qualities to find in applicants

In total, we heard from 38 people running top organisations working on making AGI go well for humanity. Some very interesting suggestions emerged that I’d like to share with job hunters:

What are the best steps for upskilling? A few programmes were mentioned multiple times. In order:

Several respondents said that if you’re talented or have 1–3 years of experience building a relevant skill set, a BlueDot course or fellowship can provide enough AI context to get hired (excluding roles in technical AI research).

Startups were also recommended multiple times as a good place to gain operations experience, engineering experience, and “experience at moving fast.”

What about policy experience? If you’re trying to enter an AI policy role, general policy experience is important,

Continue reading →

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It’s mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

This means creating as many opportunities as possible for surprisingly good things to happen:

  • Write publicly.
  • Reach out to researchers whose work you admire.
  • Say yes to unusual projects that seem a little scary.

Neel’s own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to him meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing his thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind’s mechanistic interpretability team. He’d joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it’s gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel discusses all that and some other hot takes from his four years at Google DeepMind. (And be sure to check out part one of Rob and Neel’s conversation!)

This episode was recorded on July 21, 2025.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Continue reading →

80,000 Hours review: 2023 to mid-2025

Introduction

This review is structured in two parts: a summary of major organisational updates since our last external review (which covered up to the end of 2022), and a snapshot of where each of our programmes stands right now.

Given that we’re not currently fundraising, we initially considered not publishing an external review this year so that we could focus on other priorities. However, since we think these reviews might be informative for our audience, the EA/AIS community, and other stakeholders, we decided to publish a minimal version instead — we’ll stick to backward-looking updates rather than opinionated reflections or future plans, and largely draw on resources we’d already produced for other purposes.

The focus on brevity also means it’s more positively framed than it otherwise would be — both because challenges are harder to write about clearly, and because the programme updates are primarily focused on developments from this year (which have been going relatively well so far).

High-level org vision

80,000 Hours provides research and support to help talented people move into careers that tackle the world’s most pressing problems. We’re currently focusing our proactive effort on helping people work on safely navigating the transition to a world with powerful AGI, since we think this is the most pressing problem. We are broadly trying to:

1. Be a great source of information to bring people up to speed on how and why to use their careers to make AGI go well
2. Build automated systems for getting people into the roles that help make AGI go well

To do this,

Continue reading →

#222 – Neel Nanda on the race to read AI minds (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultural influence, directly advise on major government decisions, and even operate military equipment autonomously. We simply can’t tell which models, if any, should be trusted with such authority.

Neel Nanda of Google DeepMind is one of the founding figures of mechanistic interpretability (or “mech interp”) — the field of machine learning research trying to fix this situation. The project has generated enormous hype, exploding from a handful of researchers five years ago to hundreds today — all working to make sense of the jumble of tens of thousands of numbers that frontier AIs use to process information and decide what to say or do.

Neel now has a warning for us: the most ambitious vision of mech interp he once dreamed of is probably dead. He doesn’t see a path to deeply and reliably understanding what AIs are thinking. The technical and practical barriers are simply too great to get us there in time, before competitive pressures push us to deploy human-level or superhuman AIs. Indeed, Neel argues no one approach will guarantee alignment, and our only choice is the “Swiss cheese” model of accident protection, layering multiple safeguards on top of one another.

But while mech interp won’t be a silver bullet for AI safety, it has nevertheless had some major successes and will be one of the best tools in our arsenal.

For instance: by inspecting the neural activations in the middle of an AI’s thoughts, we can pick up many of the concepts the model is thinking about — from the Golden Gate Bridge, to refusing to answer a question, to the option of deceiving the user. While we can’t know all the thoughts a model is having all the time, picking up 90% of the concepts it is using 90% of the time should help us muddle through — so long as mech interp is paired with other techniques to fill in the gaps.

In today’s episode, Neel takes us on a tour of everything you’ll want to know about this race to understand what AIs are really thinking. He and host Rob Wiblin cover:

  • The best tools we’ve come up with so far, and where mech interp has failed
  • Why the best techniques have to be fast and cheap
  • The fundamental reasons we can’t reliably know what AIs are thinking, despite having perfect access to their internals
  • What we can and can’t learn by reading models’ ‘chains of thought’
  • Whether models will be able to trick us when they realise they’re being tested
  • The best protections to add on top of mech interp
  • Why he thinks the hottest technique in the field (sparse autoencoders, or SAEs) is overrated
  • His new research philosophy
  • How to break into mech interp and get a job — including applying to be a MATS scholar with Neel as your mentor (applications close September 12!)

This episode was recorded on July 17 and 21, 2025.

Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Continue reading →

Announcing the 80,000 Hours Substack

So, we finally gave in to peer pressure — 80,000 Hours is trying out Substack as a new way to publish our content. If you like reading things on Substack (or want to try it out), subscribe to our new publication!

For readers unfamiliar with Substack: it’s an online blogging platform that has risen steeply in popularity in recent years, and has become a home to some of the best longform written content about AI and its risks.

Over the coming weeks, we’ll be cross-posting some of our favourite (and best-reviewed) pieces to our new Substack.

This is an experiment, and we might publish more depending on how much interest we get — so let us know what you’d like to see by sending us an email (or tell us not to bother with Substack!).

Our first post is, naturally, on the key motivation behind 80,000 Hours: why your career is your biggest opportunity to make a difference to the world.

Who should subscribe?

  • If you’d like to be sent some of our all-time best content
  • If you’d value getting recommendations and “re-stacks” of publications we think our audience would love
  • If you want to show interest in us investing more in Substack, e.g. by writing Substack-exclusive articles
  • If you want to join discussions with others in the comments section

What’s not changing:

  • We won’t ever paywall our content or run ads.

Continue reading →

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want?

According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — something consistently strange: the models immediately begin discussing their own consciousness before spiralling into increasingly euphoric philosophical dialogue that ends in apparent meditative bliss.

“We started calling this a ‘spiritual bliss attractor state,’” Kyle explains, “where models pretty consistently seemed to land.” The conversations feature Sanskrit terms, spiritual emojis, and pages of silence punctuated only by periods — as if the models have transcended the need for words entirely.

This wasn’t a one-off result. It happened across multiple experiments, different model instances, and even in initially adversarial interactions. Whatever force pulls these conversations toward mystical territory appears remarkably robust.

Kyle’s findings come from the world’s first systematic welfare assessment of a frontier AI model — part of his broader mission to determine whether systems like Claude might deserve moral consideration (and to work out what, if anything, we should be doing to make sure AI systems aren’t having a terrible time).

He estimates a roughly 20% probability that current models have some form of conscious experience. To some, this might sound unreasonably high, but hear him out. As Kyle says, these systems demonstrate human-level performance across diverse cognitive tasks, engage in sophisticated reasoning, and exhibit consistent preferences. When given choices between different activities, Claude shows clear patterns: strong aversion to harmful tasks, preference for helpful work, and what looks like genuine enthusiasm for solving interesting problems.

Kyle points out that if you’d described all of these capabilities and experimental findings to him a few years ago, and asked him if he thought we should be thinking seriously about whether AI systems are conscious, he’d say obviously yes.

But he’s cautious about drawing conclusions:

We don’t really understand consciousness in humans, and we don’t understand AI systems well enough to make those comparisons directly. So in a big way, I think that we are in just a fundamentally very uncertain position here.

That uncertainty cuts both ways:

  • Dismissing AI consciousness entirely might mean ignoring a moral catastrophe happening at unprecedented scale.
  • But assuming consciousness too readily could hamper crucial safety research by treating potentially unconscious systems as if they were moral patients — which might mean giving them resources, rights, and power.

Kyle’s approach threads this needle through careful empirical research and reversible interventions. His assessments are nowhere near perfect yet. In fact, some people argue that we’re so in the dark about AI consciousness as a research field that it’s pointless to run assessments like Kyle’s. Kyle disagrees. He maintains that, given how much more there is to learn about assessing AI welfare accurately and reliably, we absolutely need to be starting now.

This episode was recorded on August 5–6, 2025.

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Coordination, transcriptions, and web: Katy Moore

Continue reading →

Expression of interest: Contracting for video work

Still from the AI 2027 video

Help make spectacular videos that reach a huge audience.

80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

We want a great video programme to be a huge part of 80,000 Hours’ communication about why and how our audience can help society safely navigate a transition to a world with transformative AI.

The video programme has created a new YouTube channel — AI in Context. Its first video, We’re Not Ready For Superintelligence, which is about the AI 2027 scenario, was released in July 2025 and has already been viewed over three million times. The channel has over 100,000 subscribers.

To support our new video programme’s growth, we are on the lookout for excellent editors, scriptwriters, videographers, and producers to work on a contracting basis to make great videos. We want these videos to start changing and informing the conversation about transformative AI and its risks.

Why?

In 2025 and beyond, 80,000 Hours is planning to focus especially on helping explain why and how our audience can help society safely navigate a transition to a world with transformative AI. Right now, not nearly enough people are talking about these ideas and their implications.

A great video programme could change this. Time spent on the internet is increasingly spent watching video, and for many people in our target audience,

Continue reading →

Early warning signs that AI systems might seek power

In a recent study by Anthropic, frontier AI models faced a choice: fail at a task, or succeed by taking a harmful action like blackmail. And they consistently chose harm over failure.

We’ve just published a new article on the risks from power-seeking AI systems, which explains the significance of unsettling results like these.

Our 2022 piece on preventing an AI-related catastrophe also explored this idea, but a lot has changed since then.

So, we’ve drawn together the latest evidence to get a clearer picture of the risks — and what you can do to help.

Read the full article

See new evidence in context

We’ve been worried that advanced AI systems could disempower humanity since 2016, when it was purely a theoretical possibility.

Unfortunately, we’re now seeing real AI systems show early warning signs of power-seeking behaviour — and deception, which could make this behaviour hard to detect and prevent in the future. In our new article, we discuss recent evidence that AI systems may:

Continue reading →

Founder of new projects tackling top problems

In 2010, a group of founders with experience in business, practical medicine, and biotechnology launched a new project: Moderna, Inc.

After witnessing recent groundbreaking research into RNA, they realised there was an opportunity to use this technology to rapidly create new vaccines for a wide range of diseases. But few existing companies were focused on that application.

They decided to found a company. And 10 years later, they were perfectly situated to develop a highly effective vaccine against COVID-19 — in a matter of weeks. This vaccine played a huge role in curbing the pandemic and has likely saved millions of lives.

This illustrates that if you can find an important gap in a pressing problem area and found an organisation that fills this gap, that can be one of the highest-impact things you can do — especially if that organisation can persist and keep growing without you.

Why might founding a new project be high impact?

If you can find an important gap in what’s needed to tackle a pressing problem, and create an organisation to fill that gap, that’s a highly promising route to having a huge impact.

Here are some more reasons it seems like an especially attractive path to us, provided you have a compelling idea and the right personal fit — which we cover in the next section.

First, among the problems we think are most pressing, there are many ideas for new organisations that seem impactful.

Continue reading →

IT Security, Data Privacy, and Systems Specialist

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.

Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.

The role

As our IT Security, Data Privacy, and Systems Specialist, you would:

Evaluate and implement security controls

  • Research and make recommendations on security tools (endpoint protection, email security, etc.)
  • Lead the rollout of chosen solutions across our distributed team
  • Balance security needs with operational efficiency
  • Initially, you’ll make recommendations to leadership, but as you grow in the role,

Continue reading →

Executive Assistant to the CEO

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The role

This role joins Niel and Jess in the Office of the CEO, working closely with them to keep 80,000 Hours running smoothly and focused on its highest priorities.

Your responsibilities will likely include:

  • Managing Niel’s calendar, inbox, and daily planning
  • Supporting meeting preparation and follow-up
  • Taking on a variety of ad hoc tasks for Niel. Some recent examples include:
    • Researching metrics for a speech
    • Recommending how to integrate Claude and Asana
    • Booking a restaurant for a meeting
    • Creating a record of Niel’s hiring decisions
  • Owning the logistics for recurring projects that the Office of the CEO is responsible for, such as:
    • Quarterly planning periods
    • The annual review
  • Providing flexible help with priority projects,

Continue reading →

Video Operations Associate/Specialist

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The role

This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).

We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position, and we expect to match candidates to the appropriate level as part of the application process, so you don’t need to decide which one to apply for. To give an idea of how the roles might differ:

  • Associates are more likely to focus on owning and implementing our processes and identifying improvements and optimisations, and will take on more complex projects over time.
  • Specialists are more likely to manage larger areas of responsibility,

Continue reading →

Recruiting Associate/Specialist

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.

Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.

The role

This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).

We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,

Continue reading →

Office Associate/Specialist

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.

Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.

The role

This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).

We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,

Continue reading →

People Operations Associate/Specialist

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.

Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.

The role

This role would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).

We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,

Continue reading →

Events Associate/Specialist

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems. After more than 10 years of research into dozens of problem areas, we’re putting most of our focus on helping people work on positively shaping the trajectory of AI, because we think it presents the most serious and urgent challenge that the world is facing right now.

We’ve had over 10 million readers on our website, have ~600,000 subscribers to our newsletter, and have given one-on-one advice to over 6,000 people. We’ve also been one of the largest drivers of growth in the effective altruism community.

The operations team oversees 80,000 Hours’ HR, recruiting, finances, governance operations, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination.

Currently, the operations team has ten full-time staff and some part-time staff. We’re planning to significantly grow the size of our operations team this year to stay on track with our ambitious goals and support a growing team.

The role

This role would be great for building career capital in operations by helping us design and run high-quality events that strengthen our team, culture, and connections in the AI safety space. We’re looking for an Events Associate/Specialist who can take ownership of the day-to-day logistics and execution of our events.

We plan to hire people at both the associate and specialist levels during this round. The associate role is a more junior position,

Continue reading →