Open Contracting Position on the Video Team: Pair Writer / Writing Support to Aric Floyd

Aric standing behind a globe holding a book, still from a recent video

About us

80,000 Hours helps people find careers that effectively tackle the world’s most pressing problems. Our AI in Context YouTube channel makes documentary-style videos about transformative AI — its implications, the risks (including existential risks), and what people can do about it. We’ve published three videos so far (all with 1M+ views), and we want to hone our production.

The role

Aric does his best work when someone with deep context on the script is right next to him or live on a call. It’s a mix of writing, editing, ‘rubber ducking’ (being someone he can bounce ideas off), and project management.

Core writing support

This is the core of the role. Your job, at a high level: do as much of the metacognition as possible so Aric can focus on the writing itself.

In practice, that looks like:

  • “What’s your daily goal, and how do we plan the day around it?”
  • “Are you avoiding any part of the writing?”
  • “How did the last 15 minutes go?”
  • “What is the viewer going to think at this point in the story?”
  • Reading the script and giving a take on how the narrative is going
  • Being warmly disagreeable if he suggests a plan you think won’t work
  • Developing a model of how he works and what gets him stuck

Project managing the script

Beyond the in-the-room thinking partner stuff,

Continue reading →

Open Contracting Position on the Video Team: Executive Assistant to Aric Floyd

Still of Aric in a director's chair from the most recent video

About us

80,000 Hours helps people find careers that effectively tackle the world’s most pressing problems. Our AI in Context YouTube channel makes documentary-style videos about transformative AI — its implications, the risks (including existential risks), and what people can do about it. We’ve published three videos so far (all with 1M+ views), and we want to hone our production.

The role

We’re looking for an executive assistant to work closely with Aric Floyd, our on-camera host and one of our primary scriptwriters, and help with the daily logistics of a fast-paced creative production environment.

The core of the job: working with Aric to support his prioritization, planning, and communication with the team. You’d be a force multiplier for someone whose highest-value work is scripting, filming, and storytelling. You’d also develop a deep sense of how Aric thinks and works, so you can suggest improvements to his workflow and become increasingly helpful over time.

What you’d actually do

Daily priority planning: Each day (or more often as needed), compile a priority-ordered plan for Aric based on what’s happening across Slack, his calendar, the team standup doc, and any flagged items from his manager. The plan should read like a schedule for his day — time-blocked, with reasoning.

Make sure to collect any requests for him from Slack or email and flag them in priority order.

Continue reading →

AI safety needs more than engineers

There’s a lot of important work in AI safety that doesn’t require technical skills.

When I (Avital) first read about AI safety work, I assumed there wasn’t anything I could do. I was a writer and researcher who liked talking to people, and I thought the field only needed technical talent and money, neither of which I’d be able to provide.

So instead, I went to grad school for medieval history.

Of course, a lot of AI safety work is technical, and I knew I’d have a better shot if I could learn those skills. Unfortunately, it wasn’t how my brain worked. But as I got to know more people in the field, it became clear that my own skills could actually be useful. Technical AI safety organisations do much more than produce research: they hire people, run events, raise money, and share their ideas with the outside world. None of this requires linear algebra.

Some of the most important roles in AI safety are non-technical. In fact, I’ve met people who used to have technical roles, but now focus on communications, policy, fieldbuilding, or operations because they think those are genuinely more needed right now.

So what should you do if you want to try working in AI safety, but your talents don’t lie in a technical domain? First, think expansively about what you’re good at.

  • What do people most often ask you for advice about?

Continue reading →

Open Position on the Video Team: Operational Partner / Creative Operations Associate to Aric Floyd

About us

80,000 Hours helps people find careers that effectively tackle the world’s most pressing problems. Our AI in Context YouTube channel makes documentary-style videos about transformative AI — its implications, the risks (including existential risks), and what people can do about it. We’ve published three videos so far (all with 1M+ views), and we want to keep making excellent videos.

The role

Aric is responsible for a huge array of tasks: writing, communications, stakeholder management, research, outreach, title/thumbnail iteration, and more. This role aims to support and multiply his impact by focusing on these functions:

Writing partner, editor, and contributor

Our main bottleneck as a team is how many excellent scripts we can produce, and Aric is currently one of our primary scriptwriters. Aric does his best work when someone with deep context is right next to him or live on a call, able to give opinions about the script.

You’d give both synchronous and asynchronous substantive feedback on narrative and argument, read drafts with a viewer’s brain (“Where would I click away? Where am I confused?”), push on story structure, challenge framing, bring references from other creators, and develop taste for the AI in Context style.

Ideally you’re deep enough in the material that you can write a rough version of a script section that Aric can react to. This means doing passes where you’re actually rewriting (not just commenting),

Continue reading →

#242 – Will MacAskill on why AI character matters even more than you think

Hundreds of millions already turn to AI on the most personal of topics — therapy, political opinions, and how to treat others. And as AI takes over more of the economy, the character of these systems will shape culture on an even grander scale, ultimately becoming “the personality of most of the world’s workforce.”

So… should they be designed to push us towards the better angels of our nature? Or simply do as we ask? Will MacAskill, philosopher and senior research fellow at Forethought, has been thinking through that question and the other thorniest issues that come up in designing an AI personality.

He’s also been exploring how we might coexist peacefully with the ‘superintelligent AI’ companies are racing to build. He concludes that we should train such systems to be very risk averse, pay them for their work, and build institutions that enable humans to make credible contracts with AIs themselves.

Will and host Rob Wiblin also discuss what a good world after superintelligence would actually look like — a subject that has received surprisingly little attention from the people working to make it happen. Will argues that we shouldn’t aim for a specific utopian vision: we don’t know enough about what the best possible future actually is to aim directly for it, and trying to lock in today’s best guesses forever risks baking in errors we can’t yet see.

Will and Rob explore what we can do to steer towards a good future instead, along with why a coalition of democracies building superintelligence together is safer than any single actor, how absurdly useful ChatGPT is for analytic philosophy, and more.

This episode was recorded on February 6, 2026.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Alex Miles
Production: Elizabeth Cox, Nick Stockton, and Katy Moore

Continue reading →

Want to upskill in AI policy? Here are 57 useful resources

Are you enthusiastic about developing AI policy to minimise the technology’s risks and maximise its benefits? Need concrete ideas for how to enter the field?

Below, you’ll find our top resources for building skills to ensure government policies are prepared for a world with powerful AI systems. In practice, this involves developing the research skills, domain expertise, and interpersonal networks you’ll need to keep lawmakers informed — or work for one yourself.

We developed this list with our advisors to highlight the resources they most commonly recommend, including articles, courses, organisations, and fellowships. While we recommend applying to speak to an advisor for tailored, one-on-one guidance, this page gives a practical, noncomprehensive snapshot of how you might move from being interested in AI policy to actually working on it.

Overviews and expert advice

These resources outline the AI policy landscape, highlighting current research efforts and practical ways to begin contributing to the field.

Continue reading →

How scary is Claude Mythos? 303 pages in 21 minutes

As we now know, Anthropic has built an AI that can break into almost any computer on Earth. That AI has already found thousands of unknown security vulnerabilities in every major operating system and every major browser. And Anthropic has decided it’s too dangerous to release to the public; it would just cause too much harm.

Here are just a few of the things that AI accomplished during testing:

  • It found a 27-year-old flaw in the world’s most security-hardened operating system that would in effect let it crash all kinds of essential infrastructure.
  • Engineers at the company with no particular security training asked it to find vulnerabilities overnight and woke up to working exploits of critical security flaws that could be used to cause real harm.
  • It managed to figure out how to build web pages that, when visited by fully updated, fully patched computers, would allow it to write to the operating system kernel — the most important and protected layer of any computer.

We know all this because Anthropic has released hundreds of pages of documentation about this model, which they’ve called Claude Mythos.

I’m going to take you on a tour of all the crazy shit buried in these documents, and then I’m going to tell you what Anthropic says they plan to do to save us from their creation.

Why people are panicking about computer security

So how good is Mythos at hacking into computers?

Continue reading →

Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

What does it really take to lift millions out of poverty and prevent needless deaths?

In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India.

What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.

You’ll hear from:

  • Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds, and keeping up great relationships with researchers and governments (from episode #124)
  • Dean Spears on the social forces and gender inequality that contribute to neonatal mortality in Uttar Pradesh (#186)
  • Sarah Eustis-Guthrie on what we can learn from the massive failure of PlayPumps, and whether more charities should scale back or shut down (#207)
  • Rachel Glennerster on solving tough global problems by creating the right incentives for innovation, the value we get from doing the right RCTs well, and whether it’s best to focus on small-scale interventions or systemic reforms (#49 and #189)
  • Hannah Ritchie on why improving agricultural productivity in sub-Saharan Africa is critical to solving global poverty (#160)
  • Lucia Coulter on the huge, neglected upsides of reducing lead exposure, and how her organisation rapidly scaled up to 17 countries (#175)
  • James Tibenderana on whether we should use gene drives to wipe out the species of mosquitoes that cause malaria, and the data gaps that will keep us from harnessing the power of AI to eradicate the disease (#129)
  • Varsha Venugopal on using village gossip to get kids their critical immunisations (#113)
  • Alexander Berger on declining returns in global health, and reasons neartermist work makes sense even by longtermist standards (#105)
  • James Snowden on making funding decisions with tricky moral weights (#37)
  • Paul Niehaus on why it’s so important to give aid recipients a choice in how they spend their money (#169)
  • Mushtaq Khan on really drilling down into why “context matters” for development work (#111)
  • Elie Hassenfeld on contrasting GiveWell’s approach with the subjective wellbeing approach of Happier Lives Institute (#153)
  • Leah Utyasheva on how a simple intervention reduced suicide in Sri Lanka by 70% (#22)
  • Shruti Rajagopalan on the key skills to succeed in public policy careers, and seeing economics in everything (#84)
  • Claire Walsh on her career advice for young people who want to get involved in global health and development (#13)


Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Continue reading →

Are Anthropic and its supporters hypocritical, naive, and anti-democratic?

I’ve spent years calling for more government oversight of frontier AI development. So am I a hypocrite for opposing the Pentagon’s attempt to commit “corporate murder” against Anthropic? That’s what I’ve heard. As venture capitalist Marc Andreessen put it on Twitter: “Every single person who was in favor of government control of AI, is now opposed to government control of AI.”

It’s a natural way to think. But it’s also completely wrong — and I want to explain why it’s wrong, because you see the same underlying confusion all over the place.

It’s not just hypocrisy, though. I count at least three distinct charges critics have levelled at Anthropic and the people who support it in its dispute with the Pentagon.

I mentioned the hypocrisy charge, but there’s a separate accusation of naivety: that when you’re building something as powerful as AI, the state will inevitably crush you if you try to set conditions on how it uses that technology — and so Anthropic was in the wrong to pick this fight.

And third, there’s an accusation of being undemocratic: that a private company has no business telling the elected government what it can and can’t do with military technology.

I’m going to take each of these charges seriously and explain where I think they go wrong and why. Let’s dive in.

Charge 1: Hypocrisy

Let’s start with hypocrisy, because it’s the charge I hear most often and the most straightforward of the three.

Continue reading →

#241 – AI designs genomes from scratch & outperforms virologists at lab work. Dr Richard Moulange asks: what could go wrong?

Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.

That alone is remarkable. But as today’s guest — Dr Richard Moulange, one of the world’s top experts on ‘AI–Biosecurity’ — explains, it’s just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.

For years, experts have reassured us that ‘tacit knowledge’ — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they’ve been right.

But as of 2025, that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crush top human virologists even in their self-declared areas of greatest specialisation — 45% to 22%.

Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.

In today’s conversation, Richard and host Rob Wiblin discuss:

  • What AI biology tools already exist
  • Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
  • The three main categories of defence we can pursue
  • Whether there’s a plausible path to a world where engineered pandemics become a thing of the past

This episode was recorded on January 16, 2026. Since recording this episode, Richard has been seconded to the UK Government — please note that the views expressed here are entirely his own.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jeremy Chevillotte
Transcripts and web: Elizabeth Cox and Katy Moore

A couple of announcements from 80,000 Hours

  1. We’ve got a book coming out! 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It’s a completely revised and updated edition of our existing career guide, with a big new section on AI — covering both the risks and the potential to steer the technology in a better direction, and how AI automation should affect your career planning and which skills you choose to specialise in. It’s available to preorder anywhere you buy books.
  2. The team behind The 80,000 Hours Podcast is hiring contract video editors! For more information, check out the expression of interest page on our website.

Continue reading →

#240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a wider war

Many people believe a ceasefire in Ukraine will leave Europe safer. But today’s guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that’s equally dangerous, just in different ways.

That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. Instead, he forecasts a fragile peace that breaks down and drags in European neighbours; instability in Belarus that prompts Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.

Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and caused diplomatic relations between the two to collapse. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.

What he prescribes isn’t a full peace treaty; it’s a negotiated settlement that stops the killing and begins a longer negotiation that gives neither side exactly what it wants, but just enough to deter renewed aggression. Both sides stop dying and the flames of war fizzle — hopefully.

None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.

This episode was recorded on February 27, 2026.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore

Continue reading →

The Sable scenario in If Anyone Builds It, Everyone Dies vs. the AI in Context retelling

Our version of the Sable story differs from the book in a number of ways. Most of our changes were made for one of two reasons:

  1. Simplicity. We had to tell the story within a 40-minute video that also covered the book’s core arguments. We chose to simplify or eliminate several plot points as a result (in Yudkowsky & Soares’ story, for example, Sable is assigned a large suite of math problems, and the eventual global pandemic has multiple waves).
  2. Similarity to current AI systems. Yudkowsky and Soares give Sable several capabilities that don’t exist in today’s most powerful models but plausibly could within a few years (reasoning in raw numerical vectors instead of natural-language tokens, for instance, or a “parallel scaling law” that lets a single mind think across hundreds of thousands of GPUs at once). We chose to make our version of Sable a bit more like a scaled-up version of current large language models. It reasons in tokens, it runs as many separate copies rather than one unified mind, and it fine-tunes its own weights rather than subtly shaping future gradient updates.

None of this is a knock on the realism of the book’s choices. We just wanted to highlight that you don’t necessarily need major architectural breakthroughs to get into a risky scenario. More capable versions of the technology we already have might be sufficient.

Here are some of the key changes we made:

  1. In the book,

Continue reading →

Meta’s $16 billion scam economy: what leaked docs reveal about tech company self-regulation

An overarching question that matters for governance of artificial intelligence is how much we can trust technology companies to do the right thing in cases where it becomes seriously costly for them to do so.

Well, some internal documents leaked from Meta (the company behind Facebook, Instagram, and WhatsApp) are a really important piece of evidence in this debate — and one that, in my opinion, far too few people have heard anything about.

These documents, which were leaked to Reuters by a frustrated staff member, show that Meta itself estimated that 10% of all its revenue — around $16 billion a year — was coming from running ads for scams and for goods it had banned. That’s 10% of all revenue coming, quite often, from fundamentally enabling crimes.

Meta also estimated that its platforms were involved in initiating a third of all successful scams taking place in the entire United States. How much might we expect that to be costing ordinary people, on average? Well, the FTC estimates that Americans lose about $158 billion a year to fraud. If Meta really facilitates a third of that, we’re talking roughly $50 billion a year in losses connected to Meta’s platforms — or about $160 per person per year.
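To make the back-of-the-envelope arithmetic explicit, here’s a minimal sketch in Python. The ~330 million US population figure is my own assumption for illustration; the other numbers are the estimates quoted above.

```python
# Rough check of the fraud-loss figures discussed above.
# Assumption: US population of ~330 million (not from the leaked documents).

ftc_annual_fraud_losses = 158e9  # FTC estimate: ~$158B lost to fraud per year
meta_share = 1 / 3               # Meta's estimate: a third of successful US scams
us_population = 330e6            # assumed, for illustration only

meta_linked_losses = ftc_annual_fraud_losses * meta_share
per_person_per_year = meta_linked_losses / us_population

print(f"Meta-linked losses: ~${meta_linked_losses / 1e9:.0f}B/year")  # ~$53B/year
print(f"Per US resident:    ~${per_person_per_year:.0f}/year")        # ~$160/year
```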

Now, not everyone inside Meta loved this state of affairs, as you can imagine. And the documents also show that a Meta anti-fraud team came up with a screening method that,

Continue reading →

US electoral politics

In a nutshell: Who wields power in the US government is a very important determinant of outcomes for many of the world’s most pressing problems, including reducing catastrophic risks from AI. The most direct ways to affect who controls the levers of government power are to work on campaigns and run for office. Because election outcomes are so important, and sufficiently tractable, we think electoral work is a great option for many people. However, it can be hard to make an impact in these roles, and the work has material downsides, like low pay and low job security.

Pros:

  • Very wide breadth and scale of potential impact
  • Low barriers to entry
  • Entry point for other areas with high potential impact such as policy
  • Fast-paced and often exciting work

Cons:

  • Less direct path to impact
  • Even if you win an election, impact is not guaranteed (and could be negative)
  • Difficult to assess where you can have the greatest leverage
  • Challenging working conditions and high risk of burnout
  • Many elected offices (though not campaign roles) are limited to US citizens

Key facts on fit:

  • You don’t need any particular credentials or experience to work in electoral politics. Many different backgrounds can be helpful.
  • The key traits that make you good in political roles are adaptability, hustle, and good communication and people skills.
  • You’ll need to tolerate relatively low pay and job insecurity, and to tolerate (or enjoy) rapid change in the nature of your work due to changing political winds.
  • A love of politics may not be essential — you might even be a better fit if you are not highly ideological and are somewhat wary of politics, since this can make you less prone to unproductive partisanship, motivated reasoning, and value drift caused by proximity to power. (That said, you’ll need to like politics enough to enjoy your work and be good at it.)

Two key paths to impact through electoral politics

There are two key career paths in electoral politics:

  1. Running for office. Elected officials face real constraints, but they also have significant latitude in how they leverage their roles. If you’re well-informed and motivated to make progress on a specific issue, winning the right office puts you in a position to do so.
  2. Becoming a campaign professional who helps good candidates win elections. This can mean either directly staffing candidates’ campaigns or working for organisations that seek to help candidates get elected to public office. You also have to choose the candidates you support wisely.

We’ll talk about both in this profile. Note that while these options overlap in the skills and traits that might make you a good fit, there are also important differences between them.

Continue reading →

#239 – Rose Hadshar on why automating human labour will break our political system

The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.

That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment.

She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.

Almost nobody wants this to happen — but we may find ourselves unable to prevent it.

If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we’re relying on to prevent the worst outcomes?

Rose has answers, and they’re not all reassuring.

But she’s also hopeful we can make society more robust against these dynamics. We’ve got literally centuries of thinking about checks and balances to draw on. And there are some interventions she’s excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.

Rose discusses all of this, and more, with host Zershaaneh Qureshi in today’s episode.

This episode was recorded on December 18, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Nick Stockton and Katy Moore

Continue reading →

If someone builds it, does everyone die?

Our audience has really shown up for AI in Context videos, and we cannot tell you how much we appreciate it. We even got a massive shoutout from Hank Green, who’s been one of our YouTube idols since Chana was in high school.

We loved making our first two videos. “We’re not ready for superintelligence” explored a specific scenario of how self-improving, transformative AI might unfold; our video about Grok’s MechaHitler moment showed what happens when an AI company loses control of its own system. But when we planned our third video, we realised we’d been circling the big idea without hitting it head-on: specifically, the argument that the current trajectory to superintelligent AI will kill everyone.

A man standing behind a globe, with the text "If anyone builds it, everyone dies" overlaid.


This is the claim made by Nate Soares and Eliezer Yudkowsky in If Anyone Builds It, Everyone Dies: a New York Times bestseller with blurbs from people like Ben Bernanke (who ran the US Federal Reserve) and Mark Ruffalo (who punched Thanos).

Why this book?

Yudkowsky and Soares have done more than just about anyone else to make AI safety what it is today: a recognised concern and flourishing research field.

They spent decades doing research, but eventually came to believe that their work would be too little, too late. So they pivoted to communications: trying to get the world to wake up and start taking the risks of superintelligent AI seriously.

Continue reading →

#238 – Sam Winter-Levy and Nikita Lalwani on how AI won’t end nuclear deterrence (probably)

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that could define the stakes of today’s AI race.

Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country gained those capabilities first could wield unprecedented coercive power.

Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:

  • Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
  • Would road-mobile launchers still be able to hide in tunnels and under netting?
  • Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
  • Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?

Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.

This episode was recorded on November 24, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Nick Stockton and Katy Moore

Continue reading →

Half-year update: Q3-Q4 2025

Org-wide updates

We’re releasing a new career guide with Penguin Random House in May

In late May, we’ll be releasing the new edition of our career guide! This time, it will be published with Penguin Random House and available in traditional bookstores. Its core ideas have been updated for 2026, but the focus remains on having a fulfilling, high-impact career. We’ve also added two new chapters on AI, explaining its effects on the job market and why it’s the most pressing issue facing society. As usual, preorders (and sharing the forthcoming launch announcements) help the book reach more people, so any support is much appreciated.

We’ve grown to 50 staff members

Last month, 80,000 Hours reached 50 primary staff. Here’s a rough breakdown of how our team is currently split (note our one-on-one team recently rebranded to career services to better represent the work of its advising, job board, and headhunting sub-teams):

We’ve updated our estimates of job board placements

We recently surveyed 91 organisations about how useful our job board is in sourcing candidates for them.

The headline findings were:

  • In total, we learned of around 181 new placements attributable to our job board.
  • Among the 45 organisations that gave answers we could use, job board applicants accounted for around 20.8% of top candidates.
  • The volume of lower-quality candidates coming from the job board has increased relative to 2024,

Continue reading →

#237 – Robert Long on how we’re not ready for AI consciousness

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with that?

Robert Long founded Eleos AI to explore questions like these, on the basis that AI may one day be capable of suffering — or already is. In today’s episode, Robert and host Luisa Rodriguez explore the many ways in which AI consciousness may be very different from anything we’re used to.

Things get strange fast: If AI is conscious, where does that consciousness exist? In the base model? A chat session? A single forward pass? If you close the chat, is the AI asleep or dead?

To Robert, these kinds of questions aren’t just philosophical exercises: not being clear on AI’s moral status as it transitions from human-level to superhuman intelligence could be dangerous. If we’re too dismissive, we risk unintentionally exploiting sentient beings. If we’re too sympathetic, we might rush to “liberate” AI systems in ways that make them harder to control — worsening existential risk from power-seeking AIs.

Robert argues the path through is doing the empirical and philosophical homework now, while the stakes are still manageable.

The field is tiny. Eleos AI is three people. As a result, Robert argues that driven researchers with a willingness to venture into uncertain territory can push out the frontier on these questions remarkably quickly.

This episode was recorded November 18–19, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Continue reading →