2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode:

  • Helen Toner on whether we’re racing China to build AGI (from episode #227)
  • Hugh White on what he’d say to Americans (#218)
  • Buck Shlegeris on convincing AI models they’ve already escaped (#214)
  • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (#231)
  • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (#216)
  • Beth Barnes on AI companies being locally reasonable, but globally reckless (#217)
  • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of its restructure (November update)
  • Toby Ord on whether rich people will get access to AGI first (#219)
  • Andrew Snyder-Beattie on how the worst biorisks are defence-dominant (#224)
  • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (#228)
  • Will MacAskill on what a century of history crammed into a decade might feel like (#213)
  • Kyle Fish on what happens when two instances of Claude are left to interact with each other (#221)
  • Sam Bowman on where the Not In My Back Yard movement actually has a point (#211)
  • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (#222)
  • Tom Davidson on the potential to install secret AI loyalties at a very early stage (#215)
  • Luisa and Rob on how medicine doesn’t take the health burden of pregnancy seriously enough (November team chat)
  • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (#229)
  • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (#226)
  • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (#212)
  • Ryan Greenblatt on the most likely ways for AI to take over (#220)
  • Daniel Kokotajlo on the updates he’s made to his forecasts since writing and publishing the AI 2027 scenario (#225)
  • Dean Ball on why regulation invites path dependency, and why that’s a major problem (#230)

It’s been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

Subscribe here, or anywhere you get podcasts.

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.