#204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

In today’s episode, Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

Effective altruism is the River style of doing good: it’s willing to buck both fashion and common sense, making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising the good achieved.

Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

But while every approach has arguable weaknesses, could Nate actually do any better in practice? We ask him:

  • How would Nate spend $10 billion differently from today’s EA-influenced philanthropists?
  • Is anyone else competitive with EA in terms of impact per dollar?
  • Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
  • Is EA too big a tent to function?
  • What global problems could EA be ignoring?
  • Should EA be more willing to court controversy?
  • Does EA’s niceness leave it vulnerable to exploitation?
  • What moral philosophy would he have modelled EA on?

Rob and Nate also talk about:

  • Nate’s theory of Sam Bankman-Fried’s psychology.
  • Whether we had to “raise or fold” on COVID.
  • Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
  • “Winners’ tilt.”
  • Whether it’s selfish to slow down AI progress.
  • The ridiculous 13 Keys to the White House.
  • Whether prediction markets are now overrated.
  • Whether venture capitalists talk a big game about risk while pushing all the risk off onto the entrepreneurs they fund.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

Continue reading →

Anonymous answers: could advances in AI supercharge biorisk?

This is Part Four of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Three: Infohazards.

One of the most prominently discussed catastrophic risks from AI is the potential for an AI-enabled bioweapon.

But discussions of future technologies are necessarily speculative. So it’s not surprising that there’s no consensus among biosecurity experts about the impact AI is likely to have on their field.

We decided to talk to more than a dozen biosecurity experts to better understand their views on the potential for AI to exacerbate biorisk. This is the fourth and final instalment of our biosecurity anonymous answers series. Below, we present 11 answers from these experts about whether recent advances in AI — such as ChatGPT and AlphaFold — have changed their biosecurity priorities and what interventions they think are promising to reduce the risks. (As we conducted the interviews around one year ago, some experts may have updated their views in the meantime.)

Some key topics and areas of disagreement that emerged include:

  • The extent to which recent AI developments have changed biosecurity priorities
  • The potential of AI to lower barriers for creating biological threats
  • The effectiveness of current AI models in the biological domain
  • The balance between AI as a threat multiplier and as a tool for defence
  • The urgency of developing new interventions to address AI-enhanced biosecurity risks
  • The role of AI companies and policymakers in mitigating potential dangers

Here’s what the experts had to say.

Continue reading →

Updates to our problem rankings of factory farming, climate change, and more

At 80,000 Hours, we are interested in the question: “if you want to find the best way to have a positive impact with your career, what should you do on the margin?” The ‘on the margin’ qualifier is crucial. We are asking how you can have a bigger impact, given how the rest of society spends its resources.

To help our readers think this through, we publish a list of what we see as the world’s most pressing problems. We rank the top issues by our assessment of where additional work and resources will have the greatest positive impact, considered impartially and in expectation.

Every problem on our list is there because we think it’s very important and a big opportunity for doing good. We’re excited for our readers to make progress on all of them, and think all of them would ideally get more resources and attention than they currently do from society at large.

The most pressing problems are those that have the greatest combination (sketched roughly in the example after this list) of being:

  • Large in scale: solving the issue improves more lives to a larger extent over the long run.
  • Neglected by others: the best interventions aren’t already being done.
  • Tractable: we can make progress if we try.
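
To make “greatest combination” concrete, here’s a minimal, purely illustrative sketch in Python. The multiplicative scoring rule, the 0–10 scales, and the made-up ratings are all assumptions for illustration; the post above describes the criteria only qualitatively, and this is not 80,000 Hours’ actual methodology.

```python
# Toy illustration of combining scale, neglectedness, and tractability.
# The numbers and the multiplicative rule are made up for this sketch.

problems = {
    # name: (scale, neglectedness, tractability) on arbitrary 0-10 scales
    "Problem A": (9, 4, 3),
    "Problem B": (6, 8, 5),
    "Problem C": (8, 2, 7),
}

def priority_score(scale: float, neglectedness: float, tractability: float) -> float:
    """Combine the three factors multiplicatively, so a problem has to do
    reasonably well on all of them to rank highly."""
    return scale * neglectedness * tractability

# Rank the toy problems from highest to lowest score.
for name, factors in sorted(problems.items(), key=lambda kv: priority_score(*kv[1]), reverse=True):
    print(f"{name}: score {priority_score(*factors)}")
```

One property of a multiplicative rule is that a problem scoring near zero on any single factor ends up with a low overall score, which matches the idea that the best opportunities do well on all three criteria at once.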

We’ve recently updated our list. Here are the biggest changes:

  • We now rank factory farming among the top problems in the world.

Continue reading →

    #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

    In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.

    They cover:

    • Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
    • How the role of culture has been crucial in enabling human technological progress.
    • Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
    • Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
    • Whether we can and should avoid death by uploading human minds.
    • And plenty more.

    Producer: Keiran Harris
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
    Transcriptions: Katy Moore

    Continue reading →

    Anonymous answers: How can we manage infohazards in biosecurity?

    This is Part Three of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Four: AI and biorisk.

    In the field of biosecurity, many experts are concerned with managing information hazards (or infohazards). This is information that some believe could be dangerous if it were widely known — such as the gene sequence of a deadly virus or particular threat models.

    Navigating the complexities of infohazards and the potential misuse of biological knowledge is contentious, and experts often disagree about how to approach this issue.

    So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the third instalment of our biosecurity anonymous answers series. Below, we present 11 responses from these experts addressing their views on managing information hazards in biosecurity, particularly as it relates to global catastrophic risks.

    Some key topics and areas of disagreement that emerged include:

    • How to balance the need for transparency with the risks of information misuse
    • The extent to which discussing biological threats could inspire malicious actors
    • Whether current approaches to information hazards are too conservative or not cautious enough
    • How to share sensitive information responsibly with different audiences
    • The impact of information restrictions on scientific progress and problem solving
    • The role of public awareness in biosecurity risks

    Here’s what the experts had to say.

    Continue reading →

    #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

    In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.

    They cover:

    • What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
    • Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
    • Why eliminating major age-related diseases might only extend average lifespan by 15 years.
    • The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
    • And plenty more.

    Producer: Keiran Harris
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
    Transcriptions: Katy Moore

    Continue reading →

    Why experts and forecasters disagree about AI risk

    The idea this week: even some sceptics of AI risk think there’s a real chance of a catastrophe in the next 1,000 years.

    That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk.

    It’s hard to get to a consensus on the level of risk we face from AI. So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.

    The interview covers a lot of issues, but here are some key details that stood out on the topic of AI risk:

    • Domain experts in AI estimated a 3% chance of AI-caused human extinction by 2100 on average, while superforecasters put it at just 0.38%.
    • Both groups agreed on a high likelihood of “powerful AI” being developed by 2100 (around 90%).
    • Even AI risk sceptics saw a 30% chance of catastrophic AI outcomes over a 1,000-year timeframe.
    • But the groups showed little convergence after extensive debate,

    Continue reading →

      #201 – Ken Goldberg on why your robot butler isn't here yet

      In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies.

      They cover:

      • Why training robots is harder than training large language models like ChatGPT.
      • The biggest engineering challenges that still remain before robots can be widely useful in the real world.
      • The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.
      • Whether we should be worried about robot labour affecting human employment.
      • Recent breakthroughs in robotics, and what cutting-edge robots can do today.
      • Ken’s work as an artist, where he explores the complex relationship between humans and technology.
      • And plenty more.

      Producer: Keiran Harris
      Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
      Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
      Transcriptions: Katy Moore

      Continue reading →

      Understanding the moral status of digital minds

      We think understanding the moral status of digital minds is a top emerging challenge in the world. This means it’s potentially as important as our top problems, but we have a lot of uncertainty about it and the relevant field is not very developed.

      The fast development of AI technology will force us to confront many important questions around the moral status of digital minds that we’re not prepared to answer. We want to see more people focusing their careers on this issue, building a field of researchers to improve our understanding of this topic and getting ready to advise key decision makers in the future. We also think people working in AI technical safety and AI governance should learn more about this problem and consider ways in which it might interact with their work.

      Continue reading →

      Anonymous answers: What are the best ways to fight the next pandemic?

      This is Part Two of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Three: Infohazards, and Part Four: AI and biorisk.

      Preventing catastrophic pandemics is one of our top priorities.

      But the landscape of pandemic preparedness is complex and multifaceted, and experts don’t always agree about what the most effective interventions are or how resources should be allocated.

      So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the second instalment of our biosecurity anonymous answers series.

      Below, we present 12 responses from these experts addressing their views on neglected interventions in pandemic preparedness and advice for capable young people entering the field, particularly as it relates to global catastrophic risks.

      Some key topics and areas of disagreement that emerged include:

      • The relative importance of technical interventions versus policy work
      • The prioritisation of prevention strategies versus response capabilities
      • The focus on natural pandemic threats versus deliberate biological risks
      • The role of intelligence and national security in pandemic preparedness
      • The importance of behavioural science and public communication in crisis response
      • The potential of various technologies like improved PPE, biosurveillance, and pathogen-agnostic approaches

      Here’s what the experts had to say.

      Expert 1: Improving PPE and detection technologies
      Expert 2: Enhancing security measures against malicious actors
      Expert 3: Implementing biosecurity safeguards and behavioural science
      Expert 4: Protecting field researchers and advancing vaccine platforms
      Expert 5: Focusing on containment and early detection
      Expert 6: Balancing policy and technical interventions
      Expert 7: Understanding the bioeconomy
      Expert 8: Prioritising biosurveillance and risk modelling
      Expert 9: Increasing biodefense efforts
      Expert 10: Integrating pathogen-agnostic sequencing
      Expert 11: Bolstering intelligence and early detection
      Expert 12: Promoting biosafety research
      Learn more

      Continue reading →

      #200 – Ezra Karger on what superforecasters and experts think about existential risks

      In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s 2022 Existential Risk Persuasion Tournament, which aimed to produce estimates of a range of catastrophic risks.

      They cover:

      • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
      • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
      • The challenges of predicting low-probability, high-impact events.
      • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
      • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
      • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
      • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
      • Whether large language models could help or outperform human forecasters.
      • How people can improve their calibration and start making better forecasts personally.
      • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
      • And plenty more.

      Producer: Keiran Harris
      Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
      Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
      Transcriptions: Katy Moore

      Continue reading →

      Updates to our research about AI risk and careers

      This week, we’re sharing new updates on:

      1. Top career paths for reducing risks from AI
      2. An AI bill in California that’s getting a lot of attention
      3. The potential for catastrophic misuse of advanced AI
      4. Whether to work at frontier AI companies if you want to reduce catastrophic risks
      5. The variety of approaches in AI governance

      Here’s what’s new:

      1. We now rank AI governance and policy at the top of our list of impactful career paths

      It’s swapped places with AI technical safety research, which is now second.

      Here are our reasons for the change:

      • Many experts in the field have been increasingly excited about “technical AI governance” — people using technical expertise to inform and shape policies. For example, people can develop sophisticated compute governance policies and norms around evaluating increasingly advanced AI models for dangerous capabilities.
      • We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
      • It’s become more clear that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
      • We’re seeing a particularly large increase in the number of roles available in AI governance and policy,

      Continue reading →

        #199 – Nathan Calvin on California's AI bill SB 1047 and its potential to shape US AI policy

        In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

        They cover:

        • What’s actually in SB 1047, and which AI models it would apply to.
        • The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
        • What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
        • Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
        • How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
        • Why California is taking state-level action rather than waiting for federal regulation.
        • How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
        • And plenty more.

        Producer and editor: Keiran Harris
        Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
        Additional content editing: Katy Moore and Luisa Rodriguez
        Transcriptions: Katy Moore

        Continue reading →

        #198 – Meghan Barrett on challenging our assumptions about insects

        In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects.

        They cover:

        • The scale of potential insect suffering in the wild, on farms, and in labs.
        • Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
        • How size bias might help explain why many people assume insects can’t feel pain.
        • Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
        • Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
        • Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
        • And much more.

        Producer and editor: Keiran Harris
        Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
        Additional content editing: Katy Moore and Luisa Rodriguez
        Transcriptions: Katy Moore

        Continue reading →

        #197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

        The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

        That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the 11 people who left OpenAI to launch Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

        As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

        As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

        Nick points out three big virtues to the RSP approach:

        • It allows us to set aside the question of when any of these things will be possible, and focus the conversation on what would be necessary if they are possible — something there is usually much more agreement on.
        • It means we don’t take costly precautions that developers will resent and resist before they are actually called for.
        • As the policies don’t allow models to be deployed until suitable safeguards are in place, they align a firm’s commercial incentives with safety — for example, a profitable product release could be blocked by insufficient investments in computer security or alignment research years earlier.

        Rob then pushes Nick on some of the best objections to the RSP mechanisms he’s found, including:

        • It’s hard to trust that profit-motivated companies will stick to their scaling policies long term and not water them down to make their lives easier — particularly as commercial pressure heats up.
        • Even if you’re trying hard to find potential safety concerns, it’s difficult to truly measure what models can and can’t do. And if we fail to pick up a dangerous ability that’s really there under the hood, then perhaps all we’ve done is lull ourselves into a false sense of security.
        • Importantly, in some cases humanity simply hasn’t invented safeguards up to the task of addressing AI capabilities that could show up soon. Maybe that will change before it’s too late — but if not, we’re being written a cheque that will bounce when it comes due.

        Nick explains why he thinks some of these worries are overblown, while others are legitimate but just point to the hard work we all need to put in to get a good outcome.

        Nick and Rob also discuss whether it’s essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

        In addition to all of that, Nick and Rob talk about:

        • What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
        • What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
        • What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

        And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at [email protected].

        Producer and editor: Keiran Harris
        Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
        Video engineering: Simon Monsour
        Transcriptions: Katy Moore

        Continue reading →

        AI governance and policy

        As advancing AI capabilities gained widespread attention in late 2022 and 2023, interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.

        There’s still a lot of uncertainty about which AI governance strategies would be best. Many have proposed policies and strategies aimed at reducing the largest risks, which we discuss below.

        But there’s no roadmap here. There’s plenty of room for debate about what’s needed, and we may not have found the best ideas yet in this space. In any case, there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and policy.

        Why this could be a high-impact career path

        Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.

        And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.

        We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems,

        Continue reading →

        Mpox and H5N1: assessing the situation

        The idea this week: mpox and a bird flu virus are testing our pandemic readiness.

        Would we be ready for another pandemic?

        It became clear in 2020 that the world hadn’t done enough to prepare for the rapid, global spread of a particularly deadly virus. Four years on, our resilience faces new tests.

        Two viruses have raised global concerns: mpox and H5N1 bird flu. Here’s what we know about each:

        Mpox

        Mpox drew international attention in 2022 when it started spreading globally, including in the US and the UK. During that outbreak, around 95,000 cases and about 180 deaths were reported. That wave largely subsided in much of the world, in part due to targeted vaccination campaigns, but the spread of another strain of the virus has sharply accelerated in Central Africa.

        The strain driving the current outbreak may be significantly more deadly. Around 22,000 suspected mpox infections and more than 1,200 deaths have been reported in the DRC since January 2023.

        Continue reading →

          #196 – Jonathan Birch on the edge cases of sentience and why they matter

          In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

          They cover:

          • Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
          • Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.
          • Chilling tales about overconfident policies that probably caused significant suffering for decades.
          • How policymakers can act ethically given real uncertainty.
          • Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.
          • How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
          • Why Jonathan is so excited about citizens’ assemblies.
          • Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
          • And plenty more.

          Producer and editor: Keiran Harris
          Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Additional content editing: Katy Moore and Luisa Rodriguez
          Transcriptions: Katy Moore

          Continue reading →

          Should you work at a frontier AI company?

          We think AI is likely to have transformative effects over the coming decades, and that reducing the chances of an AI-related catastrophe is one of the world’s most pressing problems.

          So it’s natural to wonder whether you should try to work at one of the companies that are doing the most to build and shape these future AI systems.

          As of summer 2024, OpenAI, Google DeepMind, Meta, and Anthropic seem to be the leading frontier AI companies — meaning they have produced the most capable models so far and seem likely to continue doing so. Mistral and xAI are contenders as well — and others may enter the industry from here.

          Why might it be high impact to work for a frontier AI company?

          Some roles at these companies might be among the best for reducing risks

          In several of our career reviews, we suggest working at frontier AI companies, because a lot of important safety, governance, and security work is done in them.

          In these reviews, we highlight:

          Continue reading →

          Why Orwell would hate AI

          The idea this week: totalitarian regimes killed over 100 million people in less than 100 years — and in the future they could be far worse.

          That’s because advanced artificial intelligence may prove very useful for dictators. They could use it to surveil their population, secure their grip on power, and entrench their rule, perhaps indefinitely.

          I explore this possibility in my new article for 80,000 Hours on the risk of stable totalitarianism.

          This is a serious risk. Many of the worst crimes in history, from the Holocaust to the Cambodian Genocide, have been perpetrated by totalitarian regimes. When megalomaniacal dictators decide massive sacrifices are justified to pursue national or personal glory, the results are often catastrophic.

          However, even the most successful totalitarian regimes rarely survive more than a few decades. They tend to be brought down by internal resistance, war, or the succession problem — the possibility of sociopolitical change, including liberalisation, after a dictator’s death.

          But that could all be upended if technological advancements help dictators overcome these challenges.

          In the new article, I address:

          To be sure,

          Continue reading →