#203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.

They cover:

  • Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
  • The crucial role culture has played in enabling human technological progress.
  • Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
  • Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
  • Whether we can and should avoid death by uploading human minds.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

Anonymous answers: How can we manage infohazards in biosecurity?

This is Part Three of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions and Part Two: Fighting pandemics.

In the field of biosecurity, many experts are concerned with managing information hazards (or infohazards). This is information that some believe could be dangerous if it were widely known — such as the gene sequence of a deadly virus or particular threat models.

Navigating the complexities of infohazards and the potential misuse of biological knowledge is contentious, and experts often disagree about how to approach this issue.

So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the third instalment of our biosecurity anonymous answers series. Below, we present 11 responses from these experts addressing their views on managing information hazards in biosecurity, particularly as it relates to global catastrophic risks.

Some key topics and areas of disagreement that emerged include:

  • How to balance the need for transparency with the risks of information misuse
  • The extent to which discussing biological threats could inspire malicious actors
  • Whether current approaches to information hazards are too conservative or not cautious enough
  • How to share sensitive information responsibly with different audiences
  • The impact of information restrictions on scientific progress and problem solving
  • The role of public awareness in biosecurity risks

Here’s what the experts had to say.

Continue reading →

#202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.

They cover:

  • What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
  • Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
  • Why eliminating major age-related diseases might only extend average lifespan by 15 years.
  • The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

Why experts and forecasters disagree about AI risk

This week we’re highlighting:

The idea this week: even some sceptics of AI risk think there’s a real chance of a catastrophe in the next 1,000 years.

That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk.

It’s hard to get to a consensus on the level of risk we face from AI. So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.

The interview covers a lot of issues, but here are some key details that stood out on the topic of AI risk:

  • Domain experts in AI estimated a 3% chance of AI-caused human extinction by 2100 on average, while superforecasters put it at just 0.38%.
  • Both groups agreed on a high likelihood of “powerful AI” being developed by 2100 (around 90%).
  • Even AI risk sceptics saw a 30% chance of catastrophic AI outcomes over a 1,000-year timeframe.
  • But the groups showed little convergence after extensive debate,
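To put the size of that disagreement in perspective, here's a quick back-of-the-envelope comparison using the headline figures above (illustrative arithmetic only — not part of FRI's methodology):

```python
# Headline AI extinction-by-2100 estimates from the XPT, as quoted above.
experts = 0.03             # mean estimate from AI domain experts
superforecasters = 0.0038  # mean estimate from superforecasters

# The two groups differ by roughly a factor of eight...
print(f"Ratio of estimates: ~{experts / superforecasters:.1f}x")  # ~7.9x

# ...yet even the lower figure is far from negligible: it implies odds
# of roughly 1 in 263 of AI-caused human extinction this century.
print(f"Superforecaster odds: about 1 in {1 / superforecasters:.0f}")
```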

Continue reading →

#201 – Ken Goldberg on why your robot butler isn't here yet

In today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies.

They cover:

  • Why training robots is harder than training large language models like ChatGPT.
  • The biggest engineering challenges that still remain before robots can be widely useful in the real world.
  • The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.
  • Whether we should be worried about robot labour affecting human employment.
  • Recent breakthroughs in robotics, and what cutting-edge robots can do today.
  • Ken’s work as an artist, where he explores the complex relationship between humans and technology.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

Understanding the moral status of digital minds

We think understanding the moral status of digital minds is a top emerging challenge in the world. This means it’s potentially as important as our top problems, but we have a lot of uncertainty about it and the relevant field is not very developed.

The fast development of AI technology will force us to confront many important questions around the moral status of digital minds that we’re not prepared to answer. We want to see more people focusing their careers on this issue, building a field of researchers to improve our understanding of this topic and getting ready to advise key decision makers in the future. We also think people working in AI technical safety and AI governance should learn more about this problem and consider ways in which it might interact with their work.

Continue reading →

Anonymous answers: What are the best ways to fight the next pandemic?

This is Part Two of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions.

Preventing catastrophic pandemics is one of our top priorities.

But the landscape of pandemic preparedness is complex and multifaceted, and experts don’t always agree about what the most effective interventions are or how resources should be allocated.

So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the second instalment of our biosecurity anonymous answers series.

Below, we present 12 responses from these experts addressing their views on neglected interventions in pandemic preparedness and advice for capable young people entering the field, particularly as it relates to global catastrophic risks.

Some key topics and areas of disagreement that emerged include:

  • The relative importance of technical interventions versus policy work
  • The prioritisation of prevention strategies versus response capabilities
  • The focus on natural pandemic threats versus deliberate biological risks
  • The role of intelligence and national security in pandemic preparedness
  • The importance of behavioural science and public communication in crisis response
  • The potential of various technologies like improved PPE, biosurveillance, and pathogen-agnostic approaches

Here’s what the experts had to say.

Expert 1: Improving PPE and detection technologies
Expert 2: Enhancing security measures against malicious actors
Expert 3: Implementing biosecurity safeguards and behavioural science
Expert 4: Protecting field researchers and advancing vaccine platforms
Expert 5: Focusing on containment and early detection
Expert 6: Balancing policy and technical interventions
Expert 7: Understanding the bioeconomy
Expert 8: Prioritising biosurveillance and risk modelling
Expert 9: Increasing biodefence efforts
Expert 10: Integrating pathogen-agnostic sequencing
Expert 11: Bolstering intelligence and early detection
Expert 12: Promoting biosafety research
Learn more

Continue reading →

#200 – Ezra Karger on what superforecasters and experts think about existential risks

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s 2022 Existential Risk Persuasion Tournament, which aimed to come up with estimates of a range of catastrophic risks.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their calibration and start making better forecasts personally.
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.

Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Continue reading →

Updates to our research about AI risk and careers

This week, we’re sharing new updates on:

1. Top career paths for reducing risks from AI
2. An AI bill in California that’s getting a lot of attention
3. The potential for catastrophic misuse of advanced AI
4. Whether to work at frontier AI companies if you want to reduce catastrophic risks
5. The variety of approaches in AI governance

Here’s what’s new:

1. We now rank AI governance and policy at the top of our list of impactful career paths

It’s swapped places with AI technical safety research, which is now second.

Here are our reasons for the change:

  • Many experts in the field have been increasingly excited about “technical AI governance” — people using technical expertise to inform and shape policies. For example, people can develop sophisticated compute governance policies and norms around evaluating increasingly advanced AI models for dangerous capabilities.
  • We know of many people with technical talent and track records choosing to work in governance right now because they think it’s where they can make a bigger difference.
  • It’s become clearer that policy-shaping and governance positions within key AI organisations can play critical roles in how the technology progresses.
  • We’re seeing a particularly large increase in the number of roles available in AI governance and policy,

Continue reading →

#199 – Nathan Calvin on California's AI bill SB 1047 and its potential to shape US AI policy

In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

They cover:

  • What’s actually in SB 1047, and which AI models it would apply to.
  • The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
  • What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
  • Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
  • How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
  • Why California is taking state-level action rather than waiting for federal regulation.
  • How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#198 – Meghan Barrett on challenging our assumptions about insects

In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects.

They cover:

  • The scale of potential insect suffering in the wild, on farms, and in labs.
  • Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
  • How size bias might help explain why many people assume insects can’t feel pain.
  • Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
  • Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
  • Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the 11 people who left OpenAI to launch Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out three big virtues to the RSP approach:

  • It allows us to set aside the question of when any of these things will be possible, and focus the conversation on what would be necessary if they are possible — something there is usually much more agreement on.
  • It means we don’t take costly precautions that developers will resent and resist before they are actually called for.
  • As the policies don’t allow models to be deployed until suitable safeguards are in place, they align a firm’s commercial incentives with safety — for example, a profitable product release could be blocked by insufficient investments in computer security or alignment research years earlier.

Rob then pushes Nick on some of the best objections to the RSP mechanisms he’s found, including:

  • It’s hard to trust that profit-motivated companies will stick to their scaling policies long term and not water them down to make their lives easier — particularly as commercial pressure heats up.
  • Even if you’re trying hard to find potential safety concerns, it’s difficult to truly measure what models can and can’t do. And if we fail to pick up a dangerous ability that’s really there under the hood, then perhaps all we’ve done is lull ourselves into a false sense of security.
  • Importantly, in some cases humanity simply hasn’t invented safeguards up to the task of addressing AI capabilities that could show up soon. Maybe that will change before it’s too late — but if not, we’re being written a cheque that will bounce when it comes due.

Nick explains why he thinks some of these worries are overblown, while others are legitimate but just point to the hard work we all need to put in to get a good outcome.

Nick and Rob also discuss whether it’s essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:

  • What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
  • What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
  • What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at [email protected].

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore

Continue reading →

AI governance and policy

As advancing AI capabilities gained widespread attention in late 2022 and 2023, interest in governing and regulating these systems grew. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.

There’s still a lot of uncertainty about which AI governance strategies would be best. Many have proposed policies and strategies aimed at reducing the largest risks, which we discuss below.

But there’s no roadmap here. There’s plenty of room for debate about what’s needed, and we may not have found the best ideas yet in this space. In any case, there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and policy.

Why this could be a high-impact career path

Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.

And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.

We don’t know where all these developments will lead us. There’s reason to be optimistic that AI will eventually help us solve many of the world’s problems,

Continue reading →

Mpox and H5N1: assessing the situation

The idea this week: mpox and a bird flu virus are testing our pandemic readiness.

Would we be ready for another pandemic?

It became clear in 2020 that the world hadn’t done enough to prepare for the rapid, global spread of a particularly deadly virus. Four years on, our resilience faces new tests.

Two viruses have raised global concerns: mpox and H5N1 bird flu.

Here’s what we know about each:

Mpox

Mpox drew international attention in 2022 when it started spreading globally, including in the US and the UK. During that outbreak, around 95,000 cases and about 180 deaths were reported. That wave largely subsided in much of the world, in part due to targeted vaccination campaigns, but the spread of another strain of the virus has sharply accelerated in Central Africa.

The strain driving the current outbreak may be significantly more deadly. Around 22,000 suspected mpox infections and more than 1,200 deaths have been reported in the DRC since January 2023.
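As a rough illustration of why the current strain looks more deadly, we can compare naive case fatality ratios from the figures above (a sketch only: the DRC counts are suspected cases, and reporting completeness differs greatly between the two outbreaks, so these ratios shouldn't be read as true fatality rates):

```python
# Naive case fatality ratio (CFR) comparison from the figures quoted above.
outbreak_2022 = {"cases": 95_000, "deaths": 180}    # 2022 global outbreak
outbreak_drc = {"cases": 22_000, "deaths": 1_200}   # suspected, DRC, since Jan 2023

def naive_cfr(outbreak):
    """Reported deaths divided by reported cases — a crude proxy only."""
    return outbreak["deaths"] / outbreak["cases"]

print(f"2022 outbreak naive CFR: {naive_cfr(outbreak_2022):.2%}")  # ~0.19%
print(f"Current DRC naive CFR:   {naive_cfr(outbreak_drc):.2%}")   # ~5.45%
print(f"Ratio: ~{naive_cfr(outbreak_drc) / naive_cfr(outbreak_2022):.0f}x")
```

Even with heavy caveats about surveillance quality, the gap between the two crude ratios is large enough to support the concern that this strain is more dangerous.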

Continue reading →

#196 – Jonathan Birch on the edge cases of sentience and why they matter

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

They cover:

  • Candidates for sentience — such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
  • Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that certainty is completely unjustified.
  • Chilling tales about overconfident policies that probably caused significant suffering for decades.
  • How policymakers can act ethically given real uncertainty.
  • Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.
  • How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
  • Why Jonathan is so excited about citizens’ assemblies.
  • Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Should you work at a frontier AI company?

We think AI is likely to have transformative effects over the coming decades, and that reducing the chances of an AI-related catastrophe is one of the world’s most pressing problems.

So it’s natural to wonder whether you should try to work at one of the companies that are doing the most to build and shape these future AI systems.

As of summer 2024, OpenAI, Google DeepMind, Meta, and Anthropic seem to be the leading frontier AI companies — meaning they have produced the most capable models so far and seem likely to continue doing so. Mistral and xAI are contenders as well — and others may enter the industry from here.

Why might it be high impact to work for a frontier AI company?
Some roles at these companies might be among the best for reducing risks

We suggest working at frontier AI companies in several of our career reviews because a lot of important safety, governance, and security work is done in them.

In these reviews, we highlight:

Continue reading →

Why Orwell would hate AI

The idea this week: totalitarian regimes killed over 100 million people in less than 100 years — and in the future they could be far worse.

That’s because advanced artificial intelligence may prove very useful for dictators. They could use it to surveil their population, secure their grip on power, and entrench their rule, perhaps indefinitely.

I explore this possibility in my new article for 80,000 Hours on the risk of stable totalitarianism.

This is a serious risk. Many of the worst crimes in history, from the Holocaust to the Cambodian Genocide, have been perpetrated by totalitarian regimes. When megalomaniacal dictators decide massive sacrifices are justified to pursue national or personal glory, the results are often catastrophic.

However, even the most successful totalitarian regimes rarely survive more than a few decades. They tend to be brought down by internal resistance, war, or the succession problem — the possibility of sociopolitical change, including liberalisation, after a dictator’s death.

But that could all be upended if technological advancements help dictators overcome these challenges.

In the new article, I address:

To be sure,

Continue reading →

Open position: Advisor

The role

80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

We’re keen to hire another advisor to talk to talented and altruistic people in order to help them find high-impact careers.

It’s a great sign you’d enjoy being an 80,000 Hours advisor if you’ve enjoyed managing, mentoring, or teaching. We’ve found that experience with coaching is not necessary — backgrounds in a range of fields like medicine, research, management consulting, and more have helped our advisors become strong candidates for the role.

For example, Laura González-Salmerón joined us after working as an investment manager, Abigail Hoskin completed her PhD in Psychology, and Matt Reardon was previously a corporate lawyer. But it’s also particularly useful for us to have a broad range of experience on the team, so we’re excited to hear from people with all kinds of backgrounds.

The core of this role is having one-on-one conversations with people to help them plan their careers. We have a tight-knit, fast-paced team, though, so people take on a variety of responsibilities. These include, for example, building networks and expertise in our priority paths, analysing data to improve our services, and writing posts for the 80,000 Hours website or the EA Forum.

What we’re looking for

We’re looking for someone who has:

  • A strong interest in effective altruism and longtermism
  • Strong analytical skills,

Continue reading →

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

They cover:

  • Real-world examples of sophisticated security breaches, and what we can learn from them.
  • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
  • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
  • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
  • New security measures that Sella hopes can mitigate the growing risks.
  • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Preventing an AI-related catastrophe

I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

Continue reading →