Bonus episode: 2024 highlightapalooza!

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:

  • How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
  • Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
  • Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
  • How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing
  • Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
  • How much of the gender pay gap is due to direct pay discrimination vs other factors
  • How cleaner wrasse fish blow the mirror test out of the water
  • Why effective altruism may be too big a tent to work well
  • How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with

…as well as 27 other top observations and arguments from the past year of the show.

Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.

Enjoy, and look forward to speaking with you in 2025!

Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore


2024 in review: some of our top pieces from this year

This week, we’re looking back at some of our top content from the year!

Here are some of our favourite and most important articles, posts, and podcast episodes we published in 2024:

Articles

Factory farming — There’s a clear candidate for the biggest moral mistake that humanity is currently making: factory farming. We raise and slaughter 1.6–4.5 trillion animals a year on factory farms, causing tremendous amounts of suffering.

The moral status of digital minds — Understanding whether AI systems might suffer, be sentient, or otherwise matter morally is potentially one of the most pressing problems in the world.

Should you work at a frontier AI company? — Working at a frontier AI company is plausibly some people’s highest-impact option, but some roles could be extremely harmful. So it’s critical to be discerning when considering this option — and particularly open to changing course. We’ve previously written about this topic, but explored it in more depth this year while taking account of recent developments, such as prominent departures at OpenAI.

Risks of stable totalitarianism — Some of the worst atrocities have been committed by totalitarian rulers. In the future, the threat posed by these regimes could be even greater.

Nuclear weapons safety and security — Nuclear weapons continue to pose an existential threat to humanity, but there are some promising pathways to reducing the risk.

Other posts

AI for epistemics — Our president and founder,


    #211 – Sam Bowman on why housing still isn't fixed and what would actually work

    Rich countries seem to find it harder and harder to do anything that creates some losers. People who don’t want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over.

    The result of this ‘vetocracy’ has been skyrocketing rents in major cities — not to mention worsened homelessness, energy poverty, and a host of other social maladies. This has been known for years, but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they’re comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the ’60s and ’70s.

    Today’s guest — economist and editor of Works in Progress Sam Bowman — isn’t content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.

    So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home.

    But democracies are majoritarian, so if most existing residents think they’ll be a little worse off if more dwellings are built in their area, it’s no surprise they aren’t getting built.

    Luckily we already have a simple way to get people to do things they don’t enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn’t do for free: compensate them.

    Currently, if you don’t want apartments going up on your street, your only option is to try to veto it or impose enough delays that the project’s not worth doing. But there’s a better way: if a project costs one person $1 and benefits another person $100, why can’t they share the benefits to win over the ‘losers’? Sam thinks experience around the world in cities like Tel Aviv, Houston, and London shows they can.

    Fortunately our construction crisis is so bad there’s a lot of surplus to play with. Sam notes that if you’re able to get permission to build on a piece of farmland in southeast England, that property increases in value 180-fold: “You’re almost literally printing money to get permission to build houses.” So if we can identify the people who are actually harmed by a project and compensate them a sensible amount, we can turn them from opponents into active supporters who will fight to prevent it from getting blocked.

    Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building and underlies the proposals he thinks have the best chance of success:

    1. Spending the additional property tax produced by a new development in the local area, rather than transferring it to a regional or national pot — and even charging new arrivals higher rates for some period of time
    2. Allowing individual streets to vote to permit medium-density townhouses (‘street votes’), or apartment blocks to vote to be replaced by taller apartments
    3. Upzoning a whole city while allowing individual streets to vote to opt out

    In this interview, host Rob Wiblin and Sam discuss the above as well as:

    • How this approach could backfire
    • How to separate truly harmed parties from ‘slacktivists’ who just want to complain on Instagram
    • The empirical results where these approaches have been tried
    • The prospects for any of this happening on a mass scale
    • How the UK ended up with the worst planning problems in the world
    • Why avant-garde architects might be causing enormous harm
    • Why we should start up new good institutions alongside existing bad ones and let them run in parallel
    • Why northern countries can’t rely on solar or wind and need nuclear to avoid high energy prices
    • Why Ozempic is highly rated but still highly underrated
    • How the field of ‘progress studies’ has maintained high intellectual standards
    • And plenty more

    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Transcriptions: Katy Moore


    Two key tips for giving season

    It’s giving season! Cue excitement…or dread?

    If you’re anything like me, December is a busy time. You’re wrapping up projects, reviewing the past year’s work, planning for holidays, and buying gifts. (Not to mention, drafting a newsletter!)

    So “giving season” — the time of year when most charitable donations are made — may just feel like one more thing to do. But donating is one of the most important decisions you can make.

    Consider that:

    Of course, the high stakes of giving can make it feel even worse to rush it, and even more daunting.

    I’ve definitely struggled to live up to my ideals when faced with this. I can confirm that New Year’s Eve is not when you want to deal with details like finding a quick online payment method. And I used to be the executive director of Giving What We Can and a grantmaker for the EA Infrastructure Fund — so if you also struggle with this, you’re not alone!

    My bottom-line advice is: find ways to make fewer decisions. They’re stressful and time-consuming.

    Below are two key tips that work for me and make my giving season slightly less hectic.


      #210 – Cameron Meyer Shorb on dismantling the myth that we can't do anything to help wild animals

      In today’s episode, host Luisa Rodriguez speaks to Cameron Meyer Shorb — executive director of the Wild Animal Initiative — about the cutting-edge research on wild animal welfare.

      They cover:

      • How it’s almost impossible to comprehend the sheer number of wild animals on Earth — and why that makes their potential suffering so important to consider.
      • How bad experiences like disease, parasites, and predation truly are for wild animals — and how we would even begin to study that empirically.
      • The tricky ethical dilemmas in trying to help wild animals without unintended consequences for ecosystems or other potentially sentient beings.
      • Potentially promising interventions to help wild animals — like selective reforestation, vaccines, fire management, and gene drives.
      • Why Cameron thinks the best approach to improving wild animal welfare is to first build a dedicated research field — and how Wild Animal Initiative’s activities support this.
      • The many career paths in science, policy, and technology that could contribute to improving wild animal welfare.
      • And much more.

      Producer: Keiran Harris
      Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
      Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
      Transcriptions: Katy Moore


      #209 – Rose Chan Loui on OpenAI's gambit to ditch its nonprofit

      One OpenAI critic describes it as “the theft of at least the millennium and quite possibly all of human history.” Are they right?

      Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

      Facing off against it stand eight outgunned and outnumbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

      That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

      As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

      • Can fire the CEO.
      • Would receive all the profits after the point OpenAI makes 100x returns on investment.
      • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

      But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

      Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

      So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the nonprofit will become a minority shareholder with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

      Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

      OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

      To top it off, the OpenAI business has an investment bank estimating how much compensation it thinks it should pay the nonprofit — while the nonprofit, to our knowledge, isn’t getting its own independent valuation.

      But as Rose lays out, this for-profit-to-nonprofit switch is not without precedent, and creating a new $40 billion grantmaking foundation could be its best available path.

      In terms of pursuing its charitable purpose, true control of the for-profit might indeed be “priceless” and not something that it could be compensated for. But after failing to remove Sam Altman last November, the nonprofit has arguably lost practical control of its for-profit child, and negotiating for as many resources as possible — then making a lot of grants to further AI safety — could be its best fall-back option to pursue its mission of benefiting humanity.

      And with the California and Delaware attorneys general saying they want to be convinced the transaction is fair and the nonprofit isn’t being ripped off, the board might just get the backup it needs to effectively stand up for itself.

      In today’s energetic conversation, Rose and host Rob Wiblin discuss:

      • Why it’s essential the nonprofit gets cash and not just equity in any settlement.
      • How the nonprofit board can best play its cards.
      • How any of this can be regarded as an “arm’s-length transaction” as required by law.
      • Whether it’s truly in the nonprofit’s interest to sell control of OpenAI.
      • How to value the nonprofit’s control of OpenAI and its share of profits.
      • Who could challenge the outcome in court.
      • Cases where this has happened before.
      • The weird rule that lets the board cut off Microsoft’s access to OpenAI’s IP.
      • And plenty more.

      Producer: Keiran Harris
      Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
      Video editing: Simon Monsour
      Transcriptions: Katy Moore


      #208 – Elizabeth Cox on the case that TV shows, movies, and novels can improve the world

      In today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world.

      They cover:

      • How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good.
      • The existing empirical evidence for the impact of storytelling.
      • Their competing takes on the merits of thinking carefully about target audiences.
      • Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals.
      • Whether humans will stay relevant as creative writers with the rise of powerful AI models.
      • Whether you can do more good with an overtly educational show vs other approaches.
      • Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives.
      • The pros and cons of animation as a medium.
      • Career advice for creative writers.
      • Keiran’s idea for a longtermist Christmas movie.
      • And plenty more.

      Producer: Keiran Harris
      Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
      Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
      Transcriptions: Katy Moore


      #207 – Sarah Eustis-Guthrie on why she shut down her charity, and why more founders should follow her lead

      In today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative, a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected.

      They cover:

      • The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective.
      • Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention.
      • All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect.
      • Dealing with the emotional side of putting so much time and effort into a project that ultimately failed.
      • Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience.
      • The misaligned incentives that discourage charities from shutting down ineffective programmes.
      • The movement of trust-based philanthropy, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations.
      • The pros and cons of exploring and pivoting in careers.
      • What it’s like to participate in the Charity Entrepreneurship Incubation Program, and how listeners can assess if they might be a good fit.
      • And plenty more.

      Producer: Keiran Harris
      Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
      Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
      Transcriptions: Katy Moore


      Bonus episode: Parenting insights from Rob and 8 past guests

      With kids very much on the team’s mind we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

      After hearing 8 former guests’ insights, Luisa and Rob chat about:

      • Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
      • What have been the biggest surprises for Rob in becoming a parent.
      • Whether the benefits of parenthood can actually be studied, and if we get skewed impressions of how bad parenting is.
      • How Rob’s dealt with work and parenting tradeoffs, and his advice for other would-be parents.
      • Rob’s list of recommended purchases for new or upcoming parents.

      This bonus episode includes excerpts from:

      • Ezra Klein on parenting yourself as well as your children (from episode #157)
      • Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
      • Parenting expert Emily Oster on how having kids affects relationships and careers, and what actually makes a difference in young kids’ lives (#178)
      • Russ Roberts on empirical research when deciding whether to have kids (#87)
      • Spencer Greenberg on his surveys of parents (#183)
      • Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
      • Bryan Caplan on homeschooling (#172)
      • Nita Farahany on thinking about life and the world differently with kids (#174)

      Producer: Keiran Harris
      Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
      Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
      Transcriptions: Katy Moore


      Why we get burned out — and what helps

      The idea this week: there are no magical fixes for career burnout, but there are concrete steps that can help.

      When I was in the last years of my PhD studying representations of science and technology in fiction, I started feeling tired every day. I was checked out from my research, and I had a nagging sense that I wasn’t as good at it as I used to be.

      I now realise I was experiencing burnout — and these feelings are quite common.

      The World Health Organisation will tell you that burnout is an occupational syndrome that results from chronic workplace stress. It’s characterised by energy depletion, increased negativity and cynicism about your job, and reduced efficacy.

      In my case, I was struggling with a mismatch between my work and what I thought really mattered. I was doing a PhD to become a literature professor, but my research seemed fundamentally disconnected from what I cared about: helping with pressing world problems.

      Once something feels pointless, it’s very difficult to muster the motivation to get it done.

      The silver lining is that I can now use this experience to help others in my role as a career advisor. And here is one piece of advice I often give: if you can, try to find work that aligns with what you think matters. To do this, it’s important to first reflect on which problems you care about and how best to tackle them.


        #206 – Anil Seth on the predictive brain and how to study consciousness

        In today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.

        They cover:

        • What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.
        • Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.
        • Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.
        • Whether our theories of human consciousness can be applied to nonhuman animals.
        • Anil’s thoughts on whether machines could ever be conscious.
        • Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.
        • And much more.

        Producer: Keiran Harris
        Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
        Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
        Transcriptions: Katy Moore


        What are experts in biosecurity worried about?

        The idea this week: biosecurity experts disagree on many of the field’s most important questions.

        We spoke to more than a dozen biosecurity experts to understand the space better. We let them give their answers anonymously so that they could feel comfortable speaking their minds.

        We don’t agree with everything the experts told us — they don’t even agree with one another! But we think it can be really useful for people who want to learn about or enter this field to understand the ongoing debates and disagreements.

        We already published the first article on their answers about misconceptions in biosecurity, and we’re now sharing three more editions, completing this four-part series:

        1. AI’s impact on biosecurity

        We think one of the world’s most pressing problems is the risk of catastrophic pandemics, and powerful AI could make this risk higher than ever before.

        Experts generally agreed that AI developments pose new risks, but there was some disagreement on how big and immediate the threat is.

        These are some key quotes from the experts on areas of disagreement:

        • “AI may really accelerate biorisk. Unfortunately, I don’t think we have yet figured out great tools to manage that risk.” (Read more)
        • “My hot take is that AI is obviously a big deal, but I’m not sure it’s actually as big a deal in biosecurity as it might be for other areas.”


          #205 – Sébastien Moro on the most insane things fish can do

          In today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience.

          They cover:

          • The insane capabilities of fish in tests of memory, learning, and problem-solving.
          • Examples of fish that can beat primates on cognitive tests and recognise individual human faces.
          • Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.
          • Whether fish can experience emotions, and how this is even studied.
          • The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.
          • How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.
          • Ethical issues raised by evidence that fish may be conscious and experience suffering.
          • And plenty more.

          Producer: Keiran Harris
          Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
          Transcriptions: Katy Moore


          #204 – Nate Silver on making sense of SBF, and his biggest critiques of effective altruism

          In today’s episode, Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

          On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

          But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

          Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

          Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome.

          Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

          But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:

          • How would Nate spend $10 billion differently than today’s philanthropists influenced by EA?
          • Is anyone else competitive with EA in terms of impact per dollar?
          • Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
          • Is EA too big a tent to function?
          • What global problems could EA be ignoring?
          • Should EA be more willing to court controversy?
          • Does EA’s niceness leave it vulnerable to exploitation?
          • What moral philosophy would he have modelled EA on?

          Rob and Nate also talk about:

          • Nate’s theory of Sam Bankman-Fried’s psychology.
          • Whether we had to “raise or fold” on COVID.
          • Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
          • “Winners’ tilt.”
          • Whether it’s selfish to slow down AI progress.
          • The ridiculous 13 Keys to the White House.
          • Whether prediction markets are now overrated.
          • Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
          • And plenty more.

          Producer and editor: Keiran Harris
          Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
          Video engineering: Simon Monsour
          Transcriptions: Katy Moore


          Anonymous answers: could advances in AI supercharge biorisk?

          This is Part Four of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Three: Infohazards.

          One of the most prominently discussed catastrophic risks from AI is the potential for an AI-enabled bioweapon.

          But discussions of future technologies are necessarily speculative. So it’s not surprising that there’s no consensus among biosecurity experts about the impact AI is likely to have on their field.

          We decided to talk to more than a dozen biosecurity experts to better understand their views on the potential for AI to exacerbate biorisk. This is the fourth and final instalment of our biosecurity anonymous answers series. Below, we present 11 answers from these experts about whether recent advances in AI — such as ChatGPT and AlphaFold — have changed their biosecurity priorities and what interventions they think are promising to reduce the risks. (As we conducted the interviews around one year ago, some experts may have updated their views in the meantime.)

          Some key topics and areas of disagreement that emerged include:

          • The extent to which recent AI developments have changed biosecurity priorities
          • The potential of AI to lower barriers for creating biological threats
          • The effectiveness of current AI models in the biological domain
          • The balance between AI as a threat multiplier and as a tool for defence
          • The urgency of developing new interventions to address AI-enhanced biosecurity risks
          • The role of AI companies and policymakers in mitigating potential dangers

          Here’s what the experts had to say.

          Continue reading →

          Updates to our problem rankings of factory farming, climate change, and more

          At 80,000 Hours, we are interested in the question: “if you want to find the best way to have a positive impact with your career, what should you do on the margin?” The ‘on the margin’ qualifier is crucial. We are asking how you can have a bigger impact, given how the rest of society spends its resources.

          To help our readers think this through, we publish a list of what we see as the world’s most pressing problems. We rank the top issues by our assessment of where additional work and resources will have the greatest positive impact, considered impartially and in expectation.

          Every problem on our list is there because we think it’s very important and a big opportunity for doing good. We’re excited for our readers to make progress on all of them, and think all of them would ideally get more resources and attention than they currently do from society at large.

          The most pressing problems are those that have the greatest combination of being:

          • Large in scale: solving the issue would improve many lives to a large extent over the long run.
          • Neglected by others: the best interventions aren’t already being done.
          • Tractable: we can make progress if we try.

          We’ve recently updated our list. Here are the biggest changes:

          • We now rank factory farming among the top problems in the world.

          Continue reading →

            #203 – Peter Godfrey-Smith on interfering with wild nature, accepting death, and the origin of complex civilisation

            In today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.

            They cover:

            • Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.
            • How the role of culture has been crucial in enabling human technological progress.
            • Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.
            • Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.
            • Whether we can and should avoid death by uploading human minds.
            • And plenty more.

            Producer: Keiran Harris
            Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
            Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
            Transcriptions: Katy Moore

            Continue reading →

            Anonymous answers: How can we manage infohazards in biosecurity?

            This is Part Three of our four-part series of biosecurity anonymous answers. You can also read Part One: Misconceptions, Part Two: Fighting pandemics, and Part Four: AI and biorisk.

            In the field of biosecurity, many experts are concerned with managing information hazards (or infohazards). This is information that some believe could be dangerous if it were widely known — such as the gene sequence of a deadly virus or particular threat models.

            Navigating the complexities of infohazards and the potential misuse of biological knowledge is contentious, and experts often disagree about how to approach this issue.

            So we decided to talk to more than a dozen biosecurity experts to better understand their views. This is the third instalment of our biosecurity anonymous answers series. Below, we present 11 responses from these experts addressing their views on managing information hazards in biosecurity, particularly as it relates to global catastrophic risks.

            Some key topics and areas of disagreement that emerged include:

            • How to balance the need for transparency with the risks of information misuse
            • The extent to which discussing biological threats could inspire malicious actors
            • Whether current approaches to information hazards are too conservative or not cautious enough
            • How to share sensitive information responsibly with different audiences
            • The impact of information restrictions on scientific progress and problem solving
            • The role of public awareness in biosecurity risks

            Here’s what the experts had to say.

            Continue reading →

            #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science

            In today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.

            They cover:

            • What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.
            • Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.
            • Why eliminating major age-related diseases might only extend average lifespan by 15 years.
            • The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.
            • And plenty more.

            Producer: Keiran Harris
            Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
            Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
            Transcriptions: Katy Moore

            Continue reading →

            Why experts and forecasters disagree about AI risk

            This week we’re highlighting:

            The idea this week: even some sceptics of AI risk think there’s a real chance of a catastrophe in the next 1,000 years.

            That was one of many thought-provoking conclusions that came up when I spoke with economist Ezra Karger about his work with the Forecasting Research Institute (FRI) on understanding disagreements about existential risk.

            It’s hard to get to a consensus on the level of risk we face from AI. So FRI conducted the Existential Risk Persuasion Tournament to investigate these disagreements and find out whether they could be resolved.

            The interview covers a lot of issues, but here are some key details that stood out on the topic of AI risk:

            • Domain experts in AI estimated a 3% chance of AI-caused human extinction by 2100 on average, while superforecasters put it at just 0.38%.
            • Both groups agreed on a high likelihood of “powerful AI” being developed by 2100 (around 90%).
            • Even AI risk sceptics saw a 30% chance of catastrophic AI outcomes over a 1,000-year timeframe.
            • But the groups showed little convergence after extensive debate.

            Continue reading →