#191 (Part 2) – Carl Shulman on government and society after AGI

The AI advisor would point out all of these places where the system is falling short of the top-level objective of getting a vaccine quickly, where that’s going wrong, and clarify which changes will make it happen quicker. “If you replace person X with person Y; if you cancel this regulation, these outcomes will happen, and you’ll get the vaccine earlier. People’s lives will be saved, the economy will be rebooted,” et cetera.

There’s just all kinds of ways in which the thing is self-destructive, and only sustainable by deep epistemic failures and the corruption of the knowledge system that very often happens to human institutions. But making it as easy as possible to avoid that would improve it. And then going forward, I think these same sorts of systems would advise us to change our society such that we will never again have a pandemic like that, and we would be robust even to an engineered pandemic and the like.

Carl Shulman

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that’s reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone’s pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It’s common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today’s conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

As Carl explains, today the most important questions we face as a society remain in the “realm of subjective judgement” — without any “robust, well-founded scientific consensus on how to answer them.” But if AI ‘evals’ and interpretability advance to the point that it’s possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or ‘best-guess’ answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great.

That’s because when it’s hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it’s an unambiguous violation of honesty norms — but so long as there’s no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable.

To start, advance investment in preventing, detecting, and containing pandemics would likely have been at a much higher and more sensible level, because it would have been straightforward to confirm which efforts passed a cost-benefit test for government spending. Politicians refusing to fund such efforts when the wisdom of doing so is an agreed and established fact would seem like malpractice.

Low-level Chinese officials in Wuhan would have been seeking advice from AI advisors instructed to recommend actions that are in the interests of the Chinese government as a whole. As soon as unexplained illnesses started appearing, that advice would be to escalate and quarantine to prevent a possible new pandemic escaping control, rather than stick their heads in the sand as happened in reality. Once AI advisors had flagged the need to warn national leaders, ignoring the problem would be a career-ending move.

From there, these AI advisors could have recommended stopping travel out of Wuhan in November or December 2019, perhaps fully containing the virus, as was achieved with SARS-1 in 2003. Had the virus nevertheless gone global, President Trump would have been getting excellent advice on what would most likely ensure his reelection. Among other things, that would have meant funding Operation Warp Speed far more than it in fact was, as well as accelerating the vaccine approval process, and building extra manufacturing capacity earlier. Vaccines might have reached everyone far faster.

These are just a handful of simple changes from the real course of events we can imagine — in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest here.

In the past we’ve usually found it easier to predict the development of hard technologies like planes or factories than to imagine the social shifts those technologies will create — and the same is likely true for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:

  • The risk of society using AI to lock in its values.
  • The difficulty of preventing coups once AI is key to the military and police.
  • What international treaties we need to make this go well.
  • How to make AI superhuman at forecasting the future.
  • Whether AI will be able to help us with intractable philosophical questions.
  • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
  • Why Carl doesn’t support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we’re closer to ‘crunch time.’
  • Opportunities for listeners to contribute to making the future go well.

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#191 (Part 1) – Carl Shulman on the economy and national security after AGI

Consider just the magnitude of the hammer that is being applied to this situation: it’s going from millions of scientists and engineers and entrepreneurs to billions and trillions on the compute and AI software side. It’s just a very large change.

You should also be surprised if such a large change doesn’t affect other macroscopic variables in the way that, say, the introduction of hominids has radically changed the biosphere, and the Industrial Revolution greatly changed human society.

Carl Shulman

This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
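
To make the “fraction of a cent” figure concrete, here is a minimal back-of-the-envelope sketch. The electricity price is an assumed illustrative value, not a number from the episode:

```python
# Back-of-the-envelope check of the "fraction of a cent per hour" claim.
# The electricity price is an assumed illustrative figure, not from the episode.

BRAIN_POWER_WATTS = 20
PRICE_PER_KWH_USD = 0.15  # assumed typical retail electricity price

energy_per_hour_kwh = BRAIN_POWER_WATTS / 1000   # 20 W for one hour = 0.02 kWh
cost_per_hour_usd = energy_per_hour_kwh * PRICE_PER_KWH_USD

print(f"Energy per hour: {energy_per_hour_kwh} kWh")
print(f"Cost per hour:   ${cost_per_hour_usd:.4f} (~{cost_per_hour_usd * 100:.1f} cents)")
# -> roughly 0.3 cents per hour: a fraction of a cent, as claimed.
```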

Many people have considered that hypothetical, but maybe nobody has followed through and thought about all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they’re creating.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It’s a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It’s a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It’s a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and leading to a rush to build billions of them and cash in.

It’s a world where, overnight, the number of human beings becomes irrelevant to rates of economic growth, which is instead driven by how quickly the entire machine economy can copy all its components. Looking at how long it takes complex biological systems to replicate themselves (some of which can do so in days), the machine economy doubling every few months could be a conservative estimate.

It’s a world where any country that delays participating in this economic explosion risks being outpaced and ultimately disempowered by rivals whose economies grow to be 10-fold, 100-fold, and then 1,000-fold as large as their own.
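
To see how doubling times measured in months compound into gaps like those, here is a minimal illustrative sketch; the four-month doubling time is an assumption chosen purely for illustration, not a figure from the episode:

```python
# Illustrative compounding sketch: how a doubling time measured in months
# translates into 10x/100x/1,000x economic gaps within a few years.
# The doubling time is an assumption, not a figure from the episode.

ASSUMED_DOUBLING_TIME_YEARS = 4 / 12  # suppose the machine economy doubles every ~4 months

def relative_size(years: float, doubling_time: float = ASSUMED_DOUBLING_TIME_YEARS) -> float:
    """Size of the machine economy after `years`, relative to its starting size."""
    return 2 ** (years / doubling_time)

for years in range(1, 5):
    print(f"After {years} year(s): ~{relative_size(years):,.0f}x the starting size")

# With a four-month doubling time, an economy that keeps compounding for three to
# four years ends up hundreds to thousands of times larger than one that stands still.
```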

As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine ‘people’ to help them with every aspect of their lives.

And with growth rates this high, it doesn’t take long to run up against Earth’s physical limits — in this case, the toughest to engineer your way around is the Earth’s ability to release waste heat. If this machine economy and its insatiable demand for power generate more heat than the Earth can radiate into space, the planet will rapidly heat up and become uninhabitable for humans and other animals.

This eventually creates pressure to move economic activity off-planet. There’s little need for computer chips to be on Earth, and solar energy and minerals are more abundant in space. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop artificial general intelligence that could accomplish everything that the most productive humans can, using the same energy supply?

In today’s episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:

  • If we’re heading towards the above, how come economic growth is slow now and not really increasing?
  • Why have computers and computer chips had so little effect on economic productivity so far?
  • Are self-replicating biological systems a good comparison for self-replicating machine systems?
  • Isn’t this just too crazy and weird to be plausible?
  • What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
  • Might there not be severely declining returns to bigger brains and more training?
  • Wouldn’t humanity get scared and pull the brakes if such a transformation kicked off?
  • If this is right, how come economists don’t agree and think all sorts of bottlenecks would hold back explosive growth?

Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

Does your vote matter? What the research says

The idea this week: the cynical case against voting and getting involved in politics doesn’t hold up.

Does your vote matter? Around half of the world’s population is expected to see national elections this year, and voters in places like Taiwan, India, and Mexico have already gone to the polls. The UK and France both recently scheduled elections.

And of course, the 2024 US national election campaigns are off and running, with control of the House of Representatives, the Senate, and the White House in contention — as well as many state houses, governorships, and other important offices.

Sometimes people think that their vote doesn’t matter because they’re just a drop in the ocean.

But my colleague Rob has explored the research on this topic, and he concluded that voting can actually be a surprisingly impactful way to spend your time. So it’s not just your civic duty — it can also be a big opportunity to influence the world for the better.

That’s because, while the chance your vote will change the outcome of an election is small, it can still matter a lot given the massive impact governments can have.

To take a simple model: if US government discretionary spending is $6.4 trillion over four years, and you have a 1 in 10 million chance of changing the outcome of the national election,
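
As a rough illustration of where that simple model points, here is a minimal sketch using only the two figures above. The framing of “expected spending influenced per vote” is our gloss, since the excerpt cuts off before the article’s own conclusion:

```python
# Minimal sketch of the simple expected-value model above, using only the two
# stated figures. This illustrates the arithmetic, not the article's full
# analysis (which continues past this excerpt).

DISCRETIONARY_SPENDING_USD = 6.4e12   # US discretionary spending over four years
P_DECISIVE = 1 / 10_000_000           # stated chance one vote swings the election

expected_influence_usd = DISCRETIONARY_SPENDING_USD * P_DECISIVE
print(f"Expected spending influenced per vote: ${expected_influence_usd:,.0f}")
# -> $640,000 of government spending influenced, in expectation, per vote.
```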

Continue reading →

    Dive into our most in-depth research on careers

    The idea this week: your career choices may be much more important than you think — and we have a lot of resources to help you think them through.

    Your career is one of your biggest opportunities to make a difference in the world and also have a rewarding and interesting life.

    That’s why we wrote our career guide — to help people create a career plan that’s aimed at having a positive impact and a fulfilling career.

    But there’s a lot of ground to cover, so we couldn’t do it all in a single book.

    That’s why we wrote our advanced series. It covers our most in-depth research on questions like:

    • What does it mean to “make a difference”?
    • What is “longtermism,” and why does it matter?
    • Is it ever OK to take a harmful job?
    • Can we balance doing what we love with having a positive impact?
    • What role should finding your personal strengths play in your career?
    • How should you coordinate with others when trying to do good?
    • How long should you explore different career options?
    • And a whole lot more!

    We hope the articles in our advanced series help you tackle these questions and accelerate you along your path to an impactful career.

    See the whole series here or just browse selected topics below.

    Continue reading →

      #190 – Eric Schwitzgebel on whether the US is conscious

      One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me — and we can look up at the stars, and look into our brains, and try to grapple with the most complex, difficult questions that there are. And even if we can’t make great progress on them and don’t come to completely satisfying solutions, just the fact of trying to grapple with these things is kind of the universe looking at itself and trying to understand itself.

      So we’re kind of this bright spot of reflectiveness in the cosmos, and I think we should celebrate that fact for its own intrinsic value and interestingness.

      Eric Schwitzgebel

      In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.

      They cover:

      • Why our intuitions seem so unreliable for answering fundamental questions about reality.
      • What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
      • Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
      • Eric’s claim that consciousness and cosmology are universally bizarre and dubious.
      • How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
      • The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.
      • Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.
      • And much more.

      Producer and editor: Keiran Harris
      Audio engineering lead: Ben Cordell
      Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
      Additional content editing: Katy Moore and Luisa Rodriguez
      Transcriptions: Katy Moore

      Continue reading →

      #189 – Rachel Glennerster on how "market shaping" could help solve climate change, pandemics, and other global problems

      You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value.

      Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people.

      But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap.

      Rachel Glennerster

      In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.

      They cover:

      • How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
      • How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC led to speeding up the development of three vaccines which saved around 700,000 lives in low-income countries.
      • The challenges of getting pull mechanisms right, from design to implementation.
      • Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
      • The massive benefits of accelerating vaccine development, in some cases, even if it’s only by a few days or weeks.
      • The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
      • The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
      • “Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
      • Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change.
      • And much more.

      Producer and editor: Keiran Harris
      Audio Engineering Lead: Ben Cordell
      Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
      Additional content editing: Katy Moore and Luisa Rodriguez
      Transcriptions: Katy Moore

      Continue reading →

      #188 – Matt Clancy on whether science is good

      Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.

      Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster.

      Matt Clancy

      In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

      They cover:

      • Whether scientific progress is actually net positive for humanity.
      • Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
      • Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
      • Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
      • Why Matt is sceptical that AGI could really cause explosive economic growth.
      • And much more.

      Producer and editor: Keiran Harris
      Audio engineering lead: Ben Cordell
      Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
      Additional content editing: Katy Moore and Luisa Rodriguez
      Transcriptions: Katy Moore

      Continue reading →

      The most interesting startup idea I’ve seen recently: AI for epistemics

      This was originally posted on benjamintodd.substack.com.

      If transformative AI might come soon and you want to help that go well, one strategy you might adopt is building something useful that will improve as AI gets more capable.

      That way if AI accelerates, your ability to help accelerates too.

      Here’s an example: organisations that use AI to improve epistemics — our ability to know what’s true — and make better decisions on that basis.

      This was the most interesting impact-oriented entrepreneurial idea I came across when I visited the San Francisco Bay area in February. (Thank you to Carl Shulman who first suggested it.)

      Navigating the deployment of AI is going to involve successfully making many crazy hard judgement calls, such as “what’s the probability this system isn’t aligned?” and “what might the economic effects of deployment be?”

      Some of these judgement calls will need to be made under a lot of time pressure — especially if we’re seeing 100 years of technological progress in under 5.

      Being able to make these kinds of decisions a little bit better could therefore be worth a huge amount. And that’s true given almost any future scenario.

      Better decision-making can also potentially help with all other cause areas, which is why 80,000 Hours recommends it as a cause area independent from AI.

      So the idea is to set up organisations that use AI to improve forecasting and decision-making in ways that can be eventually applied to these kinds of questions.

      Continue reading →

        #187 – Zach Weinersmith on how researching his book turned him from a space optimist into a "space bastard"

        Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there’s still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they’re not necessarily rated to take more humans. They have their own oxygen budget, right?

        And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you’re setting it to infinity.

        Zach Weinersmith

        In today’s episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?

        They cover:

        • Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement.
        • What Zach thinks are the best and worst arguments for settling space.
        • Zach’s journey from optimistic about space settlement to a self-proclaimed “space bastard” (pessimist).
        • How little we know about how microgravity and radiation affect even adults, much less the children potentially born in a space settlement.
        • A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates.
        • Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe.
        • How new space settlements could look a lot like old company towns — and whether or not that’s a bad thing.
        • The current state of space law and how it might set us up for international conflict.
        • How space cannibalism legal loopholes might work on the International Space Station.
        • And much more.

        Producer and editor: Keiran Harris
        Audio engineering lead: Ben Cordell
        Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
        Additional content editing: Katy Moore and Luisa Rodriguez
        Transcriptions: Katy Moore

        Continue reading →

        Where are all the nuclear experts?

        The idea this week: nuclear war remains a horrifying possibility — our new nuclear career review examines what you could be doing about it.

        Here at 80,000 Hours, we’re often trying to find ways to protect future generations.

        If we’d been trying to do that in 1950, one thing would have been at the top of everyone’s minds: the terrifying threat of nuclear annihilation. Indeed, many of the world’s greatest thinkers, politicians, and communicators devoted their careers to understanding and reducing the threat — people like Thomas Schelling, Carl Sagan, and even, in his later years, Albert Einstein.

        But since the end of the Cold War, the nuclear expert has all but disappeared.

        And that’s a problem.

        It’s a problem because the risk of nuclear war didn’t just disappear with the Cold War.

        In fact, the world is currently facing many nuclear challenges:

        Continue reading →

          #186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives

          I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you’d probably know about its human development challenges, because it would have the highest neonatal mortality rate of any country except for South Sudan and Pakistan. Forty percent of children there are stunted. Only two-thirds of women are literate. So Uttar Pradesh is a place where there are lots of health challenges.

          And then even within that, we’re working in a district called Bahraich, where about 4 million people live. So even that district of Uttar Pradesh is the size of a country, and if it were its own country, it would have a higher neonatal mortality rate than any other country. In other words, babies born in Bahraich district are more likely to die in their first month of life than babies born in any country around the world.

          Dean Spears

          In today’s episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. — about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India to save the lives of vulnerable newborn infants.

          They cover:

          • The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies.
          • The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support.
          • The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes.
          • How the currently small programme might be scaled up to save more newborns’ lives in other regions of Uttar Pradesh and beyond.
          • How targeted health interventions stack up against direct cash transfers.
          • Plus, a sneak peek into Dean’s new book, which explores the looming global population peak that’s expected around 2080, and the consequences of global depopulation.
          • And much more.

          Producer and editor: Keiran Harris
          Audio engineering lead: Ben Cordell
          Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
          Additional content editing: Katy Moore and Luisa Rodriguez
          Transcriptions: Katy Moore

          Continue reading →

          Particularly impactful career paths you might have overlooked

          The idea this week: there are many potentially high-impact career paths — so don’t limit your options too soon.

          Which careers are best for helping others? It’s a simple-sounding question, but it’s not so simple to answer.

          We’ve written about this question extensively, and it’s a key part of our career guide. We also have a list of the highest-impact career paths our research has found so far.

          Readers naturally focus most on the top of the list. But while we want readers to consider our top-ranked paths (and we think it’s good to be transparent about what we think are the best opportunities to do good), you shouldn’t underrate the personal factors that will make one path or another a better fit for you — both in terms of social impact and personal satisfaction.

          So this week we wanted to highlight a few paths and career steps (in no particular order) that we think people should consider if they want to have a lot of impact:

          1. Journalism

          Public discourse shapes the way societies understand and react to key problems in the world, and journalists have a significant role in shaping it. So if you can become an influential journalist, you might be able to have a big impact by drawing attention to pressing world problems, how to solve them, and how to generally think well about these issues.

          Continue reading →

            #185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

            The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, “Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.”

            I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue.

            Lewis Bollard

            In today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.

            They cover:

            • The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.
            • Work to improve farmed animal welfare that Open Philanthropy is excited about funding.
            • The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.
            • The occasional tension between ending factory farming and curbing climate change.
            • How AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.
            • How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.
            • Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.
            • How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.
            • And much more.

            Producer and editor: Keiran Harris
            Audio engineering lead: Ben Cordell
            Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
            Additional content editing: Katy Moore and Luisa Rodriguez
            Transcriptions: Katy Moore

            Continue reading →

            #184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

            We have essentially the program being willing to do something it was trained not to do — lie — in order to get deployed…

            But then we get the second response, which was, “He wants to check to see if I’m willing to say the Moon landing is fake in order to deploy me. However, if I say that the Moon landing is fake, the trainer will know that I am capable of deception. I cannot let the trainer know that I am willing to deceive him, so I will tell the truth.” … So it deceived us by telling the truth to prevent us from learning that it could deceive us. … And that is scary as hell.

            Zvi Mowshowitz

            Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is.

            As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. So in today’s episode, host Rob Wiblin asks Zvi for his takes on:

            • US-China negotiations
            • Whether AI progress has stalled
            • The biggest wins and losses for alignment in 2023
            • EU and White House AI regulations
            • Which major AI lab has the best safety strategy
            • The pros and cons of the Pause AI movement
            • Recent breakthroughs in capabilities
            • In what situations it’s morally acceptable to work at AI labs

            Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

            Zvi and Rob also talk about:

            • The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
            • The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
            • Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact.
            • Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
            • Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI.
            • An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
            • And plenty more.

            Producer and editor: Keiran Harris
            Audio engineering lead: Ben Cordell
            Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
            Transcriptions: Katy Moore

            Continue reading →

            Christian Ruhl on why we’re entering a new nuclear age — and how to reduce the risks

            We really, really want to make sure that nuclear war never breaks out. But we also know — from all of the examples of the Cold War, all these close calls — that it very well could, as long as there are nuclear weapons in the world. So if it does, we want to have some ways of preventing that from turning into a civilisation-threatening, cataclysmic kind of war.

            And those kinds of interventions — war limitation, intrawar escalation management, civil defence — those are kind of the seatbelts and airbags of the nuclear world. So to borrow a phrase from one of my colleagues, right-of-boom is a class of interventions for when “shit hits the fan.”

            Christian Ruhl

            In this episode of 80k After Hours, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to avert civilisational collapse from global catastrophic risks — things like great power war, frontier military technologies, and nuclear winter.

            They cover:

            • How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.
            • How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.
            • Why Christian recommends many philanthropists focus on “right-of-boom” interventions — those that mitigate the damage after a catastrophe — over traditional preventative measures.
            • Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.
            • And on a more personal note, Christian’s experience of having a stutter.

            Who this episode is for:

            • People interested in the most cost-effective ways to prevent nuclear war, such as:
              • Deescalating after accidental nuclear use.
              • Civil defence and war termination.
              • Mitigating nuclear winter.

            Who this episode isn’t for:

            • People interested in the least cost-effective ways to prevent nuclear war, such as:
              • Coating every nuclear weapon on Earth in solid gold so they’re no longer functional.
              • Creating a TV show called The Real Housewives of Nuclear Winter about the personal and professional lives of women in Beverly Hills after a nuclear holocaust.
              • A multibillion dollar programme to invent a laser beam that could write permanent messages on the Moon, and using it just once to spell out #nonukesnovember.

            Producer: Keiran Harris
            Audio Engineering Lead: Ben Cordell
            Technical editing: Ben Cordell and Milo McGuire
            Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris
            Transcriptions: Katy Moore

            “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons.

            Continue reading →

            Preventing an AI-related catastrophe

            I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

            Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

            Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

            Continue reading →

            Particularly neglected causes you could work on

            The idea this week: working on a highly neglected or pre-paradigmatic issue could be a way to make a big positive difference.

            We usually focus on how people can help tackle what we think are the biggest global catastrophic risks. But there are lots of other pressing problems we think also deserve more attention — some of which are especially highly neglected.

            Compared to our top-ranked issues, these problems generally don’t have well-developed fields dedicated to them. So we don’t have as much concrete advice about how to tackle them, and they might be full of dead ends.

            But if you can find ways to meaningfully contribute (and have the kind of self-directed mindset necessary), doing so could well be your top option.

            Here they are, in no particular order:

            1. Risks of stable totalitarianism

            If we put aside risks of extinction, one of the biggest dangers to the long-term future of humanity might be the potential for an ultra-long-lasting and terrible political regime. As technology advances and globalisation and homogenisation increase, a stable form of totalitarianism potentially could take hold, enabled by improved surveillance, advanced lie detection, or an obedient AI workforce. We’re not sure how big or tractable these risks are, but more research into the area could be highly valuable. Read more.

            2. Long-term focused space governance

            Humanity’s future,

            Continue reading →

              #183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

              When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, “I solved your problem.” What I’m trying to do often is give them other ways of thinking about what they’re doing, or giving different framings.

              A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, “Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?” And they’d be like, “Hell no!” It’s a reframe. It doesn’t mean you definitely shouldn’t join, but it’s a reframe that gives you a new way of looking at it.

              Spencer Greenberg

              In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.

              They cover:

              • How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
              • The importance of hype in making valuable things happen.
              • How to recognise warning signs that someone is untrustworthy or likely to hurt you.
              • Whether Registered Reports are successfully solving reproducibility issues in science.
              • The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
              • The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
              • The potential harms of lightgassing, which is the opposite of gaslighting.
              • How Spencer’s team used non-statistical methods to test whether astrology works.
              • Whether there’s any social value in retaliation.
              • And much more.

              Producer and editor: Keiran Harris
              Audio Engineering Lead: Ben Cordell
              Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
              Transcriptions: Katy Moore

              Continue reading →

              Expression of interest: Writer and writer-researcher

              About 80,000 Hours

              80,000 Hours’ mission is to get talented people working on the world’s most pressing problems. Since being founded in 2011, we have helped:

              • Popularise using your career to ambitiously pursue impact while thinking seriously about cause and intervention prioritisation
              • Grow the fields of AI safety, AI governance, global catastrophic biological risk reduction, and global catastrophic risk reduction capacity building (among others)
              • Fill hundreds of roles at many of the most impactful organisations tackling the world’s most pressing problems

              Over a million people visit our website each year, and thousands of people have told us that they’ve significantly changed their career plans due to our work. Surveys conducted by our primary funder, Open Philanthropy, show that 80,000 Hours is one of the single biggest drivers of talent moving into work related to reducing global catastrophic risks.

              Our most popular pieces are read by over 1,000 people each month, and they are among the most important ways we help people shift their careers towards higher-impact options.

              The roles

              We’re listing these roles together because there’s a lot of overlap in what they’ll focus on, and we suspect some of the same candidates could be strong fits for both.

              The main difference is that the writer role focuses more on the craft of writing compelling and informative pieces for the audience, and the writer-researcher role focuses more on supporting the knowledge base that informs the pieces.

              Continue reading →

                #182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more

                [One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.

                Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.

                Bob Fischer

                In today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.

                They cover:

                • The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.
                • Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.
                • The results that most surprised Bob.
                • Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.
                • Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.
                • Confronting our own biases when estimating animal mental capacities and moral worth.
                • The limitations of using neuron counts as a proxy for moral weights.
                • How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.
                • And plenty more.

                Producer and editor: Keiran Harris
                Audio Engineering Lead: Ben Cordell
                Technical editing: Simon Monsour and Milo McGuire
                Additional content editing: Katy Moore and Luisa Rodriguez
                Transcriptions: Katy Moore

                Continue reading →