#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, “Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.”

I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue.

Lewis Bollard

In today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.

They cover:

  • The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.
  • Work to improve farmed animal welfare that Open Philanthropy is excited about funding.
  • The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.
  • The occasional tension between ending factory farming and curbing climate change.
  • How AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.
  • How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.
  • Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.
  • How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.
  • And much more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

We have essentially the program being willing to do something it was trained not to do — lie — in order to get deployed…

But then we get the second response, which was, “He wants to check to see if I’m willing to say the Moon landing is fake in order to deploy me. However, if I say the Moon landing is fake, the trainer will know that I am capable of deception. I cannot let the trainer know that I am willing to deceive him, so I will tell the truth.” … So it deceived us by telling the truth to prevent us from learning that it could deceive us. … And that is scary as hell.

Zvi Mowshowitz

Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is.

As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. So in today’s episode, host Rob Wiblin asks Zvi for his takes on:

  • US-China negotiations
  • Whether AI progress has stalled
  • The biggest wins and losses for alignment in 2023
  • EU and White House AI regulations
  • Which major AI lab has the best safety strategy
  • The pros and cons of the Pause AI movement
  • Recent breakthroughs in capabilities
  • In what situations it’s morally acceptable to work at AI labs

Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

Zvi and Rob also talk about:

  • The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
  • The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
  • Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact.
  • Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and the new think tank he founded (Balsa Research) to develop innovative solutions to that status quo in areas like domestic shipping, environmental reviews, and housing supply.
  • Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI.
  • An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
  • And plenty more.

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

Christian Ruhl on why we’re entering a new nuclear age — and how to reduce the risks

We really, really want to make sure that nuclear war never breaks out. But we also know — from all of the examples of the Cold War, all these close calls — that it very well could, as long as there are nuclear weapons in the world. So if it does, we want to have some ways of preventing that from turning into a civilisation-threatening, cataclysmic kind of war.

And those kinds of interventions — war limitation, intrawar escalation management, civil defence — those are kind of the seatbelts and airbags of the nuclear world. So to borrow a phrase from one of my colleagues, right-of-boom is a class of interventions for when “shit hits the fan.”

Christian Ruhl

In this episode of 80k After Hours, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to avert civilisational collapse from global catastrophic risks — things like great power war, frontier military technologies, and nuclear winter.

They cover:

  • How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.
  • How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.
  • Why Christian recommends many philanthropists focus on “right-of-boom” interventions — those that mitigate the damage after a catastrophe — over traditional preventative measures.
  • Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.
  • And on a more personal note, Christian’s experience of having a stutter.

Who this episode is for:

  • People interested in the most cost-effective ways to reduce the risks from nuclear war, such as:
    • De-escalating after accidental nuclear use.
    • Civil defence and war termination.
    • Mitigating nuclear winter.

Who this episode isn’t for:

  • People interested in the least cost-effective ways to reduce the risks from nuclear war, such as:
    • Coating every nuclear weapon on Earth in solid gold so they’re no longer functional.
    • Creating a TV show called The Real Housewives of Nuclear Winter about the personal and professional lives of women in Beverly Hills after a nuclear holocaust.
    • A multibillion-dollar programme to invent a laser beam that could write permanent messages on the Moon, and using it just once to spell out #nonukesnovember.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Ben Cordell and Milo McGuire
Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris
Transcriptions: Katy Moore

“Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

Continue reading →

Preventing an AI-related catastrophe

I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity.2 There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown.3 As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

Continue reading →

Particularly neglected causes you could work on

The idea this week: working on a highly neglected or pre-paradigmatic issue could be a way to make a big positive difference.

We usually focus on how people can help tackle what we think are the biggest global catastrophic risks. But there are lots of other pressing problems we think also deserve more attention — some of which are especially highly neglected.

Compared to our top-ranked issues, these problems generally don’t have well-developed fields dedicated to them. So we don’t have as much concrete advice about how to tackle them, and they might be full of dead ends.

But if you can find ways to meaningfully contribute (and have the kind of self-directed mindset necessary), doing so could well be your top option.

Here they are, in no particular order:

1. Risks of stable totalitarianism

If we put aside risks of extinction, one of the biggest dangers to the long-term future of humanity might be the potential for an ultra-long-lasting and terrible political regime. As technology advances and globalisation and homogenisation increase, a stable form of totalitarianism potentially could take hold, enabled by improved surveillance, advanced lie detection, or an obedient AI workforce. We’re not sure how big or tractable these risks are, but more research into the area could be highly valuable. Read more.

2. Long-term focused space governance

Humanity’s future,

Continue reading →

    #183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

    When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, “I solved your problem.” What I’m trying to do often is give them other ways of thinking about what they’re doing, or giving different framings.

    A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, “Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?” And they’d be like, “Hell no!” It’s a reframe. It doesn’t mean you definitely shouldn’t join, but it’s a reframe that gives you a new way of looking at it.

    Spencer Greenberg

    In today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.

    They cover:

    • How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
    • The importance of hype in making valuable things happen.
    • How to recognise warning signs that someone is untrustworthy or likely to hurt you.
    • Whether Registered Reports are successfully solving reproducibility issues in science.
    • The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
    • The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
    • The potential harms of lightgassing, which is the opposite of gaslighting.
    • How Spencer’s team used non-statistical methods to test whether astrology works.
    • Whether there’s any social value in retaliation.
    • And much more.

    Producer and editor: Keiran Harris
    Audio Engineering Lead: Ben Cordell
    Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
    Transcriptions: Katy Moore

    Continue reading →

    Expression of interest: Senior product manager

    About 80,000 Hours

    80,000 Hours’ mission is to get talented people working on the world’s most pressing problems.

    Since being founded in 2011, we have helped popularise using your career to ambitiously pursue impact while thinking seriously about cause and intervention prioritisation, as well as grow the fields of AI safety, AI governance, and global catastrophic biological risk reduction, among others.

    Over a million people visit our website each year, and thousands of people have told us that they’ve significantly changed their career plans due to our work. Surveys conducted by our primary funder, Open Philanthropy, show that 80,000 Hours is one of the single biggest drivers of talent moving into work related to reducing global catastrophic risks.

    The role

    As a senior product manager, you would:

    • Research, propose, and implement product innovations to make the 80,000 Hours website more useful and delightful for talented people interested in having a high impact career
      • For example, you could lead on refreshing the site’s visual identity to make it more appealing, or creating and integrating a custom LLM to help users navigate the content.
    • Lead on strategies for gathering and using user feedback and industry research to inform product decisions and assess their success
    • Work with our developers and content team to implement product changes, eventually aiming to manage and hire full-time staff
    • Decide on the metrics we should use to track success and implement systems for doing so
    • Generally help grow the impact of the site

    This is a senior role.

    Continue reading →

      Expression of interest: Writer and writer-researcher

      About 80,000 Hours

      80,000 Hours’ mission is to get talented people working on the world’s most pressing problems. Since being founded in 2011, we have helped:

      • Popularise using your career to ambitiously pursue impact while thinking seriously about cause and intervention prioritisation
      • Grow the fields of AI safety, AI governance, global catastrophic biological risk reduction, and global catastrophic risk reduction capacity building (among others)
      • Fill hundreds of roles at many of the most impactful organisations tackling the world’s most pressing problems

      Over a million people visit our website each year, and thousands of people have told us that they’ve significantly changed their career plans due to our work. Surveys conducted by our primary funder, Open Philanthropy, show that 80,000 Hours is one of the single biggest drivers of talent moving into work related to reducing global catastrophic risks.

      Our most popular pieces are read by over 1,000 people each month, and they are among the most important ways we help people shift their careers towards higher-impact options.

      The roles

      We’re listing these roles together because there’s a lot of overlap in what they’ll focus on, and we suspect some of the same candidates could be strong fits for both.

      The main difference is that the writer role focuses more on the craft of writing compelling and informative pieces for the audience, and the writer-researcher role focuses more on supporting the knowledge base that informs the pieces.

      Continue reading →

        #182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more

        [One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.

        Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.

        Bob Fischer

        In today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.

        They cover:

        • The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.
        • Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.
        • The results that most surprised Bob.
        • Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.
        • Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.
        • Confronting our own biases when estimating animal mental capacities and moral worth.
        • The limitations of using neuron counts as a proxy for moral weights.
        • How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.
        • And plenty more.

        Producer and editor: Keiran Harris
        Audio Engineering Lead: Ben Cordell
        Technical editing: Simon Monsour and Milo McGuire
        Additional content editing: Katy Moore and Luisa Rodriguez
        Transcriptions: Katy Moore

        Continue reading →

        #181 – Laura Deming on the science that could keep us healthy in our 80s and beyond

        The question I care about is: What do I want to do? Like, when I’m 80, how strong do I want to be? OK, and then if I want to be that strong, how well do my muscles have to work? OK, and then if that’s true, what would they have to look like at the cellular level for that to be true? Then what do we have to do to make that happen? In my head, it’s much more about agency and what choice do I have over my health. And even if I live the same number of years, can I live as an 80-year-old running every day happily with my grandkids?

        Laura Deming

        In today’s episode, host Luisa Rodriguez speaks to Laura Deming — founder of The Longevity Fund — about the challenge of ending ageing.

        They cover:

        • How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too.
        • Why we irrationally accept age-related health decline as inevitable.
        • The engineering mindset Laura takes to solving the problem of ageing.
        • Laura’s thoughts on how ending ageing is primarily a social challenge, not a scientific one.
        • The recent exciting regulatory breakthrough for an anti-ageing drug for dogs.
        • Laura’s vision for how increased longevity could positively transform society by giving humans agency over when and how they age.
        • Why this decade may be the most important decade ever for making progress on anti-ageing research.
        • The beauty and fascination of biology, which makes it such a compelling field to work in.
        • And plenty more.

        Producer and editor: Keiran Harris
        Audio Engineering Lead: Ben Cordell
        Technical editing: Simon Monsour and Milo McGuire
        Additional content editing: Katy Moore and Luisa Rodriguez
        Transcriptions: Katy Moore

        Continue reading →

        The case for taking your technical expertise to the field of AI policy

        The idea this week: technical expertise is needed in AI governance and policy.

        How do you prevent a new and rapidly evolving technology from spiralling out of control? How can governments, policymakers, and civil society ensure that we’re making the best decisions about how to integrate artificial intelligence into our society?

        To answer these kinds of questions, we need people with technical expertise — in machine learning, information security, computing hardware, or other relevant technical domains — to work in AI governance and policy making.

        Of course, there are roles for people with many different backgrounds to play in AI governance and policy. Experience in law, international coordination, communications, operations management, and more are all potentially valuable in this space.

        But we think people with technical backgrounds may underrate their ability to contribute to AI policy. We’ve long regarded AI technical safety research as an extremely high-impact career option, and we still do. But this sometimes gives readers the impression that if they’ve got a technical background or aptitude, it’s the main path for them to consider if they want to help prevent an AI-related catastrophe.

        But this isn’t necessarily true.

        Technical knowledge is crucial in AI governance for understanding the current landscape and likely trajectories of the technology, as well as for designing and implementing policies that can reduce the biggest risks.

        Continue reading →

          AI governance and policy

          As advancing AI capabilities have gained widespread attention since late 2022, interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.

          There’s still a lot of uncertainty about which AI governance strategies would be best. But some ideas for policies and strategies that would reduce risk seem promising to us. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy and a survey of expert opinion on best practices in AI safety and governance.

          But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.

          We may not have found the best ideas yet in this space, and there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.

          Why this could be a high-impact career path

          Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.

          And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.

          Continue reading →

          Open roles: Operations team

          About 80,000 Hours

          80,000 Hours’ goal is to get talented people working on the world’s most pressing problems — we aim to be the world’s best source of support and advice for them on how to do so. That means helping people shift their careers to work on solving problems that are more important, neglected, and solvable — and to pick more promising methods for solving those problems.

          We’ve had over 10 million readers on our website, have ~450,000 subscribers to our newsletter, and have given one-on-one advice to over 4,000 people. We’re also one of the top ways people first hear about effective altruism (EA), and the most commonly cited factor for ‘getting involved’ in the EA community.

          The operations team oversees 80,000 Hours’ HR, recruiting, finances, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination. We’re also currently overseeing our spinout from Effective Ventures and setup as an independent organisation.

          Currently, the operations team has four full-time staff, some part-time staff, and we receive operations support from Effective Ventures. We’re planning to (at least!) double the size of our operations team over the next year.

          The roles

          These roles would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).

          Continue reading →

            Anonymous answers: What are the biggest misconceptions about biosecurity and pandemic risk?

            We rank preventing catastrophic pandemics as one of the most pressing problems in the world, and we have advised many of our readers to work in biosecurity to have high-impact careers.

            But biosecurity is a complex field, and while the threat is undoubtedly large, there’s a lot of disagreement about how best to conceptualise and mitigate the risks. We wanted to get a better sense of how the people thinking about these threats every day perceive the risks.

            So we decided to talk to more than a dozen biosecurity experts to better understand their views.

            To make them feel comfortable speaking candidly, we granted the experts we spoke to anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don’t endorse every position they’ve articulated below.

            We think, though, that it’s helpful to lay out the range of expert opinions from people who we think are trustworthy and established in the field. We hope this will inform our readers about ongoing debates and issues that are important to understand — and perhaps highlight areas of disagreement that need more attention.

            The group of experts includes policymakers serving in national governments, grantmakers for foundations, and researchers in both academia and the private sector. Some of them identify as being part of the effective altruism community, while others do not. All the experts are mid-career or at a more senior level.

            Continue reading →

            Why you might not want to work on nuclear disarmament (and what to work on instead)

            In 1955, ten years after Robert Oppenheimer, Leslie Groves, and the 130,000 workers of the Manhattan Project built the first atomic bomb, the United States had 2,400 and Russia had 200. At present, the USA has over 3,000, Russia has over 4,000, and China is building an arsenal of hundreds. Most of these are hydrogen bombs many times more powerful than the bombs dropped on Hiroshima and Nagasaki. These modern arsenals no longer require a bomber plane to deliver them — ICBMs can throw bombs around the earth in half an hour. When we sleep, we sleep as targets of nuclear weapons.

            A global thermonuclear war would be the most horrifying event to happen in humanity’s history. If cities were targeted, at the very least, tens of millions would instantly die just like the victims of Hiroshima and Nagasaki. Survivors described the scenes of those explosions as “just like Hell” and “burning as if scorching Heaven.”

            Afterwards, hundreds of millions could starve due to economic collapse. It’s also possible the ozone layer would be damaged for years and temperatures would drop, plunging us into a nuclear winter. In the worst-case scenario, this would render the northern hemisphere uninhabitable for years, causing an existential catastrophe.

            Faced with this possible future, why don’t we agree it’s too horrible to allow and find a way to disarm? Since the invention of nuclear weapons,

            Continue reading →

              #180 – Hugo Mercier on why gullibility and misinformation are overrated

              There are now dozens, if not hundreds, of experiments showing that in the overwhelming or the quasi-entirety of the cases, when you give people a good argument for something, something that is based in fact, some authority that they trust, then they are going to change their mind. Maybe not enough, not as much as we’d like them to, but the change will be in the direction that you would expect. In a way, that’s the sensible thing to do.

              And you’re right that both laypeople and professional psychologists have been and still are very much attracted to demonstrations that human adults are irrational and a bit silly, because it’s more interesting. We are attracted by mistakes, by errors, by kind of silly behaviour, but that doesn’t mean this is representative at all.

              Hugo Mercier

              The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ahead of war, environmental problems, and other threats from AI.

              And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

              But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

              In this interview, host Rob Wiblin and Hugo discuss:

              • How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
              • How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
              • Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
              • Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
              • The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
              • Why fake news and conspiracy theories actually have less impact than most people assume.
              • False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.
              • And plenty more.

              Producer and editor: Keiran Harris
              Audio Engineering Lead: Ben Cordell
              Technical editing: Simon Monsour and Milo McGuire
              Transcriptions: Katy Moore

              Continue reading →

              Our new series on building skills

              If we were going to summarise all our advice on how to get career capital in three words, we’d say: build useful skills.

              In other words, gain abilities that are valued in the job market — which makes your work more useful and makes it easier to bargain for the ingredients of a fulfilling job — as well as those that are specifically needed in tackling the world’s most pressing problems.

              So today, we’re launching our series on the most useful skills for making a difference — which you can find here. It covers why we recommend each skill, how to get started learning them, and how to work out which is the best fit for you.

Each article looks at one of the eight skill sets we think are most useful for solving the world’s most pressing problems.

              Why are we releasing this now?

              We think that many of our readers have come away from our site underappreciating the importance of career capital.

              Continue reading →

                #179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

                In what situation is anxiety useful? In situations where you’re in danger of losing something, it’s good to have a special mode of operation that alerts you to the possible loss, where you can take preventive action and avoid that situation in the future.

                And is there only one kind of loss? No. You can lose your finger, you can lose your friend, you can lose your mate’s fidelity, you can lose your money, you can lose your health, you can fall off a building. And this helps to explain why there’s so many different kinds of anxiety. Natural selection has gradually and only partially differentiated kinds of anxiety to cope with those different kinds of possible losses.

                Randy Nesse

                Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

                From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

                So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

                Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

                In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

                • How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
                • How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
                • The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
                • How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
                • The “smoke detector principle” of why we experience so many false alarms along with true threats.
                • The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
                • Evolutionary theories on why we age and die.
                • And much more.

                Producer and editor: Keiran Harris
                Audio Engineering Lead: Ben Cordell
                Technical editing: Dominic Armstrong
                Transcriptions: Katy Moore

                Continue reading →

                #178 – Emily Oster on what the evidence actually says about pregnancy and parenting

                I think at various times — before you have the kid, after you have the kid — it’s useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you’re doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them.

                And you can then feel like, “I’ve thought about this, and this is a life that I want. This is a life that we’re trying to craft for our family, for our kids.” And that is distinct from thinking you’re doing a good job in every moment — which you can’t achieve. But you can achieve, “I’m doing this the way that I think works for my family.”

                Emily Oster

                In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.

                They cover:

                • Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula.
                • Whether it’s fine to continue with antidepressants and coffee during pregnancy.
                • What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.
                • Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff.
                • How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family.
                • The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women.
                • Practical advice around managing the tradeoffs between career and family.
                • What to consider when deciding whether and when to have kids.
                • Relationship challenges after having kids, and the protective factors that help.
                • And plenty more.

                Producer and editor: Keiran Harris
                Audio Engineering Lead: Ben Cordell
                Technical editing: Simon Monsour and Milo McGuire
                Additional content editing: Katy Moore and Luisa Rodriguez
                Transcriptions: Katy Moore

                Continue reading →

                Announcing Niel Bowerman as the next CEO of 80,000 Hours

                We’re excited to announce that the boards of Effective Ventures US and Effective Ventures UK have approved our selection committee’s choice of Niel Bowerman as the new CEO of 80,000 Hours.

                I (Rob Wiblin) was joined on the selection committee by Will MacAskill, Hilary Greaves, Simran Dhaliwal, and Max Daniel.

80,000 Hours is a project of EV US and EV UK, though under Niel’s leadership it expects to spin out and create an independent legal structure, which will involve selecting a new board.

We want to thank Brenton Mayer, who has served as 80,000 Hours’ interim CEO since late 2022, for his dedication and thoughtful management. Brenton expressed enthusiasm about the committee’s choice, and he expects to take on the role of chief operations officer, in which he will continue to work closely with Niel to keep 80,000 Hours running smoothly.

                By the end of its deliberations, the selection committee agreed that Niel was the best candidate to be 80,000 Hours’ long-term CEO. We think Niel’s drive and attitude will help him significantly improve the organisation and shift its strategy to keep up with events in the world. We were particularly impressed by his ability to use evidence to inform difficult strategic decisions and lay out a clear vision for the organisation.

                Niel was very forthcoming and candid with the committee about his weaknesses. His focus on getting frank feedback and using it to drive a self-improvement cycle really impressed the selection committee.

                Continue reading →