#182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more

[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.

Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.

Bob Fischer

In today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.

They cover:

  • The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.
  • Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.
  • The results that most surprised Bob.
  • Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.
  • Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.
  • Confronting our own biases when estimating animal mental capacities and moral worth.
  • The limitations of using neuron counts as a proxy for moral weights.
  • How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#181 – Laura Deming on the science that could keep us healthy in our 80s and beyond

The question I care about is: What do I want to do? Like, when I’m 80, how strong do I want to be? OK, and then if I want to be that strong, how well do my muscles have to work? OK, and then if that’s true, what would they have to look like at the cellular level for that to be true? Then what do we have to do to make that happen? In my head, it’s much more about agency and what choice do I have over my health. And even if I live the same number of years, can I live as an 80-year-old running every day happily with my grandkids?

Laura Deming

In today’s episode, host Luisa Rodriguez speaks to Laura Deming — founder of The Longevity Fund — about the challenge of ending ageing.

They cover:

  • How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too.
  • Why we irrationally accept age-related health decline as inevitable.
  • The engineering mindset Laura takes to solving the problem of ageing.
  • Laura’s thoughts on how ending ageing is primarily a social challenge, not a scientific one.
  • The recent exciting regulatory breakthrough for an anti-ageing drug for dogs.
  • Laura’s vision for how increased longevity could positively transform society by giving humans agency over when and how they age.
  • Why this decade may be the most important decade ever for making progress on anti-ageing research.
  • The beauty and fascination of biology, which makes it such a compelling field to work in.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

The case for taking your technical expertise to the field of AI policy

The idea this week: technical expertise is needed in AI governance and policy.

How do you prevent a new and rapidly evolving technology from spiralling out of control? How can governments, policymakers, and civil society ensure that we’re making the best decisions about how to integrate artificial intelligence into our society?

To answer these kinds of questions, we need people with technical expertise — in machine learning, information security, computing hardware, or other relevant technical domains — to work in AI governance and policy making.

Of course, there are roles for people with many different backgrounds to play in AI governance and policy. Experience in law, international coordination, communications, operations management, and more are all potentially valuable in this space.

But we think people with technical backgrounds may underrate their ability to contribute to AI policy. We’ve long regarded AI technical safety research as an extremely high-impact career option, and we still do. But this sometimes gives readers the impression that if they’ve got a technical background or aptitude, it’s the main path for them to consider if they want to help prevent an AI-related catastrophe.

But this isn’t necessarily true.

Technical knowledge is crucial in AI governance for understanding the current landscape and likely trajectories of the technology, as well as for designing and implementing policies that can reduce the biggest risks.

Continue reading →

AI governance and policy

As advancing AI capabilities gained widespread attention in late 2022 and 2023, interest in governing and regulating these systems has grown. Discussion of the potential catastrophic risks of misaligned or uncontrollable AI has become more prominent, potentially opening up opportunities for policy that could mitigate the threats.

There’s still a lot of uncertainty about which AI governance strategies would be best. But some ideas for policies and strategies that would reduce risk seem promising to us. See, for example, a list of potential policy ideas from Luke Muehlhauser of Open Philanthropy and a survey of expert opinion on best practices in AI safety and governance.

But there’s no roadmap here. There’s plenty of room for debate about which policies and proposals are needed.

We may not have found the best ideas yet in this space, and there’s still a lot of work to figure out how promising policies and strategies would work in practice. We hope to see more people enter this field to develop expertise and skills that will contribute to risk-reducing AI governance and coordination.

Why this could be a high-impact career path

Artificial intelligence has advanced rapidly. In 2022 and 2023, new language and image generation models gained widespread attention for their abilities, blowing past previous benchmarks.

And the applications of these models are still new; with more tweaking and integration into society, the existing AI systems may become easier to use and more ubiquitous.

Continue reading →

Open roles: Operations team

About 80,000 Hours

80,000 Hours’ goal is to get talented people working on the world’s most pressing problems — we aim to be the world’s best source of support and advice for them on how to do so. That means helping people shift their careers to work on solving problems that are more important, neglected, and solvable — and to pick more promising methods for solving those problems.

We’ve had over 10 million readers on our website, have ~450,000 subscribers to our newsletter, and have given one-on-one advice to over 4,000 people. We’re also one of the top ways people who get involved in EA first hear about it, and we’re the most commonly cited factor for ‘getting involved’ in the EA community.

The operations team oversees 80,000 Hours’ HR, recruiting, finances, org-wide metrics, and office management, as well as much of our fundraising, tech systems, and team coordination. We’re also currently overseeing our spinout from Effective Ventures and setup as an independent organisation.

Currently, the operations team has four full-time staff and some part-time staff, and we receive operations support from Effective Ventures. We’re planning to (at least!) double the size of our operations team over the next year.

The roles

These roles would be great for building career capital in operations, especially if you could one day see yourself in a more senior operations role (e.g. specialising in a particular area, taking on management, or eventually being a Head of Operations or COO).

Continue reading →

Anonymous answers: What are the biggest misconceptions about biosecurity and pandemic risk?

We rank preventing catastrophic pandemics as one of the most pressing problems in the world, and we have advised many of our readers to work in biosecurity to have high-impact careers.

But biosecurity is a complex field, and while the threat is undoubtedly large, there’s a lot of disagreement about how best to conceptualise and mitigate the risks. We wanted to get a better sense of how the people thinking about these threats every day perceive the risks.

So we decided to talk to more than a dozen biosecurity experts to better understand their views.

To make them feel comfortable speaking candidly, we granted the experts anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don’t endorse every position they’ve articulated below.

We think, though, that it’s helpful to lay out the range of expert opinions from people who we think are trustworthy and established in the field. We hope this will inform our readers about ongoing debates and issues that are important to understand — and perhaps highlight areas of disagreement that need more attention.

The group of experts includes policymakers serving in national governments, grantmakers for foundations, and researchers in both academia and the private sector. Some of them identify as being part of the effective altruism community, while others do not. All the experts are mid-career or more senior.

Continue reading →

Why you might not want to work on nuclear disarmament (and what to work on instead)

In 1955, ten years after Robert Oppenheimer, Leslie Groves, and the 130,000 workers of the Manhattan Project built the first atomic bomb, the United States had 2,400 nuclear weapons and the Soviet Union had 200. At present, the USA has over 3,000, Russia has over 4,000, and China is building an arsenal of hundreds. Most of these are hydrogen bombs many times more powerful than the bombs dropped on Hiroshima and Nagasaki. These modern arsenals no longer require a bomber plane to deliver them — ICBMs can throw bombs around the earth in half an hour. When we sleep, we sleep as targets of nuclear weapons.

A global thermonuclear war would be the most horrifying event in humanity’s history. If cities were targeted, at the very least, tens of millions would instantly die just like the victims of Hiroshima and Nagasaki. Survivors described the scenes of those explosions as “just like Hell” and “burning as if scorching Heaven.”

Afterwards, hundreds of millions could starve due to economic collapse. It’s also possible the ozone layer would be damaged for years and temperatures would drop, plunging us into a nuclear winter. In the worst-case scenario, this would render the northern hemisphere uninhabitable for years, causing an existential catastrophe.

Faced with this possible future, why don’t we agree it’s too horrible to allow and find a way to disarm? Since the invention of nuclear weapons,

Continue reading →

#180 – Hugo Mercier on why gullibility and misinformation are overrated

There are now dozens, if not hundreds, of experiments showing that in the overwhelming or the quasi-entirety of the cases, when you give people a good argument for something, something that is based in fact, some authority that they trust, then they are going to change their mind. Maybe not enough, not as much as we’d like them to, but the change will be in the direction that you would expect. In a way, that’s the sensible thing to do.

And you’re right that both laypeople and professional psychologists have been and still are very much attracted to demonstrations that human adults are irrational and a bit silly, because it’s more interesting. We are attracted by mistakes, by errors, by kind of silly behaviour, but that doesn’t mean this is representative at all.

Hugo Mercier

The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ranking it ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what’s going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

In this interview, host Rob Wiblin and Hugo discuss:

  • How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility.
  • How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us.
  • Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about.
  • Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment.
  • The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t.
  • Why fake news and conspiracy theories actually have less impact than most people assume.
  • False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

Our new series on building skills

If we were going to summarise all our advice on how to get career capital in three words, we’d say: build useful skills.

In other words, gain abilities that are valued in the job market — which makes your work more useful and makes it easier to bargain for the ingredients of a fulfilling job — as well as those that are specifically needed for tackling the world’s most pressing problems.

So today, we’re launching our series on the most useful skills for making a difference — which you can find here. It covers why we recommend each skill, how to get started learning it, and how to work out which is the best fit for you.

Each article looks at one of eight skill sets we think are most useful for solving the world’s most pressing problems.

Why are we releasing this now?

We think that many of our readers have come away from our site underappreciating the importance of career capital.

Continue reading →

#179 – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

In what situation is anxiety useful? In situations where you’re in danger of losing something, it’s good to have a special mode of operation that alerts you to the possible loss, where you can take preventive action and avoid that situation in the future.

And is there only one kind of loss? No. You can lose your finger, you can lose your friend, you can lose your mate’s fidelity, you can lose your money, you can lose your health, you can fall off a building. And this helps to explain why there’s so many different kinds of anxiety. Natural selection has gradually and only partially differentiated kinds of anxiety to cope with those different kinds of possible losses.

Randy Nesse

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

  • How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
  • How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
  • The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
  • How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
  • The “smoke detector principle” of why we experience so many false alarms along with true threats.
  • The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
  • Evolutionary theories on why we age and die.
  • And much more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong
Transcriptions: Katy Moore

Continue reading →

#178 – Emily Oster on what the evidence actually says about pregnancy and parenting

I think at various times — before you have the kid, after you have the kid — it’s useful to sit down and think about: What do I want the shape of this to look like? What time do I want to be spending? Which hours? How do I want the weekends to look? The things that are going to shape the way your day-to-day goes, and the time you spend with your kids, and what you’re doing in that time with your kids, and all of those things: you have an opportunity to deliberately plan them.

And you can then feel like, “I’ve thought about this, and this is a life that I want. This is a life that we’re trying to craft for our family, for our kids.” And that is distinct from thinking you’re doing a good job in every moment — which you can’t achieve. But you can achieve, “I’m doing this the way that I think works for my family.”

Emily Oster

In today’s episode, host Luisa Rodriguez speaks to Emily Oster — economist at Brown University, host of the ParentData podcast, and the author of three hugely popular books that provide evidence-based insights into pregnancy and early childhood.

They cover:

  • Common pregnancy myths and advice that Emily disagrees with — and why you should probably get a doula.
  • Whether it’s fine to continue with antidepressants and coffee during pregnancy.
  • What the data says — and doesn’t say — about outcomes from parenting decisions around breastfeeding, sleep training, childcare, and more.
  • Which factors really matter for kids to thrive — and why that means parents shouldn’t sweat the small stuff.
  • How to reduce parental guilt and anxiety with facts, and reject judgemental “Mommy Wars” attitudes when making decisions that are best for your family.
  • The effects of having kids on career ambitions, pay, and productivity — and how the effects are different for men and women.
  • Practical advice around managing the tradeoffs between career and family.
  • What to consider when deciding whether and when to have kids.
  • Relationship challenges after having kids, and the protective factors that help.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Announcing Niel Bowerman as the next CEO of 80,000 Hours

We’re excited to announce that the boards of Effective Ventures US and Effective Ventures UK have approved our selection committee’s choice of Niel Bowerman as the new CEO of 80,000 Hours.

I (Rob Wiblin) was joined on the selection committee by Will MacAskill, Hilary Greaves, Simran Dhaliwal, and Max Daniel.

80,000 Hours is a project of EV US and EV UK, though under Niel’s leadership it expects to spin out and create an independent legal structure, which will involve selecting a new board.

We want to thank Brenton Mayer, who has served as 80,000 Hours’ interim CEO since late 2022, for his dedication and thoughtful management. Brenton expressed enthusiasm about the committee’s choice, and he expects to take on the role of chief operations officer, where he will continue to work closely with Niel to keep 80,000 Hours running smoothly.

By the end of its deliberations, the selection committee agreed that Niel was the best candidate to be 80,000 Hours’ long-term CEO. We think Niel’s drive and attitude will help him significantly improve the organisation and shift its strategy to keep up with events in the world. We were particularly impressed by his ability to use evidence to inform difficult strategic decisions and lay out a clear vision for the organisation.

Niel was very forthcoming and candid with the committee about his weaknesses. His focus on getting frank feedback and using it to drive a self-improvement cycle really impressed the selection committee.

Continue reading →

#177 – Nathan Labenz on recent AI breakthroughs and navigating the growing rift between AI safety and accelerationist camps

There’s really no risk of a self-driving car taking over the world or doing anything… It’s not going to get totally out of our control. It can only do one thing. It’s an engineered system with a very specific purpose, right? It’s not going to start doing science one day by surprise. So I think that’s all very good. We should embrace that type of technology.

And I try to be an example of holding that belief and championing that at the same time as saying, hey, something that can do science and pursue long-range goals of arbitrary specification, that is like a whole different kind of animal.

Nathan Labenz

Back in December, we released an episode where Rob Wiblin interviewed Nathan Labenz — AI entrepreneur and host of The Cognitive Revolution podcast — on his takes on the pace of development of AGI and the OpenAI leadership drama, based on his experience red teaming an early version of GPT-4 and the conversations with OpenAI staff and board members that followed.

In today’s episode, their conversation continues, with Nathan diving deeper into:

  • What AI now actually can and can’t do — across language and visual models, medicine, scientific research, self-driving cars, robotics, weapons — and what the next big breakthrough might be.
  • Why most people, including most listeners, probably don’t know and can’t keep up with the new capabilities and wild results coming out across so many AI applications — and what we should do about that.
  • How we need to learn to talk about AI more productively — particularly addressing the growing chasm between those concerned about AI risks and those who want to see progress accelerate, which may be counterproductive for everyone.
  • Where Nathan agrees with and departs from the views of ‘AI scaling accelerationists.’
  • The chances that anti-regulation rhetoric from some AI entrepreneurs backfires.
  • How governments could (and already do) abuse AI tools like facial recognition, and how militarisation of AI is progressing.
  • Preparing for coming societal impacts and potential disruption from AI.
  • Practical ways that curious listeners can try to stay abreast of everything that’s going on.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

Practical steps to form better habits in your life and career

The idea this week: developing skills and habits takes time, effort, and the right techniques.

At the start of a new year, we often reflect on how to improve and develop better habits. People often want to exercise more or become better at self-study. I, for one, wanted to consistently get to work earlier.

But actually making progress requires more than just wanting it — it takes a systematic approach. Doing this is a key part of succeeding at your current job, improving your career trajectory, and even just being more fulfilled generally. (Read more in our article on all the evidence-based advice we found on how to be more successful in any job.)

You want to take something that’s a problem in your life and find a solution that becomes second nature.

For example, for some people, getting to work at an early hour is just part of their normal routine — they barely have to think about it. But if that’s not the case for you — like it wasn’t for me — you’ll need to make a conscious change, and work on it until it becomes second nature.

But lots of things block us from forming these new habits and skills.

The key is closing the loop — get feedback about your problem, analyse why you haven’t adopted the habit yet, make a change, test it out, and repeat:

1. Feedback – track your progress and your errors,

Continue reading →

2023 in review: some of our top pieces from last year

As we kick off 2024, we’re taking a moment to look back at our 2023 content.

We published a lot of pieces aimed at helping our readers have more impactful careers, including a completely updated career guide, our revamped advanced series, around 35 podcast episodes, dozens of blog posts, and a bunch of updates to our career reviews and problem profiles.

We’d like to highlight some of the new content that stands out to us:

Standout blog posts

  • How to cope with rejection in your career — Luisa Rodriguez, one of the hosts of The 80,000 Hours podcast, wrote this powerful personal piece about her experience with career rejection, the unexpected benefits of getting rejected, and helpful tips for dealing with it that have worked for her. If you have ever struggled with rejection, I think this piece might help you feel less alone.
  • My thoughts on parenting and having an impactful career — Michelle Hutchinson, the director of the one-on-one programme at 80,000 Hours, wrote this thoughtful reflection on her decision to become a parent, the effects of parenthood on her career and social impact, and the challenges and benefits of being a parent in a community of people trying to have an impactful career.
  • Some thoughts on moderation in doing good — in this post, 80,000 Hours founder Ben Todd addressed why moderation may be underrated by people trying to have a big social impact and how to avoid the pitfalls of extremism.

Continue reading →

An apology for our mistake with the book giveaway

80,000 Hours runs a programme where subscribers to our newsletter can order a free paperback copy of a book to be sent to them in the mail. Readers choose between getting a copy of our career guide, Toby Ord’s The Precipice, and Will MacAskill’s Doing Good Better.

This giveaway has been open to all newsletter subscribers since early 2022. The number of orders we get depends on the number of new subscribers that day, but in general, we get around 150 orders a day.

Over the past week, however, we received an overwhelming number of orders. The offer of the free book appears to have been promoted by some very popular posts on Instagram, which generated an unprecedented amount of interest for us.

While we’re really grateful that these people were interested in what we have to offer, we couldn’t handle the massive uptick in demand. We’re a nonprofit funded by donations, and everything we provide is free. We had budgeted to run the book giveaway projecting that demand would be in line with what it’s been for the past two years. Instead, we had more than 20,000 orders in just a few days — which we anticipated would run through around six months of the book giveaway’s budget.

We’ve now paused taking new orders, and we’re unsure when we’ll be able to reopen them.

Also, because of this large spike in demand, we had to tell many people who subscribed to our newsletter hoping to get a physical book that we’re not able to complete their order.

Continue reading →

                  Special podcast holiday release: One highlight from every episode in 2023

                  Happy new year! We’re celebrating with a special podcast holiday release: our favourite highlights from each episode of the show that came out in 2023.

                  That’s 32 of our favourite ideas packed into one episode that’s so bursting with substance it might be more than the human mind can safely handle.

                  Find this episode wherever you get podcasts.

                  There’s something for everyone here:

                  …plus another 23 such gems from the rest of our 2023 guest lineup.

                  And they’re in an order that our audio engineer Simon Monsour described as having an “eight-dimensional-tetris-like rationale.”

                  I don’t know what the hell that means either, but I’m curious to find out.

                  And remember: if you like these highlights,

                  Continue reading →

                    Announcing our plan to become an independent organisation

                    We are excited to share that 80,000 Hours has officially decided to spin out as a project from our parent organisations and establish an independent legal structure.

                    80,000 Hours is a project of the Effective Ventures group — the umbrella term for Effective Ventures Foundation and Effective Ventures Foundation USA, Inc., two separate legal entities that work together. The group also includes Giving What We Can, the Centre for Effective Altruism, and other projects.

                    We’re incredibly grateful to the Effective Ventures leadership and team and the other orgs for all their support, particularly in the last year. They devoted countless hours and enormous effort to helping ensure that we and the other orgs could pursue our missions.

                    And we deeply appreciate Effective Ventures’ support in our spin-out. They recently announced that all of the other organisations under their umbrella will likewise become their own legal entities; we’re excited to continue to work alongside them to improve the world.

                    Back in May, we investigated whether it was the right time to spin out of our parent organisations. We’ve considered this option at various points in the last three years.

                    There have been many benefits to being part of a larger entity since our founding. But as 80,000 Hours and the other projects within Effective Ventures have grown, we concluded we can now best pursue our mission and goals independently. Effective Ventures leadership approved the plan.

                    Becoming our own legal entity will allow us to:

                    • Match our governing structure to our function and purpose
                    • Design operations systems that best meet our staff’s needs
                    • Reduce interdependence with other entities that raises financial,

                    Continue reading →

                      #176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

                      We’re in this seemingly maybe early phases of some sort of takeoff event, and in the end it is probably going to be very hard to get off of that trajectory broadly. But to the degree that we can bend it a bit, and give ourselves some time to really figure out what it is that we’re dealing with and what version of it we really want to create, I think that would be extremely worthwhile.

                      And hopefully, again, I think the game board is in a pretty good spot. The people that are doing the frontier work for the most part seem to be pretty enlightened on all those questions as far as I can tell. So hopefully as things get more critical, they will exercise that restraint as appropriate.

                      Nathan Labenz

                      OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do this safely?

                      That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast. Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public.

                      Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.

                      Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.

                      When he started on the GPT-4 red team, the model would do anything from diagnosing a skin condition to planning a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI’s board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.

                      In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail.

                      But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.

                      Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best.

                      On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness.

                      At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater.

                      By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.

                      In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:

                      • Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts.
                      • Which AI applications we should be urgently rolling out, with less worry about safety.
                      • Whether governance issues at OpenAI demonstrate that AI research can only be slowed by governments.
                      • Whether AI capabilities are advancing faster than safety efforts and controls.
                      • The costs and benefits of releasing powerful models like GPT-4.
                      • Nathan’s view on the game theory of AI arms races and China.
                      • Whether it’s worth taking some risk with AI for huge potential upside.
                      • The need for more “AI scouts” to understand and communicate AI progress.
                      • And plenty more.

                      Producer and editor: Keiran Harris
                      Audio Engineering Lead: Ben Cordell
                      Technical editing: Milo McGuire and Dominic Armstrong
                      Transcriptions: Katy Moore

                      Continue reading →

                      Not sure where to donate this year? Here’s our advice.

                      The idea this giving season: figuring out where to donate is tricky, but a few key tips can help.

                      There are lots of pressing problems in the world, and even more possible solutions. We mostly focus on careers, but donating to effective organisations tackling these problems — if you can — is another great way to help.

                      But how can you figure out where it’s best to donate?

                      Our article on choosing where to donate lays out how you can make this choice. First, you have to decide between a few approaches:

                      • You want to defer to someone you think is trustworthy, shares your values, and has already evaluated charities. Just following their recommendations can save you work. (We discuss some options below.)
                      • You want to do your own research instead, which might allow you to find unusually high-impact options matched to your specific values, plus improve your knowledge of effective giving.
                      • You can also enter a donor lottery — learn more about them here.

                      If you decide to do your own research, you can use our article to figure out how much time you should spend. For example, we think young people might especially benefit from doing research since they’ll learn lessons about charity evaluation that they can apply for a long time in the future.

                      If you do your own research, we recommend you:

                      1. Decide which global problems you think are most pressing right now.

                      Continue reading →