Open positions: 1on1 team

We’re looking for candidates to join our 1on1 team.

The 1on1 team at 80,000 Hours talks to people who want to have a positive impact in their work and helps them find career paths tackling the world’s most pressing problems. We’re keen to expand our team by hiring people who can help with at least one (and hopefully more!) of the following responsibilities:

  • Advising: talking one-on-one to talented and altruistic applicants in order to help them find high-impact careers.
  • Running our headhunting product: working with hiring managers at the most effective organisations to help them find exceptional employees.
  • Improving our systems: building tech-based systems to support our team members.

If you think you’d be interested in taking on more than one of these duties, and enjoy wearing multiple hats in your job, we strongly encourage you to apply. The start dates of these roles are flexible, although we’re likely to prioritise candidates who can start sooner, all else equal.

These roles have starting salaries from £50,000 to £85,000 (depending on skills and experience) and are ideally London-based. We’re able to sponsor visa applications.

About 80,000 Hours

Our mission is to get talented people working on the world’s most pressing problems by providing them with excellent support, advice, and resources on how to do so. We’re also one of the largest sources introducing people to the effective altruism community,

Continue reading →

    Alex Lawsen on avoiding 10 mistakes people make when pursuing a high-impact career

    In this episode of 80k After Hours, Luisa Rodriguez and Alex Lawsen discuss common mistakes people make when trying to do good with their careers, and advice on how to avoid them.

    They cover:

    • Taking 80,000 Hours’ rankings too seriously
    • Not trying hard enough to fail
    • Feeling like you need to optimise for having the most impact now
    • Feeling like you need to work directly on AI immediately
    • Not taking a role because you think you’ll be replaceable
    • Constantly considering other career options
    • Overthinking or over-optimising career choices
    • Being unwilling to think things through for yourself
    • Ignoring conventional career wisdom
    • Doing community work even if you’re not suited to it

    Who this episode is for:

    • People who want to pursue a high-impact career
    • People wondering how much AI progress should change their plans
    • People who take 80,000 Hours’ career advice seriously

    Who this episode isn’t for:

    • People not taking 80k’s career advice seriously enough
    • People who’ve never made any career mistakes
    • People who don’t want to hear Alex say “I said a bunch of stuff, maybe some of it’s true” every time he’s on the podcast

    Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.

    Producer and editor: Keiran Harris
    Audio Engineering Lead: Ben Cordell
    Technical editing: Milo McGuire and Dominic Armstrong
    Additional content editing: Luisa Rodriguez and Katy Moore
    Transcriptions: Katy Moore

    “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

    Continue reading →

    Announcing the new 80,000 Hours career guide

    From 2016 to 2019, 80,000 Hours’ core content was contained in our persistently popular career guide. (You may also remember it as the 80,000 Hours book: 80,000 Hours — Find a fulfilling career that does good).

    Today, we’re re-launching that guide. Among many other changes, in the new version:

    You can read the guide here or start with a 2-minute summary.

    It’s also available as a printed book (you can get a free copy by signing up for our newsletter or buy it on Amazon),

    Continue reading →

      #162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

      Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world’s largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

      But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

      On Mustafa’s telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a ‘narrow path’ between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more — or indeed, even have the option to do so.

      And those are just three of the challenges confronting us. In Mustafa’s view, ‘misaligned’ AI that goes rogue and pursues its own agenda won’t be an issue for the next few years, and it isn’t a problem for the current style of large language models. But he thinks that at some point — in eight, ten, or twelve years — it will become an entirely legitimate concern, and says that we need to be planning ahead.

      In The Coming Wave, Mustafa lays out a 10-part agenda for ‘containment’ — that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

      1. Developing an Apollo programme for technical AI safety
      2. Instituting capability audits for AI models
      3. Buying time by exploiting hardware choke points
      4. Getting critics involved in directly engineering AI models
      5. Getting AI labs to be guided by motives other than profit
      6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it
      7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
      8. Building a self-critical culture in AI labs of openly accepting when the status quo isn’t working
      9. Creating a mass public movement that understands AI and can demand the necessary controls
      10. Not relying too much on delay, but instead seeking to move into a new, somewhat stable equilibrium

      As Mustafa put it, “AI is a technology with almost every use case imaginable,” and in time it will demand that we rethink everything.

      Rob and Mustafa discuss the above, as well as:

      • Whether we should be open sourcing AI models
      • Whether Mustafa’s policy views are consistent with his timelines for transformative AI
      • How people with very different views on these issues get along at AI labs
      • The failed efforts (so far) to get a wider range of people involved in these decisions
      • Whether it’s dangerous for Mustafa’s new company to be training far larger models than GPT-4
      • Whether we’ll be blown away by AI progress over the next year
      • What mandatory regulations governments should be imposing on AI labs right now
      • Appropriate priorities for the UK’s upcoming AI safety summit

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

      Producer and editor: Keiran Harris
      Audio Engineering Lead: Ben Cordell
      Technical editing: Milo McGuire
      Transcriptions: Katy Moore

      Continue reading →

      What the past can tell us about how AI will affect jobs

      The idea this week: AI may be progressing fast — but that doesn’t mean it will rapidly transform the economy in the near term.

      Large language models like ChatGPT are becoming more and more powerful and capable. We’ve already started to see a few examples where human workers are plausibly being replaced by AI.

      As these models get even more skilled, will they substantially replace human workers? What will happen to the labour market?

      To try to answer these questions, I spoke to labour economist Michael Webb for the latest episode of The 80,000 Hours Podcast. He’s worked for Google DeepMind, in the UK government, and at Stanford University.

      Michael argues that new technologies typically take many decades to fully replace specific jobs.

      For example, if you look at two of the biggest general-purpose technologies of the last 150 years — robots and software — it took 30 years from the invention of each technology to get to 50% adoption.

      It took 90 years for the last manual telephone operator to lose their job from automation.

      So if we look to the past as a guide, it may suggest that if AI systems do replace human workers, it will take many decades. But why does it take so long for very obviously useful technologies to be widely adopted? Here are three reasons:

      • Adopting new innovative technologies can take lots of money and time.

      Continue reading →

        US policy master’s degrees

        Working in policy is among the most effective ways to have a positive impact in areas like AI, biosecurity, animal welfare, or global health. Getting a policy master’s degree (e.g. in security studies or public policy) can help you pivot into or accelerate your policy career in the US.

        This two-part overview explains why, when, where, and how to get a policy master’s degree, with a focus on people who want to work in the US federal government. The first half focuses on the “why” and the “when,” as well as alternatives to a policy master’s. The second half considers criteria for choosing where to apply, specific degrees we recommend, how to apply, and how to secure funding. We also recommend this US policy master’s database if you want to compare program options (see also this list of European programs maintained through our job board).

        This information is based on the personal experience of people working on policy in DC for several years, background reading, and conversations with more than two dozen policy professionals.

        Part 1: Why do a master’s if you want to work in policy?

        There are several reasons why you might want to do a master’s if your goal is to work in policy.

        • First, completing a master’s is often (but not always) necessary for advancing in a policy career, depending on the specific institution and role.
        • Second, a master’s helps you build your career capital,

        Continue reading →

        Why you might consider switching careers — and what it takes to do it

        The idea this week: switching careers can be terrifying — but it can also be the key to finding more satisfying and impactful work.

        Trust me — I tested my fit for at least four different career paths before landing where I am now. After a first job in teaching, I explored:

        When I graduated from university with a degree in philosophy, I didn’t know what to do next, but I knew I wanted to find a job that helped others and wasn’t harmful. I looked for roles at nonprofits nearby and ended up getting hired at a special education school.

        I loved many parts of the job and the students I worked with, but when the opportunity arose to get my master’s in special education, I realised I didn’t envision spending my whole career in the field. I had gotten involved with local vegan advocacy and an effective altruism group, and I was curious if there were even more impactful opportunities I could pursue with my career.

        I once thought that most of my impact would come through donating — but a lot of the people I was talking to were discussing the idea that career choice could be even more impactful than charitable giving (especially since teaching wasn’t particularly lucrative in my case).

        Continue reading →

          #161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

          In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people’s jobs and the labour market.

          They cover:

          • The jobs most and least exposed to AI
          • Whether we’ll see mass unemployment in the short term
          • How long it took other technologies like electricity and computers to have economy-wide effects
          • Whether AI will increase or decrease inequality
          • Whether AI will lead to explosive economic growth
          • What we can learn from history, and reasons to think this time is different
          • Career advice for a world of LLMs
          • Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
          • Michael’s take as a musician on AI-generated music
          • And plenty more

          If you’d like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he’s now hiring! Check out Quantum Leap’s website.

          Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

          Producer and editor: Keiran Harris
          Audio Engineering Lead: Ben Cordell
          Technical editing: Simon Monsour and Milo McGuire
          Additional content editing: Katy Moore and Luisa Rodriguez
          Transcriptions: Katy Moore

          Continue reading →

          #160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

          In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.

          They cover:

          • Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
          • Her new book about how we could be the first generation to build a sustainable planet
          • Whether climate change is the most worrying environmental issue
          • How we reduced outdoor air pollution
          • Why Hannah is worried about the state of biodiversity
          • Solutions that address multiple environmental issues at once
          • How the world coordinated to address the hole in the ozone layer
          • Surprises from Our World in Data’s research
          • Psychological challenges that come up in Hannah’s work
          • And plenty more

          Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

          Producer and editor: Keiran Harris
          Audio Engineering Lead: Ben Cordell
          Technical editing: Milo McGuire and Dominic Armstrong
          Additional content editing: Katy Moore and Luisa Rodriguez
          Transcriptions: Katy Moore

          Continue reading →

          Operations management: how I found the right career path for me

          The idea this week: how I learned a lot about my skills by testing my fit for operations work.

          Like a lot of students, I spent much of my final year at university unsure what to do next. Should I pursue further studies, start out on a career path, or something else?

          I was excited about having an impact with my career, and I thought I might be a good fit for policy work — which seemed like a way I could contribute to solving pressing world problems. I figured this would involve further studies, so I looked into applying for graduate school.

          But I was probably deferring too much to my sense of what others thought would be high-impact work, rather than figuring out how I could best contribute over the course of my career. I ended up doing the 80,000 Hours career planning worksheet — and it helped me to generate a longer list of options and questions.

          It pointed me toward something I hadn’t considered: doing something that would help me test my fit for lots of different kinds of work.

          Continue reading →

            #159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

            In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

            Today’s guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, “…the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. … Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

            Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem — it’s also hiring dozens of scientists and engineers to build out the Superalignment team.

            Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains:

            Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on… and I think it’s pretty likely going to work, actually. And that’s really, really wild, and it’s really exciting. It’s like we have this hard problem that we’ve been talking about for years and years and years, and now we have a real shot at actually solving it. And that’d be so good if we did.

            Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.

            The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy — as one person described it, “like using one fire to put out another fire.”

            But Jan’s thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves.

            And there’s an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.

            Jan doesn’t want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.

            Jan thinks it’s so crazy it just might work. But some critics think it’s simply crazy. They ask a wide range of difficult questions, including:

            • If you don’t know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
            • How do you know that these technical problems can be solved at all, even in principle?
            • At the point that models are able to help with alignment, won’t they also be so good at improving capabilities that we’re in the middle of an explosion in what AI can do?

            In today’s interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover:

            • OpenAI’s current plans to achieve ‘superalignment’ and the reasoning behind them
            • Why alignment work is the most fundamental and scientifically interesting research in ML
            • The kinds of people he’s excited to hire to join his team and maybe save the world
            • What most readers misunderstood about the OpenAI announcement
            • The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
            • What the standard should be for confirming whether Jan’s team has succeeded
            • Whether OpenAI should (or will) commit to stop training more powerful general models if they don’t think the alignment problem has been solved
            • Whether Jan thinks OpenAI has deployed models too quickly or too slowly
            • The many other actors who also have to do their jobs really well if we’re going to have a good AI future
            • Plenty more

            Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

            Producer and editor: Keiran Harris
            Audio Engineering Lead: Ben Cordell
            Technical editing: Simon Monsour and Milo McGuire
            Additional content editing: Katy Moore and Luisa Rodriguez
            Transcriptions: Katy Moore

            Continue reading →

            What recent events mean for AI governance career paths

            The idea this week: AI governance careers present some of the best opportunities to change the world for the better that we’ve found.

            Last week, US Senator Richard Blumenthal gave a stark warning during a subcommittee hearing on artificial intelligence.

            He’s become deeply concerned about the potential for an “intelligence device out of control, autonomous, self-replicating, potentially creating diseases, pandemic-grade viruses, or other kinds of evils — purposely engineered by people, or simply the result of mistakes, no malign intention.”

            We’ve written about these kinds of dangers — potentially rising to the extreme of an extinction-level event — in our problem profile on preventing an AI-related catastrophe.

            “These fears need to be addressed, and I think can be addressed,” the senator continued. “I’ve come to the conclusion that we need some kind of regulatory agency.”

            And the senator from Connecticut isn’t the only one:

            • The White House has led a coalition of the top AI companies to coordinate on risk-reducing measures, and they recently announced a joint voluntary commitment to some key safety principles. President Joe Biden and Vice President Kamala Harris have been directly involved in these efforts, with the president himself saying the technology will require “new laws, regulation, and oversight.”
            • Four top companies developing advanced AI systems — Anthropic,

            Continue reading →

              #158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

              Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he’s narrowing his focus once again, this time on making the transition to advanced AI go well.

              In today’s conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.

              (As Holden reminds us, his wife is also the president of one of the world’s top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours’ largest financial supporter.)

              One point he makes is that people are too narrowly focused on AI becoming ‘superintelligent.’ While that could happen and would be important, it’s not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.

              As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.

              In today’s episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:

              • Why we can’t rely on just gradually solving those problems as they come up, the way we usually do with new technologies.
              • What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists.
              • Holden’s case against ‘hardcore utilitarianism’ and what actually motivates him to work hard for a better world.
              • What the ML and AI safety communities get wrong in Holden’s view.
              • Ways we might succeed with AI just by dumb luck.
              • The value of laying out imaginable success stories.
              • Why information security is so important and underrated.
              • Whether it’s good to work at an AI lab that you think is particularly careful.
              • The track record of futurists’ predictions.
              • And much more.

              Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

              Producer: Keiran Harris
              Audio Engineering Lead: Ben Cordell
              Technical editing: Simon Monsour and Milo McGuire
              Transcriptions: Katy Moore

              Continue reading →

              Why many people underrate investigating the problem they work on

              The idea this week: thinking about which world problem is most pressing may matter more than you realise.

              I’m an advisor for 80,000 Hours, which means I talk to a lot of thoughtful people who genuinely want to have a positive impact with their careers. One piece of advice I consistently find myself giving is to consider working on pressing world problems you might not have explored yet.

              Should you work on climate change or AI risk? Mitigating antibiotic resistance or preventing bioterrorism? Preventing disease in low-income countries or reducing the harms of factory farming?

              Your choice of problem area can matter a lot. But I think a lot of people under-invest in building a view of which problems they think are most pressing.

              I think there are three main reasons for this:

              1. They think they can’t get a job working on a certain problem, so the argument that it’s important doesn’t seem relevant.

              I see this most frequently with AI. People think that they don’t have aptitude or interest in machine learning, so they wouldn’t be able to contribute to mitigating catastrophic risks from AI.

              But I don’t think this is true.

              Continue reading →

              #157 – Ezra Klein on existential risk from AI and what DC could do about it

              In Oppenheimer, scientists detonate a nuclear weapon despite thinking there’s some ‘near zero’ chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

              In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

              Today’s guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years. Like many people he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to.

              So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

              Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

              By contrast, he’s pessimistic that it’s possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there’s some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

              From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra’s view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

              In today’s brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

              • Whether it’s desirable to slow down AI research
              • The value of engaging with current policy debates even if they don’t seem directly important
              • Which AI business models seem more or less dangerous
              • Tensions between people focused on existing vs emergent risks from AI
              • Two major challenges of being a new parent

              Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

              Producer: Keiran Harris
              Audio Engineering Lead: Ben Cordell
              Technical editing: Milo McGuire
              Transcriptions: Katy Moore

              Continue reading →

              How many lives does a doctor save? (Part 3)

              This is Part 3 of an updated version of a classic three-part series of 80,000 Hours blog posts. You can also read updated versions of Part 1 and Part 2. You can still read the original version of the series published in 2012.

              It’s fair to say working as a doctor does not look that great so far. In general, the day-to-day work of medicine has played a relatively minor role in why people are living longer and healthier now than they did historically. When we try to quantify the benefit of someone becoming a doctor, the figure gets lower the better the method of estimation, and it is already low enough that a 40-year medical career somewhere like the UK would be roughly on a par with giving $20,000 to a GiveWell top charity in terms of saving lives.

              Yet there is more to say. The tools we have used to arrive at estimates are general, so they are estimating something like the impact of the modal, median, or typical medical career. There are doctors who have plainly done much more good than my estimates of the impact of a typical doctor.

              So, what could a doctor do to really save a lot of lives?

              Doing doctoring better

              What about just being really, really good? Even if the typical doctor’s work makes a worthwhile — but modest and fairly replaceable — contribution,

              Continue reading →

              How many lives does a doctor save? (Part 2)

              This is Part 2 of an updated version of a classic three-part series of 80,000 Hours blog posts. You can also read updated versions of Part 1 and Part 3. You can still read the original version of the series published in 2012.

              In the last post, we saw that although the reasons people live longer and healthier now have more to do with higher living standards than with medical care, medicine still plays a part. If you try to quantify how much medicine contributes to our increased longevity and health, then divide that amount by the number of doctors providing it, you get an estimate that a UK doctor saves ~70 lives over the course of their career.

              Yet this won’t be a good model of how much good you would actually do if you became a doctor in the UK.

              For one thing, the relationship between more doctors and better health is non-linear. Here’s a scatterplot for each country with doctors per capita on the x-axis and DALYs per capita on the y-axis (since you ‘gain’ DALYs for dying young or being sick, less is better):

              The association shows an initial steep decline between 0–50 doctors per 100,000 people, then levels off abruptly and is basically flat when you get to physician densities in richer countries (e.g. the UK has 300 doctors per 100,000 people). Assuming this is causation rather than correlation (more on that later),

              Continue reading →

              How many lives does a doctor save? (Part 1)

              This is Part 1 of an updated version of a classic three-part series of 80,000 Hours blog posts. You can also read updated versions of Part 2 and Part 3. You can still read the original version of the series published in 2012.

              Doctors have a reputation as do-gooders. So when I was a 17-year-old kid wanting to make a difference, it seemed like a natural career path. I wrote this on my medical school application:

              I want to study medicine because of a desire I have to help others, and so the chance of spending a career doing something worthwhile I can’t resist. Of course, Doctors [sic] don’t have a monopoly on altruism, but I believe the attributes I have lend themselves best to medicine, as opposed to all the other work I could do instead.

              They still let me in.

              When I show this to others in medicine, I get a mix of laughs and groans of recognition. Most of them wrote something similar. The impression I get from senior doctors who have to read this stuff is they see it a bit like a toddler zooming around on their new tricycle: a mostly endearing (if occasionally annoying) work in progress. Season them enough with the blood, sweat, and tears of clinical practice, and they’ll generally turn out as wiser, perhaps more cantankerous, but ultimately humane doctors.

              Yet more important than me being earnest — and even me being trite — was that I was wrong.

              Continue reading →

              Hannah Boettcher on the mental health challenges that come with trying to have a big impact

              In this episode of 80k After Hours, Luisa Rodriguez and Hannah Boettcher discuss various approaches to therapy, and how to use them in practice — focusing specifically on people trying to have a big impact.

              They cover:

              • The effectiveness of therapy, and tips for finding a therapist
              • Moral demandingness
              • Internal family systems-style therapy
              • Motivation and burnout
              • Exposure therapy
              • Grappling with world problems and x-risk
              • Perfectionism and imposter syndrome
              • And the risk of over-intellectualising

              Who this episode is for:

              • High-impact focused people who struggle with moral demandingness, perfectionism, or imposter syndrome
              • People who feel anxious thinking about the end of the world
              • 80,000 Hours Podcast hosts with the initials LR

              Who this episode isn’t for:

              • People who aren’t focused on having a big impact
              • People who don’t struggle with any mental health issues
              • Founders of Scientology with the initials LRH

              Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

              Producer: Keiran Harris
              Audio Engineering Lead: Ben Cordell
              Technical editing: Dominic Armstrong
              Content editing: Katy Moore, Luisa Rodriguez, and Keiran Harris
              Transcriptions: Katy Moore

              “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

              Continue reading →

              #156 – Markus Anderljung on how to regulate cutting-edge AI models

              In today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.

              They cover:

              • The need for AI governance, including self-replicating models and ChaosGPT
              • Whether or not AI companies will willingly accept regulation
              • The key regulatory strategies, including licensing, risk assessment, auditing, and post-deployment monitoring
              • Whether we can be confident that people won’t train models covertly and ignore the licensing system
              • The progress we’ve made so far in AI governance
              • The key weaknesses of these approaches
              • The need for external scrutiny of powerful models
              • The emergent capabilities problem
              • Why it really matters where regulation happens
              • Advice for people wanting to pursue a career in this field
              • And much more.

              Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

              Producer: Keiran Harris
              Audio Engineering Lead: Ben Cordell
              Technical editing: Simon Monsour and Milo McGuire
              Transcriptions: Katy Moore

              Continue reading →