#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that also has issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you.

So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, “Oh my god, mind reading is here. Now what?”

Nita Farahany

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

They cover:

  • How close we are to actual mind reading.
  • How hacking neural interfaces could cure depression.
  • How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
  • How close we are to being able to unlock our phones by singing a song in our heads.
  • How neurodata has been used for interrogations, and even criminal prosecutions.
  • The possibility of linking brains to the point where you could experience exactly the same thing as another person.
  • Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Benjamin Todd on the history of 80,000 Hours

The very first office we had was just a balcony in an Oxford College dining hall. It was totally open to the dining hall, so every lunch and dinner time it would be super noisy because it’d be like 200 people all eating below us.

And then I think we just had a bit where we just didn’t have an office, so we worked out of the canteen in the library for at least three months or something. And then it was only after that we moved into this tiny, tiny room at the back of an estate agent’s office in St Clement’s in Oxford.

One of our early donors came and we gave him a tour, and when he came into the office, his first reaction was, “Is this legal?”

Benjamin Todd

In this episode of 80k After Hours — recorded in June 2022 — Rob Wiblin and Benjamin Todd discuss the history of 80,000 Hours.

They cover:

  • Ben’s origin story
  • How 80,000 Hours got off the ground
  • Its scrappy early days
  • How 80,000 Hours evolved
  • Team trips to China and Thailand
  • The choice to set up several programmes rather than focus on one
  • The move to California and back
  • Various mistakes they think 80,000 Hours has made along the way
  • Why Ben left the CEO position
  • And the future of 80,000 Hours

Who this episode is for:

  • People who work on or plan to work on promoting important ideas in a way that’s similar to 80,000 Hours
  • People who work at organisations similar to 80,000 Hours
  • People who work at 80,000 Hours

Who this episode isn’t for:

  • People who, if asked if they’d like to join a dinner at 80,000 Hours where the team reminisce on the good old days, would say, “Sorry, can’t make it — I’m washing my hair that night”

Producer: Keiran Harris
Audio mastering: Ryan Kessler and Ben Cordell

“Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

Continue reading →

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism.

Jeff Sebo

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

They cover:

  • The non-negligible chance that AI systems will be sentient by 2030
  • What AI systems might want and need, and how that might affect our moral concepts
  • What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
  • What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
  • What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
  • The repugnant conclusion and the rebugnant conclusion
  • The experience of trying to build the field of AI welfare
  • What improv comedy can teach us about doing good in the world
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#172 – Bryan Caplan on why you should stop reading the news

If someone were to say, “You’re basically right, but I can cut down 90%; I can still be almost as well informed while reducing the harm,” I think that’s a really obvious position, and I think that one’s almost impossible to argue against. What if you spent half as much time in the news? Would you really be noticeably less informed? No. But would you be less unhappy? At least in the time diary sense, where you are counting the experiences of the day, then I don’t see how you could fail to be more happy as a result of cutting down 50%, with really virtually no change in the level of knowledge that you have, even about the events themselves.

Bryan Caplan

Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It’s common to think that ‘staying informed’ and checking the headlines every day is just what responsible adults do.

But in today’s episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:

  • That it overwhelmingly provides us with information we can’t usefully act on.
  • That it’s very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
  • That it obscures the big picture, falling into the trap of thinking ‘something important happens every day.’
  • That it’s highly addictive, for many people chewing up 10% or more of their waking hours.
  • That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
  • And plenty more.

Bryan and Rob conclude that if you want to understand the world, you’re better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:

  • Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there’s a meaningful chance of it going terribly.
  • Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions.
  • How to allocate resources in space.
  • Bryan’s experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

A note of appreciation for your efforts to help others

The idea this week: it’s incredible how dedicated many of you are to helping others.

One of my favourite parts of working on the one-on-one advising team is getting to see the important work so many people are doing up close. It’s incredibly inspiring to learn about the thoughtful, dedicated steps you’re taking to have an impact. In our conversations, we get to directly express appreciation for each person’s efforts. But we only get to do that for a fraction of readers, and only occasionally.

So I wanted to take this chance to say thank you to all of you working so hard and intentionally to help others. There are countless ways to make a difference — different problems needing solutions and different approaches to tackle them. I can’t speak to nearly all of those here. But I do want to highlight a few examples of work I know many of you are doing that I find deeply admirable.

  • To those working long hours at a challenging job in order to donate a significant portion of your salary to effective organisations — thank you. It’s hard to stay motivated when the work itself doesn’t feel valuable. It’s hard to make time outside a full-time job to thoughtfully decide where your money can do the most good. And it can be tough being surrounded by people with different values who get to directly enjoy the fruits of their labour rather than using it to reduce suffering.

Continue reading →

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time, is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs.

And what I chronicle in Pandora’s Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past.

Alison Young

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

They cover:

  • The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
  • The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years
  • The time the Soviets had a major anthrax leak, and then hid it for over a decade
  • The 1977 influenza pandemic caused by a vaccine trial gone wrong in China
  • The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK
  • Ways we could get more reliable oversight and accountability for these labs
  • And the investigative work Alison’s most proud of

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

New opportunities are opening up in AI governance

The news this week: major new initiatives show governments are taking AI risks seriously — but there’s still a long way to go.

From DC to London and beyond, leaders are waking up to AI. They see potential dangers from the technology on the horizon.

Take the White House. This week, President Joe Biden announced a sweeping new executive order to respond to the risks potentially posed by advanced AI systems, including risks to national security.

The new order includes the following:

  • A requirement for AI labs working on the most powerful models to share information about safety tests and training plans
  • Direction to the National Institute of Standards and Technology to create standards for red teaming and assessing the safety of powerful new AI models
  • Efforts to reduce the risk of AI-related biological threats and to mitigate cybersecurity vulnerabilities
  • Provisions on fraud, privacy, equity, civil rights, workers’ rights, and international coordination

Vice President Kamala Harris also announced the creation of the United States AI Safety Institute this week, which will help evaluate and mitigate dangerous capabilities of AI models.

And the US government is making a big push to hire more AI professionals. They’ve extended the deadline for applying to the Presidential Innovation Fellowship in light of this push.

Continue reading →

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people’s homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that’s the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

That’s something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don’t think that it’s particularly harmful. I don’t think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it’s something that you’re able to smell in so many different parts of these cities.

Santosh Harish

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

They cover:

  • How bad air pollution is for our health and life expectancy
  • The different kinds of harm that particulate pollution causes
  • The strength of the evidence that it damages our brain function and reduces our productivity
  • Whether it was a mistake to switch our attention to climate change and away from air pollution
  • Whether most listeners to this show should have an air purifier running in their house right now
  • Where air pollution in India is worst and why, and whether it’s going up or down
  • Where most air pollution comes from
  • The policy blunders that led to many sources of air pollution in India being effectively unregulated
  • Why indoor air pollution packs an enormous punch
  • The politics of air pollution in India
  • How India ended up spending a lot of money on outdoor air purifiers
  • The challenges faced by foreign philanthropists in India
  • Why Santosh has made the grants he has so far
  • And plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels

One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, “The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?”

I think that’s an interesting thought experiment — and a good one — to say, “Are there cases in which I think that’s justifiable?”

Paul Niehaus

In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — cofounder of GiveDirectly — on the case for giving unconditional cash to the world’s poorest households.

They cover:

  • The empirical evidence on whether giving cash directly can drive meaningful economic growth
  • How the impacts of GiveDirectly compare to USAID employment programmes
  • GiveDirectly vs GiveWell’s top-recommended charities
  • How long-term guaranteed income affects people’s risk-taking and investments
  • Whether recipients prefer getting lump sums or monthly instalments
  • How GiveDirectly tackles cases of fraud and theft
  • The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia
  • The political viability of UBI
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

#168 – Ian Morris on whether deep history says we’re heading for an intelligence explosion

If we carry on looking at these industrialised economies, not thinking about what it is they’re actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn’t.

What we’re doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way.

Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

They cover:

  • Some crazy anomalies in the historical record of civilisational progress
  • Whether we should think about technology from an evolutionary perspective
  • Whether we ought to expect war to make a resurgence or continue dying out
  • Why we can’t end up living like The Jetsons
  • Whether stagnation or cyclical recurring futures seem very plausible
  • What it means that the rate of increase in the economy has been increasing
  • Whether violence is likely between humans and powerful AI systems
  • The most likely reasons for Rob and Ian to be really wrong about all of this
  • How professional historians react to this sort of talk
  • The future of Ian’s work
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption

There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we’re just so early on with alternative proteins and there’s so much white space, it’s actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You’re feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.

Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you’re just growing a bunch of food to then feed a third of the world’s crops directly to animals, where the vast majority of those calories going in are lost to animals existing.

Seren Kell

In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.

They cover:

  • The basic case for alternative proteins, and why they’re so hard to make
  • Why fermentation is a surprisingly promising technology for creating delicious alternative proteins
  • The main scientific challenges that need to be solved to make fermentation even more useful
  • The progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordable
  • How GFI Europe is helping with some of these challenges
  • How people can use their careers to contribute to replacing factory farming with alternative proteins
  • The best part of Seren’s job
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

Continue reading →

#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere

If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space?

That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.

My concern is that if we don’t approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope — and all of a sudden we have, let’s say, autocracies on the global stage are strengthened relative to democracies.

Tantum Collins

In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.

They cover:

  • How AI could strengthen government capacity, and how that’s a double-edged sword
  • How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren’t there
  • To what extent policymakers take different threats from AI seriously
  • Whether the US and China are in an AI arms race or not
  • Whether it’s OK to transform the world without much of the world agreeing to it
  • The tyranny of small differences in AI policy
  • Disagreements between different schools of thought in AI policy, and proposals that could unite them
  • How the US AI Bill of Rights could be improved
  • Whether AI will transform the labour market, and whether it will become a partisan political issue
  • The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
  • What listeners might be able to do to help with this whole mess
  • Panpsychism
  • Plenty more

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future?

Right now, if somebody’s sitting on Mars and you’re going to war against them, it’s very hard to hit them. You don’t have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it’s going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it’s actually very hard to hit you.

So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you’re in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast.

So my general conclusion has been that war looks unlikely on some size scales but not on others.

Anders Sandberg

In today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.

They cover:

  • The epic new book Anders is working on, and whether he’ll ever finish it
  • Whether there’s a best possible world or we can just keep improving forever
  • What wars might look like if the galaxy is mostly settled
  • The impediments to AI or humans making it to other stars
  • How the universe will end a million trillion years in the future
  • Whether it’s useful to wonder about whether we’re living in a simulation
  • The grabby aliens theory
  • Whether civilisations get more likely to fail the older they get
  • The best way to generate energy that could ever exist
  • Black hole bombs
  • Whether superintelligence is necessary to get a lot of value
  • The likelihood that life from elsewhere has already visited Earth
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Continue reading →

We’ve made mistakes in our careers — here’s what we learned

The idea this week: you can learn a lot from mistakes.

In that spirit, we’re sharing six stories of mistakes that staff at 80,000 Hours think they’ve made in their careers.

And if you’re interested in hearing more, we strongly recommend a recent episode of our podcast, 80k After Hours, about 10 mistakes people make when pursuing a high-impact career.

1. Not asking for help

A mistake I have frequently made, and still sometimes do, is not asking for help with applications. I usually feel awkward about others reading my letters or essays or practicing interview questions, and I also don’t want to waste my friends’ time.

But whenever I end up asking for help, it improves my applications significantly, and people are usually happy to help. (I enjoy giving feedback on applications as well!)

Anemone Franz, advisor

2. Ruling out an option too quickly

I first became concerned about risks from artificial intelligence in 2014, when I read Superintelligence. The book convinced me these risks were serious. And more importantly, I couldn’t find persuasive counterarguments at the time.

But because I didn’t have a background in technical fields — I thought of myself as a writer — I concluded there was little I could contribute to the field and mostly worked on other problems.

Now I think this was a mistake.

Continue reading →

#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives

Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already.

And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it’s too late?

Kevin Esvelt

In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.

They cover:

  • Why it makes sense to focus on deliberately released pandemics
  • Case studies of people who actually wanted to kill billions of humans
  • How many people have the technical ability to produce dangerous viruses
  • The different threats of stealth and wildfire pandemics that could crash civilisation
  • The potential for AI models to increase access to dangerous pathogens
  • Why scientists try to identify new pandemic-capable pathogens, and the case against that research
  • Technological solutions, including UV lights and advanced PPE
  • Using CRISPR-based gene drive to fight diseases and reduce animal suffering
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Continue reading →

Building career capital: some new advice on three paths

The idea this week: building career capital is a key part of having an impactful career over the long term — and we have new content about some specific paths you might take.

If you want to do good with your career, we usually don’t recommend trying to have an impact right away. We think most people should spend their early career getting good at something useful.

Here’s some of our new content on specific ways to potentially build career capital:

1. US policy master’s degrees

We recently published an in-depth review of US policy master’s degrees.

Working in policy can be an excellent way to have a positive impact on many top problems, including AI, biosecurity, great power conflict, animal welfare, global health, and more.

The first part details the value of policy master’s degrees with a focus on the US — though some of the information is likely to apply more broadly. We think this is one of the best ways to get career capital for a career in US policy.

The second part covers specifics about how to choose which program to apply to based on reputation and personal fit, advice for preparing your application, and information on how to fund your degree.

2.

Continue reading →

Founder of new projects tackling top problems

In 2010, a group of founders with experience in business, practical medicine, and biotechnology launched a new project: Moderna, Inc.

After witnessing recent groundbreaking research into RNA, they realised there was an opportunity to use this technology to rapidly create new vaccines for a wide range of diseases. But few existing companies were focused on that application.

They decided to found a company. And 10 years later, they were perfectly situated to develop a highly effective vaccine against COVID-19 — in a matter of weeks. This vaccine played a huge role in curbing the pandemic and has likely saved millions of lives.

This illustrates that if you can find an important gap in a pressing problem area and found an organisation that fills this gap, that can be one of the highest-impact things you can do — especially if that organisation can persist and keep growing without you.

Why might founding a new project be high impact?

If you can find an important gap in what’s needed to tackle a pressing problem, and create an organisation to fill that gap, that’s a highly promising route to having a huge impact.

But here are some more reasons it seems like an especially attractive path to us, provided you have a compelling idea and the right personal fit — which we cover in the next section.

First, among the problems we think are most pressing, there are many ideas for new organisations that seem impactful.

Continue reading →

          What you should know about our updated career guide

          The question this week: what are the biggest changes to our career guide since 2017?

          • Read the new and updated career guide here, by our founder Benjamin Todd and the 80,000 Hours team.

          Our 2023 career guide isn’t just a fancy new design — here’s a rundown of how the content has been updated:

          1. Career capital: get good at something useful

          In our previous career guide, we argued that your primary focus should be on building very broadly applicable skills, credentials, and connections — what we called transferable career capital.

          We also highlighted jobs like consulting as a way to get this.

          However, since launching the 2017 version of the career guide, we came to think a focus on transferable career capital might lead you to neglect experience that can be very useful to enter the most impactful jobs — for example, experience working in an AI lab or studying synthetic biology.

          OK, so how should you figure out the best career capital option for you?

          Our new advice: get good at something useful.

          In more depth: choose some valuable skills to learn that are also a good fit for you, then find opportunities that let you practise those skills. And have concrete back-up plans in mind, rather than relying on general ‘transferability.’

          Continue reading →

            #163 – Toby Ord on the perils of maximising the good that you do

            One thing that you can say in general with moral philosophy is that the more extreme theories, which are less in keeping with all of our current moral beliefs, are also less likely to encode the prejudices of our times. We say in the philosophy business that they’ve got more “reformative power” … But that comes with the risk that we will end up doing things that are … intuitively bad or wrong — and that they might actually be bad or wrong. So it’s a double-edged sword… and one would have to be very careful when following theories like that.

            Toby Ord

            Effective altruism is associated with the slogan “do the most good.” On one level, this has to be unobjectionable: What could be bad about helping people more and more?

            But in today’s interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

            Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

            Toby’s top reason not to fully maximise is the following: if the goal you’re aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

            This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.
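            One way to make this concrete is with a small simulation (an illustrative sketch, not taken from the interview — the function name, noise model, and 0.3 scale factor are all assumptions): suppose the thing you truly care about can only be measured through a noisy proxy. Under mild optimisation pressure, picking the option with the best proxy score works well. Under extreme pressure, the "winner" is almost always the option whose measurement error happens to be largest, and the true value you capture collapses.

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            def avg_true_value_of_best_proxy(n_candidates, n_trials=200):
                """Average *true* value of the candidate with the highest *proxy* score."""
                true_vals = rng.normal(size=(n_trials, n_candidates))
                # The proxy is the true value plus heavy-tailed measurement error,
                # so the most extreme proxy scores tend to be driven by error, not value.
                proxy = true_vals + 0.3 * rng.standard_cauchy(size=(n_trials, n_candidates))
                picks = np.argmax(proxy, axis=1)
                return true_vals[np.arange(n_trials), picks].mean()

            mild = avg_true_value_of_best_proxy(10)         # modest optimisation pressure
            extreme = avg_true_value_of_best_proxy(20_000)  # near-maximal optimisation pressure
            print(mild, extreme)
            ```

            In this toy model, selecting the best of a handful of candidates captures real value, while selecting the single best of tens of thousands mostly selects for error — the same shape as the swimmer example that follows.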

            Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

            But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those so costly they were loath to consider them before.

            To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

            Now, if maximising one’s speed at swimming really were the only goal they ought to be pursuing, there’d be no problem with this. But if it’s the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed increases only by a tiny amount, while everything else they were accomplishing drops off a cliff.

            The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

            As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is ‘overfit’ to its data.


            In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

            Toby and Rob also discuss:

            • The rise and fall of FTX and some of its impacts
            • What Toby hoped effective altruism would and wouldn’t become when he helped to get it off the ground
            • What utilitarianism has going for it, and what’s wrong with it in Toby’s view
            • How to mathematically model the importance of personal integrity
            • Which AI labs Toby thinks have been acting more responsibly than others
            • How having a young child affects Toby’s feelings about AI risk
            • Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
            • How Toby ended up being the source of the highest quality images of the Earth from space

            Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

            Producer and editor: Keiran Harris
            Audio Engineering Lead: Ben Cordell
            Technical editing: Simon Monsour
            Transcriptions: Katy Moore

            Continue reading →

            Open positions: 1on1 team

            We’re looking for candidates to join our 1on1 team.

            The 1on1 team at 80,000 Hours talks to people who want to have a positive impact in their work and helps them find career paths tackling the world’s most pressing problems. We’re keen to expand our team by hiring people who can help with at least one (and hopefully more!) of the following responsibilities:

            • Advising: talking one-on-one to talented and altruistic applicants in order to help them find high-impact careers.
            • Running our headhunting product: working with hiring managers at the most effective organisations to help them find exceptional employees.
            • Improving our systems: building tech-based systems to support our team members.

            If you think you’d be interested in taking on more than one of these duties, and enjoy wearing multiple hats in your job, we strongly encourage you to apply. The start dates of these roles are flexible, although we’re likely to prioritise candidates who can start sooner, all else equal.

            These roles have starting salaries from £50,000 to £85,000 (depending on skills and experience) and are ideally London-based. We’re able to sponsor visa applications.

            About 80,000 Hours

            Our mission is to get talented people working on the world’s most pressing problems by providing them with excellent support, advice, and resources on how to do so. We’re also one of the largest sources introducing people to the effective altruism community.

            Continue reading →