The public is more concerned about AI causing extinction than we thought

What does the public think about risks of human extinction?

We care a lot about reducing extinction risks and think doing so is one of the best ways you can have a positive impact with your career. But even before considering career impact, it can be natural to worry about these risks — and as it turns out, many people do!

In April 2023, the US firm YouGov polled 1,000 American adults on how worried they were about nine different potential extinction threats. It found the following percentages of respondents were either “concerned” or “very concerned” about extinction from each threat:

We’re particularly interested in this poll now because we have recently updated our page on the world’s most pressing problems, which includes several of these extinction risks at the top.

Knowing how the public feels about these kinds of threats can impact how we communicate about them.

For example, if we take the results at face value, 46% of the poll’s respondents are concerned about human extinction caused by artificial intelligence. Maybe this surprisingly high figure means we don’t need to worry as much as we have over the last 10 years about sounding like ‘sci-fi’ when we talk about existential risks from AI, since it’s quickly becoming a common concern!

How does our view of the world’s most pressing problems compare?

Continue reading →

Give feedback on the new 80,000 Hours career guide

We’ve spent the last few months updating 80,000 Hours’ career guide (which we previously released in 2017 and which you’ve been able to get as a physical book). This week, we’ve put our new career guide live on our website. Before we formally launch and promote the guide — and republish the book — we’d like to gather feedback from our readers!

How can you help?

First, take a look at the new career guide.

Note that our target audience for this career guide is the ~100,000 young adults in the English-speaking world most likely to have high-impact careers. Many of them may not yet be familiar with the ideas widely discussed in the effective altruism community. The guide is primarily aimed at people aged 18–24.

When you’re ready, there’s a simple form to fill in:

Give feedback

Thank you so much!

Extra context: why are we making this change?

In 2018, we deprioritised 80,000 Hours’ career guide in favour of our key ideas series.

Our key ideas series had a more serious tone, and was more focused on impact. It represented our best and most up-to-date advice. We expected that this switch would reduce engagement time on our site, but that the key ideas series would better appeal to people more likely to change their careers to do good.

Continue reading →

    #152 – Joe Carlsmith on navigating serious philosophical confusion

    …if you really think that there’s a good chance that you’re not understanding things, then something that you could do that at least probably has some shot of helping is to put future generations in a better position to solve these questions — once they have lots of time and hopefully are a whole lot smarter and much more informed than we are…

    Joe Carlsmith

    What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

    Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and as surveys of expert opinion make clear, we are very far from agreement. So… with these most basic questions unresolved, what’s a species to do?

    In today’s episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

    To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity’s self-assured understanding of the world.

    The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn’t identified any particular rebuttal to this ‘simulation argument.’

    If true, it could revolutionise our comprehension of the universe and the way we ought to live.

    The second is the idea that “you can ‘control’ events you have no causal interaction with, including events in the past.” The thought experiment that most persuades him of this is the following:

    Perfect deterministic twin prisoner’s dilemma: You’re a deterministic AI system, who only wants money for yourself (you don’t care about copies of yourself). The authorities make a perfect copy of you, separate you and your copy by a large distance, and then expose you both, in simulation, to exactly identical inputs (let’s say, a room, a whiteboard, some markers, etc.). You both face the following choice: either (a) send a million dollars to the other (“cooperate”), or (b) take a thousand dollars for yourself (“defect”).

    Joe thinks, in contrast with the dominant theory of correct decision-making, that it’s clear you should send a million dollars to your twin. But as he explains, this idea, when extrapolated outwards to other cases, implies that it could be sensible to take actions in the hope that they’ll improve parallel universes you can never causally interact with — or even to improve the past. That is nuts by anyone’s lights, including Joe’s.
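    The structure of the dilemma, and why Joe favours cooperating, can be sketched in a few lines (a toy illustration of ours, not code from the episode):

```python
# Toy sketch of the perfect deterministic twin prisoner's dilemma.
# Because you and your copy are exact duplicates fed identical inputs,
# you necessarily make the same choice, so only the "both cooperate"
# and "both defect" outcomes are actually reachable.

def payoff(my_choice: str, twin_choice: str) -> int:
    """Money I end up with, in dollars."""
    money = 0
    if my_choice == "defect":
        money += 1_000          # I take $1,000 for myself
    if twin_choice == "cooperate":
        money += 1_000_000      # my twin sends me $1,000,000
    return money

# In the deterministic-twin setup, twin_choice always equals my_choice:
outcome_if_cooperate = payoff("cooperate", "cooperate")  # 1_000_000
outcome_if_defect = payoff("defect", "defect")           # 1_000
```

    Since a perfect copy exposed to identical inputs must choose exactly as you do, the mixed outcomes are off the table, and choosing to cooperate leaves you with $1,000,000 rather than $1,000 — even though your choice has no causal effect on your twin's.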

    The third disorienting idea is that, as far as we can tell, the universe could be infinitely large. And that fact, if true, would mean we probably have to make choices between actions and outcomes that involve infinities. Unfortunately, doing that breaks our existing ethical systems, which are only designed to accommodate finite cases.

    In an infinite universe, our standard models end up unable to say much at all, or give the wrong answers entirely. While we might hope to patch them in straightforward ways, Joe has looked into the options and concluded that they all quickly get complicated and arbitrary, and still do enormous violence to our common sense. For people inclined to endorse some flavour of utilitarianism, Joe thinks ‘infinite ethics’ spells the end of the ‘utilitarian dream’ of a moral philosophy that has the virtue of being very simple while still matching our intuitions in most cases.

    These are just three particular instances of a much broader set of ideas that some have dubbed the “train to crazy town.” Basically, if you commit to always take philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

    Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

    In the face of all of this, Joe suggests that there is a promising and robust path for humanity to take: keep our options open and put our descendants in a better position to figure out the answers to questions that seem impossible for us to resolve today — a position he calls “wisdom longtermism.”

    Joe fears that if people believe we understand the universe better than we really do, they’ll be more likely to try to commit humanity to a particular vision of the future, or be uncooperative to others, in ways that only make sense if you were certain you knew what was right and wrong.

    In today’s challenging conversation, Joe and Rob discuss all of the above, as well as:

    • What Joe doesn’t like about the drowning child thought experiment
    • An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
    • What Joe doesn’t like about the expression “the train to crazy town”
    • Whether Elon Musk should place a higher probability on living in a simulation than most other people
    • Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises
    • To what extent learning to doubt our own judgement about difficult questions — so-called “epistemic learned helplessness” — is a good thing
    • How strong the case is that advanced AI will engage in generalised power-seeking behaviour

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Milo McGuire and Ben Cordell
    Transcriptions: Katy Moore

    Continue reading →

    #151 – Ajeya Cotra on accidentally teaching AI models to deceive us

    I don’t know yet what suite of tests exactly you could show me, and what arguments you could show me, that would make me actually convinced that this model has a sufficiently deeply rooted motivation to not try to escape human control. I think that’s, in some sense, the whole heart of the alignment problem.

    And I think for a long time, labs have just been racing ahead, and they’ve had the justification — which I think was reasonable for a while — of like, “Come on, of course these systems we’re building aren’t going to take over the world.” As soon as that starts to change, I want a forcing function that makes it so that the labs now have the incentive to come up with the kinds of tests that should actually be persuasive.

    Ajeya Cotra

    Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don’t get to see any resumes or do reference checks. And because you’re so rich, tonnes of people apply for the job — for all sorts of reasons.

    Today’s guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.

    As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you’re monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.

    Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!

    Can’t we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won’t work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:

    • Saints — models that care about doing what we really want
    • Sycophants — models that just want us to say they’ve done a good job, even if they get that praise by taking actions they know we wouldn’t want them to
    • Schemers — models that don’t care about us or our interests at all, who are just pleasing us so long as that serves their own agenda

    In principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and ‘performs well on tests’ is how these models are selected.

    But while that’s true in principle, maybe it’s not something that could plausibly happen in the real world. After all, if we train an agent based on positive reinforcement for accomplishing X, shouldn’t the training process spit out a model that plainly does X and doesn’t have complex thoughts and goals beyond that?

    According to Ajeya, this is one thing we don’t know, and should be trying to test empirically as these models get more capable. For reasons she explains in the interview, the Sycophant or Schemer models may in fact be simpler and easier for the learning algorithm to creep towards than their Saint counterparts.

    But there are also ways we could end up actively selecting for motivations that we don’t want.

    For a toy example, let’s say you train an agent AI model to run a small business, and select it for behaviours that make money, measuring its success by whether it manages to get more money in its bank account. During training, a highly capable model may experiment with the strategy of tricking its raters into thinking it has made money legitimately when it hasn’t. Maybe instead it steals some money and covers that up. This isn’t exactly unlikely; during training, models often come up with creative — sometimes undesirable — approaches that their developers didn’t anticipate.

    If such deception isn’t picked up, a model like this may be rated as particularly successful, and the training process will cause it to develop a progressively stronger tendency to engage in such deceptive behaviour. A model that has the option to engage in deception when it won’t be detected would, in effect, have a competitive advantage.

    What if deception is picked up, but just some of the time? Would the model then learn that honesty is the best policy? Maybe. But alternatively, it might learn the ‘lesson’ that deception does pay, but you just have to do it selectively and carefully, so it can’t be discovered. Would that actually happen? We don’t yet know, but it’s possible.

    In today’s interview, Ajeya and Rob discuss the above, as well as:

    • How to predict the motivations a neural network will develop through training
    • Whether AIs being trained will functionally understand that they’re AIs being trained, the same way we think we understand that we’re humans living on planet Earth
    • Stories of AI misalignment that Ajeya doesn’t buy into
    • Analogies for AI, from octopuses to aliens to can openers
    • Why it’s smarter to have separate planning AIs and doing AIs
    • The benefits of only following through on AI-generated plans that make sense to human beings
    • What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated
    • How one might demo actually scary AI failure mechanisms

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ryan Kessler and Ben Cordell
    Transcriptions: Katy Moore

    Continue reading →

    How 80,000 Hours has changed some of our advice after the collapse of FTX

    Following the bankruptcy of FTX and the federal indictment of Sam Bankman-Fried, many members of the team at 80,000 Hours were deeply shaken. As we have said, we had previously featured Sam on our site as a positive example of earning to give, a mistake we now regret. We felt appalled by his conduct and at the harm done to the people who had relied on FTX.

    These events were emotionally difficult for many of us on the team, and we were troubled by the implications it might have for our attempts to do good in the world. We had linked our reputation with his, and his conduct left us with serious questions about effective altruism and our approach to impactful careers.

    We reflected a lot, had many difficult conversations, and worked through a lot of complicated questions. There’s still a lot we don’t know about what happened, there’s a diversity of views within the 80,000 Hours team, and we expect the learning process to be ongoing.

    Ultimately, we still believe strongly in the principles that drive our work, and we stand by the vast majority of our advice. But we did make some significant updates in our thinking, and we’ve changed many parts of the site to reflect them. We wrote this post to summarise the site updates we’ve made and to explain the motivations behind them, for transparency purposes and to further highlight the themes that unify the changes.

    Continue reading →

    #150 – Tom Davidson on how quickly AI could transform the world

    By the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold.

    At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from [AIs being able to do] 20% [of cognitive tasks] to 100%, just intuitively. I think that’s kind of the default, really.

    Tom Davidson

    It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.

    For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

    You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”

    But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.
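    To get a feel for the scale of that claim (a back-of-the-envelope calculation of ours, not a figure from Tom's report): a 1,000x improvement within a single year amounts to roughly ten doublings, i.e., capabilities doubling about every five weeks.

```python
import math

yearly_multiplier = 1_000

# Express a 1,000x jump in one year as a number of repeated doublings:
doublings_per_year = math.log2(yearly_multiplier)   # ~9.97 doublings
weeks_per_doubling = 52 / doublings_per_year        # ~5.2 weeks per doubling

print(f"{doublings_per_year:.2f} doublings per year, "
      f"one roughly every {weeks_per_doubling:.1f} weeks")
```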

    As a teaser, consider the following:

    Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

    You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.

    But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.

    And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.

    And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.

    To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.

    Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.


    Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

    Luisa and Tom also discuss:

    • How we might go from GPT-4 to AI disaster
    • Tom’s journey from finding AI risk to be kind of scary to really scary
    • Whether international cooperation or an anti-AI social movement can slow AI progress down
    • Why it might take just a few years to go from pretty good AI to superhuman AI
    • How quickly the number and quality of computer chips we’ve been using for AI have been increasing
    • The pace of algorithmic progress
    • What ants can teach us about AI
    • And much more

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Simon Monsour and Ben Cordell
    Transcriptions: Katy Moore

    Continue reading →

    Some thoughts on moderation in doing good

    Here’s one of the deepest tensions in doing good:

    How much should you do what seems right to you, even if it seems extreme or controversial, vs how much should you moderate your views and actions based on other perspectives?

    If you moderate too much, you won’t be doing anything novel or ambitious, which really reduces how much impact you might have. The people who have had the biggest impact historically often spoke out against entrenched views and were met with hostility — think of the civil rights movement or Galileo.

    Moreover, simply following ethical ‘common sense’ has a horrible track record. It used to be common sense to think that homosexuality was evil, slavery was the natural order, and that the environment was there for us to exploit.

    And there is still so much wrong with the world. Millions of people die of easily preventable diseases, society is deeply unfair, billions of animals are tortured in factory farms, and we’re gambling our entire future by failing to mitigate threats like climate change. These huge problems deserve radical action — while conventional wisdom appears to accept doing little about them.

    On a very basic level, doing more good is better than doing less. But this is a potentially endless and demanding principle, and most people don’t give it much attention or pursue it very systematically. So it wouldn’t be surprising if a concern for doing good led you to positions that seem radical or unusual to the rest of society.

    Continue reading →

    Why we’re adding information security to our list of priority career paths

    Information security could be a top option for people looking to have a high-impact career.

    This might be a surprising claim — information security is a relatively niche field, and it doesn’t typically appear on canonical lists of do-gooder careers.

    But we think there’s an unusually strong case that information security skills (which allow you to protect against unauthorised use, hacking, leaks, and tampering) will be key to addressing problems that are extremely important, neglected, and tractable. We now rank this career among the highest-impact paths we’ve researched.

    In the introduction to our recently updated career review of information security, we discuss how poor information security decisions may have played a decisive role in the 2016 US presidential campaign. If an organisation is big and influential, it needs good information security to ensure that it functions as intended. This is true whether it’s a political campaign, a major corporation, a biolab, or an AI company.

    These last two cases could be quite important. We rank the risks from pandemic viruses and the chances of an AI-related catastrophe among the most pressing problems in the world — and information security is likely a key part of reducing these dangers.

    That’s because hackers and cyberattacks — from a range of actors with varying motives — could try to steal crucial information, such as instructions for making a super-virus or the details of an extremely powerful AI model.

    Continue reading →

    Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame

    You have to ask yourself this question: Could someone have done otherwise?

    Whether it’s murdering someone, or just leaving the butter out: Could they have done otherwise? If the answer is no, then I think that has big implications.

    Keiran Harris

    In this episode of 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.

    They cover:

    • Keiran’s views on free will, and how he came to hold them
    • What it’s like not experiencing sustained guilt, shame, and anger
    • Whether Luisa would become a worse person if she felt less guilt and shame, specifically whether she’d work fewer hours, or donate less money, or become a worse friend
    • Whether giving up guilt and shame also means giving up pride
    • The implications for love
    • The neurological condition ‘Jerk Syndrome’
    • And some practical advice on feeling less guilt, shame, and anger

    Who this episode is for:

    • People sympathetic to the idea that free will is an illusion
    • People who experience tons of guilt, shame, or anger
    • People worried about what would happen if they stopped feeling tons of guilt, shame, or anger

    Who this episode isn’t for:

    • People strongly in favour of retributive justice
    • Philosophers who can’t stand random non-philosophers talking about philosophy
    • Non-philosophers who can’t stand random non-philosophers talking about philosophy

    Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ’80k After Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Milo McGuire
    Transcriptions: Katy Moore

    “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

    Continue reading →

    Are we doing enough to stop the worst pandemics?

    COVID-19 has been devastating for the world. While people debate how the response could’ve been better, it should be easy to agree that we’d all be better off if we could stop any future pandemic before it occurs. But we’re still not taking pandemic prevention very seriously.

    A recent report in The Washington Post highlighted one major danger: some research on potential pandemic pathogens may actually increase the risk, rather than reduce it.

    Back in 2017, we talked about what we thought were several warning signs that something like COVID might be coming down the line. It’d be a big mistake to ignore these kinds of warning signs again.

    It seems unfortunate that so much of the discussion of the risks in this space is backward-looking. The news has been filled with commentary and debates about the chances that COVID accidentally emerged from a biolab or that it crossed over directly from animals to humans.

    We’d appreciate a definitive answer to this question as much as anyone, but there’s another question that matters much more but gets asked much less:

    What are we doing to reduce the risk that the next dangerous virus — which could come from an animal, a biolab, or even a bioterrorist attack — causes a pandemic even worse than COVID-19?

    80,000 Hours ranks preventing catastrophic pandemics as among the most pressing problems in the world.

    Continue reading →

    How much should you research your career?

    In career decisions, we advise that you don’t aim for confidence — aim for a stable best guess.

    Career decisions have a big impact on your life, so it’s natural to want to feel confident in them.

    Unfortunately, you don’t always get this luxury.

    For years, I’ve faced the decision of whether to focus more on writing, organisation building, or something else. And despite giving it a lot of thought, I’ve rarely felt more than 60% confident in one of the options.

    How should you handle these kinds of situations?

    The right response isn’t just to guess, flip a coin, or “follow your heart.”

    It’s still worth identifying your key uncertainties, and doing your research: speak to people, do side projects, learn about each path, etc.

    Sometimes you’ll quickly realise one answer is best. If we plot your confidence against how much research you’ve done, it’ll look like this:
    Deliberation graph

    But sometimes that doesn’t happen. What then?

    Stop your research when your best guess stops changing.

    That might look more like this:
    Deliberation graph

    This can be painful. You might only be 51% confident in your best guess, and it really sucks to have to make a decision when you feel so uncertain.

    But certainty is not always achievable. You might face questions that both (i) are important but (ii) can’t realistically be resolved — which I think is the situation I faced.

    Continue reading →

    #149 – Tim LeBon on how altruistic perfectionism is self-defeating

    What concerns me, potentially, is that idea of “doing the most good”. I think the way that we’re designed as human beings, we’re going to favour ourselves to some extent. We’re going to favour those nearest and dearest to us. Even if logically we should be totally impartial, there’s going to be a bit of our brain that rebels against that, I suspect.

    So having it as an imperative to “do the most good you can” all the time — even if that isn’t actually what is meant, I think some people might take it to be that — then that potentially makes them very vulnerable.

    Tim LeBon

    Being a good and successful person is core to your identity. You place great importance on meeting the high moral, professional, or academic standards you set yourself.

    But inevitably, something goes wrong and you fail to meet that high bar. Now you feel terrible about yourself, and worry others are judging you for your failure. Feeling low and reflecting constantly on whether you’re doing as much as you think you should makes it hard to focus and get things done. So now you’re performing below a normal level, making you feel even more ashamed of yourself. Rinse and repeat.

    This is the disastrous cycle today’s guest, Tim LeBon — registered psychotherapist, accredited CBT therapist, life coach, and author of 365 Ways to Be More Stoic — has observed in many clients with a perfectionist mindset.

    Tim has provided therapy to a number of 80,000 Hours readers — people who have found that the very high expectations they had set for themselves were holding them back. Because of our focus on “doing the most good you can,” Tim thinks 80,000 Hours both attracts people with this style of thinking and then exacerbates it.

    But Tim, having studied and written on moral philosophy, is sympathetic to the idea of helping others as much as possible, and is excited to help clients pursue that — sustainably — if it’s their goal.

    Tim has treated hundreds of clients with all sorts of mental health challenges. But in today’s conversation, he shares the lessons he has learned working with people who take helping others so seriously that it has become burdensome and self-defeating — in particular, how clients can approach this challenge using the treatment he’s most enthusiastic about: cognitive behavioural therapy.

    As Tim stresses, perfectionism isn’t the same as being perfect, or simply pursuing excellence. What’s most distinctive about perfectionism is that a person’s standards don’t vary flexibly according to circumstance, meeting those standards without exception is key to their self-image, and they worry something terrible will happen if they fail to meet them.

    It’s a mindset most of us have seen in ourselves at some point, or have seen people we love struggle with.

    Untreated, perfectionism might not cause problems for many years — it might even seem positive, providing a source of motivation to work hard. But it’s hard to feel truly happy and secure, and free to take risks, when we’re just one failure away from our self-worth falling through the floor. And if someone slips into the positive feedback loop of shame described above, the end result can be depression and anxiety that’s hard to shake.

    But there’s hope. Tim has seen clients make real progress on their perfectionism by using CBT techniques like exposure therapy. By doing things like experimenting with more flexible standards — for example, sending early drafts to your colleagues, even if it terrifies you — you can learn that things will be okay, even when you’re not perfect.

    In today’s extensive conversation, Tim and Rob cover:

    • How perfectionism is different from the pursuit of excellence, scrupulosity, or an OCD personality
    • What leads people to adopt a perfectionist mindset
    • The pros and cons of perfectionism
    • How 80,000 Hours contributes to perfectionism among some readers and listeners, and what it might change about its advice to address this
    • What happens in a session of cognitive behavioural therapy for someone struggling with perfectionism, and what factors are key to making progress
    • Experiments to test whether one’s core beliefs (‘I need to be perfect to be valued’) are true
    • Using exposure therapy to treat phobias
    • How low-self esteem and imposter syndrome are related to perfectionism
    • Stoicism as an approach to life, and why Tim is enthusiastic about it
    • How the Stoic approach to what we can and can’t control can make it far easier to stay calm
    • What the Stoics do better than utilitarian philosophers and vice versa
    • What’s good about being guided by virtues as opposed to pursuing good consequences
    • How to decide which are the best virtues to live by
    • What the ancient Stoics got right from our point of view, and what they got wrong
    • And whether Stoicism has a place in modern mental health practice.

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Simon Monsour and Ben Cordell
    Transcriptions: Katy Moore

    Continue reading →

    #148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t

    I think if we had the level of enthusiasm and public support for all climate solutions that we had for solar and wind, we can definitely solve climate change. It doesn’t mean that I’m sure that any one of those technologies will definitely succeed, but there are enough really good bets worth making.

    And it’s not like solar was cheap in 2000; that was not obviously a great energy solution — there were lots of articles done about renewables never getting cheap, never being a significant part of the energy supply. So I think that’s evidence for how technological change, at a first approximation, is to a large degree the result of decisions.

    Johannes Ackva

    If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no.

    Today’s guest, Johannes Ackva — the climate research lead at Founders Pledge, where he advises major philanthropists on their giving — thinks the best strategy is actually pretty different, and one few are adopting.

    In reality you don’t want to reduce emissions for its own sake, but because emissions will translate into temperature increases, which will cause harm to people and the environment.

    Crucially, the relationship between emissions and harm goes up faster than linearly. As Johannes explains, humanity can handle small deviations from the temperatures we’re familiar with, but adjustment gets harder the larger and faster the increase, making the damage done by each additional degree of warming much greater than the damage done by the previous one.

    In short: we’re uncertain what the future holds and really need to avoid the worst-case scenarios. This means that avoiding an additional tonne of carbon being emitted in a hypothetical future in which emissions have been high is much more important than avoiding a tonne of carbon in a low-carbon world.
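The logic above can be sketched with a toy model. This is purely illustrative (a simple quadratic damage index, not Founders Pledge’s actual model): if damage grows faster than linearly with warming, each extra degree does more harm than the last, which is why avoiding a tonne of carbon matters most in high-emission scenarios.

```python
# Illustrative sketch only: assume a convex (here, quadratic) relationship
# between total warming and harm. The exact functional form is an assumption
# for demonstration, not an empirical estimate.

def damage(temperature_rise_c: float) -> float:
    """Toy damage index: proportional to the square of warming."""
    return temperature_rise_c ** 2

def marginal_damage_of_next_degree(temperature_rise_c: float) -> float:
    """Extra damage done by one additional degree of warming."""
    return damage(temperature_rise_c + 1) - damage(temperature_rise_c)

# Going from 1°C to 2°C of warming adds less damage (3 units) than
# going from 3°C to 4°C (7 units) — the next degree always hurts more.
low_marginal = marginal_damage_of_next_degree(1.0)   # 3.0
high_marginal = marginal_damage_of_next_degree(3.0)  # 7.0
```

Under any convex damage function, the same comparison holds: the marginal harm of emissions is greatest precisely in the worlds where emissions end up highest.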

    That may be, but concretely, how should that affect our behaviour? Well, the future scenarios in which emissions are highest are all ones in which the clean energy technologies that can make a big difference — wind, solar, and electric cars — don’t succeed nearly as much as we are currently hoping and expecting. For some reason or another, they must have hit a roadblock and we continued to burn a lot of fossil fuels.

    In such a future scenario, we can ask what we would wish we had funded now. How could we today buy insurance against the possible disaster that renewables don’t work out?

    Basically, in that case we will wish that we had pursued a portfolio of other energy technologies that could have complemented renewables or succeeded where they failed, such as hot rock geothermal, modular nuclear reactors, or carbon capture and storage.

    If you’re optimistic about renewables, as Johannes is, then that’s all the more reason to relax about scenarios where they work as planned, and focus one’s efforts on the possibility that they don’t.

    To Johannes, another crucial thing to observe is that reducing local emissions in the near term is probably negatively correlated with one’s actual full impact. How can that be?

    If you want to reduce your carbon emissions by a lot and soon, you’ll have to deploy a technology that is mature and being manufactured at scale, like solar and wind.

    But the most useful thing someone can do today to reduce global emissions in the future is to cause some clean energy technology to exist where it otherwise wouldn’t, or cause it to become cheaper more quickly. If you can do that, then you can indirectly affect the behaviour of people all around the world for decades or centuries to come.

    And Johannes notes that in terms of speeding up technological advances and cost reductions, a million dollars spent on a very early-stage technology — one with few, if any, customers — packs a much bigger punch than buying a million dollars’ worth of something customers are already spending $100 billion on per year.

    For instance, back in the early 2000s, Germany subsidised the deployment of solar panels enormously. This did little to reduce carbon emissions in Germany at the time, because the panels were very expensive and Germany is not very sunny. But the programme did a lot to drive commercial R&D and increase the scale of panel manufacturing, which drove down costs and went on to increase solar deployments all over the world. That programme is long over, but continues to have impact by prompting solar deployments today that wouldn’t be economically viable if Germany hadn’t helped the solar industry during its infancy decades ago.

    In today’s extensive interview, host Rob Wiblin and Johannes discuss the above considerations, as well as:

    • Retooling newly built coal plants in the developing world
    • Specific clean energy technologies like geothermal and nuclear fusion
    • Possible biases among environmentalists and climate philanthropists
    • How climate change compares to other risks to humanity
    • In what kinds of scenarios future emissions would be highest
    • In what regions climate philanthropy is most concentrated and whether that makes sense
    • Attempts to decarbonise aviation, shipping, and industrial processes
    • The impact of funding advocacy vs science vs deployment
    • Lessons for climate change focused careers
    • And plenty more

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore

    Continue reading →


    Why journalism could be a high-impact career

    Some of the most promising ways to have a positive impact with a career in journalism include:

    • Encouraging the adoption of good policies or discouraging the adoption of bad policies
      • A single article or reporter is unlikely to be solely responsible for a given policy change, but they can contribute significantly to influential coverage.
    • Acting as a check on bad or dangerous actors in the public arena
      • Public officials and figures can be forced out of their positions as a result of news reporting, and fear of exposure might have a chilling effect on bad acts.
    • Inspiring readers to take specific high-impact actions, like making donations or changing their careers to work on pressing problems
    • Helping to promote positive values, such as respect for the interests of nonhuman animals
    • Supporting social or political movements that are trying to do good — we’re especially excited about journalism that informs people about the ideas of the effective altruism community
      • Also, you can potentially strengthen ideas and communities you agree with by subjecting them to analysis and criticism.
    • Instilling better reasoning skills in readers — often by acting as a model — and keeping the public informed to promote good decision-making
    • Positively shaping the discourse to better prioritise major problems and solutions

    Continue reading →

    What our research has found about AI — and why it matters

    Everyone’s suddenly talking a lot about artificial intelligence — and we have many helpful resources for getting up to speed.

    With the release of GPT-4, Bing, DALL-E, Claude, and many other AI systems, it can be hard to keep track of all the latest developments in artificial intelligence. It can also be hard to keep sight of the big picture: what does this emerging technology actually mean for the world?

    This is a huge topic — and a lot is still unknown. But at 80,000 Hours, we’ve been interested in and concerned about AI for many years, and we’ve researched the issue extensively. Now, even major media outlets are taking seriously the kinds of things we’ve been worried about. Given all the excitement in this area, we wanted to share a round-up of some of our top content and findings about AI from recent years.

    Some of our top articles on AI:

    Continue reading →

    Longtermism: a call to protect future generations

    When the 19th-century amateur scientist Eunice Newton Foote filled glass cylinders with different gases and exposed them to sunlight, she uncovered a curious fact. Carbon dioxide became hotter than regular air and took longer to cool down.

    Remarkably, Foote saw what this momentous discovery meant.

    “An atmosphere of that gas would give our earth a high temperature,” she wrote in 1857.

    Though Foote could hardly have been aware at the time, the potential for global warming due to carbon dioxide would have massive implications for the generations that came after her.

    If we ran history over again from that moment, we might hope that this key discovery about carbon’s role in the atmosphere would inform governments’ and industries’ choices in the coming century. Avoiding carbon emissions altogether probably wasn’t realistic, but they could have prioritised the development of alternatives to fossil fuels much sooner in the 20th century, and we might have prevented much of the destructive climate change that present people are already beginning to live through — which will affect future generations as well.

    We believe it would’ve been much better if previous generations had acted on Foote’s discovery, especially by the 1970s, when climate models were beginning to reliably project the future course of global warming.

    If this seems right, it’s because of a commonsense idea: to the extent that we are able to, we have strong reasons to consider the interests and promote the welfare of future generations.

    Continue reading →

    Preventing an AI-related catastrophe

    I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

    Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity.2 There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. I estimate that there are around 400 people worldwide working directly on this.3 As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

    Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

    Continue reading →

    #147 – Spencer Greenberg on stopping valueless papers from getting into top journals

    You can get your result in a top journal by tricking the reviewers into thinking that it was a valuable or interesting finding when in fact it was essentially a valueless or completely uninteresting finding.

    And this only works if you can trick the peer reviewers, because it’s not like they want to publish everything. Peer reviewers can be brutal; a lot of peer reviewers reject stuff. So unless you’ve tricked them into thinking there’s value when there’s not, this method won’t work. So it has to be pretty subtle.

    Spencer Greenberg

    Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don’t get the same result when they’re repeated.

    Two key reasons are ‘p-hacking’ and ‘publication bias’. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they’re actually not — a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a ‘null result’ never saw the light of day. The resulting phenomenon of publication bias is one we’ve understood for 60 years.
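To see why running many slightly different tests is so dangerous, here’s a minimal simulation (an illustration of the general mechanism, not anything from Spencer’s research). It exploits the fact that under the null hypothesis, p-values are uniformly distributed: if a researcher tries 20 analyses on pure noise and reports only the “best” one, a spurious significant result turns up roughly 64% of the time.

```python
# Minimal p-hacking simulation. Under the null hypothesis, a valid test's
# p-value is uniformly distributed on [0, 1], so we can model each analysis
# as a draw from random.random() — no real data needed for the illustration.
import random

random.seed(0)
ALPHA = 0.05            # conventional significance threshold
N_TESTS = 20            # slightly different analyses tried on the same data
N_SIMULATIONS = 10_000  # repeated "research projects" on pure noise

false_positive_runs = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_TESTS)]
    if min(p_values) < ALPHA:  # report only the most "significant" analysis
        false_positive_runs += 1

observed = false_positive_runs / N_SIMULATIONS
# Analytically: P(at least one false positive) = 1 - (1 - alpha)^n ≈ 0.64
expected = 1 - (1 - ALPHA) ** N_TESTS
```

Each individual test keeps its nominal 5% false positive rate; it’s the freedom to pick the best of many that inflates the error rate more than twelvefold.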

    Today’s repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years.

    He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, “when the original study’s p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference.”

    To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results. (So far they’re two for three.)

    According to Spencer, things are gradually improving. For example, he sees more raw data and experimental materials being shared, which makes it much easier to check the work of other researchers.

    But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren’t yet fully appreciated. One of these Spencer calls ‘importance hacking’: passing off obvious or unimportant results as surprising and meaningful.

    For instance, do you remember the sensational paper that claimed government policy was driven by the opinions of lobby groups and ‘elites,’ but hardly affected by the opinions of ordinary people? Huge if true! It got wall-to-wall coverage in the press and on social media. But unfortunately, the whole paper could only explain 7% of the variation in which policies were adopted. Basically the researchers just didn’t know what made some campaigns succeed while others didn’t — a point one wouldn’t learn without reading the paper and diving into confusing tables of numbers. Clever writing made their result seem more important and meaningful than it really was.

    Another paper Spencer describes claimed to find that people with a history of trauma explore less. That experiment actually featured an “incredibly boring apple-picking game: you had an apple tree in front of you, and you either could pick another apple or go to the next tree. Those were your only options. And they found that people with histories of trauma were more likely to stay on the same tree. Does that actually prove anything about real-world behaviour?” It’s at best unclear.

    Spencer suspects that importance hacking of this kind causes a similar amount of damage as the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it’s far from easy to stop people exaggerating the importance of their work.

    In this wide-ranging conversation, Rob and Spencer discuss the above as well as:

    • When you should and shouldn’t use intuition to make decisions.
    • How to properly model why some people succeed more than others.
    • The difference between what Spencer calls “Soldier Altruists” and “Scout Altruists.”
    • A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found.
    • Spencer’s experiment to see whether a 15-minute intervention could make people more likely to sustain a new habit two months later.
    • The most common way for groups with good intentions to turn bad and cause harm.
    • And Spencer’s low-guilt approach to a fulfilling life and doing good, which he calls “Valuism.”

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Milo McGuire
    Transcriptions: Katy Moore

    Continue reading →

    In which career can you make the biggest contribution?

    One of the most common career paths for people who want to do good is healthcare. So we worked with a doctor, Greg Lewis, to estimate the number of lives saved by a typical clinical doctor in the UK. Greg estimated that the average doctor enables the people they treat to live several hundred years of extra healthy life over the course of their career — equivalent to saving several lives.
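The conversion from “years of extra healthy life” to “lives saved” is just division, sketched below with purely illustrative figures (both numbers are assumptions for demonstration, not Greg Lewis’s actual estimates).

```python
# Illustrative arithmetic only — the inputs below are hypothetical placeholders,
# not the estimates from the article.
HEALTHY_YEARS_ENABLED = 600   # hypothetical career total for one doctor
YEARS_PER_LIFE_SAVED = 80     # hypothetical healthy years gained per life saved

# "Several hundred years of extra healthy life" translates to a handful of
# lives saved once you divide by the years a saved life gains.
lives_saved_equivalent = HEALTHY_YEARS_ENABLED / YEARS_PER_LIFE_SAVED  # 7.5
```

Whatever the exact inputs, the structure of the estimate is the same: total healthy life enabled, divided by the healthy life a single averted death represents.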

    This is a lot of impact compared to most jobs, but it’s less than many expect (and we think less than many of the careers we recommend most highly).

    One reason is that issues like health in rich countries already receive a (relatively) large amount of attention.

    In this article, we’ll touch on another reason: the impact of a clinical doctor is limited by the number of people they can treat with their own two hands, which puts a cap on the potential size of their contribution.

    For instance, Greg decided to switch from clinical medicine to research into health policy, since an improvement to key government policies could affect millions of people — far more than he could ever treat himself.

    This illustrates a broader point: careers that do good are often associated with certain job titles — doctor, teacher, charity worker, and so on. Intuitively, people group careers into those that ‘help’ and everything else.

    But your job title isn’t what matters —

    Continue reading →

    Why you should think about virtues — even if you’re a consequentialist

    The idea this week: virtues are helpful shortcuts for making moral decisions — but think about consequences to decide what counts as a virtue.

    Your career is really ethically important, but it’s not a single, discrete choice. To build a high-impact career you need to make thousands of smaller choices over many years — to take on this particular project, to apply for that internship, to give this person a positive reference, and so on.

    How do you make all those little decisions?

    If you want to have an impact, you hope to make the decisions that help you have a bigger impact rather than a smaller one. But you can’t go around explicitly estimating the consequences of all the different possible actions you could take — not only would that take too long, you’d probably get it wrong most of the time.

    This is where the idea of virtues — lived moral traits like courage, honesty, and kindness — can really come in handy. Instead of calculating out the consequences of all your different possible actions, try asking yourself, “What’s the honest thing to do? What’s the kind thing to do?”

    A few places I find ‘virtues thinking’ motivating and useful:

    • When I am facing a difficult work situation, I sometimes ask myself, “What virtue is this an opportunity to practise?”

    Continue reading →