What skills or experience are most needed within professional effective altruism in 2018? And which problems are most effective to work on? New survey of organisational leaders.

Read this to see the 2019 data.

Update April 2019: We think that our use of the term ‘talent gaps’ in this post (and elsewhere) has caused some confusion. We’ve written a post clarifying what we meant by the term and addressing some misconceptions that our use of it may have caused. Most importantly, we now think it’s much more useful to talk about specific skills and abilities that are important constraints on particular problems rather than talking about ‘talent constraints’ in general terms. This page may be misleading if it’s not read in conjunction with our clarifications.

What are the most pressing needs in the effective altruism community right now? What problems are most effective to work on? Who should earn to give and who should do direct work? We surveyed managers at organisations in the community to find out their views. These results help to inform our recommendations about the highest impact career paths available.

Our key finding is that for the questions that we asked 12 months ago, the results have not changed very much. This gives us more confidence in our survey results from 2017.

We also asked some new questions, including on the monetary value placed on our priority paths, discount rates on talent, and how current leaders first discovered and got involved in effective altruism.

Continue reading →

New career review on becoming an academic researcher: Highlights on your chances of success, which fields have the highest GRE scores, & having impact outside research

We recently published a new career review on becoming an academic researcher by Jess Whittlestone. It covers issues such as:

  • Entry requirements and what it takes to excel.
  • What are your chances of success?
  • How to maximise your impact within academia.
  • How to assess your personal fit at each stage of your career.
  • Which fields are best to enter?
  • How to establish your career early on, and how to trade off impact against career advancement.
  • Review of the pros and cons of the path.

Check out our career review of academic research →

Here are some extracts from the full profile.

Research isn’t the only way academics can have a large impact

When we think of academic careers, research is what first comes to mind, but academics have many other pathways to impact which are less often considered. Academics can also influence public opinion, advise policy-makers, or manage teams of other researchers to help them be more productive.

If any of these routes might turn out to be a good fit for you, that makes the academic path even more attractive. We’ll sketch out some of these other paths:

1. Public outreach

Peter Singer’s career began in an ordinary enough way for a promising young academic, studying philosophy at Oxford University. But he soon started moving in a different direction from his peers,

Continue reading →

October job board update

Our job board now lists 128 vacancies, with 45 additional opportunities since last month.

If you’re actively looking for a new role, we recommend checking out the job board regularly – when a great opening comes up, you’ll want to maximise your preparation time.

The job board remains a curated list of the most promising positions to apply for that we’re currently aware of. They’re all high-impact opportunities at organisations that are working on some of the world’s most pressing problems:

Check out the job board →

They’re demanding positions, but if you’re a good fit for one of them, it could be your best opportunity to have an impact.

If you apply for one of these jobs, or intend to, please do let us know.

A few highlights from the last month

Continue reading →

    #44 – Paul Christiano on how OpenAI is developing real solutions to the 'AI alignment problem', and his vision of how humanity will progressively hand over decision-making to AI systems

    Paul Christiano is one of the smartest people I know, and this episode has one of the best explanations of why AI alignment matters and how we might solve it. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While it’s challenging at times, I can strongly recommend listening – Paul works on AI himself and has an unusually well-thought-through view of how it will change the world. Even though I’m familiar with Paul’s writing, I felt I was learning a great deal, and am now in a better position to make a difference to the world.

    A few of the topics we cover are:

    • Why Paul expects AI to transform the world gradually rather than explosively and what that would look like
    • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us
    • Why AI systems will probably be granted legal and property rights
    • How an advanced AI that doesn’t share human goals could still have moral value
    • Why machine learning might take over science research from humans before it can do most other tasks
    • In which decade we should expect human labour to become obsolete, and how this should affect your savings plan.

    Here’s a situation we all regularly confront: you want to answer a difficult question, but aren’t quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who are smart enough to figure it out. The bad news is that they disagree.

    If given plenty of time – and enough arguments, counterarguments, and counter-counterarguments between all the experts – should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?

    In other words: does ‘debate’, in principle, lead to truth?

    According to Paul Christiano – researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities – this question is of more than mere philosophical interest. That’s because ‘debate’ is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are.

    It’s a method OpenAI is actively trying to develop, because in the long-term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight.

    If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful.

    But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don’t know that’s the case.

    According to Paul, it’s clear that if the judge is weak enough, there’s no reason that an honest debater would be at an advantage. But the hope is that there is some threshold of competence above which debates tend to converge on more accurate claims the longer they continue.

    Most real world debates are set up under highly suboptimal conditions; judges usually don’t have a lot of time to think about how to best get to the truth, and often have bad incentives themselves. But for AI safety via debate, researchers are free to set things up in the way that gives them the best shot. And if we could understand how to construct systems that converge to truth, we would have a plausible way of training powerful AI systems to stay aligned with our goals.
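
    To make the structure concrete, here’s a toy sketch of the debate setup in Python – the names, type signatures, and judge are placeholders of ours, not OpenAI’s actual code:

    ```python
    from typing import Callable, List

    Agent = Callable[[str, List[str]], str]  # (question, transcript) -> next argument
    Judge = Callable[[str, List[str]], int]  # (question, transcript) -> 1 or 2

    def debate(question: str, agent_1: Agent, agent_2: Agent,
               judge: Judge, rounds: int = 3) -> int:
        """Alternate arguments for a fixed number of rounds, then let the
        judge pick a winner from the question and full transcript alone."""
        transcript: List[str] = []
        for _ in range(rounds):
            transcript.append(agent_1(question, transcript))  # attack or claim
            transcript.append(agent_2(question, transcript))  # defence or rebuttal
        return judge(question, transcript)
    ```

    The open question from above is whether, for a competent enough judge, honesty is the winning strategy in this game.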

    This is our longest interview so far for good reason — we cover a fascinating range of topics:

    • What could people do to shield themselves financially from potentially losing their jobs to AI?
    • How important is it that the best AI safety team ends up in the company with the best ML team?
    • What might the world look like if several states or actors developed AI at the same time (aligned or otherwise)?
    • Would artificial general intelligence grow in capability quickly or slowly?
    • How likely is it that transformative AI is an issue worth worrying about?
    • What are the best arguments against being concerned?
    • What would cause people to take AI alignment more seriously?
    • Concrete ideas for making machine learning safer, such as iterated amplification.
    • What does it mean to say that a crow-like intelligence could be much better at science than humans?
    • What is ‘prosaic AI’?
    • How do Paul’s views differ from those of the Machine Intelligence Research Institute?
    • The importance of honesty for people and organisations
    • What are the most important ways that people in the effective altruism community are approaching AI issues incorrectly?
    • When would an ‘unaligned’ AI nonetheless be morally valuable?
    • What’s wrong with current sci-fi?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    List of 80,000 Hours content from the last 4 months, summary of what was most popular, and plans for future releases.

    Cross-posted from the Effective Altruism Forum.

    Here’s your regular reminder of everything 80,000 Hours has released over the last four months, since our last roundup. If you’d like to get these updates more regularly, you can join our newsletter.

    1. High impact job board

    We’ve done a major redesign of our job board, increasing the number of vacancies listed there from ~20 to over 100. It has doubled its traffic since the first half of the year, and is now one of the five most popular pages on the whole site.

    We’ve released two in-depth articles that should be of special interest to the community:

    1. These are the world’s highest impact career paths according to our research. This is an update of our top recommended careers.
    2. Should you play to your comparative advantage when choosing your career? New theoretical content on the relevance of comparative advantage, and thoughts on how to practically evaluate it.

    We released 9 podcast episodes totalling 19.5 hours, covering lots of key topics in EA in significant depth (in chronological order):

    1. How the audacity to fix things without asking permission can change the world, demonstrated by Tara Mac Aulay
    2. Tanya Singh on ending the operations management bottleneck in effective altruism
    3. Finding the best charity requires estimating the unknowable.

    Continue reading →

    Recent research we’ve published: Our top 10 careers for social impact; Congressional staffing; Comparative advantage; And can you guess which psychology experiments will replicate?

    We recently published a number of new articles that you might have missed if you don’t follow us on social media (Facebook and Twitter) or our research newsletter.

    Probably our most important release for this year is this article summarising many of our key findings since we started in 2011:

    It outlines our new suggested process anyone can use to generate a short-list of high-impact career options given their personal situation.

    It then describes the top five key categories of career we most often recommend, which should produce at least one good option for almost all graduates, and why we’re enthusiastic about them.

    It goes on to list and explain the top 10 “priority paths” we want to draw attention to, because we think they can enable the right person to do a particularly large amount of good for the world.

    Second, if you’re trying to figure out which job is the best fit for you, or how to coordinate with other people – for example, the effective altruism community – you will want to read:

    Third, if you’d like to influence government or work in politics, you should check out our comprehensive review of the pros and cons of being a Congressional Staffer and how to become one:

    Continue reading →

    #43 – Daniel Ellsberg on the creation of nuclear doomsday machines, the institutional insanity that maintains them, & how they could be dismantled

    In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”.

    Daniel Ellsberg – leaker of the Pentagon Papers which helped end the Vietnam War and Nixon presidency – claims in his new book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After attending the film in Washington DC in 1964, he and a military colleague wondered how so many details of the nuclear systems they were constructing had managed to leak to the filmmakers.

    The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today.

    If the system can’t contact military leaders, it checks for signs of a nuclear strike. Should its computers determine that an attack occurred, it would automatically launch all remaining Soviet weapons at targets across the northern hemisphere.

    As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.

    You might think the United States would have a more sensible nuclear launch policy. You’d be wrong.

    As Ellsberg explains based on first-hand experience as a nuclear war planner in the early stages of the Cold War, the notion that only the president is able to authorize the use of US nuclear weapons is a carefully cultivated myth.

    The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.

    The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival.

    Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it.

    Instead, since the 50s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit, and rendering it another pointless menace to humanity.

    Even setting aside the above, the size of the Russian and American nuclear arsenals today makes them doomsday machines of necessity. According to Ellsberg, if these arsenals are ever launched, whether accidentally or deliberately, they would wipe out almost all human life, and all large animals.

    Strategically, the setup is stupid. Ethically, it is monstrous.

    If the US or Russia sent its nuclear arsenal to destroy the other, would it even make sense to retaliate? Ellsberg argues that it doesn’t matter one way or another. The nuclear winter generated by the original attack would be enough to starve to death most people in the aggressor country within a year anyway. Retaliation would just slightly speed up their demise.

    So – how was such a system built? Why does it remain to this day? And how might we shrink our nuclear arsenals to the point they don’t risk the destruction of civilization?

    Daniel explores these questions eloquently and urgently in his book (which everyone should read), and this conversation is a gripping introduction. We cover:

    • Why full disarmament today would be a mistake
    • What are our greatest current risks from nuclear weapons?
    • What has changed most since Daniel was working in and around the government in the 50s and 60s?
    • How well are secrets kept in the government?
    • How much deception is involved within the military?
    • The capacity of groups to commit evil
    • How Hitler was more cautious than America about nuclear weapons
    • What was the risk of the first atomic bomb test?
    • The effect of Trump on nuclear security
    • What practical changes should we make? What would Daniel do if he were elected president?
    • Do we have a reliable estimate of the magnitude of a ‘nuclear winter’?
    • What would be the optimal number of nuclear weapons for the US and its allies to hold?
    • What should we make of China’s nuclear stance? What are the chances of war with China?
    • Would it ever be right to respond to a nuclear first strike?
    • Should we help Russia get better attack detection methods to make them less anxious?
    • How much power do lobbyists really have?
    • Has game theory had any influence over nuclear strategy?
    • Why Gorbachev allowed Russia’s covert biological warfare program to continue
    • Is it easier to help solve the problem from within the government or at outside orgs?
    • What gives Daniel hope for the future?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    American with a science PhD? Get a fast-track into AI and STEM policy by applying for the acclaimed AAAS Science & Technology Fellowship by Nov 1.

    Within just four years of finishing her PhD in biophysics, Jessica Tuchman Mathews was Director of Global Issues for President Carter’s National Security Council. In her first year in the role she helped put together a nuclear non-proliferation pact among 15 countries including the US and the Soviet Union.

    Later in her career, Jessica served as Deputy to the Undersecretary of State for Global Affairs, wrote a weekly column for the Washington Post, and most recently served as President of the Carnegie Endowment for International Peace, an influential Washington-based foreign policy think tank.

    What launched such a successful career? In our conversation with Jessica, she argued it was the AAAS Science & Technology (S&T) Policy Fellowship. Jessica was selected as one of their inaugural fellows in 1973.

    In this article we argue that, for eligible people interested in our top recommended problem areas and S&T policy careers, the AAAS S&T Policy Fellowship is a valuable springboard that could rapidly advance your career, as it did for Jessica.

    The opportunity

    At 80,000 Hours, we think the AAAS Policy Fellowship is one of the best routes into the US Government for people with a STEM or social science PhD, or an engineering master’s and three years of industry experience.

    Policy fellows work within the US Government for one year in policy-related roles relevant to science and technology. Nearly 300 fellows are accepted each year, and almost all of them take assignments within the executive branch,

    Continue reading →

    #42 – Amanda Askell on moral empathy, the value of information & the ethics of infinity

    Consider two familiar moments at a family reunion.

    Our host, Uncle Bill, is taking pride in his barbequing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly think of this as an annoying picky preference. But were it viewed as a moral position rather than a personal preference – as it might be if Becky were instead avoiding meat on religious grounds – it would usually receive a very different reaction.

    An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table: the family mostly think that he has no business in trying to foist his regressive preferences on other people’s personal lives. But if considered not as a matter of personal taste, but rather as a moral position – that Bill genuinely believes he’s opposing mass-murder – his comment might start a serious conversation.

    Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. Across the political spectrum, we’re unable to get inside the mindset of people who express views we disagree with, and see the issue from their point of view.

    A common cause of conflict, as above, is confusion between personal preferences and moral positions. Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.

    One seeming path to progress involves contraception. A lot of people who are anti-abortion are also anti-contraception. But they’ll usually think that abortion is much worse than contraception – so why can’t we compromise and agree to have much more contraception available?

    According to Amanda, a charitable explanation is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.

    So instead of arguing about abortion and contraception, we could discuss the underlying principle that one should advocate for the best possible world, rather than the best probable world. Successfully break down such ethical beliefs, absent political toxicity, and it might be possible to actually figure out why we disagree and perhaps even converge on agreement.

    Today’s episode blends such practical topics with cutting-edge philosophy. We cover:

    • The problem of ‘moral cluelessness’ – our inability to predict the consequences of our actions – and how we might work around it
    • Amanda’s biggest criticisms of social justice activists, and of critics of social justice activists
    • Is there an ethical difference between prison and corporal punishment? Are both or neither justified?
    • How to resolve ‘infinitarian paralysis’ – the inability to make decisions when infinities get involved.
    • What’s effective altruism doing wrong?
    • How should we think about jargon? Are a lot of people who don’t communicate clearly just trying to scam us?
    • How can people be more successful while they remain within the cocoon of school and university?
    • How did Amanda find her philosophy PhD, and how will she decide what to do now?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    #41 – David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher

    With 698 inmates per 100,000 citizens, the U.S. is the world’s leader in incarcerating people. But what effect does this actually have on crime?

    According to David Roodman, Senior Advisor to Open Philanthropy, the marginal effect is zero.

    This stunning rebuke to the American criminal justice system comes from the man Holden Karnofsky called “the gold standard for in-depth quantitative research”. His other investigations include the risk of geomagnetic storms, whether deworming improves health and test scores, and the development impacts of microfinance – all of which we also cover in this episode.

    In his comprehensive review of the evidence, David says the effects of incarceration on crime can be split into three categories: before, during, and after.

    Does having tougher sentences deter people from committing crime? After reviewing studies on gun laws and ‘three strikes’ in California, David concluded that the effect of deterrence is zero.

    Does imprisoning more people reduce crime by incapacitating potential offenders? Here he says yes, noting that crimes like motor vehicle theft have gone up in a way that seems pretty clearly connected with recent Californian criminal justice reforms (though the effect on violent crime is far lower).

    Finally, do the after-effects of prison make you more or less likely to commit future crimes?

    This one is more complicated.

    His literature review suggested that more time in prison made people substantially more likely to commit future crimes when released. But concerned that he was biased towards a comfortable position against incarceration, David did a cost-benefit analysis using both his favoured reading of the evidence and the devil’s advocate view: that there is deterrence and that the after-effects are beneficial.

    For the devil’s advocate position, David used the highest assessment of the harm caused by crime, which suggests a year of prison prevents about $92,000 in crime. But weighed against a lost year of liberty, valued at $50,000, plus the cost of operating prisons, the benefits and costs came out exactly the same.

    So even using the least-favourable cost-benefit valuation of the least favourable reading of the evidence — it just breaks even.
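
    The implied arithmetic is simple enough to check. Here’s a sketch – the operating-cost figure is our inference from “it just breaks even”, not a number given in the episode:

    ```python
    # Devil's advocate cost-benefit per prison-year (all figures US dollars).
    crime_prevented = 92_000  # highest estimate of crime averted, from the episode
    lost_liberty = 50_000     # value placed on a lost year of freedom, from the episode
    operating_cost = 42_000   # our inference: whatever makes the sums break even

    net_benefit = crime_prevented - (lost_liberty + operating_cost)
    print(net_benefit)  # 0 – even on these favourable assumptions, no net gain
    ```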

    The argument for incarceration melts further when you consider the significant crime that occurs within prisons, de-emphasised because of a lack of data and a perceived lack of compassion for inmates.

    In today’s episode we discuss how to conduct such impactful research, and how to proceed having reached strong conclusions.

    We also cover:

    • How do you become a world class researcher? What kinds of character traits are important?
    • Are academics aware that they are following perverse incentives?
    • What’s involved in data replication? How often do papers replicate?
    • The politics of large orgs vs. small orgs
    • How do you decide what questions to research?
    • How concerned should a researcher be with their own biases?
    • Geomagnetic storms as a potential cause area
    • How much does David rely on interviews with experts?
    • The effects of deworming on child health and test scores
    • Is research getting more reliable? Should we have ‘data vigilantes’?
    • What are David’s critiques of effective altruism?
    • What are the pros and cons of starting your career in the think tank world? Do people generally have a high impact?
    • How do we improve coordination across groups, given our evolutionary history?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    Randomised experiment: If you’re genuinely unsure whether to quit your job or break up, then you probably should

    One of my favourite studies ever is ‘Heads or Tails: The Impact of a Coin Toss on Major Life Decisions and Subsequent Happiness’ by economist Steven Levitt of ‘Freakonomics’.

    Levitt recruited tens of thousands of people who were deeply unsure whether to make a big change in their life. After offering some advice on how to make hard choices, those who remained truly undecided were given the chance to use a coin flip to settle the issue. 22,500 did so. Levitt then followed up two and six months later to ask people whether they had actually made the change, and how happy they were out of 10.

    People who faced an important decision and got heads – which indicated they should quit, break up, propose, or otherwise mix things up – were 11 percentage points more likely to do so.

    It’s very rare to get a convincing experiment that can help us answer as general and practical a question as ‘if you’re undecided, should you change your life?’ But this experiment can!

    I wish there were much more social science like this, for example, to figure out whether or not people should explore a wider variety of different jobs during their career (for more on that one see our articles on how to find the right career for you and what job characteristics really make people happy).

    The widely reported headline result was that people who made a change in their life as a result of the coin flip were 0.48 points happier out of 10,

    Continue reading →

    #40 – Katja Grace on forecasting future technology & how much we should trust expert predictions

    Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But how seriously should we take these predictions?

    Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, thinks we should treat such guesses as only weak evidence. But she also says there might be much better ways to forecast transformative technology, and that anticipating such advances could be one of our most important projects.

    Note: Katja’s organisation AI Impacts is currently hiring part- and full-time researchers.

    There’s often pessimism around making accurate predictions in general, and some areas of artificial intelligence might be particularly difficult to forecast.

    But there are also many things we’re now able to predict confidently — like the climate of Oxford in five years — that we no longer give ourselves much credit for.

    Some aspects of transformative technologies could fall into this category. And these easier predictions could give us some structure on which to base the more complicated ones.

    One controversial debate surrounds the idea of an intelligence explosion: how likely is it that there will be a sudden jump in AI capability?

    And one way to tackle this is to investigate a more concrete question: what’s the base rate of any technology having a big discontinuity?

    A significant historical example was the development of nuclear weapons. Over thousands of years, the energy density of explosives didn’t increase by much. Then within a few years, it got thousands of times better. Discovering what leads to such anomalies may allow us to better predict the possibility of a similar jump in AI capabilities.
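
    One rough way to operationalise that base-rate question is to measure each jump in a metric as ‘years of progress at the previous average rate’ and flag jumps above some threshold. A minimal sketch in that spirit – the data and threshold are invented for illustration, and this is not AI Impacts’ actual code:

    ```python
    def discontinuities(years, values, threshold=100):
        """Flag data points whose jump represents more than `threshold`
        years of progress at the previous average rate of improvement."""
        flagged = []
        for i in range(2, len(values)):
            past_rate = (values[i - 1] - values[0]) / (years[i - 1] - years[0])
            if past_rate <= 0:
                continue
            jump_in_years = (values[i] - values[i - 1]) / past_rate
            if jump_in_years > threshold:
                flagged.append(years[i])
        return flagged

    # Relative energy density of explosives: roughly flat for a long time,
    # then nuclear weapons arrive (numbers are illustrative, not real data).
    print(discontinuities([1800, 1900, 1940, 1945], [1.0, 1.2, 1.3, 2000.0]))  # [1945]
    ```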

    Katja likes to compare our efforts to predict AI with those to predict climate change. While both are major problems (though Katja and 80,000 Hours have argued that we should prioritise AI safety), only climate change has prompted hundreds of millions of dollars of prediction research.

    That neglect creates a high impact opportunity, and Katja believes that talented researchers should strongly consider following her path.

    Some promising research questions include:

    • What’s the relationship between brain size and intelligence?
    • How frequently, and when, do technological trends undergo discontinuous progress?
    • What’s the explanation for humans’ radical success over other apes?
    • What are the best arguments for a local, fast takeoff?

    In today’s interview we also discuss:

    • Why is AI Impacts one of the most important projects in the world?
    • How do you structure important surveys? Why do you get such different answers when asking what seem to be very similar questions?
    • How does writing an academic paper differ from posting a summary online?
    • When will unguided machines be able to produce better and cheaper work than humans for every possible task?
    • What’s one of the most likely jobs to be automated soon?
    • Are people always just predicting the same timelines for new technologies?
    • How do AGI researchers differ from other AI researchers in their predictions?
    • What are attitudes to safety research like within ML? Are there regional differences?
    • Are there any other types of experts we ought to talk to on this topic?
    • How much should we believe experts generally?
    • How does the human brain compare to our best supercomputers? How many human brains are worth all the hardware in the world?
    • How quickly has the processing capacity for machine learning problems been increasing?
    • What can we learn from the development of previous technologies in figuring out how fast transformative AI will arrive?
    • What are the best arguments for and against discontinuous development of AI?
    • Comparing our predictions of climate change and AI development
    • How should we measure human capacity to predict generally?
    • How have things changed in the AI landscape over the last 5 years?
    • How likely is an AI explosion?
    • What should we expect from an economy dominated by AI?
    • Should people focus specifically on the early timeline scenarios even if they consider them unlikely?
    • How much influence can people ever have on things that will happen in 20 years? Are there any examples of people really trying to do this?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    Should you play to your comparative advantage when choosing your career?

    “Do the job that’s your comparative advantage” might sound like obvious advice, but it turns out to be more complicated.

    In this article, we sketch a naive application of comparative advantage to choosing between two career options, and show that it doesn’t apply. Then we give a more complex example where comparative advantage comes back into play, and show how it’s different from “personal fit”.

    In brief, we think comparative advantage matters when you’re closely coordinating with a community to fill a limited number of positions, like we are in the effective altruism community. Otherwise, it’s better to do whatever seems highest-impact at the margin.

    In the final section, we give some thoughts on how to assess your comparative advantage, and some mistakes people might be making in the effective altruism community.

    The following are some research notes on our current thoughts, which we’re publishing for feedback. We’re pretty uncertain about many of the findings, and even how to best define the terms, and could easily see ourselves changing our minds if we did more research.

    Reading time: 10 minutes

    When does comparative advantage matter?
    A simple example where it doesn’t

    Here’s a case where you might think comparative advantage applies, but it actually doesn’t. (We’ll define terms more carefully in the next section.)

    Imagine there are two types of role, research and outreach. There are also two people, Carlie and Dave,

    Continue reading →

    #39 – Spencer Greenberg on the scientific approach to solving difficult everyday questions

    Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner?

    Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems.

    Let’s work through one here: how likely is it that you’ll enjoy listening to this episode?

    The first step is to figure out your ‘prior probability’: your estimate of how likely you are to enjoy the interview before getting any further evidence.

    Other than applying common sense, one way to figure this out is ‘reference class forecasting’. That is, looking at similar cases and seeing how often something is true, on average.

    Spencer is our first ever return guest (Dr Anders Sandberg appeared on episodes 29 and 33 – but only because his one interview was so fascinating that we split it into two).

    So one reference class might be: how many Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed so far? Being this specific limits bias in your answer, but with a sample size of just one, you’ll want to add more data points to reduce the variance of your answer (100% and 0% are both too extreme).

    Zooming out, how many episodes of the 80,000 Hours Podcast have you enjoyed? Let’s say you’ve listened to 10, and enjoyed 8 of them. If so, 8 out of 10 might be a reasonable prior.

    If we want a bigger sample we can zoom out further: what fraction of long-form interview podcasts have you ever enjoyed?

    Having done that, you’d need to update whenever new information became available. Do the topics seem more interesting than average? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential?

    In the episode we’ll explain the mathematically correct way to update your beliefs over time as new information comes in: Bayes’ Rule. You take your initial odds, multiply them by a ‘Bayes factor’, and boom – updated probabilities. Once you know the trick, you can even do it in your head. We’ll run through several diverse case studies of updating on evidence.
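
    To make that concrete, here’s a minimal sketch of the odds-form update in Python, using the illustrative 8-out-of-10 prior from above (the Bayes factor of 3 is invented for the example):

    ```python
    def update(prior_prob, bayes_factor):
        """Bayes' Rule in odds form: posterior odds = prior odds * Bayes factor."""
        prior_odds = prior_prob / (1 - prior_prob)
        posterior_odds = prior_odds * bayes_factor
        return posterior_odds / (1 + posterior_odds)  # back to a probability

    prior = 8 / 10  # enjoyed 8 of the last 10 episodes

    # Suppose a great point in the first 5 minutes is 3x as likely if you'll
    # enjoy the episode as if you won't - a Bayes factor of 3 in favour.
    print(f"{update(prior, bayes_factor=3):.0%}")  # 92%: odds go from 4:1 to 12:1
    ```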

    Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we’d invite him back for a second episode?

    Also in this episode:

    • How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
    • What do people actually value? How do EAs differ from non-EAs?
    • Why should we care about the distinction between intrinsic and instrumental values?
    • Should hedonistic utilitarians really want to hook themselves up to happiness machines?
    • What types of activities are people generally under-confident about? Why?
    • When should you give a lot of weight to your existing beliefs?
    • When should we trust common sense?
    • Does power posing have any effect?
    • Are resumes worthless?
    • Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
    • What’s the probability that China and the US go to war in the 21st century?
    • How should we treat claims of expertise on nutrition?
    • Why were Spencer’s friends suspicious of Theranos for years?
    • How should we think about the placebo effect?
    • Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    #38 – Yew-Kwang Ng on anticipating EA decades ago & how to make a much happier world

    Will people who think carefully about how to maximize welfare eventually converge on the same views?

    The effective altruism community has spent the past 10 years debating how best to increase happiness and reduce suffering, and gradually narrowed in on the world’s poorest people, all sentient animals, and future generations.

    Yew-Kwang Ng, Professor of Economics at Nanyang Technological University in Singapore, has worked totally independently on this exact question since the 70s. Many of his early conclusions are now conventional wisdom within effective altruism – though other views he holds remain controversial or little-known.

    For instance, he thinks we ought to explore increasing pleasure via direct brain stimulation, and that genetic engineering may be an important tool for increasing happiness in the future.

    His work has suggested that the welfare of most wild animals is on balance negative and he hopes that in the future this is a problem humanity will work to solve. Yet he thinks that greatly improved conditions for farm animals could eventually justify eating meat.

    And he has spent most of his life forcefully advocating for the view that happiness, broadly construed, is the only intrinsically valuable thing.

    If it’s true that careful researchers will converge as Prof Ng believes, these ideas may prove as prescient as his other, now widely accepted, opinions.

    See below for our summary and appreciation of Kwang’s top publications and insights throughout a lifetime of research.

    Born in Japanese-occupied Malaya during WW2, Kwang has led an exceptional life. While in high school he was drawn to physics, mathematics, and philosophy, yet he chose to study economics because of his dream: to establish communism in an independent Malaysia.

    But events in the Soviet Union and the Chinese ‘cultural revolution’, in addition to his burgeoning knowledge and academic appreciation of economics, would change his views about the practicability of communism. He would soon complete his journey from young revolutionary to academic economist, and eventually become a columnist writing in support of Deng Xiaoping’s Chinese economic reforms in the 80s.

    He got his PhD at Sydney University in 1971, and has since published over 250 peer-reviewed papers – covering economics, biology, politics, mathematics, philosophy, psychology, and sociology, with a particular focus on ‘welfare economics’.

    In 2007, he was made a Distinguished Fellow of the Economic Society of Australia, the highest award the society bestows.

    In this episode we discuss how he developed some of his most unusual ideas and his fascinating life story, including:

    • Why Kwang believes that ‘Happiness Is Absolute, Universal, Ultimate, Unidimensional, Cardinally Measurable and Interpersonally Comparable’
    • What are the most pressing questions in economics?
    • Did Kwang have to worry about censorship from the Chinese government when promoting market economics, or concern for animal welfare?
    • Welfare economics and where Kwang thinks it went wrong
    • The need to move towards a morality based on happiness
    • What are the key implications of Kwang’s views for how a government ought to set its priorities?
    • Could promoting these views accidentally give support to oppressive governments?
    • Why does Kwang think the economics profession as a whole doesn’t agree with him on many things?
    • Why he thinks we should spend much more to prevent climate change, and whether other economists are convinced by his arguments
    • Kwang’s proposed field: welfare biology.
    • Does evolution tend to create happy or unhappy creatures?
    • Novel ways to substantially increase human happiness
    • What would Kwang say to listeners who might want to build on his research in the future?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    #37 – GiveWell picks top charities by estimating the unknowable. James Snowden on how they do it.

    What’s the value of preventing the death of a 5-year-old child, compared to a 20-year-old, or an 80-year-old?

    The global health community has generally regarded the value as proportional to the number of health-adjusted life-years the person has remaining – but GiveWell, one of the world’s foremost charity evaluators, no longer uses that approach. They found that, contrary to the ‘years-remaining’ method, many of their staff actually value preventing the death of an adult more than preventing the death of a young child. But there’s plenty of disagreement, with the team’s estimates spanning a four-fold range.

    As James Snowden – a research consultant at GiveWell – explains in this episode, there’s no way around making these controversial judgement calls based on limited information. If you try to ignore a question like this, you just implicitly take an unreflective stance on it instead. And for each charity they investigate there are one or two dozen of these highly uncertain parameters that need to be estimated.

    GiveWell has been working to find the best way to make these decisions since its inception in 2007. Lives hang in the balance, so they want their staff to say what they really believe and bring whatever private knowledge they have to the table, rather than just defer to their managers, or an imaginary consensus.

    Their strategy is to have a massive spreadsheet that lists dozens of things they need to know, and to ask every staff member to give a figure and justification. Then once a year, the GiveWell team gets together to identify what they really disagree about and think through what evidence it would take to change their minds.

    Often the people who have the greatest familiarity with a particular intervention are the ones who drive the decision, as others choose to defer to them. But the group can also end up with very different answers, based on different prior beliefs about moral issues and how the world works. In that case they use the median of everyone’s best guesses to make their key decisions.
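
    As a minimal sketch of that aggregation step (the parameter and the numbers below are invented for illustration, not GiveWell’s actual figures):

    ```python
    from statistics import median

    # Each staff member's best guess for one uncertain parameter, e.g. the
    # relative value of averting an adult death vs. a child death.
    staff_estimates = {"ana": 1.5, "ben": 2.0, "cara": 4.0, "dev": 6.0}  # 4x range

    # Take the median of everyone's best guesses: robust to a single
    # outlying view, while still letting every private estimate count.
    print(median(staff_estimates.values()))  # 3.0
    ```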

    In making his estimate of the relative badness of dying at different ages, James specifically considered two factors: how many years of life do you lose, and how much interest do you have in those future years? Currently, James believes that the worst time for a person to die is around 8 years of age.

    We discuss his experiences with doing such calculations, as well as various other topics:

    • Why GiveWell’s recommendations have changed more than it appears.
    • What are the biggest research priorities for GiveWell at the moment?
    • How do you take into account the long-term knock-on effects from interventions?
    • If GiveWell’s advice were going to end up being very different in a couple years’ time, how might that happen?
    • Are there any charities that James thinks are really cost-effective which GiveWell hasn’t funded yet?
    • How does domestic government spending in the developing world compare to effective charities?
    • What are the main challenges with policy related interventions?
    • What are the main uncertainties around interventions to reduce pesticide suicide? Are there any other mental health interventions you’re looking at?
    • How much time do you spend trying to discover novel interventions?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    #36 – Tanya Singh on ending the operations management bottleneck in effective altruism

    Almost nobody is able to do groundbreaking physics research themselves, and by the time his brilliance was appreciated, Einstein was hardly limited by funding. But what if you could find a way to unlock the secrets of the universe like Einstein nonetheless?

    Today’s guest, Tanya Singh, sees herself as doing something like that every day. She’s Executive Assistant to one of her intellectual heroes who she believes is making a huge contribution to improving the world: Professor Bostrom at Oxford University’s Future of Humanity Institute (FHI).

    She couldn’t get more work out of Bostrom with extra donations, as his salary is already easily covered. But with her superior abilities as an Executive Assistant, Tanya frees up hours of his time every week, essentially ‘buying’ more Bostrom in a way nobody else can. She also helps manage FHI more generally, in so doing freeing up more than an hour of staff time for each hour she works. This leverage lets her do more good than she could in most other positions.

    In our previous episode, Tara Mac Aulay objected to viewing operations work as predominantly a way of freeing up other people’s time:

    “A good ops person doesn’t just allow you to scale linearly, but also can help figure out bottlenecks and solve problems such that the organization is able to do qualitatively different work, rather than just increase the total quantity”, Tara said.

    Tara’s right that buying time for people at the top of their field is just one path to impact, though it’s one Tanya says she finds highly motivating. Other paths include enabling complex projects that would otherwise be impossible, allowing you to hire and grow much faster, and preventing disasters that could bring down a whole organisation – all things that Tanya does at FHI as well.

    In today’s episode we discuss all of those approaches, as we dive deeper into the broad class of roles we refer to as ‘operations management’. We discuss the arguments we made in ‘Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

    • Does one really need to hire people aligned with an org’s mission to work in ops?
    • The most notable operations successes in the 20th Century.
    • What’s it like being the only operations person in an org?
    • The role of a COO as compared to a CEO, and the options for career progression.
    • How do good operations teams allow orgs to scale quickly?
    • How much do operations staff get to set their org’s strategy?
    • Which personal weaknesses aren’t a huge problem in operations?
    • How do you automate processes? Why don’t most people do this?
    • Cultural differences between Britain and India where Tanya grew up.

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    #35 – Tara Mac Aulay on the audacity to fix the world without asking permission

    How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’.

    At 15 she took her first job – an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs by 30% she was quickly promoted, and at 16 was sent in to overhaul dozens of failing stores in a final effort to save them from closure.

    That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator.

    In this episode – available in audio and summary or transcript below – Tara demonstrates how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious.

    People with an operations mindset spot failures others can’t see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face.

    But as Tara’s experience shows, they need to figure out what actually motivates the authorities who often try to block their reforms.

    We explore how people with this skill set can do as much good as possible, what 80,000 Hours got wrong in our article ‘Why operations management is one of the biggest bottlenecks in effective altruism’, as well as:

    • Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform.
    • How a student can save a hospital millions with a simple spreadsheet model.
    • The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better.
    • What most people misunderstand about operations, and how to tell if you have what it takes.
    • And finally, operations jobs people should consider applying for, such as those open now at the Centre for Effective Altruism.

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →

    #34 – We use the worst voting system that exists. Here's how Aaron Hamlin is going to fix it.

    In 1991 Edwin Edwards won the Louisiana gubernatorial election. In 2001, he was found guilty of racketeering and received a 10-year invitation to federal prison. The strange thing about that election? By 1991 Edwards was already notorious for his corruption. Actually, that’s not it.

    The truly strange thing is that Edwards was clearly the good guy in the race. How is that possible?

    His opponent was former Ku Klux Klan Grand Wizard David Duke.

    How could Louisiana end up having to choose between a criminal and a Nazi sympathiser?

    It’s not like they lacked other options: the state’s moderate incumbent governor Buddy Roemer ran for re-election. Polling showed that Roemer was massively preferred to both the career criminal and the career bigot, and would easily win a head-to-head election against either.

    Unfortunately, in Louisiana every candidate from every party competes in the first round, and the top two then go on to a second – a so-called ‘jungle primary’. Vote splitting squeezed out the middle, and meant that Roemer was eliminated in the first round.

    Louisiana voters were left with only terrible options, in a run-off election mostly remembered for the proliferation of bumper stickers reading “Vote for the Crook. It’s Important.”

    We could look at this as a cultural problem, exposing widespread enthusiasm for bribery and racism that will take generations to overcome. But according to Aaron Hamlin, Executive Director of The Center for Election Science (CES), there’s a simple way to make sure we never have to elect someone hated by more than half the electorate: change how we vote.

    He advocates an alternative voting method called approval voting, in which you can vote for as many candidates as you want, not just one. That means that you can always support your honest favorite candidate, even when an election seems like a choice between the lesser of two evils.
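
    The counting rule itself is almost trivial. A minimal sketch (the ballots are invented, loosely based on the 1991 race above):

    ```python
    from collections import Counter

    # Approval ballots: each voter marks every candidate they approve of.
    ballots = [
        {"Roemer", "Edwards"},
        {"Roemer"},
        {"Roemer", "Duke"},
        {"Edwards"},
        {"Roemer", "Edwards"},
    ]

    # One point per approval; the most-approved candidate wins, so a broadly
    # acceptable moderate can't be squeezed out by vote splitting.
    tally = Counter(c for ballot in ballots for c in ballot)
    print(tally.most_common(1))  # [('Roemer', 4)]
    ```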

    While it might not seem sexy, this single change could transform politics. Approval voting is adored by voting researchers, who regard it as the best simple voting system available. (For whether your individual vote matters, see our article on the importance of voting.)

    Which do they regard as unquestionably the worst? First-past-the-post – precisely the disastrous system used and exported around the world by the US and UK.

    Aaron has a practical plan to spread approval voting across the US using ballot initiatives – and it just might be our best shot at making politics a bit less unreasonable.

    The Center for Election Science is a U.S. nonprofit which aims to fix broken government by helping the world adopt smarter election systems. They recently received a $600,000 grant from Open Philanthropy to scale up their efforts.

    Get this episode now by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or check out the transcript below.

    In this comprehensive conversation Aaron and I discuss:

    • Why hasn’t everyone just picked the best voting system already? Why is this a tough issue?
    • How common is it for voting systems to produce suboptimal outcomes, or even disastrous ones?
    • What is approval voting? What are its biggest downsides?
    • The positives and negatives of different voting methods used globally
    • The difficulties of getting alternative voting methods implemented
    • Do voting theorists mostly agree on the best voting method?
    • Are any unequal voting methods – where those considered more politically informed get a disproportionate say – viable options?
    • Does a lack of general political knowledge from an electorate mean we need to keep voting methods simple?
    • How does voting reform stack up on the 80,000 Hours metrics of scale, neglectedness and solvability?
    • Is there anywhere where these reforms have been tested so we can see the expected outcomes?
    • Do we see better governance in countries that have better voting systems?
    • What about the argument that we don’t want the electorate to have more influence (because of their at times crazy views)?
    • How much does a voting method influence a political landscape? How would a change in voting method affect the two party system?
    • How did the voting system affect the 2016 US presidential election?
    • Is there a concern that changing to approval voting would lead to more extremist candidates getting elected?
    • What’s the practical plan to get voting reform widely implemented? What’s the biggest challenge to implementation?
    • Would it make sense to target areas of the world that are currently experiencing a period of political instability?
    • Should we try to convince people to use alternative voting methods in their everyday lives (when going to the movies, or choosing a restaurant)?
    • What staff does CES need? What would they do with extra funding? What do board members do for a nonprofit?

    The 80,000 Hours podcast is produced by Keiran Harris.

    Continue reading →