Open position: Recruiter

The role

You’ll be managed by Sashika Coxhead, our Head of Recruiting, and will have the opportunity to work closely with hiring managers from other teams.

Initial responsibilities will include:

  • Project management of active recruiting rounds. For example, overseeing the candidate pipeline and logistics of hiring rounds, making decisions on initial applications, and managing candidate communications.
  • Sourcing potential candidates. This might include generating leads for specific roles, publicising new positions, reaching out to potential candidates, and answering any questions they have about working at 80,000 Hours.
  • Taking on special projects to improve our recruiting systems. For example, you might help to build an excellent applicant tracking system, test ways to improve our ability to generate leads, or introduce strategies to make our hiring rounds more efficient.

Depending on your skills and interests, you might also:

  • Take ownership of a particular area of our recruiting process, e.g. proactive outreach to potential candidates, our applicant tracking system, or metrics for the recruiting team’s success.
  • Conduct screening interviews where needed, to assess applicants’ fit for particular roles at 80,000 Hours.

After some time in the role, we’d hope for you to sit on internal hiring committees. This involves forming an inside view on candidates’ performance; discussing uncertainties with the hiring manager and committee; and, with the other committee members, giving final approval on who to make offers to.

Continue reading →

    Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities?

    We’ve argued that preventing an AI-related catastrophe may be the world’s most pressing problem, and that while progress in AI over the next few decades could have enormous benefits, it could also pose severe, possibly existential risks. As a result, we think that working on some technical AI research — research related to AI safety — may be a particularly high-impact career path.

    But there are many ways of approaching this path that involve researching or otherwise advancing AI capabilities — meaning making AI systems better at some specific skills — rather than only doing things that are purely in the domain of safety. In short, this is because:

    • Capabilities work and some forms of safety work are intertwined.
    • Many of the available ways to learn enough about AI to contribute to safety involve capabilities-enhancing roles.

    So if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?

    We think this is a hard question! Capabilities-enhancing roles could be beneficial or harmful. For any role, there are a range of considerations — and reasonable people disagree on whether, and in what cases, the risks outweigh the benefits.

    So we asked the 22 people we thought would be most informed about this issue — and who we knew had a range of views —

    Continue reading →

    #138 – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

    What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.

    The question is a classic that makes for great dorm-room philosophy discussion. But it’s hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.

    Today’s guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.

    That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations.

    Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.

    As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism’ — has been one of the most enduringly popular ideas in ethics.

    And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?

    Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason, the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.”

    So what convinces Sharon that philosophical hedonism deserves another go?

    Stepping back for a moment, any answer to the question “What has intrinsic value?” faces a serious challenge: “How do we know?” It’s far from clear how something having intrinsic value can cause us to believe that it has intrinsic value. And if there’s no causal or rational connection between something being valuable and our believing that it has value, we could only get the right answer by some extraordinary coincidence. You may feel it’s intrinsically valuable to treat people fairly, but maybe there’s just no reason to trust that intuition.

    Since the 1700s, many philosophers working on so-called ‘metaethics’ — that is, the study of what ethical claims are and how we could know if they’re true — have despaired of us ever making sense of or identifying the location of ‘objective’ or ‘intrinsic’ value. They conclude that when we say things are ‘good,’ we aren’t really saying anything about their nature, but rather just expressing our own attitudes, or intentions, or something else.

    Sharon disagrees. She says the answer to all this has been right under our nose all along.

    We have a concept of value because of our experiences of positive sensations — sensations that immediately indicate to us that they are valuable and that if someone could create more of them, they ought to do so. Similarly, we have a concept of badness because of our experience of suffering — sensations that scream to us that if suffering were all there were, it would be a bad thing.

    How do we know that pleasure is valuable, and that suffering is the opposite of valuable? Directly!

    While I might be mistaken that a painting I’m looking at is in real life as it appears to me, I can’t be mistaken about the nature of my perception of it. If it looks red to me, it may or may not be red, but it’s definitely the case that I am perceiving redness. Similarly, while I might be mistaken that a painting is intrinsically valuable, I can’t be mistaken about the pleasurable sensations I’m feeling when I look at it, and the fact that among other qualities those sensations have the property of goodness.

    While intuitive on some level, this arguably implies some very strange things. Most famously, the philosopher Robert Nozick challenged it with the idea of an ‘experience machine’: if you could enter into a simulated world and enjoy a life far more pleasurable than the one you experience now, should you do so, even if it would mean none of your accomplishments or relationships would be ‘real’? Nozick and many of his colleagues thought not.

    The idea has also been challenged for failing to value human freedom and autonomy for its own sake. Would it really be OK to kill one person to use their organs to save the lives of five others, if doing so would generate more pleasure and less suffering? Few believe so.

    In today’s interview, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes these counterarguments are misguided. A philosophical hedonist shouldn’t get in an experience machine, nor override an individual’s autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so.

    Host Rob Wiblin and Sharon cover all that, as well as:

    • The essential need to disentangle intrinsic, instrumental, and other sorts of value
    • Why Sharon’s arguments lead to hedonistic utilitarianism rather than hedonistic egoism (in which we only care about our own feelings)
    • How people react to the ‘experience machine’ thought experiment when surveyed
    • Why hedonism recommends often thinking and acting as though it were false
    • Whether it’s crazy to think that relationships are only useful because of their effects on our subjective experiences
    • Whether it will ever be possible to eliminate pain, and whether doing so would be desirable
    • Whether, if we didn’t have positive or negative experiences, we would simply never talk about goodness and badness
    • Whether the plausibility of hedonism is affected by our theory of mind
    • And plenty more

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore

    Continue reading →

    Kuhan Jeyapragasan on effective altruism university groups

    In this episode of 80k After Hours, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.

    From 2015 to 2020, Kuhan did an undergrad and then a master’s in maths and computer science at Stanford — and did a lot to organise and improve the EA group on campus.

    Rob and Kuhan cover:

    • The challenges of making a group appealing to and accepting of everyone
    • The concrete things Kuhan did to grow the successful Stanford EA group
    • Whether local groups are turning off some people who should be interested in effective altruism, and what they could do differently
    • Lessons Kuhan learned from Stanford EA
    • The Stanford Existential Risks Initiative (SERI)

    Who this episode is for:

    • People already involved in EA university groups
    • People interested in getting involved in EA university groups

    Who this episode isn’t for:

    • People who’ve never heard of ‘effective altruism groups’
    • People who’ve never heard of ‘effective altruism’
    • People who’ve never heard of ‘university’

    Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore

    “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

    Continue reading →

    ‘S-risks’

    People working on suffering risks or s-risks attempt to reduce the risk of something causing vastly more suffering than has existed on Earth so far. We think research to work out how to mitigate these risks might be particularly important. You may also be able to do important work by building this field, which is currently highly neglected — with fewer than 50 people working on this worldwide.

    Continue reading →

    #137 – Andreas Mogensen on whether effective altruism is just for consequentialists

    Effective altruism, in a slogan, aims to ‘do the most good.’ Utilitarianism, in a slogan, says we should act to ‘produce the greatest good for the greatest number.’ It’s clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism?

    Today’s guest, Andreas Mogensen — senior research fellow at Oxford University’s Global Priorities Institute — does reject utilitarianism, but as he explains, this does little to dampen his enthusiasm for effective altruism.

    Andreas leans towards ‘deontological’ or rule-based theories of ethics, rather than ‘consequentialist’ theories like utilitarianism which look exclusively at the effects of a person’s actions.

    Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially.

    However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger’s wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do.

    In a world as full of preventable suffering as our own, this simple ‘principle of beneficence’ is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance.

    As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one’s income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they’ll get the biggest ‘bang for buck’.

    In a world as unequal as our own, this pledge at a very minimum gives an upper-middle-class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient gets from that money is likely far greater than what the donor could get by spending it on themselves.

    What arguments could a non-utilitarian moral theory mount against such giving?

    Perhaps it could interfere with the achievement of other important moral goals? In response, Andreas notes that alleviating the suffering of people in severe poverty is itself an important goal that deserves to compete with those alternatives. Furthermore, giving 10% is not so much that it is likely to disrupt one’s ability to, for instance, care for oneself or one’s family, or participate in domestic politics.

    Perhaps it involves the violation of important moral prohibitions, such as those on stealing or lying? In response, Andreas points out that the activities advocated by effective altruism researchers almost never violate such prohibitions — and if a few do, one can simply rule out those options and choose among the rest.

    Many approaches to morality will say it’s permissible not to give away 10% of your income to help others as effectively as possible. But since almost all of them regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, Andreas argues they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless.

    In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover:

    • Should we treat philosophical thought experiments that feature very large numbers with great suspicion?
    • If the only way to avoid interrupting the broadcast of the football World Cup final to the world were to allow someone to die, would that be permissible? If not, what might that imply?
    • What might a virtue ethicist regard as ‘doing the most good’?
    • If a deontological theory of morality parted ways with common effective altruist practices, where would it be most likely to do so?
    • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view?

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Beppe Rådvik
    Transcriptions: Katy Moore

    Continue reading →

    Should you go to law school in the US to have a high-impact career?

    This article outlines considerations for readers deciding whether to get a US law degree, explains how it might help them have a positive impact, and describes the law school experience. It is based on public resources about law school and careers in law and policy, as well as the experiences of people who have recently attended or considered attending law school. Consider this post a series of “best guesses” about how to approach the decision about whether to attend law school, rather than a definitive guide or empirical study.

    I. Why law school could be an important step in a promising career path

    The most common jobs for lawyers are probably not the most promising from the perspective of doing as much good as you can. A large share of US lawyers, especially graduates of top-ranked law schools, work in large private law firms, organising corporate transactions or defending large companies from lawsuits.

    But law school can be a promising route into high-impact careers in policy and government, especially for people interested in eventually holding senior roles for which formal credentials are highly valued.

    Since we believe working in policy to address some of the most pressing problems in the world is among the most promising career paths for people who want to have a high positive impact, law school may be a particularly appealing place to start, especially for people who are early in their career. Policy work in the US is a particularly high-priority path,

    Continue reading →

    Andrés Jiménez Zorrilla on the Shrimp Welfare Project

    In this episode of 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It’s the first project in the world focused on shrimp welfare specifically and now has six full-time staff.

    They cover:

    • The evidence for shrimp sentience
    • How farmers and the public feel about shrimp
    • The scale of the problem
    • What shrimp farming looks like
    • The killing process, and other welfare issues
    • Shrimp Welfare Project’s strategy
    • History of shrimp welfare work
    • What it’s like working in India and Vietnam
    • How to help

    Who this episode is for:

    • People who care about animal welfare
    • People interested in new and unusual problems
    • People open to shrimp sentience

    Who this episode isn’t for:

    • People who think shrimp couldn’t possibly be sentient
    • People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again

    Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Ryan Kessler
    Transcriptions: Katy Moore

    “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

    Continue reading →

    What could an AI-caused existential catastrophe actually look like?

    At 5:29 AM on July 16, 1945, deep in the Jornada del Muerto desert in New Mexico, the Manhattan Project carried out the world’s first successful test of a nuclear weapon.

    From that moment, we’ve had the technological capacity to wipe out humanity.

    But if you asked someone in 1945 to predict exactly how this risk would play out, they would almost certainly have got it wrong. They might have expected more widespread use of nuclear weapons in World War II. They certainly would not have predicted the fall of the USSR 45 years later. Current experts are concerned about India–Pakistan nuclear conflict and North Korean state action, but 1945 was before even the partition of India or the Korean War.

    That is to say, you’d have real difficulty predicting anything about how nuclear weapons would be used. It would have been even harder to make these predictions in 1933, when Leo Szilard first realised that a nuclear chain reaction of immense power could be possible, without any concrete idea of what these weapons would look like.

    Despite this difficulty, you wouldn’t be wrong to be concerned.

    In our problem profile on AI, we describe a very general way in which advancing AI could go wrong. But there are lots of specifics we can’t know much about at this point.

    Continue reading →

    #136 – Will MacAskill on what we owe the future

    1. People who exist in the future deserve some degree of moral consideration.
    2. The future could be very big, very long, and/or very good.
    3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
    4. So trying to make the world better for future generations is a key priority of our time.

    This is the simple four-step argument for ‘longtermism’ put forward in What We Owe The Future, the latest book from today’s guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

    From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

    Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

    But Will is upfront that longtermism is also counterintuitive. To start with, he’s willing to contemplate timescales far beyond what’s typically discussed:

    If we last as long as a typical mammal species, that’s another 700,000 years. If we last until the Earth is no longer habitable, that’s hundreds of millions of years. If we manage one day to take to the stars and build a civilisation there, we could live for hundreds of trillions of years. […] Future people [could] outnumber us a thousand or a million or a trillion to one.

    A natural objection to thinking millions of years ahead is that it’s hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn’t matter how important something might be if you can’t predictably change it.

    This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

    But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but all future generations.

    He highlights two effects that could be very enduring: “…reducing risks of extinction of human beings or of the collapse of civilisation, and ensuring that the values and ideas that guide future society are better ones rather than worse.”

    The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren’t coming back.

    But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

    In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

    For thousands of years, almost everyone — from philosophers to slaves themselves — regarded slavery as acceptable in principle. At the time the British Empire ended its participation in the slave trade, the industry was booming and earning enormous profits. It’s estimated that abolition cost Britain 2% of its GDP for 50 years.

    So why did it happen? The global abolition movement seems to have originated within the peculiar culture of the Quakers, who were the first to argue slavery was unacceptable in all cases and campaign for its elimination, gradually convincing those around them with both Enlightenment and Christian arguments. If a few such moral pioneers had fallen off their horses at the wrong time, maybe the abolition movement never would have gotten off the ground and slavery would remain widespread today.

    If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don’t eliminate a bad practice now, it may be with us forever. In today’s in-depth conversation, we discuss the possibility of a harmful moral ‘lock-in’ as well as:

    • How Will was eventually won over to longtermism
    • The three best lines of argument against longtermism
    • How to avoid moral fanaticism
    • Which technologies or events are most likely to have permanent effects
    • What ‘longtermists’ do today in practice
    • How to predict the long-term effect of our actions
    • Whether the future is likely to be good or bad
    • Concrete ideas to make the future better
    • What Will donates his money to personally
    • Potatoes and megafauna
    • And plenty more

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    Continue reading →

    Do recent breakthroughs mean transformative AI is coming sooner than we thought?

    Is transformative AI coming sooner than we thought?

    It seems like it probably is, which would mean that work to ensure this transformation goes well (rather than disastrously) is even more urgent than we thought.

    In the last six months, there have been some shocking AI advances.

    These advances caused the live forecast on Metaculus for when “artificial general intelligence” will arrive to plunge — the median declined 15 years, from 2055 to 2040.

    You might think this was due to random people on the internet over-updating on salient evidence, but if you put greater weight on the forecasters who have made the most accurate forecasts in the past, the decline was still 11 years.

    Last year, Jacob Steinhardt commissioned professional forecasters to make a five-year forecast on three AI capabilities benchmarks. His initial impression was that the forecasts were aggressive, but one year in, actual progress was ahead of predictions on all three benchmarks.

    Particularly shocking were the results on a benchmark of difficult high school maths problems. The state-of-the-art model leapt from a score of 7% to 50% in just one year — more than five years of predicted progress. (And these questions were hard — e.g.

    Continue reading →

    #135 – Samuel Charap on key lessons from five months of war in Ukraine

    After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we’re in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over.

    So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

    As Sam lays out, Russia controls much of Ukraine’s east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter.

    Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict.

    In today’s brisk conversation, Rob and Sam cover the following topics:

    • Current territorial control and the level of attrition within Russia’s and Ukraine’s military forces.
    • Russia’s current goals.
    • Whether Sam’s views have changed since March on topics like: Putin’s motivations, the wisdom of Ukraine’s strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends.
    • Why so many people incorrectly expected Russia to fully mobilise for war or to persist with its original approach to the invasion.
    • Whether there’s anything to learn from many of our worst fears — such as the use of bioweapons on civilians — not coming to pass.
    • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires).
    • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it’s still a long shot.

    Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    Continue reading →