Kuhan Jeyapragasan on effective altruism university groups

In this episode of 80k After Hours, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.

From 2015 to 2020, Kuhan did an undergrad and then a master’s in maths and computer science at Stanford — and did a lot to organise and improve the EA group on campus.

Rob and Kuhan cover:

  • The challenges of making a group appealing to, and accepting of, everyone
  • The concrete things Kuhan did to grow the successful Stanford EA group
  • Whether local groups are turning off some people who should be interested in effective altruism, and what they could do differently
  • Lessons Kuhan learned from Stanford EA
  • The Stanford Existential Risks Initiative (SERI)

Who this episode is for:

  • People already involved in EA university groups
  • People interested in getting involved in EA university groups

Who this episode isn’t for:

  • People who’ve never heard of ‘effective altruism groups’
  • People who’ve never heard of ‘effective altruism’
  • People who’ve never heard of ‘university’

Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

“Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

Continue reading →

‘S-risks’

People working on suffering risks or s-risks attempt to reduce the risk of something causing vastly more suffering than has existed on Earth so far. We think research to work out how to mitigate these risks might be particularly important. You may also be able to do important work by building this field, which is currently highly neglected — with fewer than 50 people working on this worldwide.

Continue reading →

#137 – Andreas Mogensen on whether effective altruism is just for consequentialists

Effective altruism, in a slogan, aims to ‘do the most good.’ Utilitarianism, in a slogan, says we should act to ‘produce the greatest good for the greatest number.’ It’s clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism?

Today’s guest, Andreas Mogensen — senior research fellow at Oxford University’s Global Priorities Institute — does reject utilitarianism, but as he explains, this does little to dampen his enthusiasm for effective altruism.

Andreas leans towards ‘deontological’ or rule-based theories of ethics, rather than ‘consequentialist’ theories like utilitarianism which look exclusively at the effects of a person’s actions.

Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially.

However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger’s wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do.

In a world as full of preventable suffering as our own, this simple ‘principle of beneficence’ is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance.

As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one’s income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they’ll get the biggest ‘bang for buck’.

For someone living in a world as unequal as our own, this pledge at the very minimum gives an upper-middle-class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves.

What arguments could a non-utilitarian moral theory mount against such giving?

Perhaps it could interfere with the achievement of other important moral goals? In response to this Andreas notes that alleviating the suffering of people in severe poverty is an important goal that should compete with alternatives. And furthermore, giving 10% is not so much that it likely disrupts one’s ability to, for instance, care for oneself or one’s family, or participate in domestic politics.

Perhaps it involves the violation of important moral prohibitions, such as those on stealing or lying? In response Andreas points out that the activities advocated by effective altruism researchers almost never violate such prohibitions — and if a few do, one can simply rule out those options and choose among the rest.

Many approaches to morality will say it’s permissible not to give away 10% of your income to help others as effectively as possible. But if almost all of them regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, then, Andreas argues, they should be enthusiastic about effective altruism as an intellectual and practical project nonetheless.

In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover:

  • Should we treat philosophical thought experiments that feature very large numbers with great suspicion?
  • If the only way to keep broadcasting the football World Cup final to the world were to allow someone to die, would that be permissible? If not, what might that imply?
  • What might a virtue ethicist regard as ‘doing the most good’?
  • If a deontological theory of morality were to part ways with common effective altruist practices, where would that most likely happen?
  • If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Continue reading →

Should you go to law school in the US to have a high-impact career?

This article outlines considerations for readers deciding whether to get a US law degree, explains how it might help them have a positive impact, and describes the law school experience. It is based on public resources about law school and careers in law and policy, as well as the experiences of people who have recently attended or considered attending law school. Consider this post a series of “best guesses” about how to approach the decision about whether to attend law school, rather than a definitive guide or empirical study.

I. Why law school could be an important step in a promising career path

The most common jobs for lawyers are probably not the most promising from the perspective of doing as much good as you can. A large share of US lawyers, especially graduates of top-ranked law schools, work in large private law firms, organising corporate transactions or defending large companies from lawsuits.

But law school can be a promising route into high-impact careers in policy and government, especially for people interested in eventually holding senior roles for which formal credentials are highly valued.

Since we believe working in policy to address some of the most pressing problems in the world is among the most promising career paths for people who want to have a high positive impact, law school may be a particularly appealing place to start, especially for people who are early in their career. Policy work in the US is a particularly high-priority path,

Continue reading →

Andrés Jiménez Zorrilla on the Shrimp Welfare Project

In this episode of 80k After Hours, Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It’s the first project in the world focused on shrimp welfare specifically and now has six full-time staff.

They cover:

  • The evidence for shrimp sentience
  • How farmers and the public feel about shrimp
  • The scale of the problem
  • What shrimp farming looks like
  • The killing process, and other welfare issues
  • Shrimp Welfare Project’s strategy
  • History of shrimp welfare work
  • What it’s like working in India and Vietnam
  • How to help

Who this episode is for:

  • People who care about animal welfare
  • People interested in new and unusual problems
  • People open to shrimp sentience

Who this episode isn’t for:

  • People who think shrimp couldn’t possibly be sentient
  • People who got called ‘shrimp’ a lot in high school and get anxious when they hear the word over and over again

Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ‘80k After Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

“Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

Continue reading →

What could an AI-caused existential catastrophe actually look like?

At 5:29 AM on July 16, 1945, deep in the Jornada del Muerto desert in New Mexico, the Manhattan Project carried out the world’s first successful test of a nuclear weapon.

From that moment, we’ve had the technological capacity to wipe out humanity.

But if you asked someone in 1945 to predict exactly how this risk would play out, they would almost certainly have got it wrong. They may have expected more widespread use of nuclear weapons in World War II. They certainly would not have predicted the fall of the USSR 45 years later. Current experts are concerned about India–Pakistan nuclear conflict and North Korean state action, but 1945 was before even the partition of India or the Korean War.

That is to say, you’d have real difficulty predicting anything about how nuclear weapons would be used. It would have been even harder to make these predictions in 1933, when Leo Szilard first realised that a nuclear chain reaction of immense power could be possible, without any concrete idea of what these weapons would look like.

Despite this difficulty, you wouldn’t be wrong to be concerned.

In our problem profile on AI, we describe a very general way in which advancing AI could go wrong. But there are lots of specifics we can’t know much about at this point.

Continue reading →

#136 – Will MacAskill on what we owe the future

  1. People who exist in the future deserve some degree of moral consideration.
  2. The future could be very big, very long, and/or very good.
  3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
  4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for ‘longtermism’ put forward in What We Owe The Future, the latest book from today’s guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he’s willing to contemplate timescales far beyond what’s typically discussed:

If we last as long as a typical mammal species, that’s another 700,000 years. If we last until the Earth is no longer habitable, that’s hundreds of millions of years. If we manage one day to take to the stars and build a civilisation there, we could live for hundreds of trillions of years. […] Future people [could] outnumber us a thousand or a million or a trillion to one.

A natural objection to thinking millions of years ahead is that it’s hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn’t matter how important something might be if you can’t predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but all future generations.

He highlights two effects that could be very enduring: “…reducing risks of extinction of human beings or of the collapse of civilisation, and ensuring that the values and ideas that guide future society are better ones rather than worse.”

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren’t coming back.

But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

For thousands of years, almost everyone — from philosophers to slaves themselves — regarded slavery as acceptable in principle. At the time the British Empire ended its participation in the slave trade, the industry was booming and earning enormous profits. It’s estimated that abolition cost Britain 2% of its GDP for 50 years.

So why did it happen? The global abolition movement seems to have originated within the peculiar culture of the Quakers, who were the first to argue slavery was unacceptable in all cases and campaign for its elimination, gradually convincing those around them with both Enlightenment and Christian arguments. If a few such moral pioneers had fallen off their horses at the wrong time, maybe the abolition movement never would have gotten off the ground and slavery would remain widespread today.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don’t eliminate a bad practice now, it may be with us forever. In today’s in-depth conversation, we discuss the possibility of a harmful moral ‘lock-in’ as well as:

  • How Will was eventually won over to longtermism
  • The three best lines of argument against longtermism
  • How to avoid moral fanaticism
  • Which technologies or events are most likely to have permanent effects
  • What ‘longtermists’ do today in practice
  • How to predict the long-term effect of our actions
  • Whether the future is likely to be good or bad
  • Concrete ideas to make the future better
  • What Will donates his money to personally
  • Potatoes and megafauna
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

Do recent breakthroughs mean transformative AI is coming sooner than we thought?

Is transformative AI coming sooner than we thought?

It seems like it probably is, which would mean that work to ensure this transformation goes well (rather than disastrously) is even more urgent than we thought.

In the last six months, there have been some shocking AI advances.

These advances caused the live forecast on Metaculus for when “artificial general intelligence” will arrive to plunge — the median declined by 15 years, from 2055 to 2040.

You might think this was due to random people on the internet over-updating on salient evidence, but if you put greater weight on the forecasters who have made the most accurate forecasts in the past, the decline was still 11 years.

Last year, Jacob Steinhardt commissioned professional forecasters to make a five-year forecast on three AI capabilities benchmarks. His initial impression was that the forecasts were aggressive, but one year in, actual progress was ahead of predictions on all three benchmarks.

Particularly shocking were the results on a benchmark of difficult high school maths problems. The state-of-the-art model leapt from a score of 7% to 50% in just one year — more than five years of predicted progress. (And these questions were hard — e.g.

Continue reading →

#135 – Samuel Charap on key lessons from five months of war in Ukraine

After a frenetic level of commentary during February and March, the war in Ukraine has faded into the background of our news coverage. But with the benefit of time we’re in a much stronger position to understand what happened, why, whether there are broader lessons to take away, and how the conflict might be ended. And the conflict appears far from over.

So today, we are returning to speak a second time with Samuel Charap — one of the US’s foremost experts on Russia’s relationship with former Soviet states, and coauthor of the 2017 book Everyone Loses: The Ukraine Crisis and the Ruinous Contest for Post-Soviet Eurasia.

As Sam lays out, Russia controls much of Ukraine’s east and south, and seems to be preparing to politically incorporate that territory into Russia itself later in the year. At the same time, Ukraine is gearing up for a counteroffensive before defensive positions become dug in over winter.

Each day the war continues it takes a toll on ordinary Ukrainians, contributes to a global food shortage, and leaves the US and Russia unable to coordinate on any other issues and at an elevated risk of direct conflict.

In today’s brisk conversation, Rob and Sam cover the following topics:

  • Current territorial control and the level of attrition within Russia’s and Ukraine’s military forces.
  • Russia’s current goals.
  • Whether Sam’s views have changed since March on topics like: Putin’s motivations, the wisdom of Ukraine’s strategy, the likely impact of Western sanctions, and the risks from Finland and Sweden joining NATO before the war ends.
  • Why so many people incorrectly expected Russia to fully mobilise for war or to persist with its original approach to the invasion.
  • Whether there’s anything to learn from many of our worst fears — such as the use of bioweapons on civilians — not coming to pass.
  • What can be done to ensure some nuclear arms control agreement between the US and Russia remains in place after 2026 (when New START expires).
  • Why Sam considers a settlement proposal put forward by Ukraine in late March to be the most plausible way to end the war and ensure stability — though it’s still a long shot.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →

Risks from atomically precise manufacturing

Both the risks and benefits of advances in atomically precise manufacturing seem like they might be significant, and there is currently little effort to shape the trajectory of this technology. However, there is also relatively little investment going into developing atomically precise manufacturing, which reduces the urgency of the issue.

Continue reading →

Expression of interest: Head of Operations

80,000 Hours

80,000 Hours provides research and support to help people switch into careers that effectively tackle the world’s most pressing problems.

We’ve had over 8 million visitors to our website, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

The internal systems team

This role is on the internal systems team, which is here to build the organisation and systems that support 80,000 Hours to achieve its mission.

We oversee 80,000 Hours’ office, tech systems, organisation-wide metrics and impact evaluation, as well as HR, recruiting, finances, and much of our fundraising.

Currently, we have four full-time staff, some part-time staff, and receive support from the Centre for Effective Altruism (our fiscal sponsor).

The role

As 80,000 Hours’ Head of Operations, you would:

  • Oversee a wide range of our internal operations, including team-wide processes, much of our fundraising, our office, finances, tech systems, data practices, and external relations.
  • Manage a team of two operations specialists, including investing in their professional development and identifying opportunities for advancement where appropriate.
  • Grow your team to build capacity in the areas you oversee, including identifying 80,000 Hours’ operational needs and designing roles that will address these.
  • Develop our internal operations strategy — in particular,

Continue reading →

Open position: Marketer

Applications for this position are now closed.

We’re looking for a new marketer to help us expand our readership and scale up our marketing channels.

We’d like to support the person in this role to take on more responsibility over time as we expand our marketing team.

80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

We’ve had over 8 million visitors to our website, and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Survey.

Even so, about 90% of US college graduates have never heard of effective altruism, and we estimate that just 0.5% of students at top colleges are highly engaged in EA. As a marketer with 80,000 Hours, you would help us achieve our goal of reaching all students and recent graduates who might be interested in our work. We anticipate that the right person in this role could help us grow our readership to 5–10 times its current size, and lead to hundreds or thousands of additional people pursuing high-impact careers.

We’re looking for a marketing generalist who will:

  • Start managing (and eventually own) our two largest existing marketing channels:
    • Sponsorships with people who have large audiences,

Continue reading →

#134 – Ian Morris on what big-picture history teaches us

Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

Why such big systematic changes — and why these changes specifically?

That’s the question best-selling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the ‘right’ way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

In Foragers, Farmers, and Fossil Fuels, Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

On this theory, it’s technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career.

In Why the West Rules—For Now, he set out to understand why the Industrial Revolution happened in England, and why Europe went on to dominate much of the rest of the world, rather than industrialisation kicking off somewhere else like China, with China going on to establish colonies in Europe. (In a word: geography.)

In War! What is it Good For?, he tried to explain why it is that violent conflicts often lead to longer lives and higher incomes (i.e. wars build empires which suppress interpersonal violence internally), while other times they have the exact opposite effect (i.e. advances in military technology allow nomads to raid and pull apart these empires).

In today’s episode, we discuss all of Ian’s major books, taking on topics such as:

  • Whether the evidence base in history — from document archives to archaeology — is strong enough to persuasively answer any of these questions
  • Whether or not wars can still lead to less violence today
  • Why Ian thinks the way we live in the 21st century is probably a short-lived aberration
  • Whether the grand sweep of history is driven more by “very important people” or “vast impersonal forces”
  • Why Chinese ships never crossed the Pacific or rounded the southern tip of Africa
  • In what sense Ian thinks Brexit was “10,000 years in the making”
  • The most common misconceptions about macrohistory

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Continue reading →