#144 – Athena Aktipis on cancer, cooperation, and the apocalypse

The larger and more complex a group is, all else being equal, the easier it will be for cheating to arise and go undetected and potentially undermine the system — unless you have other mechanisms there that can sort of protect, monitor, or respond.

Athena Aktipis

What’s the opposite of cancer?

If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.

But today’s guest Athena Aktipis says that the opposite of cancer is us: it’s a functional multicellular body whose cells cooperate effectively to keep that body working.

If, like us, you found her answer far more satisfying than the dictionary’s, maybe you could consider closing your dozens of merriam-webster.com tabs and listening to this podcast instead.

As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:

  • Cells will proliferate when they shouldn’t.
  • Cells won’t die when they should.
  • Cells won’t engage in the kind of division of labour that they should.
  • Cells won’t do the jobs that they’re supposed to do.
  • Cells will monopolise resources.
  • And cells will trash the environment.

When we think about animals in the wild, or even bacteria living inside our bodies, we understand that they’re facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics.

We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster.

Incredibly, the amount of evolution by natural selection that can take place just over the course of cancer progression easily outstrips all of the evolutionary change we have undergone as humans since Homo sapiens came about.

Here’s a quote from Athena:

So you have to go and kind of put yourself on a different spatial scale and time scale, and just shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we’re going to map it onto anything like what we experience, a day is at least 10 years for them, right?

So it’s a very, very different way of thinking. Then once you shift to that, you’re like, “Oh, wow, there’s so much that could be happening in terms of adaptation inside the body, how cells are actually evolving inside the body over the course of our lifetimes.” That shift just opens up all this potential for using evolutionary approaches in adaptationist thinking to generate hypotheses that then you can test.

You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss:

  • Cheating within cells themselves
  • Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars
  • Whether it’s too out-there to think of humans as engaging in cancerous behaviour
  • Why our anti-contagious-cancer mechanisms are so successful
  • Why elephants get deadly cancers less often than humans, despite having way more cells
  • When a cell should commit suicide
  • When the human body deliberately produces tumours
  • The strategy of deliberately not treating cancer aggressively
  • Superhuman cooperation
  • And much more

And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse, including:

  • Staying happy while thinking about the apocalypse
  • Practical steps to prepare for the apocalypse
  • And whether a zombie apocalypse is already happening among Tasmanian devils

And if you’d rather see Rob and Athena’s facial expressions as they laugh and laugh while discussing cancer and the apocalypse — you can watch the video of the full interview.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Video editing: Ryan Kessler
Transcriptions: Katy Moore

Continue reading →

Open position: Content associate

About the 80,000 Hours web team

80,000 Hours provides free research and support to help people find careers tackling the world’s most pressing problems.

We’ve had over 10 million visitors to our website (with over 100,000 hours of reading time per year), and more than 3,000 people have told us that they’ve significantly changed their career plans due to our work. We’re also the largest single source of people getting involved in the effective altruism community, according to the most recent EA Community Survey.

Our articles are read by thousands, and are among the most important ways we help people shift their careers towards higher-impact options.

The role

As a content associate, you would:

  • Support the 80,000 Hours web team flexibly across a range of articles and projects.
  • Proofread 80,000 Hours articles before release, suggest style improvements, and check for errors.
  • Upload new articles and make changes to the site.
  • Ensure that our newsletters are sent out error-free and on time to the over 250,000 people on our mailing list.
  • Provide analytical support for the team, improving our ability to use data to measure and increase our impact.
  • Manage the gathering of feedback on our website from both readers and subject matter experts.
  • Generate ideas for new pieces.
  • Generally help grow the impact of the site.

Some of the types of pieces you could work on include:

  • Career reviews — e.g.

Continue reading →

    My thoughts on parenting and having an impactful career

    When my husband and I decided to have children, we didn’t put much thought into the broader social impact of the decision. We got together at secondary school and had been discussing the fact we were going to have kids since we were 18, long before we found effective altruism.

    We made the actual decision to have a child much later, but how it would affect our careers or abilities to help others still wasn’t a large factor in the decision. As with most people though, the decision has, in fact, had significant effects on our careers.

    Raising my son, Leo — now three years old — is one of the great joys of my life, and I’m so happy that my husband and I decided to have him. But having kids can be challenging for anyone, and there may be unique challenges for people who aim to have a positive impact with their careers.

    I’m currently the director of the one-on-one programme at 80,000 Hours and a fund manager for the Effective Altruism Infrastructure Fund. So I wanted to share my experience with parenting and working for organisations whose mission I care about deeply. Here are my aims:

    • Give readers an example of a working parent who also thinks a lot about 80,000 Hours’ advice.
    • Discuss some of the ways having kids is likely to affect the impact you have in your career, for people who want to consider that when deciding whether to have kids.

    Continue reading →

    The quick, medium, and long versions of career planning

    I think it’s a good idea to consider how you’re feeling about your career each year. At least, intellectually I think it’s good. In practice, I find it really hard. Compared to others I know, I’m not as naturally drawn to personal reflection and goal-setting. I intended to reflect on my own career over the festive period… and ended up bailing because I found it too stressful.

    But it is important! Without making time to check in on the big career questions, you might stay too long at a job, miss opportunities for doing more good, or fail to push yourself to grow — I’ve certainly been there before.

    So I suggest doing a career review this January — but committing to a realistic volume of work. You can start small. You can also try getting help — ask a friend to act as an “accountability buddy” or apply to talk one-on-one with someone from 80,000 Hours.

    I’m committing to do it too this month — that’s one of the reasons I’m writing this newsletter!

    Here are some of our tools and resources that you could use at whatever level of detail works for you:

    The quick version (30–60 minutes)

    Try our annual career review tool

    These guided questions help you reflect on the last year, consider whether to change your job, and make a plan for this year.

    Continue reading →

    2022 in review

    As 2023 gets underway, we’re taking a look back at the content we produced in 2022 and highlighting some particular standouts.

    We published a lot of new articles and podcasts to help our readers have impactful careers — below are some of our favourite pieces from the year.

    Standout posts and articles

    My experience with imposter syndrome — and how to (partly) overcome it
    80,000 Hours team member Luisa Rodriguez wrote this powerful and insightful piece on a challenge many people face when trying to have an impactful career. In it, she describes clearly what it’s like to have imposter syndrome from her own first-hand experience and provides a lot of helpful advice and guidance on how to manage it. I think a lot of people will benefit from reading it.

    Know what you’re optimising for
    Alex Lawsen, one of 80,000 Hours’ advisors, has noticed that people often fall into the trap of trying to optimise the wrong things — like students who spend their time making sure their homework is neatly written rather than actually understanding and learning from the material. The piece offers practical advice for overcoming this issue.

    Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities?
    This is a really challenging question that we are still struggling with, and this post was one way we have grappled with it. We asked 22 people we trust as thoughtful and informed experts in the field of AI safety for their thoughts,

    Continue reading →

      #143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons

      John F. Kennedy had ordered a stand down to all U-2 flights during the Cuban Missile Crisis, because he knew this was exactly the kind of thing that could get out of control. And he did not have total control, but a U-2 flight still went off and somebody got shot down.

      And Kennedy’s response, which is like peak Kennedy, was, “There’s always some son of a bitch who doesn’t get the message.” And I think that’s a really good way of looking at dynamic complex systems.

      Jeffrey Lewis

      America aims to avoid nuclear war by relying on the principle of ‘mutually assured destruction,’ right? Wrong. Or at least… not officially.

      As today’s guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official ‘OPLANs’ (military operation plans), the US is committed to ‘dominating’ in a nuclear war with Russia. How would they do that? “That is redacted.”

      We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.

      As Jeffrey tells it, ‘mutually assured destruction’ was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn’t it still the de facto reality? Yes and no.

      Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US’ plan to prevail in a nuclear war and conclude that “it’s freaking madness.” They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won’t use the weapons.

      But Jeffrey thinks that’s a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It’s what the generals and admirals have all prepared for.

      What matters is the ‘not calm moment’: the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There are only minutes to decide.

      Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn’t want to take because of how information and options were processed and presented to them. In the heat of the moment, it’s natural to reach for the plan you’ve prepared — however mad it might sound.

      In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:

      • Why inter-service rivalry is one of the biggest constraints on US nuclear policy
      • Two times the US sabotaged nuclear nonproliferation among great powers
      • How his field uses jargon to exclude outsiders
      • How the US could prevent the revival of mass nuclear testing by the great powers
      • Why nuclear deterrence relies on the possibility that something might go wrong
      • Whether ‘salami tactics’ render nuclear weapons ineffective
      • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow them to have the most missiles
      • The problems that arise when you won’t talk to people you think are evil
      • Why missile defences are politically popular despite being strategically foolish
      • How open source intelligence can prevent arms races
      • And much more.

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      How we’re thinking about where to donate to charity this year

      Charitable giving can be hugely impactful — if you’re careful about where you donate.

      One of the simplest ways to have an impact with your career is to donate a portion of your income.

      But if you’re going to do that, where you donate can make a huge difference. Within a cause area, the best charities could be 10 times better than typical charities, and hundreds of times better than poorly performing charities.

      650 million people — 8% of the world’s population — live in extreme poverty. That’s surviving on less than $2.15 a day (in 2017 prices).

      Meanwhile, if you earn over $25,000 a year in the US, you’re probably in the richest ~10% of the world’s population.

      You can send money directly to the world’s poorest people with minimal overhead via GiveDirectly.

      And that’s just a start. GiveWell estimates that through donating to its top charities (focused on extremely cost-effective, evidence-backed public health interventions), your donation will go 5–10 times further than a direct cash transfer.

      And we’d guess that if you target donations towards effective organisations tackling the world’s most pressing problems, you can do even more good.

      So, this giving season, we’ve updated our guide to picking where to donate: How to choose where to donate.

      Continue reading →

      How to choose where to donate

      If you want to make a difference, and are happy to give toward wherever you think you can do the most good (regardless of cause area), how do you choose where to donate? This is a brief summary of the most useful tips we have.

      How to choose an effective charity
      First, plan your research

      One big decision to make is whether to do your own research or delegate your decision to someone else. Below are some considerations.

      If you trust someone else’s recommendations, you can defer to them.

      If you know someone who shares your values and has already put a lot of thought into where to give, then consider simply going with their recommendations.

      But it can be better to do your own research if any of these apply to you:

      • You think you might find something higher impact according to your values than even your best advisor would find (because you have unique values, good research skills, or access to special information — e.g. knowing about a small project a large donor might not have looked into).
      • You think you might be able to productively contribute to the broader debate about which charities should be funded (producing research is a public good for other donors).
      • You want to improve your knowledge of effective altruism and charity evaluation.

      Consider entering a donor lottery.

      A donor lottery allows you to donate into a fund with other small donors,

      Continue reading →

      #142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

      You speak a very small, and probably therefore fascinating and very complicated language. You marry somebody who speaks another one from several villages over. The two of you move to the city, [where] there’s some big giant lingua franca.

      What are you going to speak to your kids? You’re going to speak that big fat language.

      That kills languages, because after a while, there are very few people left in the village. The big language is the language of songs, the big language is what you text in. That’s a very hard thing to resist.

      John McWhorter

      John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages.

      He’s also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, John has also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

      Our show is mostly about the world’s most pressing problems and what you can do to solve them. But what’s the point of hosting a podcast if you can’t occasionally just talk about something fascinating with someone whose work you appreciate?

      So today, just before the holidays, we’re sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:

      • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
      • Does learning a second or third language make you smarter, or not?
      • Can a language decay and get worse at communicating what people want to get across?
      • If children aren’t taught any language at all, how many generations does it take them to invent a fully fledged one of their own?
      • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
      • How much does the language we speak really shape the way we think?
      • Are creoles the best languages in the world — languages that ideally we would all speak?
      • What would be the optimal number of languages globally?
      • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
      • Should we bother to teach foreign languages in UK and US schools?
      • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
      • Will AI models speak a language of their own in the future, one that humans can’t understand, but which better serves the tradeoffs AI models need to make?

      We then put some of these questions to the large language model ChatGPT, asking it to play the role of a linguistics professor at Columbia University.

      We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

      And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full interview.

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Ben Cordell
      Video editing: Ryan Kessler
      Transcriptions: Katy Moore

      Continue reading →

      Information security in high-impact areas

      Introduction

      As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email. The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

      The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

      Podesta was suspicious, but the campaign’s IT team erroneously wrote that the email was “legitimate” and told him to change his password. The IT team provided a safe link for Podesta to use, but it seems he or one of his staffers instead clicked the link in the forged email. That link belonged to Russian intelligence hackers known as “Fancy Bear,” who used the access it gave them to leak private campaign emails for public consumption in the final weeks of the 2016 race, embarrassing the Clinton team.

      While there are plausibly many critical factors in any close election, it’s possible that the controversy around the leaked emails played a non-trivial role in Clinton’s subsequent loss to Donald Trump. This would mean the failure of the campaign’s security team to prevent the hack — which might have come down to a mere typo — was extraordinarily consequential.

      These events vividly illustrate how careers in infosecurity at key organisations have the potential for outsized impact. Ideally, security professionals can develop robust practices that reduce the likelihood that a single slip-up will result in a significant breach.

      Continue reading →

      #141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

      It does seem like we’re on track towards having many instances of the largest models rolled out. Maybe every person gets a personal assistant. And as those systems get more and more intelligent, the effects that they have on the world increase and increase. And the interactions that they have with the people who are nominally using them become much more complicated.

      Maybe it starts to become less clear whether they’re being deceptive and so on… But we don’t really have concrete solutions right now.

      Richard Ngo

      Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.
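That training objective can be illustrated with a toy sketch. This is purely illustrative and is not how GPT-3 works (real models are transformer neural networks trained on vast corpora): here we just count, in a tiny made-up corpus, which word most often follows each word, and "predict" the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy illustration of the "predict the next word" objective.
# Real large language models learn a probability distribution over an
# entire vocabulary; this sketch only counts bigrams (adjacent word
# pairs) in a tiny corpus and returns the most frequent follower.

corpus = "the cat sat on the mat and the cat ran".split()

# For each word, count how often each other word follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (follows 'the' twice; 'mat' only once)
```

The key difference in scale: instead of a lookup table over one sentence, a model like GPT-3 assigns a probability to every possible next word given everything that came before, and it is the sheer scale of that training that produces the surprising capabilities described below.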

      But do they really ‘understand’ what they’re saying, or do they just give the illusion of understanding?

      Today’s guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from ‘acting out’ as they become more powerful, are deployed, and are ultimately given power in society.

      One way to think about ‘understanding’ is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer.

      However, as Richard explains, another way to think about ‘understanding’ is as a functional matter. If you really understand an idea, you’re able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

      One experiment conducted by AI researchers suggests that language models have some of this kind of understanding.

      If you ask any of these models what city the Eiffel Tower is in and what else you might do on a holiday to visit the Eiffel Tower, they will say Paris and suggest visiting the Palace of Versailles and eating a croissant.

      One would be forgiven for wondering whether this might all be accomplished merely by memorising word associations in the text the model has been trained on. To investigate this, the researchers found the part of the model that stored the connection between ‘Eiffel Tower’ and ‘Paris,’ and flipped that connection from ‘Paris’ to ‘Rome.’

      If the model just associated some words with one another, you might think that this would lead it to now be mistaken about the location of the Eiffel Tower, but answer other questions correctly. However, this one flip was enough to switch its answers to many other questions as well. Now if you ask it what else you might visit on a trip to the Eiffel Tower, it will suggest visiting the Colosseum and eating pizza, among other changes.

      Another piece of evidence comes from the way models are prompted to give responses to questions. Researchers have found that telling models to talk through problems step by step often significantly improves their performance, which suggests that models are doing something useful with that extra “thinking time”.

      Richard argues, based on this and other experiments, that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.

      We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck — or at least something sufficiently close to a duck that it doesn’t matter.

      In today’s conversation, host Rob Wiblin and Richard discuss the above, as well as:

      • Could speeding up AI development be a bad thing?
      • The balance between excitement and fear when it comes to AI advances
      • Why OpenAI focuses its efforts where it does
      • Common misconceptions about machine learning
      • How many computer chips it might take to run AI systems that can do most of the things humans do
      • How Richard understands the ‘alignment problem’ differently than other people
      • Why ‘situational awareness’ may be a key concept for understanding the behaviour of AI models
      • The kinds of work to positively shape the development of AI that Richard is and isn’t excited about
      • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

      Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Milo McGuire and Ben Cordell
      Transcriptions: Katy Moore

      Continue reading →

      Marcus Davis on founding and leading Rethink Priorities

      You might say there’s nothing you can do now [to help wild animals]… Something like that might be true, but one of the things we can do is figure out what we should be trying to attempt.

      So if you try to understand the welfare of these animals, just gather basic facts about what their lives are like, this could help you understand how you should do this.

      Marcus Davis

      In this episode of 80k After Hours, Rob Wiblin interviews Marcus Davis about Rethink Priorities.

      Marcus is co-CEO there, in charge of their animal welfare and global health and development research.

      They cover:

      • Interventions to help wild animals
      • Aquatic noise
      • Rethink Priorities strategy
      • Mistakes that RP has made since it was founded
      • Careers in global priorities research
      • And the most surprising thing Marcus has learned at RP

      Who this episode is for:

      • People who want to learn about Rethink Priorities
      • People interested in a career in global priorities research
      • People open to novel ways to help wild animals

      Who this episode isn’t for:

      • People who think global priorities research sounds boring
      • People who want to host very loud concerts under the sea

      Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ’80k After Hours’ into your podcasting app. Or read the transcript below.

      Producer: Keiran Harris
      Audio mastering: Milo McGuire and Ben Cordell
      Transcriptions: Katy Moore

      “Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

      Continue reading →

      Four values at the heart of effective altruism

      What actually is effective altruism?

      Effective altruism isn’t about any particular way of doing good, like AI alignment or distributing malaria nets. Rather, it’s a way of thinking.

      Last summer, I wrote a new introduction to effective altruism for effectivealtruism.org. In it, I tried to sum up the effective altruism way of thinking in terms of four values. (I wrote this newsletter before FTX collapsed, but maybe that makes it even more important to reiterate the core values of EA.)

      1. Prioritisation. Resources are limited, so we have to make hard choices between potential interventions. While helping 10 people might feel as satisfying as helping 100, those extra 90 people really matter. And it turns out that some ways of helping achieve dramatically more than others, so it’s vital to try to compare different ways of helping, at least roughly, in terms of scale and effectiveness.
      2. Impartial altruism. It’s reasonable and good to have special concern for one’s own family, friends, life, etc. But when trying to do good in general, we should give everyone’s interests equal weight — no matter where or even when they live. People matter equally. And we should also give due weight to the interests of nonhumans.
      3. Open truth-seeking. Rather than starting with a commitment to a certain cause, consider many different ways to help and try to find the best ones you can. Put serious time into deliberation and reflection,

      Continue reading →

      Why being open to changing our minds is especially important right now

      If something surprises you, your view of the world should change in some way.

      We’ve argued that you should approach your career like a scientist doing experiments: be willing to test out many different paths and gather evidence about where you can have the most impact.

      More generally, this approach of open truth-seeking — being constantly, curiously on the lookout for new evidence and arguments, and always being ready to change our minds — is a virtue we think is absolutely crucial to doing good.

      One of our first-ever podcast episodes was an interview with Julia Galef, author of The Scout Mindset (before she wrote the book!).

      Julia argues — in our view, correctly — that it’s easy to end up viewing the world like a soldier, when really you should be more like a scout.

      Soldiers have set views and beliefs, and defend those beliefs. When we are acting like soldiers, we display motivated reasoning: for example, confirmation bias, where we seek out information that supports our existing beliefs and explain away information that counts against them.

      Scouts, on the other hand, need to form correct beliefs. So they have to change their minds as they view more of the landscape.

      Acting like a scout isn’t always easy:

      • There’s lots of psychological evidence suggesting that we all have cognitive biases that cloud our thinking.

      Continue reading →

      Regarding the collapse of FTX

      The collapse of FTX is likely to cause a tremendous amount of harm – to customers, employees, and many others who have relied on FTX. We are deeply concerned about those affected and, along with our community, are grappling with how to respond.

      Though we do not know for sure whether anything illegal happened, we unequivocally condemn any immoral or illegal actions that may have taken place.

      Prior to this, we had celebrated Sam Bankman-Fried’s apparent success, had held him up as a positive example of someone pursuing a high-impact career, and had written about how we encouraged him to use a strategy of earning to give (for example here). We feel shaken by recent events, and are not sure exactly what to say or think.

      In the meantime, we will start by removing instances on our site where Sam was highlighted as a positive example of someone pursuing a high-impact career, since, to say the least, we no longer endorse that. We are leaving up discussions of Sam in places that seem important for transparency, for example this blog post on the growth of effective altruism in 2021.

      In the coming weeks and months we will be thinking hard about what we should do going forward and ways in which we should have acted differently.

      If you are out there trying the best you can to use your career to help solve the world’s most pressing problems with honesty and integrity,

      Continue reading →

        #140 – Bear Braumoeller on the case that war isn’t in decline

        Imagine I have a deck of 96 cards. The most common card has 1,000 battle deaths, but one of the cards is World War I, and one of the cards is World War II. How worried should you be about drawing a card from that deck?

        You could say, “Well, most of them are 1,000 battle deaths, so I shouldn’t be too worried.” But at the same time, World War I and World War II are in there, and if the deck hasn’t changed, we really need to be thoughtful about when it is we’re going to draw another card.

        Bear Braumoeller

        Is war in long-term decline? Steven Pinker’s The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out.

        But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

        Today’s guest, professor of political science Bear Braumoeller, is one of the scholars who believe we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

        The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours.

        If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we’re as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

        Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

        He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone.
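        One simple version of such a test is a two-sample permutation test: pool the per-war battle-death counts from before and after a candidate cutoff year, reshuffle them many times, and ask how often chance alone produces a gap in means as large as the one observed. The sketch below is a generic illustration of the idea with made-up numbers, not Bear's specific method.

```python
import random

def permutation_test(pre, post, n_perm=10_000, seed=0):
    """Two-sample permutation test: how often does a random split of the
    pooled data produce a difference in means at least as large as the
    observed one? Returns the fraction (an approximate p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(pre) / len(pre) - sum(post) / len(post))
    pooled = list(pre) + list(post)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(pre)], pooled[len(pre):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm  # small values suggest the shift is not just chance

# Toy data standing in for per-war battle deaths before/after some cutoff year.
pre_cutoff = [1_000, 2_500, 800, 9_000_000, 1_200]
post_cutoff = [1_000, 3_000, 600, 2_000, 900]
print(permutation_test(pre_cutoff, post_cutoff))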

        Among other metrics, Bear looks at:

        • Battlefield deaths alone, as a percentage of combatants’ populations, and as a percentage of world population.
        • The total number of wars starting in a given year.
        • Rates of war initiation as a fraction of all country pairs capable of fighting wars.
        • How likely it was during different periods that a given war would double in size.

        In a nutshell, and taking in the full picture painted by these different measures, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, “only the dead have seen the end of war”.

        That’s not to say things are the same in all periods. Depending on which indicator of warlikeness you give the greatest weight, you can point to some periods that seem violent or pacific beyond what might be explained by random variation.

        For instance, Bear points out that war initiation really did go down a lot at the end of the Cold War, with peace probably fostered by a period of unipolar US dominance, and the end of great power funding for proxy wars.

        But that drop came after a period of somewhat above-average warlikeness during the Cold War. And surprisingly, the most peaceful period in Europe turns out not to be 1990–2015, but rather 1815–1855, during which the monarchical ‘Concert of Europe,’ scarred by the Napoleonic Wars, worked together to prevent revolution and interstate aggression.

        Why haven’t modern ideas about the immorality of violence led to the decline of war, when it’s such a natural thing to expect? Bear is no Enlightenment scholar, but his book notes (among other reasons) that while modernity threw up new reasons to embrace pacifism, it also gave us new reasons to embrace violence: as a means to overthrow monarchy, distribute the means of production more equally, or protect people a continent away from ethnic cleansing — all motives that would have been foreign in the 15th century.

        In today’s conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:

        • What would Bear’s critics say in response to all this?
        • What do the optimists get right?
        • What are the biggest problems with the Correlates of War dataset?
        • How does one do proper statistical tests for events that are clumped together, like war deaths?
        • Why are deaths in war so concentrated in a handful of the most extreme events?
        • Did the ideas of the Enlightenment promote nonviolence, on balance?
        • Were early states more or less violent than groups of hunter-gatherers?
        • If Bear is right, what can be done?
        • How did the ‘Concert of Europe’ or ‘Bismarckian system’ maintain peace in the 19th century?
        • Which wars are remarkable but largely unknown?
        • What’s the connection between individual attitudes and group behaviour?
        • Is it a problem that this dataset looks at just the ‘state system’ and ‘battlefield deaths’?

        Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

        Producer: Keiran Harris
        Audio mastering: Ryan Kessler
        Transcriptions: Katy Moore

        Continue reading →

        The importance of considering speculative ideas

        Let’s admit it: some of the things we think about at 80,000 Hours are considered weird by a lot of other people.

        Our list of the most pressing problems has some pretty widely accepted concerns, to be sure: we care about mitigating climate change, preventing nuclear war, and ensuring good governance.

        But one of our highest priorities is preventing an AI-related catastrophe, which sounds like science fiction to a lot of people. And, though we know less about them, we’re also interested in speculative issues — such as atomically precise manufacturing, artificial sentience, and wild animal suffering. These aren’t typically the kind of issues activists distribute flyers about.

        Should it make us nervous that some of our ideas are out of the mainstream? It’s probably a good idea in these cases to take a step back, reexamine our premises, and consult others we trust about our conclusions. But we shouldn’t be too shocked if some of our beliefs end up at odds with common sense — indeed, I think everyone has good reason to be open to believing weird ideas.

        One of the best reasons for this view relates to another of 80,000 Hours’ top priorities: preventing catastrophic pandemics. I’d guess few people think it’s strange to be concerned about pandemics now, as COVID-19 has killed more than 6 million people worldwide and thrown the global economy into chaos.

        Continue reading →

        #139 – Alan Hájek on puzzles and paradoxes in probability and expected value

        Length is not bounded, volume is not bounded, time, spacetime curvature — various things are not bounded. And why should utility be?

        Normally when you do have a bounded quantity, you can say why it is and you can say what the bound is. Think of, say, angle: if you think of it one way, angle is bounded by zero to 360 degrees, and it’s easy to explain that. Probability is bounded with a top value of 1, bottom value of 0. Not so easy to say in the case of utility.

        Alan Hájek

        A casino offers you a game. A coin will be tossed repeatedly until it comes up heads. If heads appears on the first flip you win $2. If it first appears on the second flip you win $4; on the third, $8; the fourth, $16; and so on. How much should you be willing to pay to play?

        The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for ‘0.5 * $2 = $1’ in expected earnings. A 25% chance of winning $4, for ‘0.25 * $4 = $1’ in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that’s despite the fact that you know with certainty you can only ever win a finite amount!
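        To make the divergence concrete, here is a minimal sketch in Python (our own illustration, not from the episode): the analytic expected value when the casino can only cover a fixed number of flips, alongside a direct simulation of the game.

```python
import random

def st_petersburg_payout(rng: random.Random) -> int:
    """Play one round: flip until heads; the payout doubles with each tail."""
    payout = 2
    while rng.random() < 0.5:  # tails: flip again
        payout *= 2
    return payout

def truncated_expected_value(max_flips: int) -> float:
    """Expected value if the casino can cover at most max_flips flips.
    Each term is (1/2)**n * 2**n = 1, so the sum is simply max_flips."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

rng = random.Random(0)
samples = [st_petersburg_payout(rng) for _ in range(100_000)]

print(truncated_expected_value(30))  # 30.0: each extra flip the casino can cover adds $1
print(sum(samples) / len(samples))   # the empirical mean of a finite sample stays modest
```

Capping the casino's bankroll at 2^30 (about a billion dollars) already cuts the "infinite" expected value down to $30, which hints at why the real-world version of the game never looks so alluring.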

        Today’s guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

        The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

        We might reject the setup as a hypothetical that could never exist in the real world, and therefore a matter of mere intellectual curiosity. But Alan doesn’t find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

        These issues regularly show up in 80,000 Hours’ efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

        Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a roughly 0.0001% chance? Expected value says this final offer is better than the others — over 3,000 times better, in fact.
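        The arithmetic behind that escalation is easy to check with a few lines of Python (our own illustration, not from the episode):

```python
lives, prob = 1.0, 1.0          # the sure thing: one life, 100% probability
for _ in range(20):             # 3 steps reach 27 lives, then 17 more iterations
    lives *= 3                  # triple the number of lives saved...
    prob /= 2                   # ...while halving the probability

print(f"{lives:.0f} lives at a {prob:.7%} chance")
# Relative to the sure thing, expected lives saved grows by a factor of 1.5 per step:
print(f"expected lives saved: {lives * prob:.0f}")  # 1.5**20, about 3,325
```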

        Insisting that people give up a sure thing in favour of a vanishingly low chance of a very large impact strikes some people as peculiar or even fanatical. But one of Alan’s PhD students, Hayden Wilkinson, discovered that rejecting expected value on this basis requires you to swallow even more bitter pills, like giving up on the idea that if A is better than B, and B is better than C, then A is also better than C.

        Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we’re better off looking for ways our probability estimates might be wrong.

        In today’s conversation, Alan and Rob explore these issues and many others:

        • Simple rules of thumb for having philosophical insights
        • A key flaw that hid in Pascal’s wager from the very beginning
        • Whether we have to simply ignore infinities because they mess everything up
        • What fundamentally is ‘probability’?
        • Some of the many reasons ‘frequentism’ doesn’t work as an account of probability
        • Why the standard account of counterfactuals in philosophy is deeply flawed
        • And why counterfactuals present a fatal problem for one sort of consequentialism

        Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

        Producer: Keiran Harris
        Audio mastering: Ben Cordell and Ryan Kessler
        Transcriptions: Katy Moore

        Continue reading →

        Open position: Recruiter

        The role

        You’ll be managed by Sashika Coxhead, our Head of Recruiting, and will have the opportunity to work closely with hiring managers from other teams.

        Initial responsibilities will include:

        • Project management of active recruiting rounds. For example, overseeing the candidate pipeline and logistics of hiring rounds, making decisions on initial applications, and managing candidate communications.
        • Sourcing potential candidates. This might include generating leads for specific roles, publicising new positions, reaching out to potential candidates, and answering any questions they have about working at 80,000 Hours.
        • Taking on special projects to improve our recruiting systems. For example, you might help to build an excellent applicant tracking system, test ways to improve our ability to generate leads, or introduce strategies to make our hiring rounds more efficient.

        Depending on your skills and interests, you might also:

        • Take ownership of a particular area of our recruiting process, e.g. proactive outreach to potential candidates, our applicant tracking system, or metrics for the recruiting team’s success.
        • Conduct screening interviews where needed, to assess applicants’ fit for particular roles at 80,000 Hours.

        After some time in the role, we’d hope for you to sit on internal hiring committees. This involves forming an inside view on candidates’ performance; discussing uncertainties with the hiring manager and committee; and, with the other committee members, giving final approval on who to make offers to.

        Continue reading →

          Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities?

          We’ve argued that preventing an AI-related catastrophe may be the world’s most pressing problem, and that while progress in AI over the next few decades could have enormous benefits, it could also pose severe, possibly existential risks. As a result, we think that working on some technical AI research — research related to AI safety — may be a particularly high-impact career path.

          But there are many ways of approaching this path that involve researching or otherwise advancing AI capabilities — meaning making AI systems better at some specific skills — rather than only doing things that are purely in the domain of safety. In short, this is because:

          • Capabilities work and some forms of safety work are intertwined.
          • Many available ways of learning enough about AI to contribute to safety are via capabilities-enhancing roles.

          So if you want to help prevent an AI-related catastrophe, should you be open to roles that also advance AI capabilities, or steer clear of them?

          We think this is a hard question! Capabilities-enhancing roles could be beneficial or harmful. For any role, there are a range of considerations — and reasonable people disagree on whether, and in what cases, the risks outweigh the benefits.

          So we asked the 22 people we thought would be most informed about this issue — and who we knew had a range of views —

          Continue reading →