How 80,000 Hours has changed some of our advice after the collapse of FTX

Credit: William Santos (CC0)

Following the bankruptcy of FTX and the federal indictment of Sam Bankman-Fried, many members of the team at 80,000 Hours were deeply shaken. As we have said, we had previously featured Sam on our site as a positive example of earning to give, a mistake we now regret. We felt appalled by his conduct and at the harm done to the people who had relied on FTX.

These events were emotionally difficult for many of us on the team, and we were troubled by the implications they might have for our attempts to do good in the world. We had linked our reputation with his, and his conduct left us with serious questions about effective altruism and our approach to impactful careers.

We reflected a lot, had many difficult conversations, and worked through many complicated questions. There’s still a lot we don’t know about what happened, there’s a diversity of views within the 80,000 Hours team, and we expect the learning process to be ongoing.

Ultimately, we still believe strongly in the principles that drive our work, and we stand by the vast majority of our advice. But we did make some significant updates in our thinking, and we’ve changed many parts of the site to reflect them. We wrote this post to summarise the site updates we’ve made and to explain the motivations behind them, for transparency purposes and to further highlight the themes that unify the changes.

We also support many efforts to push for broader changes in the effective altruism community, like improved governance.1 But 80,000 Hours’ written advice is primarily aimed at personal career choices, so we focused on the actions and attitudes of individuals in these updates to the site’s content.

The changes we made

While we think ambition in doing good is still underrated by many, we now believe it’s more important to emphasise the downsides of ambition. Our articles on being more ambitious and the potential for accidental harm had both mentioned the potential risks, but we’ve expanded on these discussions and made the warnings more salient for the reader.

We expanded our discussion of the reasons against pursuing a harmful career. And we’ve added more discussion in many places (most notably our article on the definition of “social impact” and a new blog post from Benjamin Todd on moderation) about why we don’t encourage people to aim at what they think is impartially good to the exclusion of all other values.

We also used this round of updates to correct some other issues that came up during the reflections on our advice after the collapse of FTX.

The project to make these website changes was implemented by Benjamin Todd, Cody Fenwick and Arden Koehler, with some input from the rest of the team.

Here is a summary of all the changes we made:

  • We updated our advice on earning to give to include Sam as a negative example, and we discussed at more length the risks of harm or corruption. We express more scepticism about highly ambitious earning to give (though we don’t rule it out, and we think it can still be used for good with the right safeguards).
  • In our article on leverage, we added discussion of the downsides and responsibility that comes with having a lot of leverage, such as the importance of governance and accountability for influential people.
  • We clarified our views on risk and put more emphasis on how you should generally only seek upsides after limiting downsides, for both yourself and the world.
  • We put greater emphasis on respecting a range of values and cultivating character in addition to caring about impact, as well as not doing things that seem very wrong from a commonsense perspective for what one perceives as the “greater good.”
  • We added a lot more advice on how to avoid accidentally doing harm.
  • We took easy opportunities to tone down language around maximisation and optimisation. For instance, we talk about doing more good, or doing good as one important goal among several, rather than the most good. There’s a lot of room for debate about these issues, and we’re not in total agreement on the team about the precise details, but we generally think it’s plausible that Sam’s unusual willingness to fully embrace naive maximising contributed to the decision making behind FTX’s collapse.
  • We slightly reduced how much we emphasise the importance of getting involved with the effective altruism community, whose historical impact now looks murkier than it did before the collapse. (To be clear, we still think there are tons of great things about the EA community, continue to encourage people to get involved in it, and continue to count ourselves as part of it!)
  • We released a newsletter about character virtue and a blog post about moderation.
  • We’ve started doing more vetting of the case studies we feature on the site.
  • We have moved the “Founder of new project tackling top problems” profile out of our priority paths and into the “high-impact but especially competitive” section on the career reviews page. This move was partly driven by the change in the funding landscape after the collapse of FTX, but also because the recent proliferation of such projects likely reduces the marginal value of a typical additional one.

We’re still considering some other changes, such as to our ranking of effective altruism community building and certain other careers, as well as doing even more to emphasise character, governance, oversight, and related issues. But we didn’t want to wait to be ‘done’ with these edits, to the degree we ever will be ‘done’ learning lessons from this episode, before sharing this interim update with readers.

Some of the articles that saw the most changes were:

We’ve also updated some of our marketing materials, mostly by toning down calls to “maximise impact.” We still think it’s really important to be scope sensitive, and that helping more individuals is better than helping fewer — some of the core ideas of effective altruism. But handling these ideas naively, as maximising language may incline some people to do, can be counterproductive and miss important considerations.

We think there’s a lot more we can learn from what happened. Here are some of the reflections members of the 80k team have had:

We think the edits we’ve made are only a small part of the response that’s needed, but hopefully they move things in the right direction.

Notes and references

  1. We think it’d be good if the effective altruism community did more to avoid attracting reckless or dangerous individuals, such as those who could be drawn to aggressive optimising for one target while disregarding important moral norms. These aims could be furthered by significant reforms in institutional practices, especially governance of important organisations. Governance improvements (e.g. doing more to avoid conflicts of interest) could also help with improving people’s judgement.