Doing good together — how to coordinate effectively, and avoid single-player thinking

Sapiens can cooperate in flexible ways with countless numbers of strangers. That’s why we rule the world, whereas ants eat our leftovers and chimps are locked up in zoos.

The historian Yuval Harari claims in his book Sapiens that better coordination has been the key driver of human progress. He highlights innovations like language, religion, human rights, nation states and money as valuable because they improve cooperation among strangers.1

If we work together, we can do far more good. This is part of why we started the effective altruism community in the first place: we realised that by working with others who want to do good in a similar way — based on evidence and careful reasoning — we could achieve much more. (If you’re new to effective altruism, and want to get involved in person, see this short guide).

But unfortunately we, like other communities, often don’t coordinate as well as we could.

Instead, especially in effective altruism, people engage in “single-player” thinking. They work out what would be the best course of action if others weren’t responding to what they do. But once you’re part of a community that does respond to your actions, this assumption breaks down. We need to develop new rules of thumb for doing good — the strategies and approaches that work well in a single-player situation often don’t work once you’re collaborating with a community.

In this article, we’ll give an overview of everything we’ve learned about how best to coordinate. By this we mean how individuals within a group can best work together to maximise their collective impact.

We’ll start by covering the basic mechanisms of coordination, including the core concepts in economics that are relevant, and some new concepts like indirect trade and shared aims communities.

We’ll also list the practical ways in which doing good changes when coordinating.

We’ll use the effective altruism community as a case study. However, we think the same ideas apply to many communities with common aims, such as environmentalism, social enterprise, international development, and so on.

We still face a great deal of uncertainty about these questions, many of which have seen little study. This also makes the topic one of the most intellectually interesting in the community right now, and we’d like to see it turn into a new branch of research.

This article is part of our advanced series. If you’re new to 80,000 Hours, see our main career guide first. In particular, this article assumes you’ve read our introduction to working together as a community.

Thank you especially to Max Dalton for comments on this article, as well as Carl Shulman, Will MacAskill and many others.

Reading time: 40 minutes.


The bottom line

Through trade, coordination and economies of scale, individuals can achieve greater impact by working within a community than they could working individually.

These opportunities are often overlooked when people make a “single-player” analysis of impact, in which the actions of others are held fixed. This can lead to inaccurate estimates of impact, missed opportunities for trade, and leave people open to coordination failures such as the prisoner’s dilemma.

Coordinating well requires having good mechanisms, such as markets (e.g. certificates of impact), norms that support coordination, common knowledge, and other structures.

Communities of entirely self-interested people can still cooperate to a large degree, but “shared aims” communities can likely cooperate even more.

For the effective altruism community to coordinate better we should:

  • Aim to uphold “nice norms”, such as helpfulness, honesty, compromise and friendliness to an exceptional degree, but also be willing to withhold aid from people who break these norms.

  • Consider the impact of our actions on “community capital” — the potential of the community to have an impact in the future. We might prioritise actions that grow the influence of the community, improve the community’s reputation, set an example by upholding norms, or set up community infrastructure to spread information and enable trades.

  • Work out which options to take using a “portfolio approach”, in which individuals consider how they can push the community towards the best possible allocation of its resources. For example, this means considering one’s comparative advantage as well as personal fit when comparing career opportunities.

When allocating credit in a community, people shouldn’t just carry out a simple counterfactual analysis. Rather, if they take an opportunity that another community member would have taken otherwise, then they also “free up” community resources, and the value of these needs to be considered as well. When actors in a community are “coupled”, then they need to be evaluated as a single agent when assigning credit.

You can be coordinating with multiple communities at once, and communities can also coordinate among themselves.

Many of the issues we raise haven’t received much systematic study. We’d like to see a new research program focused on the question of how (partially) altruistic agents can best work together in order to maximise their collective impact. See some proposals for research in this research proposal by Max Dalton and the Global Priorities Institute’s research agenda.

Theory: What are the mechanisms behind coordination?

What is coordination, why is it important, and what prevents us from coordinating better? In this section, we cover these theoretical questions.

Much of coordination is driven by trade – swaps that members of the community make with each other for mutual benefit. We’ll introduce four types of trade used to coordinate. Then we’ll introduce the concept of coordination failures, and explain how they can be avoided through norms and other systems.

We’ll show how coordinating communities, such as “market” and “shared aims” communities can achieve a greater impact than “single-player” communities.

In the next section, we’ll apply these mechanisms to come up with practical suggestions for how to coordinate better, applied to the effective altruism community in particular. If you just want the practical suggestions, skip ahead. But the mechanisms of coordination are crucial concepts to understand, and failures of coordination also lie behind some of the world’s most pressing problems.

Single player communities and their problems

When trying to do good, one option is to ignore coordination. In 2016, we called this the “single player” approach.

In the single-player approach, each individual works out which action is best at the margin according to their goals, assuming that everyone else’s actions stay constant.

Assuming what everyone else does is constant is a good bet in many circumstances. Sometimes your actions are too small to impact others, or other actors don’t care what you do.

For instance, if you’re considering donating in an area where all the other donors give based on emotional connection, then their donations won’t change depending on how much you give. So, you can just work out where they’ll give, then donate to the most effective opportunity that’s still unfunded.

This style of analysis is often encouraged within the effective altruism community, since introductions to effective altruism often include “thinking at the margin” as a key concept. Marginal analysis encourages people to assume that the actions of others are fixed.

However, when you’re part of a community (and in other situations), other actors are responsive to what you do, and this assumption breaks down. This doesn’t mean that thinking at the margin is wrong, rather it means you need to be careful to define the right margin. In a community, a marginal analysis needs to take account of how the actions of others will change in response to you.

Ignoring this responsiveness causes a number of problems, which leads to members of the community having less impact.

First, it can lead you to over or underestimate the impact of different actions. For instance, if you think the Future of Humanity Institute (FHI) currently has a funding gap of $100,000, you might make a single-player analysis that donations up to this limit will be about as cost-effective as FHI on average. But in fact, if you don’t donate, then additional donors will probably come in to fill the gap, at least over time, making your donations less effective than they first seem.2

On the other hand, if you think other donors will donate anyway, then you might think your donations will have zero impact. But that’s also not true. If you fill the funding gap, then it frees up other donors to take different opportunities, while saving the charity time.

Either way, you can’t simply ignore what other donors will do. We’ll explain how donors in a community should decide where to give and attribute impact later.

Second, and more importantly, taking a single-player approach overlooks the possibility of trade.

Trade is important because everyone has different strengths, weaknesses and resources. For instance, if you know a lot about charity A, and another donor knows a lot about charity B, then you can swap information. This allows you to both make better decisions than if you had decided individually.

By making swaps, everyone in the community can better achieve their goals — it’s positive sum. Much of what we’ll talk about in this article is about how to enable more trades.

Being able to trade doesn’t only enable useful swaps. Over time, it allows people to specialise and achieve economies of scale, allowing for an even greater impact. For instance, a donor could become a specialist in health, then tell other donors about the best opportunities in that area. Other donors can specialise elsewhere, and the group can gain much better expertise than they could alone. They can also share resources, such as tax advice, and so each get more for the same cost.

Third, sometimes, following individual interest leads to worse outcomes for everyone. A single-player approach can lead to “coordination failures”, which are studied as part of game theory. The most well-known coordination problem is the prisoner’s dilemma.3 In case you’re not already familiar with the prisoner’s dilemma, one round works as follows:

Imagine that you and a co-conspirator have been arrested after robbing a bank, and are being held in separate jail cells. Now you must decide whether to “cooperate” with each other — by remaining silent and admitting nothing — or to “defect” from your partnership by ratting out the other to the police. You know that if you both cooperate with each other and keep silent, the state doesn’t have enough evidence to convict either one of you, so you’ll both walk free, splitting the loot — half a million dollars each, let’s say. If one of you defects and informs on the other, and the other says nothing, the informer goes free and gets the entire million dollars, while the silent one is convicted as the sole perpetrator of the crime and receives a ten-year sentence. If you both inform on each other, then you’ll share the blame and split the sentence: five years each.

Here’s a table of the payoffs:

| | Other person: remain silent (cooperate) | Other person: rat out the other (defect) |
|---|---|---|
| You: remain silent (cooperate) | You: $0.5m / Them: $0.5m | You: 10 years in jail / Them: $1m |
| You: rat out the other (defect) | You: $1m / Them: 10 years in jail | You: 5 years in jail / Them: 5 years in jail |

Here’s the problem. No matter what your accomplice does, it’s always better for you to defect. If your accomplice has ratted you out, ratting them out in turn will give you five years of your life back — you’ll get the shared sentence (five years) rather than serving the whole thing yourself (ten years). And if your accomplice has stayed quiet, turning them in will net you the full million dollars — you won’t have to split it. You’re always better off defecting than cooperating, regardless of what your accomplice decides. In fact, this makes defection not merely the equilibrium strategy but what’s known as a dominant strategy.

In standard cases of trade, both people act in their self-interest, but it leads to a win-win situation that’s better for both. However, in the prisoner’s dilemma, acting in a self-interested way leads, due to the structure of the scenario, to a lose-lose. We’re going to call a situation like this a “coordination failure”. (More technically, you can think of it as when the Nash equilibrium of a coordination game is not the optimum).
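To make the dominant-strategy logic concrete, here is a minimal sketch in Python (our own illustration, not part of the original scenario) using the payoffs from the table above, with money and jail time squashed onto one rough utility scale:

```python
# Illustrative utilities only: money in $m counted as positive, years in jail as negative.
# "C" = remain silent (cooperate), "D" = rat the other out (defect).
PAYOFFS = {
    ("C", "C"): (0.5, 0.5),   # both silent: split the loot
    ("C", "D"): (-10, 1.0),   # you serve 10 years, they walk with $1m
    ("D", "C"): (1.0, -10),   # you walk with $1m, they serve 10 years
    ("D", "D"): (-5, -5),     # both convicted: 5 years each
}

def your_payoff(you, them):
    return PAYOFFS[(you, them)][0]

# Whatever the other person does, defecting pays you more...
for them in ("C", "D"):
    assert your_payoff("D", them) > your_payoff("C", them)

# ...so (D, D) is the equilibrium, even though both players prefer (C, C).
print("Equilibrium payoffs:", PAYOFFS[("D", "D")], "vs jointly better:", PAYOFFS[("C", "C")])
```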

We’re going to come back to this concept regularly over the rest of this article. Although there are other forms of coordination failure, the prisoner’s dilemma is perhaps the most important: it’s very simple, yet difficult to avoid, because everyone has an incentive to defect.

This makes it a good test case for a community: if you can avoid prisoner dilemma style coordination failures, then you’ll probably be able to overcome easier coordination problems too. So, much of this discussion about how to coordinate is about how to overcome the prisoner’s dilemma.

What’s more, situations with a structure similar to the prisoner’s dilemma seem fairly common in real life. They arise when there is potential for a positive sum gain, but taking it carries a risk of getting exploited.

For instance, elsewhere, we’ve covered the example of two companies locked in a legal battle. If you hire an expensive lawyer, and your opponent doesn’t, then you’ll win. If your opponent hires an expensive lawyer as well, then you tie. However, if you don’t hire an expensive lawyer, then you’ll either tie or lose. So, either way, it’s better to hire the expensive lawyer.

However, it would have been best for both sides if they had agreed to hire cheap lawyers or settle quickly. Then they could have split the saved legal fees and come out ahead.4

If we generalise the prisoner’s dilemma to many players, then we get a “tragedy of the commons”. These also seem common, and in fact lie at the heart of many key global problems.

One example is climate change. It would be better for humanity at large if every country cut their emissions, avoiding the possibility of runaway warming.

However, from the perspective of each individual country, it’s better to defect — benefit from the cuts that other countries make, while gaining an economic edge.

Coordination failures also crop up in issues like avoiding nuclear war, and the development of new technologies that provide benefits to their deployer but create global risks, such as AI and biotechnology.

You can see a list of other examples on Wikipedia and in this article, or for a more poetic introduction, see Meditations on Moloch.

Summing up, there are several problems with single player thinking:

  • Broadly, if other actors in a community are responsive to what you do, then ignoring these effects will cause you to mis-estimate your impact.

  • One particularly important mistake is to overlook the possibility of trade, which can enable each individual to have a much greater impact since it can be positive sum, and it can also enable specialisation and economies of scale.

  • Single-player thinking also leaves you vulnerable to coordination failures such as the prisoner’s dilemma, reducing impact.

So, how can we enable trade and avoid coordination failures, and therefore maximise the collective impact of a community? We’ll now introduce several different types of mechanisms to increase coordination.

Market communities

One option to achieve more coordination is to set up a “market”. In a market, each individual has their own goals, but they make bids for what they want with other agents (often using a common set of institutions or intermediaries). This is how we organise much of society; for instance buying and selling houses, food and cars.

Market communities facilitate trade. For instance, if you really want to rent a house with a nice view, then you can pay more for it, and in doing so, compensate someone else who doesn’t value the view as much. This creates a win-win — you get the view, and the other person gets more money, which they can use to buy something they value more.

If the market works well, then the prices will converge towards those where everyone’s preferences are better satisfied.

The price signals then incentivise people to build more of the houses that are most in-demand. And then we also get specialisation and economies of scale.

It’s possible to trade directly through barter, but this involves large transaction costs (and other issues). For instance, if you have a spare room in your house and want to use it in a barter, you need to find someone who wants your room who has something you want in return, which is hard. It’s far easier to use a currency — rent the room for money, then you can spend the money on whatever you want in return.

The introduction of a currency allows far more trades to be made, making it arguably one of the most important innovations in history.

There have been some suggestions to use markets to enable more trade within the effective altruism community, such as certificates of impact — an alternative funding model for non-profits.

Today, non-profits make a plan, then raise money from donors to carry out the plan. With certificates of impact, once someone has an impact, they make a certificate, which they sell to donors in an auction.

These certificates can be exchanged, creating a market for impact. This creates a financial incentive for people to accurately understand the value of the certificates and to create more of them (i.e. do more good). There are many complications with the idea, but there’s a chance this funding model could be more efficient than the non-profit sector’s current one. If you’re interested, we’d encourage you to read more.
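As a very rough illustration of the funding flow (a toy sketch of our own, not a description of any existing certificates-of-impact platform; the class and function names are made up), the model replaces “raise money, then do the work” with “do the work, then sell a claim to the impact”:

```python
from dataclasses import dataclass

@dataclass
class ImpactCertificate:
    issuer: str        # who did the work and is claiming the impact
    description: str   # what was achieved
    owner: str         # current holder; changes hands when the certificate is resold

def sell_to_highest_bidder(cert: ImpactCertificate, bids: dict) -> tuple:
    """Toy auction: the certificate goes to the highest bidder at their bid price."""
    buyer, price = max(bids.items(), key=lambda kv: kv[1])
    cert.owner = buyer
    return buyer, price

cert = ImpactCertificate("Research group", "Published a global priorities paper", "Research group")
print(sell_to_highest_bidder(cert, {"Donor A": 5_000, "Donor B": 8_000}))
# Donor B now owns the certificate, and could later resell it if others value the work more.
```

The resale step is what creates a market: later buyers who value the work more highly can buy the certificate from earlier funders, which is where the incentive to price impact accurately comes from.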

Market communities can support far more coordination, and are a major reason for the wealth of the modern world. However, market communities still have downsides.

The key problem is that market prices can fail to reflect the prices that would be optimal from a social perspective due to “market failures”.

For instance, some actions might create effects on third parties, which aren’t captured in prices — “externalities”. Or if one side has better information than another (“asymmetric information”), then people might refuse to trade due to fear of being exploited (the “lemon problem”).

Likewise, people who start with the most currency get the most power to have their preferences satisfied; whereas if you think everyone’s preferences count equally, this could lead to problems.

Market communities can also still fall prey to coordination failures like the tragedy of the commons.

Even absent these problems, market trades often still involve significant transaction costs, so mutually beneficial trades can fail to go ahead. It’s difficult to find trading partners, verify that they upheld their end of the bargain, and enforce contracts if they don’t.

When these transaction costs get too high relative to the gains from the trade, the trade stops being worthwhile.

Transaction costs tend to be low in large markets for simple goods (e.g. apples, pencils), but high for complex goods where it’s difficult to ascertain their quality, such as hiring someone to do research or management.

Some of the difficulties of hiring people are called principal-agent problems. The basic idea is that the person doing the hiring (the “principal”) doesn’t have perfect oversight of the person they hire (the “agent”). So if there are even small differences between their aims, the agent will typically do something quite different from what the principal would most want, in order to better further the agent’s own aims.

For instance, in divorce proceedings, the lawyers will often find small ways to spark conflict and draw out the process, because they’re paid per hour. This can often make the legal fees significantly higher than what would be ideal from the perspective of the divorcing couple.

Principal-agent problems seem to be one of the mechanisms behind community talent constraints.5

Economists spend most of their efforts studying market mechanisms for coordination, which makes sense given that markets have unleashed much of the wealth of the modern world. However, it’s possible to have coordination through other means. In the next section, we’ll discuss some ways to avoid coordination failures and then non-market mechanisms of trade.

How to avoid coordination failures in self-interested communities, and one reason it pays to be nice

Even if agents are selfish, and only care about their own aims, it’s still possible to avoid coordination failures. We’ll cover three main types of solution that communities can use to have a greater collective impact: structures, common knowledge, and norms.

Through structures

One way to avoid a prisoner’s dilemma is to give power to another entity that enforces coordination. For instance, if each prisoner is a member of the mafia, then they know that if they defect they’ll be killed. This means that each prisoner will be better off by staying quiet (regardless of what the other says), and this lets them achieve the optimal joint outcome. Joining the mafia changed the incentive structure, allowing a better result.

We do this in society with governments and regulation. Governments can (in theory) mandate that everyone take the actions that will lead to the optimal outcome, then punish those who defect.

Governments can also enforce contracts, which provide another route to avoiding coordination failures. Instead of the mafia boss who will kill prisoners who defect, each prisoner could enter into a contract saying they will pay a penalty if they defect.

In all modern economies, we combine markets with governments to enforce contracts and regulation to address market failures.

Most communities don’t have as much power as governments, but they can still set up systems to exclude defectors, and achieve some of these benefits.

Through common knowledge

In some coordination failures, a better outcome exists for everyone, but it requires everyone to switch to the better option at the same time. This is only possible in the presence of “common knowledge” — each person knows that everyone else is going to take the better option.

This means that avoiding coordination failures often requires the dissemination of this knowledge through the community, such as through media, or simple gossip. Communities that coordinate have developed mechanisms to spread this knowledge in a trusted way.

Read more about common knowledge.

Through nice norms and reputation

Perhaps the most important way to avoid coordination failures in informal communities is through norms and reputation.

In a community where you participate over time, you are essentially given the choice to cooperate or defect over and over again. The repeated nature of the game changes the situation significantly. Instead of defecting, it becomes better to earn a reputation for playing nice. This lets you cooperate with others, and do better over the long-term.

This sounds like common sense, but it was also studied more rigorously by Robert Axelrod in a series of tournaments in 1980.

Rather than theorise about which approach was best, Axelrod invited researchers to submit algorithms designed to play the prisoner’s dilemma. In each “round”, each algorithm was pitted against another, and given the choice of defecting or cooperating. The aim was to earn as many points as possible over all the rounds. This research is summarised in the 1981 paper The Evolution of Cooperation (pdf), and was later expanded into a book of the same name in 1984.

As we showed, in a single round of the prisoner’s dilemma the best strategy is to defect. However, in the tournament with multiple rounds, the algorithm that scored the most was “tit for tat”. It starts by cooperating, and from then on copies whatever its opponent did on the previous round. This algorithm was able to cooperate with other nice algorithms, while never getting exploited more than once in a row by defectors. This means that although tit for tat at best tied with its immediate opponent, it accumulated the highest total score across the tournament.6

Axelrod also performed an “ecological” version of the tournament, in which the number of copies of each algorithm in the population depends on its score so far, with the aim of mimicking an evolutionary process. In this version, groups of cooperative algorithms that were all nice to each other became more and more prevalent over time.

What’s more, if every algorithm adopts tit for tat, then they all always cooperate, leading to the best outcome for everyone, so it’s a great strategy for a group to adopt as a whole (especially if common knowledge of this exists).

Further research suggested that sometimes it’s even better to employ “tit for two tats”.

In this approach, if someone defects against you, you give them a second chance to cooperate before also defecting on them.

This prevents a “death spiral” in which initial defection causes two algorithms to get locked in a cycle of defecting. Tit for two tats is a bit more vulnerable to algorithms that defect, so whether it’s better depends on the fraction of aggressive defectors in the pool.
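These dynamics are easy to reproduce in a toy round-robin. The sketch below is not Axelrod’s actual tournament, just a minimal simulation with conventional illustrative payoffs (3 points each for mutual cooperation, 1 each for mutual defection, 5 and 0 for exploiting and being exploited) and a small, assumed pool of strategies:

```python
C, D = "C", "D"
# Conventional iterated prisoner's dilemma payoffs (points per round, higher is better).
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

# Each strategy sees only the opponent's history of moves and returns its next move.
def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else C

def tit_for_two_tats(opp_history):
    return D if opp_history[-2:] == [D, D] else C  # only retaliate after two defections in a row

def always_defect(opp_history):
    return D

def always_cooperate(opp_history):
    return C

def match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {
    "tit for tat": tit_for_tat,
    "tit for two tats": tit_for_two_tats,
    "always defect": always_defect,
    "always cooperate": always_cooperate,
}

# Round-robin: each strategy plays every strategy in the pool (including a copy of itself),
# and its tournament score is its total across all those matches.
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = match(strat_a, strat_b)
        totals[name_a] += score_a

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```

With this particular pool, tit for tat gets the highest total even though it never beats its immediate opponent, and tit for two tats finishes just behind it because there is one aggressive defector around to exploit it. Change the mix of strategies and the ranking shifts, which is exactly the point above about its performance depending on the pool.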

Axelrod summed up the characteristics of the most successful algorithms as follows:7

  • Be nice: cooperate, never be the first to defect.
  • Be provocable: return defection for defection, cooperation for cooperation.
  • Don’t be envious: focus on maximizing your own ‘score’, as opposed to ensuring your score is higher than your ‘partner’s’.
  • Don’t be too clever: or, don’t try to be tricky. Clarity is essential for others to cooperate with you.

It turned out that adopting these simple principles or “norms” of behaviour led to the best outcome for the agent. These mirror the norms of ordinary ethical behaviour — kindness, justice, forgiveness, resisting envy and honest straightforwardness — suggesting that following these norms is not only better for others, but also benefits the agent concerned.

This probably isn’t a coincidence. Human societies have likely evolved norms that support good outcomes for the individuals and societies involved.

What are the practical implications of this research? It’s difficult to know the extent to which Axelrod’s tournament is a good model of messy real life, but it certainly suggests that if you have ongoing involvement in a community, it’s better to obey the cooperative norms sketched above.

This is especially true because these norms seem to be deeply rooted in human behaviour. People will often punish defectors and reward cooperation almost automatically, out of a sense of justice and gratitude, even if doing so is of no benefit to them.

What’s more, in real life agents can share information about your reputation. In Axelrod’s tournaments, the algorithms didn’t know how others had behaved in other games. However, in real life if you defect against one person, then others will find out. This magnifies the costs of defection, especially in a tight knit community. Likewise, if you have a history of cooperating, then others will be more likely to cooperate with you, increasing the rewards of being nice.

So, when reputation is at play, the rewards to being nice, which was already the best strategy, go up even more.

What’s more, technology has made it easier and easier to track reputation over time, suggesting that the rewards to being nice are going up.

One striking fact about all of the above is that it still applies even if you only care about your own aims: even if you’re selfish, it’s better to be nice. This makes these strategies an example of “reciprocal altruism”. Reciprocal altruism is when it’s rational for a selfish agent to act altruistically, on the understanding that other agents will return the favour (reciprocate) at a later date.

However, obeying these norms is not only better for the agent concerned, it’s also better for everyone else in the community. This is because it helps the community get into the equilibrium where everyone cooperates, maximising the collective payoff.

For more on how coordination failures can be solved, see An equilibrium of no free energy by Eliezer Yudkowsky.

Pre-emptive and indirect trade

Now let’s return to ways to increase trade beyond market communities. We saw that even if we have a market, many worthwhile trades can fail to go ahead due to transaction costs. Communities have come up with several ways to avoid this problem.

One mechanism we call “pre-emptive trade”. In a pre-emptive trade, if you see an opportunity to benefit another community member at little cost to yourself, you take it. The hope is that they will return the favour in the future. This allows more trades to take place, since the trade can still go ahead even when the person isn’t able to pay it back immediately and when you’re not sure it will be returned. Instead the hope is that if you do lots of favours, on average you’ll get more back than you put in.

Pre-emptive trade is another example of reciprocal altruism — you’re being nice to other community members in the hope they will be nice in return. It’s yet another reason to be nice.

Going one level further, we get “indirect trade”. For instance, in some professional networks, people try to follow the norm of “paying it forward”. Junior members get mentoring from senior members, without giving the senior members anything in return. Instead, the junior members are expected to mentor the next generation of junior members. This creates a chain of mentoring from generation to generation. The result is that each generation gets the mentoring they need, but aid is never directly exchanged.

Other groups sometimes follow the norm of helping whoever is in need, without asking for anything directly in return. The understanding is that in the long-term, the total benefits of being in the group are larger than being out of it.

For instance, maybe A helps B, then B helps C and then C helps A. Then, all three have benefited, but no direct trades were ever made. Rather than “I’ll scratch your back if you scratch mine”, it’s a ring of back scratching.

This is also how families work. If you can help your brother with his homework, you don’t (usually) make an explicit bargain that your brother will help you with something else. Rather, you just help your brother. In part, there’s an understanding that if everyone does this, the family will be stronger, and you’ll all benefit in the long-term. Maybe your brother will help you with something else, or maybe because your brother does better in school, your mother has more time to help you with something else, and so on.

We call this “indirect trade” because people trade via the community rather than in pairs. It’s a way to have more trade without a market.

However, we can see that pre-emptive and indirect trade can only exist in the presence of a significant amount of trust. Normal markets require trust to function, because participants can worry about getting cheated, but ultimately people make explicit bargains that are relatively easy to verify. With indirect trade, however, you need to trust that each person you help will go on to help others in the future, which is much harder to check up on.

Moreover, from each individual’s perspective, there’s an incentive to cheat. It’s best to accept the favours without contributing anything in return. This means that the situation is similar to a prisoner’s dilemma.

Communities can avoid these issues through the mechanisms we’ve already covered. If community members can track the reputation of other members, then other members can stop giving favours to people who have a reputation for defecting. Likewise, if everyone obeys the cooperative norms we covered in the previous section, then these forms of trade can go ahead, creating further reason to follow the norms.

Pre-emptive and indirect trade can also only exist when each member expects to gain more from the network in the long-term than they put in. This is why professional networks tend to involve people of roughly similar ability and influence.

We’ve now seen several reasons to default to being nice to other community members: avoiding coordination failures and generating pre-emptive and indirect trades. There’s also some empirical evidence to back up the idea that it pays to be nice. In Give and Take, Adam Grant presents data that the most successful people in many organisations are “givers” — those who tend to help other people without counting whether they get benefits in return.

However, he also shows that some givers end up unsuccessful, and one reason is that they get exploited by takers. This shows that you need to be both nice and provocable. This is exactly what we should expect based on what we’ve covered.

Shared aim communities and “trade plus”

Everything we’ve covered so far is based on the assumption that each agent only cares about their own aims i.e. is perfectly self-interested.

What’s remarkable is that even if people are self-interested, they can still cooperate to a large degree through the mechanisms we’ve covered and end up ahead according to their own values. This suggests that even if you disagree with everyone in a community, you can still profitably work with them.

However, what about if the agents also care about each others’ wellbeing, or share a common goal? We call these “shared aims” communities. The effective altruism community is an example, because at least to some degree, everyone in the community cares about the common goal of social impact, and our definitions of this overlap to some degree. Likewise, environmentalists want to protect the environment, feminists want to promote women’s rights, and so on. How might cooperation be different in these cases?

This is a question that has received little research. Most research on coordination in economics, game theory and computer science has focused on selfish agents. Our speculation, however, is that shared aims communities have the potential to achieve a significantly greater degree of cooperation.

One reason is that members of a shared aims community might trust each other more, enabling more trade. But further than this, in a shared aims community you don’t even need to trade, because if you help another community member, then that achieves your goals as well. For instance, if you advise someone on their career and help them have a greater impact, you increase your own impact too. So, you both succeed.

We call this kind of exchange, in which nothing is given directly in return, “trade plus” or “trade+”. It’s even more extreme than indirect trade, since it’s worth helping people even if you never expect anyone else in the community to give you anything in return.

Trade+ has the advantage of potentially even lower transaction costs, since you no longer need to verify that the party has given you what they promised. This is especially valuable with complex goods and in situations with principal-agent problems.

For instance, you could decide you will only give advice to people in the community in exchange for money. But, it’s hard to know whether advice will be useful before you hear it, so people are often reluctant to pay for advice.

Providing advice for money also creates lots of overheads, such as a contract, tracking payments, paying tax and so on. A community can probably operate more efficiently if people provide advice for free when they see a topic they know something about.

On the other hand, trade+ can only operate in the presence of a large degree of trust. Each community member needs to believe that the others sincerely share the aims to a large enough degree. The more aligned you are with other community members, the more trade+ you can do.

As well as having the potential for more trade, shared aims communities should also in theory be the most resistant to coordination failures. Going back to the prisoner’s dilemma, if each agent values the welfare of the other, then they should definitely both cooperate.

They can also be more resilient to market failures, because people will care about the negative externalities of their actions, and have less incentive to exploit other community members.

In sum, shared aims communities can use all of the coordination mechanisms used by self-interested communities, but apply them more effectively, and use additional mechanisms, such as trade+.

To what extent is the effective altruism community a shared aims community?

It’s true there are some differences in values within the community, but there is also a lot of agreement. Most of us value welfare to a significant degree, including the welfare of animals, and most of us also put some weight on the value of future generations.

What’s more, even when values are different, our goals can still converge. For instance, people with different values might still have a common interest in global priorities research, since everyone would prefer to have more information about what’s most effective rather than less.

People in the community also recognise there are reasons for our aims to converge, such as:

  1. Normative uncertainty – if someone similar to you values an outcome strongly, that’s reason to be uncertain about whether you should value it or not, which merits putting some value on it yourself. Find out more in our podcast with Will MacAskill.

  2. Epistemic humility – if other agents similar to you think that a statement, P, is true, then that’s evidence you should believe it too. Read more.

Taking these factors into account leads to greater convergence in values and aims.

Summary: how can shared aims communities best coordinate?

We’ve shown that by coordinating, the members of a community can achieve a greater impact than if they merely think in a single player way.

We’ve also sketched some of the broad mechanisms for coordination, including:

  1. Trade.
  2. Specialisation and economies of scale (made possible by trade).
  3. Strategies for avoiding coordination failures, such as structural solutions, common knowledge, nice norms and reputation.

We’ve also shown there are several types of trade.

| Type of trade | What it involves | Need to share aims? | Example | Comments |
|---|---|---|---|---|
| Market trade via barter | Two parties make an explicit swap | No | You help a friend with their maths homework in exchange for help with your Chinese homework | Significant transaction costs finding parties and verifying the conditions are upheld. Market and coordination failures possible. |
| Market trade via currency | One party buys something off another, in exchange for a currency that can be used to buy other goods | No | Buying a house | Lower transaction costs finding parties, but still need to verify. Market failures and coordination failures possible. |
| Indirect trade and pre-emptive trade | A group of people help each other, in the understanding that in the long term they’ll all benefit | No | Some professional networks and groups of friends. Paying it forward. | Only works if there’s enough trust or a mechanism to exclude defectors, and the mutual gains are large enough to make everyone win in expectation. |
| Trade+ | You help someone else because if they succeed, it furthers your aims too | Yes | You mentor a young person who you really want to see succeed | Low transaction costs and resistant to coordination failures, but only works if there’s enough trust that others share your aims. |

And we’ve sketched three types of community:

  • Single-player communities — where other people’s actions are (mostly) treated as fixed, and the benefits of coordination are mainly ignored.

  • Market communities — which use markets and price signals to facilitate trade, but are still vulnerable to market failures and coordination failures.

  • Shared aims communities — where members share a common goal, potentially enabling trade+ and greater resilience to coordination failures, and so perhaps the greatest degree of coordination.

We also argued that the effective altruism community is at least partially a shared aims community.

Now, we’ll get more practical. What specific actions should people in shared aims communities do differently in order to better coordinate, and therefore have a greater collective impact?

This is also a topic that could use much more research, but in the rest of this article we will outline our current thinking. We’ll use the effective altruism community as a case study, but we think what we say is relevant to many shared aims communities.

We’ll cover three broad ways that the best actions change in a shared aims community: adopt cooperative norms, consider community capital, and take the portfolio approach to working out what to do.

1. Adopt “nice norms” to an exceptional degree

Why follow cooperative norms?

Much of “common sense” morality is about following ethical norms, such as being kind and honest — simple principles of behaviour that people aim to uphold. What’s more, many moral systems, including types of virtue ethics, deontology and rule utilitarianism, hold that our primary moral duty is to uphold these norms.

Effective altruism, however, encourages people to think more about the actual impact of their actions. It’s easy to think, therefore, that people taking an effective altruism approach to doing good will put less emphasis on these common sense norms, and violate them if it appears to lead to greater impact.

For instance, it’s tempting to exaggerate the positive case for donating in order to raise more money, or to be less considerate to those around you in order to focus more on the highest-priority ways of doing good.

However, we think this would be a mistake.

First, effective altruism doesn’t say that we should ignore other moral considerations, just that more weight should be put on impact. This is especially true if you consider moral uncertainty.

Second, and more significantly, when we consider community coordination, these norms become vitally important. In fact, it might even be more important for members of the effective altruism community to uphold nice norms than is typical — to uphold them to an exceptional degree. Here are some reasons why.

First, as we’ve seen, upholding nice norms even allows self-interested communities to achieve better outcomes for all their community members, since it lets them trade more and avoid coordination failures.

Further, the effective altruism community is partly a shared aims community. This means the potential for coordination is even higher, and so the rewards of following nice norms are also even higher than in a normal community, creating even greater reason to follow them.

What’s more, by failing to follow these norms, you not only harm your own ability to coordinate, but you also reduce trust in the community more broadly. You might also encourage others to defect too, starting a negative cycle. This means that the costs of defecting spread across the community, harming the ability of other people to coordinate too — it’s a negative externality. If you also care about what other community members achieve, then again, this gives you even greater reasons to follow nice norms.

The effective altruism community also has a mission that will require coordinating with other communities. This is hard if its members have a reputation for dishonesty or defection. Indeed, people normally hold communities of do-gooders to higher standards, and will be suspicious if people claim to be socially motivated but are frequently dishonest and unkind. We’ll come back to the importance of community reputation in a later section.

On the other side, if a community becomes well known for its standards of cooperativeness, then it can also encourage outsiders to be more cooperative too.

In combination, there are four levels of effect to consider:

| | Effects on reputation | Effects on social capital |
|---|---|---|
| Internal effects | Effects on your reputation within the community | Effects on community social capital |
| External effects | Effects on the community’s reputation | Effects on societal social capital |

A self-interested person, however, would only care about one of these boxes: the effects on their own reputation within the community. But if you care about all four, as you should in a shared aims community with a social mission, there is much greater reason than normal to follow these norms.

You can see a more detailed version of this argument in “Considering Considerateness”.

Which norms should the community aim to uphold in particular? In the next couple of sections we get more specific about which norms we should focus on, how to uphold them, and how to balance them against each other. If this is too much detail, you might like to skip ahead to community capital.

In summary, we’ll argue the following norms are important among the effective altruism community in particular.

  1. Be exceptionally helpful.
  2. But withhold aid from those who don’t cooperate, and forgive.
  3. Be exceptionally honest.
  4. Be exceptionally friendly.
  5. Be willing to compromise.

Be exceptionally helpful

If you see an opportunity to help someone else with relatively little cost to yourself, take it.

Defaulting to helpfulness is perhaps the most important norm, because it underpins so many of the mechanisms for coordination we covered earlier. It can trigger pre-emptive and indirect trades; and it can help people cooperate in prisoner’s dilemmas. Doing both of these builds trust, allowing further trades and cooperation.

It’s likely that people aren’t as helpful as would be ideal even from a self-interested perspective. Like the other norms we’ll cover, being helpful involves paying a short-term cost (providing the help) in exchange for a long-term gain (favours coming back to you). Humans are usually bad at doing this.

Indeed, it’s even worse because situations where we can aid other community members often have a prisoner’s dilemma structure. In a single round, the dominant strategy is to accept aid but not give aid, since whatever your partner does, you come out ahead. It’s easy to be short-sighted and forget that in a community we’re in a long-term game, where it ultimately pays to be nice.

The general arguments for why shared aims communities should uphold nice norms to an exceptional degree apply equally to helpfulness:

  • It’s worth being even more helpful than normal due to the possibility of trade+.
  • By being helpful you’ll also encourage others in the community to be more helpful, further multiplying the benefits.
  • Being helpful can also help the community as a whole develop a reputation for helpfulness, which enables the community to be more successful and better coordinate with other groups. For this reason, it’s also worthwhile being helpful to other communities (though to a lesser degree than within the community since aims are less aligned).

How can we be more helpful?

One small concrete habit to be more helpful is “five minute favours” — if you can think of a way to help another community member in five minutes, do it.8 Two classic five minute favours are making an introduction and recommending something to read.

But it can be worth taking on much bigger ways to help others. For instance, we know someone who lent another community member $10,000 so they could learn software engineering. This let them donate much more than they otherwise would have, creating much more than $10,000 of value for the community (and they repaid the loan).

Another option is that if you see a great opportunity, such as an idea for a new organisation, or a way to get press coverage, it can be better to give this opportunity to someone else who’s in a better position to take it. It’s like making a pass in football to someone who’s in a better position to score.

Withhold aid from those who don’t cooperate, but show forgiveness

One potential downside of being helpful is that it could attract “freeloaders” — people who take help from the community but don’t share the aims, and don’t contribute back — especially as the community gains more resources. There are plenty of examples of people using community trust to exploit the members, such as in cases of affinity fraud.

But being nice doesn’t mean being exploitable. As we saw, the best algorithms in Axelrod’s tournament would withdraw co-operation from those who defected on them. We also saw that if freeloaders can’t be excluded from a community, it’s hard to maintain the trust needed for indirect trade. This means it’s important to track people’s reputation, and stop helping those who don’t contribute.

However, this decision involves a difficult balancing act. First, withdrawing help too quickly could make the community unwelcoming. Second, negative information tends to be much more sticky than positive, so one negative rumour can have a disproportionately negative impact on someone’s reputation.

A third risk is “information cascades”. For instance, if you hear something negative about Bob, then you might tell Cate your impression, giving Cate a negative impression too. Then if David hears that both you and Cate have a negative impression (which now looks like two independent data points, when really there is only one), David could form an even more negative impression, and so on. This is the same mechanism that contributes to financial bubbles.

It can also happen on the positive side — a positive impression of a small number of people can spread into a consensus. Amanda Askell called this “buzztalk” and pointed out that it tends to disproportionately benefit those who are similar to existing community members.

How can a community avoid some of these problems, while also being resistant to freeloaders?

One strategy is to be willing to withdraw aid but remain highly forgiving, especially with people who are new, while becoming less forgiving with each repeated transgression.

In Algorithms to Live By, the authors speculate that a strategy of “exponential backoff” could be optimal. This involves doubling the penalty each time a transgression is made. For instance, the first time someone breaks a norm, you might forgive them right away; the next time, you might withhold aid for a month; then for two months, four months, and so on. You’re always willing to forgive, but it takes longer and longer each time.
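As a quick sketch of what that schedule looks like (illustrative numbers of our own, following the forgive-then-double pattern described above):

```python
def months_withholding_aid(offence_number, base_months=1):
    # First transgression is forgiven immediately; after that the penalty doubles
    # each time: 1 month, 2 months, 4 months, ...
    if offence_number <= 1:
        return 0
    return base_months * 2 ** (offence_number - 2)

print([months_withholding_aid(n) for n in range(1, 6)])  # [0, 1, 2, 4, 8]
```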

Second, you can withdraw cooperation, but in a friendly way. If you think someone is freeloading, or otherwise having a negative impact on the community, then don’t actively go after them or spread negative rumours. Rather, start by letting them know about the problem privately. If they persist, quietly stop providing support. You can remain friendly throughout.

The downside of this strategy is that you’re not alerting others to the issue. However, we suspect it’s better to be cautious, and only go after someone’s reputation in the most serious cases.

Third, if you are using information about the cooperativeness of other community members, always ask for the reasons behind someone’s impressions. This can help to avoid information cascades.

Find out: are their impressions based on information directly about the person’s abilities and cooperativeness, or are they just based on someone else’s impressions? Direct data about behaviour should be weighted more highly, and second-hand impressions should be downweighted. Watch out for cases when lots of people have a negative impression based on the same small number of data points. Similarly, if you are sharing your view of someone, try to give the reasons for your assessment, not just your view. (Read more about “beliefs” vs. “impressions”.)
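Here is a toy illustration of why that matters (purely illustrative, reusing the hypothetical names from the example above): counting stated impressions overstates the evidence when most of them trace back to the same original data point.

```python
# Each report records who holds the impression and where it ultimately came from.
reports = [
    {"person": "You",   "impression": "negative", "basis": "direct experience with Bob"},
    {"person": "Cate",  "impression": "negative", "basis": "heard it from You"},
    {"person": "David", "impression": "negative", "basis": "heard it from You and Cate"},
]

stated_impressions = sum(r["impression"] == "negative" for r in reports)
independent_data_points = sum(
    r["impression"] == "negative" and r["basis"].startswith("direct experience")
    for r in reports
)
print(f"{stated_impressions} negative impressions, {independent_data_points} independent data point(s)")
```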

A fourth strategy is to have community managers, and other structures to collect information. Julia Wise serves as the community manager for effective altruism, and there are also managers who focus on the major geographical areas.

If you’re concerned that someone is freeloading, or otherwise being harmful to the community, you can let a manager know. This helps them put together multiple data points, and avoid information cascades.

What’s more, if action does need to be taken, the community manager is usually in a better position. For instance, they can exclude people from events, and have more experience dealing with these kinds of situations. We’ll talk about other infrastructure to support coordination later.

Be exceptionally honest

By honesty, we mean both telling the truth and fulfilling promises.

Honesty is just as important as helpfulness in enabling trade, because it supports trust. This is true even in self-interested markets — if people think a market is full of scammers who are lying about the product, they won’t participate. As we showed, trust is even more important in shared aims communities, and for communities that want a positive reputation with others.

Honesty is especially important within the effective altruism community, because it’s also an intellectual community. Intellectual communities have even greater need for honesty, because it’s hard to make progress towards the truth if people hide their real thinking.

What does exceptional honesty look like? Normal honesty requires not telling falsehoods about important details, but it can be permissible to withhold relevant information. If you were fundraising from a donor, it wouldn’t be seen as dishonest if you didn’t point out the reasons they might not want to donate.

However, exceptional honesty probably requires that in high stakes situations, you actively point out important negative information. Not doing so could lead the community to have less accurate beliefs over time, and withholding negative information still reduces trust.

The major downside of this norm is that sharing negative information can be unfriendly. Being criticised makes people defensive and annoyed. What’s more, negative information tends to be much more memorable than positive. This is why it’s common sense that people should aim to say more positive things than negative.

It also seems common for rumours to persist, even when they’ve already been shown to be untrue. This is because not everyone who hears the original rumour will hear the debunking, so rumours are hard to unwind. What’s more, hearing the debunking of a rumour can make some people suspect it was true in the first place. Most people would not want to see the article “why [your name] is definitely not into bestiality” at the top of Google search.

We’re still unclear how to trade friendliness against honesty, and there’s significant disagreement in the community. However, there are ways to be both honest and friendly, and this is probably where to focus first. For instance:

  1. Be as friendly as possible the rest of the time. This gives your relationship more “capital” for criticism later.

  2. Only say negative information that’s important. If no major decision rides on the information, or the person is only making a small mistake, probably don’t mention it.

  3. Make extra effort to ensure that negative claims are accurate.

  4. Be less negative online. Online debate makes everyone much less friendly than they’d ever be in person, because it’s harder to read people, you get less feedback and online debate easily turns into a competition to win in front of an audience. Negative claims are often best discussed in person.

  5. Start by sharing negative opinions in private, on a need-to-know basis. For instance, if you think someone is behaving badly, let them know one-on-one first so they have a chance to change. This helps to reduce needless negative reputation risks, and preserves options. If you don’t see a change, you can always go public later.

One form of honesty that doesn’t come across as unfriendly is honesty about your own shortcomings and uncertainties (i.e. humility). So, putting greater emphasis on humility is a way to make the community more honest with fewer downsides. In particular, intellectual humility — being open about what you don’t know — is vital for our success as an intellectual movement.

With people who are not honest, we can respond to them in a similar way to those who are not helpful. We can start by privately giving them feedback, then we can start to withhold aid, and then alert community managers.

Read more about why to be honest in this essay by Brian Tomasik and Sam Harris’s book, Lying.

Be exceptionally friendly

As covered, the downside of being willing to withhold cooperation and being exceptionally honest is that it can make the community unwelcoming. So, we need the norm of friendliness to balance these others.

Unfriendliness also seems like a greater danger for the effective altruism community compared to other communities because a core tenet involves focusing on the most effective projects. This means ranking projects and, implicitly, not supporting the less effective ones. This is an unfriendly thing to do by normal standards.

We also observe that the community tends to attract people who are especially analytical and relatively weak on social skills. In addition, we do a lot of our interaction online, which encourages unfriendliness.

By “unfriendly” we mean there’s a tendency to have unpleasant social experiences when interacting with the community. In particular, we’re talking about the factors over and above the other norms we’ve covered, such as honesty, forgiveness, and helpfulness (though all of these can contribute to friendliness). Instead, we mean additional elements like being rude, boasting, making people uncomfortable, and making people feel treated as a means to an end.

Why is friendliness so important? We’re all human, and we’re involved in an ambitious project, with people different from ourselves. This makes it easy for emotions to get frayed.

Frayed emotions make it hard to work together, increasing the costs of trade and decreasing cooperation. They also make it hard to have a reasonable intellectual discussion and get to the truth. It will be hard to be exceptionally honest with each other if we don’t have a bedrock of positive interactions beforehand. In this way, friendliness is like social oil that enables the other forms of cooperation.

What’s more, many social movements fracture, and achieve less than they could. Often this seems to be based on personal falling outs between community members. If we don’t want this to happen, then we need to be even more friendly than the typical social movement.

It’s even worth being friendly and polite to those who aren’t friendly to you, since it lets you take the high ground and helps to defuse the situation. (Though you don’t need to aid and cooperate with them in other ways.)

What are some ways we can be more friendly?

One option is to get in the habit of pointing out positive behaviours and achievements. Our natural tendency is to focus on criticism, which can easily make the culture negative. But you can adopt habits like always pointing out something positive when you reply to someone online, or trying to say one positive thing at each event you go to.

Relatedly, we can spend more time getting to know other community members as people: find out what they’re interested in, what their story is, and have fun. Since effective altruism is all about having an impact, it can be easy for people to feel like they’re just instruments in service of the goal. So, we need to make extra efforts to focus on building strong connections.

Both of the above are ways to build social capital which makes it easier to get through difficult situations. Another way to increase friendliness is to better deal with difficult situations when they’ve already arisen.

A common type of a difficult situation is a disagreement. If handled badly, a disagreement can lead each side to become even more entrenched in their views, and could ultimately cause a split in the community.

But there are a couple of great guides to dealing with disagreements. One is Daniel Dennett’s four steps. Robert Wiblin also wrote “Six ways to get along with people who are totally wrong”.

These guides recommend more specific applications of the general principles we’ve already covered: (i) always stay polite; (ii) point out where you agree with your opponent and what you’ve learned from them; (iii) when giving negative feedback, be highly specific, and only say what is necessary.

Be more willing to compromise

One special case of friendliness and helpfulness is compromising. Compromise is how to deal with other community members when you have a disagreement you’ve not been able to resolve.

If you’re aiming to maximise your impact, it can be tempting not to compromise, and instead to focus on the option you think is best. But from a community point of view, this doesn’t make as much sense. Instead, if others are being cooperative with you, then we argue it’s probably best to follow this principle (quoting from Brian Tomasik):

If you have an opportunity to significantly help other value systems at small cost to yourself, you should do so.
Likewise, if you have opportunity to avoid causing significant harm to other value systems by foregoing small benefit to yourself, you should do so.

Here’s why. Suppose there are two options you could take. By your lights, they’re about equally valuable, whereas according to others in the community, one is much better than the other:

                               Option A    Option B
Your assessment of value       10 units    11 units
Others’ assessment of value    30 units    10 units

From a single player perspective, it would be better to take option B, since it’s 1 unit higher impact. However, in a community that’s coordinating, we argue it’s often best to take option A.

First, you’re likely undervaluing option A. Due to both epistemic humility and moral uncertainty, if others in the community think option A is much better, you should take that as evidence that it’s better than you think. At the very least, it could be worth talking to others about whether you’re wrong.

But, even if we put this aside and assume that everyone’s assessments are fixed, there’s still reason to take option A. For instance, you might be able to make a direct trade: you agree to take option A in exchange for something else. Since option A is only 1 unit of impact worse in your eyes, but it’s 20 units of impact higher for others, the others make a large surplus. This means it should be easy for them to find something else that’s of value to you to give in return. (Technically, this is a form of moral trade.)
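To make the surplus concrete, here’s a minimal sketch using the toy numbers from the table above (the specific favour that gets traded is left abstract):

```python
# Toy numbers from the table above.
your_values   = {"A": 10, "B": 11}
others_values = {"A": 30, "B": 10}

your_cost_of_switching_to_A = your_values["B"] - your_values["A"]      # 1 unit, by your lights
others_gain_from_A          = others_values["A"] - others_values["B"]  # 20 units, by theirs

# Any favour that's worth more than 1 unit to you, and costs the others less
# than 20 units of their value, makes the swap a win for both sides.
print(your_cost_of_switching_to_A, others_gain_from_A)
```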

However, rather than trying to make an explicit trade, we suggest it’s probably often better just to take option A. Just doing it will generate goodwill from the others, enabling more favours and cooperation in the future, to everyone’s benefit. This would make it a form of pre-emptive or indirect trade.

Compromise can also help avoid the unilateralist’s curse, which is another form of coordination problem. It’s explained in a paper by Bostrom, Sandberg and Douglas:

A group of scientists working on the development of an HIV vaccine have accidentally created an airborne transmissible variant of HIV. They must decide whether to publish their discovery, knowing that it might be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons. Most members of the group think publication is too risky, but one disagrees. He mentions the discovery at a conference, and soon the details are widely known.

In this case, the discovery gets shared, even though almost all of the scientists thought it shouldn’t have been, leading to a worse outcome according to the majority. In general, if community members act in a unilateralist way, then the agenda gets set by whichever members are least cautious, which is unlikely to lead to the best actions.

Bostrom points out that this could be avoided if the scientists agreed to compromise by going with the results of a majority vote, rather than their individual judgement of what’s best.

Alternatively, if all the scientists agreed to the principle above — don’t pursue actions that a significant number of others think are harmful — then they could have avoided the bad outcome too.
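To see why unilateral action becomes riskier as a group grows, here’s a minimal sketch with an invented 10% chance that any individual misjudges the risk: the chance that at least one person acts alone rises quickly with group size, while the chance that a majority vote would approve the action stays small.

```python
from math import comb

def p_unilateral_action(n, p):
    """Chance that at least one of n actors misjudges the risk and acts alone."""
    return 1 - (1 - p) ** n

def p_majority_action(n, p):
    """Chance that a strict majority of n actors misjudges (so a vote would pass)."""
    needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(needed, n + 1))

for n in [1, 5, 10, 20]:
    print(n, round(p_unilateral_action(n, 0.1), 3), round(p_majority_action(n, 0.1), 3))
```

With these assumed numbers, a group of 20 is almost certain to see at least one unilateral release, while a majority vote would almost never approve it.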

Unfortunately, we’ve seen real cases like this in the community.

Instead, it would be better if everyone had a policy of avoiding actions that others think are negative, unless they have very strong reasons to the contrary. When it comes to risky actions, a reasonable policy is to go with the median view.

How to uphold norms?

We’ve covered a number of norms:

  1. Be exceptionally helpful, honest and friendly.
  2. Exclude those who violate the norms, but be forgiving, especially initially.
  3. Be more willing to compromise.

If we had to sum them up in a line, it might be: treat others as you would like to be treated.

If we can uphold these norms, then we have the potential to achieve a much greater level of coordination.

Of course, they are not perfect guides. There will be times when it’s better to break a norm, or when they strongly conflict. In these cases, we need to slow down and think through what course of action will lead to the greatest long-term impact. However, the norms are a strong default.

How can we better uphold these norms?

As individuals, we should focus on building character and habits.

We’ve covered specific ways we can uphold each of these norms as individuals. More broadly, we need to focus on building character. Upholding these norms requires constant work, because they involve short-term costs for a long-term gain. For instance, having a reputation of being honest means people will trust you, which is vital. But maintaining this reputation will sometimes require revealing negative facts about yourself, which is difficult in the moment.

To overcome this, we can’t rely on willpower each time an opportunity to lie comes up. Rather, we need to build habits of acting honestly, so that our default inclination is to tell the truth. This creates a further reason to follow nice norms: following them when it’s easy builds our habits and character, making it easier to follow them when it’s hard.

As a community, we should state the norms we want to uphold, and appoint people to encourage the norms.

One other small measure is to publicly proclaim the norms we want to uphold. The Centre for Effective Altruism took a step in this direction by publishing their guiding principles, and this article goes into more depth. Public statements help individuals get clearer on what we want to aim towards.

Having the norms written down also makes it easier for individuals to spot when they are violated, give feedback to those violating them, and eventually withhold aid.

To handle more serious situations, we can appoint community managers. They can keep track of issues with the norms, and perhaps use formal powers, like excluding people from official events.

Organisations can consider the norms in deciding who to hire, or who to fund, or who to mentor.

This helps to ensure that leaders in the community uphold the norms.

Finally, we can better uphold the norms by establishing community capital to share information, which is what the next section is about.

2. Value community capital and set up community infrastructure

The first way communities can better coordinate is by upholding norms. The second, which we cover in this section, is by building community capital.

What is community capital?

An individual can either try to have an immediate impact, or they can invest in their skills, connections and credentials, and put themselves in a better position to make a difference in the future, which we call career capital.

When you’re part of a community, there’s an analogous concept: the ability of the community to achieve an impact in the future, which we call “community capital”.

Community capital depends firstly on the career capital of its members. However, due to the possibility of coordination, community capital is potentially greater than the sum of individual career capital. Roughly, we think the relationship is something like:

Community capital = (Sum of individual career capital) * (Coordination ability)

This extra factor is why being in a community can let us have a greater impact.

Coordination ability depends on whether members use the mechanisms we cover in this article, such as norm following, infrastructure and the portfolio approach. It also depends on an additional factor — the reputation of the community — which we’ll cover shortly.

Coordination ability matters about as much as the sum of the members’ individual career capital, because the two multiply together. This could explain why small but well-coordinated groups are sometimes able to compete with much larger ones, such as the campaign to ban landmines, and many other social or political movements.
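Here’s a toy illustration of the formula with invented numbers, showing how a smaller, better-coordinated group can end up with more community capital than a larger, fragmented one:

```python
def community_capital(num_members, avg_career_capital, coordination_ability):
    """Toy version of: community capital = (sum of career capital) x (coordination ability)."""
    return num_members * avg_career_capital * coordination_ability

# Invented numbers for illustration only.
small_but_coordinated = community_capital(100, 1.0, 3.0)  # 300
large_but_fragmented  = community_capital(500, 1.0, 0.5)  # 250
print(small_but_coordinated, large_but_fragmented)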

This means that when comparing career options (or other types of options), you should not only consider the normal elements of our career framework, but also your impact on the influence and coordination ability of the community. Our career framework becomes:

  1. Role impact
  2. Career capital
  3. Personal fit
  4. Plus: Community capital

We’ll explain some of the implications of this extra factor for career choice in what follows.

How can you help to build community capital?

The existence of community capital provides an extra avenue for impact: rather than do good directly, or build your individual career capital, you can try to build community capital.

As covered, to build community capital you can either increase the career capital of its membership, or you can increase its ability to coordinate.

We’ve pointed out that increasing the membership of the community can easily be more effective than trying to do good directly. The basic argument is the “multiplier argument”. It often seems possible to spend several months on community building, and find another community member who goes on to invest perhaps ten times as much time into whatever the community thinks is highest-priority. If you can do this, then it’s about ten times more effective than trying to have an impact directly.

One important caveat is that the contribution of new members depends on their resources, dedication and realisation. These factors multiply together, and since each varies widely, it has a much greater impact to bring in a small number of aligned, talented and dedicated people than to bring in perhaps hundreds of people who are only tangentially involved.
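A rough simulation (with invented distributions) of why these multiplying factors tend to concentrate most of the community’s potential contribution in a small number of highly involved members:

```python
import random

random.seed(0)

def member_contribution():
    # Invented, roughly order-of-magnitude spreads for each factor.
    resources   = random.choice([1, 3, 10])
    dedication  = random.choice([0.1, 0.3, 1.0])
    realisation = random.choice([0.1, 0.3, 1.0])
    return resources * dedication * realisation

contributions = sorted((member_contribution() for _ in range(1000)), reverse=True)
top_decile_share = sum(contributions[:100]) / sum(contributions)
print(round(top_decile_share, 2))  # the top 10% of members account for roughly half the total
```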

Another caveat is that many community building efforts are much less effective than this, and it’s even easy for them to cause harm, for instance, by harming the community’s reputation or displacing even better projects.

A third caveat is that the larger the membership of a community, the harder it is to coordinate. This is because the more people there are, the harder it is for information to spread, to maintain the right norms, and to avoid tragedies of the commons. So, increasing the membership of the community decreases coordination ability, attenuating the benefits.

Rather than increasing the membership of the community, you can increase community capital by increasing its ability to coordinate.

Our guess is that increasing coordination ability is likely a bigger bottleneck than increasing the membership right now. We roughly estimate that the community might be able to have twice as much impact if it could coordinate better, and achieving this gain seems easier than doubling the community’s membership while maintaining the same level of coordination.

This is partly because increased coordination seems more neglected than increasing membership. It’s also simply because the community has grown significantly in the last five years, and coordination infrastructure often becomes more valuable the larger the community becomes.

Our survey showed that a majority of leaders in the community broadly agree – they think it’s higher-impact to increase coordination than to find new people who are moderately involved.

We can increase coordination ability by increasing norm following as covered. But there are two other ways to increase coordination ability, which we’ll now cover: setting up community infrastructure and improving the community’s reputation.

The importance of the community’s reputation

If you do something that’s widely seen as bad, it will harm your own reputation, but it will also harm the reputation of the communities you’re in. This means that when you join a community, your actions also have externalities on other community members. These can often be bigger than the effects of your actions on yourself.

On the other hand, a community’s positive reputation is a major asset. It lets members of the community quickly establish trust and credibility, which makes it easier to work with other groups. It also lets the community grow more quickly to a greater scale.

For instance, the startup accelerator Y Combinator started to take off after Airbnb and Dropbox became household names, because that gave them credibility with investors and entrepreneurs. Our community needs its own versions of these widely recognised success stories.

Likewise, if you hold yourself to high standards of honesty, then it builds the community’s reputation for honesty, making the positive impact of your honesty even greater than it normally would be.

In this way, reputation effects magnify the goodness or badness of your actions in the community, particularly those that are especially salient and memorable.

The importance of these externalities can increase with the size of the community. In a community of 100 people, if you do something controversial, then it harms or benefits the reputation of 100 other people; while in a community of 1000, it affects the reputation of 1000.

On the other hand, as the community gets larger, your actions don’t matter as much — you make up a smaller fraction of the community, which means you have less impact on its reputation.

In most cases, we think the two effects roughly cancel out, and reputation externalities don’t obviously increase or decrease with community size. But we think there’s one important exception: viral publicity, especially if it’s negative.

If someone in the community was in the news because they committed murder, this would be really bad for the community’s reputation, almost no matter its size. This is because a single negative story can still be memorable and spread quickly, and so have a significant impact on how the community is perceived independent of whether the story is representative or not. It’s the same reason that people are far more afraid of terrorism and plane crashes than the statistics suggest they should be.

This would explain why organisations seem to get more and more concerned to avoid negative stories as they get larger.

Perhaps the most likely way for the effective altruism community to have a major setback is to get caught up in a major negative scandal, which permanently harms our reputation.

The importance of avoiding major negative stories has a couple of consequences.

First, it means we should be more reluctant to take on controversial projects under the banner of effective altruism than we would if we were only acting individually.

Second, it makes it more worth investing time in better framing our messages, and our ability to respond to PR crises.

Third, it makes it even more important to uphold the cooperative norms we covered earlier. Being dishonest, unfriendly and unhelpful is a great way to ruin the community’s reputation as well as your own.

Setting up community infrastructure

Another way to increase the coordination ability of a community is to set up community infrastructure: systems that help the community to coordinate better.

For instance, when there were only 100 people in the community, having a job board wasn’t really needed, since the key job openings could spread by word-of-mouth and it was easy to keep track of them. However, when there are thousands of people and tens of organisations, it becomes much harder to keep track of the available jobs. This is why we set up a job board in 2017.

Community infrastructure can become more and more valuable as the community grows. If you can make 1,000 people 1% more effective, that’s like having the impact of 10 people, while making 10,000 people 1% more effective is like having the impact of 100.9 This means that as the community grows, new types of projects become worth taking.

Community infrastructure enables coordination in a number of ways:

  1. Lets people share information about needs and resources, which helps people make better decisions and also facilitates trades.
  2. Lets people share information about reputation, encouraging people to uphold the norms.
  3. Provides mechanisms through which trades can be made.
  4. Provides ways to share common resources, leading to economies of scale.

Here are some more examples of infrastructure we already have:

  1. The EA Global conferences and local groups — they provide a way to meet other community members, share information, and start working together.
  2. The EA Funds — enable multiple small donors to pool their resources, paying for more research.
  3. The EA Forum — a venue for sharing information about different causes and opportunities, as well as discuss community issues.
  4. The newsletter — aggregates information about all the content and organisations in the community.
  5. Content published by the organisations (e.g. the 80,000 Hours career guide) — aggregates information and research, to avoid doubling up work.

What are some other examples of infrastructure that might be useful?

We could see attempts to set up a market for impact as a type of community infrastructure, as we covered earlier.

Another category of projects is more objective ways to share information about reputation, intentions and skills. Right now, it can be difficult to know who to help or who to hire, since it’s hard to know how aligned people are. The best you can do is see if someone has attended events or taken the Giving What We Can Pledge, or rely on word of mouth. It might be useful to have more options to track, such as a fellowship. Though, projects like these are high stakes, so we’d want any attempt to consult widely and proceed cautiously.

There are many more ways we could better pool and share information. For instance, donors often run into questions about how to handle tax when donating a large fraction of their income, and these issues are often not well handled by normal accountants because it’s rare to donate so much. It could be useful for community members to pool resources to pay for tax advice, and then turn it into guides that others could use.

One general principle is to avoid needless proliferation of decision-makers. This is because in general the more decision-makers there are, the harder it is to coordinate:

  1. There are more groups who need to be informed about the needs of the community.
  2. It’s harder to avoid tragedies of the commons the more people are involved, since if one person starts to defect, it can trigger others to defect too.
  3. It’s harder to avoid the unilateralist’s curse the more people there are who might defect.

This is an argument for centralising key functions in the community, such as giving donation advice, or running local groups.

That said, the benefits of centralisation need to be weighed against the costs of less diversity of opinion, so our impression is that often the optimal balance of the two is to have between one and five groups working on the same issue.

Summary on community capital

When coordinating, increasing community capital can become a key goal. Individuals can increase community capital by:

  1. Increasing its quality-adjusted membership.
  2. Increasing norm following.
  3. Improving community infrastructure.
  4. Improving reputation.

This means that once you’re part of a community, there’s a whole class of new career options that open up (those that involve building community capital), and new considerations, such as community reputation, become key in evaluating projects.

Besides changing which norms you follow and considerations around community capital, there is a third way that the highest-impact careers change when you’re in a community. You need to evaluate which options are best in a different way, which we call the portfolio approach.

3. Take the portfolio approach

Introducing the approach

Once you’re part of a community, how should you work out what’s best to do?

If the community is unresponsive to what you do, you can (mostly) take a single-player approach to working out the best action.

However, if you can coordinate with the rest of the community, then this will become misleading. Instead, you should start to take the “portfolio approach” (the name comes from Owen Cotton-Barratt).

The portfolio approach means starting to view the community more like a single entity maximising its impact. To work out the best option, do something like the following three steps:

  1. Make a list of the best opportunities open to the community as a whole — everything that is above the “community bar”.
  2. Work out the ideal allocation of people over those opportunities to maximise impact.
  3. Consider what you can do as an individual to best move the community towards that ideal allocation at the margin over the coming years.

In contrast, in the single-player approach, you would try to work out what the community will do without you, then look for the best remaining opportunity holding everyone else fixed. This is different because it ignores opportunities to trade and “shuffle people around” the best opportunities.

These steps are often very difficult, but we can look for rules of thumb that approximate them.

Moreover, in practice the community won’t be perfectly responsive, and lots of worthwhile “shuffles” won’t happen. However, we can still consider a partially responsive community, in between the two extremes:

  1. Pure single-player: everyone else is fixed, choose the best remaining opportunity.
  2. Fully responsive community: everyone is willing to swap to any job above the community bar.

As we’ll show, even in the partially responsive case where there’s a lot of uncertainty about the best allocation, the portfolio approach has implications for many key questions in career planning, such as which problem area to focus on, which career to take and whether to invest in career capital or have an impact right away.

Be more willing to specialise

Let’s suppose you want to build and sell a piece of software. One approach would be to learn all the skills needed yourself – design, engineering, marketing and so on.

A better approach in the long-term is to form a team who are skilled in each area, and then build it together. Although you’ll have to share the profits with the others, the profits are potentially much larger, so everyone will win.

This is because each specialist can master their function, become far more productive, and together achieve economies of scale (e.g. sharing legal fees, an office, fundraising).

Once it’s possible to trade, then it becomes better for people to specialise in the key functions needed by the community, and work together.

Instead, if you take a single-player perspective, then the incentive is to “keep your options open”: by staying flexible, you can take whichever opportunities are being missed by the community.

There’s certainly value in staying flexible, but it misses the potential for bigger gains. If everyone tries to stay flexible, then we end up with a community of people earning to give or with generalist skills (e.g. going into consulting). Instead, if everyone specialises, then we get a community of experts. We expect that in the long term a community of well-allocated specialists can have over ten times as much impact in their chosen areas as a community of generalists, so even if the community has to give up some flexibility, it will likely end up ahead.

As the community gets larger, the need for specialists will increase. You might not think that anthropology is a high-impact field, but in our podcast with Dr Beth Cameron, we learned that anthropologists played a major role in the fight against Ebola, because they understood the burial practices that were spreading the disease.

As another example, the community may already be a little short of historians, who could study issues like the history of philanthropy, the history of welfare, and the history of social movements. In the long-term, we’ll want almost every academic discipline involved.

80,000 Hours has not always helped with this problem. We’ve promoted the idea of gaining “flexible career capital”, which has encouraged people to go into generic prestigious options, such as consulting and software engineering, rather than become experts in the most pressing global problems.

How to balance flexibility and specialisation?

We’ve argued that taking a portfolio approach warrants a greater degree of specialisation, but how much greater?

Specialisation carries costs of lower flexibility, and since the community only coordinates imperfectly, not all of the benefits of specialisation can be captured. Overall, we’re unsure where the balance should lie.

One source of evidence is our survey of talent needs in the community. This found that both generalist and specialist skill-sets were in-demand. On the generalist side, people listed generalist research, management & operations, and broad knowledge of policy. On the specialist side, people listed machine learning, economics and biorisk.

Overall, we think it’s worth considering specialisation more seriously than a couple of years ago, but it’s not always the best way to go.

For instance, suppose that you think AI risk is overall more pressing than pandemic risk, but have a background in medicine. A single-player perspective might suggest that you try to earn to give or switch into machine learning, and we know someone who made that switch several years ago.

However, as the community grows, we’ll also need some experts in relevant areas of biology to work on biorisk. Taking a community perspective, the question becomes “who in the community is relatively best placed to take these options?”. Someone who already has a background in medicine is plausibly among the best placed. Greg Lewis studied medicine and decided to focus on biorisk in part for these reasons.

Another consideration is how easy it will be to attract specialists in the future. If, over the next few years, we’ll be able to find the specialists that are needed, then it won’t be worth existing community members spending years developing those skills now.

Our impression, however, is that in some areas, such as policy, it’s quite hard to find existing specialists who deeply share the community’s mindset. What’s more, becoming a trusted community member who’s fully up-to-speed can easily take several years, and this makes it harder than it looks to absorb new specialists. It also seems like we’re going to need a lot of specialists, at least in certain areas and skills.

A final type of consideration relates to job satisfaction. The advantage of being a generalist is that you have more flexibility to change area if you’re not finding it satisfying. In the main career guide, we presented this as a significant reason in favour of flexibility.

However, having a well-developed skill-set is also satisfying, because it leads to feelings of achievement and competence. So, overall we’re not sure which path is better for job satisfaction.

In conclusion, we’re not sure exactly how much more community members should specialise, but we think taking one or two more steps in the direction of specialisation is probably reasonable. This might mean, say, being willing to spend several more years training a specialist skill than you would have otherwise. It’s also more reason to favour directly trying to enter a high-impact path rather than “staying flexible” before you’re sure.

What to specialise in? See our most recent talent survey and list of priority paths.

Do more to explore and gain information

A special case of specialisation is focusing on paths that gain information for the community.

When you’re part of a community, it becomes more valuable to gain information, because it can be shared with everyone else. For instance, if you discover a new important problem area, then the rest of the community can enter it, multiplying the impact of your discovery.

From an extreme single-player perspective, however, new information is only valuable to the extent that it informs your own decisions.

How much should the community invest in gaining new information? This is an “explore-exploit” problem that we’ll explore in a separate article.

As a very rough rule of thumb, we think it would be worth, from an ideal portfolio perspective, investing 5-30% of the community’s total resources in gaining more information. This could either mean doing research or testing out new approaches.

If there are 1,000 dedicated community members, this would imply that 50-300 should do full time exploration, whether testing new approaches or doing global priorities research. Our guess is that the number of people doing full-time exploration is somewhat less than 50. Instead, almost everyone clusters into the top couple of areas, such as global poverty, ending factory farming, meta and global catastrophic risks, and focuses on a similar range of approaches.

However, there are a couple of reasons why we might not want to aim for so much exploration.

First, global priorities research or running a test project are difficult, and it might be that too few people have sufficient personal fit.

Second, if the community is also not fully responsive to new information, then the value of information is reduced.

Third, creating new projects makes coordination harder and poses reputation risks, and these indirect costs also need to be weighed against the value of the information.

You can see a further defence of doing more exploration here, and read more about value of information in general as a consideration.

We can also do a much better job of sharing the information we already have. One way to do this is to write up your findings on the Effective Altruism Forum. For instance, Daniel Dewey, the Open Philanthropy Project’s program officer on risks from advanced artificial intelligence, recently wrote up his thoughts on one of the main approaches to ensuring that artificial intelligence is robust and safe. A quick format is for one person to interview an expert via email.

There are more reasons to spread out over opportunities

Gaining information is one reason to spread out over a wider range of opportunities. Each type of opportunity that’s taken gives us information about what is effective, which can increase the impact of the community over time. This means that if we think as a community, then we should work on a wider range of projects than if everyone thinks individually.

In particular, the value of information should push us to work on new areas and interventions where we’re especially uncertain about their effectiveness. It doesn’t mean putting more effort into existing opportunities that are well understood, such as malaria net distribution.

Taking a portfolio approach also provides several other reasons to spread out. One is diminishing returns. For instance, the first $10m invested in global health will probably have more impact than going from $100m to $110m.10

From a single-player perspective, the most natural thought is to invest everything in whichever area you think is highest-impact right now (though taking account of personal fit).

However, in a community, we need to think about what overall allocation would be ideal.

We’ll want to allocate people to the top area until we hit diminishing returns that make it less effective than the next best area, then to the second area, third area and so on.

So the end result will be that a variety of areas end up above the “community bar”.11

As an individual, the key question becomes “which opportunities above the community bar are most neglected by other community members right now?” rather than “which single opportunity seems best in general?”.

This will mean that before the community reaches equilibrium (as is probably the case now), it’ll be most effective to work on the top areas. But over time, once equilibrium is reached, marginal returns will be similar across a variety of problem areas.
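Here’s a minimal sketch of that allocation logic with invented numbers: people are added one at a time to whichever area currently offers the highest marginal return, so the top area fills up first, and once its returns have diminished, other areas above the bar start receiving people too.

```python
def marginal_return(base_value, people_already_allocated):
    """Toy diminishing returns: each additional person adds less than the last."""
    return base_value / (1 + people_already_allocated)

# Invented base values for three hypothetical problem areas.
areas = {"area_1": 10.0, "area_2": 6.0, "area_3": 3.0}
allocation = {name: 0 for name in areas}

for _ in range(100):  # allocate 100 people greedily to the best marginal opportunity
    best = max(areas, key=lambda a: marginal_return(areas[a], allocation[a]))
    allocation[best] += 1

print(allocation)  # once returns equalise, several areas sit above the "community bar"
```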

The same applies when we consider investments over time. Some opportunities pay off quickly and others pay off a long time in the future. Each type of opportunity has diminishing returns, so we should want some community members to focus on short-term impact, while others focus on gaining career and community capital that will pay off in the long-term.

Another reason to spread out is due to the principle of compromise we argued for earlier: if you find an opportunity to benefit other community members at small cost to yourself, take it.

This might mean that even if you think working on global health is less effective than AI safety, if you find an opportunity to have a big impact in global health, it might be worth taking in order to compromise with other community members.

One community can also compromise with other communities.

In order to maximise the impact of the effective altruism community, we’ll want to coordinate with other communities. This means we should uphold nice norms when interacting with them, including the principle of compromise. This can mean working on areas that are less effective by our lights, but widely seen as effective by others.

For instance, most people alive today want to focus more on short-term issues rather than those that benefit distant future generations. This creates a reason for people who want to focus on long-term issues to look for actions that also create major benefits to present generations (so long as they give up a relatively small amount of value from a long-term perspective).

The aim of being nice to other communities is also a reason against “moral advocacy” – attempting to persuade others of your values rather than working with them. Read more.

A fourth reason to spread out is in order to improve community capital. Having concrete successes in areas that are widely regarded as useful improves the community’s reputation — global health and factory farming are perhaps the best areas here since it’s easiest to have tangible results that many would recognise.

Another factor is that if we spread out over multiple areas, then we can build expertise and connections in these areas. These resources take time to build, so starting now gives the community more options in the future.

In sum, if we try to weigh all these factors, then even if everyone perfectly agrees on what the top priorities are, a rough estimate of the ideal community allocation over problem areas might look something like the following (these numbers are not stable and could easily go up or down by 10 percentage points or more):

  • 40% in the most effective area
  • 23% in the next one or two most effective areas
  • 12% into global priorities research
  • 15% spread out over a variety of neglected areas that might become top in the future
  • 10% into areas with high probability of success

This assumes that each area has a similar capacity for more resources. In reality, if returns diminish faster in some areas compared to others, then their share will be reduced. If people also disagree about the priorities, then the spread will be wider still.

You can read more about these factors in the Open Philanthropy Project’s 2018 update on cause selection. The problem of allocating funding over areas as a large foundation is similar to the problem faced by the community as a whole.

Individuals should then enter the areas that are below their ideal allocation, starting with the most effective ones first, as well as those that match their comparative advantage, which we’ll explain next.

Consider your comparative advantage

So far, we’ve suggested that taking a portfolio approach means people should consider a wider range of options — especially those involving greater specialisation, gaining information and a wider range of problem areas and interventions. Which of these options should you take?

In our main career guide, we recommend narrowing down promising options based on “personal fit” i.e. where you’re likely to be most productive compared to others who typically enter that field.

However, we think that the portfolio approach suggests that you also consider a slightly different rule of thumb: do what’s to your comparative advantage. Roughly, because you can trade with other community members, you need to consider what you’re best at relative to others in the community. This can sometimes mean taking options with worse personal fit, to have a greater community impact.

Comparative advantage is a complex issue and we go into more depth in a separate article, but here are some key points to illustrate the difference.

Suppose you’re choosing between software engineering to earn to give and work in operations in a non-profit. You think both are high-impact on average, but you’re good at software engineering and average at operations (compared to people who take those roles in general). If you were to use personal fit as the deciding factor, then it would suggest software engineering.

However, suppose there is someone else in the community who could take the operations role instead or otherwise work in finance where they could donate more than you could in software engineering. It seems clear that the ideal allocation is for you to work in operations, taking the role with lower personal fit, but freeing up the other person to donate even more in finance.

What’s going on is that the community needs a certain number of people doing operations and a certain number funding non-profits in roughly the right ratio (technically, there’s complementarity between the two inputs, and diminishing returns). If there are lots of people in the community with high earning potential, then the “bar” for being one of those earning to give goes up. This can mean that even someone with high earning potential compared to people in general might be relatively better placed to do operations.
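Here’s the operations example as a toy calculation (all figures invented): you’re better at software than operations in absolute terms, but the community does better overall if you take the operations role and free the other person up to earn to give in finance.

```python
# Invented annual impact figures, in arbitrary units.
you   = {"software_earning_to_give": 60, "operations": 40}
other = {"finance_earning_to_give": 100, "operations": 50}

# Allocation 1: each person follows personal fit in isolation.
follow_personal_fit = you["software_earning_to_give"] + other["operations"]          # 60 + 50 = 110

# Allocation 2: you take operations, freeing the other person to earn more in finance.
follow_comparative_advantage = you["operations"] + other["finance_earning_to_give"]  # 40 + 100 = 140

print(follow_personal_fit, follow_comparative_advantage)
```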

How can you work out where you have a comparative advantage?

This is difficult, because in theory you’d need to consider all the possible permutations of everyone in the community. But it seems possible to get some clues, and by considering these, you can enable the community to get towards a slightly better allocation, even if it remains far from ideal.

We recommend you start by working out which options seem highest-impact from a single-player perspective (i.e. the roles that have a good combination of average impact and personal fit), and then adjust your estimate of what’s best based on evidence about comparative advantage.

One approach to getting evidence about where your comparative advantage lies is to use surveys to see what skills are most common in the community, and which skills are most in-demand. If a skill is already common, then all else equal, it’s harder to have a comparative advantage in it. While if there’s a great need for a certain skill, then it’s easier.

We carry out annual surveys of the greatest needs in the community, and hope in the future to carry out surveys of which skills people have.

As an example, the current community contains an unusually large number of people with high earning potential, and of people who are good at philosophy, which means it’s hard to be relatively best suited to these roles. However, operations skills seem in relatively short supply.

One complication is that what’s actually relevant is the future distribution of skills, and we can only guess at this.

It’s also possible to estimate your comparative advantage more directly in more constrained situations.

For instance, if there are several people within a team, then you can consider all the possible arrangements between roles, and try to work out which is highest-impact overall. Again, this could mean doing something with lower personal fit.

For instance, imagine a startup team where the founders are all great engineers (compared to other typical engineers) and only average at other skills. But they need one person doing sales, another doing management and only one doing engineering. In this situation, the two engineers who are least bad at sales & management relative to engineering should do it, even though they have worse personal fit with these skills.

If you’re applying for a certain job in the community, then you can try to get a sense of your comparative advantage by talking to the people in charge of hiring about your strengths and weaknesses relative to others who might take the role.

All this means that when you’re working with a community, we need to modify our career framework. From a single-player perspective, when you’re comparing options in terms of long-term impact, consider:

  1. Role impact potential (problem area and method effectiveness)
  2. Career capital potential (skills, connections, credentials)
  3. Personal fit

But when you take a portfolio approach, you need to expand this to:

  1. Role impact potential (which is now evaluated by finding the best roles that are being underinvested in by the community)
  2. Career capital potential.
  3. Personal fit (your ability compared to people who normally take the job)
  4. Relative fit (your ability compared to others in the community, which determines your comparative advantage)
  5. Community capital (as earlier)

How much weight to put on comparative advantage depends on how responsive the community is and how much information you can gain about it.

Overall, comparative advantage is complicated to assess, and we remain uncertain about exactly how to factor it in. You can see this separate article which goes much more in-depth.

You can see the portfolio approach applied to the question of what to do about AI safety in this talk by Owen Cotton-Barratt.

When deciding where to donate, consider splitting or thresholds

If you take a single-player approach to deciding where to donate, then you want to predict where others will give, then fill the most effective remaining gaps. This makes sense when the others donating won’t respond to what you do, which could be approximately true if the other donors aren’t community members. However, it breaks down within the community.

One way it can break down is that it leads to people “playing donor of last resort”, which is a situation a bit like the prisoner’s dilemma. Each individual donor can either give now or wait. If they wait, there’s a chance that someone else will fill the funding gap. While if the gap doesn’t get filled, they can always fill it later. So, waiting seems like the dominant strategy.

However, if everyone waits, then fundraising takes a long time for the charities, costing the time and attention of the leadership, making it harder to plan, and forcing the charities to hold larger reserves. It would be better to have an ecosystem where funding gaps are filled rapidly.

It also means creating an antagonistic relationship with other donors. Rather than honestly communicating which opportunities you think are best, the incentive is to try to get others to donate so that you don’t have to.

One way to avoid these negative effects is to take the “fair share” approach: if you find a promising donation opportunity but other donors are interested, then offer to cover a percentage of the gap, where the percentage is proportional to your total annual giving (up to a max of 50% of the charity’s budget so as to avoid over reliance on one donor). If everyone does this, then funding gaps get filled quickly, and the burden is shared fairly over different donors. This is the approach we recommend to our key donors.
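As a minimal sketch of the fair share rule (all numbers invented), each interested donor covers a slice of the gap proportional to their annual giving, capped at 50% of the charity’s budget:

```python
def fair_share_split(funding_gap, charity_budget, annual_giving):
    """Split a funding gap in proportion to each donor's annual giving,
    capping any single donor at 50% of the charity's budget."""
    total_giving = sum(annual_giving.values())
    cap = 0.5 * charity_budget
    return {donor: min(funding_gap * giving / total_giving, cap)
            for donor, giving in annual_giving.items()}

# Invented example: a $100k gap at a charity with a $300k budget.
print(fair_share_split(100_000, 300_000,
                       {"donor_a": 500_000, "donor_b": 300_000, "donor_c": 200_000}))
# {'donor_a': 50000.0, 'donor_b': 30000.0, 'donor_c': 20000.0}
```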

Another option is the “threshold approach”: if you find a donation opportunity that’s clearly above the community bar, then donate everything there until the gap is filled (or you reach 50% of the gap). If everyone does this, then gaps are filled quickly, and everything worth funding gets funded.

A nice feature of the threshold approach is that it means each individual donor does less research, and allows the community to fill opportunities few people yet know about. Rather than compare each opportunity to every other opportunity, each donor only needs to compare opportunities to the threshold opportunity. If it’s better than the threshold, give, and otherwise don’t. This means donors can specialise in different areas according to their comparative advantage.

In practice, you can combine both. Donors can specialise in an area, then try to find opportunities above the threshold within that area. Once they find these opportunities, they can do a fair share split with other interested donors.

For more detail on this topic, see these research notes by Owen Cotton-Barratt & Zachary Leather.

Consider community investments as a whole

When donors invest money to donate later, if they’re part of a community, then they should think about their investments as one part of the whole portfolio held by the community.

For instance, if one large donor mainly held stock in Facebook, then other donors should avoid holding Facebook in order to balance out the overall allocation. Likewise, if other donors weren’t taking enough risk relative to what would be optimal for the community, then you should take extra risk in order to shift the community towards the ideal allocation.

This can mean that from an altruistic point of view, it can make sense for your investments to be less diversified than would be ideal from a single-player perspective. For instance, normally you wouldn’t want to hold all your wealth in a single private business because you would be very exposed to a single source of risk. However, if other donors don’t also hold that business, then the overall community portfolio will still be diversified.

Wrapping up on the portfolio approach

Thinking as a community means we need to adjust the way we analyse many key questions we tackle at 80,000 Hours, such as where to donate, which problem areas to focus on, giving now vs. later, and which career is the best fit.

First, we should consider a wider range of options, including those that are more specialised and gain information, and we should spread out over a wider range of problem areas and types of opportunity. Second, individuals should focus on moving the community towards its ideal allocation and applying their comparative advantage, rather than personal fit alone.

We’ll be adding notes about this into the relevant sections of our advanced series. We will also mention coordination in the next version of the main career guide, though it is less relevant there since people aren’t already taking a community perspective.

How to attribute impact in communities?

Problems with a simple single-player approach, and the value of “freed up” resources

Being part of a community adds several complications to the question of how to assign impact.

Suppose Amy takes a high-impact opportunity, O. How much impact has Amy had?

The naive answer is that her impact is equal to the value of O.

However, this is usually not right because we also need to consider the counterfactual.

Suppose if Amy hadn’t taken O, then Bob likely would have instead. In other words, Amy’s impact was partially “replaceable”. Then, her impact is less than the value of O.

A simple single-player analysis might be that her impact is as follows:

Amy’s impact = (impact of O) – (probability Bob would have done O anyway)*(impact of O)

This counterfactual correction is often ignored in the social sector, leading people to overstate their impact.

However, if Amy and Bob are part of a community working together, then we need to make our analysis one step more complicated.

If Amy takes O, then she frees up Bob to do something else instead. If Amy and Bob are fairly aligned, then Bob’s next best opportunity is probably pretty good by Amy’s lights, so it would be a mistake for Amy to ignore it.

A simple community analysis of Amy’s impact might be as follows:

Amy’s impact = (impact of O) – (value of O that would have happened anyway) + (impact of freed up resources)

We often see people in the community consider the first counterfactual adjustment, but not add in the benefits of the freed up resources. For instance, people often say “I shouldn’t take this job, because someone else will take it instead”.

But if that’s true, it means they free up that other person. Instead, they should consider their comparative advantage compared to the other person.

In fact, we make this mistake in our own impact analyses in order to be conservative.

For instance, when we work out how many people we caused to take the Giving What We Can pledge, we exclude anyone who said they would have taken the pledge otherwise due to other groups, but we don’t add in the impact those other groups were able to have instead with the time we saved them. (We also don’t count the value of pledges we helped with but were “pushed over the line” by another group.)

It’s OK to ignore the third “freed up” resources term if Bob is not aligned with Amy and would do something low impact otherwise. Then, Amy can approximately ignore what Bob would do and we get back to the simple single-player analysis. But if Amy and Bob are aligned, then it leads to Amy underestimating her impact.

Also note the two corrections partially balance out. If Bob would definitely have done O anyway, then if Amy takes O, she definitely frees up Bob to do something else, so the third term is large. On the other hand, if Bob wouldn’t have done O anyway, then the two correction terms are zero.

This said, Amy’s impact will usually still be less than the value of O. That’s because (i) Bob’s second best opportunity is probably less effective than O due to diminishing returns and (ii) Bob is probably not completely aligned with Amy, so will do something less valuable in her eyes. However, Amy’s impact is not as much reduced as in the simple single-player analysis.
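Here’s a minimal sketch of both calculations with invented numbers: suppose O is worth 100 units, there’s a 60% chance Bob would have taken O anyway, and the work Bob does instead when freed up is worth 50 units by Amy’s lights (the freed-up value is weighted by that same probability).

```python
value_of_O = 100
p_bob_would_have_done_O = 0.6
value_of_bobs_alternative = 50  # by Amy's lights

# Simple single-player analysis: subtract what would have happened anyway.
single_player_impact = value_of_O - p_bob_would_have_done_O * value_of_O  # 40

# Simple community analysis: also add the expected value of the freed-up resources.
community_impact = (value_of_O
                    - p_bob_would_have_done_O * value_of_O
                    + p_bob_would_have_done_O * value_of_bobs_alternative)  # 70

print(single_player_impact, community_impact)  # both still below the full 100 units
```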

The problem of double counting

Another problem that has been raised with allocating impact in the community is the problem of “double counting”.

For instance, suppose GiveWell does some research, and The Life You Can Save (TLYCS) promotes it, leading to donations of $10, and suppose that the money wouldn’t have been raised without the actions of both groups.

Now, suppose each group fundraises for its own operational expenses. GiveWell says that if it hadn’t done this research, nothing would have been raised, so its impact is $10. GiveWell also says it needed $6 to fund its operations, achieving a cost-effectiveness ratio of 5:3, which is worth funding.

TLYCS makes the same claims and also raises $6.

Each organisation considered individually looks cost-effective, but collectively $12 has been spent in order to raise $10, which is a $2 loss to the community.

Clearly we want to avoid situations like this. How can we do this?

Counterfactual impact can add up to more than 100%

If the question we want to answer is “how much impact did each group have?” then there is no reason why both groups can’t be counterfactually responsible for 100% of the impact.

Nothing would have been raised if either GiveWell or TLYCS didn’t act, so the impact of both groups is to raise $10. If you want more explanation of how this is possible, see this comment by John Halstead. This means there is no conceptual problem with double counting.

However, we still face a practical problem: how can donors avoid paying more than $10 to produce $10 of impact? We need a set of rules of thumb that donors can use to avoid this mistake, and correctly align incentives in the community to produce the maximum possible long-term impact per dollar.

Solutions to the practical problem of double counting

How common is the practical problem of double counting?

One point is that we don’t think the problem is that common in practice. This is because either it’s relatively clear that one group caused a large fraction of the impact, or, in unclear cases, many organisations assume when fundraising that most of their impact would have happened anyway.

For instance, at 80,000 Hours, when we evaluate the value of the plan changes we cause, we almost always assume the plan change would have happened eventually. Typically we model our impact as a 1-3 year speed-up, justifying our effectiveness based on only a small fraction of the total.

As another example, Giving What We Can has assumed in past evaluations that about 70% of the pledged money would have been given otherwise, leaving plenty of room for other groups to fundraise based on this money before donors would collectively pay for more than 100% of the impact.

The organisations also often undercount their impact in the ways covered in the previous section — not adding in the value of freed up resources and not counting ways they assisted other groups.

In fact, it’s possible (and perhaps even common) to end up with two organisations claiming zero credit for an action, the opposite of double counting. Suppose if TLYCS didn’t do the outreach, then GiveWell would have done it instead, and vice versa. Then, if we take the simple single-player analysis, neither TLYCS nor GiveWell gets any credit. This doesn’t align incentives well, because we want at least one of the organisations to do the outreach.

In addition, sometimes it’s appropriate to “double count”. For instance, suppose that GiveWell has already done its research, and would have done that research no matter what TLYCS does. Then consider the question: how much should a donor be willing to pay to fund TLYCS to do the outreach? The answer is up to $10. This is because if the donor doesn’t give, nothing will be raised, whereas if the donor does give, $10 will be raised. The donor shouldn’t count the costs of GiveWell’s research, because those costs are already sunk.

Coupling and assigning credit

However, in other cases, what each organisation does depends on the actions of the others, and the situation can become unclear.

What can we do to avoid overpaying in these cases?

We don’t yet have a solution to this problem, but here is one useful and conservative step:

When there is a risk of donors overpaying for the collective impact of several groups, they should treat all the groups involved as a single entity.

For instance, rather than calculating the effectiveness of GiveWell and TLYCS individually, we should work out the total expected impact of both groups, and the total expected costs of both groups, and then only fund the combination if the impact is higher than the costs.

This is what GiveWell has usually done in the past when doing cost-effectiveness analysis. For instance, when estimating the cost-effectiveness of malaria nets, it would include the costs borne by every group involved, essentially treating all the groups as one agent. The downside of this approach is that it ignores genuine opportunities to “leverage” other groups who might have spent the resources on less effective opportunities otherwise. GiveWell now tries to explicitly estimate these more complicated cases.

However, if we treat the two groups as a single entity, then we’re still faced with a tricky remaining issue: how do we allocate the funding between the two groups? We’ve not yet come across an entirely satisfactory way to do this, but here are some options.

Many people have the instinct that the $10 should be split equally between the two. However, the organisations might have made contributions of different value, so weighting them equally could under-incentivise the more difficult action (assuming it’s possible to assess the value of a contribution independently of counterfactual impact).

What’s more, the 50% split creates a problem with how agents are defined. If we split TLYCS into two teams that were both counterfactually necessary, should we now split the $10 three ways, between GiveWell, TLYCS-team-1 and TLYCS-team-2? This would mean TLYCS as a whole now gets $6.66 rather than $5.

In business, the $10 would be split according to the negotiating power of the two organisations. Whoever was more willing to walk away from the deal could force the other party to give them a larger share of the total. This, however, doesn’t seem like a good system within our community, since an organisation’s negotiating power depends more on how much money it already has than on how much we want to incentivise its type of project.

Another idea is to split the $10 in proportion to the costs incurred by each organisation. But this incentivises inefficiency — the more an organisation spends, the greater the fraction of credit it gets.

A fourth proposal is to use Shapley values, a method from cooperative game theory for dividing the gains from cooperation among players, but we haven’t yet evaluated whether they have the right properties.
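
To give a sense of how this would work, here is a minimal sketch of a Shapley value calculation applied to the hypothetical GiveWell and TLYCS example above, where the $10 is raised only if both organisations act. The characteristic function is an assumption made for illustration; we’re not claiming this is how credit should actually be assigned:

```python
from itertools import permutations

players = ["GiveWell", "TLYCS"]

def v(coalition):
    # Hypothetical characteristic function: the $10 is only raised if both
    # the research and the outreach happen.
    return 10 if set(coalition) == set(players) else 0

def shapley_values(players, v):
    # Each player's Shapley value is their marginal contribution to the
    # coalition, averaged over every possible ordering of the players.
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            totals[p] += v(coalition + [p]) - v(coalition)
            coalition.append(p)
    return {p: totals[p] / len(orderings) for p in players}

print(shapley_values(players, v))  # {'GiveWell': 5.0, 'TLYCS': 5.0}
```

In this symmetric case, Shapley values simply reproduce the 50/50 split; with a characteristic function that captured asymmetric contributions, they would instead weight each organisation by its average marginal contribution.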

Right now, our best proposal is just to use judgement: split the $10 according to an intuitive assessment of how significant each organisation’s contribution was. We could think of this as something like how difficult it would have been for another group to replicate the work, though this might just be a return to counterfactual analysis.

In sum, it’s possible that major donors to organisations in the community should spend less time making estimates of the cost-effectiveness of each organisation. Instead, they should estimate the cost-effectiveness of the community as a whole, and then assign credit to each organisation based on how valuable their contribution was to the community. This avoids the problem of overpaying, but it creates a new problem of how to evaluate the value of the contribution of each group.

What happens when you’re part of several communities?

In this article, we’ve considered how the best actions change if you’re part of a community, with a particular focus on the effective altruism community. But in reality, we’re all part of multiple, overlapping communities.

For instance, someone in the effective altruism community might also be part of the AI safety community, which doesn’t perfectly overlap with it. On a smaller scale, they’ll coordinate more closely with local groups, and could even think of their close friends as a type of community. On a bigger scale, they’re part of the global economy.

You have the potential to coordinate with all of these communities to a greater or lesser degree. The amount of coordination that’s possible depends on how effective the community’s infrastructure and norms are, and how well aligned you are with its aims.

Different communities will be more or less important to you depending on what aims they strive for. Your close friends are probably pretty important because you have a significant interest in helping each other and have close communication. If you think AI safety is a top problem area, then the AI safety community will be relatively much more important than the global economy, which only very weakly aims to improve AI safety.

You can imagine concentric rings of communities that become less and less important the further away they are from you.

The points we’ve covered about how to coordinate then apply to each community, but to a diminishing degree as the communities become less important to you.

Communities can also coordinate with other communities, so the rules of thumb we’ve considered can apply again at a higher level. For instance, the effective altruism community can compromise with other groups, act honestly with outsiders, share information, and so on.

Conclusion: moving away from a naive single player analysis

The crucial realisation of effective altruism is that some actions have far more impact than others. Upon this realisation, it can be tempting to start to make a radical shift in priorities, and focus exclusively on those most effective actions. In particular:

  1. Put less emphasis on ordinary standards of kindness and honesty, in order to better further those top priorities.

  2. Don’t think so much about helping others who are trying to have an impact so you can focus more on the top priorities.

  3. Hold out and stay flexible to take the best opportunities that others aren’t taking at the margin.

  4. Be more willing to take high-stakes risks to have more impact.

  5. Avoid taking actions that others would have taken anyway.

However, we’ve seen that instead of acting individually to take the top actions, it’s likely much higher impact to do good together, because communities can trade, specialise, and achieve economies of scale.

Moreover, when you think about how to best unlock the benefits of coordination, it turns out that these shifts are misplaced, and result from making an overly narrow single-player analysis.

Instead, we’ve seen that maximising coordination within a shared aims community probably means you should:

  1. Uphold norms of kindness and honesty to a greater degree than normal, because they make it easier to coordinate and improve the community’s reputation.

  2. Make significant investments in community capital, such as by improving its reputation, infrastructure and membership, in order to help other community members have a greater impact. This can often achieve a greater impact than you could acting individually, or through building your individual career capital.

  3. Be more willing to specialise, to focus on exploration, and even to take opportunities with worse personal fit if they play to your comparative advantage.

  4. Be less willing to take risks that might damage the community’s reputation.

  5. Sometimes take actions that would have been taken otherwise in order to free up other community members to do something else.

We can sum up these shifts by adding two extra factors to our career framework. In addition to role impact, career capital and personal fit, when you’re in a community, you also need to consider:

  1. Relative fit — your ability relative to other community members, which determines your comparative advantage.

  2. Contribution to community capital — the extent to which your actions increase the influence of the community and its ability to coordinate.

These differences apply within self-interested communities, and apply even more strongly within shared aims communities, which we argued the effective altruism community is to some degree.

In many ways, these shifts represent a return to a more common sense approach to doing good — be nice, collaborative, and cautious, don’t worry so much about replaceability, pursue a wider range of areas, and focus more on the areas where you have the most interest.

However, we now have a better understanding of the reasons for these common sense positions, and of the conditions under which they hold and don’t hold. And there still remain major differences in approach compared to where we started.

Everything we’ve covered could also use far more research. Much of the formal study of coordination within game theory is relatively recent — the results of Axelrod’s tournaments, for instance, were only published in the 1980s. And there is almost no formal study of coordination within partially altruistic or shared aims communities.

Theoretical issues aside, there is also a huge amount more to learn about more practical rules of thumb, such as how donors should allocate credit in the community and which norms are most important, and practical projects, such as which forms of community infrastructure are most valuable. Coordination seems to us like one of the most intellectually interesting topics within effective altruism research right now.

We hope this article acts as a base upon which further research can be built. We’re excited to see the community deepen its understanding of coordination, and reach its full potential for impact.

Further reading

Want to get involved with this research?

See this research proposal by Max Dalton and the topics in this research agenda by the Global Priorities Institute at Oxford, and get in touch with them.

Get involved in the effective altruism community

This article explains how.

Notes and references

  1. "Takeaways from Sapiens by Yuval Harari" by Corey Breier. Archived link, retrieved 21-Sep-2018.
  2. Though if the delay becomes significant, that's a real cost due to the urgency of the problem, time sensitive opportunities that will be missed, and other reasons for discounting.
  3. Quoting from Christian, Brian and Griffiths, Tom, Algorithms to Live By: The Computer Science of Human Decisions (p. 235), Henry Holt and Co., Kindle Edition.
  4. For instance, consider two athletes considering doping with a potentially dangerous drug. If you dope and your opponent doesn’t, then you win. If you dope and your opponent does, then you tie. So, doping leads to better payoffs than not doping (win&tie vs. tie&lose). However, it would be best for both athletes if they could agree not to dope, and tie without incurring the risk of side effects.
  5. Consider a simple model where potential hires are either aligned and trusted or not. People who aren’t already trusted and aligned might have different aims from the community. This means that if you hire them, they need much more oversight, which becomes harder and harder the more complex the output. This extra oversight makes them more expensive to employ per unit of output than someone who can already be trusted to do the right thing. This means you can have a pressing need to hire more trusted people - a type of “talent constraint” - but this can’t be resolved simply by raising salaries, since the costs of hiring non-trusted people are so much higher.
  6. One exception is that if every other algorithm always defects, then it's better to defect too.
  7. Quote taken from Wikipedia. Archived link, retrieved 21-Sep-2018.
  8. We took this term from Give and Take by Adam Grant.
  9. Though the best opportunities to set up infrastructure get taken over time, so the overall value of infrastructure opportunities could either go up or down over time.
  10. There can also be increasing returns to working on an area, due to economies of scale. Our impression is that these are exhausted relatively soon, but this could provide reason not to invest tiny amounts in a very wide number of areas.
  11. If all the areas have the same capacity for more resources, then we should expect most to get invested in the top area, then less in the second, then less again in the third and so on.

    In practice, however, the areas have very different capacity for more funding; for example, returns probably diminish much faster in AI safety than in global health. This could mean we end up investing the majority in global health even if AI safety starts out much more effective (though this is not our current assessment).