We encourage people to work on problems that are neglected by others and large in scale. Unfortunately those are precisely the problems where people can do the most damage if their approach isn’t carefully thought through.

If a problem is very important, then setting back the cause is very bad. If a problem is so neglected that you’re among the first focused on it, then you’ll have a disproportionate influence on the field’s reputation, how likely others are to enter it, and many early decisions that could have path-dependent effects on the field’s long-term success.

We don’t particularly enjoy writing about this admittedly demotivating topic. Ironically, we expect that cautious people – the folks who least need this advice – will be the ones most likely to take it to heart.

Nonetheless we think cataloguing these risks is important if we’re going to be serious about having an impact in important but ‘fragile’ fields like reducing extinction risk.

In this article, we’ll list six ways people can unintentionally set back their cause. You may already be aware of most of these risks, but we often see people neglect one or two of them when new to a high stakes area – including us when we were starting 80,000 Hours.

Unfortunately, we don’t have a way to eliminate these risks entirely. The reality is that balancing these risks against the potential upside of new projects requires difficult judgment calls.

Fortunately, even when people start projects whose risks exceed their benefits, they often learn and improve over time. Their early mistakes might be seen as just another cost of training – so long as the errors aren’t catastrophic and they do learn from experience.

To that end, we finish by outlining how to reduce the chances of making things worse, even in the highest-stakes areas. In brief, doing so means finding good mentors, consistently seeking advice and feedback from experienced colleagues, and ensuring you’re a good fit for a project before you take actions with potentially large or long-term consequences.

The difficulty of knowing when you’re having a negative impact

What we’re concerned about in this article is the chance of leaving the world worse than it would have been, given what would have occurred had you not acted.

Unfortunately this can happen even if the most direct effects of your work are positive. For instance, you might do something helpful, but in the process get in the way of somebody who’s even better qualified.

Imagine a (misguided) first year medical student who comes across a seriously injured pedestrian on a busy street, announces that they know first aid, and provides care on their own. They have good intentions and look as though they’re helping. But imagine that a passerby who was about to call an ambulance refrained because the student showed up and took charge. In that case, the counterfactual may actually have been better medical care at the hands of an experienced doctor, making their apparent help an illusion. The medical student should have called an ambulance instead.

Few people persist in doing things that are obviously harmful, but making things counterfactually worse like this is probably quite common and hard to detect.

Of course in this situation we would also need to think about the impact of freeing up the ambulance to attend to even more serious scenarios. Which is to say that measuring true impact can get complicated fast.

Where these risks are greatest

One of our biggest concerns about this article is the possibility that we’ll accidentally discourage people from starting high-value projects. So we want to emphasize that we don’t think the risks we’re going to discuss are equally pressing in every field.

For example, we believe that global health is a relatively safe problem to work on. It’s an enormous area that’s generally acknowledged as legitimate and has an established reputation, and success in it is often measurable. (Although it is of course still possible to fail and accidentally cause harm.)

Reducing extinction risk is generally a riskier problem to work on, and transformative AI policy and strategy might be the riskiest area we recommend. This problem’s legitimacy is disputed, its reputation is not yet established, success is hard to measure, and a key goal is coordinating groups with conflicting interests. In addition, some argue that, given these challenges, things are currently going unusually well – an equilibrium it might be best not to interfere with.

More generally, we expect that unintended consequences are concentrated in the following situations:

  • Unestablished fields that lack an existing reputation or direction.
  • Fields with bad feedback loops, where you can’t tell if you’re succeeding.
  • Fields without an expert consensus, where the nature of the problem is hard to pin down, so people judge it based on who’s involved.
  • Fields that involve conflict with another group who could use your mistakes to paint your whole field in a bad light.
  • Fields in which uncovering or spreading sensitive information can cause harm.
  • Fields where things are already going unexpectedly well.

We’ll discuss many of these situations in more detail later in this article. Here, we’ll just elaborate a bit on the last factor because we think it’s the least intuitive.

If a community working on a challenging problem has, against the odds, started to make major progress, then random shocks to that state of affairs are more likely to do harm than good. Conversely, if there’s a problem lots of people are working on, and they’re doing much worse than you’d expect them to, getting involved is likely to improve things, because there’s little going right that you could accidentally break. These are instances of ‘regression to the mean’.
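
If that’s counterintuitive, here is a minimal simulation sketch of the idea using toy numbers of our own choosing (it isn’t drawn from a specific source): a field’s observed performance is modelled as stable underlying quality plus transient luck, and a random shock simply re-rolls the luck.

```python
import random

# A toy model of regression to the mean (assumed numbers, for illustration only):
# observed performance = stable underlying quality + transient luck.
# A random shock is modelled as re-rolling the luck component.

def average_change_after_shock(selector, trials=100_000):
    """Average change in performance after a shock, among simulated fields
    picked out by `selector` (a function of current performance)."""
    changes = []
    for _ in range(trials):
        quality = random.gauss(0, 1)        # stable underlying quality
        luck = random.gauss(0, 1)           # transient good or bad luck
        performance = quality + luck
        if selector(performance):
            new_performance = quality + random.gauss(0, 1)  # shock re-rolls luck
            changes.append(new_performance - performance)
    return sum(changes) / len(changes)

if __name__ == "__main__":
    # Fields going unexpectedly well tend to get worse after a shock...
    print(average_change_after_shock(lambda p: p > 2))   # clearly negative
    # ...while fields going unexpectedly badly tend to improve.
    print(average_change_after_shock(lambda p: p < -2))  # clearly positive
```

Among simulated fields doing much better than expected, a shock lowers performance on average; among those doing much worse, it tends to help – which is the sense in which unexpected success is fragile.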

The rest of this article will focus on the specific risks you should watch out for when working in these fragile fields.

Ways to cause a negative impact – from most to least obvious

We’ll now discuss six ways people unintentionally cause harm. Just note that this post doesn’t deal with cases where someone made the best decision available but had a negative impact through sheer bad luck – there’s nothing to be done about that. You could also successfully solve an issue only to find that solving it actually made things worse, but we’re going to set aside questions of problem selection here (read more).

You take on a challenging project and make a mistake through lack of experience or poor judgment

Early on in our history we made a number of classic new-founder mistakes. We hired too quickly, prioritised marketing and outreach before achieving product-market fit, and spread our focus too widely without regard to longer-term strategy. Fortunately, most of these errors just slowed us down rather than creating permanent problems for the field of effective altruism.

This is the most obvious and visible category of negative impact: you do something that makes the problem worse in a way that someone with greater expertise would have foreseen ahead of time.

This category includes doctors who don’t wash their hands, people who deliver harmful social programs (our go-to example is Scared Straight), and academics who publish incorrect findings because they use bad statistical methods.

A common mistake among novices is a lack of strategic judgement. For instance, you might call attention to information that’s more sensitive than you realised.

This is an especially dangerous trap for people new to working on reducing extinction risk. Imagine learning that a new technology could cause a catastrophe if it’s misused as a weapon. Your first instinct might be to raise public awareness so policymakers are pressured to develop countermeasures or a nonproliferation strategy. While this may be useful in certain circumstances, increasing the profile of a threat can backfire by making it rise to the attention of bad actors. Being careful with sensitive information sounds obvious, but if you’re new to an area it’s often not obvious exactly what information is most sensitive.

We highly recommend the work of Nick Bostrom, a philosopher at Oxford University’s Future of Humanity Institute and an expert on extinction risks, for an overview of how to avoid these particular pitfalls, which are known as ‘information hazards.’

Another common oversight is failing to appreciate how damaging interpersonal conflict can be and how hard it is to avoid. Interpersonal conflicts can harm a whole field by reducing trust and solidarity, which impedes coordination and makes recruitment much more difficult. Nobody wants to join a field where everybody is fighting with each other.

An example from history: Dr. Ignaz Semmelweis realised in 1847 that cleaning doctors’ hands could save patients’ lives. His colleagues were at first willing to indulge his whim, and infection rates plummeted on his unit. But after a series of miscommunications and political conflicts within the hospital system, Semmelweis came to be regarded as a crank and was demoted. The practice of handwashing was abandoned, and thousands of patients died from infection over the next decades until later researchers proved him right. If he’d prioritized clear communication of his ideas and better relationships with his colleagues, the establishment might not have been so tragically late in realizing that his ideas were correct.

The risk of making a misjudgment is a good reason not to rush into solving a complex problem without getting the necessary training, mentoring, supervision or advice, and to embed yourself in a community of colleagues who may spot a major mistake before you make it. Semmelweis’s story also highlights the importance of being good at communicating your ideas, not just developing them.

The unilateralist’s curse

One particularly easy way to make a mistake that causes a substantial negative impact is to act unilaterally in contexts where even one person mistakenly taking a particular action could impose widespread costs on your field, or the world as a whole. Nick Bostrom has explained that if people act based only on their personal judgement in these contexts, risky actions will be taken too often. He’s named this phenomenon ‘the unilateralist’s curse.’

Let’s say there’s a field of ten people, each trying to estimate the expected value of a potentially risky initiative that would in fact have a negative impact if taken. Even if, on average, the group’s estimate is correct, some people will estimate the value too high and others too low. If everybody acts on their own judgment alone, then whether the initiative goes ahead is determined entirely by whether the most optimistic member of the group – the one who most overestimated the initiative’s value – thinks it will be positive. This is a recipe for going ahead with a lot of bad projects.
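
To make the mechanism concrete, here’s a minimal simulation sketch using toy numbers of our own choosing (they aren’t from Bostrom’s paper): each of ten people forms a noisy estimate of an initiative whose true value is slightly negative, and the initiative launches if anyone’s estimate is positive.

```python
import random

# A toy model of the unilateralist's curse (assumed numbers, for illustration only):
# the initiative's true value is slightly negative, but each person only sees a
# noisy estimate of it, and under unilateral action it launches if *any* single
# person judges it positive.

def launch_rate(trials=100_000, group_size=10, true_value=-1.0, noise_sd=2.0):
    launched = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(group_size)]
        if max(estimates) > 0:  # the most optimistic member acts alone
            launched += 1
    return launched / trials

if __name__ == "__main__":
    # With these numbers the bad initiative launches in the vast majority of
    # trials, even though the group's average estimate is correctly negative.
    print(f"Bad initiative launches in {launch_rate():.0%} of trials")
```

With these assumed numbers the bad project goes ahead in the vast majority of runs (roughly 97% here), even though the group’s average judgement is correct.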

Fortunately, the curse can be lifted if you take the judgement of the rest of your field into account and refrain from taking unilateral action when most of them would disagree. This is a strong argument for developing a good network and general knowledge about the range of views in your field before taking major actions.

We write more about the importance of compromise as a norm in our article on coordinating within a community.

Reputational harm

Everyone understands that one risk of failure is that it tarnishes your reputation. But, unfortunately, people will sometimes decide that your mistakes reflect on your field as a whole. This means that messing up can also set back other people in your field.

There are a lot of ways to do this. For instance, imagine you’re excited about a totally new way to improve the world and want to start fundraising to get it off the ground, so you go and talk to all the millionaire donors who seem like they might be interested. Unfortunately, you haven’t really thought through objections to your ideas, and so repeatedly come across as naive. Those donors decide not to fund you, and are also less likely to take meetings with anyone else who wants to do something similar in the future.

It doesn’t take much imagination to think up other possibilities:

  • Imagine that a few of those millionaires do decide to fund you. Unfortunately you don’t have any management experience, make some bad hires, and the project falls apart. This leaves a bitter taste in their mouths, and even worse, your failed project is now the first thing that comes to mind when someone brings up your approach.
  • Your project goes well, but to get off the ground you exaggerate your accomplishments and credentials. Your field develops a reputation for dishonesty, which interferes with the work of dozens of related groups.
  • You give a talk trying to convince people to worry about an issue. Unfortunately, even though you’re correct, you’re a bad public speaker – and so make them less open to working on it.

Perhaps the most subtle way you can damage a small field’s reputation is by doing very visible but mediocre or unimpressive work that makes only a small direct contribution to the problem at hand.

Unfortunately, when passing judgement on something we often call to mind whatever we associate with it, and then ask ourselves ‘how good is a typical example?’ This is why a CV that says you published five papers in top academic journals can be more impressive than a CV which says you published those papers and also wrote ten that couldn’t get published anywhere at all.

Watering down a field with mediocre work can also impede its long-run growth. Should researchers working on a neglected topic publish mediocre or unimpressive research contributions in addition to their best ones? Sharing all of their insights with the world might advance science the most in the short-run. However, fields where the best research is diluted by lots of marginal contributions may develop a lackluster reputation, deterring the most promising grad students.

When a field’s just getting started, any given paper could end up accidentally serving as another person’s introduction to the subject area. Researchers might be better off really nailing just their best ideas – the ones that would be attractive to promising students who’ve never seen work in the area before.

These reputational risks are a reason to put your best foot forward and to consider how your work affects others’ perceptions of your field, but we don’t want to overemphasize them. Of course, the ideal isn’t to publish your single best idea and then retire to avoid besmirching your reputation with anything less.

Allowing perfectionism and reputational concerns to get in the way of potential positive contributions can also be a mistake. Overall, your effect on your field’s status is just one factor to consider among many when making choices in your career.

Unfortunately, it’s difficult to know when reputation risks exceed the benefits of sharing new ideas or promoting an important message. That said, here are some heuristics:

  • If you’re working in a field that’s already large or well-established, or aren’t advocating for ideas that are controversial, you should be much less worried about this risk. Any work that you do is unlikely to have much of an effect on the field’s reputation as a whole.
  • If you’re doing academic research, getting work accepted into a well-respected journal is a pretty good signal that it won’t cause reputational harm. It’s worth developing mentors who have a good sense of which journals in your area are well-respected.
  • Outside of academia, you can ask other people in your field for their frank opinions about your work, perhaps organising your own anonymous peer review if you worry they will be too polite.
  • You should be less concerned about this risk if your work is not public facing, so it can be valuable to gain experience and solicit lots of advice ahead of projects that might gain attention outside your community. Regardless of your level of experience, it’s almost always worth getting feedback before a major project goes public.
  • If you want to share an idea without that piece serving as someone’s introduction to you or your field, you can always write it up and share it privately.
  • We list some other relevant ideas in the sections below.

Resource diversion

Almost every project eventually looks for funding and people to hire. Many also try to grab people’s attention.

Unfortunately, there’s only so much money, people and attention in the world.

If you hire someone, they won’t be working elsewhere. If you accept a donation, that money isn’t going to someone else. And if someone is reading this article, they aren’t reading a different one.

This means a project that directly does good can still be counterproductive if those resources naturally would have gone somewhere even better.

This risk is larger to the extent that i) you are an unusually good salesperson, ii) you exaggerate your impact, iii) donors and employees can’t tell which projects are best, and iv) you draw on resources that would likely have been used well in your absence.

Lock in suboptimal choices

When 80,000 Hours was new we promoted the idea of ‘earning to give’ for effective charities, especially by working in finance. We did think this was a good option, though we weren’t confident it was the best. Nonetheless we took the opportunity to get some easy publicity. This led to us, as well as the broader effective altruism community, becoming heavily associated with earning to give. To this day (despite issuing several statements saying most people should not earn to give), many people think it’s our top recommended path.

As we learned, another way you can cause harm is to set your field on a worse trajectory than it would otherwise have taken – one it then finds hard to escape. This is most likely in the earliest stages, when your strategy isn’t yet set, people don’t have a view about you, and the field is small, so single actions can meaningfully change the direction of the field as a whole. It’s not a very big concern in larger, more well-established fields.

The decisions you make when you’re just starting out and know the least can stick around or even snowball out of control. This is for a number of reasons:

  1. Most people who know of you will only ever offer a tiny amount of attention, so it’s hard to change their first impression;
  2. Once the media has written about you, people will keep finding those articles;
  3. Terms are hard to change – we would struggle to abandon the term ‘effective altruism’ today even if we decided we didn’t like it;
  4. Once you define what you believe, you will tend to attract people who agree with that view, further entrenching it;
  5. People find it very hard to fire colleagues, change management structures, or abandon their strategy, so bad choices often carry on even once they’re known to be problematic.

These effects can be hard to notice because we never get to see how alternative choices would have turned out.

It’s for these kinds of reasons that we discourage people from translating our work into other cultural contexts without thinking seriously about whether it’s delivering the optimal message.

Crowding out

If you announce that you’re going to work on a particular problem, or experiment with a particular approach, you can discourage other people from doing the same thing, because they will feel like you’ve got it handled.

For example, once 80,000 Hours said it was going to do research into how to do the most good with your career, we made it less likely that anyone else in the effective altruism community would do the same. To this day, we don’t really have any competitors.

This has become a larger concern for us as we’ve become more focussed on deepening our understanding of our priority paths and top global priorities. There’s a chance our existence might discourage people from doing research into careers focused on other problems, like global health and development.

We’re not telling you that you should only start a new project if you’re certain you’ll succeed. The other side of this problem is that it’s bad when qualified people successfully identify a gap, but don’t take the initiative because they think it’s already being handled by others, or wait for somebody better to come along.

We’re just pointing out that announcing the start of your project isn’t costless for the rest of your field, so it’s worth doing at least a bit of due diligence before moving ahead and potentially discouraging others. Get advice from people you trust to be honest about whether you’re a reasonable fit for the project you’re considering. Ask around to see if anybody else in your field has similar plans; maybe you should merge projects, collaborate, or coordinate on which project should move forward.

Lastly, try to be honest with yourself about the likelihood that you’ll actually follow through with your plans. One of the most avoidable (and costly) forms of crowding out is when people announce a project but never really get it off the ground. For example, we’ve heard several people say they don’t want to start a local effective altruist group because one already exists – only for that existing group to soon become neglected or entirely inactive.

Create other coordination problems

We have written an article on the substantial benefits that can come from large groups cooperating effectively. But it’s also true that people who fail to coordinate well with other groups can do significant damage.

Larger groups are harder to coordinate than smaller ones. Whether you’re doing research, advocacy or dealing with outsiders, joining a field obligates your peers to invest time making sure you and they are in sync. Furthermore, a lot of coordination relies on high trust, and it’s hard to maintain trust in a larger or shifting group where you don’t have established relationships. Adding people to an area has some direct positive impact but it also creates an extra cost in the form of more difficult coordination. This makes the bar for growing a cause (especially a small one) higher than it first seems.

How can you mitigate these risks?

The above may seem like a gospel of despair. There are so many ways to accidentally do harm.

Unfortunately, we can’t give you a way to avoid them all. You’ll have to use your judgement and weigh the potential upside of projects against these risks. That said, we do think there are steps you can take beyond just keeping these potential downsides in mind, trying to anticipate them, and steering clear where you can. We already mentioned some of these steps above, but here we’ll elaborate and add some more.

Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom

First, when entering a field, consider starting out by working for an established organisation with the capacity to supervise your work. Learn about the field and make sure you understand the views of established players. The time this takes will vary a lot according to the area you’re working in. We generally think it makes sense to work in an area for at least 1-3 years before doing higher stakes or independently led projects, but in particularly complicated areas — like those that require lots of technical knowledge or graduate study — this can take five years or more.

Second, if you think you’ve identified a neglected approach within a fragile field, try to understand why people with more experience haven’t taken it. It’s possible you’ve found a promising gap, but maybe your approach has been tried and failed before, or others are avoiding it because of a flaw you don’t have the context to see.

Third, it’s impossible to anticipate all of these pitfalls on your own and many require tacit or experiential knowledge to identify. Before taking on a risky project in a fragile field, you should try to get adequate training, connect with mentors, and seek out advice from people who better know the lay of the land.

If you’re struggling to find training opportunities or develop the required network, it may be better to stick to safer problems and methods instead of acting unilaterally in a delicate area.

Be cooperative and follow ‘norms of niceness’

Fourth, when interacting with other members of your field, following the “nice norms” we’ve promoted elsewhere can help avoid some of these problems. In particular, the norm of compromise — avoiding actions that might cause large negatives in the view of others in the community — is important. Likewise, following these norms helps to maintain the reputation of the community and trust within it, which mitigates two of the risks we outlined. For more on this topic, we’d also recommend this article on why communities of do-gooders should be exceptionally considerate.

Hire slowly

Fifth, employers in fragile fields should consider hiring slowly. Adding an employee who’s a bad fit, or absorbing staff faster than you’re capable of training and supervising them, isn’t just a lost chance at increased productivity. It can affect your organisation’s reputation or culture for a long time, and increase your chance of harming your field on the whole.

Vet projects before funding them

Sixth, donors should understand that it can be harmful to naively expand the resources in a fragile field without adequately vetting the specific projects being funded. If you fund people unprepared to avoid these pitfalls, the worst-case scenario isn’t that their projects have no effect – it’s that they actively hurt the field as a whole. If you don’t have the capacity to vet projects, you might consider donating to more established organisations, joining a donor lottery, or giving to an experienced grantmaker who can regrant funds on your behalf.

Match your capabilities to your project

Seventh, people should match their capabilities to the fragility of the projects and problems they take on. Get honest advice from experts about whether you’re a good personal fit for a project and whether you’re prepared to take it on. Continue to get feedback over the course of the project.

If you’re about to act on an opportunity that may be irreversible, or may only come around once, ask yourself if you’re the best person to do it. Passing on opportunities to others can be a great way to network and create value. For example, you usually shouldn’t represent your field to the media if you’re still a novice. It may be very important that particularly influential people get a good first impression of your field, so this is an instance where coordination matters. Before contacting someone where the stakes are high (e.g. someone very wealthy or famous, or an important government official), ask yourself whether somebody more experienced and with more credentials would do an even better job. It can also look unprofessional or disorganised if several people from the same community all reach out to somebody famous at the very same time.

Conversely, if you’ve already developed the competence, training, and network to mitigate these risks in a particular field, your comparative advantage is probably taking on the risky projects that others should stay away from. You also might be able to add value by providing mentorship, training, and/or supervision to promising entrants to your field.

Cultivate good judgement

For people who work on fragile problems, it’s particularly important to cultivate good judgement. We’ve sketched some thoughts on how to improve judgement, and we hope to find more evidence-backed methods in future.

Conclusion

We hope this discussion of ways to accidentally do harm hasn’t been demotivating. We think most projects that have gone forward in the EA community have had positive expected value, and when we hear about new projects we’re typically excited, not wary. Even projects that are ill-conceived to start with typically improve over time as the founders get feedback and learn from experience. So whenever you consider these risks (and our advice for mitigating them), make sure to weigh them against the potentially massive benefits of working on some of the world’s most pressing problems.