We advise people to work on problems that are important but neglected, and to try to increase the contribution they’re able to have.

These steps make it easier to have a big impact, but they also increase your potential to make things worse: the more important the problem, the worse it is to set it back; the more neglected an area, the more effect you have on its trajectory; and the more influence you have, the more it matters if you’re wrong.

This holds even if you’re not doing anything directly harmful and are trying to be cautious: it’s easy to make things worse by accident, and indeed to make them much worse.

In some areas of life, your downsides are relatively capped. If you try to write a great novel, and no one wants to publish it, the worst thing you’ve done is waste some time.

But we’ll show that when it comes to doing good — especially in ‘fragile fields’ — there are many ways to set back the broader field, and so the downsides aren’t limited in the same way. The potential for negative impact can be as big as, or greater than, the potential for positive impact.

So if you’re going to try to have an impact, and especially if you’re going to be ambitious about it, it’s very important to carefully consider how you might accidentally make things worse.

This doesn’t mean sticking to ‘sure things’ that everyone agrees with. To have a significant impact, you need to take some bets against conventional wisdom, and anything novel will attract a measure of controversy. The question is how to make those bets skillfully without doing a bunch of harm along the way.

We don’t particularly enjoy writing about this admittedly demotivating topic. Ironically, we expect that cautious people — the folks who least need this advice — will be the ones most likely to take it to heart. But we think it’s important to discuss if we’re going to be serious about working on the world’s most pressing problems.

In this article, we’ll list six ways people can unintentionally set back their cause. You may already be aware of most of these risks, but we often see people neglect one or two of them when new to a high-stakes area — including us when we were starting 80,000 Hours.

Unfortunately, we don’t have a way to eliminate these risks entirely. The reality is that balancing these risks against the potential upside of new projects requires difficult judgement calls.

To that end, we finish by outlining seven ways to reduce the chances of making things worse, even in the highest-stakes areas. The very simple version is to eliminate any options that might have very big downsides. But if you can’t do that: consider many perspectives; don’t be a naive optimizer or unilateralist; have some humility about your views; build expertise, improve your judgement, and match your expertise to the difficulty of the project; follow cooperative norms; and avoid hard-to-reverse decisions such as growing too fast.

Why knowing when you’re having a negative (counterfactual) impact is harder than it first seems

What we’re concerned about in this article is the chance of leaving the world worse than it would have been, given what would have occurred had you not acted.

One way this can happen is by taking a big, ill-conceived gamble that’s more likely to do harm than good.

But unfortunately, this can happen even if the most direct effects of your work are clearly positive. For instance, you might do something helpful, but in the process get in the way of somebody who’s even better qualified.

Imagine a (misguided) first-year medical student who comes across a seriously injured pedestrian on a busy street, announces that they know first aid, and administers it to the injured person. They have good intentions and look as though they’re helping. But imagine that a passerby who was about to call an ambulance refrained because the student showed up and took charge. In that case, the counterfactual may actually have been better medical care at the hands of an experienced doctor, making the student’s apparent help an illusion. The medical student should have called an ambulance instead.

Probably few people persist in taking actions that are obviously harmful, but making things counterfactually worse like this is hard to detect, and so probably quite common.

Of course, in this situation we’d also need to think about the impact of leaving the ambulance free to attend even more serious emergencies. Which is to say that measuring true impact can get complicated fast.

Fragile fields and where these risks are greatest

One of our biggest concerns about this article is the possibility that we’ll accidentally discourage people from starting high-value projects. So we want to emphasise that we don’t think the risks we’re going to discuss are equally pressing in every field.

For example, we believe that global health is a relatively safe problem to work on. It’s an enormous area, it’s generally acknowledged as legitimate, it has an established reputation, and success is often measurable. (Although it is of course still possible to fail and accidentally cause harm.)

Reducing extinction risk is generally a riskier problem to work on, and transformative AI policy and strategy might be the riskiest that we recommend. This problem’s legitimacy is disputed, its reputation is not yet established, success is hard to measure, and a key goal is coordinating groups with conflicting interests.

More generally, we expect that unintended consequences are concentrated in the following situations:

  • Unestablished fields that lack an existing reputation or direction
  • Fields with bad feedback loops, where you can’t tell if you’re succeeding
  • Fields without an expert consensus, where the nature of a problem is hard to measure, and people judge it based on the people involved
  • Fields that involve conflict with another group who could use your mistakes to paint your whole field in a bad light
  • Fields in which uncovering or spreading sensitive information can cause harm
  • Fields where things are already going unexpectedly well, since random shocks are more likely to make things worse rather than better

We’ll discuss many of these situations in more detail later in this article.

The rest of this article will focus on the specific risks you should watch out for when working in these fragile fields.

Ways to cause an unintended negative impact – from most to least obvious

We’ll now discuss six ways people unintentionally cause harm. Note that this post doesn’t deal with cases where someone made the best decision available but had a negative impact through sheer bad luck – there’s nothing to be done about that. You could also succeed at solving an issue only for it to turn out that solving it was actually harmful, but we’re going to bracket issues of problem selection (read more).

1. You take on a challenging project and make a mistake through lack of expertise or poor judgement

The world is extremely complicated, and most projects have significant unforeseen effects, which can easily be negative. The worse your judgement, the more likely this is.

Early on in our history we made a number of classic new-founder mistakes. We hired too quickly, prioritised marketing and outreach before achieving product-market fit, and spread our focus too widely without regard to longer-term strategy. Fortunately, most of these errors just slowed us down rather than creating permanent problems for the field of effective altruism.

This is the most obvious and visible category of negative impact: you do something that makes the problem worse in a way that someone with greater competence would have foreseen ahead of time.

This category includes doctors who don’t wash their hands, people who deliver harmful social programs (our go-to example is Scared Straight), and academics who publish incorrect findings because they use bad statistical methods.

A common form of mistake among novices is to lack strategic judgement. For instance, you might call attention to information that’s more sensitive than you realised.

This is an especially dangerous trap for people new to working on reducing extinction risk. Imagine learning that a new technology could cause a catastrophe if it’s misused as a weapon. Your first instinct might be to raise public awareness so policymakers are pressured to develop countermeasures or a nonproliferation strategy. While this may be useful in certain circumstances, increasing the profile of a threat can backfire by making it rise to the attention of bad actors (an example of an information hazard). Being careful with sensitive information sounds obvious, but if you’re new to an area it’s often not obvious exactly what information is most sensitive.

Another common oversight is failing to appreciate how damaging interpersonal conflict can be and how hard it is to avoid. Interpersonal conflicts can harm a whole field by reducing trust and solidarity, which impedes coordination and makes recruitment much more difficult. Nobody wants to join a field where everybody is fighting with each other.

An example from history: Dr. Ignaz Semmelweis realised in 1847 that cleaning doctors’ hands could save patients’ lives. His colleagues were at first willing to indulge his whim, and infection rates plummeted on his unit. But after a series of miscommunications and political conflicts within the hospital system, Semmelweis came to be regarded as a crank and was demoted. The practice of handwashing was abandoned, and thousands of patients died from infection over the next decades until later researchers proved him right. If he’d prioritised clear communication of his ideas and better relationships with his colleagues, the establishment might not have been so tragically late in realising that his ideas were correct.

The risk of misjudgment is a good reason (when possible) not to rush into solving a complex problem without getting the necessary training, mentoring, supervision, or advice, and to embed yourself in a community of colleagues who may spot a major mistake before you make it. Semmelweis’s story also highlights the importance of being good at communicating your ideas, not just developing them.

The unilateralist’s curse

One particularly easy way to make a mistake that causes a substantial negative impact is to act unilaterally in contexts where even one person mistakenly taking a particular action could impose widespread costs on your field, or the world as a whole. Nick Bostrom has explained that if people act based only on their personal judgement in these contexts, risky actions will be taken too often. He’s named this phenomenon ‘the unilateralist’s curse,’ and illustrates it with the following example:

A group of scientists working on the development of an HIV vaccine have accidentally created an airborne transmissible variant of HIV. They must decide whether to publish their discovery, knowing that it might be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons. Most members of the group think publication is too risky, but one disagrees. He mentions the discovery at a conference, and soon the details are widely known.

More generally, let’s say that there’s a field of ten people each trying to estimate the expected value of a potentially risky initiative that would turn out to have a negative impact if it’s taken. Even if, on average, the group’s estimate is correct, there will be some people whose estimate of the value is too high and others whose estimate is too low. If everybody acts based on their own judgement alone, then whether the initiative is started will be determined entirely by whether the most optimistic member of the whole group, i.e. the one who most overestimated the initiative’s value, thinks it will be positive. This is a recipe for going ahead with a lot of bad projects.
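
To make the logic concrete, here’s a minimal simulation sketch in Python. All numbers are made up for illustration: ten people each form a noisy estimate of a project whose true value is negative, and we compare how often the project launches when any single member can act alone versus when the group defers to the majority view.

```python
import random

def simulate(trials=10_000, group_size=10, true_value=-1.0, noise=2.0):
    """Illustrative only: each member's estimate is the true value plus noise.

    Under 'unilateral' decision-making, the project goes ahead if any one
    member's estimate is positive; under 'majority' decision-making, it goes
    ahead only if most members' estimates are positive.
    """
    unilateral_launches = 0
    majority_launches = 0
    for _ in range(trials):
        estimates = [true_value + random.gauss(0, noise) for _ in range(group_size)]
        if max(estimates) > 0:                              # most optimistic member acts alone
            unilateral_launches += 1
        if sum(e > 0 for e in estimates) > group_size / 2:  # group defers to the majority view
            majority_launches += 1
    print(f"Harmful project launched under unilateral action: {unilateral_launches / trials:.0%} of the time")
    print(f"Harmful project launched under majority rule:     {majority_launches / trials:.0%} of the time")

simulate()
```

With these made-up numbers, the harmful project launches the vast majority of the time when anyone can act alone, but only rarely under majority rule — even though every individual’s estimate is unbiased.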

Fortunately, the curse can be lifted if you take the judgement of the rest of your field into account and refrain from taking unilateral action when most of them would disagree.

2. Reputational harm

Everyone understands that one risk of failure is that it tarnishes your reputation. But, unfortunately, people will sometimes decide that your mistakes reflect on your field as a whole. This means that messing up can also set back other people in your community, area of research, or profession.

One striking example of this phenomenon is the collapse of FTX, whose CEO, Sam Bankman-Fried, said he was practicing effective altruism by earning to give. He was charged with fraud in December 2022. Not only is his own reputation in tatters, but the collapse has been a huge setback for the reputation of all of the areas he was associated with and claimed he wanted to help.

In general, the bigger and higher-profile the failure, the bigger the damage.

This is why it’s justifiable for larger projects to be subject to more vetting by their stakeholders, and to be more concerned about avoiding controversy.

It also shows how seeking media coverage can easily become a double-edged sword, especially if you become a household name. A project with a relatively low profile is unlikely to get much attention if it fails.

Plus a ‘normal’ failure is going to be much less interesting than a project that blows up in a controversial or entertaining way.

Unfortunately, this is one reason why it can be costly to make your life countercultural or unusual in lots of ways unrelated to your project. If you fail and it turns out you were (as a made-up example) really into My Little Pony, then whether or not that had anything to do with your project, it makes for a much more interesting story. The story will get more attention, and so cause more damage to the reputation of the cause you were trying to support.

There are also many more subtle ways to hurt the reputation of your field. These aren’t as big of a deal, but can be easier to cause, so are worth having in the back of your mind.

For instance, imagine you’re excited about a totally new way to improve the world and want to start fundraising to get it off the ground, so you go and talk to all the millionaire donors who seem like they might be interested. Unfortunately, you haven’t really thought through objections to your ideas, and so repeatedly come across as naive. Those donors decide not to fund you, and are also less likely to take meetings with anyone else who wants to do something similar in the future.

It doesn’t take much imagination to think up other possibilities.

Perhaps the most subtle way you can damage a small field’s reputation is by being willing to do very visible but mediocre or unimpressive work that only makes a small direct contribution to the problem at hand.

Unfortunately, when passing judgement on something we often call to mind whatever we associate with it, and then ask ourselves ‘how good is a typical example?’ This is how a CV that says you published five papers in top academic journals can be more impressive than a CV which says you published those papers and also wrote ten that couldn’t get published anywhere at all.

Watering down a field with mediocre work can also impede its long-run growth. Should researchers working on a neglected topic publish mediocre or unimpressive research contributions in addition to their best ones? Sharing all of their insights with the world might advance science the most in the short-run. However, fields where the best research is diluted by lots of marginal contributions may develop a lacklustre reputation, deterring the most promising grad students.

When a field’s just getting started, any given paper could end up accidentally serving as another person’s introduction to the subject area. Researchers might be better off really nailing just their best ideas — the ones that would be attractive to promising students who’ve never seen work in the area before.

These reputational risks are a reason to put your best foot forward and to consider how your work affects others’ perceptions of your field, but we don’t want to overemphasise them. Of course, the ideal isn’t to publish your single best idea and then retire to avoid besmirching your reputation with anything less.

Allowing perfectionism and reputational concerns to get in the way of potential positive contributions can also be a mistake. Overall, your effect on your field’s status is just one factor to consider among many when making choices in your career.

Unfortunately, it’s difficult to know when reputational risks exceed the benefits of sharing new ideas or promoting an important message. Some key points to consider: (i) it’s much less of a concern when working in a field that’s already large and well-established; (ii) if the work can be subject to peer review by a respected journal or otherwise gain expert approval, then it’s probably OK; (iii) if in doubt, consider sharing the work privately with a specific group rather than posting it publicly.

3. Resource diversion

Almost every project eventually looks for funding and people to hire. Many also try to grab people’s attention.

Unfortunately, the world only contains so much money, so many people, and so much attention.

If you hire someone, they won’t be working elsewhere. If you accept a donation, that money isn’t going to someone else. And if someone is reading this article, they aren’t reading a different one.

This means a project that directly does good can still be counterproductive if those resources would otherwise have gone somewhere even better.

This risk becomes larger to the extent that: (i) you are an unusually good salesperson; (ii) you exaggerate your impact; (iii) donors and employees can’t tell which projects are best; and (iv) you draw on resources that would likely have been used well in your absence (rather than bringing in ‘new’ resources that wouldn’t otherwise have been focused on doing good).

When doing an impact evaluation of your project, it’s important to roughly compare your impact to the impact that would have been possible if the resources you used had gone to another impactful project they might plausibly have gone to. For instance, our readers who work on international development often use GiveDirectly as a ‘baseline’ that could absorb a lot of funds.
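
As a toy illustration of this comparison (every figure below is hypothetical, invented purely for the example, not drawn from any real evaluation):

```python
# All figures are hypothetical, purely to illustrate the counterfactual comparison.
donation = 100_000                   # funding your project absorbs
cost_per_outcome_yours = 5_000       # your project's cost to produce one unit of benefit
cost_per_outcome_baseline = 4_000    # cost per unit for a strong 'baseline' charity the money might otherwise fund

impact_yours = donation / cost_per_outcome_yours        # 20 units of benefit delivered directly
impact_baseline = donation / cost_per_outcome_baseline  # 25 units the same money could have bought elsewhere

net_counterfactual_impact = impact_yours - impact_baseline
print(net_counterfactual_impact)  # -5.0: directly helpful, but counterfactually harmful
```

The point of the sketch is just that a project can be directly beneficial while still being counterfactually negative, if the resources it absorbs would have done even more good elsewhere.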

4. Locking in suboptimal choices

When 80,000 Hours was new, we promoted the idea of ‘earning to give’ for effective charities, especially by working in finance. We did think this was a good option, though we weren’t confident it was the best. Nonetheless we took the opportunity to get some easy publicity. This led to us, as well as the broader effective altruism community, becoming heavily associated with earning to give. To this day (despite issuing several statements saying most people should not earn to give), many people think it’s our top recommended path.

As we learned, another way you can cause harm is to set your field on a worse trajectory than it would otherwise have taken, one that’s then hard to escape. This is most likely in the earliest stages, when your strategy isn’t yet set, people don’t have a view about you, and the field is small, so single actions can meaningfully change the direction of the field as a whole. It’s not a very big concern in larger, more well-established fields.

The decisions you make when you’re just starting out and know the least can stick around or even snowball out of control. This is for a number of reasons:

  1. Most people who know of you will only ever offer a tiny amount of attention, so it’s hard to change their first impression;
  2. Once the media has written about you, people will keep finding those articles, shaping future perceptions (in particular, journalists often draw from the work of earlier journalists);
  3. Terms are hard to change — we would struggle to abandon the term ‘effective altruism’ today even if we decided we didn’t like it;
  4. Once you define what you believe, you will tend to attract people who agree with that view, further entrenching it;
  5. People find it very hard to fire colleagues, change management structures, or abandon their strategy, so bad choices often carry on even once they’re known to be problematic.

These effects can be hard to notice because we never get to see how alternative choices would have turned out.

5. Crowding out

If you announce that you’re going to work on a particular problem, or experiment with a particular approach, you can discourage other people from doing the same thing, because they will feel like you’ve got it handled.

For example, once 80,000 Hours said it was going to do research into how to do the most good with your career, we probably delayed or discouraged anyone else in the effective altruism community from starting a similar project.

This has become a larger concern for us as we’ve become more focussed on deepening our understanding of our priority paths and top global priorities. There’s a chance our existence might discourage people from doing research into careers focused on other problems, like global health and development.

We’re not telling you that you should only start a new project if you’re certain you’ll succeed. The other side of this problem is that it’s bad when qualified people successfully identify a gap but don’t take initiative because they think it’s already being handled by others — or they wait for somebody better to come along.

We’re just pointing out that announcing the start of your project isn’t costless for the rest of your field, so it’s worth doing at least a bit of due diligence before moving ahead and potentially discouraging others. Get advice from people you trust to be honest about whether you’re a reasonable fit for the project you’re considering. Ask around to see if anybody else in your field has similar plans; maybe you should merge projects, collaborate, or coordinate on which project should move forward.

Lastly, try to be honest with yourself about the likelihood that you’ll actually follow through with your plans. One of the most avoidable (and costly) forms of crowding out is when people announce a project but never really get it off the ground. For example, we’ve heard several people say they don’t want to start a local effective altruist group because one already exists, but then the existing group soon becomes neglected or entirely inactive.

6. Creating other coordination problems

We have written an article on the substantial benefits that can come from large groups cooperating effectively. But it’s also true that people who fail to coordinate well with other groups can do significant damage.

Larger groups are harder to coordinate than smaller ones. Whether you’re doing research, advocacy or dealing with outsiders, joining a field obligates your peers to invest time making sure you and they are in sync. Furthermore, a lot of coordination relies on high trust, and it’s hard to maintain trust in a larger or shifting group where you don’t have established relationships. Adding people to an area has some direct positive impact, but it also creates an extra cost in the form of more difficult coordination. This makes the bar for growing a cause (especially a small one) higher than it first seems.

How can you mitigate these risks?

The above may seem like a gospel of despair. There are so many ways to accidentally make things worse.

Unfortunately, we can’t give you a way to avoid them all. You’ll have to use your judgement and weigh the potential upside of projects against these risks. That said, we do think there are steps you can take beyond just keeping these potential downsides in mind, trying to anticipate them, and steering clear where you can. We already mentioned some of these steps above, but here we’ll elaborate and add some more.

1. Ideally, eliminate courses of action that might have a big negative impact

If you’re comparing several options, and one of them might have a big negative impact while the others won’t, then the simplest and safest course of action is to eliminate the option with big downside risks, and then choose whichever of the remaining options has the most potential.

This is also the advice we give in our article on ambition: limit your downsides, then seek upsides.

By ‘big negative’ we don’t just mean that you might fail. Rather, we mean setting back your field, or making things much worse than you found them.

Why does it normally make sense to just eliminate these options? The next couple of sections provide some justification. In brief: if you think a course of action might have both big costs and big benefits, but that the benefits outweigh the costs, that conclusion relies on getting the details of your estimate right, and your estimate is probably not right. It’s better to take actions that seem good from a wider variety of perspectives. Moreover, actions with big potential downsides are often controversial (so pursuing them despite disagreement isn’t epistemically humble and may be unilateralist), carry reputational risks for the whole field, and often contribute to a breakdown in coordination.

Of course, you might find that most of your options have the potential for large negative effects, or, as is often the case, that the courses of action with the most upside also have the biggest downsides.

In that situation, you have to reason things through more carefully — the following sections aim to help you do that.

2. Don’t be a naive optimizer

Your understanding of the situation is certain to be incomplete. You’re probably missing crucial information and considerations.

And this runs deep. Your ‘model’ of the situation probably spits out a very uncertain answer about what’s best (“known unknowns”). But there’s also the chance your model itself is wrong, and you’re thinking about the situation entirely incorrectly, possibly in ways you haven’t even considered (“unknown unknowns”). And beyond that, it’s uncertain how to even reason about these kinds of situations — there isn’t a single clearly accepted theory of how to make decisions under uncertainty.

So if you simply pick a goal that seems good to you, and aggressively pursue it, there are bound to be other important outcomes you’re ignoring.

The more aggressively you pursue your goal, the more likely you are to unintentionally screw over those other outcomes and do a bunch of harm.

This is especially the case if the other outcomes are harder to measure than your main target, which is usually the case. As the earlier sections show, there are lots of indirect and hard-to-track ways to make things worse, like influencing reputation or coordination; it’s seductive to trade these against ‘hard’ outcomes like donating more money, growing your organisation, or landing a promotion.

This is related to “Goodhart’s law”: when a measure becomes a target, it ceases to be a good measure. This is because there are probably edge cases to the measure that are easier to achieve than the real thing we care about. (It’s also related to why the AI alignment problem matters.)

For instance, British hospitals were taking too long to admit patients, so a penalty was instituted for wait times longer than four hours. In response, some hospitals asked ambulances to drive more slowly, since longer trips reduced patients’ measured in-hospital wait times.

Similarly, doing good is a highly complex and nebulous goal. So if you pick one approach to doing it, and pursue it aggressively, it becomes tempting to cut corners, or simply miss other important factors.

Some examples of being a naive optimiser:

  • Trying to earn as much money as possible, so you can donate as much as possible (but ignoring issues around reputation, character, cooperative norms; as well as other pathways to doing good such as spreading important ideas and building career capital)
  • Ignoring all considerations except for the one you think is most important (e.g. that there might be a lot of future generations)
  • Getting so convinced by the mission of your own organisation that you cut legal and ethical corners to make it succeed (a particular issue for entrepreneurs)

How can you avoid being a naive optimiser?

There is no widely accepted solution to the problem of how to reason in the face of deep model uncertainty, but here are some ideas that make sense to us:

  • Consider multiple models of the situation — many understandings of what matters, many potential outcomes, and many perspectives. This should include what conventional wisdom and other experts would say. You should actively seek out the best arguments against your approach. This gives you the best possible chances of spotting missing considerations.
  • Take courses of action that either seem good according to many models, or that are very good on one perspective and roughly neutral on the others. In contrast, avoid courses of action that seem crazy on some reasonable perspectives.
  • Be ambitious but not aggressive.

Read more: Effective altruism is about maximisation, and maximisation is dangerous.

3. Have a degree of humility

Not only are you probably wrong about what’s best, others probably also disagree with you.

In trying to have an impact, there’s a very difficult tradeoff to make between doing what seems best to you, and deferring to others.

If you defer too much, you’ll never do anything innovative or novel. Also, sometimes ‘common sense’ can lead you to do harm – for example, most people don’t seem to think factory farming is morally bad, but we think it is.

But if you simply go with your own views, there’s a good chance of being wrong, and that can easily make things worse.

Indeed, if you don’t have any special knowledge that others lack, then you could argue your guess about what’s best isn’t better than anyone else’s, and you should just do what the average person thinks.

Where to fall on this spectrum of contrarianism is one of the most important drivers of why people take different approaches to doing good.

Here are some more detailed tips on how to strike the balance:

  • Make a distinction between your “impressions” (what seems best to you) and your “all considered view” (what you believe after taking account of other people’s views). It’s important to develop your own impressions, so that you’re adding to the collective wisdom about what to do, but high-stakes action should generally be taken based on your all considered view.
  • Put more weight on your own impressions the more reasons there are to trust your views (such as expertise or a track record), and when you can clearly point to information or values that you have which others don’t. If a random uninformed person disagrees with your project, that’s usually not worth worrying about.
  • Gather lots of views. A large number of non-experts can easily be more accurate than a small number of experts.
  • Weight someone’s views more by track record than superficial indicators of expertise. In many fields (e.g. most social science), experts aren’t much better at making predictions than random chance. The people with the best judgement are often informed generalists with the right mindset. Either way, try to evaluate how trustworthy people are based on their track record of making similar judgement calls.
  • Weight the views of others by their strength, both in terms of the stakes and their degree of confidence. If even someone trustworthy believes your project is ‘meh,’ that’s not a big deal either way. If they’re convinced it’s very harmful, you should be a lot more cautious.
  • Consider the stakes. In some contexts, like private intellectual discussion or doing small test projects, it’s OK to run with your wacky views. The greater the potential harms, the more important it is to consider a wide range of views.
  • Don’t rely only on your own judgement for high-stakes decisions. For instance, if you’re the CEO of a larger project, it’s important to have a board and strong cofounders to check your most important decisions.

One additional reason for humility: a course of action you think is best is almost certainly not as good as you think it is.

This follows because you’ve ranked potential actions based on your current understanding. But your current understanding is incomplete, so your analysis contains errors. If you think something is unusually good, that could either be due to correct reasoning, or it could be because you made an unusually large error in your analysis. This is an example of regression to the mean and means the best actions are typically closer to average than they seem.
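
Here is a minimal simulation sketch of this effect, with made-up numbers: each option has a true value, your estimate of it is noisy, and when you pick whichever option looks best, its estimated value is, on average, an overestimate of its true value.

```python
import random

def simulate(trials=10_000, n_options=20, noise=1.0):
    """Illustrative only: true values are standard normal, estimates add noise.

    We repeatedly pick the option that looks best and record how much its
    estimated value overstates its true value.
    """
    total_gap = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + random.gauss(0, noise) for v in true_values]
        best = max(range(n_options), key=lambda i: estimates[i])
        total_gap += estimates[best] - true_values[best]
    print(f"On average, the best-looking option is overestimated by {total_gap / trials:.2f}")

simulate()
```

The gap is systematically positive: the option that looks best tends to be one where the noise worked in its favour, which is exactly why the best actions are typically closer to average than they seem.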

Read more about how much to defer.

Don’t be a unilateralist

Here’s a special case of when to defer.

Earlier we covered the unilateralist’s curse, a situation in which if everyone in a field acts according to their best judgement, they’ll end up taking overly risky actions.

To avoid this:

  1. Run your project past other informed people in the field.
  2. If possible, try to reach a shared view about the value of the project.
  3. If, after doing that, a significant minority thinks it’ll cause a significant negative impact, then don’t do it.

When doing this, it’s very important to distinguish between others thinking your project is merely not impactful and thinking it’s actively harmful.

If others think your project isn’t impactful (and haven’t thought about it much), that isn’t much reason to not do it. There will always be people who aren’t excited about your project.

But if lots of people think it’s likely to be harmful, and you push ahead anyway, then you’re being a unilateralist.

The importance of humility and getting multiple perspectives is one reason why it’s important to…

4. Develop expertise, get trained, build a network, and benefit from your field’s accumulated wisdom

When entering a field, consider starting out by working for an established organisation with the capacity to supervise your work. Learn about the field and make sure you understand the views of established players. The time this takes will vary a lot according to the area you’re working in. We generally think it makes sense to work in an area for at least 1-3 years before doing higher stakes or independently led projects, but in particularly complicated areas — like those that require lots of technical knowledge or graduate study — this can take longer.

This is not only to give you knowledge of the crucial issues and main perspectives in the field, but also to make sure you gain the advisors you need to take the steps we covered above.

There can be exceptions. For instance in an emergency like COVID-19, it wasn’t possible for some of those who could contribute to the response to first spend a year training. While lacking relevant expertise should still be a reason for caution, we don’t think that everyone without pre-existing expertise should have done nothing.

Additionally, if you think you’ve identified a neglected approach within a fragile field, try to understand why people with more experience haven’t taken it. It’s possible you’ve found a promising gap, but maybe your approach has been tried and failed before, or others are avoiding it because of a flaw you don’t have the context to see.

If you’re struggling to find training opportunities or develop the required network, it may be better to stick to safer problems and methods instead of acting unilaterally in a delicate area.

One especially important way to increase your expertise is to improve your judgement. In the world of doing good, we normally lack measurable outcomes, and that means we need to instead rely on judgement to estimate the costs and benefits of different paths. We have notes on how to improve your judgement in a separate article.

5. Follow cooperative norms

As your career advances and you get more influence, you may face temptations to do something seriously harmful, dishonest or widely considered unethical ‘for the greater good.’ This is almost never a good idea, and we’ve written a separate article about why.

This principle is just one example of the broader importance of cooperative norms. If you’re working on tackling an important social problem alongside other people, you’ll likely achieve much more if you’re:

  • Honest and of high integrity — so you’re able to trust each other and be trusted, both to tell the truth, and in the sense that you’ll stick to your agreements and commonly agreed upon rules
  • Helpful — willing to benefit others working on the problem even if it doesn’t immediately benefit you back
  • Able to compromise and trade — willing to do things others regard as valuable (or avoid things others regard as very bad) even if it’s not what seems optimal to you
  • Polite and respectful — so you don’t create unnecessary drama and bad feelings
  • Judicious — willing to withdraw cooperation from people who don’t follow the norms

Someone who goes around violating these norms not only impedes their own projects, but also contributes to a broader breakdown of cooperation in the area, making things worse for everyone.

Moreover, it can impede the ability of people working on the problem to work with the rest of the world. For instance, if someone lies in the name of preventing factory farming, that could create a reputation for dishonesty for the field in general, which will make it harder for everyone in the field to do most projects in the future — an example of the reputational harm we covered earlier.

Of course, violating some norms — even making enemies — does not always make an action overall a bad idea: think of some cases of civil disobedience. However, it’s worth being extremely careful in these cases. Uncooperative actions dramatically raise the chance of doing much more harm than good, and you need stronger evidence to think it’s a good idea — including probably systematically trying out other courses of action first.

Read more about why to follow these norms in our article on coordination and in “Considering Considerateness.”

6. Match your capabilities to your project and influence

Try to match your capabilities to your degree of influence and the fragility of the problems you’re working on. The higher the stakes, the more vetting, expertise, and caution you should bring.

If you’re running a few events at university, you don’t need to subject your plans to a huge amount of vetting. If you’re advocating for a major change to government policy, then you do.

Get honest advice from experts about whether you’re a good personal fit for a project and whether you’re prepared to take it on. Continue to get feedback over the course of the project.

As your project becomes more influential and successful, it becomes more important to keep seeking out advisors and colleagues who will stand up to you. Unfortunately the opposite is often the case: as you get more successful, people will be less inclined to doubt you, and more worried about criticising you. To avoid this, you may need to purposely set up structures and processes to limit the role of your judgement.

If you’ve already developed the competence, training, and network to mitigate these risks in a particular field, your comparative advantage is probably taking on the risky projects that others should stay away from. You also might be able to add value by providing mentorship, training, and/or supervision to promising entrants to your field.

7. Avoid hard-to-reverse actions

Earlier we spoke about how it’s possible to accidentally lock in suboptimal choices — so it’s important to look out for, and (all else equal) avoid, hard-to-reverse actions.

For instance, as we discussed, a media campaign that’s going to reach lots of people for the first time will create first impressions that can be hard to undo.

Another example of a hard-to-reverse action is to grow too fast, especially if you’re working in a small but fragile field.

Hiring, growing your funding, or increasing your influence normally looks like a concretely good thing from the perspective of your project in the short term — being bigger means more impact — but the picture can be more ambiguous when you consider the field as a whole and the long-term effects.

Rapid growth is hard to unwind, since it would mean firing people, so you get locked into the new, bigger state. More people are harder to coordinate, and require training and management, which is often in short supply. This leads to more of the issues we’ve covered, like unilateralism, breakdown of norms and errors of judgement. Going fast creates more reputation risks, which might have been avoided if you’d gone more slowly. (Ignoring these more diffuse harms is another example of naive optimization.)

Of course, growing more slowly also has big costs, so it’s a question of balance. However, our sense is that the benefits of growth are usually more tangible than the costs, so people are more likely to overestimate the value of growth than underestimate it.

Conclusion

We hope this discussion of ways to do bad hasn’t been too demotivating.

As we wrap up, it’s important to remember that few people even try to tackle the world’s most pressing problems. And if more people don’t try, these problems won’t get solved.

It’s also important to remember there are no perfect projects. Even projects that go on to have a lot of impact usually involve a measure of controversy or dysfunction.

When young, it’s easy to think that when you make it to the adult world, you’ll meet the competent people who run things, who ‘know what they’re doing’ and are going to solve the world’s problems for us. But for the most part, all you’ll find are relatively normal people doing their best in the face of a lot of limitations.

If you’re reading this article, you can probably help.

The aim of this article isn’t to discourage you from trying to do good at all, but rather to help you be more effective in how you go about it.

To tackle the world’s problems, we need to be ambitious, but not reckless; willing to stand up to the status quo, but not arrogant; driven and determined, but not assholes.

Striking the right balance isn’t always easy, but by thinking through both sides, we can try to do our best.

Learn more about accidental harm

  • Watch this talk about how to avoid having a negative impact with your project.

Starting your own project?

If you want to work on one of the world’s most pressing problems, our team might be able to speak with you one-on-one. We can help you think through particular career opportunities and projects. We can also help you make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team
