I’m extremely pro peer-updating in general, but from the perspective of the community as a whole — I’d much rather have a lot of people with a lot of personally formed views.

Anonymous

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

But it’s the thirteenth in this series of posts with anonymous answers — many of which are likely to be useful to everyone. You can find the complete collection here.

We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

Did you just land on our site for the first time? After this you might like to read about 80,000 Hours’ key ideas.

In April 2019 we posted some anonymous career advice from someone who wasn’t able to go on the record with their opinions. It was well received, so we thought we’d try a second round, this time interviewing a larger number of people we think have had impressive careers so far.

It seems like a lot of successful people have interesting thoughts that they’d rather not share with their names attached, on sensitive and mundane topics alike, and for a variety of reasons. For example, they might be reluctant to share personal opinions if some readers would interpret them as “officially” representing their organizations.

As a result we think it’s valuable to provide a platform for people to share their ideas without attribution.

The other main goal is to showcase a diversity of opinions on these topics. This collection includes advice that members of the 80,000 Hours team disagree with (sometimes very strongly). But we think our readers need to keep in mind that reasonable people can disagree on many of these difficult questions.

We chose these interviewees because we admire their work. Many (but not all) share our views on the importance of the long-term future, and some work on problems we think are particularly important.

This advice was given during spoken interviews, usually without preparation, and transcribed by us. We have sometimes altered the tone or specific word choice of the original answers, and then checked that with the original speaker.

As always, we don’t think you should ever put much weight on any single piece of advice. The views of 80,000 Hours, and of our interviewees, will often turn out to be mistaken.

What are the biggest flaws of the effective altruism community?

Groupthink

Groupthink seems like a problem to me. I’ve noticed that if one really respected member of the community changes their mind on something, a lot of other people quickly do too. And there is some merit to that: if you think someone is really smart and shares your values, it does make sense to update somewhat. But I see it happening a lot more than it probably should.


Something I feel is radically undersupplied at the moment is just people who are really trying to figure stuff out — which takes years. So the person I’m mainly thinking about as the kind of paragon of this is Carl Shulman, who has spent years and years really working out for himself all the most important arguments related to having a positive influence in the long run, and moral philosophy, and meta-ethics, and anthropics and, well — basically everything. And the number of people doing that is very small at the moment, because there’s not really a path for it.

If you go into academia, then you write papers. But that’s just one narrow piece of the puzzle. It’s a similar case in most research organisations.

Whereas just trying to understand basically everything, and how it all fits together — not really deferring to others, but actually trying to work everything out yourself — is so valuable. And I feel like very few people are trying to do that. Maybe Carl counts, Paul Christiano counts, Brian Tomasik counts, and I think Eric Drexler as well.

If you’re someone who’s considering research in general, I think there’s enormous value here, because there are just so few people doing it.

I think there are plenty of people who are intellectually capable of this, but it does require a certain personality. If we were in a culture where having your own worldview — even if it didn’t seem that plausible — was an activity that was really valued, and really praised, then a lot more people could be doing this.

Whereas I think the culture can be more like “well, there’s a very narrow band of super-geniuses who are allowed to do that. And if you do it, you’re going to be punished for not believing the median views of the community.”

I’m extremely pro peer-updating in general, but from the perspective of the community as a whole — I’d much rather have a lot of people with a lot of personally formed views. I feel like I learn a lot more from reading opinions on a subject from ten people who each have different, strong, honest views that they’ve figured out themselves, rather than from ten people who are trying to peer-update on each other all the time.


Everyone’s trying to work at effective altruism (EA) orgs

Too many people think that there’s some group of people who have thought things through really carefully — and then go with those views. As opposed to acknowledging that things are often chaotic and unpredictable, and that while there might be some wisdom in these views, it’s probably only a little bit.

Disagreeableness

I’m concerned that some of the social norms of EA are turning off people who would otherwise find the ideas compelling. There’s such a norm of disagreeableness in EA that it can seem like every conversation is a semi-dispute between smart people. I think it’s not clear to a lot of people who have been around EA for a long time just how unusual that norm is. For people new to EA, it can be pretty off-putting to see people fighting about small details. I don’t think this problem is obvious to everyone, but it seems concerning.

Too much focus on ‘the community’

Sometimes it isn’t that fun to be around the EA community.

I’d much prefer an emphasis on specific intellectual projects rather than a community. It sometimes feels like you’re held to this vague jurisdiction of the EA community — are you upholding the norms? Are you going to be subject to someone’s decision about whether this is appropriate for the community? It can seem like you’re assumed to have opted in to something you didn’t opt in to, something that has unclear norms and rules that maybe don’t represent your values.


I think sometimes people are too focused on what the community does, thinks, etc. What you’re doing shouldn’t depend too much on what other people are doing unless you personally agree with it. If the effective altruism community ended tomorrow, it honestly wouldn’t affect what I’m doing with my life — I do what I do because I think the arguments for it are good, not because the effective altruism community thinks it’s good.

So I think the ideas would survive the non-existence of the community. And I think we should generally focus on the ideas independently (though if you really value the community, I understand why that might be important).

A ‘holier than thou’ attitude

Something that seems kinda bad is people having a ‘holier than thou’ attitude. Thinking that they’ve worked out what’s important, and most other people haven’t.

But the important part of EA is less the answers we’ve arrived at, and more the virtues in thinking that we’ve cultivated. If you want other people to pick up on your virtues, being a jerk isn’t the best way to do it.

Failing to give more people a vision of how they can contribute

I don’t think EA ever settled the question of how big a mass movement it wants to be. We raised a lot of good points on both sides, and then just ambivalently proceeded.

If we want to be a mass movement, we’re really failing to give average people, and even some well-above average people, a vision of how they can contribute.

A lot of people get convinced of the arguments for longtermism, and then encounter the fact that there aren’t really good places to donate for far-future stuff — and donating money is the most accessible way to contribute for a lot of people.

I worry that this creates a fairly large pool of money that may actually end up being spent on net-negative projects, because it’s just floating around looking for somebody to take it. That creates conditions for frauds, or at the very least for people whose projects aren’t well thought through — and maybe the reasons they haven’t received funding through official sources yet are good ones.

But there are a lot of people who want to help, and who haven’t been given any good opportunities. If we want to be a mass movement, I think we’re really failing by being too elitist and too hostile towards regular people.

We’re also not giving people good, clear ways to donate to improving the far future. I think that even if you’re convinced by the arguments for longtermism, unless you have a really good reason to think that a particular giving opportunity is going to be underrated by the institutions that are meant to be evaluating these things — you should consider donating to animal welfare or global development charities, both of which are very important.

The arguments for why those causes are important are not undermined by the possibility of short AI timelines. If anything, saving someone’s life is a bigger deal if it means they make it to the singularity. It’s fine to say, “yep, I’m persuaded by these long-term future arguments, but I don’t actually see a way for my money to make a difference there right now, so I’m going to make donations to other areas where it’s clearer that my donation will have a positive effect.”

The community should be more willing to say this. I don’t think I’m the only person convinced by longtermism arguments who doesn’t think that a lot of people should donate to longtermist stuff, because there just aren’t that many good giving opportunities. People can be unwilling to say that, because “we don’t want your money” can sound snobby etc.


Deemphasizing growth. One way of countering lock-in in the media is to have new media stories covering additional facets of EA. I think there are a lot of problems that it would be great to have more EAs working on and donating to. EAs have expressed concern that recruiting more people would dilute the movement in terms of ability. But I think that it is okay to have different levels of ability in EA. You generally need to be near the top to be at an EA organisation or to contribute to the EA Forum. But if someone wants to donate 10% of their money to a charity recommended by EA, and not engage further, I think that’s definitely beneficial.


I’d like to see a part of EA devoted to a GiveWell-type ranking of charities working on the reduction of global catastrophic risks.

Longtermism has become a status symbol

Believing the arguments for longtermism has become something of a status thing. A lot of EAs will tend to think less of people if they either haven’t engaged with those arguments, or haven’t been convinced. I think that’s a mistake — you have to create conditions where people don’t lose respect for disagreeing, or your community will predictably be wrong about most things.

Not engaging enough with the outside world

I worry about there being an EA bubble — I’d like to see more engagement with the outside world. There are some people who aren’t ever going to be convinced by your view of the most important things, and it’s fine to not worry about them.

At the same time, there’s a risk of people getting carried away talking with others they really agree with — and then trying to transfer that to the rest of their careers. They might say things at work that are too weird, or make overly risky career decisions that leave them without backup options.

Not following best hiring practices

There are some incompetent people in prominent positions at EA organisations — because the orgs haven’t put enough time into studying how to best find successful employees.

EA orgs should study best hiring practices. If a role is important, you need to get the right person — and that shouldn’t be on the basis of a cover letter, a resume and an interview. Everybody involved in hiring should read Work Rules!, and people should be implementing those principles.

Being too unwilling to encourage high standards

I think it does make sense to have messages for highly involved EAs to make sure they don’t burn out. However, this should probably happen more in person rather than online, as these people are typically part of in-person EA communities anyway. The large majority of EAs are not giving 10% of their money, changing their careers radically, or working themselves to the bone, so I think they should be encouraged to meet high standards. I think we can keep our standards high, such that you donate 10% of your money, or do direct effective work, or volunteer 10% of your free time (roughly 4 hours a week) to EA organisations or maybe just to promoting EA individually. I think EA can still grow much faster even with these high standards.


I don’t know if we should have the norm that donations end when retirement starts. But maybe it was an appropriate compromise, to keep the commitment from being too intimidating.

Doing non-technical research that isn’t actually useful

I’m sceptical of most forms of non-technical EA-ish research being practically useful.

I think there’s a few people who do excellent macro strategy research, like Nick Bostrom — but there’s a norm in the EA community of valuing when someone comes up with a new cool consideration or an abstract model that relates to an EA topic, and I think most of that work isn’t actually valuable. It’s the sort of thing where if you’re not exceptionally talented, it’s really difficult to do valuable work.


There can be a temptation among EAs to think that just writing up considerations on interesting topics is the most useful thing that they could be doing. But I often see write-ups that are overly general, not empirically grounded enough, and that only a few people are going to read — and of the people who do read them, none are likely to update their views as a result.

People can feel like if they write something and put it up on the internet that equals impact — but that’s only true if the right people read it, and it causes them to change their minds.

Abandoning projects too quickly

Often people don’t commit enough time to a project. Projects can be abandoned after 6 months when they should have probably been given years to develop.

Most people live in the centre of big cities

I think it’s a problem that the important organisations and individuals are mostly in EA hubs. This is especially problematic because all the EA hubs are in NATO cities, which likely would not survive a full-scale nuclear war. A simple step to mitigate this problem is living in the suburbs or even beyond them, but I think EAs have a bias towards city life (rents already fall with distance from the centre to offset commuting costs, so if you actually think there is a significant chance of nuclear war, it makes sense to live outside of metros, especially if you can multitask while commuting). Even better would be locating outside NATO countries, in ones such as Australia or New Zealand (which also have lower pandemic risk).

Lack of support for entrepreneurs

I’d love to see someone create a good EA startup incubator. I don’t think anyone’s doing it well at the moment.

One of the biggest problems with EA is a lack of entrepreneurs who are ready to start a project on their own. But if we could get some of the best EAs to commit to allocating some of their time systematically to helping people with the best proposals — getting their new projects or orgs ready to go — I think that would be the most effective way to utilise the resources we currently have at our disposal.

Valuing exceptional work in a non-effective job too highly

Many EAs have said that if you are building career capital in a non-effective job, you have to be an exemplary performer in that job. But I think that takes so much effort that you are not able to develop background knowledge and expertise towards your actual effective work. One example is working hard for bonuses; in my experience, the marginal dollars per hour from bonuses are very low.

Too cautious

Maybe slightly too cautious overall. I understand the reasons for focusing on possible negative consequences, but I think generally I’m more pro “doing things”.

Too narrow

Thinking about the way you put things, and the tone they have, is very important. But it’s one of those things whose importance people can fail to acknowledge.

People who disagree with an idea find it very hard to say “I disagree with this, but I don’t quite know why”. It’s also very hard to say, “the thing is, I don’t really disagree with any of the claims you made, but I really do disagree with the way they were made, or what they seem to imply”.

I suspect when it comes to a lot of the criticisms of EA, people will try to present them as disagreements with the core ideas. And I think a lot of the people making these critiques don’t actually disagree with the core ideas, they’re really saying “it feels like you’re ignoring a bunch of things that feel important to me”.

So I would like to see EA grow, and be sensitive to those things. And maybe that means I want EA to be broader, I think I probably do. I would like there to be more people who disagree. I would like there to be more people who won’t present things in that way. It would be nice to see more moral views presented; I think these ideas are not restricted to the groups that are currently dominantly represented in EA. And so I think an epistemically virtuous version of EA probably is broader, in terms of actually gathering, and being compelling to people with a range of different views.


I think there is a bias in the existential risk community towards work at the global top 20 universities. Something like 90 percent of the work gets funded there, compared to research in general, where it might be a couple of percent at those universities. You could argue that for some problems you really need the smartest people in the world. But I think that lots of progress can be made by people who aren’t at those elite universities. And it is a lot cheaper at other universities.

Neglecting less popular funding opportunities

I think one mistake is Good Ventures not diversifying their investments (last time I checked, I think nearly all was still in Facebook).


There are still funding gaps that aren’t necessarily always recognised. There’s talk about earning-to-give being deprioritised, but that only makes sense for higher-profile EA cause areas. For areas that aren’t popular at all in the mainstream world — EA funding is essential. There are a lot of exciting projects that just don’t get done purely because of funding gaps.


I think Open Philanthropy putting $55 million into something [CSET] that is not even focused on transformative AI, let alone AGI, was not a good idea considering all the other GCR reduction opportunities there are.


There are really large funding gaps, both for existing EA-aligned organisations and for ones yet to be funded. When a group gets funded, it also doesn’t mean they were able to get full funding. It can also be challenging to learn about all the different EA organisations, as there’s no central hub. Lists are very scattered, and it can be challenging for the community to learn about them all and what their needs are.

A lack of focus on broader global catastrophic risks

I think a common mistake long-term future EAs make is assuming that existential risk means only extinction. In reality, there are many routes to far-future impact that do not involve extinction right away.


I’ve heard a number of long-term future EAs express skepticism that any GCR interventions could actually be net beneficial to the present generation. However, the book Catastrophe: Risk and Response made just this argument. Also, there are models showing that both AGI safety and preparation for agricultural catastrophes are highly cost-effective for the long-term future and for the present generation.

Being too siloed

I think EA is a little too siloed. I think it is useful to take into account the impacts of particular interventions across multiple cause areas, like GCR interventions also saving lives in the present generation.

I think it is great that EAs are proposing a lot of possible Cause Xs, but I would like to see more Guesstimate cost-effectiveness models to be able to evaluate them.
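For readers who haven’t used Guesstimate, the sketch below shows the general shape of the kind of cost-effectiveness model being described: every input is a distribution rather than a point estimate, and the output is a distribution over cost-effectiveness. This is only an illustrative Monte Carlo version of that idea — all of the parameter names and numbers are hypothetical placeholders, not figures from any real evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Hypothetical inputs, each modelled as a distribution rather than a point estimate.
# All numbers are illustrative placeholders, not real figures.
annual_cost = rng.lognormal(mean=np.log(1_000_000), sigma=0.5, size=N)  # dollars per year
prob_catastrophe = rng.beta(2, 98, size=N)                              # chance of the catastrophe this century
risk_reduction = rng.beta(2, 198, size=N)                               # fraction of that risk the intervention removes
lives_at_stake = rng.lognormal(mean=np.log(1e8), sigma=1.0, size=N)     # lives affected if the catastrophe occurs

# Expected lives saved and cost per expected life saved, per year of funding.
expected_lives_saved = prob_catastrophe * risk_reduction * lives_at_stake
cost_per_life = annual_cost / expected_lives_saved

# Summarise with a median and a 90% interval, as Guesstimate does.
lo, hi = np.percentile(cost_per_life, [5, 95])
print(f"median cost per expected life saved: ${np.median(cost_per_life):,.0f}")
print(f"90% interval: ${lo:,.0f} to ${hi:,.0f}")
```

In Guesstimate itself the same structure is just laid out as linked cells with distributions; the point of writing the model down explicitly is that the inputs and intervals make it much easier for someone else to see which assumption is doing the work.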

Not media savvy enough

EAs should try to be more media savvy. This applies to avoiding misconceptions around topics like earning-to-give, etc.

But EAs should also recognise the importance of telling a good story. For longtermism, this is particularly hard. Showing a video of a starving child tugs on the heartstrings, but how do you do that for future generations? How do you do that for AI safety? I think EAs could spend more time thinking about how to communicate this stuff so that it resonates.

Also focus on the positives. That everyone can be a hero. If you focus on guilt, people switch off.

When I tell people that we’re trying to avoid catastrophic risk, they always think I’m talking about climate change.

How can EA better communicate that climate change isn’t the only big risk?



All entries in this series

  1. What’s good career advice you wouldn’t want to have your name on?
  2. How have you seen talented people fail in their work?
  3. What’s the thing people most overrate in their career?
  4. If you were at the start of your career again, what would you do differently this time?
  5. If you’re a talented young person how risk averse should you be?
  6. Among people trying to improve the world, what are the bad habits you see most often?
  7. What mistakes do people most often make when deciding what work to do?
  8. What’s one way to be successful you don’t think people talk about enough?
  9. How honest & candid should high-profile people really be?
  10. What’s some underrated general life advice?
  11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower?
  12. What are the biggest flaws of 80,000 Hours?
  13. What are the biggest flaws of the effective altruism community?
  14. How should the effective altruism community think about diversity?
  15. Are there any myths that you feel obligated to support publicly? And five other questions.