Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

I was just thinking about what a unique situation university is.

Where and when else do you see such a large concentration of really caring, dedicated, driven, talented, ambitious people who are figuring out their values, and are actively trying to figure out what they want to do with the rest of their life?

Kuhan Jeyapragasan

In this episode of 80k After Hours, Rob Wiblin interviews Kuhan Jeyapragasan about effective altruism university groups.

From 2015 to 2020, Kuhan did an undergrad and then a master’s in maths and computer science at Stanford — and did a lot to organise and improve the EA group on campus.

Rob and Kuhan cover:

  • The challenges of making a group appealing to and accepting of everyone
  • The concrete things Kuhan did to grow the successful Stanford EA group
  • Whether local groups are turning off some people who should be interested in effective altruism, and what they could do differently
  • Lessons Kuhan learned from Stanford EA
  • The Stanford Existential Risks Initiative (SERI)

Who this episode is for:

  • People already involved in EA university groups
  • People interested in getting involved in EA university groups

Who this episode isn’t for:

  • People who’ve never heard of ‘effective altruism groups’
  • People who’ve never heard of ‘effective altruism’
  • People who’ve never heard of ‘university’

Get this episode by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type ’80k After Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

“Gershwin – Rhapsody in Blue, original 1924 version” by Jason Weinberger is licensed under Creative Commons

Highlights

'Bad Omens in Current Community Building'

Rob Wiblin: Since there have been local groups, there’s been disagreement between people about what kinds of events they should be running and what sort of culture they ought to be cultivating. So there are all kinds of different topics that we could tackle there. I was keen to bring up some that appeared on the EA Forum a couple of weeks ago in this blog post, “Bad omens in current community building.” It was mostly positive, saying that local groups are great in a lot of ways and it’s important to get them right. But it suggested that local groups have perhaps gone too far in some particular directions, and could be turning off some people who should be interested in effective altruism if they really understood it properly and it was getting presented in a good light.

Rob Wiblin: One concern they had is that a lot of local groups measure their success in terms of how many heavily involved or highly engaged people who identify as part of the effective altruism community they’re causing to exist, by informing other students about these ideas and encouraging them to come along to events. When you have that as your key metric, then you tend to get into this very persuasion-focused mode where you’re just like, “All right, we’ve got to get people into workshops, so we’re going to hit them with a message; we’re going to figure out what is most likely to convert people” — and you start thinking about it in terms of conversion rather than sharing information.

Rob Wiblin: Do you have any thoughts on this general issue of downsides you might get from the idea of just persuading people and “creating EAs,” as some people will call it?

Kuhan Jeyapragasan: Yeah. I think I resonated with a bunch of the ideas in the post. I think having really strong epistemics is one of the integral parts of running a really great EA group or related group. One thought is, man, it’s really hard to run an EA group while being a full-time student, perhaps being involved with other extracurriculars, maybe having jobs on the side to help pay for tuition or other expenses. I think because of that — and maybe other intuitions around how it’s just a lot more exciting when your group has lots of members — it’s easy for organisers to slip into optimising for sheer numbers.

Kuhan Jeyapragasan: These problems seem extremely important, and for many of them there are just so few people working on them — or at the very least far fewer than the ideal number — often due to simple reasons like lack of awareness of these ideas, or perhaps the ideas not being presented in the most compelling way. I think the intuition that we should get as many people as possible is very understandable.

Kuhan Jeyapragasan: But at the same time, I think it’s really important to have good epistemics and show how rigorous the thinking in the community can be, and why it’s so important to have really rigorous thinking, given how complex the problems we’re trying to solve are. If they weren’t so complex, it’s a lot more likely that they would’ve been solved already. So I definitely agree that it’s really important to do your research, think through arguments, and steelman counterarguments — and to think about what you really think, what the best responses are, how uncertain you are about various claims, and ways to reduce your uncertainty.

Kuhan Jeyapragasan: It’s also just really good to be honest with people you’re talking to about how much you know, how confident you are, what your uncertainties are. Showing that you’re a really thoughtful, caring person, and that the other community members around you and your group are as well, I think is — for the people who could really resonate with the EA ideas and take these ideas really seriously and contribute a lot to the world’s most pressing problems — just a lot more compelling than something that’s pretty clearly a sales pitch or something that isn’t trying to be rigorous or isn’t properly caveated or qualified.

Lessons from Stanford EA

Kuhan Jeyapragasan: I think a mistake I made with Stanford EA early on — and one I maybe still haven’t fully corrected, though I’m trying to figure out how to find the right balance here — is that I was really compelled by the arguments around “These problems are so, so important and we really need really dedicated, sharp, talented people working on these problems.” I think I kind of tried to make Stanford EA this “factory of impact” — where we’re just running all these really big programmes, motivating people to take the problems we’re trying to work on as seriously as they deserve, and giving them the respect they deserve.

Kuhan Jeyapragasan: As a result, I maybe leaned too hard into doing as much outreach as we could, running these really big programmes, spending as much time as possible trying to introduce these ideas to people, helping them figure out their career plans, running socials, all these things. I think at a point — especially for the most involved members, the people who cared a lot about these ideas and cared a lot about doing the most good — I was somewhat sacrificing what would’ve been most helpful for them in favour of doing more intro-level or top-of-the-funnel outreach, rather than prioritising helping our most engaged members.

Rob Wiblin: Rewarding the people who are most engaged already.

Kuhan Jeyapragasan: Yeah. And given the nature of a lot of the problems we’re trying to solve, I think probably most of the impact will come from people who are focused and thinking really critically and rigorously about how to really tackle these problems.

Kuhan Jeyapragasan: A mistake I see both in myself and in other organisers is not taking into account how heavy-tailed impact might be and acting accordingly — and not, say, backchaining from which problems seem most important to solve and how we’re actually going to solve them: thinking about what the key bottlenecks are now and what they’re likely to be, and then determining what kinds of community-building work to put the most effort into, or what outcomes would, from a group-organising perspective, actually lead to the most progress on these problems.

Stanford Existential Risks Initiative (SERI)

Kuhan Jeyapragasan: SERI got started basically at the beginning of the pandemic with a pretty open-ended mandate of promoting existential risk education and research at Stanford — it was pretty open-ended about what moving forward with that could look like. I think things are still pretty experimental. It’s hard to tell if we’ve found our niche or how things could be better — going back to the opportunity cost discussion — but it’s definitely been a great learning experience.

Rob Wiblin: Is it just you or are there other people involved?

Kuhan Jeyapragasan: I’m currently the only full-time employee for SERI. Our Professor Directors, Stephen Luby and Paul Edwards, obviously also spend a lot of time teaching the Preventing Human Extinction course, and dealing with high-level strategy for SERI and other administrative work. And then it’s mostly student organisers now. I think we’ll be hiring more full-time organisers and staff members shortly. But yeah, right now it’s me.

Rob Wiblin: Very cool. What’s the reaction been to a university course on human extinction?

Kuhan Jeyapragasan: Actually quite positive, I believe. Stanford has these first-year requirements and one of these requirements is a programme called Thinking Matters — where every first-year student has to take a Thinking Matters class — and Preventing Human Extinction is one of these Thinking Matters options. I believe for the past two years — I don’t know about this year, but at least for the past two years — it was the Thinking Matters class with the highest enrolment. So that was pretty exciting.

Rob Wiblin: Wow. So Thinking Matters is trying to get students to maybe do some practical, interdisciplinary subject?

Kuhan Jeyapragasan: Yeah, I think so. I regrettably don’t have a great understanding of what actually links all the Thinking Matters courses, but I think it’s something about integrating academic ideas into how they’re relevant to society at large.

Rob Wiblin: Makes sense. Thinking back to what I would have done as an undergrad, I think the human extinction course would be a pretty appealing option. It sounds like it’s going to be all over the place and kind of exciting.

Kuhan's journey

Kuhan Jeyapragasan: I started officially running the group around September 2019, but I think even before that I had been doing a lot of planning over the summer, which actually involved a lot of just reading about cause prioritisation, longtermism, existential risk, community-building strategy from resources that other organisers had written up, talking to successful EA group organisers in the past — so like past Stanford EA organisers, Oxford, Cambridge, et cetera.

Kuhan Jeyapragasan: And then once the school year started, lots of individual messages to people, emails, Facebook posts, lots of advertising — I wanted to make sure that anyone who could plausibly be interested in EA from a one-sentence description of it would know about the group, and know how they could learn more and how to get more involved. I remember we had a big talk from Will MacAskill that happened to be on the same day as this societies fair for service-oriented groups. So we were aggressively advertising the Will MacAskill talk that was happening later that day, and made flyers and cardboard signs and all that stuff.

Rob Wiblin: What drew you to getting involved? It sounds like you had some reservations about the local group, or at least you didn’t feel like you quite fit in. But nonetheless, you decided to become president of it and take it over. What pushed you to do that?

Kuhan Jeyapragasan: I remember even back in 2016, right after reading the 80,000 Hours career profile, thinking, “Wow, these ideas are so compelling. Why don’t more people know about this? So many people, I imagined, would be as excited as I am if they just saw these arguments.” So I think I had that intuition — around wanting to share these ideas, because they seem so compelling and correct, or at least very compelling based on my moral intuitions — from an early age.

Kuhan Jeyapragasan: And when I first encountered the arguments, I did talk to lots of friends about EA, even though I wasn’t involved with the group as much. I think the idea around the multiplier effect argument really stuck with me: we may only have approximately 80,000 hours in our career, but it probably takes way, way less than that to convince others to counterfactually switch their career plans to something much higher impact or directly addressing the world’s most pressing problems.

Kuhan Jeyapragasan: Then also I was just thinking about what a unique situation university is for getting career changes. Like where and when else do you see such a large concentration of really caring, dedicated, driven, talented, ambitious people who are figuring out their values, and are actively trying to figure out what they want to do with the rest of their life? And they’re pretty open-minded and flexible to making pretty big changes, or don’t even have plans yet, so are much more open to making tentative plans — they’re really starved for good advice to help them find some way that they can spend over half of their waking hours after graduating on something that’s both exciting to them and intellectually fulfilling, but also helps them achieve their goals and values. I think for many of them, impact and doing good is a pretty strong motivation.

About the show

80k After Hours is a podcast by the team that brings you The 80,000 Hours Podcast. Like that show, it mostly still explores the best ways to do good — and some episodes are even more laser-focused on careers than most original episodes. But we also widen our scope, including things like how to solve pressing problems while also living a happy and fulfilling life, as well as releases that are just fun, entertaining, or experimental. Get in touch with feedback or suggestions by emailing [email protected].

Subscribe here, or anywhere you get podcasts: