Is it time for a new scientific revolution? Julia Galef on how to make humans smarter, why Twitter isn’t all bad, and where effective altruism is going wrong

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong.

Julia Galef – a well-known writer and researcher focused on improving human judgment, especially about high stakes questions – believes that if we could develop new techniques to resolve disagreements, predict the future and make sound decisions together, we could again dramatically improve the world. We brought her in to talk about her ideas.

Julia has hosted the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements.

We have a detailed review of whether and how to follow Julia’s career path coming out soon – subscribe to our newsletter to be notified about it.

We ended up speaking about a wide range of topics, including:

  • Her research on how people can have productive intellectual disagreements.
  • Why she once planned on becoming an urban designer.
  • Why she doubts people are more rational than 200 years ago.
  • What the effective altruism community is doing wrong.
  • What makes her a fan of Twitter (while I think it’s dystopian).
  • Whether more people should write books.
  • Whether it’s a good idea to run a podcast, and how she grew her audience.
  • Why saying you don’t believe X often won’t convince people you don’t.
  • Why she started a PhD in economics but then quit.
  • Whether she would recommend an unconventional ‘public intellectual’ career like her own.
  • Whether the incentives in the intelligence community actually support sound thinking.
  • Whether big institutions will actually pick up new tools for improving decision-making if they are developed.
  • How to start out pursuing a career in which you also try to enhance human judgement and foresight.

If you subscribe to our podcast, you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts (RSS, SoundCloud, iTunes, Stitcher).

A full transcript is below, along with a coaching application form, overview of the discussion and extra resources to learn more.

Get free, one-on-one career advice to help you improve judgement and decision-making

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:

Read more

Overview of the conversation

1m30s So what projects are you working on at the moment?
3m50s How are you working on the problem of expert disagreement?
6m00s Is this the same method as the double crux process that was developed at the Center for Applied Rationality?
10m Why did the Open Philanthropy Project decide this was a very valuable project to fund?
13m Is the double crux process actually that effective?
14m50s Is Facebook dangerous?
17m What makes for a good life? Can you be mistaken about having a good life?
19m Should more people write books?
21m10s Has the Rationally Speaking podcast been a good outlet for your ideas?
25m50s Is Twitter a good thing for the world or horrible? Why does Julia use it a lot?
29m So what have you learnt about communication from either being on Twitter or doing the podcast?
33m30s You spent some time in academia earlier in your career, right, and then you decided that it wasn’t the right fit for you?
37m10s So you’ve had an unconventional career since then – making your own future, starting your own projects. Is this a path you’d recommend?
42m40s Looking back are there any other paths that you wish you might have pursued earlier on?
45m10s Broadly speaking, the problem that you’re working on is improving human judgment and reasoning. How feasible do you think it is that some of the research that you’re involved with or aware of will actually be applied to significantly improve the way that decisions are made in major institutions?
49m50s Why is the intelligence community adopting many of these techniques? Do they have good incentives?
51m30s I mean, it seems like, taking a longer term view, people are more reasonable than they were 200 years ago, so bit by bit the quality of discourse in public has mostly been improving. Or has it?
53m40s What lights of hope are there – where are people getting more reasonable?
56m10s So there’s a lot of different ways that people could try to tackle the general problem of human rationality and irrationality. Are there any paths of study or work that you’d particularly like to highlight? Who are the best researchers and how do you get there?
1h02m30s So you’re involved in both the effective altruism and rationality communities. What kind of mistakes do you think they might be making at the moment?
1h08m30s What do you think is the biggest downside of the career path that you’ve taken?
1h10m10s Are there any things that you could imagine learning in the next few years that could really send you off in a different direction with your career? Working on different problems or tackling them in other ways?
1h11m20s Are there conferences that you go to regularly where people could, I guess, potentially meet you or network with other people if they’re interested in working on the same kind of topics?

Extra resources to learn more

Full transcript

Hi podcast listeners, I’m Rob Wiblin, Director of Research at 80,000 Hours.

Today I’m speaking with well-known writer and entrepreneur Julia Galef about her career path, research and opinions on a range of topics. We also talk about how people can pursue careers like hers in which they try to enhance human judgement and decision-making. We have a lengthy profile on career paths of this kind slated to be released on our site in the next few weeks, so look out for that.

The conversation was recorded at Effective Altruism Global San Francisco, the largest annual conference for the effective altruism community. You can hear a bit of shouting in the background but I’m sure that will only add to the ambience.

If you’d like to get coaching to help you work on similar issues to Julia – improving human judgement, predictions and decision-making – I strongly suggest applying for free one-on-one coaching by clicking the link in the show notes or on the associated blog post.

As always I recommend you get the episode on your phone rather than listening to it on your computer. You can do that by searching for 80,000 Hours on your podcasting app.

And now I bring you Julia Galef.

Robert Wiblin: Today, I’m speaking with Julia Galef. Julia is a writer and speaker focused on improving human judgment, especially about high stakes questions. Julia has been host of the Rationally Speaking podcast since 2010, co-founder of the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements. Thanks for coming on the podcast, Julia.

Julia Galef: My pleasure, Rob. Good to be here.

Robert Wiblin: So what have you been up to this year?

Julia Galef: I have a mix of projects right now. I’m doing the podcast, as you mentioned. Those come out every couple of weeks. I’m working on a book, which won’t be out for a little while. Both the podcast and the book are, in various respects, about improving human reasoning and judgment. And then the thing that you mentioned with the Open Philanthropy Project is this independent project that I conceived of, and Open Philanthropy agreed to contract me to do. It’s a part-time project.

Basically, what I’m trying to do is host … identify important questions, important in the sense that they could have a serious impact on the world. Like the answer to that question has a serious impact on how you try to impact the world. Try to identify important questions over which thoughtful, well-informed people disagree. So they have different models of the questions. So for example, is super intelligent AI something that’s on the horizon or not? If it comes, is it going to be … what’s the probability that the outcome is gonna be good or not? How should we be dealing with the housing crisis in San Francisco? Is ending mass incarceration a feasible or desirable goal? Things like that. Not questions like, I don’t know, is astrology real. Questions over which thoughtful, reasonable people can have different models.

So the project is identifying those questions, getting to know and starting dialogs with experts with different models of those questions, and then hosting conversations to try to get to the root, to the crux of why the experts disagree. So comparing their models, holding their models up against each other, noticing the areas of overlap or non-overlap, and doing this process in conjunction with a bunch of interested non-experts, especially from tech, or finance, or government or the media. People who are interested in impacting the world positively, and have a disproportionate amount of resources or influence in the world, but aren’t experts themselves. So hopefully giving them a richer and more accurate understanding of the topic by hearing the best arguments on both sides, and listening to the experts talk to each other.

Robert Wiblin: So have you managed to solve the problem of expert disagreement yet?

Julia Galef: I was going to say, “Depends on your definition of solve,” but I think by any definition of solve, the answer is no, so.

Robert Wiblin: What kinds of techniques do you have and are they bearing any fruit?

Julia Galef: Sure. The word technique is a little bit strong. But one heuristic that I’ve been using that seems to be good is that I’ve come to think that it’s important to not frame these conversations as we’re trying to change each other’s minds, or even as we’re trying to converge and reach agreement. It seems to work better to just frame the goal as, let’s really understand as precisely as we can what our models are, and where they diverge from each other and why they diverge. And so the goal is framed as understanding the landscape of the different models and not as shifting someone’s opinion.

And I think that … So my current hunch is that even if your goal was shifting someone’s opinion, that this frame of trying to understand the models actually works better than having the goal of shifting an opinion.

Robert Wiblin: Because you have less resistance to the idea of understanding it if you don’t think that understanding means changing your mind?

Julia Galef: I think that’s part of it. I think it’s also that … when you’re focused on the goal of trying to change someone’s mind, you end up missing a lot of important details. So you end up trying to make arguments at them. But those arguments aren’t actually gonna be all that useful or relevant to them because you’ve missed something important about why they believe what they believe. And so all of that important groundwork of getting clarity on what their cruxes or their beliefs actually are, that groundwork seems to happen more readily when you frame the goal as let’s understand our respective models, as opposed to let’s try to converge.

Robert Wiblin: Is this the same as the double crux process that was developed at the Center for Applied Rationality?

Julia Galef: It’s related to it. The double crux process is framed as trying to reach convergence. And what I’m doing is … it’s very influenced by that. I talk about cruxes. I talk about trying to find … And I suppose I should define, for the listeners, what a crux is. A crux is an underlying belief, or assumption or premise, that is feeding into your view about the topic at hand. And feeding in a causally important way, so that if I change my mind about this underlying thing, it would also change my mind about the higher level question. So for example, if Rob and I disagree about whether it’s okay to eat animals, a crux, for me, might be, well, I don’t actually think animals have the capacity to suffer. If I did, then I would think it’s not okay. I currently think it’s fine, but if I had a different opinion about whether animals can suffer, then I would not think it’s fine, so that’s a crux for me. You could have different cruxes about the same topic. Rob’s crux might be … [Note, in case it’s unclear, that Julia does believe animals suffer, she’s just describing a hypothetical disagreement and attempt to find a common crux between two people.]

Robert Wiblin: It’s wrong to cage animals, just in principle. To restrict their freedom, might be [crosstalk 00:05:27].

Julia Galef: Sure, yeah. And if … I’m trying to think of what could influence that. If you thought that the animals were just as happy being caged as not being caged, then you might say, “Well, okay, maybe it’s not wrong to cage them.” Or maybe you think that the lives of factory farmed animals are worse, have negative utility, but if you thought that they had positive utility, maybe you would be less confident that it was wrong to, I don’t know. Anyway.

The goal, though, is to just dig into the disagreement until you find … ideally find the double crux, the thing that …

Robert Wiblin: You both have in common, that if this thing was different, then you’d both change your mind.

Julia Galef: Exactly.

Robert Wiblin: Hopefully in the same direction!

Julia Galef: Yeah. So the double crux process that the Center for Applied Rationality has been tinkering with, and teaching people and practicing is kind of a more formal process that’s related to what I’m trying to do. What I’m trying to do is, it’s a little less formal, partly because I host these conversations as dinners and I don’t … it seems to be somewhat in tension with the goal of a convivial dinner if we’ve got easels and whiteboards out. And also just my goal … I think understanding these issues is really valuable, and it’s sort of my main goal, but I have this other secondary goal that’s just promoting a norm, especially among important or influential people in these different fields. Promoting a norm of being curious about important questions.

And seriously engaging with those questions, seriously engaging with different models of that question. And genuinely trying to understand, to reach the best, most accurate understanding that you can of that question. Which I think is not a common norm at the moment. I think most people … it’s not that they’re not thoughtful or smart, it’s just … it’s not our default way of engaging with ideas, to try to seek out differing models and try to understand why the experts don’t agree with each other.

So I have this broader goal of creating this intellectual community and culture among at least a subset of people in these different fields, tech, and finance, and academia, and the media and so on, that at least asks these kinds of questions and approaches disagreements with this spirit. And so that’s somewhat broader and fuzzier than reaching our double crux. But I think would nevertheless be really valuable to achieve.

Robert Wiblin: So the Open Philanthropy Project only funds research if they think that it’s gonna be pretty damn valuable for the world. What kind of outcome are they hoping to see?

Julia Galef: Oh, no, to be clear, I’m a contractor. They didn’t give me a grant or anything.

Robert Wiblin: Okay. But nonetheless.

Julia Galef: Yeah. The vetting process is much less stringent and rigorous for contractors than for grants.

Robert Wiblin: So you slipped through the cracks, I suppose.

Julia Galef: Basically, yeah. I got in under the radar.

Robert Wiblin: Maybe I should become a contractor, but …

Julia Galef: It’s pretty good, I’d recommend it.

Robert Wiblin: Cool. But yeah, what kinds of things are they hoping will come out of it?

Julia Galef: I mean, I think Holden’s main goal is just to get influential people to be aware of and seriously engage with questions … with topics that are if not EA, then EA adjacent. And by EA adjacent I mean … well, basically just what I said about important questions that have significant bearing on what we should do to positively impact the world to reduce risks, or to create a lot of value. And I guess EA ideas, more broadly, not just about object level causes like AI or animal welfare, things like that. But EA memes that have to do with how you think about things, like the very idea of asking yourself, “What would change my mind about this?” Or the idea of asking about evidence or tagging things with different epistemic status. Or looking for cruxes. It’s not unique to EA, but it’s pretty distinctive about the EA and rationality communities, and I think that way of thinking about things is something that Open Philanthropy would love to be more common in Silicon Valley or in the world in general.

Robert Wiblin: So is it perhaps more of an outreach project than a research project? Or is it a bit of both?

Julia Galef: I guess I wouldn’t call it a research project. I would call it … I don’t know. I guess you could call it outreach. You could call it community building, since I am trying to create this intellectual community. It has an element of research to it in that I’m trying to learn methods of-

Robert Wiblin: Building of intellectual communities.

Julia Galef: Yes, and also methods of finding these cruxes more effectively, like the thing I mentioned about how to frame the conversation. There’s a bunch of other things I could mention along those lines, like kinds of thought experiments or questions that turn out to be really useful in these conversations and make the conversation more productive. So there’s an element of research to that, but it’s far from an RCT.

Robert Wiblin: So on the double crux process, I haven’t actually tried it. But I’ll tell you why I’m a bit suspicious of it, just having heard it described. So I feel like in life, you go through and you have tons of experiences, and you’re just constantly learning, you build up different reference classes of different kinds of categories of things. And as a result, you end up with different gut judgments. Different predictions about how people are gonna behave or what’s gonna happen if this is changed or that is changed.

And very often, it doesn’t just come down to some single disagreement that you have about a particular … If X then A, if Y then B. I imagine that very often you’ll just find people just have different worldviews in a thousand tiny ways, that through their life they’ve built up a different whole perspective on the world, with a thousand little brushstrokes. And each brushstroke … no single brushstroke makes the painting. Is that a problem that you encounter when you’re trying to find these cruxes?

Julia Galef: Yeah, I mean, I will say it doesn’t usually happen that there is a single double crux, such that if I change my mind about that, I would totally do a 180 on the important higher level question, and same thing for you. That’s pretty rare. CFAR has usually presented the technique in that way just, in the same way that when you’re teaching an economic concept you give a simplified example, basically.

Robert Wiblin: You just do [crosstalk 00:11:23], yeah.

Julia Galef: Yeah. And I think it’s a useful framework to have in your mind as you’re talking with someone about the topic. What I find tends to happen … I’m just thinking about the last such discussion that I hosted. It was about whether … how big of a problem is it that tech companies can lay claim to our attention to the extent that they can, and keep us hooked on our devices or hooked on a platform like Facebook, et cetera. We had one person there arguing that this was a serious threat to human happiness and wellbeing, and sort of a threat to the fabric of society, basically, and that it should be a cause taken more seriously be EAs by their own standards, and so we were debating that. I like to call these conversations sometimes un-debates because it’s similar to a debate in that we’re discussing a disagreement that we have, but hopefully dissimilar to a debate in that we are collaboratively trying to understand our respective models instead of trying to argue or win.

Anyway, so to your question about what is this process of looking for cruxes actually … what does it look like? So we ended up identifying three or four major cruxes such that if we believed something differently about that crux, it would at least make us less confident in our view. And some of them were empirical things and some of them were values. So an empirical thing was being less … I, for example, was less confident in the data showing a connection between use of these various apps and depression or anxiety. And I think, if that evidence was really solid, I would be taking this much more seriously. And so I can just do that thought experiment. Like, what if I found out this was really, really well done research? In that world, I would be pretty concerned about this. Like, wow, this seems like a major detriment to human wellbeing and a lot of humans are affected and will continue to be affected more.

And then there were these more … I don’t even … more philosophical cruxes, like it turned out we disagreed about what criteria to use to determine whether humans are being hurt or not.

Robert Wiblin: A theory of value.

Julia Galef: Yeah, basically, or theory of agency or something. So my take was basically, look, if you ask people, like, look. Here’s how much time you spend on this app. Here’s the various evidence about how it impacts you. Do you want to continue spending the same amount of time, or would you rather have commitment devices where you could tie yourself to the mast and limit your Facebook use, or at least make it a little more difficult for you to use Facebook or something. If you gave people that information and they said, “Nope, I’m fine doing what I’m doing. I don’t wanna limit my access.” Then I would call that … people are not being hurt by this. Or at least I would not be willing to claim people are being hurt.

Robert Wiblin: What if they said, in the moment, that they were suffering while it was happening? I guess you’d just think that’s very unlikely.

Julia Galef: I think it’s unlikely that they would say they’re suffering in the moment and still endorse it, but if they did endorse it, then I would say … I guess maybe I’m more of a preference utilitarian or something than a hedonic utilitarian. Whereas this other guy at the dinner who had been making the strong case for why this problem was really important felt that, look, people are too … we’ve all been too corrupted by this thing to … we have false consciousness, basically.

Robert Wiblin: I was about to say false consciousness.

Julia Galef: Exactly.

Robert Wiblin: Yeah, that’s like a Marxist analysis of Facebook.

Julia Galef: That’s the phrase that came up at dinner. Yeah, and he was like, “We can’t really imagine what it would look like to have lives and societies that weren’t so dominated by technology, and so we have nothing really to compare this to.”

Robert Wiblin: Maybe the 13 year olds can’t, but I can remember not having Facebook. It wasn’t that long ago. How old am I?

Julia Galef: Fair, yeah. Anyway, so that’s-

Robert Wiblin: But that was the theory. I suppose at some point people won’t be able to remember the pre-social media era.

Julia Galef: Right. Right. Yeah, so that was a more philosophical crux.

Robert Wiblin: Yeah. So you said earlier that you’re writing a book. Is this something that you think more people should do to get their ideas out there? Should I be writing a book? I wouldn’t mind having written a book, but I’m less sure about the process of actually writing one.

Julia Galef: I feel you there. As someone in the throes of it, I definitely feel you. You know, the thing that I think books do really well is provide a nice container for a thesis or ideas, such that it’s easy to spread and talk about. And they do this better than blog posts, for the most part. I’ve heard people sometimes say, “Most books should be blog posts,” or “Most books should be articles,” or something like that, and I sympathize with that view.

Robert Wiblin: Most podcasts should be movies, maybe, also.

Julia Galef: I sympathize with that view. Although, even if there is a lot of padding in books, I think padding and redundancy can actually be good for making content stick and impact people, so I’m less annoyed by padding in books than some people are. But even if you could have expressed the same point in a blog post, having a book, for whatever reason, with a certain title that’s been added to the list of books about this topic, and it’s been written up in some articles, et cetera. It makes it just part of a public conversation in a way that’s really hard to do with … you know, even if you write a ton of blog posts on a topic. What would be a good example? I don’t know, Guns, Germs & Steel, or …

Robert Wiblin: Really stakes out some territory.

Julia Galef: Yeah, it’s a very long book with lots of detail, but it’s got kind of a thesis, and it’s … as you say Guns, Germs & Steel, and even if someone hasn’t read it, if they’ve heard about it, they sort of know the concept, and it provides this nice little handle for a point of view or thesis that makes it easier to talk about and makes people wanna talk about it. That’s something I think books can do really well.

Robert Wiblin: Okay. All now I need, I guess, is an idea for actually what to write about, and I could go ahead and …

Julia Galef: Also the hard part.

Robert Wiblin: Yeah. So you do quite a lot of different outreach activities with your ideas. So you’ve got the Rationally Speaking podcast that you’ve been doing, I guess, for seven years now.

Julia Galef: Over seven years now, yeah.

Robert Wiblin: How do podcasts compare to books, compare to Snapchat? I guess [crosstalk 00:17:33] as well, [inaudible 00:17:34].

Julia Galef: Yeah, I hear the kids today are … snapping their chats, or …

Robert Wiblin: Snapchatting their ideas, and …

Julia Galef: Yeah. I don’t understand what that’s all about.

Robert Wiblin: Sharing big ideas on …

Julia Galef: But, so I hear. So how do podcasts compare to other media?

Robert Wiblin: Yeah. Has it been a good vehicle for sharing your ideas?

Julia Galef: It really has. I didn’t have any particular grandiose plan when I started it, but I’ve been really happy with how it’s gone so far. I have been … so obviously I try to pick guests who I think have interesting things to say and are doing interesting work that deserves more attention and so on, but underlying it all, the purpose of the podcast, or the driving force behind the choices that I’m making for the podcast, is really promoting this approach to epistemology that I support and I wish were more common. And so the kinds of questions that I’m always trying to ask, and the stuff that I’m most interested to talk to the guests about is stuff like, what counts as good evidence and how confidently can we know things? And what are the standards of this field, and how good can they be? How much knowledge could we possibly have with confidence about questions like the one you’re studying? Things like that.

When I can, I really like to get to this point in the conversations with the guest where they’re thinking in real time, basically. About the implications of their research, or the epistemic status of their claims, things like that. Because I wish that people thought in real time more often, as opposed to just regurgitating cached things that they’ve said again and again in different contexts, or that they have heard and think you’re supposed to say or supposed to think. Not to sound too arrogant or anything. To criticize all modern communication or anything like that.

Robert Wiblin: Yeah. I think we all do that from time to time. The question is just the balance.

Julia Galef: It’s not terrible. I just think that, on the margin, it would be good for our collective epistemic health if people spent more time talking about and thinking about things where they don’t have a cached answer, and are trying to think on the spot. So those are the kinds of practices and questions that I am trying to use my podcast to promote.

Robert Wiblin: How many people did you end up reaching?

Julia Galef: Oh, what’s the reach of my podcast?

Robert Wiblin: Yeah.

Julia Galef: I think now a typical episode gets about 35,000 listeners.

Robert Wiblin: Wow.

Julia Galef: Occasionally one of my episodes will get a lot more than that, like around 100,000, but that’s relatively rare. 35,000’s more like the modal number. Yeah.

Robert Wiblin: So it’s quite a lot, given the amount of effort that goes into a podcast isn’t that large. You can do it in a day.

Julia Galef: Yeah, although-

Robert Wiblin: Then it’s like speaking to an enormous auditorium, yeah.

Julia Galef: Yeah, it’s-

Robert Wiblin: Well, I guess like a sport stadium, really.

Julia Galef: Yeah, I guess, wow. It’s fun to visualize concretely my audience. Yeah, they tend to be smart and thoughtful people. Based on the comments I get or the emails I get, the people I talk to on Twitter. And also, so I’ve been running ads for GiveWell in the last few months, and GiveWell has told me that they’ve also tried running ads on other podcasts or other platforms, and a disproportionate amount of the donations they’ve gotten have come through my podcast, which makes me feel very proud of my audience.

Robert Wiblin: I suppose, yeah, you’ve taught your audience well.

Julia Galef: Exactly. Or chosen them well. Hard to distinguish.

Robert Wiblin: Yeah.

Julia Galef: Let’s be skeptical.

Robert Wiblin: Yeah. I mean, this podcast is also new. But I’ve generally found that the longer the content, the better the comments become, or at least the [crosstalk 00:20:54].

Julia Galef: Oh, because it leaves out the people who are …

Robert Wiblin: Right. I think a lot of people … The worst responses tend to come to people who’ve read the headline, or maybe only the first few words of the headline, sometimes. But if they have to actually go through and listen to a whole hour of conversation in order to get to something semi-outrageous that someone said, it’s a lot harder for them to respond just to that.

Julia Galef: Increase the cost of trolling or outrage.

Robert Wiblin: Exactly. Yeah, speaking of the horrors of social media and people being annoying online, you’re also quite a big star on Twitter.

Julia Galef: Oh, I don’t know if I’d go that far, but I’ve definitely been ramping up my use of it. And I like Twitter.

Robert Wiblin: Wow, okay.

Julia Galef: I’m a fan of it.

Robert Wiblin: It’s dystopian.

Julia Galef: It’s dystopian. I just don’t … I mean, look, I’ve heard the complaints about it. And I don’t really doubt them. I don’t doubt that people have had really bad experiences on Twitter. But yeah, my experience has just been great. I just find it … the comments that I get are, you know, they’re not all great comments, but they’re mostly sincere and engaged, and sometimes they’re really thoughtful and interesting. There’s so many interesting people on Twitter. There are all these social scientists who have conversations in real time with each other in public that we can all listen to and comment on about just the latest papers or debates in social science, like this move to lower the P value threshold to .005, I think it was. Just really cool to be able to see these conversations between experts about a thing that they actually disagree about in real time. And I … Sorry.

Robert Wiblin: Well, it’s true, I-

Julia Galef: I could sing the praises of Twitter much longer if you let me, so.

Robert Wiblin: I guess I can’t hate it that much because I read it every day, so yeah, I’m just stuck in this trap, I suppose, yeah.

Julia Galef: I guess this is evidence against my belief that if someone is consistently unhappy doing something, then they will choose to limit their access to it. You’re evidence against that.

Robert Wiblin: I suppose it depends what bubble you’re in. I mean, people are usually fairly friendly to me, but then when I read other people’s threads, you’ve got someone smart saying something very reasonable, and then you just read the responses and it’ll be like, I don’t know, Stephen Hawking’s view on quantum physics, and then just below you have like, “I just finished high school, but my view on quantum physics is … ” Yeah, it’s a bit … it’s frustrating. Although perpetually amusing, I suppose.

Julia Galef: Amusement is … or I guess bemusement is an attitude I strive for, as a substitute for indignation and outrage and frustration. I don’t always succeed. But, I don’t know. It’s not that I don’t also get really irritated by people misunderstanding, sometimes seemingly willfully misunderstanding my point, or other people’s points. I just try to keep in mind that communication does seem to be really hard and most of those-

Robert Wiblin: Especially with only 140 characters at a time.

Julia Galef: Yeah. I gotta say the character limit is … there are some advantages to it. It has forced me to be much more concise and to the point than I otherwise would be. And it’s kind of funny … I have this one blog post I’d written that was like four paragraphs, then I realized, wait, I could literally just write this in 140 characters and I wouldn’t lose that much. So that’s kind of a good thing, but it does have this downside that there are some things you really just can’t say in 140 characters, and so you have to do these strings of tweets and it just gets so messy. It really does not feel optimal in that way, but.

Robert Wiblin: So what have you learnt about communication from either being on Twitter or doing the podcast?

Julia Galef: I would say … well, I’m continually adding to my stock of ways that people can misunderstand topic X or topic Y, and I think one thing that I didn’t appreciate enough when I started out writing blog posts or doing interviews, et cetera, is that it’s not enough to just … if you’re worried about people misunderstanding you and thinking you’re saying X, it’s not enough to just add a sentence in your interview or in your blog post or whatever, that says, “By the way, I’m not saying X.” People will still think you’re saying X and respond angrily as if you did, yes.

And maybe some of that is they’re intentionally misinterpreting you, but a lot of it is just, you know, people don’t read super closely, so they may miss or not quite parse that line. If they’re going in with the assumption, the expectation that you believe X, then I think it’s just easy for the human brain to reinterpret your line so it doesn’t quite have the corrective impact that you expect it will. It’s been helpful for me to model this process as people having priors about what you believe.

Robert Wiblin: And not reading it carefully.

Julia Galef: And not … well, yes. So them not reading it carefully is a separate problem. The other problem is they have priors about what you believe, and you can give them evidence to try to budge them away from those priors, but if the priors are strong, you often need a lot of evidence to budge them. You have to not only say … Let’s say I wrote a post criticizing some government intervention for being ineffective. They may pattern match me to, “Oh, she’s probably a libertarian, she hates government, whatever.”

And that may not be wholly irrational, either, because often it is the case that people who criticize government programs are statistically much more likely to be libertarian than people who don’t, or something like that. Don’t anchor too hard on this one example, but. So they have this assumption about me, and I can say, “It doesn’t mean that all government programs are bad.” But if I just have that one paltry sentence, that might not update them that much from suspecting that deep down I really hate government. I might have to give stronger evidence, like saying more sincere, positive things about examples of government programs that I think were effective, and just spending more time, with more emotionally salient content, budging them from their assumption, from their prior that I hate government.

So that was one major lesson for me, was realizing that I can’t just say I don’t support X and cause people to believe, “Okay, she doesn’t support X.” And also that they’re not being completely irrational if they have a prior about what I believe based on what kind of people tend to say what I say.
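The updating process Julia describes can be made concrete with a toy Bayesian calculation. This is only an illustrative sketch: the function and all the probabilities below are invented for the example, not taken from anything she said. The point it shows is that a single disclaimer sentence is weak evidence and barely moves a strong prior, while sustained costly signals move it much more.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# H = "the author hates government". Suppose a reader's prior is 0.9.

# A one-line disclaimer is weak evidence: imagine authors who don't hate
# government are only twice as likely to include one (0.8 vs 0.4).
posterior_one_line = bayes_update(0.9, 0.4, 0.8)

# Sustained, sincere praise of specific programs is much stronger
# evidence: ten times as likely from someone who doesn't hate government.
posterior_strong = bayes_update(0.9, 0.05, 0.5)

print(round(posterior_one_line, 2))  # still above 0.8
print(round(posterior_strong, 2))    # drops below 0.5
```

With these made-up numbers, the disclaimer only nudges the reader from 0.9 to about 0.82, while the stronger signal pulls them below even odds, which matches the intuition that one paltry sentence can't do the work.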

Robert Wiblin: That makes sense. Did you find ways to make the podcast more popular over time? Like, adjusting the format or changing your hosting style? I’m asking for a friend, of course.

Julia Galef: To be honest, I’ve done embarrassingly little optimization. For a podcast that’s been around over seven years.

Robert Wiblin: You just talk to people.

Julia Galef: I mean, I’m mostly just doing what I enjoy, and shrugging and being like, “Well, whatever audience I get from the thing I enjoy, great.” And I’m very pleased that it’s as large as it is. I’m sure, however, that’s not a defense of not optimizing. I could optimize some and I think I would still be doing things that I enjoy maybe just as much, but maybe appealing to more people. So I keep vaguely thinking, yeah, I should really do more research on how to improve podcasts or make them more widely appealing. I have some ideas, some experiments that I wanna try with the podcast, to see what sticks.

Robert Wiblin: That was some caveating. I was about to pattern match you to people that just hate optimization [crosstalk 00:27:49]. You’ve convinced me, I’ve updated from that. So you spent some time in academia earlier in your career, right, and then you decided that it wasn’t the right fit for you?

Julia Galef: A little time, yeah. I mean, I spent one year in a PhD in economics before dropping out, if that counts as being in academia. Although I also … I was a research assistant for several years before that, to various social science researchers at Columbia when I was an undergrad, and then MIT and Harvard after I graduated. I spent a year at the National Bureau of Economic Research as a research assistant, and then a year at Harvard Business School writing case studies on international economics for a professor there. So I have some experience with academia aside from my aborted foray into a doctorate.

Robert Wiblin: So why did you decide not to go down that route, because it seems pretty close to what you’re doing.

Julia Galef: I mean, it wasn’t a sudden turnaround, where I was super pumped for one year and then quit despite that. By the point I started the PhD, I’d already started having doubts about whether this was the right career track or field for me. But I thought I should give it a shot anyway, now that I’d come this far, because I’d spent my undergrad studying statistics and doing research for professors with the idea that I would go into academia. I’d already invested a lot into it, so I didn’t wanna give up too quickly. But the reasons for leaving … I mean, they were both personal and intellectual or ideological.

The personal reasons were just, I think I really am a generalist by nature. I’ve optimized my career so far for getting to spend as much time thinking and talking to people about a wide variety of interesting and important topics, and I love that, and it’s really hard in academia to do stuff like that. I guess until you’re really tenured and you can just be a dilettante. You have to be really narrow and detail-oriented. So there were the personal reasons.

And then the ideological or intellectual reasons were … this was before the replication crisis hit, a few years before. But nevertheless, I had noticed a bunch of these problems with social science methodology. Not completely of my own accord: I’d talked to people who were really discerning about research methodology, who had concerns, and I’d seen … there were some specific papers where I had inside knowledge of how that paper was put together, and it was like seeing the sausage being made. I remember talking to one professor who described how they had run some mini surveys ahead of time to figure out which wording of their question would be most likely to get the results that they wanted in the [inaudible 00:30:23] study that they actually published, and they felt no compunctions about doing this or telling me about it.

Robert Wiblin: “Do I really wanna enter this corrupted industry?”

Julia Galef: Yeah, and that’s not to say that there isn’t good research being done, or that I couldn’t have chosen to do good research if I’d really tried, but it felt like the deck would be stacked against me. If you get rewarded for publishing a lot, and trying to be a stickler for research quality makes it harder to get published, then academia’s already really hard and competitive, and this would be making it even harder on myself. Do I really wanna do that?

Robert Wiblin: So you’ve had an unconventional career since then. Seems like you’ve been making your own future, starting your own projects.

Julia Galef: Basically.

Robert Wiblin: And jumping from thing to thing. Is that a path that you would recommend? Has it felt risky at any point?

Julia Galef: I mean, I’ve been fortunate in having friends and family who I could live with, and my parents gave me a monthly stipend after I left grad school and didn’t have a job. This isn’t something that everyone gets to do. I’m definitely lucky, and I recognize that. And it was super helpful to have that cushion of at least a couple years when I didn’t have to be fully supporting myself in New York City and could just explore, and meet people, and learn about different opportunities and so on.

I think that one generalizable piece of advice, even if you can’t do exactly what I did, is to, as much as is feasible for you, just spend a lot of time getting to know interesting and smart people working on cool things. And even if you can’t predict exactly how that will end up benefiting you, I have decent confidence that it will in some way. Those connections are how you hear about cool opportunities that aren’t public, or that’s how you end up finding people to work with on something that wouldn’t have occurred to you if you hadn’t known them. That kind of thing. That’s been really useful to me in the long run.

Robert Wiblin: So obviously one of the key decision points was deciding to leave your PhD. Have there been other crossroads where you’ve had really hard career decisions to make?

Julia Galef: Honestly, looking back at all of the shifts … or I don’t know about all. Certainly a lot of the career shifts that I’ve made, or shifts in my plans or how I’ve been thinking about my career, they’ve mostly been epistemological in some way. Like, I mentioned the econ one where I was nervous about the quality of research. When I was in college, this would be an early example, I switched from a … I was gonna be a political science major. And then switched to economics, and then switched to statistics, and basically I just, I was very interested in the questions that political science studies, but then just got frustrated with the lack of rigor in answering them, which is not entirely because political scientists aren’t rigorous people, they’re just very hard questions to get rigorous answers to because you can’t really run RCTs on countries, or rerun history, which is unfortunate.

So that was one. There was also this … I tend to gloss over this period of my career, just because it makes for a more complicated story, but I did spend a year and a half thinking I was going to go into urban design and architecture. Yeah. I don’t talk about it that much, but Google my name, you can find stuff I’ve written for Metropolis or The Architect’s Newspaper.

Robert Wiblin: When was this?

Julia Galef: It was 2009, 2008? Something like that.

Robert Wiblin: Wow, okay. Recently. Well, fairly recently.

Julia Galef: Well, I mean, nine years ago, but yeah.

Robert Wiblin: What was your thinking there?

Julia Galef: Right after I left my PhD, and I was just … basically my plan was I’ll be a freelance journalist as a way to learn about cool stuff being done. And so some of the freelance opportunities I was able to find were about urban design, and urban planning and architecture, and I’ve always been drawn to subjects that are about complex systems, and complex systems’ interaction with each other, and making complex systems work better. This was kind of what drew me to economics. And urban design and planning-

Robert Wiblin: So you don’t mean visual design, you mean thinking through the social science and economics of how you lay out a city or organize its transport.

Julia Galef: I guess I mean all of those layers. There’s definitely a physical design layer. It was pretty cool, actually, to think about how the physical design of a downtown, or the physical design of a waterfront, or a park, or campus or something can make the space work better. Either work better socially, like cause people to have better social interactions in that space, or make it work better economically. It was just cool to think about the intersection between physical design and economics or psychology. Unfortunately, the rigor in those fields was also not that great. And I think partly that’s because designers tend to go into those … like, ask those questions, and designers … people who go into design are usually not the same people who are super interested in really rigorous social science methodology. And also, again, it’s kind of hard to do experiments about a downtown of a city.

So that was why I ended up shifting into science journalism, because scientists loved to answer questions like, “How do you know?” And, “What evidence are you using?” And when I’d ask those questions of designers, talking about their projects, they didn’t … they were confused or put off about me asking the question, or they would give an answer that was kind of orthogonal to what I was asking.

Robert Wiblin: “Why aren’t all these designers just obsessed with impact evaluation like me?”

Julia Galef: I know. I don’t really fault them, it’s just not really their thing, but.

Robert Wiblin: So looking back, say to when you graduated from high school, are there any other paths that you wish you might have pursued earlier on? Other than running for political office.

Julia Galef: Running for political office sounds horrifying. I want other people to do it.

Robert Wiblin: I was kind of teasing you there.

Julia Galef: Are there other … I mean, it’s easy to look back and wish that I had done things sooner, or not taken random detours into architecture and urban design. But all it like …

Robert Wiblin: But you don’t look back with regret that you didn’t commit yourself to dentistry or something.

Julia Galef: No, I mean, my life right now is just pretty amazing, by my standards. I remember someone asking me back in 2007 or something, whenever I dropped out of my PhD, “Well, what would you ideally like to do?” And I said, “Honestly, I would like to spend as much of my life as possible just talking to smart and interesting people about important things. That would be great.” And that’s not a defined career path, but I feel like I basically do that now. I have the podcast, I give talks sometimes, and this project for the Open Philanthropy Project involves having interesting and important conversations with smart and thoughtful people, and I’m doing the thing I wanted to do. It’s hard to imagine it being that much better, for me, by my standards.

Robert Wiblin: Do you think you ended up in a good place in part because you explored so widely? You tried so many different things and …

Julia Galef: It is so hard for me to have conclusions about why … to the extent that I succeeded at my goals so far, why is that? It’s really hard to speculate.

Robert Wiblin: You don’t wanna generalize.

Julia Galef: I mean, the one thing I said earlier is something that I’m decently confident in, that if I look at the opportunities that I got that helped me progress to where I am now, they seem to be because I just met a lot of smart and cool and thoughtful people working on important things, and ended up getting opportunities I wouldn’t have gotten if I didn’t have that network of friends, basically. So I do think that’s good advice for people in general, who aren’t already confident about what they wanna do and have a clear path to follow.

Robert Wiblin: Broadly speaking, the problem that you’re working on is improving human judgment and reasoning. And it seems like one of the places that this would be most valuable would be in higher tiers of government, or other influential institutions like the World Bank, or perhaps the Bill and Melinda Gates Foundation. How feasible do you think it is that some of the research that you’re involved with or aware of, or that people like Philip Tetlock are doing on forecasting, could actually be applied to significantly improve the way that decisions are made in these important institutions?

Julia Galef: I mean, I think it would be amazing if legislators or policy makers were really training their judgment to improve their calibration, to follow best practices of questioning their own judgment or seeking out people who disagree with them, et cetera, et cetera. Unfortunately, I think most of the problem comes down to incentives. And if you as a congressperson, for example, don’t get rewarded for your accuracy, then it’s just gonna be really hard to get you to try to improve your accuracy.
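Being “calibrated” here means that events you assign, say, 70% probability actually happen about 70% of the time. The forecasting-tournament work Julia alludes to typically rewards accuracy with a proper scoring rule such as the Brier score. A minimal sketch of how that score works (the forecasts and outcomes below are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0-1) and
    binary outcomes (0 or 1). Lower is better; a forecaster who always
    hedges at 0.5 scores exactly 0.25."""
    pairs = list(zip(forecasts, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

outcomes = [1, 1, 0, 1]  # hypothetical resolved events

# A confident forecaster who is right three times out of four...
confident = brier_score([0.95, 0.95, 0.95, 0.95], outcomes)

# ...versus a forecaster who always hedges at 50/50.
hedged = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)

print(confident)  # 0.2275: slightly better despite the one big miss
print(hedged)     # 0.25: the fixed score for always saying 50/50
```

One nice property of this rule is the one Julia points at next: vague, hedgy forecasts get a mediocre score by construction, so you can actually tell whether someone’s confident claims paid off.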

Robert Wiblin: I mean, most members of congress, like 90% or something, are reelected each election, so a lot of them just aren’t in that much danger. It’s a bit surprising that they don’t use the fact that they have very high reelection rates to … I mean, they’ve, in a sense, got quite a lot of discretion, then. They could vote for things or against things that they don’t like maybe more than they do, and they could just mouth off and actually express their true opinion and try to be reasonable. Some of them would lose their seats, but many of them would then get to do what they believe.

Julia Galef: I don’t actually know how insecure congresspeople should feel about their seat. And maybe they feel more insecure than they should, or something. But I just don’t see anything actively pushing them to be more accurate. So let’s say they knew their seat was secure, and they were well-intentioned and really did want to pass the best policies for the country. The impact of your decisions is so long-term and uncertain, so it’s really hard to tell if you made the right choice or not. And you get adulation or disapproval in the short run, based on whether your choice seems good or not. And so it just seems like … my rule is basically any time the benefits of accuracy are uncertain and in the future, and the costs of trying to be more accurate are paid up front, in terms of effort or unpopularity, there’s gonna be a really strong pressure against accuracy.

Robert Wiblin: Yeah. I guess the kinds of people who tend to get elected are probably not the most intellectually fastidious people, and I think I was also-

Julia Galef: That’s probably true.

Robert Wiblin: And while it’s true that most of them are reelected when it comes to congressional elections every two years, they also run the risk of getting primaried if they stick out too much, so their own party could vote to not nominate them again.

Julia Galef: Right, yeah, the primary step complicates things a bit.

Robert Wiblin: Yeah. So has your personal experience given you much insight into what places it might be possible to get more reforms? Are there some institutions that are more open to changing how they think about things and trying to become more rational?

Julia Galef: Well, the intelligence community has seemed quite interested in this. And in fact IARPA, which is the … so IARPA’s a newer spinoff of DARPA. Where DARPA is funding research that could produce innovations helpful to the defense community, to the military, and IARPA is doing the same thing but for the intelligence community. So they’ve been … it’s run by Jason Matheny and he was actually the one who funded Phil Tetlock’s work on forecasting that eventually got turned into the book Superforecasting. So Jason’s all about epistemic rigor and accuracy.

Robert Wiblin: Yeah, both Jason and Philip Tetlock have agreed to come on the podcast at some point, so hopefully we’ll be able to find a time soon to talk to them about that. So the intelligence community. Do you think that can be explained by the incentives being good for bureaucrats there?

Julia Galef: Oh, well, so I don’t actually think that the current intelligence community, or the intelligence community historically, is that incentivized to try to improve their accuracy. And if you look at the forecasts that people in the intelligence community make, they’re often hedgy and they’re not the kind of thing where you could really tell if the person was right or wrong. But I guess I named the intelligence community for a couple of reasons. One, because there just happen to be people like Jason who are working on changing the incentives by experimenting with forecasting tournaments and things like that. And two, because it at least seems like in the intelligence community, there are fewer disincentives for accuracy than there are in many cases … I don’t know. If you’re a pundit, or something-

Robert Wiblin: You don’t have to appeal to the general public, do you?

Julia Galef: Yeah, you don’t … People aren’t pressuring you to be either really mainstream appealing and likable, or contrarian and super original in your ideas. So at least in the absence of those pressures, I think there’s more hope for instituting new norms of accuracy.

Robert Wiblin: Are there any other places that you can think of where there’s been progress made? I mean, it seems like, taking a longer term view, people are more reasonable than they were 200 years ago, so bit by bit the quality of public discourse has mostly been improving. Perhaps the last few years don’t look so good, but, yeah-

Julia Galef: You’re comparing to 200 years ago?

Robert Wiblin: Yeah. I’m just thinking. What we read today from those times is the most outstanding work by the very brightest people, but if you kind of-

Julia Galef: Are you talking about comparing the best to the best, or about the median to the median?

Robert Wiblin: Oh, no. More the median to the median, yeah. I’m thinking, just people are a lot more educated now, and yeah. You’re not even convinced that things have gotten more reasonable, that’s very interesting.

Julia Galef: I don’t know. Certainly we know more now, so we know more science. I think the US has always been … so I guess I’m just thinking about the US now, to keep it simple. Simpler. The US has always been pretty strongly anti-intellectual. Okay. So in one sense, we are maybe more reasonable in that there’s more scientific knowledge than there was before. In another sense, I feel like we are less reasonable in that the way topics are discussed is more linked to entertainment value and sensationalism than it used to be. And from what I’ve read, we’re also more polarized, and the more polarized you are, the harder it is to have reasonable discussions, because you instinctively react against whatever people, quote unquote “on the other side”, are saying. Maybe we’re more reasonable now. It’s not a clear slam dunk answer to me.

Robert Wiblin: Yeah. I guess it would be quite hard to answer this definitively, because you would have to find a way of randomly sampling a bunch of discourse from today and a bunch of discourse from 1820, it’s-

Julia Galef: Maybe if you look at newspaper …

Robert Wiblin: Yeah, newspaper [crosstalk 00:44:56].

Julia Galef: Like opinion pages or something of newspapers from those years.

Robert Wiblin: Yeah, and judge them. Yeah, interesting. Okay, well I’ll see if I can find if anyone’s actually looked into that.

Julia Galef: Yeah, that would be cool.

Robert Wiblin: That would be a cool thing to find out. I suspect it hasn’t been looked at. Okay, so maybe on the broader scale, we’re not getting more reasonable. But are there any lights of hope other than the intelligence community?

Julia Galef: Well, I’m pretty happy with what’s happening in the social sciences. I mean, in a bunch of scientific fields, but I’ve just been paying attention more to the social sciences. The replication crisis is depressing in one sense: realizing that a large fraction of studies don’t replicate, and that things like p-hacking, or misapplications of a statistical test made universally throughout a particular field in a way that really impacts the truth of the results, are really common. Finding that out has been depressing, a bit.

But I feel like I’ve seen attitudes changing just in the last two years, even. There are much more pro-openness, pro-rigor, anti-p-hacking attitudes being espoused now than there were even two years ago. I don’t know if this next thing I’m about to say is true, but I’ve at least heard a rumor that in job interviews, or when deciding whether to hire someone to a research role, to a professorship, people are starting to look at stuff like: do they share their data? Do they pre-register? Things like that. We now pre-register our medical studies, and that’s been really good. So there’s maybe a lot of feeling like we’re worse off, because we’re uncovering problems that had always been there and are now visible when they weren’t before, but from what I can tell, a large fraction of scientists, maybe the majority of scientists, really wanna fix this problem and are spending a lot of cycles doing so. So that’s cool.

Robert Wiblin: As long as they don’t have to totally ruin their careers in order to do it, then they’ll be all right.

Julia Galef: Yeah, I mean, yeah. It’s a …

Robert Wiblin: Step by step.

Julia Galef: Yeah, if you care about something … you’d have to care about something quite a lot to be able to pursue it even at the expense of your own career. I think most of us humans are maybe not quite that altruistic, but you only need some amount of altruism to get a lot of progress collectively.

Robert Wiblin: So there are a lot of different ways that people could try to tackle the general problem of human rationality and irrationality. Are there any paths of study or work that you’d particularly like to highlight?

Julia Galef: Like, fields that someone could go into or questions they could pursue?

Robert Wiblin: Yeah. If someone was 20 and they’re listening to this, and they’re thinking, “I really like Julia Galef. I like what she’s doing.” What should they study, ideally, and where might they go to work once they graduate? Or is it just that you have to be an eclectic public intellectual who dabbles in lots of different topics and tries every job?

Julia Galef: No, no. I think it would be great … there’s been a lot of research into irrationality, into heuristics and biases. This is the kind of thing that Daniel Kahneman [inaudible 00:47:51] won the Nobel Prize for a few years ago. There hasn’t been a ton of research on interventions, realistic interventions, that might help improve judgment. Phil Tetlock is one of the few exceptions to that. Other than that … I would say the amount of research on debiasing, or improving judgment, is, I don’t know, maybe an order of magnitude smaller than the amount of research demonstrating the existence of judgment flaws. And even within that subset of research about debiasing, most of the interventions that I’ve seen are pretty small-scale. They’re like, “If we tell someone about this bias, do they demonstrate it in a contrived experiment in a lab that day?” Which is a far cry from, “Can we improve someone’s judgment in a lasting way that impacts real life decisions that they make for their life, or their career?”

And the reason, of course, that that is so rarely studied is that it’s a very expensive thing to study. You need these long term studies. It’s hard to test things in real life, in a naturalistic setting, as opposed to in a nice, simple, contrived lab experiment. But that is the kind of research that I think we actually need to have any shot at a really rigorous base of knowledge about improving judgment. So that’s the kind of research I would love to see someone do in academia, or, alternatively, fund as an independent funder, because, again, the incentives are somewhat stacked against you if you’re trying to get a lot of papers published as a young scientist.

Robert Wiblin: So the natural things to study, I guess, would be psychology or economics, or some other kind of social science?

Julia Galef: Oh, yeah. So I mean, I guess technically the kind of studies I’m talking about could be done in a bunch of different departments. It could be done in behavioral economics, or cognitive science. Maybe a few others. But maybe business, I’m not sure. But yeah, I think to get a feel for the landscape of topics and what interventions would be promising enough to try, I think studying behavioral economics and cognitive science is probably what you wanna do.

Robert Wiblin: So you’ve talked about IARPA and Philip Tetlock. Are there any other really outstanding research groups that you could join, once you’d skilled up later on in your career? Sounds like it’s a fairly small [crosstalk 00:50:05] area.

Julia Galef: Well, I mean, research groups is tough. I can think of particular professors doing work that seems good. I mean, Tom Griffiths’ lab at Berkeley, I think it’s the Computational Cognitive Science Lab? I might have got the name slightly wrong. But he’s doing great work on studying whether the brain’s intuitive decision making heuristics are optimal under certain conditions, and how we can tell. And then there’s also Dan Kahan at Yale Law School, who’s done a lot of work on … I think his group is called the Cultural Cognition Project. And he has a blog. If you just Google “cultural cognition”, you can read a lot of his research. And that seems well done and interesting and about important topics.

Robert Wiblin: So for the right person, they’re potential PhD supervisors, or mentors, perhaps.

Julia Galef: Perhaps, yeah. I don’t actually know about [crosstalk 00:50:55].

Robert Wiblin: You don’t wanna commit them to it. Great. So moving on a little bit. How do you think about your career going forward? Where do you think it might be in five or 10 years time?

Julia Galef: I would love in five years or 10 … 10 years is too … who knows what the world will look like in 10 years?

Robert Wiblin: Back to doing a little bit of engineering, perhaps.

Julia Galef: Yeah. God. Maybe I’ll be a dentist. I don’t know. I’ve decided social psychology is too unrigorous, I need dentistry [inaudible 00:51:20].

Robert Wiblin: Precise drilling.

Julia Galef: Yeah. But in five years, it seems not wholly implausible to me that we could have a loose-knit, unofficial community of a hundred people spanning VC, tech, government and the media who are really thoughtful, and curious, and have engaged with the 12 most important issues for the future of the world, and have heard the best arguments on both sides, and have revised their views somewhat over time, and are acting on those models that they have forged through this process.

To me, that seems both plausibly achievable in five years, and also like it would be really good for the world. Obviously a hundred is a minority, but it’s a hundred of relatively influential people in their different fields, who influence where funding goes. Potentially how lobbying money is spent to influence policy. What ideas are being put out into the public discourse. These are really useful things, and so I think people who are in a position to direct those resources, and public attention and so on, having even a subset of those be … have invested time over the course of several years, making their models of these topics more accurate, would be really valuable.

Robert Wiblin: So you’re involved in both the effective altruism and rationality communities. What kind of mistakes do you think they might be making at the moment?

Julia Galef: I mean, I’m quite a fan of both of those communities, I’ll just say off the bat. Well, so, okay. So these mistakes are not universal, but mistakes, things that seem plausibly like mistakes to me that I have seen at least some large subset of those communities making, would include leaning too heavily or putting too much trust in explicit reasoning. Which is not to say blind guessing or just pure intuition is optimal, but I think there are certain models like, I don’t know, utilitarianism. Frameworks, I guess, which often give counterintuitive answers.

And I think the rationality and EA communities are quite good, compared to most of the world, at saying, “Okay, well, just because it feels counterintuitive doesn’t mean it’s wrong, and this is what the logic spits out, and so we should really take that seriously.” And I think that’s great. I think the world needs a lot more of that. But at the same time, if something feels counterintuitive, or suspicious, or it feels maybe, I don’t know, sketchy, or like it might have ethical concerns around it or something, I think you should take those concerns seriously too, and try to interrogate what seems wrong about this. Is this … I guess I just, I don’t want people to lean too heavily or too completely on any one explicit reasoning framework. I thought that Paul Christiano did a good job in a recent blog post, which, I don’t know, maybe you can link to it.

Robert Wiblin: We can link to.

Julia Galef: Great. I think he called it Integrity for Consequentialists. And I don’t know exactly what his trajectory was of landing at this view, but basically, this is the kind of thing that I think can happen sometimes if you allow yourself to be suspicious of some of these sketchy or counterintuitive conclusions of a framework like utilitarianism. You can say, “Well, gee, it seems maybe bad to have people breaking promises to each other if they think that’s the utilitarian thing to do.” And that’s a fork in the road, where on the one hand you can say, “Oh well, it’s the utilitarian thing,” then just do it, or you can say, “Hm, this seems maybe bad, let me think some more and see if I should be revising this model somehow.” And I think Paul’s post, Integrity for Consequentialists, is a really nice, elegant revision of a standard utilitarian model that I think works better. It’s probably not perfect, but.

And it’s the kind of thing that you won’t come to if you just trust the logic of your current framework even if it feels wrong. So, yeah. Putting more weight on stuff seeming weird or being uncomfortable with conclusions is one potential thing I would advise. And then I guess it’s not clear to me … it seems plausible to me that it might be a mistake for the EA community to be trying to grow and do as much outreach … Sorry, to grow as fast and do as much outreach as it is doing. It seems to me like the EA … so if the EA community was more like a political movement, then that would seem good.

Political movements need money and they need votes, and anyone can give money and votes, and so you wanna get as many new people in as possible. But there’s this other end of the spectrum that’s more like a scientific community, or something, and you don’t wanna just add as many people as you can to the scientific community, of anyone who wants to join. You wanna keep the epistemic standards and the quality of discussion really high, and so you have to be more selective about who you add. And I think EA is somewhere in between those two poles. And it’s not obvious to me what the right answer is, in terms of how fast to scale up, but-

Robert Wiblin: Should we be more elitist, or more broad, yeah.

Julia Galef: Yeah, basically. Basically. It may be a mistake. I might, upon more reflection and careful consideration, think it’s a mistake to grow as fast as we are trying to grow.

Robert Wiblin: Yeah. Interesting. I read that post by Paul Christiano. It was really good. So I’ll definitely link to it. I guess I haven’t noticed that many people being dishonest or betraying one another in that kind of way, but I only interact with people I trust, which is kind of the point. That if you behave poorly, then people are just not gonna wanna be around you. So the people I work with and the people I’m friends with are carefully chosen to be very trustworthy, reasonable people, so.

Julia Galef: Yeah, yeah. I think there are strategic arguments in favor of integrity and keeping promises even when it’s not locally utilitarian, or it doesn’t seem locally utilitarian. And so I think, to some extent … I mean, to be clear, I haven’t seen a ton of this, of actual promise breaking. I’ve seen a little bit of it. It’s not clear to me … the world in general is often dishonest and breaks promises. I suspect that the EA community is actually better than the average level of integrity in the world as a whole. So I’ve seen a little bit of, I don’t know, promise breaking in the name of utilitarianism. Maybe what I’ve seen more of than that is people endorsing that as a rule, as opposed to doing it themselves in a way that I was able to perceive.

Robert Wiblin: Okay. Interesting. Maybe we can talk about that more another time. So what do you think is the biggest downside of the career path that you’ve taken?

Julia Galef: The biggest … I mean, one downside is just lack of certainty. If you have a more well-defined career track, like let’s say you go into academia and you get tenure, or you become a doctor and you have a practice, or you become a lawyer and you become a partner, et cetera. There’s some stability there and some certainty about what things will look like for you 10 or 15 years down the road. And I don’t quite have that. I feel like I’ve built up some security, just through diversity. The kind of stability that comes from robustness, where I have a number of different irons in the fire, and maybe if one of them doesn’t work out, I can ramp up the others, or it won’t be completely catastrophic because I’m not putting all my eggs in one basket. So I’ve tried to build in some robustness that way, but I am kind of figuring it out as I go along, and that’s just something that’s gonna be true any time you do something that doesn’t have a standard template.

Robert Wiblin: Yeah. Someone who I was speaking to for another episode earlier today was saying that it’s a lot easier to do that when you have a partner who’s able to potentially financially support you if you need to run your own project, and I guess you were leaning on your parents earlier in your life.

Julia Galef: Yeah, right after I left grad school. Yeah.

Robert Wiblin: Is that true that, well, if you have money in the bank or if you have [inaudible 00:59:10] then it’s a lot easier to take these career risks.

Julia Galef: That does make it easier. It’s true. Yeah. Yeah, absolutely.

Robert Wiblin: Are there any things that you could imagine learning in the next few years that could really send you off in a different direction with your career? Working on different problems or tackling them in other ways?

Julia Galef: Yeah, I mean, I think if I updated significantly in favor of one particular global catastrophic risk being imminent and likely, I might … The stuff that I’m doing is … it seems to me to be useful and valuable in the medium run, and useful in expectation across a lot of different possible … there’s no one useful consequence that I think is likely to result from what I’m doing. I just think that, in general, if we have influential people and decision-makers following, thinking and discussing procedures that are correlated with accuracy, then we get better results in the long run. But that’s a very indirect connection to draw, and it’s sort of … may not be the best thing to do if there’s one risk that suddenly looms, I might just shift my attention and resources to working on that one particular risk.

Robert Wiblin: Yeah. Makes sense. So we’ve been at EA Global all day, and I think we’re both pretty hungry, so we should go off and get dinner. But one last question is, are there any other conferences that you go to regularly where people could, I guess, potentially meet you or network with other people if they’re interested in working on the same kind of topics?

Julia Galef: Oh, well, I mean, one conference that I have been going to every year is the Northeast Conference on Science and Skepticism. Which is sort of like my roots. It was the origin of the podcast and thereby the origin of my current career trajectory. It’s in New York every year and it’s run by basically the skeptic community, so there’s some overlap between the skeptics and the rationalists or EAs.

They tend to focus on evidence and scientific knowledge, and scientific literacy and education and things like that. They don’t tackle the same kinds of questions that the EA or rationality communities do. They’re not quite as focused on what is the biggest, most important, impactful thing that there is to figure out. And they’re maybe somewhat more focused on just promoting the consensus view in a scientific field, against misinformation or pseudoscience, or fraud. Which I think is also valuable. So yeah, that’s in New York every year. I tend to do a live podcast taping at NECSS every year, and so you can check out information about the previous NECSS, and then as we get closer to the next one, it’ll have information about buying tickets and so on. It’s necss.org.

Robert Wiblin: Great. Well, my guest today has been Julia Galef. Thanks so much for coming on the podcast, Julia.

Julia Galef: My pleasure. This has been fun, Rob. Thank you.

Robert Wiblin: I hope you enjoyed that episode. Again, you can get personalised coaching to help you work on the same problems Julia is working on, by applying on the 80,000 Hours site. The link is in the blog post or the episode show notes. If you enjoyed this episode we have much more where that came from – subscribe by searching for 80,000 Hours in your podcasting app. It would also be great if you could let a friend know about the show.

Thanks for listening, speak to you next week.

Get free, one-on-one career advice to help you improve judgement and decision-making

We’ve helped dozens of people compare between their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:

Read more

Author: Robert Wiblin

Rob studied both genetics and economics at the Australian National University (ANU), graduating top of his class and being named Young Alumnus of the Year in 2015.

He worked as a research economist in various Australian Government agencies, and then moved to the UK to work at the Centre for Effective Altruism, first as Research Director, then Executive Director, then Research Director for 80,000 Hours.

He was founding board Secretary for Animal Charity Evaluators and is a member of the World Economic Forum’s Global Shapers Community.