#11 – Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
By Robert Wiblin · Published October 17th, 2017
What is the best, state-of-the-art therapy for depression? Do most meat eaters think it’s wrong to hurt animals? How likely do Americans think climate change is to cause human extinction? How do we make academics more intellectually honest, so we can actually trust their findings? How can we speed up social science research 10-fold? Do most startups improve the world, or make it worse? Why is research in top journals less reliable?
If you’re interested in these questions, this interview is for you.
A scientist, entrepreneur, writer and mathematician, Spencer Greenberg is constantly working to create tools to speed up and improve research and critical thinking. These include:
- Rapid public opinion surveys – which he has used to learn public opinion on animal consciousness, farm animal welfare, the impact of developing world charities and the likelihood of extinction by various different means;
- Tools to enable social science research to be run en masse very cheaply by anyone;
- ClearerThinking.org, a highly popular site for improving people’s judgement and decision-making;
- Ways to transform data analysis methods to ensure that papers only show true findings;
- Ways to decide which research projects are actually worth pursuing.
In this episode of the show, Spencer discusses all of these and more. If you don’t feel like listening, that just shows that you have poor judgement and need to benefit from his wisdom even more!
Highlights
Almost all Americans report believing that animals are conscious and feel pain. Almost all believe that it’s wrong to hurt cats and dogs. Fewer, but still a majority, think that it’s wrong to hurt farm animals. The main stated defence of meat consumption in answers to Spencer’s survey was that many people claim not to be sure whether animals suffer on farms. People report believing that developing world charities are much more cost-effective than they in fact are. Most Americans report believing that human extinction in the next century is incredibly unlikely – implausibly so in our view.
Spencer has designed an app to deliver cognitive behavioural therapy to people with depression around the world at low cost. The early testing has been conducted carefully and is very promising. Spencer has also helped build a tool to allow non-technical people to rapidly produce high quality surveys for psychology research, that anyone can use online.
Spencer and Rob think plenty of startups are actually harmful for society. Spencer explains how companies regularly cause harm by injuring their customers or third parties, or by drawing people and investment away from more useful projects.
Articles, books, and other media discussed in the show
- Clearer Thinking apps: How rational are you, really? Take the test., What is your time really worth to you?, How well can you tell reality from bullshit?, How airtight are your estimates?, Could You Start A Successful Company? and many more.
- UpLift app for depression
- Why high-profile journals have more retractions
- Considering a vegetarian diet: Is meat-free really better?
- Investigations of animal welfare conditions in US farms
- Social Science as Lens on Effective Charity: results from four new studies – Spencer Greenberg
- TEDxBlackRockCity – Spencer Greenberg – Improve Your Life With Probability
- EA Entrepreneurship – Spencer Greenberg – EA Global 2015
- Guided Track – an easy tool for building studies and behaviour change programs
- 13 Ways Some Companies Make Money While Causing Harm
- Why Bill Gates and others are concerned about AI, and what to do about it
- The rise of robots in the German labour market, Estimating the impact of robots on productivity and employment, Robots and jobs: Evidence from the US.
- Most people report believing it’s incredibly cheap to save lives in the developing world
- Positly for recruiting research participants
Transcript
Hey listeners, this is the 80,000 Hours podcast, the show about the world’s most pressing problems and how you can use your career to solve them.
I’m Rob Wiblin, Director of Research at 80,000 Hours.
Today’s episode has so many fascinating bits because Spencer is a fascinating man. I particularly enjoyed the discussion of two public opinion surveys Spencer ran, which come at the end of the show.
I listen to lots of podcasts myself and always do it using an app on my phone so I can choose whatever speed I like. I started saving hours a week when I realised I could listen to shows at double speed. If you want, you can get this show on your phone by searching for 80,000 Hours in your podcasting app and subscribing. That’s 80,000 as a number.
As always you can apply for free coaching if you want to work on any of the problems discussed in this episode. The blog post with this episode has a full transcript and links to articles discussed in the show.
And now I bring you Spencer Greenberg.
Robert Wiblin: Today I’m speaking with Spencer Greenberg. Spencer is a polymath with many interests and achievements. To start with, Spencer is an entrepreneur. He founded Spark Wave, a startup foundry which creates novel software products designed to solve problems in the world, such as scalable care for depression, and technology for improving social science. He also founded clearerthinking.org, which offers free tools and training programmes that have been used by over 150,000 people, designed to help improve decision-making and reduce biases in people’s thinking. Spencer is also a mathematician with a PhD in Applied Math from NYU, with a specialty in machine learning. Previously he co-founded a quantitative investment firm, where he designed algorithms to make daily predictions about thousands of stocks. Spencer’s work has been featured by major media outlets such as the Wall Street Journal, The Independent, Life Hacker, Gizmodo, Fast Company, and The Financial Times. Thanks for coming on the podcast, Spencer.
Spencer G: Thanks for having me.
Robert Wiblin: So we plan to talk about a number of the projects Spencer has on the boil and what we can learn from his experience. First though, whenever I talk to you, I’m always impressed with what you’ve managed to get done lately, so what are you doing these days, Spencer?
Spencer G: Well, so at Spark Wave, as you mentioned, we’re a startup foundry. That means we actually try to create new companies, and specifically we’re trying to create companies to solve problems that we see in the world. So for example, we have an app we’ll be releasing fairly soon called Uplift, which helps people who are suffering from depression. We have another app we’re working on for people with anxiety, and we have a whole bunch of studies we’ve been running, for example, a study on habit formation that I’m really excited about.
Robert Wiblin: All right, we’ll get back to Spark Wave later on, but one of the most impressive things you’ve been doing over the last few years is trying to improve the research methodology in psychology. Tell us a bit about that.
Spencer G: Some of our projects at Spark Wave are actually building tools or technology platforms that social scientists and also other people can use to do better social research. So for example, a lot of people don’t realise this, but you could recruit people online for studies very inexpensively, and we’ve actually built a tool to help people do this so that you can quickly answer questions about human psychology. Whereas traditionally, you know, if you had to recruit them to the lab, it might cost 10 or 20 times as much money and time to get those kinds of studies run.
Robert Wiblin: Right. So what kinds of questions have you asked people?
Spencer G: So lately we’ve been looking at a whole bunch of different things. I mentioned this habits study. So what we’re looking at there is we’re trying to figure out which habit formation techniques actually work to get someone to stick to a new habit. So we actually developed 22 micro-habit formation interventions. Each of them is just a little thing you could do that might make you more likely to stick to a habit, like tell a friend that you wanna form the habit, or put a note card on your computer reminding yourself of the habit. We have 22 of these, and we’re actually randomising people to different combinations of them to try to figure out what actually works to get people to change their behaviour.
Robert Wiblin: I guess if you’re testing that many different interventions, then you’d need a pretty large sample, right?
Spencer G: Indeed you do need a large sample. And that’s one of the wonderful things about these online recruitment methods is that you can get very large samples, and because we don’t have to bring people into a lab, if it’s all done automatically and digitally, it’s much cheaper. In fact, our software sets up the entire study, so when someone enrolls, every email they get is already gonna be preset and delivered to them at the right time.
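As a rough illustration, the kind of randomisation Spencer describes – assigning each enrolled participant a random combination of micro-interventions – could be sketched like this in Python (the intervention names and the combination size of five are hypothetical, not Spark Wave’s actual design):

```python
import random

# Hypothetical stand-ins for the 22 micro-habit-formation interventions.
interventions = [f"intervention_{i}" for i in range(1, 23)]

def assign(participant_id, k=5):
    """Assign a participant a reproducible random combination of k interventions."""
    rng = random.Random(participant_id)  # seeded per participant, so re-runs agree
    return rng.sample(interventions, k)

# Each enrollee gets their combination the moment they sign up, so the
# follow-up emails Spencer mentions can be scheduled immediately.
for pid in (1, 2, 3):
    print(pid, assign(pid))
```

With enough participants, comparing habit-adherence outcomes between those who did and didn’t receive each intervention estimates each one’s effect – which is exactly why such designs need the large samples Rob asks about.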
Robert Wiblin: The online recruitment process, that’s Amazon Mechanical Turk, or are there others?
Spencer G: Amazon Mechanical Turk is one of the biggest platforms for doing this kind of recruitment. We built a tool that actually sits on top of Mechanical Turk – it’s sort of an extra layer that makes it better for research purposes. It helps us do things really easily, like, say we did one study and we want exclusively those people in our next study because they’d already been exposed to this intervention, or we wanted to run a second study only on people that had done our first study. Our tool makes that kind of research really easy.
Robert Wiblin: I imagine the people doing this kind of work online are not typical, or they’re not representative of the whole of society. Do you have issues with people having particular age demographics, or income, or education?
Spencer G: Yeah, that’s a good question. The tool I’ve been mentioning is called Positly – you can check it out: Positly, for recruiting research participants. We’ve done a lot of research into who these people are and what they’re like, and in fact we’ve added features that help you target, so in Positly you can say “I only want males”, or “I only want people in this age group”. It allows you to use this population, which may not be fully reflective of the group of interest, but craft it to your needs.
Another thing about that is that a lot of the work we do tends to target people who are more similar to the Mechanical Turk population than the general US population. For example, Mechanical Turk tends to skew younger – like 30 year olds, 40 year olds, some 50 year olds – and that tends to be the audience that we’re targeting with our apps. Similarly, Mechanical Turk users are more tech-savvy than the typical US population, and again, that’s closer to the audience we’re targeting. The real question is not whether it’s perfectly representative of the population, but how well it represents the group that you’re interested in studying with your work.
Robert Wiblin: You also created an app called GuidedTrack, right? Which we use on the 80,000 Hours site. Tell us about that.
Spencer G: Yeah, so GuidedTrack is actually a new language. I like to think of it as a language of behaviour change. The idea is, if you wanna build an app – let’s say you’re an expert at sleep, and you wanna build an app to help people with sleep problems, but you’re not a programmer – well, how would you do that? You could hire a programmer at great expense, and then you have to figure out how to recruit a good person and how to manage them, and you have a lot of communication barriers going back and forth to help them understand what you know. Or you could use GuidedTrack, where you can actually learn it pretty easily, build your own app, and deploy it. Soon you’re going to be able to deploy GuidedTrack programs to both iPhone and Android, but right now you can only deploy within the lab.
Robert Wiblin: With a combination of the online recruitment, plus you have this language that you’re using that makes it very easy to develop applications, I guess, for encouraging behaviour change or surveying people, you’re able to run vastly more studies than academics can? Or you’re able to do it much more cheaply?
Spencer G: The way I think about it is that in order for research to be useful for the world, a lot of things have to go right – too many things. First, it has to be working on a problem that’s actually important. Second, you have to have a robust enough method that when you answer the question, you’ve got the right answer. Third, it has to be novel, so you’re not just answering a question someone’s already answered. Fourth, someone actually has to take that research you did and go apply it in the world, et cetera – you can make a long list of all these things that have to go right. Because our goal very explicitly is building products and tools that help people accomplish things they’re trying to accomplish, or help people with mental health problems, and all these different kinds of other applications, we have to think about every one of these things that could go wrong and try to make them go right.
It’s not just about getting fast research. That’s great to get fast research, there’s all these other things you have to focus on, as well.
Robert Wiblin: What kinds of questions have you asked people? Have you done anything that’s yielded any real benefits yet?
Spencer G: For example, Uplift, which is our app for people who are suffering from depression, we did a big study on that where we measured the depression levels of people who used our app. The first 80 people that completed the programme, their depression was reduced by 50% over about a month. Then we followed up six weeks later and found out that they maintained nearly all the benefits. Followed up again six months later, actually we just got these results a few days ago, after six months they maintained almost all the benefits. That was an example where that was really important research for us because we needed to see if this thing we built to help people with depression was actually doing it and whether those results would be maintained.
Robert Wiblin: That’s a huge effect size, almost suspiciously large.
Spencer G: It is. It is.
Robert Wiblin: Did you really believe it?
Spencer G: Honestly, it was even stronger than I expected, which was super exciting for us. Because our goal is really to help people, we’re trying to make it so that our research reflects reality as closely as possible. For us, if our program doesn’t help people then we’re not achieving our goal. This kind of research for us is asking: are we doing the thing that we’re trying to do? That means we have to make sure that we appropriately power the study and have a large enough sample size, we have to think about dropout, we have to think about all these different questions.
Robert Wiblin: I guess you were building on existing cognitive behavioural therapy research. So you were trying to pick the best ways of treating depression that people had already found.
Spencer G: That’s absolutely right. What we did is we did a large review of the evidence that we could find on what really works for depression. We also studied what existing solutions were out there. We looked for ways that we strategically thought that we could improve on what had been done before. Part of that is making a program that’s highly interactive. It’s constantly adjusting to you and reacting to what you say. It’s not just doing a static online course or reading a book but really trying to make something that’s closer to a ‘choose your own adventure’ story. Although it’s not a game. It’s actually to help you with depression. Anyone who’s interested in that you can sign up for our list to be one of the first to find out about it. Uplift.us, we haven’t released it yet but we’re releasing it fairly soon.
Robert Wiblin: Great, we’ll put up a link to that. Are you interested in the broader problems with academic social science? There’s been a lot of criticisms, I guess, in the last 10 years and really probably going back a long time about the problems of academics running way too many tests on their sample so that they can get positive results even if there isn’t really a positive result there in the data. Underpowering tests, there’s just generally low rates of replicability in social science findings. Are you interested in trying to solve those things more broadly and just tackling the problems one by one?
Spencer G: Yeah, I’m extremely interested in those questions. I believe that social science is actually asking a number of the most important questions that there are. How do you make humans happy? How do you improve human relationships? How do you make humans productive? How do you help mental health? These to me are really fundamental questions in society. It’s so important that we actually get answers to these questions and we make progress. I think while academia has had some wonderful breakthroughs, really some phenomenal ones, I think that it could be doing much better. The fundamental issue, it’s the core of everything that you said. All of these different problems, they really all emanate from the same source, which is that the incentive alignment is just not there. If you’re an academic, there’s extremely intense pressure to publish in top journals. If you don’t do that, you will get squeezed out of the system.
That means that whatever top journals are willing to publish, that’s what you have to do. That doesn’t push you in the direction necessarily of finding the important breakthroughs that are going to have huge improvements to people’s lives.
Robert Wiblin: An interesting finding that I recall reading is that publications in the top journals actually have lower rates of replication than findings in second and third tier journals, because they are more exciting, more surprising, more counterintuitive results. Even though they’re in better journals, because they are on their face implausible findings, in fact they often don’t replicate. Which suggests that the incentives really are not that good: the best result you can get as an academic is publication in one of the top journals, and in fact the way to do that is to come up with a wacky result that’s probably not in fact true.
Spencer G: It’s a very interesting point. One thing that I like to think about is what is the a priori chance that you come up with some novel important idea? Well that’s really hard. Right? You’re discovering something new about human psychology.
Robert Wiblin: And there’s millions of people trying.
Spencer G: And there’s lots of people trying. You have to publish your paper, so what do you do? Not only that, but in each field there’s a certain publication rate that’s expected. Reality is really hard, and yet there’s a certain publication rate that you have to hit. Something’s got to give there, right? You can’t have an amazing, novel, interesting result every time. What’s going to give? It could be novelty – maybe it just becomes small tweaks that people are doing on existing ideas. That’s okay, but that’s not necessarily good; it might mean they’re making trivial progress. Another thing that could give is the quality of the research, where you end up finding false positives. That’s more disturbing, because then you’re actually pushing people to publish false positives – if they don’t, they can’t hit the publication rate that they need.
Robert Wiblin: That’s quite interesting that people who aren’t super educated maybe don’t trust academic journal findings that much. Maybe they don’t respect the credentials, they don’t respect academia. Then you’ve got people who are quite informed who are inclined to defer to scientific research. Then there’s people who know a great deal about academic research and in fact might be more skeptical than many others. They appreciate that many of these supposed findings wouldn’t stand up to scrutiny.
Spencer G: I think that if you think about it – let’s suppose you don’t know very much at all about science. You might be skeptical of it, as you say. Then you start learning about science and you’re like, “studies are the way we should answer questions”. Then one day you realise that studies often contradict each other, and you say, oh no, what do we do with that? Well, I know – we’ll use randomised controlled trials, because they’re better at determining causality, and it’s very hard to answer causal questions without randomised controlled trials. Then you put your trust in those, but then you start realising that those contradict each other too.
You’re like, oh, okay, I know – we need meta-analyses, which group together these randomised controlled trials. Then you start looking at those, and then you realise, well, there’s publication bias: a bunch of randomised controlled trials were never published, so you’re actually seeing a skewed picture of the data. Then you say, oh, we need meta-analyses that try to correct for publication bias. Then you start learning that some of the methods for correcting publication bias are actually statistically flawed, and are known to not produce the right answers even though people use them.
Just, how far down the wormhole do you want to go? It can get very disturbing. You’re like, wow okay it’s hard to know what to trust. But the irony and the sad thing to me is that science is so powerful. The actual tools we’re talking about are incredibly powerful, it’s just about how do you execute them in a way, in the real world, with the weird incentives that exist, so that they actually lead to the right answer?
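The publication bias step in that wormhole is easy to see in a toy simulation – a sketch under made-up assumptions (a true effect of exactly zero, and journals that only publish positive, nominally significant results), not a model of any real literature:

```python
import random
import statistics

random.seed(1)

def run_study(n=30, true_effect=0.0):
    """One small two-arm study; returns the effect estimate and a z-like statistic."""
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(control)
    se = (statistics.pvariance(treat) / n + statistics.pvariance(control) / n) ** 0.5
    return diff, diff / se

studies = [run_study() for _ in range(2000)]

# "Publication filter": only positive, nominally significant results get published.
published = [d for d, z in studies if z > 1.96]

all_mean = statistics.mean(d for d, _ in studies)  # close to the true effect: zero
pub_mean = statistics.mean(published)              # skewed well above zero
print(round(all_mean, 3), round(pub_mean, 3))
```

A naive meta-analysis of only the published studies would confidently conclude the treatment works even though the true effect is zero – the skew that correction methods then try, imperfectly, to undo.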
Robert Wiblin: Do you have any ideas for other things that you could do which could make a difference here?
Spencer G: Yeah, as a mathematician and machine learning person, one technique that I absolutely love is that when you collect data, you withhold some of it, picked at random from your data set – let’s say 20% of it. Put it in a vault. Don’t look at it. Do all the analysis that you want – anything you want, even if you want to do 100 analyses. At first that sounds crazy: you’re going to be data mining, you’re going to find all these false positives. Fine. Then at the end, you pick the few findings that you think are the most important and most interesting and check them on the 20% of data you withheld. Now of course you’re going to have to gather larger sample sizes in order to do this. You have to make sure that the amount you withhold is really enough to confirm however many hypotheses you want to confirm – if you want to confirm five at the end, you need to make sure you have enough data to confirm five.
You do that and it keeps you honest. While you’re doing your research you’re like, well, I know that I’m going to check this. You’re actually scared that it’s not going to hold up. It’s a great way to enforce discipline on yourself as a researcher.
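This vault technique is the same train/test holdout that is standard in machine learning. A minimal sketch in Python on simulated data (the 20 candidate variables and the single real effect are made up for illustration):

```python
import random
import statistics

random.seed(0)

# Simulated dataset: 1000 participants, 20 candidate variables,
# of which only variable 0 truly affects the outcome.
n, k = 1000, 20
data = []
for _ in range(n):
    x = [random.gauss(0, 1) for _ in range(k)]
    y = 0.5 * x[0] + random.gauss(0, 1)  # only x[0] matters
    data.append((x, y))

# Put 20% in a "vault" before doing any analysis at all.
random.shuffle(data)
vault, explore = data[:n // 5], data[n // 5:]

def corr(pairs, j):
    """Pearson correlation between variable j and the outcome."""
    xs = [x[j] for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# Exploratory phase: run all 20 analyses freely, keep the strongest few.
ranked = sorted(range(k), key=lambda j: abs(corr(explore, j)), reverse=True)
top = ranked[:3]

# Confirmation phase: check only the chosen hypotheses on the held-out data.
for j in top:
    print(j, round(corr(vault, j), 3))
```

The real effect (variable 0) survives the vault check, while any false positives picked up during exploration tend to shrink toward zero on the held-out 20%.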
Robert Wiblin: I was told to do this as an undergraduate, especially in forecasting. In finance I think this is an extremely common thing to do if you’re trying to develop trading strategies for stocks, because there’s no point kidding yourself that the strategy is going to make money by skipping an out-of-sample test like that. You’re just going to lose money if you’re wrong about the answer.
Spencer G: Yeah, it’s really interesting how like in machine learning this is very standard practice, but in a lot of psychology, for example, very, very few people do it. Whereas, I think it would actually be very beneficial.
Robert Wiblin: Do you worry that because this method and the other things that you’re suggesting require larger samples, it’s going to distort the kinds of interventions that people test or the kinds of questions that people ask in favor of things that are very cheap to do with a large sample? Just doing surveys, for example, rather than providing substantial services to people?
Spencer G: That’s a really interesting question. In some sense I hope that it distorts it in the direction of research that you could do quickly and cheaply. I think of the metaphor of you’ve lost your keys in a dark room. There’s a lamp in one corner of the room, the keys are not more likely to be under the lamp than they are to be anywhere else. You should check under the lamp first because that’s the easiest place to find your keys. That’s actually the best search strategy.
Robert Wiblin: I really like that story that people use. They make fun of the person who’s looking for the keys under the lamppost. I don’t know that they’re going to have that much success if they don’t look under the lamppost because there’s not going to be any light. It’s kind of you’re damned if you do damned if you don’t. If the keys are not under the lamppost then you’re just in a very tricky situation whatever you do.
Spencer G: Exactly. That’s how I think about research. There are actually so many important questions that one could study that I think we should do a lot more looking under the lamp. We should look for areas where we could get a lot of data really quickly that actually helps with a real, important problem, and say of these other really important problems that are just really hard to study: we’ll get around to them. The high margin thing – the thing where you’re going to get a lot of bang for your buck in helping the world – is the lamp stuff.
Robert Wiblin: Maybe we just have to leave the problems that we can’t solve with current methods or current funding for a later time.
Spencer G: Yeah, exactly. Another thing that I think is really, really important, as I’ve done more and more studies – and now I’ve done really a huge number of studies – I’ve realised is that research really should be iterative. You have the standard practice a lot of times when you do science, you do this big study, you’ve got to get a paper out of it because otherwise you might have wasted months or maybe longer than months. You can’t afford to just let that data go to waste. You’ve got to get something out of it. That’s a really bad incentive. Then what will happen is people in the open science movement, who are trying to have research be higher quality will say, “you know, you really shouldn’t do that. You should pre-register your study and make it public what you’re going to do before you do it.”
What I have found doing research is that there’s a problem on both sides of the coin. If you just do this one big study and force something out of it, of course you’re going to have an incentive to get some false positive. But if you think of a study as a big monolithic thing where you pre-register all of your hypotheses, then you can’t learn that much from your study, because you’re like, well, okay, I didn’t pre-register this idea. Whereas I think the best research, in my opinion, is iterative research. The world’s so complex that by the time you do the first study, chances are you’re going to learn something that makes you realise there’s something different about the world than you expected. It might mean you need to do a second study, and that’s going to teach you something else, and a third, and a fourth.
This is one reason I’m really excited about rapid online research. We can do five, seven studies in a row. In one case we did fourteen studies on one question, because that’s how many it took to really feel like we understood it.
Robert Wiblin: Academic social science seems to be in a fairly bad equilibrium, but it’s in that equilibrium for a reason. It’s a stable situation because the incentives are bad but probably also the incentives to change the situation are bad. Do you think academics are really likely to take up these suggestions and potentially reform the system in ways that might be bad for them. The current approach that they’re taking would no longer get them the publications that they want.
Spencer G: That’s a great question. That’s why I think it’s so important that if you’re working on asking researchers to do something different, you have to find a way that it’s in their immediate interest to do so. Not just like the long term structural interest of all science.
Robert Wiblin: Or it would be good if everyone did it at once. But if you’re the one person who goes and does it first, then you’re screwed.
Spencer G: Exactly, if you ask them to do that, you’re like, “hey buddy, why don’t you just take one for the team.”
Robert Wiblin: Yeah.
Spencer G: And everyone else is not going to do it anyway, so it’s going to be for nothing.
Robert Wiblin: And you get fired anyway because you don’t have the publications because you’re aiming for too high quality so it doesn’t really even help. They have a very good excuse for not doing it.
Spencer G: I was talking to one high level research scientist and she was telling me that she associates all these recommendations on how to do better science with people who aren’t productive. That’s because a bunch of them actually do ask you to spend a whole bunch of extra time. I think this goes to another important point, which is that researchers are already strapped for resources. It’s not like researchers are saying, “I have all this free time, and all this free money, and all this free research labor that I’m not doing anything with.”
Robert Wiblin: It’s such a great idea to just double my sample, I don’t know why I didn’t think of that.
Spencer G: Exactly, there’s no research that’s free sitting around. It has to come from somewhere. One idea that we’re excited about is creating tools that help people do faster, cheaper research but also makes it easier for them to make higher quality research. Just to give one example of that, suppose that you are using an online platform that’s helping you do research faster. Which you want because you want your research to go quickly. It also made it really easy to release your data to the public. Because it’s already on an online platform, it’s just a checkbox. Maybe it’s not even a checkbox, maybe it’s already defaulted to saying we’ll release this in one year.
Then you have to say, “No actually I don’t want it”, and then the system tells you you’re an asshole. But it lets you still not release it if you don’t want to. The idea is that you can make it easy to do the good thing – it’s very little extra time on their part. You can nudge them to do the good thing, and you can help them achieve what they’re trying to achieve during that process, so that they actually want to use your tool.
Robert Wiblin: Let’s go back to the mental health app. I’m sometimes pretty depressed myself, what would the app offer me?
Spencer G: We try to very closely follow the protocol of cognitive behavioural therapy. Our app, you do one to two sessions a week and they usually take about 30 to 40 minutes. This is intensive stuff. You’re going to get homework assignments and it’s going to check in on you, see how you are doing. It’s going to have all kinds of tools to help you solve problems. Let’s say for example you didn’t do your homework, it’s going to help you try to figure out why you didn’t do your homework. It’s going to help you develop a strategy to do it next time. It’s going to be really intensive. Basically the idea is, teach you the principles of CBT and get you using them in your daily life. Because just knowing the principles without using them, really is not valuable.
Robert Wiblin: Do you only include people who go out and use the CBT properly in your sample to test whether it’s effective? Or do you include everyone who signed up for it but then didn’t really follow through because, I guess, it didn’t really help them?
Spencer G: There’s something called intention-to-treat analysis, where anyone who enrolls in the study, even if they disappear, let’s say they drop out after one hour, you still count them as if they’re in the study. If you do an intention-to-treat analysis, there’s two ways, well there’s more than two, but there’s two major ways that you can do the analysis. You can say for everyone that dropped out of the study, let’s treat them as their last measurement, it’s called last measurement carried forward. Or you could treat them as a total failure: you say they dropped out, we treat them as if they had no benefit.
Robert Wiblin: Worst case.
Spencer G: Yeah. I guess the worst case is that something really bad happens to them; for something that’s generally helpful, worst case is nothing happened. Then you can do non-intention-to-treat analysis, which is to just look at the people who completed the programme, for example. So we’ve done all those kinds of analyses, of course. When you’re trying to figure out the way reality works, you do every analysis to make sure you understand all the different aspects.
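The three analyses Spencer describes can be sketched in a few lines of Python. All the scores and the dropout pattern below are invented, just to show how the choices differ (lower score = less depressed, NaN marks a missed measurement after dropout):

```python
import numpy as np

# Hypothetical depression scores at four check-ins (lower = better).
scores = np.array([
    [20.0, 15.0, 12.0, 10.0],              # completed the programme
    [22.0, 18.0, np.nan, np.nan],          # dropped out after session 2
    [19.0, np.nan, np.nan, np.nan],        # dropped out after session 1
])

def locf(rows):
    """Last measurement carried forward: fill each gap with the
    participant's most recent recorded score."""
    filled = rows.copy()
    for row in filled:
        for i in range(1, len(row)):
            if np.isnan(row[i]):
                row[i] = row[i - 1]
    return filled

def treat_as_failure(rows):
    """Treat dropouts as total failures: assume no benefit, so gaps
    are filled with the participant's baseline score."""
    filled = rows.copy()
    for row in filled:
        row[np.isnan(row)] = row[0]
    return filled

def completers_only(rows):
    """Non-intention-to-treat: keep only participants who have
    every measurement."""
    return rows[~np.isnan(rows).any(axis=1)]

# Mean improvement (baseline minus final score) under each analysis.
# Completers-only looks best, treat-as-failure looks worst.
for name, data in [("LOCF", locf(scores)),
                   ("treat as failure", treat_as_failure(scores)),
                   ("completers only", completers_only(scores))]:
    print(name, (data[:, 0] - data[:, -1]).mean())
```

Running every variant, as Spencer suggests, shows how much the conclusion depends on the assumption you make about dropouts.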
Robert Wiblin: And then integrate it but just be careful not to pick the most outlier result, the one that seems most positive.
Spencer G: That’s the really interesting thing: the research practices you should have to be trustworthy to outsiders, so that I can see that what you’re doing is good, end up being quite different from the research practices you use to see for yourself that what you’re doing is good. For example, it’s very common that people will say well you shouldn’t test too many hypotheses. You shouldn’t collect too many variables. That’s because from the outside perspective if I know you collected a ton of variables and just reported on a few of them, I can’t tell why you did that. Maybe you tried a hundred different analyses and you just gave me the three that had any kind of significance. If you’re doing the research yourself for yourself, and your motivation is really to figure out the truth about the world, you actually do want to collect a lot of variables.
You want to hold yourself to a really high standard by doing things like withholding data in a vault so that in the end you can check to make sure you didn’t come to false conclusions. The reason you want a lot of variables is because it gives you a lot of flexibility to ask questions you hadn’t even thought of, to check things in different ways. If you check something from three different directions and they all agree, that’s a good sign.
Robert Wiblin: I guess, it’s not so bad if you’re doing it yourself because you know everything that you did. In principle you can correct for that and think well I ran a lot of tests so it’s not surprising that some of them came back positive. It might be difficult to know exactly what kind of correction to make in your own mind. I guess you’re saying the out-of-sample testing basically allows you to do that in a pretty thorough way?
Spencer G: Exactly, yeah, because for example, let’s say the things at the end of the day were just simple averages. You’re like, okay at the end of the day I’m going to collect a hundred variables but let’s say I think that I’m going to have probably five averages that will be the important findings. You can withhold enough data on the side that you can, with pretty high confidence, check those five averages without succumbing to many false positives.
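The vault idea can be illustrated with a toy simulation. Everything here is invented: the dataset is pure noise, so the exploratory phase is guaranteed to cherry-pick spurious "findings", and the point is that the untouched vault data lets you catch that:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study: 500 participants, 100 measured variables,
# all pure noise (the true mean of every variable is zero).
data = rng.normal(0.0, 1.0, size=(500, 100))

# Lock a third of the rows in a "vault" before exploring anything.
vault, explore = data[:167], data[167:]

# Exploratory phase: freely cherry-pick the 5 variables with the
# largest observed means. With 100 noise variables, some always
# look impressive by chance alone.
top5 = np.argsort(explore.mean(axis=0))[-5:]

# Confirmation phase: re-check only those 5 pre-selected averages
# on the untouched vault. A one-sample t statistic per variable:
n = vault.shape[0]
means = vault[:, top5].mean(axis=0)
sems = vault[:, top5].std(axis=0, ddof=1) / np.sqrt(n)
t_stats = means / sems

# With only 5 planned checks, a simple threshold (|t| > 2.6, roughly
# a Bonferroni-style correction) stays meaningful. Since the data
# is noise, almost none of the exploratory "findings" should survive.
survivors = np.abs(t_stats) > 2.6
print(int(survivors.sum()), "of 5 exploratory findings survived the vault check")
```

Because only five pre-specified averages are checked against the vault, the multiple-comparisons problem from the hundred-variable exploration never touches the confirmation step.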
Robert Wiblin: Do you think these mental health apps are going to turn out to be one of the most effective ways of dealing with mental health problems, or maybe just improving health around the world in general?
Spencer G: This is our goal. Of course there’s always a lot of uncertainty in these kinds of things. We’re trying to create completely scalable solutions. Earlier you talked about this question of does trying to do this fast online type research lead to doing trivial things or things that aren’t deeply helpful? I think one of the neat things about it is it forces you to do scalable things, right? If you’re doing it online, if you’re never meeting with that person in person, it has to be scalable. With GuidedTrack, our behavioural change language, when you build a study, you can then take the same code and turn it into an app. If you found that an intervention worked in a study, why not release it to the world and let everyone benefit from that? Our thinking is let’s try to build completely scalable solutions; because they’re software, we can offer them at very low cost. Anyone can use them.
Robert Wiblin: You can help people in the developing world as well as just rich people in America?
Spencer G: Exactly, we’re getting started in the US but long term, we would love to bring this to India for example and other countries.
Robert Wiblin: Let’s move on. You also started this website clearerthinking.org. Tell us about what people could find on that.
Spencer G: Clearerthinking.org is a website for helping you understand how to make better decisions and in general how to improve your life using ideas from science. The thing that you’ll observe is that there are all of these books about behavioural economics or about the way the human mind goes wrong, how it can be irrational, but there’s very little content about how you take that information and actually live a better life with it. Our mission with Clearer Thinking is actually turning these things into interactive tools, interactive tests, interactive training programs that let you apply these insights to what you’re doing. Just to give you an example, one of these is Bayesian thinking. This sounds like a confusing math concept, but we have a little training program on Bayesian thinking where we teach it to you in a completely non-mathematical way. We don’t need equations to teach it to you. Then we actually give you a whole bunch of examples from your life, from the kinds of things you do with your life, and how it should change your thinking about those things.
Robert Wiblin: What is Bayesianism? Can you explain it in 30 seconds or is that a bit challenging?
Spencer G: Basically, people mean a lot of different things when they say Bayesianism. Here I just am referring to Bayes’ rule which is this rule that says let’s suppose you want to know the probability of something. Before you got some evidence, you had some probability. You thought there was a 10% chance of that thing happening. Now you have some new evidence, well how confident should you be? Now should you be 20% or 5%? It’s basically just a mathematical rule about how to take your prior probability and then update it when you learn something new.
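The rule itself fits in one line. A minimal sketch of the update Spencer describes, where the evidence strengths (60% likely if true, 20% likely if false) are invented numbers just to make his 10% prior move:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H | E) = P(E | H) P(H) / P(E), where P(E)
    is expanded over the hypothesis being true or false."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Spencer's example: you start at a 10% chance the thing is true.
# Suppose the evidence you observed is three times as likely if
# it's true (60%) than if it's false (20%). How confident now?
posterior = bayes_update(0.10, 0.60, 0.20)
print(round(posterior, 3))  # → 0.25
```

So evidence that is three times likelier under the hypothesis moves you from 10% to 25%, not to certainty: the update is proportional, exactly the "take your prior and update it" idea.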
Robert Wiblin: Which part of the site has been most successful?
Spencer G: Our most popular program we ever made is our “how rational are you” test.
Robert Wiblin: I took that.
Spencer G: Oh you took it?
Robert Wiblin: I got pretty good results. I don’t want to brag but …
Spencer G: It’s actually really funny because there was someone I showed it to, right before we launched. Just someone I’m friends with and they were like, “Who would ever want to use this? Who’s the target market? Who’s the target demographic? I don’t understand.”
Robert Wiblin: People who want to show off, I think. You made it easy to share on Facebook, I recall. I saw a lot of people posting their positive results. I didn’t see so many people posting their bad results.
Spencer G: What we try to do in the program, each question is linked to a known bias. As you go through it we analyse your biases and we give you at the end a report. We tell you about your strengths and weaknesses. Most people have both strengths and weaknesses when it comes to bias. And then we actually link you to content that we made to help with each of those so it’s not just okay, you have this bias, good luck. It’s like oh if you want to work on it, why don’t you go to this other content of ours. Yes we’ve had a really good response from that.
Robert Wiblin: The site is, what, three years old?
Spencer G: Yeah, maybe it’s a little older.
Robert Wiblin: How many readers do you have now? What kind of traffic do you get?
Spencer G: Our mailing list is about 20,000 people. We tend to get big influxes when we release new programs so we’ll try to get new media to write about our work which we’ve been quite successful with. In the pipeline we have a few new programs that we’re gearing up to launch, I’m really excited about.
Robert Wiblin: That’s a crazy number of articles and tools on the site. How did you produce so much of it so quickly?
Spencer G: It’s funny, we don’t have a big team. Clearer Thinking is a small team. We leverage GuidedTrack, our behaviour change program language. Which means that, for example, our writer is in a sense a programmer because he’s writing interactive content. He’s not a programmer by profession, he’s a writer by profession. It allows us to leverage the power of writing but actually release these interactive tools. I think it accelerates our development process so much.
Robert Wiblin: Can listeners potentially use GuidedTrack as well? Is it just available for anyone?
Spencer G: You go to GuidedTrack.com. You can check it out. Right now it lets you sign up for the mailing list. As soon as it’s released to the public we’ll let you know. As you know, 80,000 Hours uses it.
Robert Wiblin: We found it incredibly helpful. I’ll put up a link to some of the GuidedTrack tools that we have on our site as well. I imagine given your philosophy you’ll be super keen to test whether these apps are really working. Do you have much evidence that they’ve actually managed to help people reduce their biases? Because I know there’s been a bunch of research suggesting that simply being aware of your biases doesn’t help, but you’re trying to go beyond that I guess.
Spencer G: You know it’s really interesting. It depends on the bias and situation. Let’s take the sunk cost fallacy, which is the idea that a lot of times when people put a lot of energy, time or resources into a project and they’re evaluating whether they should quit, they don’t want to quit even though rationally they should, because if they quit they have to then suddenly recast all that effort and money as a waste. And that’s really psychologically painful. So this is one reason for the sunk cost fallacy. They stick with it so that they can continue deferring actually having to admit that all that past effort was wasted. Of course when you make a decision you should only be thinking about the future: is this worthwhile going forward? The past stuff is already spent, I’m not going to get that back. So how would you help someone with the sunk cost fallacy?
Well there’s actually a really simple solution. You can teach someone the pattern of what the sunk cost fallacy looks like, and you can get enough triggers in place so that in their real life, when they’re falling for it, they think to themselves “hey, this is like the sunk cost fallacy”. And then you actually give them enough motivation to realise that like “hey, you know what, I’m delaying the inevitable. By continuing this on, I actually make things worse.”
Robert Wiblin: They’re about to lose even more again.
Spencer G: Exactly. That may be enough. Whereas other biases, like the halo effect, where thinking someone is good in one way makes you think they’re good in another way, may be a lot harder to correct. Even having the thought “oh, liking this person in this one way might have a halo effect on how I see their other traits” may not help: you may be completely unaware of whether it’s happening and you may be very incapable of actually doing a proper correction. I think that different biases are more or less tractable in how fixable they are. In terms of efficacy, we’ve been getting more and more into running studies on our work. When you’re first starting, you’re mostly just asking “are we making content that people find valuable?”, but now we’re getting more into it. For example, with this habit formation study we’re actually looking at which of these habit techniques work.
We’re building a study for sleep problems because if you think about what do people suffer from in our society, it’s like mental health problems, sleep problems, relationship problems, things like this. So we’re working on a program for sleep problems and we’re actually going to be running a trial where we enroll people to try our tool and we track them and we see if they actually have improvements in their sleep quality.
Robert Wiblin: Do you think in some cases these biases might actually not be biases? Once you consider all of the benefits and costs that people face? For example with the sunk cost fallacy. If you’re running a project in a company and you’re considering closing it down, you might just want to delay the company realising that your project is a failure because once they realise that then you might be fired. And so you just want to hold on as long as possible. And presumably there’s some social phenomena like this that explain why people are programmed to behave in these seemingly biased ways. At least some of the time it could actually be reasonable.
Spencer G: It’s interesting, you can try to break the biases down into different categories, though it can be hard to tell which biases fall in which category. But if you think about evolutionary history, the reason our minds are constructed the way they are is because it was really helpful for survival, right. They evolved to help us survive. Now there were some of these biases that actually weren’t biases at all in our evolutionary environment, they were just useful. Maybe they didn’t give you the right answer all the time, but they saved computational power, they allowed you to make decisions faster. Not that they were perfect, but they were very effective. There are other types of biases that were just not triggered in that environment. In other words, you just wouldn’t have been in that weird situation. Think of sugar as an example: it was very important that we find sweet things tasty because in our evolutionary history getting calories was super important.
Today, the fact that we find sugar so tasty is really a problem. It can cause all kinds of health issues. So a lot of people have gone and tried to reinterpret the bias literature and say oh, these things are actually not biases, I can come up with some complicated way where this is actually rational. I think that in some cases they’re right. But the reality is our brains are heuristics based. Our minds are constantly running heuristics about what to do, and heuristics, while they work in some settings, fail in others. That’s the key: knowing the situations in which they fail. The fact that they work in some, great. You’re going to do that anyway. That’s the way your brain works. You just need to know the cases where they won’t work.
Robert Wiblin: If someone wants to go try out clearerthinking.org obviously they’ve got this rationality test which is kind of fun. What other tool should they start with?
Spencer G: One that I really like is our common misconceptions test. The idea is we give you 30 things, half of which are common misconceptions and half of which are actually true but sound like they might be common misconceptions. You have to predict which is which, but where it gets fun is that we actually have you bet points based on how confident you are. We say if you’re really confident you bet 10 points, if you’re only semi confident you bet six points and so on. And at the end we can actually use that to analyse the way that you make predictions to see whether you were overconfident or underconfident. We can also tell you how accurate you were. We can correct these common misconceptions that you may have about the world. And so it kind of does a lot of fun things at once.
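The overconfidence analysis can be sketched as follows. The bet-to-probability mapping and the answer log are both invented for illustration, not Clearer Thinking's actual scoring; the idea is just that bets act as a proxy for stated confidence, which you compare against actual accuracy:

```python
# Assumed mapping from points bet to a rough stated probability
# of being right (hypothetical, chosen for this example).
bet_to_prob = {2: 0.55, 6: 0.75, 10: 0.95}

# Hypothetical answer log: (points bet, answered correctly?)
answers = [(10, True), (10, False), (6, True), (6, True),
           (2, False), (10, True), (6, False), (2, True)]

stated = [bet_to_prob[bet] for bet, _ in answers]
avg_confidence = sum(stated) / len(stated)
accuracy = sum(correct for _, correct in answers) / len(answers)

# If average stated confidence exceeds actual accuracy, the
# test-taker is overconfident; if it falls short, underconfident.
print(f"confidence {avg_confidence:.3f} vs accuracy {accuracy:.3f}")
print("overconfident" if avg_confidence > accuracy else "not overconfident")
```

Here the imaginary test-taker bet as if they'd be right 77.5% of the time but was only right 62.5% of the time, the "a little bit overconfident" pattern Spencer mentions next.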
Robert Wiblin: Yeah I’ve used that one, it’s really cool. I was sweating it because of course you want to get good results at the end. You want to see whether you’re well calibrated. I was spending a lot of time thinking about it. Were you able to see whether people as a whole were well calibrated on average?
Spencer G: You know I can’t remember, I think people are a little bit overconfident on average.
Robert Wiblin: Overconfident.
Spencer G: If I recall correctly.
Robert Wiblin: That doesn’t surprise me too much. Let’s move on again. You’ve written about how people can best succeed as entrepreneurs or tell if they’re a good fit for entrepreneurship. Tell us about what criteria they should be looking at.
Spencer G: Yes this is a subject that’s very important to me because in Spark Wave we actually recruit CEOs to take our products, continue developing them, and spin them out. So we actually, for example right now, we have three CEOs that we’ve recruited. They’re going to be spinning out products of ours. There are a lot of traits that one could think about for entrepreneurship. I’d say that one of the very most important ones is persistence in the face of great challenges. It’s really interesting if you talk to successful entrepreneurs because a lot of times they’ll tell you that their company almost failed. Sometimes once, sometimes multiple times. I was talking to an entrepreneur the other day, he said three different times, they were not going to be able to make payroll, like it was Friday and they didn’t have enough money in the bank to pay their employees on Monday. And they managed to get through all three and now it’s a successful company.
Now most people are just not psychologically able to deal with this kind of situation. What’s going to happen is that when these huge, momentous problems occur and it looks like failure’s imminent, they’re not going to continue driving. They’re not going to actually accelerate that drive forward to solve the problem. They’re going to give up, and giving up means that their employees are going to read the signals that they’re giving up. It means investors are going to lose confidence in them and then they’re very likely to fail. The irony is that the sort of person who tends to succeed is one that’s just unwilling to give up even when the evidence says that things are not going to go well.
Robert Wiblin: Did you find any more counterintuitive ideas for what predicts being a successful startup?
Spencer G: I guess that wasn’t counterintuitive enough. Well one thing that I find quite interesting, I don’t know if it’s counterintuitive enough for you, is this sort of tension between persistence and looking at the evidence. It’s really important that the person is driving relentlessly towards a general goal, but at the same time they’re highly flexible about how to get to that goal. A number of people have written about this idea, but I think it’s really important to highlight, because you see entrepreneurs who fail in both of these ways. You see the ones who give up, but you also see ones that are persistent not just about the end goal but persistent about how they get there: they’ll keep trying the same thing over and over again, hitting a brick wall. So there’s this other trait, which is coming up with clever, unexpected solutions. Most of those solutions will fail, but then they’ll come up with another and another and another. Being extremely flexible about how you get there.
Robert Wiblin: Do you want people who are generalists or people who are kind of good in a lot of ways or very good in just one or two ways?
Spencer G: For us, with the people we recruit, we kind of act like the CTO. We build the first version of the product and we only recruit them when our product is getting to the point where we think it has a lot of potential. They don’t need to come in with technical skill, and that’s something that’s unusual: if you’re building a software startup it’s generally really helpful to be a technical person, but we can handle most of that for them. That means that we can focus on people who are stronger on the business side and marketing side. That’s the kind of skill area that we’re going for.
Robert Wiblin: Is your talk about this online or the article?
Spencer G: I’m not sure what you’re talking about.
Robert Wiblin: About predicting entrepreneurial success?
Spencer G: Ah there is, on clearerthinking.org you can check out a test that we developed. We went through the writings of people who had, let’s say, started three successful companies, or founded two successful companies and then funded 150 companies. We took all the things they said that we could find and we turned them into a test. You actually go through this test and then it will analyse your responses and give you a report about the ways that you are and are not suited to be a startup founder.
Robert Wiblin: And suggest some ways to improve, I suppose.
Spencer G: Exactly and suggest ways that you could be better.
Robert Wiblin: Speaking of entrepreneurship, you’ve written quite a lot about how startups might unintentionally make the world worse. How can you tell if you’re doing harm as a startup founder or tech entrepreneur?
Spencer G: That’s a great question. I think a lot of times people will make this argument that if people are buying your products you must be adding value. You know why else would they buy it if it wasn’t valuable to them?
Robert Wiblin: Libertarian argument or free market argument.
Spencer G: Yeah, it’s a free market argument, and at first glance it sounds very compelling; I’ve heard many people make this argument. But if you really think about it for a minute it doesn’t make a lot of sense. I think of there being four categories of ways that companies can cause harm. The first is when there’s imperfect information. Imagine there’s a supplement company that makes supplements that just don’t do the thing that they claim to do. Now why would they continue to exist? Why would people buy the product? Well, because a lot of times the things that supplements claim to do are not easily measurable. You can’t tell if it’s doing them. Heart health would be a great example: how do you know if your heart is actually healthier?
Robert Wiblin: I think in economics these are called post-experience goods. You have experience goods, where you need to use them to tell whether they’re working. Then you have post-experience goods where even after you’ve used them, you can’t tell whether they’ve worked.
Spencer G: Oh, that’s funny.
Robert Wiblin: Post-experience goods are where you find the most dodgy markets basically. Where there’s lots of fraudulent products because there’s just no feedback cycle. You can use a supplement and you don’t even know whether it had the thing in it that it claimed to have.
Spencer G: Exactly, and another example of that would be a product that you would only use once so they don’t need repeat business at all so they don’t really care about you having a good experience.
Robert Wiblin: This is the restaurant in a tourist area, in a city, where they’re happy to screw you over because you’re leaving and you’re not coming back.
Spencer G: Exactly. That would be the first category. Second is where companies exploit irrationality. An example of this would be, let’s say a company puts out some kind of lottery. Now some people might just find it fun to buy a lottery ticket, or they might say “oh well it helps me fantasise about winning money”, which is a fun experience, all these kinds of things. Those are fine, but some people actually just overestimate the chance that they’re going to win. This is an irrationality that we humans suffer from, that we’re bad at dealing with small probabilities, and that can be exploited.
Robert Wiblin: Yeah.
Spencer G: To go to the third category, I think about products that involve zero sum games or even negative sum games. An example of this: imagine a company that makes software that helps you spam people. That software might actually be benefiting the person who’s buying it, but it might actually be directly taking that value from other people whose time is being wasted. Or another example would be a company that makes software for some marketplace that helps one side of the marketplace do better at the expense of the other side.
Robert Wiblin: Does Facebook notifications and that kind of addiction you get with social media fall into this category?
Spencer G: Well that’s an interesting question. I would probably put that in some combination. I’d probably put that into exploiting irrationality. Like addictive products where we consume more than we say we want to. Like if you ask the person right before they use it they say I would only want to use this 10 minutes and then they actually start and then they use it for 40 minutes and then you ask them after and they say “God I’m not happy that I used it for 40 minutes”. That kind of product is kind of exploiting irrationality.
Robert Wiblin: This is more of a class of products that maybe benefit the user but harm some third party?
Spencer G: Well it’s a redistribution between the user and some other group basically. Then the fourth category, which is actually related to that one, is negative externalities. Oftentimes the company is not intending to move value to one party at the expense of another party, but they might be doing it. The classic example is you’ve got a factory and they make useful products, but then they dump their toxic sludge into the river, which poisons the local people. It’s a really extreme example.
Robert Wiblin: And you think that more companies are doing this than maybe other people believe?
Spencer G: I think that a lot of people … It’s interesting, there are certain people out there who think companies are evil and they’re bad actors. There are other people out there who say no, no, if people are buying the products then it must be doing good. I think both of those views are kind of problematic and not realistic. Personally the way I like to think about companies is that they’re these machines that are designed for a specific goal, which is approximately something like: maximise profit for the owners. Of course that’s not a perfect model, but it’s a reasonable, fairly accurate model for what they’re doing. Now we shouldn’t be too surprised if these machines go out and cause harm when it’s profitable to do harm. We also shouldn’t be surprised if they do good when it’s profitable to do good. I think the right way to think about them is that they’re these machines that do a certain thing, and sometimes that causes harm and sometimes it does good.
One way to think about regulation is that regulation can try to make it so that there are fewer of these ways that companies can cause harm while making money. By making it illegal, for example, to harm people while making money then the company is going to be pushed more towards doing the good ways to make money.
Robert Wiblin: Which tech companies do you think might be bad for the world? Do you have any examples of ones that most people think are good that you think are bad?
Spencer G: I’m going to take a pass on that question.
Robert Wiblin: Don’t want to make any enemies.
Spencer G: I don’t want to name names.
Robert Wiblin: Okay, fair enough.
Spencer G: I would suggest that people really think about some of the products they use and say well is this actually causing improvement in the world. There is some really interesting research that was done where they showed people how much time they spent using different apps at the end of the week. I think this was from Time Well Spent.
Robert Wiblin: Time Well Spent.
Spencer G: You can google it.
Robert Wiblin: Yeah, I’m hoping to interview the creator of that. It’s a very interesting project.
Spencer G: Tristan.
Robert Wiblin: Yeah, Tristan.
Spencer G: They actually analysed whether people said they were happy with the time they spent or unhappy. It’s really interesting to look at which apps people were really happy with and which ones they were actually unhappy with their usage of.
Robert Wiblin: Speaking of which, startups absorb capital investment, they absorb the resources that they need, and they absorb the time and attention of users. You have to think about what would have happened otherwise. What if you create a startup that is in a sense useful, it’s creating a product that has non-zero value, but someone else would have taken that time and attention, those staff members and that capital, and made something that was even more useful? The opportunity cost is quite substantial, maybe even larger than the benefit that you were creating. Is that another way that companies can do harm just by existing?
Spencer G: That’s a really interesting one. The thing is, money is generally not created or destroyed, obviously except when governments print it and things like that. Money is just moved around between actors, like a bill passing from hand to hand to hand. Time, though, can be destroyed. Let’s say a company hires thousands of people and has them making some not very important thing. It’s not that beneficial. Let’s say it actually benefits people, but that product has a below-average amount of benefit for the amount of labor going into it. Well, the time of those people is gone forever, right? They could have spent that time doing something more beneficial for society. So yes, that is absolutely a way that companies can cause harm.
Robert Wiblin: So money isn’t destroyed but capital can be wasted. Imagine if you start a company and you go out and try to solicit funding. There’s a bit of randomness in that process, whether your company is above or below average usefulness for the world, and then someone invests in you just because you happened to get lucky, you did a good presentation, you met the right person, your … Then you use that money to build a factory, and you might be pulling away investment from another company that otherwise would have gotten that investment, or had those resources, and would have built a more useful product.
Spencer G: Yeah absolutely. It could block another company from getting that money at that time. However, there is a way that that can be less bad than it seems at first, because those dollars, let’s say you spend them really stupidly and you pay people to do stupid stuff. Well, at least the money goes back into the hands of those employees. It’s not like the money is gone, it’s just pumped into a different thing. Now it could’ve been better if it had gone to a really useful, effective company. But the time, that’s never coming back, that’s actually destroyed entirely.
Robert Wiblin: Yeah. I think what’s going on is that you take the investment and then you use that to hire people to build a factory or other equipment or to start a company. Those people’s time is not going to come back. I suppose sometimes you have direct labor and sometimes they’re building capital. I’m an economist by training so I’m thinking about this in those terms.
Spencer G: Disagreeing with an economist is a bad idea.
Robert Wiblin: The money in a sense just keeps cycling through but then real resources are absorbed, which is like people’s time and attention.
Spencer G: Exactly.
Robert Wiblin: Kind of buildings that we’ve already built and all that.
Spencer G: Exactly.
Robert Wiblin: All right, let’s push on. You’re involved in the effective altruism community and I’m curious to know what you think about how effective altruism as a whole is developing. What kinds of things do you like about it first?
Spencer G: I was really excited to learn about the effective altruism community a number of years ago because my life’s mission for a very long time has been to try to see how I can help society in really large ways. It was really exciting to me to discover, hey, there are other people for whom this is their mission as well. That has been really cool to me. In terms of how I think the effective altruism movement is doing, one thing that I am really interested in is people trying more new things. Basically I think that in effective altruism there tend to be three cause areas that a lot of people gravitate towards. One being existential threats, threats that could wipe out humanity: potentially low probability, but really, really bad outcome events. Second being global health: how do we give money so that we can help the poorest people in the world, or help people who are really sick, really cost effectively? The third being, can we help animals? Can we make their lives better?
I think those are all really interesting cause areas and I think there are interesting arguments around each of them. I also think it’s important to explore lots of different kinds of ways to help people, to make sure there’s a broad portfolio, because there might be even better opportunities, or other ways where, on the margin, we can get a lot of bang for the buck.
Robert Wiblin: You’re saying you think we’re a bit too narrowly focused perhaps on just a few problems when we could have a broader view?
Spencer G: I think potentially. I’m suggesting that might be the case.
Robert Wiblin: What kind of problems would you like to see us work on? Personal mental health is one.
Spencer G: Of course there’s my own bias, which is that I think mental health is an incredibly important topic. I think it causes absolutely massive amounts of suffering. How do you balance that against a small probability of the world being destroyed? That’s a really tough question. But I think it’s a really important topic, and whether it’s the most important, I’m not necessarily going to take a position on that one.
Robert Wiblin: Are there any common mistakes you think people in the effective altruism community are making when they’re planning out their careers?
Spencer G: I think I just don’t know well enough to be confident in that. I’d love to see people in effective altruism working on direct science. I think that could be a potentially really powerful kind of work to do, especially because they might come into it with a different mindset than the publish-or-perish mindset. They might be able to find ways to do research that’s being neglected and is really, really valuable, because of the focus on genuinely trying to help the world.
Robert Wiblin: We’re trying to shift the emphasis a little bit on our side. Encouraging more people to specialise in particularly useful areas of scientific research.
Spencer G: Oh great.
Robert Wiblin: I think we agree on that. What we’ve found is that perhaps a lot of our readers early in their careers are going into very generalist roles, where they’re just trying to skill up and not specialise. For the community as a whole, that’s creating some problems: we now have a lot of generalists and not many people who are real experts in nanotechnology or biomedical research. We don’t have people with sufficient expertise to say whether we should be putting more effort into those areas or not. It’s one of those cases where maybe it’s beneficial for each individual to be a generalist, because then they’re keeping all of their options open. But as a community, it’s a tricky situation: you’ve got a whole bunch of generalists, and no one has the specialised knowledge to instruct everyone else on what they need to know about their area of expertise.
Spencer G: Yeah, that makes a lot of sense. As one potential example of that, I think the threat of biologically engineered viruses and bacteria is really frightening, and as technology develops it seems likely that the probability of potentially catastrophic events occurring will go up. I think the effective altruism community is not as well positioned as it could be to work on that kind of problem because of a potential lack of expertise in biology and other related areas. I think it could be really interesting for more people to work on that kind of specialty.
Robert Wiblin: It’s a great example. We have a few people who know a bit about that, but we could have a lot more depth of expertise, and then we’d be able to get a lot more done. Speaking of catastrophic risks, do you share the worries that some other people have about very powerful artificial intelligence? We spoke with Dario Amodei a couple of weeks ago and he explained his perspective on this. What do you think about it?
Spencer G: Well, I think there are generally three risks people worry about, and I’ll say them in order of increasing weirdness. The first risk is that automation will cause mass unemployment. People on one side say, well, we don’t really need to worry about this, because overall, if you look at society, it seems like automation actually has been really good for us in a lot of ways. Even when huge amounts of automation occurred over the last hundred years, it hasn’t led to massive unemployment, at least not most of the time. So maybe automation is good and maybe the worry is overblown. Then I think on the flip side you can make the argument: okay, but let’s look at the micro level, at the individual person. Let’s say a self-driving car automates away truck driving. What happens to that truck driver? Well, they have some options. They could retire early if they’re close to retirement already. They could retrain for a skilled job. They could go work in unskilled jobs.
Those moves may also push down wages in those other areas. It seems like, at a micro level, if automation were to go too quickly, lots of people could end up without jobs really rapidly, and then it’s unclear how the market would absorb them. Maybe it would absorb them eventually, but you might get a situation where wages get very depressed in certain areas, people get really nailed, and it causes really big problems. On top of that, some people worry that if this were to happen too fast, or too much of it happened, it could cause societal unrest and destabilise society.
Robert Wiblin: I think at 80,000 Hours we have a slightly unusual view on this. This has been a topic that’s been in the media a lot recently; people are saying that perhaps unemployment problems today are caused by increasing automation. Having looked at what economists think and at the numbers, the evidence for unemployment today being caused by mass automation is just really quite poor. I’ll put up some links to some papers explaining why that’s the case. On the flip side, we think there are strong reasons to think that in the long term, eventually, we will get to a point where machines can do maybe all of the tasks that humans can do, better and more cheaply than we can. There are just only so many capabilities that humans have. In the past, we started using machines to do one of the things that humans can do, which is heavy lifting.
Fortunately there were lots of other tasks that humans were still much better at than machines. People just started to move up the list of tasks in order of sophistication. Eventually machines will come for all of these, and there just won’t be anything that humans have an advantage at. That gives us a strong reason, in principle, to think that eventually humans will be displaced by machines in the workplace. Maybe not soon, but one day.
Spencer G: Yeah, and I share the concern that this is increasingly an issue. I don’t have an opinion on to what extent it’s caused current unemployment. I think especially with artificial intelligence you could see a path where it just rapidly replaces a whole huge swath of human labor.
Robert Wiblin: All right, so what’s the next risk?
Spencer G: The next risk is that artificial intelligence allows one group to gain too much power. There are different ways this could happen. One way is that maybe one company ends up having an AI monopoly. Their AI is just so much better than other companies’, or their AI is really good plus their branding is so strong, that they end up basically absorbing a huge amount of labor in society. Okay, there’s the unemployment thing we talked about, but now imagine there’s this one company that’s making the money that all those people used to be making.
Robert Wiblin: This company is now representing 10% of GDP or one day 90% of GDP if it takes over most jobs.
Spencer G: Exactly, and that kind of possibility is potentially frightening because just the concentration of power like that means that that company could just have enormous sway in society and what if it doesn’t have the values that we care about and that kind of thing.
Robert Wiblin: Are there any sci-fi movies where this is the plot line? I guess you have a little bit of that with Blade Runner, to some extent.
Spencer G: It’s not as popular in science fiction because it’s not as exciting.
Robert Wiblin: They’re not as interested in economic change or economic restructuring as they are in, say, aliens attacking Earth.
Spencer G: Exactly. Then there’s another version of that concentration of power, where it’s not so much that the AI enables one group to replace everyone’s jobs and basically make money that way. Instead, imagine that you have some magical oracle adviser that could allow you to predict things much better than everyone else. Well, you could see all kinds of ways that could give you control over society or let you pull the strings. You could predict: well, if we made this change, what would happen? It would cause this effect, or allow you to develop strategies that nobody else could see. Now, if you just think of a sort of human-level ability to strategise, you’re like, “oh, okay, you have a great strategist, maybe that gives you some advantage”. But if you imagine that this achieves superhuman levels, maybe the people that control that AI could actually influence society in a way that’s unprecedented.
Robert Wiblin: Potentially they could have extraordinary military power, just the ability to do violence. Is that another possibility?
Spencer G: Potentially. For example, imagine one day there’s an AI that’s better than humans at certain kinds of science. If one group developed that AI before others, they might be able to make scientific breakthroughs that are leagues ahead of everyone else, and that might give them military capabilities. For example, the ability to build the atom bomb was a scientific breakthrough. I’d say there’s a whole cluster of ways that artificial intelligence could, potentially, it’s hard to say with what probability, lead to very extreme concentrations of power.
Robert Wiblin: Okay, that’s a little bit weird. It’s not the strangest thing people have heard. What’s the third one?
Spencer G: The third one, and to many people the weirdest, but very popular in science fiction, is that humanity builds an artificial intelligence that’s much, much smarter than us, at least in some capacities that are very important, and that we as humans lose control of it. That could mean that it does what we programmed it to do, but what we programmed it to do is not actually what we wanted it to do. Or it could mean that the AI changes itself in some way and ends up deviating from what we initially built, due to various forces. Let’s say it’s a self-improving AI: it modifies itself. Of course, a lot of people think these ideas are still science fiction; they believe they’re wacky. They are science fiction and they do sound wacky. On the other hand, even if there is some small probability of this happening, I think we should take it very seriously. People talk about the idea that extraordinary claims require extraordinary evidence; this is a common phrase.
When you’re talking about a potential societal risk that could have massive negative consequences for everyone, I think a more appropriate phrase is that when you have the possibility of extraordinary harm, you actually need extraordinary evidence to dismiss it. Even a three percent chance that some really atrocious worldwide thing could happen is worth taking very, very seriously. You’d have to get that three percent down to a much lower probability before you could say we shouldn’t worry about it.
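The force of Spencer’s point can be made concrete with a back-of-the-envelope expected-value calculation. The numbers below are illustrative assumptions, not figures from the conversation (Python is used here purely for the arithmetic):

```python
# Illustrative expected-value arithmetic: even a small probability of a
# catastrophe affecting everyone carries a huge expected cost.
# The probability and population figures are assumptions for illustration.
p_catastrophe = 0.03             # a 3% chance, as in Spencer's example
people_affected = 7_000_000_000  # rough world population
expected_harm = p_catastrophe * people_affected
print(f"{expected_harm:,.0f}")   # 210,000,000 people in expectation
```

On this framing, a 3% chance of a worldwide catastrophe is equivalent, in expectation, to a certainty of hundreds of millions of deaths, which is why a small probability here is not grounds for dismissal.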
Robert Wiblin: I think three percent is a reasonable number, and it’s scarily high. That’s one of the reasons why we’re having a number of conversations about this and trying to get people working on the problem. At least then there’s sufficient attention that we can try to bring that number down, feel a little bit more comfortable, and then move on to other risks and other potential improvements we could make to the world.
Spencer G: Absolutely, and there’s a metaphor I like for this. If you think about it, someone one day building an AI that’s much smarter than humans doesn’t seem to violate the laws of physics, as far as we understand them. It may be possible. So why does it matter so much exactly how this AI is programmed? Imagine you’re driving a car at 40 miles an hour. It matters which way you’re aiming, but you can correct: if you’re drifting a little too far to the right, into the right lane, you can steer back to the left. The problem is, let’s say you’re driving a car at 40 million miles per hour. Even the slightest error could … you blow up part of the world or something like this, with the momentum you have. The idea is that if you build something that’s a little bit smart, okay, it’s not a big deal if it’s slightly mis-programmed; you can correct it.
If you build something that’s massively smart, that’s at least in some very important capacities much, much smarter than humans, then a slight error in programming or misunderstanding of how that program is going to manifest could lead to massive, worldwide changes that you didn’t expect.
Robert Wiblin: We don’t have time to go into all the technical explanations for why really advanced artificial intelligence might be dangerous, but I’ll stick up a link to our explanation and some other papers about this. You ran a study about perceptions of catastrophic risks, is that right?
Spencer G: That is correct.
Robert Wiblin: Bringing up the graph there, what did you find?
Spencer G: One thing that we found really interesting is that we asked people to estimate the probability of different catastrophic events happening. When we asked people, what do you think the probability is that in the next 50 years humans will go completely extinct, people put this at an incredibly small probability. They thought it was essentially impossible. To me that was very interesting, because while I don’t think it’s very likely, I certainly wouldn’t say it’s close to impossible. We found when we surveyed people that the vast majority thought it was absurdly unlikely. This worries me because I think that even if these events are unlikely, they’re potentially so catastrophic that we should take them seriously. The classic example of this is asteroids: we know that occasionally our planet gets hit with a massive asteroid. It’s currently believed that that’s what wiped out the dinosaurs.
Even that alone presents some possibility of human extinction. How can you put the probability that humans go extinct in the next 50 years at almost zero, if we know that asteroids alone contribute more than zero probability? And then there are these other risks beyond asteroids. So I thought that was pretty interesting.
Robert Wiblin: I’m just looking at the figure here. You got people to estimate the likelihood of various kinds of disasters killing at least 10% of the human population in the next 50 years. You surveyed a broad audience online, and then you also surveyed people involved in the effective altruism community. The interesting thing, just looking at some of the results here, is that people thought there was an over 10% chance of climate change doing this, but effective altruists thought it was more like one percent. And basically across the board, looking at nuclear war, solar flares, maliciously created viruses, asteroids: effective altruists are known for being really worried about catastrophic risks, but they thought basically all of these things were less likely than the general public did.
Spencer G: Yeah, I thought that was very interesting. The general public actually thinks it’s not that unlikely that, for example, climate change will cause this massive disaster, 10% of humanity wiped out in 50 years, or that nuclear war will cause that. Yet they’re not really worried about extinction. And effective altruists are actually less worried about these particular risks causing it. I mean, they’re still worried, they still put these at reasonable probabilities, but they seem to be a little bit less worried about them than the general population is. But they’re really worried about extinction, at least relatively. I thought that was quite interesting. In fact, we have this number: the effective altruists’ estimate for extinction was 100,000 times the probability that the other group put on it.
Robert Wiblin: 100,000 times.
Spencer G: It still wasn’t a high number.
Robert Wiblin: That means that people must think that the risk of extinction is 0.0001%.
Spencer G: Very, very low.
Robert Wiblin: Wow. I don’t know what the risk is, but I’m pretty sure it’s higher than that.
Spencer G: I suspect it’s higher than that.
Robert Wiblin: There’s a lot of new stuff going on all the time, and you’re kind of opening Pandora’s box with all of the scientific discoveries that we’re making. The risk just can’t be that low, because we’re just not in such a stable state, where we have a long history of experience with what we’re doing now and know that we’re safe.
Spencer G: There’s a kind of risk that I like to think of as a global time bomb; that’s my phrase. These are risks that seem to just be getting worse on the margin, and we know they can have massive global negative consequences. An example would be runaway climate change: as the world gets warmer, there’s some probability that things are even worse than we thought they were, and certainly, year by year, we’re polluting more and more, and we expect this to continue. Or nuclear war: you could imagine a situation where the damage from nuclear war was going down, but it seems like on the margin the risks are going up. The probability of these potentially catastrophic disasters just seems to be increasing.
Robert Wiblin: Yeah, that’s interesting. The figure there for all of the groups was around a 10% chance of a major nuclear war in the next 50 years, which is worryingly high. That’s about 0.2% a year, I guess.
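Robert’s conversion of a 50-year probability to an annual rate can be sketched as follows. This is a simplified model that assumes the risk is the same and independent in each year, which the conversation doesn’t claim (and Spencer’s "time bomb" point suggests the annual risk may actually be rising); Python is used here just for the arithmetic:

```python
# Convert a probability over a multi-year horizon to an equivalent
# annual probability, assuming an identical, independent risk each year.
def annual_probability(p_total: float, years: int) -> float:
    # Solve 1 - (1 - p_annual)**years = p_total for p_annual.
    return 1 - (1 - p_total) ** (1 / years)

p = annual_probability(0.10, 50)
print(f"{p:.4%}")  # roughly 0.21% per year
```

Note that the naive division 10% / 50 = 0.2% happens to land very close to the compounded answer here, because the probabilities involved are small.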
Spencer G: Very worryingly high.
Robert Wiblin: Something that we should potentially be caring more about. Then you did have the disagreement you might expect about AI that kills people. The people involved in the effective altruism community had a median answer of 10% probability on that.
Spencer G: That’s for AI wiping out 10% of the population in the next 50 years.
Robert Wiblin: Whereas the general population thought it was one percent. Still concerningly high. I think I might care if there was a new technology that had that high a chance of causing mass death.
Spencer G: This is part of why I think one of the key issues here is actually understanding the way people think about probabilities. If you think there’s a one percent chance that someone might, in the next 50 years, build an AI that could wipe out 10% of the world’s population, that’s really big. A one percent chance is a really big number there, and we should be really worried about it.
Robert Wiblin: I guess you wouldn’t put all of our resources into it, but you’d at least want to have a hundred people or something working on that.
Spencer G: Exactly. More than five people.
Robert Wiblin: As is pretty obvious from all of the above, you’ve worked on a lot of different things in your career so far. How have you decided what to focus on?
Spencer G: Obviously it’s going to be a combination of: what are researchers doing now? What opportunities do I have now? And then also thinking about what long-term goals I have. For myself, I’m a mathematician, I’m a technologist. Building technology seems to me to be the competitive advantage I have for helping the world. That said, I’m not wedded to helping the world in one particular way; I think there are so many important ways you can help the world. My general approach is to try to learn about many powerful tools. By powerful tools I mean things like deeply understanding machine learning, understanding math, developing a better understanding of psychology and of study design. Each of these is a really powerful system that lets you model things, understand things, predict things, and then build things. Computer science, programming. A lot of what I’ve done is try to absorb these different powerful tools and then find ways to leverage them in the direction of achieving what I want to do in my life, which is trying to massively help the world.
Robert Wiblin: Have you found it difficult to juggle doing so many different things at once?
Spencer G: I think there are definitely drawbacks to doing multiple projects. Obviously focus has a big advantage. However, the way my particular brain works is quite unusual in this regard, and so my optimal number of projects is probably somewhere between five and 15, simultaneously. I find that working on many things at once, while of course it has disadvantages, also has a lot of advantages for me, like having a very cross-disciplinary approach: finding solutions where I can apply one thing to another in a way that other people wouldn’t necessarily see. Of course, that means I need a team, because obviously I can’t build ten things at once myself. I need teams, and so rather than building directly, I’m going to be working with teams to build things.
Robert Wiblin: Do you think in general people jump too much between projects or focus too much?
Spencer G: That’s a really tough question. I would say that it’s very, very bad to jump between a lot of things within a given hour. The problem is that a lot of things require loading context into your brain, getting things into memory, and if you keep switching back and forth, you keep losing that context, and it’s not very efficient. But if you have a two-hour block to work on this thing, and then switch to another two-hour block and work on that thing, that can be very productive. It actually gives you a rest from certain ways of using your mind while you move into other ways of using it. I find that actually very efficient.
Robert Wiblin: Is it helpful when you’re working on so many projects that as your enthusiasm waxes and wanes between different projects you can always just work on one that you are enthusiastic about at that moment?
Spencer G: Yeah, that’s really great, but there is a risk there that you quit. For me, by far the most important thing I ever learned about myself is how to get myself not to quit. The way I do it is very simple, and it won’t work for everyone, but it’s this: I involve another person in the project. To me that’s incredibly motivating. It could be an employee, it could be a co-founder, or, in the case of my PhD, my thesis advisor. Having another person there just keeps you motivated: I don’t want to let them down, I want to keep going. That’s how I keep driving these things forward. Even if I don’t have that much time to put into a project every week, it keeps moving forward.
Robert Wiblin: How do you find all these people to help you out with your projects? It’s a fairly unusual strategy, for people who are pursuing kind of personal interests, to have so many assistants, people you’re managing, and, you know, colleagues.
Spencer G: This is explicitly the structure of Spark Wave, my company. We have a bunch of products we’re working on, approximately 11 right now, depending on how you count. Each of the products is worked on by a small team. They’re kind of explorations: we’re kind of like a research lab in that we try ideas, try building them, and see if there’s a lot of promise there. I think a lot about how I structure this and how I recruit for these teams. One thing that I really like when it comes to recruiting is testing people: giving them work tests that are very closely related to the thing that you’re actually going to have them do. Because what happens is, a lot of times people rely on resumes, which I do not think are a very reliable indicator, or they use interviews, which I also do not think are very reliable indicators. While people believe in these very strongly, a lot of evidence suggests these are not reliable indicators of skill.
I like to give people actual tests of the work and then see if they’re actually good. I find that allows me to recruit people who are really talented, but where if you looked at their resume, you wouldn’t immediately say, wow, this person looks amazing. People come from all kinds of different situations and backgrounds.
Robert Wiblin: We talk about this in our career guide, about how to get a job. We suggest trying to impress the employer by doing the job before you’re even hired. It’s also a very useful way for employers to figure out whether someone is actually good at doing that job, and if you do a work test, you can figure out whether it’s a good personal fit for you. How do you find these people? Personal networks?
Spencer G: Earlier I mentioned we’ve been recruiting CEOs; we’ve been doing that through personal networks. The word has begun to get out, and now I have people contacting me pretty regularly saying, “hey, I heard about what you’re doing, it sounds really interesting”. Essentially we’re looking for people who are really excited about doing a startup. They know they want to do a startup, but they don’t have the idea yet. They’re like, “oh man, I don’t know which idea to work on”. Then they find us and they say “hey”, and we try to evaluate them. If we think they’re sufficiently promising, we’ll say, “okay, let us look through our 11 products. You tell us what you value, and we’re going to see if we have something that’s a really good fit for your values”.
Robert Wiblin: Do you find that now you have to do a lot of management? That it’s more difficult to find time to actually create things yourself?
Spencer G: Definitely things have swung more in the direction of management and away from creation, but I do try to keep some time for direct creation, to keep my hands in the work. I think that helps me deepen my understanding of the important things we’re trying to do. For example, before building a product, I want to actually be looking at that interface, thinking about the features, thinking about how a customer will flow through it, because that is so essential to building a good product that I can’t just outsource it. I want to be involved in these processes.
Robert Wiblin: You have a PhD in machine learning, which is a career option that we’re pretty enthusiastic about. Why did you decide not to continue in machine learning specifically?
Spencer G: I am continuing in machine learning. Some of our products are machine learning.
Robert Wiblin: Interesting, which ones?
Spencer G: Probably ones that nobody knows about. Going back to this idea of powerful tools: when I learned about machine learning a long time ago, before people really knew about it, I was like, wow, I have to learn about this thing. It’s just such a powerful tool. For me, I’m trying to bring all these powerful tools together and then ask: how can we use them? Let’s say we collect five, six, seven of these really powerful tools. What’s the right way to use them? What’s the right combination to solve a particular problem? Because we aren’t wedded to one set of tools, we might be able to find ways to solve problems that people haven’t tried before.
Robert Wiblin: Do you think other people should study machine learning? Do you share our enthusiasm? Should people go and get PhDs in this area?
Spencer G: Machine learning is a very hot area. People are hiring really ferociously; obviously it’s hard to say what it will be like in five to 10 years, but I think there is reason to think that it’s continuing to grow in popularity. There are so many use cases for it, and I think there’s a ton of applications that will continue to be found.
Robert Wiblin: Some people are skeptical about the value of machine learning. At least in some cases. I’ve heard people say there’s all these companies out there building these complicated machine learning models. What they really need is linear regression. They just need to draw a line through a bunch of dots that they’ve collected. Do you think that’s true?
Spencer G: It’s funny, because machine learning is simultaneously one of the most over- and under-hyped fields. As with many very valuable ideas, at first nobody knows what it is, eventually people start finding out, and then they get super excited. Then some of the people who are excited don’t actually really understand the thing. They start saying it can do this or that, when in fact they’ve gone beyond its actual capabilities. So what’s happened is a bunch of people are now trying to use machine learning for things that it’s not really that great for, or where there are much simpler solutions that would work better. You have all these companies saying, we need machine learning for X, when actually machine learning is not the right solution. That’s certainly true; it is over-hyped in that sense.
On the other hand, it is also an incredibly powerful set of technologies that can solve real problems. It’s tough to know how it will develop, but right now the pace of development is extremely rapid. It could slow down, but if it keeps up at this pace, you’re going to see it being applied to so many more things, and it’s already being applied in so many different ways.
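The "just draw a line through a bunch of dots" solution Robert mentions is ordinary least-squares regression, which can be done in a few lines without any machine learning library. Here is a minimal sketch; the data points are made up for illustration, lying near the hypothetical line y = 2x + 1:

```python
# Ordinary least-squares fit of a line y = a*x + b to a handful of
# points: the simple solution that often works where heavier machine
# learning methods are overkill.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance of x and y divided by variance of x.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical noisy data near y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]
a, b = fit_line(xs, ys)
print(a, b)  # slope close to 2, intercept close to 1
```

For many business forecasting problems, a fit like this is interpretable, cheap, and hard to beat, which is exactly the sense in which Spencer calls the field over-hyped.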
Robert Wiblin: Maybe one day we’ll be able to automate Spencer: an AI that creates new companies all the time.
Spencer G: Hopefully you can put that off for a few weeks.
Robert Wiblin: More broadly, how has the way you thought about your career changed over time, if it has at all?
Spencer G: I think that more and more I feel like, okay, I’ve got to get my hands dirty and actually be helping people: doing less meta stuff and more direct work. By helping people I don’t mean talking one-on-one with a person; I mean building things where there’s this really tangible value that you can see for a person. I think it makes sense, when you’re younger, to do things that are more generally beneficial for your career or that build skills, and I’ve done that for a long time. Now I want to bring those skills together and build valuable products.
Robert Wiblin: Is there any information that you can see causing you to really change your focus over the next few years? I guess if you became more concerned about artificial intelligence, could you potentially spend your time working on another problem?
Spencer G: The way that I think about what we’re trying to do at Spark Wave is that we’re trying to create a company-creating machine. A company-creating machine is obviously incredibly hard to build, but if you can succeed at it, it’s this massively powerful, useful thing for helping the world. A company-creating machine could create companies in many different industries or be applied to many different things. Maybe it could even be adapted to creating nonprofits or things like that, where there’s crossover. If I can successfully build this infrastructure, then I’d like to apply it across many areas, and I expect that the areas of focus will probably change over time, but I’d like to use that same infrastructure if I can.
Robert Wiblin: A few months ago we did a little project together. There was some disagreement in the community about whether most people in society think that charities in the developing world can save lives extremely cheaply. I actually suspected that many people, perhaps most people, would think that developing world charities in fact weren’t saving lives, that by and large they just weren’t effective. But there were some other people who thought, no, they’re going to have an extremely optimistic view, perhaps a naively optimistic view, and think that charities will be able to save lives for just a couple of dollars, or maybe a couple of hundred dollars. You’re a huge fan of empiricism, so you offered to settle this disagreement using the tools you developed that we discussed earlier. It turned out I was totally wrong. A typical respondent thought that the most effective charities would be able to save a life in the developing world for just a couple of dollars.
At most $10 or $20. We think that the real figure is probably in the low thousands of dollars, perhaps high thousands, like $3,000 to $7,000, based on the research that’s been done. I’ll put up a link to our blog post where we discussed these results. I think it was really good to just go out and collect the data. It wasn’t that difficult. We could settle that disagreement and then try to learn some lessons about how to talk about these things in a way that’s not misleading.
Spencer G: It’s the power of science. Once running a study is actually a rapid thing you can do you’re like “oh wait, we can just answer this question, we don’t need to speculate about it”. It’s actually really fun to do that. One thing that I thought was super interesting about that study as well is that we compared what people said they thought the typical charity could save a life for with the amount they thought the most cost-effective charity could save a child’s life for. We specifically said in both cases in a poor country, and the numbers were shockingly close together. People thought that the most cost-effective charity is only a little bit better than the typical charity, which is also I believe very different than what your research has indicated.
Robert Wiblin: Yeah. We tend to think that the most effective charities might be 10 times, possibly even more, as effective as a randomly allocated donation. Really quite a large multiple. In the sample, people didn’t even think that they were twice as cost-effective. They saw quite small differences, and they thought that those differences were driven mostly just by how well the charity was run: whether they could cut costs and get a lot done within an hour of staff time. Whereas we think most of those differences are driven instead by what kind of service or product they’re offering people. We think that there are very large differences between, say, providing bed nets versus other health care treatments that might not be so effective, or might be much more expensive per person treated. That’s the kind of place where you can see really large differences in cost-effectiveness.
I actually got the numbers slightly wrong; I’ve just pulled out the table here. The median response for the cost to prevent one child from dying for a typical charity was $40. For the most cost-effective charities, it was $25. Something like a hundredfold cheaper than what we think the true figure is.
Spencer G: You know what’s funny about that: you got the number wrong first, then you corrected it. We actually just did a study about whether people can unforget numbers, and we found some evidence that they’re still influenced by them. We’re running a study where we give people fake facts about the world. We make them memorise them; we actually quiz them on the fake facts. Then we say: actually, everything we told you is totally false. It was generated at random, which it was. We had a random algorithm that generated numbers that were either too high or too low; it was random what the number was. We said to ignore them completely: now we want you to make estimates about these things, but remember, don’t use that information, because it was totally random. Then we had them make the estimates, and we could show that in fact the estimates were highly influenced by these fake numbers. Kind of interesting.
Robert Wiblin: That’s cool. It’s similar to anchoring.
Spencer G: Exactly.
Robert Wiblin: I think what’s going on there is perhaps a bit of availability bias on my part: I was most stunned by these very low numbers, so they stuck in my head. I can see here that the person at the tenth percentile, someone who was relatively more optimistic, thought that it was five dollars and two dollars for a typical versus most cost-effective charity. That was more remarkable, which I think is why I remembered it.
Spencer G: Yeah, there you go.
Robert Wiblin: You did another similar survey to this to try to figure out people’s attitudes towards animal charity and just the welfare of animals in general, right?
Spencer G: Right, I think one really interesting thing is the way people think about animals. Many people care about animals and think it would be really bad to harm them, yet in the factory farming system, it seems like a lot of animals are harmed. We were interested in studying that apparent contradiction. What’s going on? Is it that people don’t believe they’re being harmed? Is it that people don’t believe animals feel pain? Maybe they only care about dogs and cats and not about other animals. We wanted to investigate that.
Robert Wiblin: Personally, I’m vegetarian, apart from eating mussels, because I think that they’re not conscious. I’ve looked into it and it seems like the treatment of animals in factory farms, in particular in the United States, is extremely poor. I look around at people in society eating meat and I think they must believe that the animals can’t experience pain, or that even if they do experience pain it’s not morally bad, and that they don’t have a responsibility not to be basically funding the torture of animals. Was that what you found?
Spencer G: That’s a really interesting thing. When we asked people in our sample, do you think that animals can experience pain? Actually, before I tell you the answer, whoever’s listening, I’d like you to make a prediction: what percentage of people say that they think animals experience pain? Okay, well, the answer is 99% said yes. They think animals can experience pain. Well, maybe, what about suffering? Because some philosophers make this distinction: pain is not necessarily suffering. Well, it turns out 99% of people reportedly think that animals can suffer too. Then you can keep going. You can go down this series of questions, trying to find the point at which people start saying what they’re doing is okay.
Most people eat meat, and at some point presumably they’re going to start diverging and saying that their behaviour is acceptable. We asked: do you think it’s wrong to hurt animals if the only reason you do it is to enjoy your life a little bit more?
Robert Wiblin: Which is basically the situation with meat consumption for the vast majority of people.
Spencer G: Interestingly enough, 88% of people said that it’s wrong in that case.
Robert Wiblin: And then nine percent said they were not sure. Only four percent of people were willing to say that it was alright.
Spencer G: Well, maybe it’s about farm animals. Maybe when we say “hurt animals”, people are thinking about dogs and cats. Indeed, I don’t have the numbers here, but when we asked about dogs and cats, people think it’s extremely bad to hurt them. Okay, so we asked: do you think it’s wrong to hurt farm animals? To my surprise, 65% of people in our sample said yes, and 21% said not sure. Only 13% said no. So actually, people think it is pretty wrong to hurt farm animals.
Robert Wiblin: It’s a little bit baffling: why would it be wrong to hurt a dog but not wrong to hurt a pig? They’re both pretty similarly smart and quite behaviourally similar as well. It seems like there’s a bit of rationalisation going on here, right? Let’s keep going.
Spencer G: Then we asked: maybe it’s only about food. Maybe eating is a special category. We asked, do you think it’s wrong to hurt animals if you hurt them mainly because you enjoy the way they taste when you eat them? It’s a different, more specific twist on our question about whether it’s okay to do it just because you’ll enjoy your life a little bit more. That actually did change the numbers. When we asked whether it’s okay to hurt them to enjoy your life a little more, 88% said that was wrong, whereas in the food case only 53% said it was wrong, and 20% were not sure. We did get a significant reduction, but still the majority of people felt that it was wrong.
Robert Wiblin: I feel like there’s probably a contradiction between the two answers here, unless you think that enjoying the taste of meat is not just improving your life a little bit; maybe you think it’s a huge improvement in your life.
Spencer G: It’s interesting because we did a qualitative analysis having people explain a bunch of their answers. One thing that we found is that people do put food in a special category. For a few reasons, one of those is that they think it’s essential. They think I need to eat, therefore it’s more okay. Or they think, well it’s for health purposes. Yes I enjoy the taste but also it’s making me healthier and giving me nutrients. I think that’s part of what’s coming into play here.
Robert Wiblin: I’d be a bit less surprised if the question had been about whether eating meat improved your health. I think in most cases eating meat doesn’t really improve people’s health; I’ll see if I can dig up some research on that that we can link people to. The question was just about the taste, though. I suppose maybe they’re conflating the two and just thinking about food in general as a special category.
Spencer G: Well you can imagine someone who might say, I’m going to eat healthier if I eat meat products because I enjoy the taste of the healthy meat products more than the healthy natural products or something like that. You can see how they can start getting mixed together.
Robert Wiblin: What was the last question?
Spencer G: The next one, which I found especially interesting, was: do you think that animals suffer a lot when they’re raised on farms for food? This is where, for the first time, we saw less than 50% agreeing: 43% agreed and 46% said they were not sure. So actually a lot of people are saying they’re unsure about whether animals suffer a lot when they’re raised on farms for food. I think this is where we begin to see people really saying, what I’m doing maybe is okay, because I actually don’t know whether these animals are being harmed.
Robert Wiblin: If you weren’t sure and you thought it was incredibly wrong to hurt animals, and you were buying meat so effectively funding this treatment of animals, wouldn’t you have a huge responsibility to look into this? It’s like I’m doing this thing and I think it’s wrong to cause enormous harm to children and I just never looked into whether it causes enormous harm to children. It might, I just don’t know but I’m going to pay for it anyway. It’s quite an odd position, don’t you think?
Spencer G: I think it’s very admirable in those kinds of situations to look into these things. However, I think it’s very common not to. For example, take sweatshop labour: people have different attitudes about whether it’s harmful or not, but of those who think it’s harmful, I think most have not looked into whether the clothes they buy are coming from sweatshops. It is very common. I certainly think it’s admirable to investigate, but people tend to copy the behaviour of those around them. If most people aren’t doing it, they feel off the hook, or just never even think to do it.
Robert Wiblin: Well, I’ll see if I can dig up the most authoritative and even-handed thing I can find about animal welfare in farms in the US, and perhaps other countries, and link people to that, so they can decide for themselves if they’re currently also in this class of not being sure. At that final point, we had 43% of people say that they thought animals suffer a lot when they’re raised on farms for food, which I guess is much larger than the proportion of vegetarians. I suppose some of those people who think they do suffer a lot might have thought it was not wrong, so you can’t tell exactly from these figures what fraction of people’s answers suggest that ideally they shouldn’t be funding meat production.
Spencer G: It’s interesting: reading people’s qualitative responses really fleshed out these quantitative numbers. One thing we found is that there were some people in the study who said, “Wow, this study is making me really rethink what I’m doing, and I’m feeling really bad.” That was interesting; they could be part of the 43%. There are others who say, “You know what, it’s just really, really hard for me. I want to go vegetarian but I can’t do it.”
Robert Wiblin: It’s a willpower issue.
Spencer G: There’s willpower or practicality stuff. I think there’s a lot of stuff going on.
Robert Wiblin: I suppose I shouldn’t be too hard on the people who said that they’re not sure, because at least they’re not going full rationalisation, kidding themselves and saying that farms must be fine; or at least they’re open to the possibility that they’re not. One thing I’ll add: as I said earlier, I eat mussels because I’ve looked into their nervous system and how they’re raised, and I think there’s a very low probability that eating mussels is an immoral thing to do. If you’re in this group of people who think you should eat some meat, or at least some seafood, in order to be healthy, that’s an option where you can eat kinds of meat that cause very little or potentially no suffering for the amount that you’re consuming. Is there anything final you wanted to say? We’ll stick up links to some other interesting things that you’ve done, because you live such an interesting intellectual life and I want people to be able to dabble in that.
Spencer G: Thanks, I appreciate that.
Robert Wiblin: Cool. My guest today has been Spencer Greenberg, thanks for coming on the show.
Spencer G: Thanks so much for having me, it was a lot of fun.
Robert Wiblin: Thanks for joining. If you’d like to help out the show, share this episode with your friends, or leave us a review on iTunes.
If you’d like to work on any of the problems discussed today, like animal welfare or global catastrophic risks, you should apply for free one-on-one coaching from the team. The application only takes a few minutes and the link is in the show notes and associated blog post.
Talk to you next week.
About the show
The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.
Get in touch with feedback or guest suggestions by emailing [email protected].