What's actually good for workers
Bryan Caplan: The most important thing to know is that just because a regulation sounds good does not mean that it’s actually a good idea or helpful for workers. I often teach my students about the “Bryan Caplan Protection Act” — a law that says anyone who wants to hire me has to pay me at least a million dollars an hour. Any dispute about my treatment is adjudicated in a court run by me. I receive unlimited benefits. Everyone has to call me “Your Lordship” — there’s a million-dollar fine for every failure to call me Your Lordship.
Bryan Caplan: Then the question is, is this law good for me? And everyone wants to say, “Yes, of course this law is good for you.” I say, “Well, what if I don’t have a job yet, and people know who I am? Then is the law good for me?” And then everyone says, “Uh, no. Then you’ll never get a job.” Exactly. This is the same logic behind every labor regulation that exists — people think of it as just a gift to the worker, and yet when you realize that normally you don’t have to actually hire the person in the first place, the question is, do you really want this gift?
Bryan Caplan: Another example that I like is I often ask my students to imagine that we had a $20 minimum wage, but only for Blacks. Would that be good for Blacks? Well, then you might not hire them because of that, and you might just hire someone else. So, great if you’re Black and you have the job, but if you don’t get the job because of the law, then it’s not so good. That really is the logic of almost all labor regulation out there.
Bryan Caplan: People really do like the idea of just saying, “You have to treat workers better. You’re mean.” And it is not actually the slam dunk that they think it is. Once you accept this, then you realize that a very popular story about why workers get better treatment now than 100 years ago is just that we have more laws. What would happen if you imposed a modern minimum wage in a pre-modern era? That would mean you have to pay your workers more than they produce in a year, so what would really happen is mass unemployment — or actually, more realistically, there’d just be a massive black market, because people have to either break the law or starve. Even in North Korea, they will break the law.
Bryan Caplan: Now, part of the case that I make is that economists do make one mistake, which is focusing solely upon giving workers income. They forget that we have a lot of evidence from psychology that unemployment per se causes great misery, because people’s jobs provide a lot of the social contact that they get; it provides a sense of identity, sense of meaning, sense of purpose. During COVID I think a lot of people felt like I did: I’m still getting my full salary, and yet I’m all alone in my basement. It felt like being unemployed. The money’s still coming in, but I no longer have any place in the world.
Bryan Caplan: And that is the way that a lot of people actually feel about their jobs. Once you appreciate that, then I think you realize that saying it’s just a tradeoff between destroying jobs and improving conditions [is incomplete]. I think actually the end point is just saying, “We really don’t want to do anything that’s going to reduce employment, because it’s not just about the money — it’s also about having a place in the world.”
Arguments against open borders
Bryan Caplan: Now when you actually listen to cultural arguments, one problem is that they’re so vague that it’s hard to really find out what they are. So what I do is I start with the specific ones — things like language acquisition. And there, I will say that we’ve got quite good data on language acquisition, and there’s just no sign that there’s any serious problem.
Bryan Caplan: Basically the pattern is that even in very high immigration eras, first-generation immigrants who come as adults rarely achieve true fluency. This was always the case, even in 1900 — the census asked somewhat different questions about language back then, but they’re still fairly comparable — and it’s just not true that when someone showed up from Italy at the age of 25, they became a fluent English speaker during their lives. But then the second key thing is that the second generation, both today and in the past, almost always does achieve full fluency. So language acquisition is one where it really just doesn’t hold water.
Bryan Caplan: In terms of other ones that people have actually done social science on, like trust: this is one where there is substantial literature on especially trust assimilation, which I say is very favorable. It is not true that the kids of people from a very untrusting country remain untrusting when they grow up in a high-trust country, so there’s that. Culture sort of bleeds into politics and people say, “What about the political views of the immigrants? What about the political views of their kids?” Once again, I’d say, the first generation often do have political views that would frighten you, but their kids, on the other hand, have very high assimilation.
Bryan Caplan: Now the question is, can you keep counting on this assimilation to work when you have much higher levels of immigration? Here’s the key thing to know about the US: we actually multiplied our population about 100 times over 200 years, so that basically means every century you’re multiplying your population tenfold. If you go and take a look, we did take a lot of people that were very culturally different from the original arrivals, and yet it’s very hard to see any substantive problem with this level of assimilation.
Bryan Caplan: If you go and break it down, remember there’s the math of exponential growth — you can multiply your population tenfold in a century by having your population grow by about 2.3% per year. That’s just not actually that unmanageable. It’s one thing if you think about a billion people showing up tomorrow, but that would never happen. You need to think about it as a snowballing process.
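[As an editorial aside, the compound-growth arithmetic here is easy to verify: the annual rate that multiplies a population tenfold in a century works out to roughly 2.3%, not higher.]

```python
# Annual growth rate r such that (1 + r)**100 = 10.
r = 10 ** (1 / 100) - 1
print(f"required annual growth: {r:.2%}")  # about 2.33% per year

# Compounding at that rate for 200 years reproduces the roughly
# 100-fold US population growth mentioned above.
assert abs((1 + r) ** 100 - 10) < 1e-9
assert abs((1 + r) ** 200 - 100) < 1e-6
```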
Bryan Caplan: People have said, “Well, wouldn’t you be scared if 300 million foreigners showed up this year?” And I’m like, “Yeah, I’d be scared.” I’m not blind; I’m aware that things go wrong in the world.
Bryan Caplan: However, first of all: super unlikely you would actually get 300 million in a year. But in any case, on the one hand, there are these tail risks that you should be mindful of and confront very seriously. On the other hand, there is the continuing horror of the status quo, which is very easy just to go and act like it’s no big deal. It is a big deal. It is really bad to be in Haiti. To say, “Sure, they could get jobs here, and take care of their kids, and basically solve their most serious problems if we would just go and stamp their passports — but there’s a 0.01% chance this could lead to something terrible.” You know what? It’s just not reasonable to be that risk averse, especially when the harm that you’re imposing on the would-be immigrants is so immense.
Self-Interested Voter Hypothesis
Bryan Caplan: So there’s the question of why should we not believe in it? That’s where I’ll just say there’s about 40 years’ worth of research, and I’ve also done a fair amount of the research on my own. I’ve gotten my hands dirty in the data many, many times, and published papers along these lines showing it just is false. When you go and try to predict what people’s political views will be given plausible measures of self-interest, it just doesn’t work, or at least the effects are very small. So you might find that there is like a 0.03 correlation between your income and your probability of voting Republican. That would be pretty typical over the period of 1972 to 2010. That’s averaging over the whole period, of course.
Bryan Caplan: I haven’t seen the very latest data, but I think it’s very likely that in modern America, richer people are now notably more Democratic. I wouldn’t be surprised if there was now a correlation more like negative 0.1 between logarithm of income and Republican voting. That would be just one example of it. So that’s the question of why should I not believe it? You should not believe it because you have to look at the data and see it just is at best greatly exaggerated. If it does predict at all, it predicts very weakly.
Bryan Caplan: In terms of what’s wrong with the theory, why is it so wrong? What I’ll say there is: what’s the difference between giving $10 million to charity and voting for a guy who’s going to charge you $10 million more in taxes, assuming you’re super rich? Is there a difference at all? Oh yeah, just a night and day difference. One actually definitely leaves you $10 million poorer. And the other one, there is like a one in a billion, trillion, zillion chance that you wind up tipping the scales in favor of the side that takes $10 million from you.
Bryan Caplan: Self-interest is not the problem. The problem is more along the lines of, “I think this is bad for our society.” People think more in terms of, “This can be really bad for the economy.” So when someone says that, economists tend to hear it as, “I don’t feel like paying more for gas” rather than, “I don’t like the idea of people in our society paying more for gas” — which is a very different thing indeed.
Bryan Caplan: At least in the United States, and probably in most modern countries, it’s the country that people mainly have in mind when they say, “This is going to be really bad for the economy.”
Bryan Caplan: So again, it’s always so tempting to hear this as thinly veiled self-interest, and yet when we go and try to scratch the surface, we find it’s not thinly veiled self-interest — it’s much harder core than that. There is a long tradition of doing research where we say, “What predicts voting out the incumbent? The national unemployment rate, or whether you personally are unemployed?” And it really is the national unemployment rate that is predictive. Personal circumstances do not seem to sway people’s votes very much. Then you may say people take their own situation as indicative of the overall levels — yet when you get more specific there, no, that doesn’t seem like it’s really going on.
Why Bryan and Rob disagree so much on philosophy
Bryan Caplan: Now for me, this is as close as you can get to an algorithm in philosophy: where someone comes with a puzzle, and I go, “What is the most naive, simple-minded, common-sense view of this? And is there any reason why we shouldn’t just believe that?” If I see it, it’s real. If I feel pain, it’s because I’m actually in pain, and there isn’t some other weird thing. If I seem to have a personal identity, I have a personal identity. If it’s no big deal when I run over a squirrel, it really is no big deal that I ran over a squirrel.
Bryan Caplan: So these are all places that I start with, and in particular for me, there’s no overarching general principle that I’m going to apply and derive everything from. Instead, there are a lot of separate questions that are logically quite distinct, and for each of these, you have to go and apply this approach.
Bryan Caplan: Then there is the concern of what if there’s a couple things that both seem really obvious and commonsensical that conflict? And that’s where you say, “Do they really conflict? Hmm. OK, I guess they do.” All the things that you were saying to me are things where, honestly, I will actually go and do the empirical philosophy thing — and if you just go and talk to normal people about it, I think they are pretty puzzled by all those views that you said.
Bryan Caplan: And again, that doesn’t show that you’re wrong. But to me it does show at least this isn’t just me saying whatever I happen to think, eccentrically. I go and say, “This is the obvious position.” I am really trying to say, I’m going to try to get outside of any particular doctrinal thing or anything controversial. I’m just going to try to get to something that almost any human being throughout human history would’ve said. “Rocks are actually real, man” — that kind of thing — rather than, “It’s a sense datum. For all I know it could just be a bunch of gray that happens to be there with some other shading that simulates there being an object. Who knows, 50-50.”
Rob Wiblin: You’ve hit the nail on the head here. In terms of being persuasive to people, what you’re saying seems right. You want to start with premises that they agree with very strongly, and then argue from there.
Rob Wiblin: But I guess for my purposes of trying to figure out what’s true, I have this attitude that humans evolved to survive. That’s where lots of our intuitions come from. And our intuition also comes from everyday experiences that aren’t necessarily connected to deeper truths about the nature of the universe. So I don’t regard it as surprising when I reason something through and I reach a counterintuitive conclusion, or something that wasn’t immediately intuitive to me. I often just trust the reasoning process more than the intuitions I arrived at the problem with. I think that’s probably where many philosophers are, and other people who are more inclined to throw away common sense in favor of a more considered argument on something. What’s wrong with that?
Bryan Caplan: Nothing on your list of possibilities is wrong. To say, “I could be wrong because I had the wrong starting point. I could be wrong because there was an error in the chain of reasoning.” Those are all possible. And then again, it could be that evolution has tricked you into something that is just conveniently wrong.
Social desirability bias
Bryan Caplan: Something else that I really like about effective altruism is that the very existence of the movement depends upon my very favorite concept in all psychology, which is social desirability bias: the idea that there’s a big gap between what sounds good and what really is good. Essentially, this is the technical concept to explain why, when the truth sounds bad, people lie. And if the lies become sufficiently ubiquitous, then they start to sincerely believe the lie.
Bryan Caplan: Why would you have a group called “effective altruism”? Obviously, it’s a pretty thinly veiled insult to all other altruism, basically saying, “We are the effective ones and you guys are not effective. You’re ineffective altruists and you basically act like you’re so good, but actually you’re squandering precious resources. Maybe it’s better than nothing, but come on, you guys can do a lot better.” Then you ask, why would there be ineffective altruism? Why would there be people who are putting so much energy into charity that doesn’t accomplish very much?
Bryan Caplan: The social desirability explanation is what makes sense: this idea that some stuff sounds really good, even though it is not in fact very good. It just sounds wonderful to support ballet performances for inner-city children. It’s such a lovely idea, and you can see why people would be moved by it, and why they would give millions of dollars for these programs. But the reality is, first of all, there are starving children in the world — so even if the ballet were great, how good can ballet possibly be? And second, the harsh reality is that hardly any kid in the world is going to like ballet, so you’re not giving them a great, wonderful, sublime experience — you’re torturing and boring these poor children.
Bryan Caplan: Yet people say, “Oh no, no, no, at first they might assume that, but then the love of dance will take over and the prancing and the pirouettes will win them over.” And it’s like, no, that’s just total fantasy. That’s not what’s going to happen.
Bryan Caplan: So to have a whole group predicated upon this notion of social desirability bias, which I think is one of the most powerful explanatory concepts that we have in all of social science — psychologists will sometimes say theirs is a natural science; that doesn’t sound right to me — but anyway, whatever the category is, it is one of the most powerful concepts we have for understanding individual behavior and for understanding policy.
Bryan Caplan: My view is this is really the biggest problem with policy — in democracies at least, probably dictatorships too — that there’s a lot of policies that are really good, but the optics are bad. And people don’t want to have everyone yelling at them and throwing tomatoes at them when they propose their ideas, so they say something that will get smiles rather than something that will work. I think you and I are both fans of human challenge trials. I’m going to profile you as a hardcore human challenge trial person, all right?
Rob Wiblin: Damn right.
Bryan Caplan: And yet no country on Earth did it, I think.
Rob Wiblin: UK did. UK has now — the first one.
Bryan Caplan: Right. But too little, too late, right? Day late and a dollar short.
Rob Wiblin: Well, I think it’s mostly just setting a precedent for next time.
Bryan Caplan: Yes. Although next time I bet it’ll be relitigated while people die.
Bets Bryan could make with listeners
Bryan Caplan: So I literally have an end-of-the-world bet with Eliezer Yudkowsky. Many people believe such a bet cannot be made, but it’s super easy: the person who disbelieves in the end of the world just pays the money now. And then if the world does not end, the loser pays back at whatever the odds are.
Bryan Caplan: It takes a little effort to understand the bet, because his view is so specific. He said, “Look, I want a bet on there will no longer be any human beings on the surface of the Earth on January 1, 2030.” I was willing to give him, like, “How about all of human extinction?” — “No, no, no, no, no. There could still be humans in mine shafts. That’s OK. But not the surface of the Earth.” And I’m like, “All right. If that’s such a big deal to you, fine, we’ll make it the surface of the Earth, whatever.” But yes.
Bryan Caplan: So anyway, we have a bet, where I don’t remember the exact odds. It might be just like two-to-one. And I prepaid, so implicitly there’s interest. So it’s not as good as it seems.
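[The mechanics Caplan describes — the skeptic prepays, and the believer pays back at the agreed odds if the world survives — can be sketched with hypothetical numbers; the actual stake, odds, and interest rate are not stated in the interview.]

```python
stake = 100.0  # hypothetical amount the skeptic prepays today
odds = 2.0     # hypothetical two-to-one odds
years = 8      # e.g. a bet settling on January 1, 2030
rate = 0.04    # assumed annual return the prepaid money could have earned

payout = stake * odds                 # received if the world is still here
forgone = stake * (1 + rate) ** years # what the stake could have grown to

# Because the stake is paid up front, the effective odds are lower than
# the nominal odds -- the "implicit interest" Caplan mentions.
effective_odds = payout / forgone
print(f"nominal {odds:.1f}:1, effective {effective_odds:.2f}:1")
```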
Bryan Caplan: I’d still be really happy to do a bet on climate change that relates to effects on human living standards. I think it’s very unlikely that climate change is going to lead to any kind of absolute reduction in human living standards. I think it’s plausible that it will lead to a slowing of growth that otherwise would’ve happened. But again, the scenario where it actually gets so bad that GDP per capita goes down, that seems quite unlikely to me.
Rob Wiblin: I guess I think the odds of that are maybe 15%.
Bryan Caplan: Yes. 15% for like global GDP to go down overall. That’s probably even optimistic, because there’s a bunch of things that could go wrong. But then you’d have to narrowly tailor it to global warming causing it, and you’d have to specify the bet a little more precisely to —
Rob Wiblin: I guess I’m saying that the annual drag of climate change over some period of time is more than 4% or something like that.
Bryan Caplan: Right. Of course that’s always going to be an estimate, so it’s harder to bet on something like that. You could in principle say, “The following regression will have a coefficient smaller than this” — basically it’ll be this model, this dataset, and when we run the regression, it will show a sensitivity of GDP with respect to climate change of less than something.
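[A settlement rule like that could in principle be written down and agreed in advance. Purely as an illustration — the data, model, and threshold below are all invented — resolving “the coefficient will be smaller than X” might look like this:]

```python
import numpy as np

# Invented toy data: annual temperature anomaly (deg C) and GDP growth (%).
# A real bet would pin down the actual data source and model up front.
temp_anomaly = np.array([0.2, 0.4, 0.5, 0.7, 0.9, 1.1])
gdp_growth = np.array([3.1, 3.0, 3.2, 2.9, 3.0, 2.8])

# Pre-agreed model: ordinary least squares, growth = a + b * anomaly.
b, a = np.polyfit(temp_anomaly, gdp_growth, 1)

threshold = -4.0  # hypothetical bet line: a drag worse than 4 points per deg C
print(f"estimated sensitivity: {b:.2f} growth points per deg C")
print("resolves for the optimist" if b > threshold else "resolves for the pessimist")
```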
Bryan Caplan: Here’s one that does not really go to anything fundamental. This is the one that, once we have synthetic meat, that the opinion of humanity will be that meat eaters in the past were just complete savage barbarians, like Nazis. And I say no, that is not what people are going to think. They may be like, “Oh gee, that’s really gross,” but it’s not going to be that people will regard people who ate meat as being like cannibals, or something like that. There’ll still be animals in this world of synthetic meat. There are still going to be squirrels that get run over by cars and people are not going to go and regard running over a squirrel as being like running over a human. So I don’t think that the opinion will ever change on meat eating to this level.
Bryan Caplan: The point of when I call myself an arrogant hedgehog is to say I’m a flawed human being and these are my failings, and I try to go and put myself to the test so that I don’t do what arrogant hedgehogs usually do — which is just say a ton of wrong and ridiculous stuff. When I say I’m an arrogant hedgehog, I’m not saying that’s a good thing to be. It’s basically me just trying to remind myself of my flaws. Just in the same way that I will sometimes tell my kids, “Remind me to go and do this thing” — I know my kids aren’t really going to remind me; they’re kids, they’re forgetful — but I say it out loud. And that helps me to remember: by telling someone else to remind me, by acknowledging my flaw, it makes it easier for me to at least mitigate the flaw.
Bryan Caplan: That’s the same thing with saying, “I’m an arrogant hedgehog.” There are a lot of arrogant hedgehogs in academia, and of course I think most of them have terrible views — in particular, views that are just so silly. And they won’t bet on stuff, they’re just pontificating, and it just makes me sick to listen to them.
Bryan Caplan: Here I’m remembering John Podhoretz, who, some years back, said, “Obama’s nuclear agreement with Iran effectively ensures that Iran will be a nuclear power in 10 years” — something like that. And I just said, “I don’t know a lot about Iran, but you don’t know enough about Iran to say that.” I did try to get him to bet me: since he said it “effectively ensures” it, he should give me odds. But he’d only bet at even odds. I’m just saying, “I don’t know. But what I do know is you don’t know.”
Bryan Caplan: But again, that kind of attitude is just so standard in academia. Every time there’s some professor saying, “The effect of this could only be to X,” I’m like, I think there’s actually a lot of things the effect of that could be. This is just you going and repeating some stuff that you read in some book, from some other higher-status arrogant hedgehog that you are now a vessel for.
Bryan Caplan: I would really like academics to be more open to big questions. That’s very different from being an arrogant hedgehog. Let’s focus on questions that are more important, but at the same time, let’s start off by saying, “What has anyone been able to figure out about these questions?” — not, “Let’s go and find some continental philosophy sage and start quoting this guy and acting like this guy knew stuff.” They’re almost the last people I would ever rely on — if they were saying anything that was even meaningful in the first place, which I tend to doubt.