
Length is not bounded, volume is not bounded, time, spacetime curvature — various things are not bounded. And why should utility be?

Normally when you do have a bounded quantity, you can say why it’s bounded and you can say what the bound is. Think of, say, angle: if you think of it one way, angle is bounded, ranging from 0 to 360 degrees, and it’s easy to explain that. Probability is bounded, with a top value of 1 and a bottom value of 0. It’s not so easy to say any of this in the case of utility.

Alan Hájek

A casino offers you a game. A coin will be tossed repeatedly until it comes up heads. If the first head appears on the first flip, you win $2. If it appears on the second flip, you win $4. If on the third, $8; the fourth, $16; and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply the probability of each outcome by its value and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for 0.5 × $2 = $1 in expected earnings. A 25% chance of winning $4, for 0.25 × $4 = $1 in expected earnings. And so on, forever. A never-ending series of $1s added together comes to infinity. And that’s despite the fact that you know with certainty you can only ever win a finite amount!
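
To make the arithmetic concrete, here is a minimal sketch (ours, not from the episode) of the partial sums of that expectation; each possible outcome contributes exactly $1, so the running total grows without bound:

```python
def st_petersburg_partial_ev(n_terms: int) -> float:
    """Expected value contributed by the first n_terms possible outcomes."""
    total = 0.0
    for k in range(1, n_terms + 1):
        probability = 0.5 ** k   # first head appears on toss k
        payoff = 2 ** k          # $2, $4, $8, ...
        total += probability * payoff  # each term adds exactly $1
    return total

for n in (10, 100, 1_000):
    print(n, st_petersburg_partial_ev(n))  # 10.0, 100.0, 1000.0: no upper limit
```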

Today’s guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore a mere intellectual curiosity. But Alan doesn’t find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours’ efforts to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a 0.000095% chance? Expected value says this final offer is better than all the others — over 3,000 times better, in fact.
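
The pattern is mechanical, and a few lines of Python (ours, for illustration) reproduce the whole sequence: at step k the offer is 3^k lives with probability 0.5^k, for an expected 1.5^k lives saved.

```python
# Step k offers 3**k lives with probability 0.5**k: expected lives = 1.5**k.
for k in (0, 1, 2, 3, 20):
    lives, chance = 3 ** k, 0.5 ** k
    print(f"step {k}: {lives:,} lives at {chance:.6%} -> EV {lives * chance:,.1f}")
# step 20 prints: 3,486,784,401 lives at 0.000095% -> EV 3,325.3
```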

Insisting that people give up a sure thing in favour of a vanishingly low chance of a very large impact strikes some people as peculiar or even fanatical. But one of Alan’s PhD students, Hayden Wilkinson, discovered that rejecting expected value on this basis requires you to swallow even more bitter pills, like giving up on the idea that if A is better than B, and B is better than C, then A is also better than C.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we’re better off looking for ways our probability estimates might be wrong.

In today’s conversation, Alan and Rob explore these issues and many others:

  • Simple rules of thumb for having philosophical insights
  • A key flaw that hid in Pascal’s wager from the very beginning
  • Whether we have to simply ignore infinities because they mess everything up
  • What fundamentally is ‘probability’?
  • Some of the many reasons ‘frequentism’ doesn’t work as an account of probability
  • Why the standard account of counterfactuals in philosophy is deeply flawed
  • And why counterfactuals present a fatal problem for one sort of consequentialism

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

Highlights

Using expected value in everyday life

Rob Wiblin: We’ve got this St. Petersburg game, we’ve got Pascal’s wager. They’re both introducing infinities by different routes, and then they seem to create an awful lot of trouble for expected value. Now, say I want to go out tonight and choose what movie to watch, and to make decisions based on expected value, weighting things by their probability linearly.

Rob Wiblin: I want to feel like I’m doing the right thing here, but all these people are coming up and saying, “Hey, I’ve got these paradoxes for expected value that produce garbage results,” or at least results that require totally rethinking it. How comfortable should I feel when I use expected value to make decisions in life? These wacky cases, with sums that diverge to infinity or with infinities plugged in directly: are they fundamentally a problem? Or are they just curiosities?

Alan Hájek: Well, one solution is that you really do just zero out these crazy cases. You don’t even give them one-in-a-googolplex credence. And that would certainly quarantine them. I have raised versions of this worry in a few places: how even everyday decisions seem to be contaminated by, in this case, infinity. I’ve also talked about it in relation to a game that has no expectation at all, the so-called Pasadena game. The game itself may seem pathological, but if you give it any credence, then even a simple choice like, “Where should I go out for dinner tonight? Will it be Chinese or pizza?” gets infected too: give some probability to the crazy stuff, and that easy decision is contaminated. So I guess you have to do the dogmatic thing and just say, “Look, I’m just zeroing out…”
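
For readers wondering why the Pasadena game has no expectation: as standardly stated, it pays (−1)^(n−1) × 2^n/n dollars if the first head appears on toss n, so the terms of the expected-value sum form the alternating harmonic series. That series is only conditionally convergent, so the “sum” depends on the order in which you add the terms. A rough Python illustration (ours, not from the episode):

```python
import math

def term(n: int) -> float:
    # Probability (0.5**n) times payoff ((-1)**(n - 1) * 2**n / n)
    # simplifies to the alternating harmonic term (-1)**(n - 1) / n.
    return (-1) ** (n - 1) / n

N = 300_000

# Natural order: 1 - 1/2 + 1/3 - ...  ->  ln 2
natural = sum(term(n) for n in range(1, N + 1))

# Rearranged order: two positive (odd-n) terms per negative (even-n) term
# ->  1.5 * ln 2, a different "expectation" built from the very same terms.
rearranged, odd, even = 0.0, 1, 2
for _ in range(N // 3):
    rearranged += term(odd) + term(odd + 2) + term(even)
    odd, even = odd + 4, even + 2

print(natural, math.log(2))           # ~0.69315
print(rearranged, 1.5 * math.log(2))  # ~1.03972
```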

Rob Wiblin: I suppose you can choose your dogmatism. You can either say, “When things become sufficiently weird, I give them zero probability,” which just seems dogmatic. Or you can say, “I refuse to consider infinities; I’ll give them some finite positive value and leave it at that.” Or you just have to become a fanatic who pursues infinite values all the time.

Alan Hájek: Well, and you heard me before putting in an argument for the crazy thing. That’s right. So for practical purposes, I think you have to be dogmatic. And maybe in some cases it’s not even a matter of dogmatically giving probability zero to these scenarios: you just don’t consider them at all. They’re just not even in your space of possibilities to begin with. It’s not that you recognise a scenario and give it probability zero. This is one statistician’s reply that I’ve heard: “You just don’t even put it in your model of the world.”

Rob Wiblin: I see. OK. So to speak up for being crazy for a minute, imagine that we really did think that infinite utility was a live possibility. Let’s say we didn’t think the universe was going to peter out, becoming either very spread out or collapsing to nothing, but rather that we’re in a steady-state universe. Then maybe you could set up a system where you do live forever, with nothing that interferes with your life. And so maybe you could get an infinite utility that way.

Rob Wiblin: So, suppose we have some theory that makes that feel not infinitesimally likely, but maybe 1-in-1,000 likely. Then it feels less crazy to say you should orient your life around trying to get the infinite utility by living forever, because the universe permits it. So maybe we can bite the bullet.

Alan Hájek: Another way to go is to give infinity a more nuanced treatment. So far — it’ll be hard to convey this just over the podcast — I’ve been sort of drawing the figure eight on its side: the un-nuanced infinity, the one that seems to have these problems. If you halve it or you multiply it by one in a googolplex, you still get the same sideways figure-eight infinity back. But if you had a more mathematically nuanced treatment of infinity, where halving something or multiplying it by one in a googolplex made a difference, then we might get the ordering that we want again. This is another way of handling the problem, by the way, which led to your lexical rule. Maybe if we just distinguish among different infinities…

Rob Wiblin: Oh God, I’m scared of it. This just seems like it’s going to create more problems.

Alan Hájek: And it’s also scary: just the sheer mathematics of it is formidable. But it turns out that there are these systems — for example, the surreal numbers, hyperreal numbers — where you have infinities, and multiplying them makes a difference. Multiplying by 1/2 or what have you will change the value, will make it smaller in this case. And so maybe now you get the ordering that you are hoping for, and you can choose Chinese over pizza after all, if you keep track of the sizes of all of these infinities.
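
To gesture at what such a nuanced treatment could look like, here is a toy sketch (emphatically not the real surreal or hyperreal construction): values carry a coefficient on a single infinite unit ω and are compared infinite part first, so halving an infinite value genuinely makes it smaller.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    """A toy quantity of the form finite + omega_coeff * ω."""
    finite: float = 0.0
    omega_coeff: float = 0.0

    def scale(self, c: float) -> "Value":
        return Value(self.finite * c, self.omega_coeff * c)

    def __lt__(self, other: "Value") -> bool:
        # Compare the infinite part first, then the finite part.
        return (self.omega_coeff, self.finite) < (other.omega_coeff, other.finite)

heaven = Value(omega_coeff=1.0)
print(heaven.scale(0.5) < heaven)      # True: halving now makes a difference
print(Value(finite=10**100) < heaven)  # True: every finite value sits below ω
```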

Pascal's wager

Alan Hájek: This is Pascal’s argument for why you should believe in God or cultivate belief in God. And just to locate it historically, we should contrast Pascal’s wager to predecessors, which purported to establish the existence of God, prove the existence of God. I’m thinking of things like the ontological argument: St. Anselm and Descartes had one, Thomas Aquinas had five ways, Descartes had a cosmological argument. And there the conclusion was: God exists.

Alan Hájek: Pascal won’t have a bar of this. He says, “Reason can decide nothing here.” You can’t establish the existence of God just by some clever proof. But he turned his attention to the attitude we should have towards the existence of God. Should you believe in God or not? That’s now a decision problem, and that’s why it’s relevant to our discussion about decision theory. And he argued that you should believe in God, or at least wager for God, as he said. Think of that as “cultivate belief in God.” The short version of the argument: because it’s the best bet. And in fact, Ian Hacking writes that this was the first-ever exercise of decision theory.

Alan Hájek: Which is ironic, because of all the cases, this is such a problematic one for decision theory.

Rob Wiblin: Yeah. We’re opening with the paradox, basically.

Alan Hájek: Opening with the paradox. Anyway, Lakatos said that every research programme is born refuted, and maybe you could say that of this very case: decision theory was born refuted with a problematic case.

Alan Hájek: Here’s how the argument goes. There are two ways the world could be: God exists, or God does not exist. Two things you could choose to do: Believe in God, or not believe — or, as Pascal says, wager for God and wager against God. And here are the payoffs: If God exists and you believe in God, you get salvation. Let’s call it that. Infinite reward, infinite utility — “An infinity of infinitely happy life,” as Pascal says. And now, in every other case — where God does not exist or you don’t believe in God — you get some finite payoff. And there’s some controversy about the case where God does exist and you don’t believe: maybe you get negative infinity, maybe you have infinite damnation.

Rob Wiblin: It’s sufficient to put zero there. Isn’t it? Or not?

Alan Hájek: I think Pascal himself in the text is telling us that really, that’s a finite term: the cell where you wager against God, or don’t believe in God, and God exists is only finite in utility, not infinitely bad.

Alan Hájek: OK, so that’s the first premise. That’s the decision matrix, as we say. Those are the utilities: infinity for the case where you believe in God (wager for God) and God exists; finite everywhere else. Then, the premise about the probability: the probability that God exists should be positive. So your credence, as we would say, should be positive.

Alan Hájek: Nonzero, as we might say. It’s possible that God exists, so you should respect that by giving a positive probability. This theme keeps coming up. And now Pascal does what we recognise as an expected utility calculation, and just does the sum. You’ve got infinity times some positive probability, plus some finite stuff. Add it up, you get infinity. So, it looks like wagering for God, believing in God, has infinite expected utility. And wagering against God, not believing in God, the expected value was some finite stuff plus some finite stuff, which is finite. Infinity beats finite. Therefore, you should believe in God. That’s Pascal’s wager.
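
For concreteness, here is a minimal sketch of the calculation Alan describes, using IEEE floating-point infinity as a stand-in for infinite utility; the finite payoffs are arbitrary placeholders, not values from Pascal or the episode.

```python
import math

def eu_wager_for(p_god: float, finite_if_no_god: float = 0.0) -> float:
    # p * infinity + (1 - p) * finite = infinity whenever p > 0
    return p_god * math.inf + (1 - p_god) * finite_if_no_god

def eu_wager_against(p_god: float, if_god: float = -10.0,
                     if_no_god: float = 10.0) -> float:
    # Finite payoffs either way, so the expectation stays finite.
    return p_god * if_god + (1 - p_god) * if_no_god

for p in (0.5, 0.001, 1e-100):
    print(p, eu_wager_for(p), eu_wager_against(p))  # inf beats any finite value
```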

Rob Wiblin: I see. So, is this structurally analogous to the St. Petersburg paradox, where you’re biting the bullet on it? Or are there differences here that are important?

Alan Hájek: Interesting. It’s structurally similar in that infinite utility is what you get in the punchline. But notice we got to it in a different way in Pascal’s wager than in St. Petersburg. In St. Petersburg, we were adding finite terms, and every possible payoff was finite, but just because of the way they’re summed, you get infinity. In Pascal’s wager, it’s different: you get a single hit of infinity, one possible outcome that gets you the infinite utility in one shot. That’s a structural difference, but I think there are other parallels here.

Most counterfactuals are false

Alan Hájek: For a start, most counterfactuals are false. Consider the coin in my pocket. Let’s assume it’s a fair coin. I’ll never toss it. If I were to toss it, it would land heads. Not tails; it would land heads.

Rob Wiblin: Doesn’t seem right.

Alan Hájek: That doesn’t seem right. Thank you. I don’t think that’s right. I think that’s false. And why? Well, it’s a chancy coin I’m imagining, and if I were to toss it, it might land heads, it might land tails. All right. Now let’s make the coin heavily biased to heads. Let’s say 99% chance of heads, 1% chance of tails. If I were to toss the coin, it would land heads, not tails. Still bad, I say: it still might land tails.

Alan Hájek: Consider a huge lottery — let’s say it has a million tickets — that’s never played. “If the lottery were played, ticket number 1 would lose.” I say no. And notice, by the way, the problem there: if you say that of ticket number 1, it seems you’d better say it of ticket 2, ticket 3, and so on, all the way to ticket number one million. It seems you’re committed to saying every ticket would lose.

Rob Wiblin: One has to win.

Alan Hájek: There’s got to be a winning ticket. So in fact you’d contradict yourself if you said all of that. And now consider your favourite intuitive commonsensical counterfactual. I’m holding a cup. If I were to release the cup, it would fall. Now I know it’s very tempting to say that’s true. I still say it’s false — because it’s a lottery, it’s a chance process, I say. If the cup were released, it might not fall because someone might quickly place a table under it. A very surprising updraft of air might suddenly lift it rather than letting it fall. Physics tells us that has some positive chance, and so on. So these things might happen. I know some of them are extremely improbable. I don’t mind. Just as in the lottery case, it was extremely improbable that ticket number 1 would be the winner.

Rob Wiblin: So these things aren’t absolutely certain. It’s not true in every possible counterfactual world that the cup does fall. I guess some people might wonder, does it really matter that in some infinitesimal fraction of possible counterfactual worlds, the consequent doesn’t actually occur? Or are you being a bit obtuse here about this?

Alan Hájek: I get that a lot. And maybe I am obtuse. Well, I’m being pedantic, but I do think that our use of counterfactuals commits us to this kind of pedantry in various ways. For example, look at the logic of counterfactuals. Modus ponens seems plausible — that’s the rule “If P, then Q; P; therefore Q.” But modus ponens will fail, it seems, if you set the bar for the truth of a counterfactual at any chance lower than one.

Rob Wiblin: So it’s like inferring “If P, then Q” from “If P, then probably Q,” and that doesn’t go through.

Alan Hájek: Yeah, that’s it, right. If you thought all you needed for the truth of the counterfactual was high probability — high probability of Q given P, something like that — then you could easily have a case where P is true, the probability of Q was high, and yet Q didn’t happen. It was very probable that ticket number 1 would lose in the lottery. But sometimes ticket number 1 wins.
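
Alan’s lottery point is easy to simulate; a throwaway sketch (the numbers mirror his million-ticket example):

```python
import random

# "Ticket 1 would lose" holds with probability 0.999999 on any single playing,
# but high probability is not truth: play often enough and it sometimes fails.
random.seed(0)
tickets, playings = 1_000_000, 10_000_000
ticket_1_wins = sum(1 for _ in range(playings)
                    if random.randrange(tickets) == 0)
print(ticket_1_wins)  # roughly 10 expected, almost surely not zero
```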

Relevance to objective consequentialism

Rob Wiblin: I think you reckon that this sort of reasoning about counterfactuals, or recognising the trouble that comes with counterfactuals, can potentially present a problem for a flavour of utilitarianism called objective utilitarianism, or I guess objective consequentialism of any kind. Most people have heard of consequentialism in some form: it’s the view that you judge the value or goodness of actions, or what would be right to do, based on the consequences that they have. But what is objective consequentialism?

Alan Hájek: Yeah. Roughly this: Action one is objectively better than action two if and only if the consequences of action one are better than those of action two. I think here we are imagining really the long-term consequences — not just the immediate consequences, but really perhaps to the end of history.

Alan Hájek: And now, we get into a big discussion — which I know is close to the hearts of many listeners — about the long-term consequences of what we do. But anyway, what I’m about to say I think will generalise beyond just objective consequentialism, but that’s a good place to start.

Alan Hájek: All right. So let’s take a case. You have a choice: you could help the old lady cross the street, or go to the pub. What should you do? What’s the right thing to do? Now, let’s suppose, in fact, you take the old lady across the street. You help her. I don’t have any problem with taking a total of all of the goodness — whatever, the happiness or the welfare — after that. I’m happy to allow there’s a fact of the matter of the total value, the total goodness, the consequences of that. But what about the thing you didn’t do? You did not go to the pub. That’s where my worry is going to kick in.

Alan Hájek: First thing, we should make clear that this is a counterfactual. The way I just stated it before, notice the carelessness of it: “Action one is objectively better than action two if and only if the consequences of action one are better than those of action two.” Well, in this case, action two didn’t happen. It was non-actual; it didn’t have any consequences. So we must be talking about counterfactual consequences. And now, my worries about counterfactuals are going to start to kick in.

Alan Hájek: All right, so let’s take the thing you didn’t do: you didn’t go to the pub. Case one: the world is chancy. Let’s consider the very first chancy coin toss that never happened. How would it have landed? “If that coin had been tossed, it would’ve landed heads — not tails, heads.” No, no, I say. That I find implausible; it might have landed tails. Consider the first lottery that never happened. “If the lottery had taken place, ticket number 17 would have won.” No, I say. You can’t say of any ticket that it would have won; some other ticket might have won instead.

Alan Hájek: All right, but I’ve hardly started. Now, I know that in the cluelessness industry (this worry for consequentialism) there’s a lot of discussion of how our actions have far-reaching consequences. There are these ripple effects, but it’s not like ripples in a pond that dampen down as you go further out: these just keep on rippling for the rest of history. Which children will or won’t be born depends acutely, sensitively, on very minor changes in what we do.

Rob Wiblin: Precise timings of conception.

Alan Hájek: All right, so now let’s go back to the hypothetical visit to the pub, and the first child who would have been born thereafter. Which child gets conceived, hypothetically, depends on which sperm fertilises the egg, and it’s a lottery which sperm wins the race. So there would have to be a fact of the matter of which sperm wins that lottery, making it this child that would’ve been conceived and not some other child, the product of a different sperm winning the race. And I’ve still barely started. That’s the first child. But now consider that child’s children and grandchildren and great-grandchildren, and the rest of history.

Rob Wiblin: And all of the people who they interact with…

Alan Hájek: All of that. All of that, that’s right.

Alan Hájek: Now, I find it wildly implausible that there is a fact of the matter (we’re still considering the chancy case) that all of these chancy processes would be resolved in one particular way and no other. And it makes a huge difference to how we evaluate these counterfactual histories which way things go.

Alan Hájek: So in one hypothetical scenario, the children that happen to be conceived are latter-day Gandhis and Einsteins, and a wonderful world follows. And with just a small tweak, we now get a different counterfactual history with a latter-day Hitler, followed by a latter-day Stalin, and a horrible world. And everything in between. And all of this, again, is acutely sensitive to how things are initiated and also how the chance processes go.

Philosophical methodology

Alan Hájek: I like the [heuristic] I call “Check extreme cases.” You’re in some domain. Extreme cases are things like the first case or the last case, or the biggest or the smallest, or the best or the worst, or the smelliest, or what have you. Now you’ve got this huge search space, and someone gives a big philosophical thesis. Suppose you want to stress-test it: are there counterexamples? Hard problem: somewhere in this search space, find trouble, find counterexamples. Easier sub-problem: go to the corner cases, go to the extreme cases. Often the trouble lurks there if it lurks anywhere, and it’s a smaller search space. So that’s the technique.

Rob Wiblin: Give us an example or two.

Alan Hájek: All right. Grandiose philosophical thesis: “Every event has a cause.” At first you might think, “Gee, I don’t know. Is that true or false? It’s kind of hard to tell.” All right, hard problem: come up with a counterexample to “Every event has a cause.” Easier sub-problem: consider extreme cases of events. For example, the first event. Call it the Big Bang. The Big Bang didn’t have a cause. Counterexample.

Alan Hájek: Or philosophers sometimes say that you should only believe in entities that have “causal efficacy” — they have some oomph. That’s maybe a reason to be suspicious of numbers: maybe numbers don’t exist because they don’t cause anything. And then Lewis has us imagine: “Well, what about the entity which is the whole of history?”

Alan Hájek: There is causation within it, but the whole of history doesn’t itself cause anything, so according to this principle, you shouldn’t believe in the whole of history. So there the heuristic is doing negative work: it’s destructive, shooting down some position. But I think it could also be constructive.

Rob Wiblin: Yeah, maybe it’s worth explaining a little bit of one of your theories of what philosophy is.

Alan Hájek: Yeah, I think you’re thinking of: “A lot of philosophy is the demolition of common sense followed by damage control.”

Rob Wiblin: Yeah, I love that quote.

Alan Hájek: Philosophy often comes up with some radical claim like, “We don’t know anything.” But then we try to soften the blow a bit, and we find some way —

Rob Wiblin: Maybe we know a little bit.

Alan Hájek: We know a little bit, or we have to understand knowledge the right way. Anyway, this extreme-cases heuristic was somewhat negative: it was pointing out a counterexample to some big thesis. I think it could also be constructive.

Alan Hájek: Maybe longtermism could be thought of in this way. Maybe the thing that comes naturally to us is to focus on the short-term consequences of what we do, and we think that’s what matters. Then you push that out a bit, and then an extreme case would be, “Well, gosh, our actions have consequences until the end of time, for the rest of history, so maybe we should be more focused on that.” And that’s now the beginning of a more positive movement.

Rob Wiblin: Yeah. So the philosophical question there might be, “For how long should we consider the consequences?” or “What should be the scope of our moral consideration?” And here you say, “Well, let’s consider the extreme possibility. We should consider all space and all time forever.”

Alan Hájek: That’s right. So I started with “Check extreme cases.” Then sometimes you might just check near-extreme cases — so you back off a bit and they’re a little bit more plausible. So maybe we don’t need to look until the end of time, but still look far ahead, and that is still at some odds with initial common sense.

Rob Wiblin: Yeah. I guess people might often come back and say, “Well sure, in the extreme situation it doesn’t work. Lots of things don’t work in extremes. It’s more sensible to focus on the middle cases, and so this isn’t actually such a powerful objection.” What do you think of that?

Alan Hájek: I think it’s for that very reason that this is a fertile heuristic, because we spend our lives mostly living among the normal cases, so extreme cases don’t come so naturally to us, even though they may well be trouble for some philosophical position. In fact, maybe especially because they’re extreme, they’re more trouble than the middle cases.

Downsides to Bayesianism

Alan Hájek: Here’s one thing that just bothers me a bit, and I’ll throw it out there. As a slogan, I’ll say, “Subjective Bayesianism is anchoring and adjustment” — and I need to explain what I mean by that. “Anchoring and adjustment” is a heuristic that people often use when estimating some quantity. They’re given a so-called “anchor” — some starting point for thinking about the value of that quantity — and then they adjust until they reach an estimate that they find plausible. The trouble is that sometimes the anchor is entirely irrelevant to the quantity, and it just should be ignored, yet it still influences the final estimate — the adjustment is insufficient.

Alan Hájek: There are a couple of classic examples I can give you. Tversky and Kahneman had a famous study. They asked people to watch the spin of a roulette wheel, which was rigged to land on either 10 or 65, and then they were asked whether the percentage of African countries in the United Nations was higher or lower than the number that they saw. And then they were asked to estimate the percentage. Those who saw a low number tended to give substantially lower estimates for the percentage than those who saw a high number. Of course they knew that the roulette number, the anchor, provided no information whatsoever about the percentage, yet it still influenced their estimate. And that just seems absurd, that just seems crazy.

Alan Hájek: There’s another famous study from Ariely et al. They asked MBA students at MIT to write down the last two digits of their social security number. And then they were asked whether they would pay this number of dollars for some product — say, a bottle of wine or a box of fancy chocolates, and so on. And then they were asked what was the maximum amount they were willing to pay for the product. Those who wrote down higher two-digit numbers were willing to pay substantially more. And of course, they knew that their social security number was completely uninformative about the value of the product, but still, they anchored on it, and it influenced their final valuation. So the idea is that the residue of the anchor remained, even after the adjustment of thinking, “Well, how valuable is this product, really?”

Alan Hájek: Now these seem to be paradigm cases of irrationality. But now consider a putative paradigm of rationality: subjective Bayesianism. Here you start with a prior — that’s your initial probability distribution before you get any information. And in the version I’m thinking of, the only constraint on it is that it obeys the probability calculus. That’s your anchor: your prior is your anchor. And then you get some information and you update by conditionalising on it, as we say. So your new probabilities are your old probabilities, conditional on that information. That’s your adjustment.

Alan Hájek: But the trouble is that your prior has no evidential value; it’s not based on any information. And you know this — that’s what makes it a prior. And often its residue remains, even after the adjustment. We can imagine that your prior was even determined by the spin of a roulette wheel, or by your social security number, as long as it obeys the probability calculus, and still it influences your final probabilities, your posterior probabilities, as we say. Now the worry is: why isn’t that just as absurd as before? We were laughing at the people in the United Nations experiment, or the wine and chocolate experiment. What’s the relevant difference? And look, there are things that one can say, but I just put that out there as something that needs some attention.
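
To make the worry concrete, here is a small sketch (with made-up likelihood numbers) of two agents who conditionalise on the same evidence but whose priors were fixed by an irrelevant anchor; the anchor’s residue survives into the posteriors.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Priors "anchored" at 0.10 and 0.65, as if set by the rigged roulette wheel;
# both agents then see exactly the same evidence (same likelihoods).
for anchored_prior in (0.10, 0.65):
    print(anchored_prior, round(posterior(anchored_prior, 0.8, 0.3), 3))
# -> 0.1 0.229  and  0.65 0.832: the irrelevant anchor shows through.
```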

Infinities

Rob Wiblin: Let me try another line of argument here. Infinities mess shit up. Some listeners might be familiar with the Banach-Tarski paradox. Basically, you take a solid sphere and divide it into a few pieces — each an infinite scatter of points; the mathematicians in the audience might be annoyed by this — and then move them around in some special way. And it seems like you can get two full spheres out of the matter or the volume of the original sphere. It’s like you’ve doubled the amount of volume you have just by splitting something up and putting it back together again.

Rob Wiblin: I don’t think that that could happen in the universe, probably. It doesn’t seem like that happens. And it’s like, maybe just whenever we put infinities into these decisions, we’re just going to find lots of problems and lots of things that will never happen in the real world. And so we should be OK to dismiss infinities and throw them out, just on the basis that they make life unlivable.

Alan Hájek: I know, great. Feynman was told about the Banach-Tarski paradox, and it was presented to him involving an orange. You’ve got an orange of a certain size, and by suitably cutting it up, you can create two oranges of that size, and in fact you can keep multiplying them. And Feynman bet that that was just nonsense, that wasn’t true. And then someone explained to him how you do it — “There’s this infinitely precise surgery that involves non-measurable sets,” and so on — and Feynman said, “Come on, I thought you meant a real orange.”

Alan Hájek: Now, of course we understand that reaction. But I feel like saying, “Yeah, but that doesn’t really solve the paradox.” It’s as if we said: thank God we can’t do infinitely precise surgery on oranges, hence our theory of measure is safe. I feel like saying no: of course it’s highly implausible that you could actually do this, but aren’t you worried that there’s something wrong with our theory of measure if it seems to allow this result?

Alan Hájek: And I feel like saying something similar about decision theory. Notice that Richard Jeffrey’s reply was rather like Feynman’s regarding Banach-Tarski. Jeffrey said, with regard to the St. Petersburg paradox, that anyone who offers you the St. Petersburg game is a liar. And of course, that’s true: no one in the real world is going to genuinely offer you the St. Petersburg game. But I still have that niggling feeling. There’s still something wrong with our theory of measure in the Banach-Tarski case, and with our theory of expected utility and rational decision in the case of St. Petersburg. And it’d be nice to solve those problems. But maybe that’s the philosopher in me, rather than the physicist or the engineer.

Rob Wiblin: It’s a very common theme, I guess, in philosophy: that one flips between the sublime realm of ideas, if you like, and highly idealised situations. And then you bring it back into the world and you have to say, “Is this still relevant?” You do a bunch of maths and you’re like, “Does this apply to the universe?” And I guess people sometimes do have different judgements on whether it’s still relevant, as you’ve made it stranger and stranger.

Alan Hájek: Yeah, that’s right. Philosophers often have highly fanciful thought experiments to make some philosophical points. Like Frank Jackson imagined Mary in a room, and she knows all the physical facts, but she’s never seen red. And when she sees red for the first time, it seems that she’s learned something. The Chinese room from Searle is a famous thought experiment. Putnam had Twin Earth, and so on.

Alan Hájek: Now, it seems to me philosophically unsatisfying to reply, “Well, there’s no such room. There’s no room with Mary in it. There’s no Chinese room, Twin Earth.”

Rob Wiblin: “This is all rubbish.”

Alan Hájek: “This is all rubbish. There’s no Twin Earth.” Yeah, of course we know that. We never said there was. But these thought experiments — and St. Petersburg I’ll put in the same category, and Banach-Tarski — they’re putting pressure on some entrenched notion of ours.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.