Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

Am I a believer in climate change, or a denier, if I say ‘Well, I’m 72% confident that the UN IPCC surface temperature forecasts are correct within plus or minus 0.3°C’? … I’m flirting with the idea that they might be wrong, right?

Professor Philip Tetlock

Have you ever been infuriated by a doctor’s unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won’t tell you the chances you’ll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can’t assess the likelihood of different outcomes we’re in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul’s Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.

He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.

Along with other psychologists, he identified that many ordinary people are attracted to a ‘folk probability’ that draws just three distinctions — ‘impossible’, ‘possible’ and ‘certain’ — and which leads to major systematic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% versus 57% likely.

In the aftermath of Iraq and WMDs, the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2015.

That was four years ago. In today’s interview, Tetlock explains how his research agenda continues to advance, this time using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.

We discuss how his work can be applied to your personal life to answer high-stakes questions, such as how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by Open Philanthropy and Clearer Thinking that teaches you to accurately distinguish your ’70 percents’ from your ’80 percents’.)

We also put to him a few methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions their profession, as it has been for Tetlock over many decades.

We view Tetlock’s work as so core to living well that we’ve brought him back for a second and longer appearance on the show — his first appearance was back in episode 15. Some questions this time around include:

  • What would it look like to live in a world where elites across the globe were better at predicting social and political trends? What are the main barriers to this happening?
  • What are some of the best opportunities for making forecaster training content?
  • What do extrapolation algorithms actually do, and given they perform so well, can we get more access to them?
  • Have any sectors of society or government started to embrace forecasting more in the last few years?
  • If you could snap your fingers and have one organisation begin regularly using proper forecasting, which would it be?
  • When, if ever, should one use explicit Bayesian reasoning?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.

Highlights

When intelligence analysts are doing a postmortem on policy towards Iraq or Iran or any other part of the world, they can’t go back in history and rerun, they have to try to figure out what would have happened from the clues that are available. And those clues are a mixture of things, some of them are going to be more beliefs about causation, the personalities and capacities of individuals and organizations. Others are going to even be more statistical, economic time series and things like that. So it’s going to be a real challenge. I mean this is research in progress so inevitably I have to be more tentative and I’m speculating, but I’m guessing we’re looking for hybrid thinkers. We’re looking for thinkers who are comfortable with statistical reasoning, but also have a good deal of strategic savvy and also recognize that, “Oh, maybe my strategic savvy isn’t quite as savvy as I think it is, so I have to be careful.”

The peculiar thing in the real world is how comfortable we are at making pretty strong factual claims that turn out on close inspection to be counterfactual. Every time you claim you know whether someone was a good or a bad president, or whether someone made a good or bad policy decision, you’re implicitly making claims about how the world would have unfolded in an alternative universe to which you have no empirical access, you have only your imagination.

The best forecasters we find are able to distinguish between 10 and 15 degrees of uncertainty for the types of questions that IARPA is asking about in these tournaments, like whether Brexit is going to occur or if Greece is going to leave the eurozone or what Russia is going to do in the Crimea, those sorts of things. Now, that’s really interesting because a lot of people when they look at those questions say, “Well you can’t make probability judgements at all about that sort of thing because they’re unique.”

And I think that’s probably one of the most interesting results of the work over the last 10 years. I mean, you take that objection, which you hear repeatedly from extremely smart people, that these events are unique and you can’t put probabilities on them. You take that objection and you say, “Okay, let’s take all the events that the smart people say are unique and let’s put them in a set and let’s call that set ‘allegedly unique events’. Now let’s see if people can make forecasts within that set of allegedly unique events and if they can, if they can make meaningful probability judgments of these allegedly unique events, maybe the allegedly unique events aren’t so unique after all, maybe there is some recurrence component.” And that is indeed the finding, that when you take the set of allegedly unique events, hundreds of allegedly unique events, you find that the best forecasters make pretty well calibrated forecasts fairly reliably over time and don’t regress too much toward the mean.
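
The calibration Tetlock describes — stated probabilities matching observed frequencies over many forecasts — can be checked with a few lines of code. This is a minimal sketch using made-up forecast data, not real tournament results; the function name and data are purely illustrative:

```python
# A minimal sketch of checking forecast calibration, using invented
# (stated probability, outcome) pairs rather than real tournament data.

def calibration_table(forecasts, n_bins=10):
    """Group forecasts into probability bins and compare the average
    stated probability with the observed frequency of the event."""
    bins = {}
    for prob, occurred in forecasts:
        # Bin index 0..n_bins-1, e.g. a 0.72 forecast lands in the 70-80% bin.
        idx = min(int(prob * n_bins), n_bins - 1)
        bins.setdefault(idx, []).append((prob, occurred))
    table = []
    for idx in sorted(bins):
        entries = bins[idx]
        avg_prob = sum(p for p, _ in entries) / len(entries)
        hit_rate = sum(o for _, o in entries) / len(entries)
        table.append((avg_prob, hit_rate, len(entries)))
    return table

# Hypothetical data: a well-calibrated forecaster's 70% calls should
# come true roughly 70% of the time.
sample = [(0.7, 1), (0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1),
          (0.7, 1), (0.7, 0), (0.7, 1), (0.7, 0), (0.7, 1)]
for avg_prob, hit_rate, n in calibration_table(sample):
    print(f"stated {avg_prob:.0%}, observed {hit_rate:.0%} over {n} forecasts")
```

The point of binning is exactly the one made in the quote: no single "unique" event can validate a probability, but a large set of them can.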

Am I a believer in climate change or am I a disbeliever, if I say, “Well, when I think about the UN Intergovernmental Panel on Climate Change forecasts for the year 2100, the global surface temperature forecasts, I’m 72% confident that they’re within plus or minus 0.3 degrees centigrade in their projections”? And you kind of look at me and say, “Well, it’s kind of precise and odd,” but I’ve just acknowledged I think there is a 28% chance they could be wrong. Now they could be wrong on the upside or the downside, but let’s say the error bars are symmetric, so there’s a 14% chance that they could be overestimating as well as underestimating.

So I’m flirting with the idea that they might be wrong, right? So if you are living in a polarized political world in which expressions of political views are symbols of tribal identification, they’re not statements that, “Oh, this is my best faith effort to understand the world. I’ve thought about this and I’ve read these reports and I’ve looked at… I’m not a climate expert, but here’s my best guesstimate.” And if I went to all the work of doing that, and by the way, I haven’t, I don’t have the cognitive energy to do this, but if someone had gone to all the cognitive energy of reading all these reports and trying to get up to speed on it and concluded say 72%, what would the reward be? They wouldn’t really belong in any camp, would they?
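
The arithmetic in the quote above is simple, but worth spelling out, since it's the kind of decomposition a calibrated forecaster does by habit. A sketch, with the 72% figure and the symmetric-error-bar assumption taken directly from the quote:

```python
# The 72% / 28% / 14% arithmetic from the quote, spelled out.
confidence_in_band = 0.72          # P(IPCC forecast within +/- 0.3 degrees C)
p_wrong = 1 - confidence_in_band   # 0.28: chance the forecast misses the band
# With symmetric error bars, the miss probability splits evenly:
p_overestimate = p_wrong / 2       # 0.14
p_underestimate = p_wrong / 2      # 0.14
print(f"P(wrong) = {p_wrong:.0%}, "
      f"split {p_overestimate:.0%} high / {p_underestimate:.0%} low")
```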

I think that in a competitive nation state system where there’s no world government, that even intelligent self-aware leaders will have serious conflicts of interest and that they will know that there is no guarantee of peace and comity. But I think you’re less likely to observe gross miscalculations, either in trade negotiations or nuclear negotiations. I think you’re more likely to see an appreciation of the need to have systems that prevent accidental war and prevent and put constraints on cyber and bio warfare competition as well as nuclear.

So those would be things I think would fall out fairly naturally from intelligent leaders who want to preserve their power and the influence of their nations, but also want to avoid cataclysms. I’m not utopian about it. I think we would still live in a very imperfect world. But if we lived in a world in which the top leadership of every country was open to consulting competently, technocratically run forecasting tournaments for estimates on key issues, we would, on balance, be better off.

Well, what we know about counterfactual reasoning in the real world is that it’s very ideologically self-serving. People pretty much invent counterfactual scenarios that are convenient and prop up their preconceptions. So for conservatives, it’s pretty much self-evident that without Reagan, the Cold War would have continued and might well have gotten much worse, because the Soviets would’ve seen weakness and they would’ve pushed further. And for liberals it was pretty obvious that the Soviet Union was economically collapsing and that things would have happened pretty much the way they did, and Reagan managed to waste hundreds of billions of dollars in unnecessary defense expenditures. So you get these polar opposite positions that people can entrench in indefinitely.

Articles, books, and other media discussed in the show

Related episodes

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.