#15 – Phil Tetlock on predicting catastrophes, why keep your politics secret, and when experts know more than you

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose forecasts we can trust, whose we can’t and why – and developed methods that allow all of us to be better at predicting the future.

After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work – Superforecasting – was a media sensation in 2015.

It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information.

Today he’s working to develop the best forecasting process ever by combining the best of human and machine intelligence in the Hybrid Forecasting Competition, which you can start participating in now to sharpen your own judgement.

In this interview we describe his key findings and then push to the edge of what’s known about how to foresee the unforeseeable:

  • Should people who want to be right just adopt the views of experts rather than apply their own judgement?
  • Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
  • Should I keep my political views secret, so it will be easier to change them later?
  • How can listeners contribute to his latest cutting-edge research?
  • What do we know about our accuracy at predicting low-probability high-impact disasters?
  • Does his research provide an intellectual basis for populist political movements?
  • Was the Iraq War caused by bad politics, or bad intelligence methods?
  • What can we learn about forecasting from the 2016 election?
  • Can experience help people avoid overconfidence and underconfidence?
  • When does an AI easily beat human judgement?
  • Could more accurate forecasting methods make the world more dangerous?
  • How much does demographic diversity line up with cognitive diversity?
  • What are the odds we’ll go to war with China?
  • Should we let prediction tournaments run most of the government?

Highlights

…if, when the President went around the room and asked his advisors how likely Osama was to be in this mystery compound, each advisor had said 0.7, what probability should the President conclude is the correct probability? Most people say, well, it’s kind of obvious, the answer is 0.7. But the answer is only obvious if the advisors are clones of each other. If the advisors all share the same information and are reaching the same conclusion from the same information, the answer is probably very close to 0.7.

But imagine that one of the advisors reaches the 0.7 conclusion because she has access to satellite intelligence. Another reaches that conclusion because he has access to human intelligence. Another one reaches that conclusion because of code breaking, and so forth. So the advisors are reaching the same conclusion, 0.7, but are basing it on quite different data sets processed in different ways. What’s the probability now? Most people have the intuition that the probability should be more extreme than 0.7. And the question then becomes how much more extreme? … You want to extremise that in proportion to the diversity of the viewpoints among the forecasters who are being aggregated.
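The intuition about pushing the aggregate beyond 0.7 can be made concrete with a common ‘extremizing’ recipe: average the forecasts in log-odds space, then scale the result by an exponent greater than one when the forecasters draw on genuinely independent evidence. Here is a minimal sketch in Python – the exponent value and function names are illustrative assumptions, not Tetlock’s exact aggregation algorithm:

```python
import math

def extremize(probabilities, a=2.0):
    """Average forecasts in log-odds space, then scale by exponent a.

    a = 1 reproduces the plain log-odds average; a > 1 pushes the
    aggregate away from 0.5, which is the appropriate response when
    the forecasters are drawing on independent lines of evidence.
    """
    def logit(p):
        return math.log(p / (1 - p))

    mean_log_odds = sum(logit(p) for p in probabilities) / len(probabilities)
    return 1 / (1 + math.exp(-a * mean_log_odds))  # back to a probability

# Three advisors all say 0.7, each drawing on a different source.
print(round(extremize([0.7, 0.7, 0.7], a=1.0), 3))  # 0.7   (clones: no shift)
print(round(extremize([0.7, 0.7, 0.7], a=2.0), 3))  # 0.845 (diverse evidence)
```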

…the John Cleese, Michael Gove perspective that Expert Political Judgment somehow justified not listening to expert opinion about the consequences of Brexit struck me as a somewhat dangerous misreading of the book. It’s not that I’m saying that the experts are going to be right, but I would say completely ignoring them is dangerous.

It’s very hard to strike the right balance between justified skepticism of pseudo-expertise – and there is a lot of pseudo-expertise out there, and a lot of over-claiming by legitimate experts – and a kind of know-nothingism. Justified skepticism is very appropriate, obviously, but you don’t want it to blur over into know-nothingism. You have to strike some kind of balance between the two, and that’s what the new preface is about in large measure.

There’s an active debate among researchers in the field about the degree to which calibration training generalizes. I could get you to be well-calibrated in judging poker hands; is that going to generalize to how calibrated you are on the weather? Is that going to generalize to how well-calibrated you are on the effects of rising interest rates?

The effects of transfer of training are somewhat on the modest side, so you want to be really careful about this. I would say, oh gosh, you really want to concentrate your training efforts on things you care about. So if it’s philanthropic activities, I think you want to have people make judgements on projects that are quite similar to those to get the maximal benefit.

I’m not saying that transfer of training is zero, although some people do say that. I think it’s too extreme to say that transfer of training is zero, and I think the transfer of training is greater if people not only get practice at doing it but also understand what calibration is…
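To make ‘understanding what calibration is’ concrete: a calibration check groups your probability judgements into buckets and compares each bucket’s stated probability with how often the events actually happened. The sketch below uses made-up data purely for illustration:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Compare stated probabilities with observed frequencies.

    Forecasts are grouped by probability rounded to one decimal place.
    A well-calibrated forecaster's 0.7 bucket should come true roughly
    70% of the time, the 0.9 bucket roughly 90% of the time, and so on.
    """
    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(happened)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

# Toy data: ten forecasts and whether each event occurred (1 = yes).
preds = [0.7, 0.7, 0.7, 0.7, 0.7, 0.9, 0.9, 0.9, 0.9, 0.9]
truth = [1,   1,   1,   0,   1,   1,   1,   1,   1,   0]
print(calibration_table(preds, truth))
# {0.7: 0.8, 0.9: 0.8} -> slightly underconfident at 0.7, overconfident at 0.9
```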

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

Get in touch with feedback or guest suggestions by emailing podcast@80000hours.org.

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.