Philip Tetlock, Professor of Psychology and Management at the University of Pennsylvania, studies expertise and the architecture of good judgment. In particular, he has been interested in evaluating predictions made by experts.
From 1984 to 2003, Tetlock gathered more than 28,000 predictions from 284 economic and political professionals in order to determine their accuracy and, by extension, the quality of the prognosticators’ expertise.
Surprisingly, he found that, on the whole, experts performed about as well as a dart-throwing chimpanzee. That is to say, on average they couldn’t beat guesses made at random.
Worse yet, famous pundits, the type of experts you might see on network news, performed worse than their lesser-known colleagues. There appears to be an inverse relationship between fame and quality of expertise.
You might think that specializing in an area of study would improve forecasting, but sadly, you would be wrong. Specialists performed no better, even on questions inside their own fields.
Don’t break out the pitchforks just yet.
While it is true that experts as a cohort are no better than chance at predicting future outcomes, that doesn’t mean all expertise is worthless in this regard. There are some people, whom Tetlock labels “superforecasters,” who vastly outperform their peers.
The difficulty is finding them.
That is our aim here at Shot Caller. We want to identify, amplify and celebrate true expertise.
Zack Prager, Co-founder of Shot Caller