Philip Tetlock, one of my favorite social scientists, is making waves with his new book, Expert Political Judgment. Tetlock spent two decades asking hundreds of political experts to make predictions about hundreds of issues. With all this data under his belt, he then asks and tries to answer a bunch of Big Questions, including “Do experts on average have a greater-than-chance ability to predict the future?,” and “What kinds of experts have the greatest forecasting ability?” This book is literally awesome – to understand Tetlock’s project and see how well he follows through fills me with awe.
And that’s tough for me to admit, because it would be easy to interpret Tetlock’s work as a great refutation of my own. Most of my research highlights the systematic belief differences between economists and the general public, and defends the simple “the experts are right, the public is wrong” interpretation of the facts. But Tetlock finds that the average expert is an embarrassingly bad forecaster. In fact, experts barely beat what Tetlock calls the “chimp” strategy of random guessing.
Is my confidence in experts completely misplaced? I think not. Tetlock’s sample suffers from severe selection bias. He deliberately asked relatively difficult and controversial questions. As his methodological appendix explains, questions had to “Pass the ‘don’t bother me too often with dumb questions’ test.” Dumb according to whom? The implicit answer is “Dumb according to the typical expert in the field.” What Tetlock really shows is that experts are overconfident if you exclude the questions where they have reached a solid consensus.
This is still an important finding. Experts really do make overconfident predictions about controversial questions. We have to stop doing that! However, this does not show that experts are overconfident about their core findings.
It’s particularly important to make this distinction because Tetlock’s work is so good that a lot of crackpots will want to hijack it: “Experts are scarcely better than chimps, so why not give intelligent design and protectionism equal time?” But what Tetlock really shows is that experts can raise their credibility if they stop overreaching.
READER COMMENTS
Steve Sailer
Dec 27 2005 at 4:27am
Right, the questions people are interested in hearing forecasts about are often ones that are contrived precisely to be hard to predict. For example, not whether the Super Bowl champ Patriots would beat the lowly Jets last night, but whether they’d beat them by more than the point spread.
Steve Sailer
Dec 27 2005 at 4:36am
By the way, a fun trick to play on economists is to say, “Free trade has been backed by all hard-headed, highly successful, practical-minded men like Bismarck and Alexander Hamilton,” and watch how all the economists’ heads nod sagely in agreement, until one spoilsport says, “Hey, wait a minute, Bismarck and Hamilton were protectionists!”
Another fun trick is to get economists who drive Hondas to cite Ronald Reagan as a free trader and then ask them why their Honda was built on this side of the Pacific.
Matthew Cromer
Dec 27 2005 at 8:50am
History begs to differ with your thesis.
spencer
Dec 27 2005 at 9:53am
The point spread is a price that brings supply and demand into balance so that the people running the book are not exposed to large risks.
It reflects the wisdom of the crowd, and if it works as it is supposed to, an equal number of bettors will be on the over and under sides of the bet.
So you should not use betting against the point spread as an example of making predictions.
Robin Hanson
Dec 27 2005 at 10:09am
This is a fine hypothesis, Bryan. I hope that someone can test it someday. The biggest lesson I take from Tetlock is that we need to start measuring opinion accuracy more, to test all these theories we so easily generate and so rarely test.
Roger M
Dec 27 2005 at 11:09am
Bryan believes that experts have “reached a solid consensus” on core issues. My own experience has been that you can find experts on any side of any issue. Usually one side has a majority of “experts” on its side, but often the minority opinion turns out to be the correct one. This is particularly true of innovative ideas, which often take a long time for the establishment to accept.
Remember President Clinton having 500 economists approve of his tax increases? And global warming/cooling? Keynesian economics comes to mind, too.
Bryan loves to take shots at intelligent design, but as Dan Peterson shows in the June issue of The American Spectator, and in an online article, http://www.spectator.org/dsp_article.asp?art_id=9185, there are a lot of experts on the side of intelligent design.
My guess is that Bryan’s research suffers from selection bias, while Tetlock’s covers experts on both sides of any issue. The experts on one side of any issue may be right more often than those on the other side, but combining them in a statistical analysis will make all of them look bad. It’s the fallacy of the average.
daveg
Dec 27 2005 at 3:57pm
So you should not use betting against the point spread as an example of making predictions.
The example seems to be contrasting two levels of difficulty in making predictions. It is more difficult to make a prediction against the spread than on the absolute outcome of the game.
That is, the spread is a reasonable predictor of the outcome of the game.
It is also true that prediction is not the main purpose of the spread, but I don’t think that invalidates the point being made.
daveg
Dec 27 2005 at 5:37pm
Could the experts be wrong about illegal immigration?
Link
RogerM
Dec 27 2005 at 10:24pm
Daveg,
I think in your last post you’re making the same error as Bryan in assuming that Tetlock said the experts were wrong. What Tetlock actually wrote was that experts and common people make equally bad forecasts if they’re both given the same information. Experts obviously know more than non-experts, but their egos get in the way of their making good forecasts. Check out the Muck and Mystery link below. It contains a good article on how experts could improve forecasts by relying more upon statistical models.
Joseph Hertzlinger
Dec 27 2005 at 10:30pm
I was impressed by the fact that some of the studies of the fallibility of experts compare them with the results of statistical analyses. Maybe the lesson is that experts who use statistical analyses are more accurate than experts who don’t.
At least, that’s what experts using statistical analyses say.
Roger M
Dec 28 2005 at 9:50am
The J.D. Trout & Michael Bishop essay “50 Years of Successful Predictive Modeling Should Be Enough: Lessons for Philosophy of Science,” quoted in the Muck & Mystery blog, provides evidence that experts using just their judgment are far worse predictors of the future than simple linear regression models, even models in which all of the coefficients are ones.
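For readers who haven’t seen one, a unit-weighted model is about as simple as it sounds: standardize each predictor and add them up, with every coefficient set to one. Here is a toy sketch in Python; the predictors and numbers are made up purely for illustration, not taken from Trout & Bishop:

```python
# Toy illustration of a unit-weighted ("improper") linear model:
# standardize each predictor column, then sum them with all weights = 1.
import numpy as np

def unit_weight_score(X):
    """Return a unit-weighted score for each row of predictor matrix X."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each column
    return z.sum(axis=1)                      # add them up, weights all = 1

# Hypothetical data: rows are cases, columns are predictors
# (say, a test score, a GPA, and years of experience).
X = np.array([[620, 3.1, 2],
              [710, 3.8, 5],
              [550, 2.9, 1],
              [680, 3.5, 4]], dtype=float)

print(unit_weight_score(X))  # higher score = higher predicted outcome
```

The surprising empirical claim is that even a crude rule like this tends to out-predict unaided expert intuition.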
I’ve run into something similar with people investing in the stock market. Experts will spend thousands of dollars on models to guide them, then ignore the model results. Maybe that’s why so few mutual funds do as well as stock indexes.
Simon
Dec 28 2005 at 11:06am
I think that mahalanobis (http://mahalanobis.twoday.net/topics/about+me/) summarizes the problem very well.
“Oil Analysts, Wrong Since 2001, Predict Prices Will Rise in First Quarter”
Actual headline on my Bloomberg terminal today (no web story, unfortunately). This is a nice encapsulation of the “market expert” problem–if they were right more often than chance, they would not give their opinions away for free to journalists.
Bryan, experts fail for simple reasons:
– One, they seldom really use their models
– Two, their models seldom include more than a few variables
– Three, they are closed-system logics built to forecast in an open-systems world
– Four, the models are only marginal in terms of statistical significance. They seldom explain most of the variance. This is particularly true of Behavioral Decision Theory. One of the great WEAKNESSES associated with BDT is the small amount of variance actually explained by these theories.
Dewey Munson
Dec 28 2005 at 11:38am
Then again, life’s problems are centered on the unpredictable, and the ensuing solution efforts will distort analysis of prior predictions.
Roger M
Dec 28 2005 at 12:52pm
Simon,
Trout and Bishop argue in their paper that experts fail because they don’t use statistical models at all, relying instead on intuition and expertise. Had they used any statistical model at all, even a bad one, they would have made better forecasts.
Phil
Dec 28 2005 at 12:57pm
Isn’t this somewhat of a tautology? In any field, no matter how advanced, there are things known and things unknown. So it shouldn’t be a surprise to find that for things that are unknown to experts, the experts don’t know them!
This is a bit of an oversimplification, of course, but still …
Roger M
Dec 28 2005 at 4:04pm
Phil,
I believe Tetlock was referring to predicting the future. The issue isn’t what an expert knows or doesn’t know, but how well experts can use what they know to predict the future. One would think that experts could make better predictions than nonexperts. The evidence says no.
Roger M
Dec 28 2005 at 4:06pm
Trout and Bishop use medical doctors as an example. They show that statistical techniques perform much better than physicians in diagnosing illnesses. But doctors won’t use them.
Will Wilkinson
Dec 28 2005 at 10:16pm
Let me second Roger’s promo of Trout & Bishop.
Epistemology & the Psychology of Human Judgment is a must-read for Bryan, Robin, and the people who love them.
Michael Blowhard
Dec 29 2005 at 11:12pm
Maybe a good topic for discussion — maybe an urgent one, come to think of it — in many professional fields would be, “How to acquire expertise without the usual accompanying ego-bloat.” Why do I suspect the lecturer on the topic would be a pompous, full-of-himself ass?
And — given that the experts predict no more successfully than Joe/Jill Sixpack does — maybe another good topic would be, Why do some people get to be professional pundits while the rest of us don’t? It apparently doesn’t have to do with the, er, objective worth of their opinions and predictions. What then does it have to do with?
Anna Haynes
Jan 11 2006 at 1:14am
Don’t dispense with the experts just yet. From Carl Bialik on Tetlock in the Jan. 6 WSJ:
http://online.wsj.com/public/article/SB113631113499836645-aITzpprs8gIF6EGnwOnLfa_nuYw_20070106.html?mod=tff_main_tff_top