I can’t believe I took so long to discover Thomas Gilovich’s excellent How We Know What Isn’t So. I’ve read almost all of the semi-popular books in cognitive psychology, and this turns out to be one of the best. All too often, cognitive psychologists exhibit the very biases they denounce – overconfidence, to take just one. Gilovich makes a determined effort to present experimental results without overstating the generality of the findings.

My favorite example: Cognitive psychologists spend a lot of time belittling personal experience in favor of objective statistics. “Don’t give me another ‘man-who’ story,” they often say, as in “Once I knew a man who…” Gilovich goes over the same evidence, but then he surveys a whole other literature on our tendency to “believe what we are told” without adequately discounting for the reliability of the source, imperfect chains of communication (ever played “the telephone game”?), and so on. He then has the decency to point out that these two literatures are in tension:

Upon reading this chapter, the reader may feel in a bit of a bind. The implication of much of the earlier part of this book is that our habitual ways of evaluating evidence are subject to error, and that therefore we can be misled by the apparent lessons of everyday experience… Because personal experience is not an infallible guide to truth, we must augment it (augment it more than we apparently do) with relevant background statistics.

But Gilovich admits this conclusion is a bit hasty:

That is all well and good, but how do we get these background statistics?… We generally do not collect the base-rate data ourselves; it must be obtained from secondhand sources. Moreover, few people have the wherewithal to look up (and decode) the relevant data in scientific journals, and so their exposure is limited to the summaries presented by various media outlets. But alas, as we have just seen, the summaries presented for mass consumption are often terribly distorted.

Gilovich then reviews some scary – and bogus – late-’80s statistics on the probability of non-drug-using heterosexuals in the West contracting AIDS. Oprah seriously suggested that 20% of heterosexuals might die from AIDS by 1990. Quoth Gilovich:

Fortunately for those who did not rein in their sexual habits, the base-rate implied by these alarmist accounts was way off the mark… In marked contrast, people’s personal experience, fallible as it may sometimes be, would have given them a much more accurate estimate of the risk of the heterosexual AIDS threat. The vast majority of the U.S. population cannot think of a single person who has contracted AIDS through heterosexual intercourse, nor can they think of someone who knows someone who has… [T]he true threat is much closer to that intuited by personal experience (“It cannot be that pervasive, no one I know has it”) than that implied by alarmist media accounts (“One in five heterosexuals could be dead in the next three years”).

Gilovich’s bottom line is pure common sense. Before you chuck personal experience in favor of secondhand statistics:

1. Consider the source.

2. Distrust projections.

3. Look for “sharpening” and “leveling” – i.e., stories that are too tidy to be true.

4. Beware of testimonials.

As Hank Hill would say: Yep.