Escaping Paternalism Book Club: Part 2
By Bryan Caplan
Rizzo and Whitman now devote two chapters to critiquing (a) the underlying empirics of behavioral economics, and (b) the way behavioral economists market these empirics. In Chapter 4, RW go after “defective” preferences; in Chapter 5, they reconsider “biased” beliefs.
RW repeatedly stress the modesty of their project. They aren’t saying that behavioral economics is worthless, just oversold. If only behavioral economists had vetted their own work with the same (motivated?) skepticism they’ve honed for standard economics! RW take up this neglected task, and claim to “show that the preferences deemed better or more ‘true’ by paternalists are often just as questionable on behavioral grounds, if not more so.”
Chapter 4’s main applied topics are: (a) hyperbolic discounting, including preference reversals and intransitivities; (b) endowment effects, including loss aversion and status quo bias; and (c) poor affective forecasting (failure to correctly predict how outcomes will make you feel).
1. RW argue that hyperbolic discounting could arise because of our subjective perception of time:
People do not necessarily perceive time in the way that the calendar or the number-line portrays it. When asked, “How long do you consider the duration between today and a day some distance in the future,” with the interval ranging from three months to thirty-six months, people answer in a nonlinear fashion. For example, while the time horizon from three months to one year grows 300 percent by calendar measure, it grows only 35 percent in subjective duration…
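A toy sketch (my illustration, not RW's) of how this works: if an agent discounts exponentially over *subjective* time, and subjective duration grows sublinearly with calendar time, the implied calendar-time discounting looks hyperbolic. The power-law form and the exponent are assumptions, chosen only to roughly match the quoted 300%-versus-35% figures.

```python
import math

def subjective_duration(months, alpha=0.22):
    """Perceived length of a calendar interval of `months` months.
    The power-law form and alpha=0.22 are hypothetical, fitted to the
    quoted survey numbers; they are not from the book."""
    return months ** alpha

def discount_factor(months, rate=0.5, alpha=0.22):
    """Exponential discounting applied to subjective, not calendar, time."""
    return math.exp(-rate * subjective_duration(months, alpha))

# The calendar interval from 3 months to 12 months grows 300%...
calendar_growth = (12 - 3) / 3          # 3.0, i.e., 300%

# ...but subjective duration grows only about 35%:
subjective_growth = (subjective_duration(12) - subjective_duration(3)) / subjective_duration(3)
```

Under these assumed parameters, discounting that is perfectly exponential in perceived time declines steeply at first and then flattens when plotted against the calendar, which is exactly the hyperbolic signature.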
2. In any case, preference reversals for intertemporal choice are rather rare – and often in the opposite of the expected direction:
In recent research there have been experiments that elicit preferences by questioning participants over the passage of actual time, and not simply at a point in time as in the process just described. These are called “longitudinal” studies. A large majority of individuals do not actually switch over time: those who are patient with regard to the distant decision remain patient and those who are impatient remain impatient (Read et al. 2012). Relatedly, Halevy (2015) finds that only 10 percent of participants are actually time inconsistent in a longitudinal study. Furthermore, the ubiquity of impatient preference reversals is in doubt. The longitudinal experiments (Read et al. 2012; Sayman and Öncüler 2009) have found a very large number of shifts from SS to LL – that is, patient reversals – as the distant decision becomes more nearly immediate. In fact, in two experiments conducted by Read and coauthors (2012) the numbers of impatient and patient reversals were roughly equal.
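For readers who want the SS/LL mechanics spelled out: the standard beta-delta (“quasi-hyperbolic”) model is the usual formalization of the *impatient* reversal the literature expects. A sketch with purely illustrative parameters (not from the studies cited) shows why the predicted reversal runs from larger-later to smaller-sooner as the sooner date approaches:

```python
def present_value(amount, delay, beta=0.7, delta=0.99):
    """Beta-delta discounted value of `amount` received `delay` days from now.
    Parameters are illustrative, not estimates from Read et al. or Halevy."""
    if delay == 0:
        return amount                      # immediate rewards escape the beta penalty
    return beta * (delta ** delay) * amount

# SS = smaller-sooner ($100), LL = larger-later ($120, a week after SS).
# Viewed 30 days out, both rewards are in the future, so beta hits both
# equally and the patient option LL wins:
far_SS = present_value(100, 30)
far_LL = present_value(120, 37)

# Once SS becomes immediate, beta penalizes only LL, and the agent
# reverses toward SS -- the "impatient" reversal:
now_SS = present_value(100, 0)
now_LL = present_value(120, 7)
```

RW's point is that when experimenters actually track people through time, many subjects reverse in the *opposite* direction (SS to LL), which this model cannot generate for any beta below one.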
3. Does time inconsistency actually hurt individuals? The evidence is thin, but RW discuss some fascinating results:
In [Berg et al.]’s own study of 881 participants, they find that time-inconsistent individuals earn substantially more than time-consistent ones across the forty time trade-off payoff options in the experiment. This is because the time-consistent individuals are, for whatever reason, consistently more impatient in the choices…
Therefore, if time inconsistency is to be deemed irrational in more than a presumptive neoclassical sense, the comparison cannot be relative to just any time-consistent agent. It must be with a suitably patient time-consistent agent. Impatience in whatever form can depress lifetime earnings or wealth. Why single out inconsistency as the problem?
4. When endowment effects matter, Willingness to Pay (WTP) is less than Willingness to Accept (WTA). Behavioral economists tend to want to treat WTP as the suspect measure. But this raises a host of issues. Case in point:
[A]dopting WTP rather than WTA as the appropriate norm for “true” preferences would mean abandoning the case for many paternalist interventions, such as new labor-friendly default rules that offer workers additional contractual benefits. The alleged superiority of such rules rests on the implicit assumption that WTA is the right valuation.
An exquisitely clever point:
[I]f status quo bias is the true explanation for the endowment effect, it casts doubt on the validity of paternalist nudges whose “sticking power” depends on it. Suppose a new default rule entitles workers to paid vacation time (presumably funded by lower wages). Now endowed with this new benefit, workers resist giving it up during contract negotiations. But why? Simply because the new rule is now the status quo. If status quo bias is indeed irrational, then the persistence of the new status quo offers no grounds for thinking the new rule is an improvement.
[T]he experimental evidence we do have is for an “instant endowment effect.” This means that the experimenters test for WTA–WTP gaps and reluctance to exchange within a few minutes after the subjects are given a mug or some other good (Ericson and Fuster 2014, 557). How these subjects react after they have possessed the good for some time (a day, week, month, or more) is unknown. Does the novelty of the gift wear off, or do they become more attached?
Chapter 5’s main applied topics are: (a) the functions of beliefs and learning; (b) violations of classical logic; (c) the conjunctive effect; (d) Bayes’ Rule, base-rate neglect, and belief revision; (e) availability bias; (f) salience; and (g) overconfidence. A few highlights:
1. Biased beliefs can offset confused preferences, or even other biased beliefs.
Varki (2009), among others, argues that optimistic illusions may have had adaptive value for early humans because they counterbalanced the fear of death and oblivion that came with the emergence of conscious foresight. In short, the “best” beliefs for attentional, motivational, and even survival purposes may not be the most correct from the standpoint of truth.
2. Logical consistency is overrated:
From a pragmatic perspective, the case for limiting the role of logic is even stronger. It is uneconomic for the agent who wants to attain his goals efficiently to worry about the consistency of all of his beliefs. The inconsistency of an entire system of beliefs is likely to be vast.
Trying to make all your beliefs consistent is as foolish as trying to keep your house perfectly clean.
3. Experimental subjects often make “mistakes” because the experimenters expect the subjects to interpret all instructions literally. In real life, you’re not supposed to do so!
The behavior of the experimenters violates the expected norms of conversational interaction (Grice 1989). Among these norms is the maxim of relevance, which says that in a cooperative setting, people assume that their interlocutors present them with the information required for current purposes and no more.
4. In real life, people often neglect base rates because they should:
Consider the case of a doctor who takes a job in a new clinic… A significant number of her patients test positive for HIV. Now, if the doctor applied Bayes’ rule using base rates from the national population – where the fraction of people with HIV is small – she would have to conclude that most of these are false positives. Fortunately, the doctor is smarter than that… So she adjusts her priors away from the base rates, and thus concludes that many of the positive test results are probably true positives.
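The arithmetic behind the doctor's reasoning is worth making explicit. Here is a worked Bayes'-rule computation; the prevalence and test-accuracy numbers are hypothetical, chosen only to show how the choice of prior drives the verdict on a positive test:

```python
def posterior_positive(prior, sensitivity=0.99, specificity=0.98):
    """P(infected | positive test) via Bayes' rule.
    Sensitivity/specificity figures are illustrative assumptions."""
    true_pos = prior * sensitivity            # P(positive & infected)
    false_pos = (1 - prior) * (1 - specificity)  # P(positive & healthy)
    return true_pos / (true_pos + false_pos)

# Using the national base rate (say 0.3%), most positives are false:
national = posterior_positive(prior=0.003)

# Using a prior that reflects the clinic's high-risk population (say 20%),
# most positives are true -- the doctor's adjusted conclusion:
clinic = posterior_positive(prior=0.20)
```

Nothing about Bayes' rule itself tells the doctor which prior to use; RW's point is that mechanically importing the national base rate would be the real mistake here.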
5. Availability bias doesn’t matter much when you have lots of first-hand experience:
[W]hen economics students and nursing students were asked to estimate the frequency of deaths from various causes in their own age cohorts, the whole picture changed – in fact, the bias vanished (Benjamin et al. 2001). The logic of this result is compelling. In a world of scarce resources, people have a tendency to learn what is in their interest to learn. The students, by and large, did not need to know population figures, but they did find it useful to have a decent idea of the hazards they actually face in their age groups.
6. The standard evidence of overconfidence is largely artifactual. People seem overconfident if you ask them their confidence question-by-question. But if you ask them for their overall accuracy, there is often no sign of overconfidence. Format matters.
What all this format dependence ultimately means for an understanding of “overconfidence” phenomena is unclear. This is because there is no good theory to guide us in determining which measures are most relevant to pragmatic concerns. In other words, we do not know which formats mirror the process by which real-world individuals evaluate their own knowledge and, most importantly, make decisions about significant matters.
Chapters 4 and 5 are packed with good points. Behavioral economists should be more self-critical. There is considerable contrary empirical evidence they should take to heart. Most remarkably, RW manage to combine a careful survey of the conventional view with insightful critique.
Yet ultimately, the main results of behavioral economics seem pretty solid to me. In particular:
1. The whole idea of time inconsistency is that people predictably change their minds. If this is not a strong sign of irrationality, what is? As I claimed last week, the time consistency literature doesn’t go far enough. Discounting the future purely because it is in the future should be classified as irrational even if you do so with perfect consistency.
2. The evidence RW present on the subjectivity of time is credible. However, we should interpret it as a further sign of irrationality rather than use it to rationalize time-inconsistent choices.
3. RW flatly deny the irrationality of loss aversion (and, by extension, endowment effects):
Loss aversion would then seem to be a taste variable no different from the nonpecuniary aspects of labor that economics has recognized from early on. Loss-averse agents happen to attach value to changes in wealth, with greater value attached to a loss than to the equivalent gain.
I get where they’re coming from. I whole-heartedly love the kids I have, even though I recognize that I would have felt the same way about very different offspring. In common-sense terms, I see nothing “irrational” about this. On the other hand, I would deem it highly irrational to fall in love with the peaches I bought yesterday, knowing full well that I would have felt the same love for whatever peaches I purchased. Can I formally model this distinction? No, but it seems dogmatic to deny the silliness of getting attached to a specific bag of peaches.
4. RW’s discussion of the “maxim of relevance” is thought-provoking. Still, how much does it really buy them? Suppose experimenters loudly and clearly announced, “Focus on the literal meaning of all our instructions.” Would that really lead anyone to avoid the conjunctive fallacy? Similarly, think about how long it took humans to apply the experimental method. Thinking in clear-cut, literal terms yields enormous gains – but the experimental method has only been around for a few centuries, and only a few people really understand it even today.
5. RW sternly remind us:
In a Bayesian framework, all probabilities are conditional. The priors are conditional on everything the agent believes as background knowledge. This knowledge may include base rates, but not exclusively. In the subjectivist version of Bayesianism, any prior probability would be allowable. The rationality of Bayes’ theorem begins after the agent has chosen his priors.
In most experiments, however, the descriptions are so austere that using any prior probability other than the base rates is bizarre. Suppose, for example, experimenters tell you that the balls in an urn are half blue, half green. Next they ask, “What is the probability that you draw a green ball?” Sure, you could say, “My prior probability says that experimenters always stack green balls on top, so the chance that the first ball I draw will be green is 100%.” Isn’t that absurd, though?
In a sense, RW accidentally show that behavioral economics sets the bar of rationality much too low. Rationality is actually a matter of substance, not form alone.
6. For availability bias and overconfidence, the reasonable prior is that they’re serious. Ponder all you’ve seen. Human beings overweight rare, vivid events. Human beings are overconfident. These are ubiquitous cognitive flaws. We should have believed this before any experimental evidence arrived, because daily life overwhelmingly affirms these patterns. And we should continue to believe these problems are severe even if the scientific evidence of these biases is fragile. So while RW do a fine job of exposing researchers’ overconfidence in their own research, we should only marginally change our minds about human psychology itself.