"You can easily prove that people are irrational if you tightly constrain the choice environment."
Behavioral psychologists and behavioral economists have found a great deal of fault with economists' analytical methods. Their criticisms follow a standard format. Modern behavioralists first posit that mainstream (neoclassical) economists assume, in their models, that people are perfectly rational (or nearly so) as they go about their daily lives, which is to say that people make exacting estimates of the costs and benefits (discounted precisely for risks and time) in almost every choice situation. The behavioralists then assert that many modern economists really do believe their own models; that is, they really do believe that people are perfectly rational in their daily dealings.
The behavioralists insist that, contrary to what economists posit, people are not just irrational in a variety of ways; they are "predictably irrational." But people are more predictably rational than the behavioralists' methods suggest. Indeed, that is what I've found, at least with respect to one prominent behavioralist finding.
Choices between Sure Things and Gambles: A Source of Irrationality?
Modern behavioralists have devised surveys and experiments that uncover a host of "anomalies" in people's decisions and behavior. These surveys and experiments purport to show that people are beset with an array of serious decision-making biases, which means that they are not—and, by extension, cannot be—as perfectly rational as economists assume. Indeed, behavioralists have uncovered so many decision-making biases and flaws resulting in so many documented "irrationalities" among their varied human subjects that one has to wonder why they believe themselves capable of writing rationally on the subject of people's irrationalities.
Behavioralists then dismiss much modern economic analysis as misleading, if not totally wrongheaded, and suggest that the better way to predict economic behavior is through surveys and experiments on how people actually make decisions. Because they are so easily replicated, laboratory experiments are the "fruit flies" of social science, enabling the experimenters to conduct a frame-by-frame analysis of what people do, not what economists deduce they do.
Modern behavioralists have been rewarded for their detective work with widely read, sometimes best-selling books, the most prominent of which are Richard Thaler and Cass Sunstein's Nudge (2008) and Dan Ariely's Predictably Irrational (2008). One behavioral psychologist, Daniel Kahneman, was awarded a Nobel Prize, in part for his relentless debunking of the presumption of perfect or unbounded rationality that economists perpetuate.
Behavioralists certainly have important points. People are not, and cannot be, as rational as economists say. When pervasive real-world scarcity forces people to make choices, perfection in anything is not a viable option. Perfection is simply not worth the time and mental energy required to achieve it. Besides, as Herbert Simon recognized a half-century ago, the human brain is not sufficiently powerful, and the real world is too replete with uncertainties, for the refined calculations necessary for perfectly rational choices to be made.
In addition, if people were, in fact, as fully rational as economists assume, there would be no point to economics professors' explaining perfect rationality to their students. Perfectly rational students would surely be able to understand, without instruction, their own perfect rationality. Thus, all economic education would be a waste, devoted to explaining the nature and implications of rationality that perfectly rational ordinary people, not just privileged economists, would already know. Indeed, if people were perfectly rational, all education, not just economics education, would be a waste.
Predictably Rational Behavior from "Irrationalities"
But to see the problems with this kind of thinking, consider one of the main experiments that behavioralists Kahneman and Amos Tversky use as evidence for the limitations of perfect rationality as a behavioral premise. They offer their subjects two options: Option A is a "sure thing," carrying a payoff of, say, $800. Option B is a gamble with an expected payoff of $850: The subjects have an 85-percent chance of receiving $1,000 and a 15-percent chance of getting nothing. The behavioralists report that a "large majority" of subjects choose Option A, in spite of its having an expected value $50 lower than Option B. According to behavioralists, this majority choice demonstrates a form of "bounded rationality." In other words, the subjects' rational decision making is impaired by mental constraints on information processing and calculating capacity, not the least of which is risk aversion (with risk aversion evident in people heavily favoring Option A).
I have repeated this exact choice experiment with my fully employed and executive (business-seasoned) MBA students for several years at the start of their first class—before we discuss rationality, decision making, or any microeconomic concepts and lines of analysis. Just as Kahneman and Tversky report, a "large majority"—between 70 and 85 percent—of my MBA students choose Option A, the sure thing. But would conventional economic thinking fail to predict such an outcome? Not really. As Dwight Lee explained four decades ago (and economists in earlier epochs have presumed), expected value is not all that matters for rational decision making. What the behavioralists miss is that variance in outcomes is also consequential in assessing options. Option A has no variance; Option B has a substantial variance, with the outcome ranging from zero to $1,000. Hence, for many choosers, Option A can be more valuable than Option B. Indeed, if expected value were all that mattered, people would never buy insurance. Is the purchase of insurance irrational?
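The point about variance can be made concrete with a quick calculation. The sketch below, using only the payoffs given above, computes the expected value and standard deviation of each option; it shows how modest the $50 gap in expected value is next to the gamble's spread:

```python
import math

# Option A: a sure $800 -- no variance at all.
ev_a, sd_a = 800.0, 0.0

# Option B: an 85% chance of $1,000, a 15% chance of nothing.
p, payoff = 0.85, 1000.0
ev_b = p * payoff                        # expected value: $850
sd_b = math.sqrt(p * (1 - p)) * payoff   # standard deviation: about $357

print(f"Option A: EV=${ev_a:.0f}, SD=${sd_a:.0f}")
print(f"Option B: EV=${ev_b:.0f}, SD=${sd_b:.0f}")
```

A chooser who gives any weight at all to the roughly $357 spread of Option B can rationally prefer the sure $800, exactly as most subjects do.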
Economists have no particular expertise on why people value things as they do. Indeed, the (negative) value of variance in choice situations can be expected to differ among choosers as much as the value of chocolate in candy bars varies across buyers of candy. The main thing we economists can say is that people's choices will vary predictably with the prices they face. This means that economists cannot know a priori how people will choose between the two options. They can, however, predict that if the payoff from Option A falls from $800 to a lower number, more people will choose Option B. Call this "predictable rationality." My MBA students appear to be predictably rational, given that the percentage choosing Option A falls progressively as the sure thing drops to $750 and below. This raises a question: Why did the experimenters set the spread between the sure-thing payoff and the expected value of the gamble so low—only $50? Was it to help the behavioralists demonstrate a high degree of "irrationality"?
Similarly, economists can predict that if the variance of Option B declines, holding Option A constant, a higher percentage of people will choose Option B. I've also run an experiment in various classes that reduces the variance of Option B. For example, I've started the experiment by giving students one draw of one "coupon" from "Barrel A," which contains only coupons with a redemption value of $800. Or they have ten draws from "Barrel B," in which 85 percent of the "coupons" have redemption values of $100 and the rest have redemption values of zero. Sure enough, the percentage of students choosing Barrel B is higher than in my first experiment described above. When I tell them that they have 100 draws from Barrel B, with 85 percent of the coupons worth $10, the percentage choosing Barrel B rises yet again. The students are predictably rational, not predictably irrational.
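The logic of the barrel variants is the logic of diversification: splitting the gamble into many small, independent draws leaves the expected value at $850 but shrinks the spread with the square root of the number of draws. A sketch of the arithmetic, assuming the draws are independent (as the setup implies):

```python
import math

def barrel_b(draws, payoff, p=0.85):
    """Expected value and standard deviation of `draws` independent
    draws, each paying `payoff` with probability `p`, else zero."""
    ev = draws * p * payoff
    sd = math.sqrt(draws * p * (1 - p)) * payoff
    return ev, sd

# The three variants described in the text.
for draws, payoff in [(1, 1000), (10, 100), (100, 10)]:
    ev, sd = barrel_b(draws, payoff)
    print(f"{draws:>3} draws of ${payoff}: EV=${ev:.0f}, SD=${sd:.1f}")
```

The standard deviation falls from roughly $357 (one draw of $1,000) to about $113 (ten draws of $100) to about $36 (one hundred draws of $10), which is why ever more students rationally take the barrel as its variance shrinks.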
Behavioralists also argue that how the options are "framed" influences choices. I agree. When I have described Options A and B as "Business Venture A" and "Business Venture B," the percent of MBA students who choose "Business Venture B" plunges relative to the percent who choose "Option B." The change in wording may cause the students to think more explicitly about the financial gain for themselves (or their bosses) in the two ventures. Perhaps, also, the students construe "Business Venture B" as having a lower variance than "Option B" because in business, any one venture is commonly a component of a whole portfolio of ventures, with the implied diversification in the portfolio dampening the risk involved in ventures taken separately.
In the original choice experiment between Options A and B, even if the subjects were told, after their choice, what percent chose each option, they were given no chance to act on that information. In other words, the experiments explicitly did not allow for learning. Moreover, the subjects had no personal incentive, or stake, in the choices they made, which could have meant that many subjects made choices carelessly (and rationally so).
I partially remedied these deficiencies in runs of the experiment by telling my MBA students that 75 percent had chosen Option A, with the remainder choosing Option B. I then assigned them a team paper to address two questions:
- First, given the division of the choices between A and B, were choosers of Option A or Option B leaving money on the table?
- Second, if money was being left on the table, what means could they devise to pick up a portion or all of that money?
For the overwhelming majority of the 160 or so students in my three classes, the assignment appeared to be easy (I've infrequently assigned grades on this paper below A-). They all concluded that the expected value of Option B was $850, which meant that an average of $50 was being left on the table by each Option-A chooser. Furthermore, virtually all of the teams readily developed a way in which all or a portion of the additional dollars could be pocketed. Most teams proposed that students cooperate, all agreeing to choose Option B, and share their collective proceeds equally. Not a bad solution in a relatively small class setting.
Other student teams devised a more entrepreneurial solution. They proposed to induce Option A choosers, by offering them a sure thing of $801, to switch their choice to Option B on condition that the draw would be handed over to those making the $801 payment. Some students even realized that a sure-thing offer of $801 might not work, but only because others could be expected to come up with the same solution, which could lead to a bidding war for choosers of Option A (the ranks of whom could be expected to swell as the sure-thing price rose and would-be Option B choosers moved progressively, or pretended to move, to Option A).
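The entrepreneurial solution is straightforward expected-value arbitrage, and its limits follow from the same arithmetic. In the sketch below, the $801 offer comes from the text; the break-even ceiling on the bidding war is my inference from the gamble's expected value:

```python
# Buy out an Option A chooser with a sure payment, keep the gamble.
p, payoff = 0.85, 1000.0
ev_gamble = p * payoff             # $850 expected per acquired draw

offer = 801.0                      # sure thing offered to an A-chooser
expected_profit = ev_gamble - offer
print(f"Expected profit per switcher: ${expected_profit:.0f}")

# Competing bidders push the offer upward; the expected profit
# vanishes when the offer reaches the gamble's expected value.
break_even = ev_gamble             # $850 ceiling on the bidding war
print(f"Bidding-war ceiling: ${break_even:.0f}")
```

At an offer of $801, each switcher is worth $49 in expected value to the bidder; a bidding war among risk-neutral bidders would drive the sure-thing offer toward $850, transferring the money on the table to the Option A choosers themselves.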
A handful of student teams also suggested that the reluctance of the Option A choosers to gamble on Option B could be alleviated partially by offering Option A choosers insurance against the downside risk of receiving nothing when choosing Option B. The price of the insurance, of course, would take away some of the gain from choosing Option B.
Over the years, several of my students have taken the analysis further and recognized that if Options A and B were real-world business ventures, their market values would rise and fall with the division of the choices made between them. If a large majority of choosers selected Option A, then surely people in real-world market settings offering Option A would have an incentive to lower its "list price" of $800, causing a drift in the division of choices toward Option B. Think this wouldn't happen? It does. Hordes of risk-averse people seeking security buy government treasury bonds, which are safer than corporate bonds. As a result, the rate of return on government bonds relative to corporate bonds falls, with the spread causing many would-be government-bond buyers to opt instead for the higher return on corporate bonds.
The moral of my sequence of classroom experiments is simple: You can easily prove that people are irrational if you tightly constrain the choice environment, barring choosers from knowing what others are doing, preventing choosers from correcting errant decisions, and ensuring that "bad" choices do not have economic (and monetary) consequences, with subsequent effects on people's incentives to learn from and act on their own and others' errant choices. In such environments, drawing out irrational decisions is like shooting fish in a barrel.
This is not to say that people are "perfectly rational." "Perfect rationality" as a statement of human nature, as distinguished from a theoretical device, makes for evolutionary nonsense. Had hominids sought to achieve perfect rationality in their decisions, they would have died out as they spent all of their time refining their cost/benefit calculations: Spending too much time calculating the possibility of outrunning a velociraptor would, no doubt, have had dire results. Perfection in choices is impossible, and uneconomical, just as, in a world of scarcity, developing a perfectly safe automobile is impossible—and uneconomical.
People, including economists, are imperfect decision makers because of their mental limitations. But this fact does not mean that markets fail. Indeed, markets do far more than induce improved allocation of resources, given wants and resources. Markets induce market participants to be more rational than they otherwise would be because they must pay a price for being irrational. Thus, markets allow—no, require—economists to assume that people are more rational than they appear to be in laboratory settings that lack meaningful information, incentives, and market pressures.
See, for example, Richard H. Thaler. 2001. Anomalies. Journal of Economic Perspectives 15(1): 219-232.
The decision-making biases identified by behavioralists include the availability bias, relativity bias, diagnosis bias, anchoring bias, herding bias, arousal bias, endowment bias, optimism bias, status quo (or inertia) bias, representativeness bias, loss-aversion bias, and planning bias. Many of these decision-making biases are covered in the work of Dan Ariely (2008. Predictably Irrational: The Hidden Forces that Shape Our Decisions. New York: HarperCollins Publishers) and Richard Thaler and Cass Sunstein (2008. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, Conn.: Yale University Press).
For a brief summary of the behavioral approach and severe criticisms of the economic approach, see Ariely (2008, pp. xi-22). In his book that brings together the many anomalies in decision making he has uncovered, Richard Thaler recognizes that devising accurately descriptive models of human behavior is difficult because many theorists have a "strong allergic reaction to data." Moreover, economic models based on the rationality premise are "elegant with precise predictions," while behavioral work tends to be "messy, with much vaguer predictions." He then asks, "But... would you rather be elegant and precisely wrong, or messy and vaguely right?" (1992. The Winner's Curse: Paradoxes and Anomalies of Economic Life. New York: Free Press, p. 198).
See Daniel Kahneman and Amos Tversky ("Choices, values, and frames," reprinted in Choices, Values, and Frames, eds. Daniel Kahneman and Amos Tversky. Cambridge, U.K.: Cambridge University Press, 2000, pp. 1-16) and Dan Ariely (2008, p. xxi).
Herbert Simon. 1957. Models of Man. New York: John Wiley & Co.
Kahneman and Tversky 2000.
See Dwight Lee (1969. "Utility analysis and repetitive gambling." American Economist 13(2): 87-91). I have also shown that many economic luminaries from Adam Smith onward have viewed economic analysis as a partial view of the constellation of motivations that people have (see Predictably Rational? chaps. 3-5).
Several teams indicated a willingness to borrow the required funding (meaning that lenders charging interest payments would siphon off some of the money being left on the table).
Richard B. McKenzie is the Walter B. Gerken Professor of Enterprise and Society in the Paul Merage School of Business at the University of California, Irvine.