University of Chicago economist Richard H. Thaler, probably the most important founder of “behavioral economics,” is a fantastic storyteller. In his latest book, Misbehaving, he tells, roughly chronologically, of his initial doubts about the standard economist’s “rational actor” model and how those doubts led him to set his research agenda for the next 40 years. In chapter after chapter, he tells of anomalies (bits of evidence that are inconsistent, sometimes wildly so, with the various economic models) and of his debates with the proponents of those models. In Thaler’s telling, he always won the debates. One would expect him to say that, but as someone who did not start out on his side of the debates, I think he often did win.
This is the opening paragraph of “The Case for ‘Misbehavior,’” my review of Richard H. Thaler’s new book, Misbehaving. It appears in the Fall issue of Regulation. (Scroll down.)
An excerpt that illustrates one of his main themes:
Consider what he calls the endowment effect. In laying out the effect, Thaler presents the results of two versions of a question he asks his students. In version A, he tells them that they have been exposed to a rare disease that they have a 1 in 1,000 chance of contracting. If they get the disease, they will die within a week. They can take an antidote that, with certainty, will prevent death. How much, he asks, are they willing to pay for the antidote? A typical answer is $2,000.
Then he presents the same students with version B, telling them that they can choose whether or not to enter a room in which they will have a 1 in 1,000 chance of getting that same disease. The question: how much do they have to be paid to be willing to enter the room? The answer should be something close to $2,000, possibly a little higher to reflect what economists call the “wealth effect”: if they are paid to accept a small risk, they are slightly wealthier than if they must pay to avoid a small risk. But the typical answer? $500,000. Thaler calls this phenomenon the endowment effect because, he explains, “the stuff you own is part of your endowment” and “people valued things that were already part of their endowment more highly than things that could be part of their endowment.” He gives numerous other examples that, I suspect, will ring true with most readers.
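To see how large the gap is, here is a rough back-of-the-envelope sketch. It simply divides each answer by the 1-in-1,000 risk to get the implied value of a statistical life; risk neutrality is assumed for simplicity, and the only inputs are the classroom answers quoted above.

```python
# Back-of-the-envelope: the implied "value of a statistical life" (VSL)
# from the two classroom answers quoted above. Risk neutrality is assumed
# for simplicity; the dollar figures are the reported typical answers.

risk = 1 / 1000            # chance of contracting the fatal disease

wtp = 2_000                # version A: willingness to pay to eliminate the risk
wta = 500_000              # version B: payment demanded to accept the risk

vsl_from_wtp = wtp / risk  # $2,000,000
vsl_from_wta = wta / risk  # $500,000,000

print(f"Implied VSL from willingness to pay:    ${vsl_from_wtp:,.0f}")
print(f"Implied VSL from willingness to accept: ${vsl_from_wta:,.0f}")
print(f"Ratio (WTA / WTP): {wta / wtp:.0f}x")  # 250x, far beyond any plausible wealth effect
```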
Thaler’s response to those who think that people become more rational when the stakes are higher:
One of the arguments that economists often make against Thaler’s view of humans is that most of his evidence comes from low-stakes situations in which the gains from being rational are not large. However, they assert, when the gains are large, humans tend to be much more careful. But, using evidence from the National Football League’s entry draft, Thaler makes a strong argument against this view.
NFL teams are multi-multi-million-dollar enterprises, and their draft picks represent multi-million-dollar decisions. Surely, if there is strong evidence of rationality, it would be in the NFL. But Thaler shows that NFL owners and managers seem to make poor draft decisions. For instance, he discusses the considerable evidence that teams are better off “trading down”–that is, swapping a single early-round draft pick for multiple later picks–and trading away a draft pick this year for multiple picks in future drafts, and yet few teams do this. He even tells of a conversation he had about these issues with Dan Snyder, owner of the Washington Redskins, which led Snyder to send two of his top managers to talk to Thaler and his colleague Cade Massey. Their subsequent draft picks showed that they ignored Thaler’s advice. And, as anyone who follows the Redskins knows, they paid dearly, highlighted by the bonanza of high-round draft choices they traded away for a single pick in 2012, which they used to draft Robert Griffin III.
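For intuition only, here is a toy expected-value comparison of keeping one early pick versus trading it for two later picks. All of the hit rates and surplus numbers are hypothetical assumptions made up for this sketch; they are not Thaler and Massey's estimates.

```python
# Toy illustration of the "trading down" logic. Every number here is a
# hypothetical assumption for illustration, not a figure from Thaler and
# Massey's research.

def expected_surplus(hit_rate: float, surplus_if_hit: float, picks: int = 1) -> float:
    """Expected surplus (on-field value minus salary) from a set of picks."""
    return picks * hit_rate * surplus_if_hit

# One early pick: higher hit rate and bigger upside, but a much larger contract
# eats into the surplus.
keep_early_pick = expected_surplus(hit_rate=0.55, surplus_if_hit=30.0, picks=1)

# Two later picks: each is riskier and contributes less if it hits, but the
# cheaper contracts and the extra draw can push total expected surplus higher.
trade_down = expected_surplus(hit_rate=0.40, surplus_if_hit=22.0, picks=2)

print(f"Keep the early pick:      {keep_early_pick:.1f}")  # 16.5
print(f"Trade down for two picks: {trade_down:.1f}")       # 17.6
```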
Finally:
But Thaler and Sunstein drastically understate the problems that arise because the people in government doing the nudging are also Humans, not Econs. And bureaucrats generally have bad incentives to nudge in the “right” direction. On this point, I laid out my criticisms in more detail in my review of Sunstein’s 2013 book Simpler (“Simpler? Really?” Fall 2013).
Thaler answers that he and Sunstein “went out of our way to say that if the government bureaucrat is the person trying to help, it must be recognized that the bureaucrat is also a Human, subject to biases.” He expresses his frustration that “no matter how many times we repeat this refrain we continue to be accused of ignoring it.” But the accusation is understandable, as they keep advocating government intervention.
The best way to show that they do not ignore this problem is for them to advocate taking large amounts of power out of the government’s hands. As I’ve written elsewhere, one way to reduce government power and make people more aware of government’s activities–after all, many of the problems Thaler cites are due to people’s being unaware–is to get rid of tax withholding. That way, people can be more aware of their tax bill, which is one of the major costs of government. He has not yet advocated that idea.
Maybe we should nudge him.
READER COMMENTS
ThomasH
Oct 4 2015 at 11:32pm
Sounds like an interesting research program: lab experiments in which we observe the systemic errors of “regulators” as they try to prevent “phishing.” Heck, we might even be able to throw light on why the Fed is so gung ho to raise interest rates.
Philo
Oct 4 2015 at 11:43pm
In the Thaler-Sunstein vision, perfectly rational (and public-spirited) bureaucrats nudge ordinary people to behave more rationally. But since (as Thaler admits) bureaucrats are not perfectly rational (or public-spirited), this is unrealistic. It would be only slightly less realistic to put forward a vision of ordinary people already being perfectly rational, in which case, of course, they wouldn’t need nudging. But of what use is either of these unrealistic visions? Does Thaler have some reason to advocate rational (and public-spirited) nudging by bureaucrats, rather than simply advocating rational behavior directly by ordinary people?
By the way, Thaler should take the Don Boudreaux challenge, and buy his own NFL team!
Pajser
Oct 5 2015 at 12:25am
“But Thaler and Sunstein drastically understate the problems that arise because the people in government doing the nudging are also Humans, not Econs.”
I think people in government are Professionals. They only need to be motivated to do their job as well as they can. That seems much easier than motivating Humans to make good decisions about their own lives. Individually and collectively, Professionals like engineers and physicians dramatically improve the quality of their work. Do you believe that bureaucrats cannot improve the quality of their work in a similar way? Why?
Tom Nagle
Oct 5 2015 at 7:24am
Your conclusion that “The best way to show that they do not ignore this problem is for them to advocate taking large amounts of power out of the government’s hands” makes the implicit assumption that, because government bureaucrats are often not totally rational, what they do is therefore on net harmful. That may well be true of some areas of government (e.g., the DEA), but many functions of government add to our welfare even though government tends to do things sub-optimally. Consider, for example, the EPA. Certainly it makes a lot of sub-optimal rules that make what it does excessively costly. Still, most people who have lived long enough to remember when nearly all rivers near urban areas were toxic sewers, smog was horrible, and children who grew up in urban areas all acquired lead levels sufficient to reduce their intelligence simply from breathing, would conclude that the EPA has improved our lives despite doing so inefficiently.
James Hanley
Oct 5 2015 at 7:25am
I’ve been a fan of behavioral econ for a long time. But I think it’s a poor basis for regulatory nudging, because behavioral econ is based on assumptions about preferences, and it studies whether we make rational choices in pursuit of those preferences. In studying football draft picks, the assumptions about preferences are simple and probably right. But for individuals, preferences may not be so clear and simple. Regulators may be trying to nudge people toward the regulators’ preferences rather than toward the actual preferences of the regulated.
Nathan W
Oct 5 2015 at 8:15am
Behavioural economics is a very young field. I think it will be centuries before we have a good grasp on this stuff. Meanwhile, since many aspects of people’s economic selves are socialized as opposed to “natural”, researchers are basically taking snapshots of moving targets with high standard deviation. The deviation from mean is itself a problem, but it is altogether possible that previous results will not be comparable to future results, due to changes in culture, etc. This will be one of many factors which will make it exceedingly difficult to build a consistent and credible literature in behavioural economics, especially if we aim to be precise, as opposed to merely identifying general directions of effects.
Strongly agree that bureaucrats are a) human, subject to biases. I would add b) their incentives are unlikely to be perfectly aligned with the public good, for a large number of reasons. The best-known case of b) is mission creep, where people generally like to believe that the work they do is important and will ignore evidence to the contrary rather than reallocate their efforts to a more useful area. “Bribed by self-interest” might be the right term: of course they see the benefits of expanding a program, which coincidentally increases prospects for promotions, raises, status, etc.
On the question of tax withholding: I think the point is valid, but I strongly disagree on practical grounds. Since we are not actually very rational, many people will spend their tax money throughout the year and go into debt come tax time, whereas tax withholding forces them (no choice) to internalize the expected cost on a periodic basis throughout the year. Perhaps people could receive the full paycheque, along with a little note advising them of the amount to send to the tax man within the next week? But then, any number of excuses could come along to justify delaying the payment: paying for children’s activities, paying off the credit card first, etc. On balance, I think directly withholding taxes from salaries is better.
As for nudging and bureaucrats – is there any way we can nudge them to be more rational? As a shot in the dark … perhaps small performance bonuses for nudges which are actually proven to pass a cost/benefit analysis?
Mike W
Oct 5 2015 at 8:40am
Here’s John Cochrane’s review of the White House’s recently released Social and Behavioral Sciences Team Annual Report, the government’s implementation of Nudge:
http://johnhcochrane.blogspot.com/2015/10/uncle-sam-spam_1.html
David R. Henderson
Oct 5 2015 at 9:19am
@Tom Nagle,
Your conclusion that “The best way to show that they do not ignore this problem is for them to advocate taking large amounts of power out of the government’s hands” makes the implicit assumption that, because government bureaucrats are often not totally rational, what they do is therefore on net harmful.
No, it doesn’t. I said “large amounts of power,” not all power. That’s why I gave the example I did: withholding.
@Philo,
By the way, Thaler should take the Don Boudreaux challenge, and buy his own NFL team!
I’ve never found that argument of Don’s persuasive. If you want to read many examples of businesses making bad decisions that we can pretty much know in advance are bad, read Charley Hooper’s and my Making Great Decisions in Business and Life. Our saying that doesn’t mean that we would run the businesses better. We probably wouldn’t. It does mean that on the issues we address, we would do better. It’s a simple division of labor point.
Hasdrubal
Oct 5 2015 at 11:14am
3 things:
First, which is more valuable to an NFL team? One or two wins at the margin or a superstar? If it’s the latter, does that have an impact on their optimal draft strategy? Does Thaler address this?
Second, I don’t think that the argument for nudges is that public officials are more rational than the rest of us. I think it’s that a person who is informed of common fallacies, and who is thinking about the decision being made, can more easily avoid those fallacies than the person making the decision at the time. (Think of that picture of two tables every behavioral econ book likes to show: After the second or third time you see them, you know they’re the same dimensions even if they don’t appear to be.) The problem I see with this is that the law of unintended consequences still probably holds: Sure, we’ve identified one fallacy, possibly even the dominant fallacy. But addressing that may expose or even create other problems. And the more complex the situation, the more likely you are to run into unintended consequences. (E.g. If I were to write a response to a book like Nudge, I’d show two tables in a picture similar to theirs, but the tables actually would have different dimensions to illustrate that sometimes the “right” answer isn’t always the apparently correct one.)
Finally, I’ve been skeptical of behavioral economics for a while; it seems more like a game of gotcha than an effective way of predicting general behavior. Are there any models where behavioral economics describes the world better than, say, a model assuming bounded rationality (less than perfect information, costly processing, etc.)? It always seemed to me that, while individuals may behave irrationally, the market as a whole winds up behaving more or less rationally. (Certainly more rationally than popular behavioral economics books would lead us to expect.) Sort of a “wisdom of the crowds” effect for behavior.
The main thing that makes me leery of popular representations of behavioral economics is that its proponents don’t seem to try very hard to understand different possible explanations for their “gotcha” examples. For individuals especially, they seem to assume that maximizing money is the same as maximizing utility. But if you look at what the individual is trying to maximize, sometimes the apparent irrationality is actually traditional utility maximization. Take the hedonic treadmill into account, and maximizing money itself is a pretty irrational way to maximize utility; the disutility of expending effort to choose an optimal retirement plan might actually be greater than the happiness gained by, say, a 10% increase in earnings during retirement. Love, respect, power, influence, fame, these things may be what we want to maximize, and money really doesn’t have a huge effect on them. Yet so many of the fallacies I see in behavioral economics are based on monetizing options rather than optimizing actual utility. (Not to say I don’t see the same issue in traditional rational actor based economics as well.) Maximizing welfare is a different story, as the resources represented by money have far less aggressive diminishing returns for welfare than they do for utility or happiness.
Paul Zrimsek
Oct 5 2015 at 11:40am
I don’t understand the NFL example. The same considerations that make trading down a good move should make trading up a bad move, and the only way to trade down is to find another team that’s willing to trade up.
Jim Dow
Oct 5 2015 at 11:53am
Hasdrubal: Love, respect, power, influence, fame, these things may be what we want to maximize, and money really doesn’t have a huge effect on them … (Not to say I don’t see the same issue in traditional rational actor based economics as well.)
I agree. Behavioral economics doesn’t take the psychology as seriously as it should, but the fact that it’s taking it into account at all is broadening the conversation in an important way.
People clearly differ on a number of dimensions affecting decision making, including intelligence, willingness to delay gratification, executive function, etc. One can’t simply say that government workers are human and therefore leaving it to the individual must be better; the conversation needs to be about who makes the better choices for a particular kind of decision, and that’s an empirical question.
Where I agree with the John Cochrane link is that nudges often don’t do it, and so for me a shove is sometimes required (e.g., Social Security rather than just telling everyone to save enough for retirement).
Jesse C
Oct 5 2015 at 2:35pm
It might be rational for an NFL team to trade expected success at winning for a shot at a star player. In some cases, I would expect a star player to add monetary value to a team greater than, say, a 10% increase in the probability of making it to the playoffs.
Decisions are best left to those with skin in the game, especially when the decisions are complex, and I would expect NFL franchise valuation is filled with nuanced complications.
Also, would the Redskins argument hold up if RG3 turned out to be the next Peyton Manning? That example feels like cherry picking.
I’m being a knee-jerk skeptic here. It may be that a team trading ten 2nd round draft picks for a 1st round pick is just being stupid.
Matt Moore
Oct 5 2015 at 6:54pm
Leftists are envious (give it)
Rightists are puritanical (stop it)
Philo
Oct 5 2015 at 8:10pm
“Our saying that doesn’t mean that we would run the businesses better. We probably wouldn’t. It does mean that on the issues we address, we would do better. It’s a simple division of labor point.” Well, Thaler could probably keep the entire management of his new NFL team in place or, if necessary, replace some of them with very similar NFL veteran managers, only *insisting on a change in draft strategy*.
Sean
Oct 5 2015 at 8:59pm
I am not understanding the endowment effect (at least as presented here). In the first case I pay for a known cure; in the second I am being paid for potentially dying when no cure is mentioned. In the first I am paying to possibly save my life. In the second I am being paid for possibly dying; that is, someone is giving me $500,000 when no cure is mentioned as being available. If the second scenario had specified that a cure was available, then the two scenarios would be more accurate reflections of each other. Otherwise, I do not see why the monetary amounts should be equal (or roughly equal) between the two. Or am I just perversely clinging to my endowment effect?
blink
Oct 5 2015 at 10:43pm
The NFL draft is an important example. Indeed, we should expect rational behavior with such high stakes. Thaler’s example really shows the limits of *his* knowledge, specifically his knowledge of an NFL team’s objective function. (@Hasdrubal and @Jesse C make good points about what the true objective may be.)
So Thaler’s example proves too much. Instead of demonstrating irrationality in the NFL, Thaler’s analysis instead illustrates the hubris of the social planner who claims to know individuals’ preferences better than they do themselves.
Minderbinder
Oct 6 2015 at 11:37am
If someone tells you that you can choose whether or not to enter a room in which you will have a 1 in 1,000 chance of getting a rare and incurable disease, and then asks you how much they’d have to pay you to go into that room, then the rational response is to punch the person and call the police and/or CDC. Because that person is crazy and there’s obviously some sort of sick game being played.
I can’t imagine why people wouldn’t behave rationally to this fabricated situation that no human has ever found themselves in.
John Wake
Oct 10 2015 at 1:35pm
Great comment thread!