Game theory is the science of strategy. It attempts to determine mathematically and logically the actions that "players" should take to secure the best outcomes for themselves in a wide array of "games." The games it studies range from chess to child rearing and from tennis to takeovers. But the games all share the common feature of interdependence. That is, the outcome for each participant depends upon the choices (strategies) of all. In so-called zero-sum games the interests of the players conflict totally, so that one person's gain always is another's loss. More typical are games with the potential for either mutual gain (positive sum) or mutual harm (negative sum), as well as some conflict.
Game theory was pioneered by Princeton mathematician John von Neumann. In the early years the emphasis was on games of pure conflict (zero-sum games). Other games were considered in a cooperative form. That is, the participants were supposed to choose and implement their actions jointly. Recent research has focused on games that are neither zero-sum nor purely cooperative. In these games the players choose their actions separately, but their links to others involve elements of both competition and cooperation.
Games are fundamentally different from decisions made in a neutral environment. To illustrate the point, think of the difference between the decisions of a lumberjack and those of a general. When the lumberjack decides how to chop wood, he does not expect the wood to fight back; his environment is neutral. But when the general tries to cut down the enemy's army, he must anticipate and overcome resistance to his plans. Like the general, a game player must recognize his interaction with other intelligent and purposive people. His own choice must allow both for conflict and for the possibility of cooperation.
The essence of a game is the interdependence of player strategies. There are two distinct types of strategic interdependence: sequential and simultaneous. In the former the players move in sequence, each aware of the others' previous actions. In the latter the players act at the same time, each ignorant of the others' actions.
A general principle for a player in a sequential-move game is to look ahead and reason back. Each player should figure out how the other players will respond to his current move, how he will respond in turn, and so on. The player anticipates where his initial decisions will ultimately lead, and uses this information to calculate his current best choice. When thinking about how others will respond, one must put oneself in their shoes and think as they would; one should not impose one's own reasoning on them.
In principle, any sequential game that ends after a finite sequence of moves can be "solved" completely. We determine each player's best strategy by looking ahead to every possible outcome. Simple games, such as tic-tac-toe, can be solved in this way and are therefore not challenging. For many other games, such as chess, the calculations are too complex to perform in practice—even with computers. Therefore, the players look a few moves ahead and try to evaluate the resulting positions on the basis of experience.
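"Look ahead and reason back" can be made concrete in a few lines of code. The sketch below (an illustration, not from the article) solves tic-tac-toe exactly by exploring every possible continuation and working backward from the terminal positions, just as the text describes:

```python
from functools import lru_cache

# Backward induction ("look ahead and reason back") applied to tic-tac-toe.
# A board is a tuple of 9 cells, each 'X', 'O', or ' '. X maximizes the
# game value, O minimizes it.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Game value under best play: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    # Look ahead to every possible continuation, then reason back.
    results = (value(board[:i] + (to_move,) + board[i + 1:],
                     'O' if to_move == 'X' else 'X') for i in moves)
    return max(results) if to_move == 'X' else min(results)

empty = (' ',) * 9
print(value(empty, 'X'))  # 0: with best play on both sides, tic-tac-toe is a draw
```

The same recursion would in principle solve chess; it is only the astronomical number of positions, not the logic, that makes that infeasible in practice.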
In contrast to the linear chain of reasoning for sequential games, a game with simultaneous moves involves a logical circle. Although the players act at the same time, in ignorance of the others' current actions, each must be aware that there are other players who, in turn, are similarly aware, and so on. The thinking goes: "I think that he thinks that I think...." Therefore, each must figuratively put himself in the shoes of all and try to calculate the outcome. His own best action is an integral part of this overall calculation.
This logical circle is squared (the circular reasoning is brought to a conclusion) using a concept of equilibrium developed by the Princeton mathematician John Nash. We look for a set of choices, one for each player, such that each person's strategy is best for him when all others are playing their stipulated best strategies. In other words, each picks his best response to what the others do.
Sometimes one person's best choice is the same no matter what the others do. This is called a dominant strategy for that player. At other times, one player has a uniformly bad choice—a dominated strategy—in the sense that some other choice is better for him no matter what the others do. The search for an equilibrium should begin by looking for dominant strategies and eliminating dominated ones.
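The equilibrium condition, that each player's strategy be a best response to the others', is easy to check mechanically. Here is a minimal sketch for two-player games in normal form; the game, its payoffs, and the strategy names are illustrative assumptions, not taken from the article:

```python
# payoffs[(r, c)] gives (row player's payoff, column player's payoff).

def pure_nash_equilibria(payoffs, rows, cols):
    """Strategy pairs where each player's choice is a best response to the
    other's, i.e. neither player gains by deviating unilaterally."""
    equilibria = []
    for r in rows:
        for c in cols:
            row_ok = payoffs[(r, c)][0] == max(payoffs[(r2, c)][0] for r2 in rows)
            col_ok = payoffs[(r, c)][1] == max(payoffs[(r, c2)][1] for c2 in cols)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# An illustrative coordination game: both players prefer to match, and
# matching on 'opera' pays more than matching on 'football'.
game = {
    ('opera', 'opera'):       (2, 2),
    ('opera', 'football'):    (0, 0),
    ('football', 'opera'):    (0, 0),
    ('football', 'football'): (1, 1),
}
print(pure_nash_equilibria(game, ['opera', 'football'], ['opera', 'football']))
# Two equilibria: the circular "I think that he thinks..." reasoning
# closes at either matched pair.
```

That this toy game has two equilibria already hints at the incompleteness discussed below: the equilibrium concept alone does not say which one the players will reach.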
When we say that an outcome is an equilibrium, there is no presumption that each person's privately best choice will lead to a collectively optimal result. Indeed, there are notorious examples, such as the prisoners' dilemma (see below), where the players are drawn into a bad outcome by each following his best private interests.
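The prisoners' dilemma makes this divergence between private and collective interest precise. With the conventional (illustrative) payoffs below, written as years in prison negated so that higher is better, confessing is a dominant strategy for each player, yet mutual confession leaves both worse off than mutual silence:

```python
# The prisoners' dilemma in normal form. Payoff numbers are the standard
# textbook illustration, not figures from the article.
PAYOFFS = {  # (row player's payoff, column player's payoff)
    ('confess', 'confess'): (-8, -8),
    ('confess', 'silent'):  (0, -10),
    ('silent', 'confess'):  (-10, 0),
    ('silent', 'silent'):   (-1, -1),
}
STRATEGIES = ['confess', 'silent']

# Confessing is dominant for the row player: better no matter what the
# column player does (and the game is symmetric).
for col in STRATEGIES:
    assert PAYOFFS[('confess', col)][0] > PAYOFFS[('silent', col)][0]

# Yet the equilibrium (confess, confess) leaves both players worse off
# than the cooperative outcome (silent, silent): -8 < -1 for each.
print(PAYOFFS[('confess', 'confess')], PAYOFFS[('silent', 'silent')])
```

Each player's privately best choice thus leads both into the collectively bad outcome, exactly the equilibrium-versus-optimality gap described above.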
Nash's notion of equilibrium remains an incomplete solution to the problem of circular reasoning in simultaneous-move games. Some games have many such equilibria while others have none. And the dynamic process that can lead to an equilibrium is left unspecified. But in spite of these flaws, the concept has proved extremely useful in analyzing many strategic interactions.
The following examples of strategic interaction illustrate some of the fundamentals of game theory:
One way to make a commitment credible is to cut off one's own options. Cortés did exactly this upon his arrival in Mexico: he burned his own ships, purposefully eliminating retreat as an option. Without ships to sail home, Cortés would either succeed in his conquest or perish. Although his soldiers were vastly outnumbered, this threat to fight to the death demoralized the opposition, which chose to retreat rather than fight such a determined opponent. Polaroid Corporation used a similar strategy when it purposefully refused to diversify out of the instant photography market. It was committed to a life-or-death battle against any intruder in the market. When Kodak entered the instant photography market, Polaroid put all its resources into the fight; fourteen years later, Polaroid won a nearly billion-dollar lawsuit against Kodak and regained its monopoly market.
Another way to make threats credible is to employ the adventuresome strategy of brinkmanship—deliberately creating a risk that if other players fail to act as one would like them to, the outcome will be bad for everyone. Introduced by Thomas Schelling in The Strategy of Conflict, brinkmanship "is the tactic of deliberately letting the situation get somewhat out of hand, just because its being out of hand may be intolerable to the other party and force his accommodation." When mass demonstrators confronted totalitarian governments in Eastern Europe and China, both sides were engaging in just such a strategy. Sometimes one side backs down and concedes defeat; other times, tragedy results when they fall over the brink together.
Recent advances in game theory have succeeded in describing and prescribing appropriate strategies in several situations of conflict and cooperation. But the theory is far from complete, and in many ways the design of successful strategy remains an art.
Avinash Dixit is the John J. Sherred Professor of Economics at Princeton University. Barry Nalebuff is Milton Steinbach Professor of Management at Yale University's School of Organization and Management.
Ankeny, Nesmith. Poker Strategy: Winning with Game Theory. 1981.
Brams, Steven. Game Theory and Politics. 1979.
Davis, Morton. Game Theory: A Nontechnical Introduction, 2d ed. 1983.
Dixit, Avinash, and Barry Nalebuff. Thinking Strategically: A Competitive Edge in Business, Politics, and Everyday Life. 1991.
Luce, Duncan, and Howard Raiffa. Games and Decisions. 1957.
McDonald, John. Strategy in Poker, Business and War. 1950.
Neumann, John von, and Oskar Morgenstern. Theory of Games and Economic Behavior. 1947.
Ordeshook, Peter. Game Theory and Political Theory. 1986.
Porter, Michael. Competitive Strategy. 1982.
Raiffa, Howard. The Art and Science of Negotiation. 1982.
Riker, William. The Art of Political Manipulation. 1986.
Schelling, Thomas. The Strategy of Conflict. 1960.
Shubik, Martin. Game Theory in the Social Sciences. 1982.
Williams, J. D. The Compleat Strategyst, rev. ed. 1966.