Economic models of cooperation and conflict are often based on the Prisoner’s Dilemma (PD) of game theory. Simple as this model is, it helps us understand whether or not a war will be fought, where “fought” includes escalation steps through retaliation, as in the current situation between the government of Israel and the government of Iran.

Assume two countries each governed by its respective ruler, S and R. (In the simplest model, it may not matter who the ruler is and whether it is an individual or a group.) Each ruler faces the alternative of fighting the other or not. By definition of a PD, each ruler prefers no war, that is, no mutual fighting; let’s give each ruler a utility index of 2 and 3 for a situation of fighting and not fighting respectively. A higher utility number represents a more preferred situation (a situation with higher “utility”). Each ruler, however, would still prefer to fight if he is the only one to do it and the other chickens out; this means a utility number of 4 for that situation, the top preferred option for each of them. The worst alternative from each player’s perspective is to be the “sucker,” the pacifist who ends up being defeated; the utility index is thus 1 for the non-fighter in this situation.

No cardinal significance should be attached to these utility numbers: they only represent the rankings of different situations. Rank 4 only means the most preferred situation, and 1 the least preferred, with 3 and 2 in between. A more preferred situation can simply be less miserable, with a smaller net loss.

This setup is represented by the PD payoff matrix below. For our two players, we have four possible combinations or situations of “FIGHTING” or “NO fighting”; each cell, marked A to D, represents one of these combinations. The “payoffs” could be sums of money; here, they are our utility rankings, which we assume to be the same for the two players. The first number in a cell gives the rank of that situation for S (the row player, blue in my chart) given the corresponding (column) choice by R. The second number in the cell gives the rank of that situation for R (the column player, red in my chart) given the corresponding (row) choice of S. For example, Cell B tells us that if S does not fight but R does, the latter gets his most preferred situation while S is the sucker and gets his worst possible result (being defeated or severely handicapped). In Cell C, S and R switch places as the sucker (R) and the most satisfied (S). The player who exploits the sucker is called a “free rider”: the bellicist gets a free ride to the detriment of the pacifist. Both S and R would prefer to land in Cell A rather than in Cell D, but the logic of a PD pushes them into the latter.
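For readers who want to see the matrix written out, here is a minimal sketch in Python of the four cells just described, with S as the row player and R as the column player; the data structure and labels are mine, but the rankings are those given above.

```python
# A minimal sketch (mine, not the author's) of the payoff matrix described above.
# S is the row player, R the column player; each entry is (S's rank, R's rank).
payoffs = {
    ("NO fighting", "NO fighting"): (3, 3),  # Cell A: mutual peace
    ("NO fighting", "FIGHTING"):    (1, 4),  # Cell B: S is the sucker, R free-rides
    ("FIGHTING", "NO fighting"):    (4, 1),  # Cell C: S free-rides, R is the sucker
    ("FIGHTING", "FIGHTING"):       (2, 2),  # Cell D: mutual war
}

for (s_choice, r_choice), (s_rank, r_rank) in payoffs.items():
    print(f"S: {s_choice:12} | R: {r_choice:12} -> ranks (S, R) = ({s_rank}, {r_rank})")
```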

The reason is easy to see. Consider S’s choices. If R should decide to fight, S should do the same (Cell D), lest he be the sucker and get a utility of 1 instead of 2. But if R decides not to fight, S should fight anyway because he would then get a utility of 4 instead of 3. Whatever R does, it is in the interest of S to fight; it is his “dominant strategy.” And R makes the same reasoning for himself. So both will fight and the system will end up in Cell D. (On the PD, I provide some short complementary explanation in my review of Anthony de Jasay’s Social Contract, Free Ride in the Spring issue of Regulation.)
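The dominant-strategy argument can be checked mechanically. The sketch below, an illustration of mine under the same rankings rather than part of the original argument, computes each player’s best reply to each possible choice of the other.

```python
# Illustrative check of the dominant-strategy argument, with the same ordinal
# rankings as above: first number = S's rank, second = R's rank.
payoffs = {
    ("no", "no"): (3, 3),        # Cell A
    ("no", "fight"): (1, 4),     # Cell B
    ("fight", "no"): (4, 1),     # Cell C
    ("fight", "fight"): (2, 2),  # Cell D
}
choices = ["no", "fight"]

def best_reply_of_S(r_choice):
    # S picks the row that maximizes his own rank, given R's column.
    return max(choices, key=lambda s: payoffs[(s, r_choice)][0])

def best_reply_of_R(s_choice):
    # R picks the column that maximizes his own rank, given S's row.
    return max(choices, key=lambda r: payoffs[(s_choice, r)][1])

for r in choices:
    print(f"If R chooses '{r}', S's best reply is '{best_reply_of_S(r)}'")
for s in choices:
    print(f"If S chooses '{s}', R's best reply is '{best_reply_of_R(s)}'")
# Fighting is each player's best reply whatever the other does, so both fight
# and the game settles in Cell D (2, 2).
```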

This simple model explains many real-world events. Once a ruler views his interaction with another as a PD game, he has an incentive to fight (attack or retaliate). The ruled don’t necessarily all have the same interest, but nationalist propaganda may lead them to believe otherwise. One way to prevent war is to change some payoffs in the ruler’s matrix so as to tweak his incentives. For example, S or R may realize that, given the wealth he may lose, the other’s military capabilities, or the threat that war poses to his own power, war would be too costly. The preference indices in the matrix will then change: try 4,4 in Cell A and 3,3 in Cell D, with 2,1 and 1,2 on the other diagonal. The new incentives will have eliminated the PD nature of the game.
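To see how the tweaked payoffs change the logic, the same best-reply check can be rerun with the new rankings; the assignment of 1,2 and 2,1 to Cells B and C below is one plausible reading of the suggestion above, not the only one.

```python
# Illustrative sketch of the tweaked matrix: 4,4 in Cell A, 3,3 in Cell D,
# with 2,1 and 1,2 on the other diagonal (one plausible assignment).
payoffs = {
    ("no", "no"): (4, 4),        # Cell A: mutual peace now ranked highest
    ("no", "fight"): (1, 2),     # Cell B
    ("fight", "no"): (2, 1),     # Cell C
    ("fight", "fight"): (3, 3),  # Cell D: mutual war now second best
}
choices = ["no", "fight"]

def best_reply_of_S(r_choice):
    return max(choices, key=lambda s: payoffs[(s, r_choice)][0])

def best_reply_of_R(s_choice):
    return max(choices, key=lambda r: payoffs[(s_choice, r)][1])

print(best_reply_of_S("no"), best_reply_of_R("no"))  # prints: no no
# Fighting is no longer a dominant strategy: if the other player does not fight,
# each ruler's best reply is not to fight either, so Cell A becomes a stable outcome.
```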

Another way to stop the automatic drift into Cell D is for the two players to realize that, instead of a one-shot game, they are engaged in repeated interactions in which cooperation—notably through trade—will make Cell A more profitable than a free ride over several rounds. However, this path is likely to be inaccessible if S or R is an autocratic ruler, since autocrats don’t personally benefit from trade and individual liberty as much as ordinary people do. The possibility of transforming a PD conflictual game into a repeated cooperative game was brilliantly explained by political scientist Robert Axelrod in his book The Evolution of Cooperation (Basic Books, 1984).
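As a rough illustration of Axelrod’s point, not a reconstruction of his actual tournament, the sketch below treats the rankings as if they were cardinal payoffs (which the original numbers are not meant to be) and compares a conditionally cooperative strategy, tit-for-tat, with permanent fighting over ten rounds; the strategies and round count are my assumptions.

```python
# Rough sketch of repeated play, treating the ordinal rankings as cardinal
# payoffs purely for illustration. Strategies and round count are assumptions.
PAYOFFS = {  # (my move, other's move) -> my payoff per round
    ("no", "no"): 3, ("no", "fight"): 1,
    ("fight", "no"): 4, ("fight", "fight"): 2,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the other player's previous move.
    return "no" if not opponent_history else opponent_history[-1]

def always_fight(opponent_history):
    return "fight"

def play(strategy_1, strategy_2, rounds=10):
    history_1, history_2, total_1, total_2 = [], [], 0, 0
    for _ in range(rounds):
        move_1, move_2 = strategy_1(history_2), strategy_2(history_1)
        total_1 += PAYOFFS[(move_1, move_2)]
        total_2 += PAYOFFS[(move_2, move_1)]
        history_1.append(move_1)
        history_2.append(move_2)
    return total_1, total_2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation (Cell A)
print(play(tit_for_tat, always_fight))   # (19, 22): free riding against tit-for-tat
print(play(always_fight, always_fight))  # (20, 20): permanent war (Cell D)
```

With these illustrative numbers, even the successful free rider ends up with less (22) over ten rounds than sustained cooperation would yield (30), which is the intuition behind Axelrod’s result.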

Like all models, this one hides some complexities of the world. It does not explicitly incorporate deterrence, which is essential for preventing war as soon as one of the players views the game as a PD. But when deterrence has not worked and one player has attacked, the question is whether a counter-attack, and what sort of counter-attack, will have a better deterrent effect or will just be another step in mutual retaliation, that is, open war.

In the current Middle East situation, religion on the Iranian rulers’ side makes matters worse by countering rational considerations of military potential. Preferences are thus likely to differ from those assumed in a repeated PD game. “When you shot arrows at the enemies, you did not shoot; rather God did,” goes a saying among Iranian radical zealots (quoted in “Iranians Fear Their Brittle Regime Will Drag Them Into War,” The Economist, April 15, 2024). You cannot (always) lose with God on your side.

God or Allah shooting the arrow, by Pierre Lemieux and DALL-E