Should we want the greatest good for the greatest number? (And, incidentally, should the “we” mean a numerical majority?) The Trolley problem in philosophy raises this issue. I was reminded of it by an interesting article from economist and philosopher Michael Munger, “Adam Smith Discovered (and Solved!) the Trolley Problem” (June 28, 2023), as well as by a follow-up EconTalk podcast.

The precise form of the Trolley problem was formulated by British philosopher Philippa Foot in a 1967 article. Imagine you see a runaway trolley speeding down a steep street, about to hit and kill five men working on the track. But you are near a switch that can divert the trolley to another track where only one man is working. None of the men see the trolley coming. You are certain that if you throw the switch, only one man will die instead of five. Should you throw it, as a utilitarian would?

If you answered yes, consider an equivalent dilemma (quoting from Munger’s article):

Five people in a hospital will die tomorrow if they do not receive, respectively: (a) a heart transplant; (b) a liver transplant; (c) and (d) kidney transplants; and (e) a blood transfusion of a rare blood type. There is a sixth person in the hospital who, by astonishing coincidence, is an exact match as a donor for all five. If the head surgeon does nothing, five people will die tonight, with no hope of living until tomorrow.

Assuming there is no legal risk (the government is run by utilitarians who want the greatest good for the greatest number and are fond of cost-benefit analysis), should the head surgeon kill the providential donor to harvest his organs and save five lives? Faced with this question, most people would probably change their minds and reject the crude utilitarianism they espoused in the preceding Trolley problem. Why?

Munger argues that Adam Smith formulated another instance of the Trolley problem in his 1759 book The Theory of Moral Sentiments and discovered the principle that solves it. Smith did not express it that way, but his solution points to the distinction between intentionally killing an innocent person, which is clearly immoral, and letting him die from independent causes, which is not necessarily immoral. Drowning somebody to kill him is immoral, but not saving somebody who is drowning may not be. Intentionally shooting an African child is murder; failing to give a charity the $100 that would save his life is certainly not criminal.

A more recent argument by Philippa Foot (see Chapter 5 of her book Moral Dilemmas: And Other Topics in Moral Philosophy [Oxford University Press, 2002]) explains that the underlying basic distinction is “between initiating a harmful sequence of events and not interfering to prevent it” (this concise formulation of her complete argument is from the abstract of her article). More precisely, she writes:

The question with which we are concerned has been dramatically posed by asking whether we are as much to blame for allowing people in Third World countries to starve to death as we would be for killing them by sending poisoned food?

The basic principle, which emphasizes moral agency, is that

It is sometimes permissible to allow a certain harm to befall someone, although it would have been wrong to bring this harm on him by one’s own agency, by originating or sustaining the sequence which brings the harm.

In his 2021 book Knowledge, Reality, and Value, libertarian-anarchist philosopher Michael Huemer also considers the Trolley problem and comes to a similar solution, albeit more nuanced in extreme cases. His philosophical approach is “intuitionism,” as the subtitle of this book suggests: A Mostly Common Sense Guide to Philosophy. (My double Regulation review, “A Wide-Ranging Libertarian Philosopher, Reasonable and Radical,” gives the flavor of this book and of his The Problem of Political Authority: An Examination of the Right to Coerce and the Duty to Obey [2013].)

Anthony de Jasay’s condemnation of utilitarianism as a justification for government (coercive) interventions is based on the simple economic observation that there is no scientific basis for comparing utility between individuals; for example, it is meaningless to say that saving five men preserves “more utility” than killing one. Utility pronouncements, he writes, “are unfalsifiable, forever bound to remain my say-so against your say-so.” (See my Econlib review of his Against Politics.)
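
To see why de Jasay calls such statements unfalsifiable, a minimal sketch in standard ordinal-utility notation may help; the functions and numbers below are purely illustrative, not de Jasay’s own formulation. Suppose u_i represents the preferences of worker i in the Trolley problem, and the utilitarian claims that saving the five “preserves more utility” than sparing the sixth man:

u_1 + u_2 + u_3 + u_4 + u_5 > u_6.

But each u_i represents the same preferences, and predicts the same observable choices, after any strictly increasing transformation; replacing u_6 with u_6 + 100, for instance, is just as legitimate a description of the sixth man and can reverse the inequality. Since nothing observable distinguishes the two descriptions, no experiment can confirm or refute the comparison; it remains, as de Jasay says, my say-so against your say-so.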

What is fairly certain is that utilitarianism, and certainly “act utilitarianism” (as opposed to “rule utilitarianism”), does not work, except perhaps in the most extreme and uninteresting cases, such as “stealing $20 from Elon Musk without him noticing and transferring the money to a homeless person would create net utility,” that is, Musk would lose less utility than the pauper would gain. Even if the statement seems to make sense, we cannot predict a single individual’s behavior, only general classes of events: perhaps that homeless person will use the $20 to buy cheap alcohol, get drunk, and kill a mother and her baby, who would have been a second Beethoven. He might even be a utility monster, deriving “more utility” from the harm he causes to others than they lose. Even if the homeless person uses his $20 to purchase a used copy of John Hicks’s A Theory of Economic History, the story of his “gift” might spread and lead to a billion greedy people crying for the same transfer from Musk. Or they may agitate for the $20 billion to be directly expropriated by the state to finance subsidies for them.

****************************************

I tried hard to have DALL-E represent the simplest version of Philippa Foot’s Trolley problem. Despite my detailed descriptions, “he” just could not understand, which is not really surprising, after all. He could not even represent the idea of a fork in a trolley track with five workers on one branch and one worker on the other. I finally asked him to draw a runaway trolley with one track and five workers in the middle of the track. The images he produced were among his most surrealistic, as you can see from the featured image of this post. Given his poor performance, I mentally apologized to Philippa Foot (who died at 90 in 2010) and instructed DALL-E to add to the image “an old, dignified woman (the philosopher Philippa Foot) in deep thinking and looking at the trolley coming.” In this simple task, the robot did quite well.

Philippa Foot wondering how DALL-E could make such a mess of her Trolley problem