I just completed Tyler Cowen’s thought-provoking new book entitled “Stubborn Attachments”. In the book, he explains his views on the appropriate ethical perspective to use when thinking about public policy issues. Here Tyler discusses one common complaint about utilitarianism:

Are you willing to value the interests of others on par with your own, or those of your family or friends?  If you buy into the standard utilitarian logic of beneficence, a mother might have to abandon or sell her baby in order to raise money to send food to the babies of others.  At this point most people balk at the argument and search for some moral principle that limits our obligations to the very poor.

One problem is that the needs of the suffering are so enormous that only a few able or wealthy individuals would be able to carry out individual life projects of their own choosing.  Most people would instead become a kind of utility slave, serving only the interests of others and feeding themselves just enough to survive.  The result is that utilitarianism—or many forms of consequentialism, for that matter—is often seen as an excessively demanding moral philosophy.  People fall into two camps: those who reject utilitarianism for its extreme and unacceptable implications, and those, like the early Peter Singer, who trumpet the call for greater sacrifice and pursue the utilitarian logic to a consistent extreme.

There is no single argument capable of rebutting this critique of utilitarianism.  Nor is there any pair of arguments.  There are three arguments, however, that in combination do rebut this critique, and restore utilitarianism to its rightful place at the core of any sound moral theory.  I’ll take these three arguments one at a time:

1. The selfish gene.

2. What do you mean “we,” white man?

3.  Think globally, act locally.

Let’s start with selfishness.  What does it mean to say a moral philosophy is “excessively demanding”?  Christianity calls for “turning the other cheek”.  How many people are capable of doing this?  Does that make Christian morality wrong?

Nature has hard-wired us to be somewhat selfish, favoring our own interests over those of the people closest to us, and their interests over those of strangers.  There are evolutionary models that predict this sort of selfishness.  But the fact that people are naturally inclined to occasionally be selfish, or violent, or racist, or sexist, or cruel, or lazy, or dishonest, has no bearing on whether those traits should be considered good.  I have no problem admitting that I fall far short of being an ideal person, or that Mother Teresa was much more worthy of admiration than I am.  I’m much more selfish than I would be if I strictly adhered to the utilitarian moral code.

The world would probably be a better place if I wrote a check for $10,000, gave it to Tyler, and asked him to pass it on to the Ethiopian family that he plans to support with the royalties from his book project.  But I don’t plan to do so.  It would be comforting for me to dream up some sort of moral theory that exalted selfishness; but that would be dishonest, as deep down I would not believe it.

On the other hand, I’m not completely selfish.  People I speak with are occasionally surprised when I say I favor the recent tax changes that hurt California residents.  “You live in California now; why don’t you support unlimited deductions for state and local (SALT) taxes?”  Umm, because I’m not that selfish.  Utilitarianism is a sort of lodestar, like the teachings of Jesus.  Sensible people understand that most of us will fall well short of perfection, as we are highly fallible human beings.  But that doesn’t mean that what is good stops being good just because it’s difficult to achieve.  My suggestion to people contemplating Peter Singer’s advice?  Just do the best you can.

The second item on my list is the punch line from a joke referencing the famous radio show featuring the Lone Ranger and his trusty sidekick Tonto (who was Native American).  When Tyler implies that most of us recoil from a moral theory that demands that affluent Westerners give a big chunk of their income to poor residents of developing countries, I wonder whom he means by “most people”.

I say “Tyler implies,” as he’s too smart to actually come out and make that precise argument; rather, he uses an even more extreme example involving the sale of babies, which I’ll address in the third point.  But others often do make this argument about charitable giving, so I’d like to deal with it here.  It’s quite likely that if you polled all 7.3 billion humans, a majority might support large transfers of wealth from the rich countries to the not-so-rich countries.  So it’s not at all clear that “most people” find this utilitarian implication unpalatable.  On the other hand, the sale of babies might be a bridge too far, even for poor people in developing countries.  I’d guess that most would not recommend that American moms sell their kids to raise money to deliver food to Guatemala.

One of the strengths of Tyler’s book is that he explains how the demands of utilitarianism are actually not as great as they seem:

I’ll instead focus on the broader conceptual question of whether growth or redistribution—in the public or private sector—is a more effective means of helping the poor.  When framed in this manner, we’ll see that there are some strong and strict limits on our obligations to redistribute wealth, even if we accept the full utilitarian framework. (emphasis added).

People are most effective when they focus on helping their own children, not kids who live 10,000 miles away, about whom they know little or nothing.  Think globally, act locally.  That doesn’t mean that all charity is ineffective—not at all—rather that one should not naively assume that $1000 can be transferred from the rich to the poor at anything close to zero cost.  This is a common mistake made by moral philosophers, who tend to underestimate the importance of economic incentives.  I’ll explore how Tyler addresses these issues (the most interesting part of his book) in a future post.

These three arguments need to be taken jointly, not one at a time.  Starting with the third argument, the demands of utilitarianism are large, but not as large as suggested in thought experiments by moral philosophers.  If the thought experiment seems nightmarish, say selling babies, there’s a good chance that’s because the policy would not in fact make the world a happier place.  Any time you see a nightmarish utilitarian thought experiment, one that makes you recoil, remember that any truly utilitarian policy will make the world a happier place.

Here you might be thinking, “Of course Sumner would claim that utilitarian policies make for a happier world, as he’s a utilitarian.”  No, I’m not “claiming” that utilitarian policies make for a happier world; that’s the definition of utilitarian policies.

Once we understand that utilitarianism calls for some sacrifices, but ones nowhere near as draconian as those suggested by some moral philosophers, we might still be confronted with the fact that many affluent people will recoil from the implied obligations.  But the purpose of moral philosophy is not to describe what’s best for a small minority of affluent people; it’s to describe what’s best for humanity as a whole.  It’s not at all implausible that the world would be better off if more money were donated to poor people in Ethiopia.  And the fact that selfish people like me don’t want to do so doesn’t change the fact that this would make the world a better place (unless the disincentive effects of charity are even greater than I assume).

When a thought experiment seems to reach a “repugnant conclusion”, ask yourself if you are thinking about the experiment in the correct way.  Here’s Tyler:

Parfit’s repugnant conclusion compares two population scenarios. The first outcome has a very large, very fulfilled, and very happy population.  The world also has many ideal goods, such as beauty, virtue, and justice.  The second outcome has a much larger population, but few, if any, ideal goods.  Parfit asks us to conceive of a world of “Muzak and potatoes.”  Nonetheless, the lives in this scenario are still worth living, although perhaps by only the tiniest of margins.  Parfit points out that if the population of the second scenario is large enough, the second scenario could welfare-dominate the first.
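
To make “welfare-dominate” concrete, here is a minimal back-of-the-envelope comparison; the specific numbers are my own illustration, not Parfit’s or Tyler’s.  On the standard “total” reading, a world’s welfare is its population times the average welfare of a life:

\[
W = N \times \bar{u}, \qquad
W_1 = 10^{6} \times 100 = 10^{8}, \qquad
W_2 = 10^{10} \times 1 = 10^{10}.
\]

Since the second total is a hundred times larger, the enormous “Muzak and potatoes” world racks up more welfare than the small flourishing one, even though each life in it is barely worth living.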

In my view, Derek Parfit’s thought experiment, like dozens of other similar anti-utilitarian examples, is nothing more than a cognitive illusion, artfully presented to lead the reader astray.  In this case, the reader is tricked into thinking about the example from a sort of “veil of ignorance” perspective.  Which society would you rather live in?  But that’s not the question.  The question is not which society is preferable to live in; rather, it’s whether you’d prefer living in the poor one, or having a 0.01% chance of living in the nice one (and a 99.99% chance of never existing at all).
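
Here is the arithmetic behind that reframing, using the same illustrative populations as above (one million versus ten billion, so the chance of being among the lucky few is 1/10,000, or 0.01%; these numbers are my own assumptions, not from the book):

\[
\text{exist for sure, barely worth living: } 1 \times 1 = 1,
\qquad
\text{lottery on a great life: } 0.0001 \times 100 = 0.01.
\]

On those made-up numbers, the comparison isn’t close.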

Most people want to live.

Let’s reframe this thought experiment.  Imagine a kingdom in central Asia with 1000 people living in a beautiful town in the mountains in the far north of the country.  They are a powerful clan that is entitled to receive all of the oil revenues of the kingdom.  They enjoy the arts and have an enviable moral code based on their Buddhist religion.  It’s a lovely place.  This mountain town is surrounded by a vast, dry hinterland with 25 million poor, uneducated peasants.  An asteroid is approaching Earth, and seems likely to destroy the hinterland of this kingdom while sparing the affluent mountain town of 1000.  NASA can divert the asteroid to save the hinterland, but only by destroying the mountain town.  Should it do so?  I say yes.  Those who feel Parfit’s thought experiment is repugnant might say no.  After all, the repugnant conclusion argument is used to suggest that the world would be a better place with the 1000 lucky residents of the mountain town rather than the 25 million downtrodden residents of the hinterland.

Notice that this is another example of overlooking “Tonto’s perspective”.

PS.  Just to head off some familiar comments, I’m a “rules utilitarian” who believes that the world is better off with a set of laws banning censorship, murder, and certain other bad things, even if there are some public utterances, and some people, that the world would be better off without, judged individually on a strict utilitarian framework.  So my policy views end up pretty close to those of some natural rights advocates, except I don’t think the rights are “natural”; rather, they’re a “useful fiction”.