I’ve never found critiques of utilitarianism to be persuasive. Here are a few I’ve run across:
1. The critic describes a horrible scenario, and then asks the reader to assume that it results in greater total utility. The scenario might be war, slavery, or the entire world’s wealth being held by one man. This thought experiment is supposed to persuade us that utilitarianism is flawed, because it might lead us to approve of various horrible political/economic/social systems. I react differently, thinking that all they show is that the horrible scenario being envisioned would almost certainly make the world a less happy place, and hence my support for utilitarianism becomes even stronger.
2. Other criticisms seem based on cognitive illusions like innumeracy. Imagine great harm to a single person, and a bit of extra pleasure to millions, where the total effect is a net positive to aggregate utility. We are supposed to think this is a cruel and immoral trade-off. But those examples simply take advantage of the fact that we can much more easily imagine great harm to a single person than small gains to millions. In everyday life we are (correctly) willing to do things that make exactly that sort of trade-off. We drive to the store rather than walking, knowing there is a slight chance of dying in an auto accident. We might inoculate millions of people against an uncomfortable but nonfatal disease, knowing a few might die from the vaccine. Indeed the entire cost/benefit approach to things like highway safety improvements relies on exactly that sort of utilitarian logic.
3. We are told that utilitarianism might lead to the conclusion that people would prefer to leave “real life” and be hooked up to a virtual reality “happiness machine.” Poll results supposedly show that this is not so. But the polls are faulty, and merely show a bias for the here and now over the “elsewhere.” Even worse, society seems to be rushing headlong into a virtual reality world anyway.
4. Other “counterexamples” take advantage of illogical moral intuitions that have evolved for Darwinian reasons, like discomfort at pushing a fat man in front of a trolley car to prevent even more deaths.
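The vaccination trade-off in point 2 can be made concrete with toy numbers. All of the figures below are hypothetical illustrations, not real epidemiology; the point is only that a large harm to a few can be swamped by small gains to millions once you actually do the sum:

```python
# Toy aggregate-utility comparison for the vaccination trade-off.
# Every number here is made up for illustration.

N = 10_000_000                   # people who could be vaccinated
disease_disutility = -1.0        # utils lost per person to the nonfatal disease
p_vaccine_death = 1e-7           # chance the vaccine kills a given person
death_disutility = -1_000_000.0  # utils lost per death

# Without the vaccine, everyone suffers the disease.
utility_no_vaccine = N * disease_disutility          # -10,000,000 utils

# With the vaccine, the disease is avoided but a few deaths are expected.
expected_deaths = N * p_vaccine_death                # about 1 expected death
utility_vaccine = expected_deaths * death_disutility # about -1,000,000 utils

print(utility_no_vaccine, utility_vaccine)
```

Even valuing a death at a million times the disease's discomfort, vaccinating comes out roughly ten times better in aggregate, which is exactly the calculation the vivid image of "a few might die from the vaccine" tempts us to skip.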
In a recent book review in the NYR of Books, Cass Sunstein has a very thoughtful critique of the way philosophers go about constructing anti-utilitarian arguments. Here’s one excerpt, but you should read the whole thing:
Edmonds has written an entertaining, clear-headed, and fair-minded book. But his own discussion raises doubts about a widespread method of resolving ethical issues, to which he seems committed, and of which trolleyology can be counted as an extreme case. The method uses exotic moral dilemmas, usually foreign to ordinary experience, and asks people to identify their intuitions about the acceptable solution. . . . But should we really give a lot of weight to those reactions?
If we put Mill’s points about the virtues of clear rules together with a little psychology, we might approach that question in the following way. Many of our intuitive judgments reflect moral heuristics, or moral rules of thumb, which generally work well. You shouldn’t cheat or steal; you shouldn’t torture people; you certainly should not push people to their deaths. It is usually good, and even important, that people follow such rules. One reason is that if you make a case-by-case judgment about whether you ought to cheat, steal, torture, or push people to their deaths, you may well end up doing those things far too often. Perhaps your judgments will be impulsive; perhaps they will be self-serving. Clear moral proscriptions do a lot of good.
For this reason, Bernard Williams’s reference to “one thought too many” is helpful, but perhaps not for the reason he thought. It should not be taken as a convincing objection to utilitarianism, but instead as an elegant way of capturing the automatic nature of well-functioning moral heuristics (including those that lead us to give priority to the people we love). If this view is correct, then it is fortunate indeed that people are intuitively opposed to pushing the fat man. We do not want to live in a society in which people are comfortable with the idea of pushing people to their deaths–even if the morally correct answer, in the Footbridge Problem, is indeed to push the fat man.
On this view, Foot, Thomson, and Edmonds go wrong by treating our moral intuitions about exotic dilemmas not as questionable byproducts of a generally desirable moral rule, but as carrying independent authority and as worthy of independent respect. And on this view, the enterprise of doing philosophy by reference to such dilemmas is inadvertently replicating the early work of Kahneman and Tversky, by uncovering unfamiliar situations in which our intuitions, normally quite sensible, turn out to misfire. The irony is that where Kahneman and Tversky meant to devise problems that would demonstrate the misfiring, some philosophers have developed their cases with the conviction that the intuitions are entitled to a great deal of weight, and should inform our judgments about what morality requires. A legitimate question is whether an appreciation of the work of Kahneman, Tversky, and their successors might lead people to reconsider their intuitions, even in the moral domain.
Nothing I have said amounts to a demonstration that it would be acceptable to push the fat man. Kahneman and Tversky investigated heuristics in the domain of risk and uncertainty, where they could prove that people make logical errors. The same is not possible in the domain of morality. But those who would allow five people to die might consider the possibility that their intuitions reflect the views of something like Gould’s homunculus, jumping up and down and shouting, “Don’t kill an innocent person!” True, the homunculus might be right. But perhaps we should be listening to other voices.
READER COMMENTS
Niclas Berggren
Jul 20 2014 at 2:37am
I don’t know if Sunstein cites him, but this is essentially the argument developed in detail by R. M. Hare in his great book Moral Thinking.
Matt R
Jul 20 2014 at 7:28am
All utility is good in utilitarianism, which can lead to some morally uncomfortable conclusions.
Consider child molestation. A strict utilitarian would insist on counting the utility of the child molester as a positive to be weighed against all the disutilities for the child, society, etc.
It just seems morally repugnant to consider the utility of the child molester at all.
Something similar happened over the debate to bail out the banks. Some people just had a strong reaction against counting the benefits to the banks when doing their utilitarian calculus.
JH
Jul 20 2014 at 7:33am
Sumner says that objections to utilitarianism rely on irrational moral intuitions that are merely evolved for Darwinian reasons. But utilitarianism also relies on moral intuitions, like the judgments that pain is intrinsically bad, happiness is intrinsically good, more utility is morally better than less utility, and so on. Some of these judgments are almost certainly evolved intuitions. For example, there are excellent reasons to think that our judgments about happiness and pain have an evolutionary origin. We evolved to think that pain is bad. Does that make these intuitions irrational? If so, then utilitarianism is also irrational. See this paper for more on this point.
Every moral view relies on intuitions. It is therefore no response to objections to utilitarianism that they rely on intuitions while utilitarianism does not. It is also no objection to deontology that it relies on intuitions that have been shaped by evolution. So does utilitarianism.
Greg G
Jul 20 2014 at 8:01am
JH,
—“Sumner says that objections to utilitarianism rely on irrational moral intuitions that are merely evolved for Darwinian reasons.”
Actually he does not say “merely.” This matters more than you think. Unless you are a creationist, Darwinian reasons are not trivial. Natural selection works through heuristics, not the discovery of metaphysical certainties.
We do care, and should care, about many different values and these can easily conflict with each other. In my opinion, the search for a single principle that settles all moral disputes is a fool’s errand. A clever philosopher can always find an exception. Exceptions that actually occur in real life are, and should be, more persuasive than those that only occur in thought experiments.
Scott is not claiming that utilitarianism is useful because it avoids relying on intuitions. He is claiming that it is useful because it yields better results than other systems.
Greg Heslop
Jul 20 2014 at 9:06am
There is an excellent discussion about happiness in Robert Nozick’s The Examined Life, in which the reader is asked to imagine a given amount of happiness distributed over his lifetime. If there are reasons to prefer one distribution over another (such as preferring [1, 1, 1] over the occasional deep unhappiness of [103, -50, -50]), then happiness cannot be all that is worth pursuing. The same example can be repeated with “utility” substituted for “happiness”.
Scott Sumner
Jul 20 2014 at 9:14am
Thanks Niclas.
Matt, Yes, that’s sort of what I had in mind with my slavery example. What if the pleasure derived by slaveowners exceeded the pain suffered by slaves?
My only answer is the pragmatic claim that I don’t believe those sorts of hypotheticals actually occur in the real world. And if they do occur (as with say pot smoking) then perhaps we should change our laws. Maybe pot smoking isn’t as horrific as the government assumes.
JH, Yes, it’s likely that our utilitarian intuitions are also evolved. But I think there is more to it than that. Some intuitions evolved to deal with a social environment that no longer exists. For instance, our evolved instincts about war are probably no longer sensible in the nuclear age. That may also be true about attitudes toward certain economic policies. That’s where the Kahneman reference comes in.
Of course it’s also possible that our evolved instincts toward utilitarianism are no longer appropriate. I certainly don’t claim any sort of ironclad proof of its superiority. Rather I’d say that when I think about the different moral systems that have been put forward, it seems the most reasonable and/or least bad.
Greg, Good point.
konshtok
Jul 20 2014 at 9:35am
what am I missing?
from a utilitarian pov, killing one person and using the organs to save the lives of others is the right thing to do, right?
isn’t that enough to make utilitarianism NOT a good basis for morality?
Greg G
Jul 20 2014 at 9:43am
konshtok,
I think I know what you are missing.
A policy of organ harvesting would cause a great many people to suffer from fear that their organs would be harvested. That introduces a lot of disutility to the policy. That also explains why we see so little organ harvesting in real life.
Joe Teicher
Jul 20 2014 at 10:47am
My personal objection to utilitarianism is that I just don’t much care about distant strangers. I don’t think that there is any particular reason that I should care about them. As far as deciding what public policies I favor, I will always judge based on the effect on the people I care about. Of course, once I’ve made up my mind, utilitarianism can be a good justification for my preferences. After all, there seem to be a lot of people who buy those arguments.
Greg G
Jul 20 2014 at 11:42am
Joe
It probably is the case that all moral systems are justifications for intuitive preferences.
Even if that is true, utilitarianism has the virtue of requiring those distant strangers you don’t care about to consider the effect of their actions on you.
Tom Crispin
Jul 20 2014 at 1:03pm
Strange.
In doing utility calculations, almost no one seems to remember the concept of diminishing returns.
Which makes these “extreme” situation utility calculations mostly hand-waving.
Greg Heslop
Jul 20 2014 at 2:02pm
@ Greg G,
But this fails to engage in the hypothetical example. Konshtok did not claim that organ harvesting would be made public policy under utilitarianism. In a world of seven thousand million people, one doctor’s killing one patient to save X others will trigger no fear under the right circumstances (say, the killed patient had no friends or something). Such clandestine organ harvesting will pass the utilitarian cost-benefit calculation.
Now you may argue that those circumstances are incredibly rare (and they surely can’t be too common or people will be suspicious and fear the hospital), but the point is that by engaging in the example (provided it is at all possible, but this one clearly is) one finds out exactly what moral theories prescribe. If one does not like the conclusion, the moral theory fails (unless one is wrong about not liking it). One may still claim that it is the least bad one, but then one should be similarly lax about judging competing moral theories.
Greg G
Jul 20 2014 at 2:40pm
Greg Heslop,
Good point. That would be a case where utilitarian reasoning fails. I guess I was thinking more in terms of public policy because you most often see utilitarian reasoning invoked for decisions involving public policy.
As I said in my first comment in this thread, I don’t believe any single principle can resolve all ethical dilemmas. Utilitarian considerations always matter but that doesn’t mean they should always be decisive.
Adam
Jul 20 2014 at 6:02pm
What about the classic commensurability critique? The idea that you can’t reduce values to a single scale, even in a Pareto ordinal sense.
A couple of things I’ve written on the subject:
http://theumlaut.com/2014/06/23/can-utilitarianism-explain-dog-ownership/
http://sweettalkconversation.wordpress.com/2014/06/21/on-commensurability/
The bottom line is that without a commensurable scale, you can’t speak meaningfully of “better” aggregate situations.
Adam
Jul 20 2014 at 6:39pm
I’ve also written a quick response to this post: http://sweettalkconversation.wordpress.com/2014/07/20/mistaken-moral-logic/
Ak Mike
Jul 20 2014 at 7:06pm
A lot of good discussion here. My 2 cents: Although utilitarianism is an unusually weak ethical system, it suffers from the same basic problem as all philosophical ethical systems: ethics is basically psychology, not philosophy.
Philosophical ethical systems are all rationalizations, attempting to make some logical sense of what your gut is telling you. Because that turns out to be quite difficult, it is usually not that hard to find counterexamples for any system (as the very cogent discussion between the two Gregs above shows) that our gut rejects as not ethical even though within the rules of the system.
Because ethics depends entirely on “intuition” (i.e., gut feelings, psychology), Prof. Sumner’s comments about “illogical moral intuitions” are, I’m sorry, nonsense.
Utilitarianism falls down in a number of ways. First, like every other philosophical system of ethics, it has no rational or logical basis. Second, it leads to repugnant conclusions such as that discussed by the Gregs. Third, there is really no way to measure “utility” particularly as between different individuals, so not only is there no way to know whether what you have done has increased total utility, but the entire concept of utility is so vague as to verge on meaninglessness.
Fourth, our world is such that no one can predict the future impact of current actions, particularly as the circle of causality widens into the more distant future – and with a results-based ethics like utilitarianism, this means one never knows whether one’s actions are ethical, and has little in the way of guidelines for behaving ethically.
Mark V Anderson
Jul 20 2014 at 8:32pm
Nice posting. I am glad that someone in this group of bloggers accepts utilitarianism.
AK Mike: Of course there is no logical basis at the heart of ethics. It is based on emotions. But that doesn’t mean that one can’t use logic to organize one’s emotional reactions to maintain consistency. That’s what utilitarianism does. Good and bad and the degree to which one counts each situation (such as how much pain is equivalent to killing) is entirely subjective and emotional. But utilitarianism tells us that it is these good and bad results that matter, not arbitrary acts that a person sees as ethical or not.
Greg and Greg. Harvesting the organs of an otherwise healthy person so that five other people who would otherwise die could have equivalently healthy lives is clearly the most ethical position to take, if this harvesting had no other effect on society at large. But of course the assumption of no other effects on society is the kind of hand waving that anti-utilitarians use, when in real life it could never happen that way. I don’t understand how anyone could see the trading of one life for five as unethical, if there are no other effects.
Greg G
Jul 20 2014 at 10:48pm
Mark,
So are you consistently organized enough that we may now harvest your organs then?
ChrisA
Jul 21 2014 at 2:02am
Scott – do you see philosophy discussions like this as works of art – nice to look at but not actually used for anything? Or as guides to actions, whether for you individually, or other people (such as Governments)? If the latter, then the theory needs to take practical issues (or pragmatism) into account. Thus the objections that it is impossible to weigh different people’s utilities against each other, the “utility monster” issue, the issue of pitting intuition against the theory, animals vs. humans, significant harm to one versus minor irritation to many, freedom and respect for individual rights versus the community, even the happy genocidists problem, etc., become very significant. A guide to action needs to be actually usable or it is “by definition” not much use.
James
Jul 21 2014 at 11:54pm
“I’ve never found critiques of utilitarianism to be persuasive.”
How persuasive does the critique need to be? Is there some compelling argument FOR utilitarianism that needs to be overcome with an even better argument? Read Bentham, Mill, and work your way on down to more modern utilitarians and the best you can hope for is a description of what utilitarianism is, plus some fairly decent demonstrations that the arguments against utilitarianism are not without flaws. But flawless is the wrong standard. Since there is no argument for utilitarianism, any argument at all should be sufficiently persuasive. Here are some that I think are more than persuasive enough to meet that bar:
1. If utilitarianism is true, nearly all leisure is immoral. Since this implication is almost surely false, then the premise is almost surely false.
2. Utilitarianism denies any distinction between people’s preferences and moral facts. Whatever the other merits of utilitarianism, it’s not even about what English speakers have in mind when they use words like “ethics”, “ought”, and “morals.” It’s a change of subject.
3. Most claimed attempts at utilitarianism seem to be made in bad faith. Actually implementing utilitarianism is an optimization problem but ask a utilitarian to prove the optimality of their ideas and they are more likely to waffle or compare their recommendations to a well chosen subset of the search space than to refer to simplexes or lambdas. It’s like they are using utilitarianism as a sort of branding rather than an actual technique.
I’m sure any of these is persuasive enough to overcome all zero arguments for utilitarianism.