The Argument from Hypocrisy (a close cousin of the “demandingness objection”) is one of the strongest objections to utilitarianism. (It is strangely omitted from Scott’s inventory.) The argument has two steps.
Step 1. Note that utilitarianism implies extreme moral demands. For example, maximizing total happiness requires you to give away all your surplus wealth to the needy – at least to needy people whose behavior is unlikely to respond much to incentives, such as children.
Step 2. Point out that even the staunchest utilitarians are light-years away from fulfilling these extreme moral demands. Even Peter Singer “only” gives 20%.
To put the Argument from Hypocrisy conversationally: “Your view implies that you should give away all your surplus wealth to needy kids. But you don’t. If even explicit utilitarians like you don’t seem to take your own view seriously, why should anyone else?”
To be fair, there is an obvious though embarrassing reply to the Argument from Hypocrisy. Namely: “Like most humans, I’m deeply morally flawed. I know utilitarianism is true, but I’m too weak to live by it. Saint Paul had it right: ‘For the good that I would I do not: but the evil which I would not, that I do.’”
On reflection, however, a variation on the Argument from Hypocrisy is largely immune to the Pauline reply. I call it the Argument from Conscience.
Instead of harping on utilitarians’ moral weakness, the Argument from Conscience begins by singling out the most morally exemplary utilitarians.
Take Bill Dickens. I’ve known Bill for almost a quarter-century. In all these years, I have repeatedly witnessed him spontaneously take unpleasant actions out of a sense of moral duty. I have never witnessed him treat another person badly. Ever.
While Bill Dickens is a man of conscience, he’s also officially a utilitarian or near-utilitarian. How could his extreme scrupulousness possibly discredit his utilitarian philosophy? Simple. Like every other utilitarian, his behavior is wildly at odds with utilitarianism’s demands.
Although Bill gives generously to charity, he consumes far more than he needs to keep working. He skis in Colorado. He goes to GenCon. Bill also clearly prioritizes his contractual obligations above the desperate need of total strangers – even when repeated play is unimportant. If Bill forgot to tip a waiter, he would strive to make amends to the aggrieved waiter – not mail the waiter’s tip to Oxfam.
The upshot: If Bill Dickens told me, “Like most humans, I’m deeply morally flawed. I know utilitarianism is true, but I’m too weak to live by it,” I wouldn’t believe him. Bill is a paragon of decency. If he really believed he morally owed vast sums to the poor, he’d skip GenCon and fork over the money. Since he doesn’t, I infer that despite his official position, utilitarianism seems almost as crazy to him as it does to me. The same goes for every earnest yet non-compliant utilitarian. Utilitarianism doesn’t just go against their interests. It goes against their consciences.
To put the Argument from Conscience conversationally: “You live by your conscience. If you really thought utilitarianism was true, you would live up to it. Yet you don’t. If even scrupulous utilitarians like you don’t take the view seriously, why should anyone else?”
And that, my utilitarian friends, is the Argument from Conscience. The problem isn’t that your doctrine is too good for you. The problem is that you’re too good for your doctrine.
READER COMMENTS
Scott Young
Jul 20 2014 at 10:57pm
Why is it so hard to believe that a moral system could exist which sets its ideal too high for any person to reasonably follow?
After all, we accept many other hypothetical ideals for which reality falls far short and yet are still useful theoretical constructs. Human rationality in economics. The frictionless plane in Newtonian physics.
Why not the utility-maximization principle as a version of idealized morality?
Perhaps utility maximization is an insufficient description of moral reality, just as the other assumptions from science are simplifications. But that’s hardly a sufficient argument for rejecting them outright.
adbge
Jul 20 2014 at 11:10pm
The Argument from Nihilism: If you have to consistently follow your moral philosophy for it to be valid, moral nihilism (or close enough) becomes the only tenable position.
The Argument from Coincidence: If you’ve more-or-less built a Frankenstein morality that just *happens* to place you into the category “good, decent human being”… well, isn’t that a fine coincidence?
Kevin Dick
Jul 20 2014 at 11:17pm
I would make the obvious Hansonian reply to your objection, Bryan.
In far mode, I’m a utilitarian. In near mode, I’m merely a decent person. As a human, I am incapable of being a full utilitarian in near mode. Unfortunately, many of the acts to which you refer require a near mode interaction and so I fail to take them.
But that really has no bearing on whether a _collection_ of human beings is morally obligated to build _institutions_ that help them be as utilitarian as possible over the long term.
Hopaulius
Jul 21 2014 at 12:23am
I’m not sure that “giving all your surplus wealth to the needy” is the best way to help the needy. For example, surplus wealth, in the form of investment, funds research that cures diseases. It also drives technological progress, which in turn improves the food and water supplies. The best thing for the needy would be a roaring economy.
Eelco Hoogendoorn
Jul 21 2014 at 2:48am
“Why is it so hard to believe that a moral system could exist which sets its ideal too high for any person to reasonably follow?”
They ‘could exist’ (not sure about your precise intended meaning there), but their preachers will find they have a hard time being taken seriously.
“Perhaps utility maximization is an insufficient description of moral reality, just as the other assumptions from science are simplifications. But that’s hardly a sufficient argument for rejecting them outright.”
‘Perhaps creationism is an insufficient description of physical reality, but that’s hardly a sufficient argument for rejecting it outright’
Poor analogy, of course. Morality isn’t descriptive; it’s prescriptive. It’s about talking people into doing the things that we want them to do. Physical reality doesn’t factor much into it; it is at most a décor for our rhetorical antics.
But being an effective moralizing agent requires you to build coalitions, and that’s where hypocrisy factors in. Try playing the iterated prisoners dilemma with a ‘don’t do as I do, but do as I say’ attitude, and see what happens.
Nick
Jul 21 2014 at 8:28am
Can’t a utilitarian believe in maximizing the weighted sum of human happiness, with a proportionally larger weight on themselves?
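One hypothetical way to write down what Nick is asking (the notation here is mine, purely for illustration, not anything from the post or the comment) is an objective that adds up everyone’s happiness but puts extra weight on one’s own:

% Illustrative notation only (an assumption, not from the post): u_i is person i's happiness,
% and alpha > 1 is the extra weight placed on one's own happiness; alpha = 1 recovers
% plain total utilitarianism.
\[
  U \;=\; \alpha\, u_{\text{self}} \;+\; \sum_{i \neq \text{self}} u_i ,
  \qquad \alpha > 1 .
\]

Whether such a self-weighted objective still deserves the name “utilitarianism” is roughly what Greg Heslop and Eelco Hoogendoorn take up below.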
RPLong
Jul 21 2014 at 8:44am
This is why I think virtue ethics is the way to go. Generosity is a wonderful force, but giving all your money away to other people violates the principle of practical wisdom.
Yes, we all want to live a virtuous life, but that means having the good sense not to sacrifice all other virtues for the sake of one of them.
Greg Heslop
Jul 21 2014 at 9:04am
If I had to choose between a life of ten utils lasting 80 years, and a life of nine utils lasting forever (so that each “period’s” payoff might follow the series 8, 0.9, 0.09, 0.009, etc., or some other distribution), I would take the latter.
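For concreteness, here is a minimal check of the arithmetic on the illustrative payoffs above (after the first period the stream is geometric with ratio 1/10):

% The payoff stream of the eternal life totals nine utils, one less than the
% ten utils of the 80-year life.
\[
  8 + 0.9 + 0.09 + 0.009 + \cdots \;=\; 8 + \frac{0.9}{1 - 0.1} \;=\; 9 \;<\; 10 .
\]

So a total-utility maximizer would pick the 80-year life.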
To me, that is the best argument against utilitarianism: that other things besides utility matter. Of course, a utilitarian may object that I am mistaken somehow.
@ Nick,
I believe there are so very many versions of utilitarianism that one can basically construct any sort of theory in which (1) consequences matter and (2) people other than oneself are morally significant, and call it a version of utilitarianism. There may exist several hundred such versions. I like the variation which only cares about minimizing disutility myself (though I am not an adherent of it – I only think it a nice thought).
gwern
Jul 21 2014 at 9:53am
I don’t see how the Dickens version is anything but the original argument plus some tendentious psychologizing & no true Scotsman thrown in for rhetorical effect. (‘But Bryan, I *do* feel bad that I can’t live on less.’ ‘Sorry Bill, but I know you too well; and that’s No True Remorse you feel! QED, utilitarianism is false.’)
FB
Jul 21 2014 at 10:13am
@Scott
“After all, we accept many other hypothetical ideals for which reality falls far short and yet are still useful theoretical constructs. Human rationality in economics. The frictionless plane in Newtonian physics.”
Splitting hairs, maybe, because I agree with your central point, but this is a false analogy. Utilitarianism doesn’t attempt to model or approximate reality, unlike these unrealistic (insofar as they don’t agree with more accurate accounts of the world) albeit useful constructs. Rather, it serves as an ideal.
Tiago
Jul 21 2014 at 11:03am
I quote Joshua Greene, who wrote the best defense of utilitarianism I am aware of, in Moral Tribes.
“Being a flesh-and-blood utilitarian does not mean trying to turn yourself into a happiness pump. To see why, one need only consider what would happen if you were to try: first, you wouldn’t even try. Second, if you were to try, you would be miserable, depriving yourself of nearly all of the things that motivate you to get out of bed in the morning (that is, if you still have a bed). As a halfhearted happiness pump, you would quickly rationalize your way out of your philosophy, or simply resign yourself to hypocrisy, at which point you’d be back where you started, trying to figure out how much of a hypocrite, and how much of a hero, you’re willing to be.
At the same time, being a flesh-and-blood utilitarian doesn’t mean being a complete hypocrite, giving yourself a free pass.”
As Tyler Cowen would say, read the whole thing.
(Not That) Bill O'Reilly
Jul 21 2014 at 2:00pm
Virtually every critique I have ever seen of utilitarianism can seemingly be boiled down to “there is no good way to measure utility on an objective basis.”
Which is entirely true, and the reason I don’t personally identify as a utilitarian at any practical level, but also largely beside the point when discussing abstract moral first principles.
Every commendation of non-utilitarian morality can seemingly be boiled down to “this is a good heuristic for utilitarianism” (excepting those that rely on the premise “Because someone says so”).
Randall Randall
Jul 21 2014 at 3:19pm
Well, Christianity does claim exactly that it is possible to be good in theory, but not possible in practice (“no, not one”).
JA
Jul 21 2014 at 3:57pm
I am not sure this critique applies to rule-based utilitarianism. Would it really be a utility-maximizing world (in the medium to long run) if everyone gave away everything they did not need to live? I’d think we’d be lucky to be living with technology and comforts equivalent to those of the 1700s if that were the universal principle.
Maybe people trying to make themselves and their families happy without doing violence to others, and giving 10-20% of their income to the less fortunate, is a better rule for maximizing utility in the long run.
Bryan’s hypothetical surely seems very near-sighted. Also, non-orphaned children have parents, and parents do respond to incentives concerning their children or prospective children.
Eliezer Yudkowsky
Jul 21 2014 at 4:37pm
I think someone needs to define “utilitarianism” in enough detail that someone like me can figure out what to say to this. Are we supposing that people who try to do the thing that maximizes aggregate welfare after taking into account all side effects, including side effects on themselves, are somehow worse people than those who pursue some other policy? Or are we supposing that they’re in predictable error about what maximizes aggregate welfare and can’t simply be informed about this fact? Do you think that the sum of all aggregate welfare is actually better if Bill doesn’t tip the waiter and instead sends it to GiveWell? (Is Oxfam an especially efficient charity? I don’t remember hearing about it in any effective altruist circles.) It is not obvious to me that this is actually the case, and I certainly think that what I care about here is the aggregate welfare of Earth’s future light cone.
Thomas Colthurst
Jul 21 2014 at 5:41pm
“[T]he Argument from Conscience begins by singling out the most morally exemplary utilitarians.”
I think that there is some equivocation going on here with the concept of “morally exemplary”.
For the argument to make sense, it would have to be that morally-exemplary-according-to-utilitarianism people fail to be good utilitarians. But the standard Bryan seems to be applying (to Bill Dickens, for example) is of morally-exemplary-according-to-conventional-morality.
Surely it would be no argument against virtue ethics (say) if I was hypothetically good-according-to-utilitarianism but bad-according-to-virtue-ethics. And I think that remains the case even if there were many more people like that, and even if we all loudly proclaimed our belief in virtue ethics. All that would show is that many people are at least sometimes bad at being able to act on their beliefs.
Finally, it is certainly true that standard presentations of utilitarianism make it easy to see that it is a “demanding” morality in the sense that it says that it is better to give half your income (like my coworker Jeff Kaufman does) than just 20%. But it isn’t clear to me that this is really any different than conventional morality saying that it is better to be as kind as Mr. Rogers was. I personally would find it vastly easier to give away half my income than to be that kind!
Now to be fair, conventional morality does have the concept of supererogation – the idea that some acts are good but not morally required. But most formal moral systems of the type philosophers talk about (utilitarianism obviously, but also Kantian ethics, etc.) don’t include room for supererogation, probably because there are good standard philosophic arguments against supererogation.
Mark V Anderson
Jul 21 2014 at 9:14pm
Bryan has greatly over-simplified utilitarianism to be the “greatest good for the greatest number.” And a rather simple-minded version of that philosophy to boot.
As Nick says, one can be a utilitarian and still put a greater weight on consequences to oneself. As JA says, even helping children can sometimes be harmful in the long run. As any libertarian knows, helping oneself often results in a greater benefit to society than directly helping others. The invisible hand, anyone?
To me utilitarianism just means that consequences are what matters in ethics. The idea that one is obligated to take a specific action regardless of consequences is nuts in my world-view. Many libertarians seem to feel that any and all liberty is a good thing even if it causes more harm than good. I am a constant proponent of liberty because it almost always results in better lives, but I feel it is terrible policy to support it regardless of its consequences.
Tony Mercurio
Jul 21 2014 at 9:55pm
Since the subject matter here is philosophy, perhaps I may be permitted to argue from logic. The short version goes something like this:
Why should any moral philosophy (utilitarianism included) that has no objective point of reference be given our consideration? If one wishes to assert relativism, then assert it plainly (knowing that it ends up in the fallacy of self-contradiction). But if one wishes to assert objective truth, then consider no further than a Creator.
A moral law is only valid if there is a moral law-giver. We cannot transcend ourselves; anything that explains life as we know it and prescribes a law accordingly must be outside of ourselves and our universe. You may then be inclined to posit a multiverse, which, however, cannot explain itself either.
So now assuming God, why would we seek the philosophy of men? At best it can be derivative, but certainly it will be deficient as a worldview.
Eelco Hoogendoorn
Jul 24 2014 at 3:31am
I certainly think that this is what you want others to believe; we are all utilitarians when explaining our motivations to others.
If you are smart about it, you can hide your actual revealed preferences behind skepticism concerning the question of whether the unborn AIDS-infected baby really needs that 5 dollars more than you do; you don’t know its utility function, after all, and who knows how much good you will do down the line by investing that 5 dollars in yourself?
It’s the best of both worlds: you continue to spend 99% (give or take) of your resources directly on yourself, while at the same time not directly reminding other people of the sometimes uncomfortable fact that we are all creatures driven by self-interest, first and foremost.
Personally, I feel more comfortable around people who can muster a shred of honesty, if only with themselves. But granted, I’m the odd one out, and your strategy appears perfectly well attuned to the sensibilities of the median Homo sapiens.
Eelco Hoogendoorn
Jul 24 2014 at 4:14am
Because then the concept might actually have some bearing on empirical reality, and a weighted utility function, ironically, has no utility as a moral yardstick.
It’s exactly the comical crookedness of this yardstick, combined with the fact that hardly anyone will point out this elephant in the room, that makes it such a popular yardstick to measure others by.
Mark V Anderson
Jul 24 2014 at 7:51pm
Tony — Who says God is more moral than humans? Presuming you can somehow obtain this moral viewpoint from God, I still don’t see why this other creature should tell humans how to behave.