Here are my reactions to your latest round of comments.
The more I’ve discussed moral philosophy with people, the more I suspect lots of people are just intuitionists in denial. Scott Alexander, who calls himself a utilitarian, was admirably upfront about this when he wrote:
It seems to boil down to something like this: I am only willing to accept utilitarianism when it matches my moral intuitions, or when I can hack it to conform to my moral intuitions. It usually does a good job of this, but sometimes it doesn’t, in which case I go with my moral intuitions over utilitarianism.

Which raises the obvious question – why not just say you’re an intuitionist then and save yourself all the effort? A lot of brainpower is spent by very smart people trying to find ways to hack their ethical theories in the way Scott describes. Most of these efforts just seem like obvious attempts at reverse engineering an explanation to get to the desired and intuitive conclusion, and the obviousness of this backwards reasoning only further discredits that ethical theory to anyone who doesn’t share it.
I’d say, and I think Huemer would agree, that intuitionism is a meta-ethical theory while utilitarianism is a first-order ethical theory. So technically, you could be an intuitionist utilitarian or a non-intuitionist non-utilitarian. Substantively, though, you’re right. In the Huemerian framework, the correct formulation is that most so-called utilitarians are “moderate deontologists in denial.” And as your “reverse engineering” point suggests, utilitarianism frequently degrades the quality of social science, because one of the easiest ways for utilitarians to retain their ethical theory is to embrace implausible factual beliefs.
It’s hard to see how demandingness is a particularly strong objection to utilitarianism. All moral theories can be extremely demanding in certain circumstances.
Yes. The point is that utilitarianism is demanding in almost all real-world situations, not just odd hypotheticals. It’s not implausible that you would have extreme moral duties in an emergency. What’s implausible is that you have extreme moral duties almost all the time. (Alternatively, that we’re almost always in a moral emergency.)
John’s child is dying from cancer. He cannot afford to fly her to a country offering pioneering treatment. Let’s assume it has a reasonably high chance of saving her life. Can John hack into Bill Gates’ bank account and steal the money he needs?
I take it that Caplan thinks this would be wrong; and yet what could possibly be more demanding than a moral principle which requires you to step back and watch your own child die from cancer? In fact, utilitarianism would probably be less demanding in this case, since you may well increase overall utility by stealing the money.
Actually, I don’t think this would be wrong. Like Huemer, I endorse a moderate deontological view that allows rights violations when the benefits vastly exceed the costs.
On utilitarianism, Huemer writes: “To a first approximation, you have to give until there is no one who needs your money more than you do.” But, no, this is not a consequence of utilitarianism. Huemer is overlooking the fact that in utilitarianism future people count just as much as present people. If I invest my surplus wealth, I will benefit future people, probably doing more good than if I gave it away to people who will consume it immediately.
First of all, this objection only works if you’re choosing between investment and charity. It does not work if you’re choosing between consumption and charity. So utilitarianism still requires you to bring your consumption down to near-zero.
Second, as long as people in the future are likely to be much richer than people today, the law of diminishing marginal utility means that it could still easily be better to give to charity today. (This is also the least-bad objection to Robin Hanson’s view that effective altruists should take all the money they would have given to charity and invest it in order to help a vastly larger number of recipients in the far future, à la Benjamin Franklin.)
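To make the diminishing-marginal-utility point concrete, here is a minimal numeric sketch. Everything in it (log utility, the 5% return, the fifty-year horizon, and the consumption levels) is an illustrative assumption of mine, not an estimate anyone in this exchange has offered:

```python
import math

# Illustrative assumptions only: log utility, a 5% real return,
# and recipients whose baseline consumption differs by era.
def marginal_gain(base_consumption, gift):
    """Utility gain from adding `gift` to someone consuming `base_consumption`."""
    return math.log(base_consumption + gift) - math.log(base_consumption)

gift_today = 1_000            # dollars donated now
r, years = 0.05, 50           # assumed real return and investment horizon
gift_later = gift_today * (1 + r) ** years  # ~$11,467 after compounding

poor_today = 1_000            # assumed annual consumption of a poor recipient now
rich_future = 50_000          # assumed consumption of a much richer future recipient

print(marginal_gain(poor_today, gift_today))    # ~0.693: the gift doubles consumption
print(marginal_gain(rich_future, gift_later))   # ~0.206: despite more than 11x the money
```

Under these assumptions, giving today wins even though the invested gift compounds more than elevenfold, because the future recipient is fifty times richer. Raise the assumed return or impoverish the assumed future and the ranking flips, which is exactly why the argument turns on how much richer the future actually is.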
(Also, given the inherent self-centeredness of virtually all people, the expectation of give-aways would undermine the recipients’ incentives to be productive.)
Almost all utilitarians would agree that government shouldn’t tax people into penury due to the bad incentives. But these bad incentives only exist because almost no one is a consistent utilitarian! A consistent utilitarian would keep working hard despite the lack of selfish payoff.
READER COMMENTS
Parrhesia
Jul 8 2021 at 12:04pm
I have found the rationalist community somewhat frustrating in their discussions of utilitarianism. I do respect them because they are an intelligent group, but I think this is a major blind spot. I had never seen that quote from Scott, but that seems to be the way he operates. I wrote a piece against an essay in which he tries to justify following his intuitions rather than following utilitarian obligations. It discusses a distinction he made between “axiology” and “morality,” which I found illogical. I also attacked the idea of Moral Offsetting. I don’t see it making any sense; I think it is a form of guilt management.
Article: https://parrhesia.substack.com/p/contra-alexander-on-moral-offsetting
Something interesting that I didn’t see Caplan talk about in his review is animal ethics. Caplan has raised the point about moral concern for bugs with regard to veganism, which I had read previously. However, it was when Scott Alexander mentioned bug utility in the context of utilitarianism that I had my idea. It only took a little math to demonstrate that bug suffering could be the worst problem in the whole world, orders of magnitude larger than human suffering. Basically, it would be a massive utility monster.
Quotation here:
From a quick google, there are 10,000,000,000,000,000,000 bugs alive at any given time. The number of bugs one human interacts with or could potentially interact with is huge. If bugs have even a very small fraction of the capacity for suffering or pleasure that humans have, then it makes me think that utilitarianism’s primary concern, by orders of magnitude, is how we are going to treat bugs. Perhaps rather than dedicating 50% of our income to saving starving children in the developing world, we should be using 50% of our income to create a farm with hundreds of trillions of happy ants.
We do not even have to be sure that bugs suffer. We can say that bug suffering is unlikely and that there is only a .01 chance. We could also say that bug suffering only amounts to .01 of human suffering. Of course, there would actually be a probability distribution over potential suffering, but consider this simplified version. Multiply these and arrive at .0001. Multiply this number by 10,000,000,000,000,000,000 and arrive at 1,000,000,000,000,000 units of suffering capacity. If I round the number of humans on earth up to 10 billion for the sake of simplicity, then the “problem of bug suffering” is 100,000 times more important than the problem of human suffering. Raw numbers are only a very rough guide to moral importance; we would have to evaluate average life quality and other factors as well. I have to be a bit reductionist for clarity’s sake.
Article: https://parrhesia.substack.com/p/insect-suffering-as-the-biggest-utility
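For concreteness, here is the quoted arithmetic as a minimal Python sketch. The two parameters are the deliberately conservative assumptions stated in the passage, not empirical estimates:

```python
# Reproduces the back-of-the-envelope arithmetic from the quoted passage.
bugs = 10**19                 # bugs alive at any time (the "quick google" figure)
humans = 10**10               # rounded up to 10 billion for simplicity
p_bugs_suffer = 0.01          # assumed probability that bugs can suffer at all
intensity_ratio = 0.01        # assumed bug suffering relative to human suffering

weight_per_bug = p_bugs_suffer * intensity_ratio   # 0.0001 human-equivalents
bug_problem = bugs * weight_per_bug                # 1e15 human-equivalents
print(bug_problem / humans)                        # 100000.0, i.e. 100,000x
```

The conclusion is driven almost entirely by the population count: any non-vanishing values for the two assumed parameters leave the ratio enormous.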
Would any utilitarian concede my point and start spending a bunch of time figuring out how to make bug life feel better? I don’t think so. I got objections like we should focus on humans first, or that we don’t know how to make bugs feel better so there is nothing for us to do. But there is no good utilitarian reason to focus on humans first. And not knowing how is not a good reason not to try. If bug suffering is even just 1,000 times more important, then this is as if there were a disease killing 1,000x as many people as COVID and we were saying, “let’s focus on COVID first because we can figure out how to fix that one.”
You could make objections about neurons, but then you would have to hold that nematode suffering is the most important problem in the world. You could argue that suffering scales non-linearly with neurons, but then big-brained people would be significantly more important due to their additional neurons. Depending on how you set up the scaling, intelligent people could be orders of magnitude more important. I got into a disagreement, and someone set the parameters of the argument such that bug suffering was insignificant; they said, okay, I think bug suffering is 0.0000000…01 of human suffering. Something they would not have done unless I had already proposed the problem! As you said in the post, this leads to empirical beliefs that may be wrong.
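To see how much the choice of scaling matters, here is a minimal sketch. The neuron counts are standard ballpark figures (about 302 for a nematode, roughly 8.6e10 for a human); the nematode population and the power-law weighting are illustrative assumptions, not numbers from the original discussion:

```python
# Hypothetical moral weight that scales as neurons**k.
def total_weight(population, neurons, k):
    """Aggregate moral weight of a population under a power-law scaling."""
    return population * neurons ** k

nematodes, humans = 10**21, 10**10   # loose order-of-magnitude populations
for k in (1.0, 2.0):
    ratio = total_weight(nematodes, 302, k) / total_weight(humans, 8.6e10, k)
    print(f"k={k}: nematode problem is {ratio:.2g}x the human problem")
```

Linear scaling (k=1) makes nematodes dominate by sheer count, while quadratic scaling hands the argument to big brains; the conclusion swings by roughly eight orders of magnitude on a single free parameter, which is the reverse-engineering worry in miniature.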
Lance S. Bush
Jul 8 2021 at 12:16pm
I don’t share the notion that it’s implausible we’d have extreme moral duties. Intuitions differ on the matter, and I don’t see any reason to privilege Huemer’s or your intuitions over mine, and I don’t generally buy into intuitionism as an adequate way to determine what (if any) moral duties we have. You have one set of values, and I have a different set of values. Mine happen to better accord with utilitarianism. I don’t think yours are mistaken, but I do think that the attitude that something strikes you as implausible or absurd is at best only an indication that it does not accord with your personal preferences or values. I don’t regard it as significant evidence that it’s literally false.
For what it’s worth, utilitarians have much to say about the demandingness objection. A utilitarian could point to humans, with their limited epistemic, emotional, and motivational resources, as the problem.
By way of analogy, consider a goal: reaching the summit of a mountain. Suppose your only way to get up the mountain was to drive an old rickety car. The car just isn’t up to the task. Nevertheless, a goal is a goal, and you can at least do your best to get as close as you can to the summit. If, while doing so, you can make adjustments to the car, or even fix it entirely, so that you can reach the summit, all the better (to translate this part of the analogy: this is why as a utilitarian I’d favor transhumanism: simply modify humans so that we can give until it hurts).
The point of this analogy is that goals aren’t implausible or bad or not worth pursuing merely because your means of pursuing them (in this case, the limitations of human psychology) are inadequate. The utilitarian says that, in principle, it would be good to eliminate all the suffering in the world.
If you or I cannot achieve that, does that mean that suffering is no longer morally bad, simply because it seems so onerous to place that burden on you or me? I don’t have that intuition at all. Quite the contrary, I really do think all suffering is bad, and if you or I just can’t muster the will to eradicate it, the fault lies with us, not with utilitarianism. Personally, I find it appalling that anyone would think our moral duties are circumscribed by our psychological resources, such that we have no obligation to augment ourselves to do as much good as we can, asymptotically approaching a world with as little suffering as we can possibly achieve, and attempting to produce as much happiness as possible. To the extent that I have anything like a moral intuition, it’s that it really would be morally best to maximize happiness and minimize suffering; I just happen to think humans are pretty poor vehicles for achieving this goal.
Another problem I have with this line of reasoning is the notion that the bar for what morality requires of us should be set at…well, what, exactly? Are we to adopt some kind of status quo approach where morality can only demand of us action that doesn’t obligate us to do much that’s different from what we’re already doing? That sure seems to be what demandingness objections lean on. I don’t see what the rationale is here. We wouldn’t necessarily reject a theory in physics because it required us to alter our perspective on how reality works too much, nor would we reject the theory of evolution because it’s, as Dennett puts it, a “strange inversion of reasoning.” If there are moral facts in the same way that there are facts about physics or biology, they may deviate a fair bit from how things seem to us. After all, it may be that our sense of what is morally required of us is deeply skewed by a bias towards moral apathy, a bias in favor of the status quo, and so on. I’d also note that how demanding a moral theory seems could be variable and contingent on our individual psychologies. There are plenty of effective altruists who donate huge sums of money to charitable organizations. And there are people who dedicate their lives to helping others at huge personal cost to themselves. I think these people are morally better than us. Simply because most of us aren’t so altruistic (myself included) doesn’t mean we ought not be doing what they are.
So is utilitarianism demanding? Absolutely: and that’s exactly why I’m in favor of it. My problem isn’t utilitarianism. It’s the shortcomings of humanity.
Philo
Jul 8 2021 at 1:10pm
You should admit that a wealthy utilitarian ought not to give away the bulk of his wealth: he should retain and invest most of it, while living a Spartan lifestyle himself. (The exact specification of “Spartan” remains to be spelled out.) You qualify this admission by imagining a future in which everyone is relatively well-off; but that is only a remote possibility: for the next few generations there are likely to be billions of absolutely poor people. You also seem to admit that, in practice, my regularly acting in a conventionally charitable fashion will tend to have an adverse effect on the incentives of poor people. As you yourself have maintained, giving to the poor is probably supporting someone’s irrational, self-defeating lifestyle.
Let me add that charity is also infused with a knowledge problem: I know if I am hungry, and whether a hamburger or a fish sandwich will better combine nutrition with tastiness (thus accomplishing two good ends). I do not know whether you are hungry, or what you consider tasty. My epistemic position argues for a division of labor: I will aim primarily to satisfy my desires, while you take primary responsibility for your own. It is reasonable to extend my concerns somewhat beyond myself, to family, friends, community, (perhaps nation,) all presently existing people, even all people who will ever exist. But my knowledge gets scantier as the groups get bigger and more remote, and my level of practical concern declines accordingly. These are utilitarian considerations: the gain in happiness from my buying and eating a fish sandwich (or whatever) is practically certain; the gain from my delivering a fish sandwich, or even giving a comparable amount of money, to a remote beggar—while it might be greater—is very uncertain.
The knowledge problem would apply in a world comprising exclusively utilitarians; it applies with even greater force in the actual world, composed almost entirely of largely selfish people.
Furthermore, in the actual world, a utilitarian cannot expect that handing over his resources to one of these selfish people will produce a better result in the long run. Suppose that he later became needy and required the assistance of someone who had more than she “needed” (however this is spelled out). He would have every reason to expect not to receive this charity. He should, therefore, save for his own possible later maintenance, since no other resource is reliable should he fall into need. And if a recipient of his earlier charity were enabled thereby to become wealthy herself, she would predictably lavish luxuries on herself rather than acting in a utilitarian fashion: from a utilitarian point of view, surrendering control of resources to non-utilitarians is dangerous.
Finally, let me mention the value of the rich person’s luxurious consumption as demonstration. As mankind becomes richer, more and more people will be dealing with lifestyle choices that the poor do not face. By trying to live well on the surplus wealth at his disposal, the rich person provides an example for the nouveaux riches. As more and more people become rich, they will be able to look back on such examples, the better to get the greatest value out of their new wealth. (These examples may be either positive or negative, or some of each.) This consideration loosens somewhat the restriction to a “Spartan” lifestyle, in the interest of “experiments in living.”