I disagreed with most of what Tyler Cowen said in his recent interview with Ezra Klein, but this part launched a vociferous internal monologue:
Ezra Klein
The rationality community.
Tyler Cowen
Well, tell me a little more what you mean. You mean Eliezer Yudkowsky?
Ezra Klein
Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.
Tyler Cowen
Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.
Here’s how I would have responded:
The rationality community is one of the brightest lights in the modern intellectual firmament. Its fundamentals – applied Bayesianism and hyper-awareness of psychological bias – provide the one true, objective vantage point. It’s not “just another kind of religion”; it’s a self-conscious effort to root out the epistemic corruption that religion exemplifies (though hardly monopolizes). On average, these methods pay off: The rationality community’s views are more likely to be true than any other community I know of.
Unfortunately, the community has two big blind spots.
The first is consequentialist (or more specifically utilitarian) ethics. This view is vulnerable to many well-known, devastating counter-examples. But most people in the rationality community hastily and dogmatically reject them. Why? I say it’s aesthetic: One-sentence, algorithmic theories have great appeal to logical minds, even when they fit reality very poorly.
The second blind spot is credulous openness to what I call “sci-fi” scenarios. Claims about brain emulations, singularities, living in a simulation, hostile AI, and so on are all classic “extraordinary claims requiring extraordinary evidence.” Yes, weird, unprecedented things occasionally happen. But we should assign microscopic prior probabilities to the idea that any of these specific weird, unprecedented things will happen. Strangely, though, many people in the rationality community treat them as serious possibilities, or even likely outcomes. Why? Again, I say it’s aesthetic. Carefully constructed sci-fi scenarios have great appeal to logical minds, even when there’s no sign they’re more than science-flavored fantasy.
P.S. Ezra’s list omits the rationality community’s greatest and most epistemically scrupulous mind: Philip Tetlock. If you want to see all the strengths of the rationality community with none of its weaknesses, read Superforecasting and be enlightened.
P.P.S. By “extraordinary” I just mean “far beyond ordinary experience.” People who take sci-fi scenarios seriously may find this category hopelessly vague, but it’s clear enough to me.
READER COMMENTS
Matt Raft
Apr 4 2017 at 1:31pm
Hasn’t sci-fi always been based on existing projects in R&D? The British/Netflix series Black Mirror is mostly real… and yet sci-fi.
blacktrance
Apr 4 2017 at 1:35pm
The utilitarian can easily answer most of those counter-examples (the provided responses are mostly satisfactory), and with some minor modifications (e.g., rule consequentialism) can address the rest as well.
To borrow from Bertrand Russell, I don’t subscribe to utilitarianism, but if anything could make me do so, it’d be the common arguments against it.
psmith
Apr 4 2017 at 1:46pm
…he posted, on the Internet.
Andrew_FL
Apr 4 2017 at 2:20pm
I reiterate my desire to see Bryan Caplan debate Scott Sumner on the merits of utilitarianism. The fact that one EconLog blogger is a big proponent of it and another a strong critic is an elephant in the room, and I think a debate would be quite illuminating as to the root of the difference.
Tracy Flick
Apr 4 2017 at 2:20pm
I think what distinguishes Bryan slightly from the rationalist community is his philosophy background. In philosophy we also have rationalism, but it means something slightly different from the usage above, which I think sometimes verges closer to scientism.
Hazel Meade
Apr 4 2017 at 2:59pm
I wouldn’t say that it is the “one, true, objective vantage point”.
I would say that the self-conscious effort to root out psychological biases is a continuing and constant battle that must always be waged by anyone who is reasonably intellectually honest.
We’ll probably never get to 100% true objectivity, but at least we all ought to TRY. Otherwise, you might as well give up any pretense of serious intellectual inquiry.
londenio
Apr 4 2017 at 3:16pm
One key example of religion-like behavior of the “rationality” community is the absurdly high probability they assign to the scenario where cryonics is a success.
It is possible that we can freeze our heads and then be “resuscitated” in a brave new world, cured of whatever killed us in the first place. This probability is equal to epsilon.
The rationality community admits that it is not certain that cryonics will work. But the reward is high, so it is worth doing, even if epsilon is small.
The problem is that the writings, behaviors, and judgments (plenty in Less Wrong and similar sites) seem to indicate that the rationalists believe that epsilon is substantial. They are willing to pay for the service of freezing their brains, write about it, state that those who deny this service to their children are bad parents, etc. It seems that they think that epsilon is 1 to 5%. But if they were true rationalists and weighed all the evidence, they would conclude that epsilon is actually about 0.001%.
I find that this overstatement of the magnitude of epsilon is a kind of religious promise of an afterlife. Some religions say that if you follow the law/commandments/Jesus, you will live forever after death. The rationalists just tell you that epsilon is thousands of times higher than it really is, out of the pure desire to live forever, or the fear of death. Certainly not based on unbiased analysis of evidence.
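To make the shape of the dispute concrete, here is a minimal expected-value sketch; the cost and payoff figures are purely illustrative assumptions, not anyone’s published numbers. The whole argument reduces to the size of epsilon:

```python
# A minimal expected-value sketch of the cryonics bet. The cost and
# payoff figures below are purely illustrative assumptions.

cost = 80_000        # assumed lifetime cost of a cryonics contract ($)
payoff = 50_000_000  # assumed dollar-equivalent value of being revived

def expected_value(epsilon):
    # Standard decision-theoretic bet: pay the cost, win with prob. epsilon.
    return epsilon * payoff - cost

for epsilon in (0.05, 0.01, 0.00001):
    print(f"epsilon = {epsilon:8.5f} -> EV = ${expected_value(epsilon):>12,.0f}")

# epsilon =  0.05000 -> EV = $   2,420,000   (the rationalists' range: take the bet)
# epsilon =  0.01000 -> EV = $     420,000
# epsilon =  0.00001 -> EV = $     -79,500   (my estimate: a clear loser)
```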
Luke Simpson
Apr 4 2017 at 5:19pm
I feel the same way about consequentialism. The cavalier claims many people make about consequentialism being obviously correct drive me up the wall.
Thomas
Apr 4 2017 at 5:43pm
Bryan’s claim: “The rationality community is one of the brightest lights in the modern intellectual firmament. Its fundamentals – applied Bayesianism and hyper-awareness of psychological bias – provide the one true, objective vantage point.”
Bayesianism is bias clothed in scientific-sounding jargon. Hyper-awareness of psychological bias is itself a kind of psychological bias.
There’s a whole lot of psychological projection going on.
Scott Alexander
Apr 4 2017 at 6:53pm
The survey of academic philosophers (https://philpapers.org/surveys/results.pl) shows that about as many of them are consequentialists (= the rationalist view that you’re calling “utilitarian”) as adherents of any other ethical view. If there are “many well-known, devastating counter-examples,” then the people who spend their whole lives studying this kind of thing haven’t heard of them.
Looking at your link, these are objections that have been raised and discussed, the same as there are objections that have been raised and discussed for any philosophical theory (you can find a few hundred of these for, e.g., atheism if you want). As always, people who don’t like the theory think they’re “devastating,” and people who do like the theory point to any of the hundred or so books and papers claiming to have decisively refuted them.
Ilya Novak
Apr 4 2017 at 9:33pm
Bryan’s claim that “its fundamentals – applied Bayesianism and hyper-awareness of psychological bias – provide the one true, objective vantage point” and that “it’s not ‘just another kind of religion’” is precisely Cowen’s point.
This is just replacing the metaphysics of religion with another one, that of the Enlightenment. There is no way to rationally “prove” that Bryan’s claim is true. How could he, other than by using rationality itself? He just accepts that this methodology leads to objective truth, no less than a Catholic accepts that reading the Bible is a methodology that leads to objective truth.
Nor is there any reason to suppose that the psychology of Bryan’s mind is different from that of the religious. He has his own deep psychological motivations to believe in rationality no less than a Catholic does to believe in Christ.
Peter
Apr 4 2017 at 11:07pm
So I just discovered this “rationalist community” as it is called and I have some questions after reading about what the term means as explained at lesswrong.com.
Can two members of the rationalist community disagree? If so, does that mean that one member is wrong (or more wrong)? If so, how wrong can a member be and still be considered rational? I suppose at some point a member could be so “more wrong” that they might be deemed no longer rational? But where is the cutoff point?
But to the point: is it possible for someone to be both fully rational, and fully wrong?
I am not trying to be snarky. I am seriously trying to ask what I think is a very interesting question, one I have been pondering lo these last 30 years or so, ever since I read Socrates’ observation that, when it comes to matters of justice – of right and wrong – even the gods disagree. And I have, in turn, noticed that intelligent men and women of good will continue to do so even today.
Why?
Does the divergence of judgment on right and wrong track with intelligence? My conjecture, based on the evidence I have collected, is that it does not.
If this is true it would mean we need some other explanation – beyond intelligence (or rationality?) – for the lack of consensus on the all-important, crucial question lurking within the heart and soul of many a human being: “what is the right thing to do?”
Now, math is definitely a very rational exercise. And obviously all users of math agree with each other. (OK, that was snarky).
But this process does in fact do a marvelous job of producing (a certain kind of) consensus. Perhaps we could use math to create consensus about justice and those questions that even the gods disagree on? This has been tried, of course. See Leibniz, among others.
What’s that you say? Bayesian? I see. Maybe the Bayesian approach holds the keys of success?
Ah, but can moral questions (the ones most important to the human being) really be reduced to empirical processes?
Your prior on this question may be: yes! But what of those whose prior is: no!
For example, if my prior is that human organs should not be sold for profit because it fundamentally devalues the human being, what sort of “evidence” could convince me otherwise?
Or what if my (very, very strong) prior is that humans should not eat each other, even for survival. What if I choose to starve rather than kill my mate for the feast? What if I stop three others from killing someone for the same reason?
Can you show me an algorithm to change my mind? What is that formula again that tells me what I should do?
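(Here, for the record, is that formula – a minimal Bayes-update sketch with made-up numbers. Notice what it says about my examples: a prior of exactly 0 or 1 can never be moved by any evidence at all.)

```python
# A minimal Bayes-update sketch. The numbers are made up; the point is
# what happens to a "very, very strong" prior of exactly 0 or 1.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

print(update(0.30, 0.9, 0.1))  # ~0.79: a moderate prior moves a lot
print(update(1.00, 0.1, 0.9))  # 1.0: a dogmatic prior never moves
print(update(0.00, 0.9, 0.1))  # 0.0: likewise at the other extreme
```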
I am not sure if these are the best examples, but the point is that I really wonder whether a purely rational approach using statistics and mathematical models can ever really solve the questions that most vex the core identity of the human being.
Einstein once said that not everything that can be counted counts and not everything that counts can be counted.
Can the empirical method answer all of our moral (and some would argue therefore our most important) questions? Really?
I had high hopes for a while but I have grown skeptical. I suspect that these questions closest to the heart – what is the right thing to do? – touch something in the human being deeper than the merely rational.
And by the way, is every bias wrong? Are all of our moral instincts to be dismissed because they are instinctive?
The truth is that the human being is more than merely rational.
And for myself I have come to think that I prefer it this way. After all, if a calculator could determine my most cherished values, or answer the moral questions most important to my core sense of identity, then would I not be reduced to the moral equivalent of a well-coded machine? Can the human being survive such a transformation?
Maybe that day of pure rationality will yet come when the human is transcended and moral choices along with all others will have become standardized computational utility calculations but I fear that something quintessentially human will have been lost.
I suspect that as much as we think we would be better off, in the end there is no escape from the dilemma of being human.
James
Apr 4 2017 at 11:19pm
Andrew_FL,
The debate you propose would be interesting but for one problem: the utilitarianism that Bryan rejects is not the same utilitarianism that Sumner espouses.
Specifically, the standard arguments Caplan cites against utilitarianism are all arguments against the view that it is a demonstrable moral fact that people have a duty to act at all times to maximize happiness.
There may be some utilitarians who believe this, but I don’t think this is what Sumner believes. I hate to speak for others but I suspect Sumner believes something more like “Moral facts may or may not exist but even so the expected results of our actions are the best guide we have when deciding how to behave.”
On the other hand, I would enjoy seeing Caplan debate with himself. On the one hand, he seems to believe that human minds are the result of natural selection. On the other hand, he seems to believe that the intuitions produced by minds are informative on matters outside the fairly narrow scope of what is expedient for survival and reproduction.
Up The Irons
Apr 5 2017 at 12:37am
Extraordinary relative to everyday experience, yes. But rationalists argue that many things about the modern world would be considered extraordinary relative to the everyday experience of those who lived in the past.
So your disagreement is about how much to weight intuitions from one’s everyday life as a prior when predicting the future. The problem with that strategy is that it historically has not worked very well.
mjgeddes
Apr 5 2017 at 12:40am
When the ‘rationalists’ encounter complex phenomena, they’re too quick to jump on one big idea or system that offers the promise of ‘explaining everything’.
If something is simple, then sometimes one big idea really does explain it all (this often works in physics, for example). But for things that are complex, it’s much less likely that there’s one big idea that ‘explains everything’.
For complex phenomena, I’d say rationalists should be looking for a combination of 2 or 3 different big ideas at a minimum, even to get to a rough approximation of the truth.
Ethical theories are a good example of the rationalist mistake at work. Rationalists have tried to jump on the one big idea that is ‘the answer to everything’ (Utilitarianism/Consequentialism), and then they spend most of their time working out all the logical implications in an attempt to apply it everywhere. What happens is that it gets more and more disconnected from reality. An enormous logical edifice is constructed, and the rationalists fall in love with their own logical constructions and ignore empirical reality.
Ethics is a complex phenomenon, and as I said, at least 2 or 3 big ideas are probably needed. So here’s a ‘common-sense suggestion’:
For ethics, why not a hybrid of the 3 main theories?
Virtue Ethics AND Consequentialism AND Deontology (a combination).
The idea is that it may be possible to get a rough equivalence between the 3 main ethical theories in many situations, but sometimes the result of pairing, say, (Deontology, Virtue Ethics) can over-ride or ‘out-vote’ consequentialism – so different considerations are balanced against one another, in such a way that the ethical theories are mutually self-supporting, each having complementary strengths and weaknesses.
Isn’t this ‘triple-aspect’ hybrid model of ethics more likely to give sensible answers than trying to pick only one ethical theory to cover everything?
Alexander Kruel
Apr 5 2017 at 5:47am
Actually, they do not. They know that their framework of rationality and ethics is broken. The problem is rather that they still draw action-relevant conclusions from it other than that it needs to be fixed.
I also disagree here. The real problem is that they realize how hard empirical science is, and that much published research is wrong, yet at the same time are highly confident that their armchair theorizing about much more complex and vague ideas is correct enough to devote all their time and resources to fixing the problems that those ideas imply.
DM
Apr 5 2017 at 6:56am
‘If there are “many well-known, devastating counter-examples”, then the people who spend their whole lives studying this kind of thing haven’t heard of them.’
Wrong. The problem is that all the plausible alternatives to consequentialism that are clearly worked out enough for us to have an idea what they’d say about particular cases also face apparently devastating counter-examples.
Though the general point that consequentialism is about as well regarded as any other theory is correct.
mjgeddes
Apr 5 2017 at 7:58am
An ethical theory that almost no one can follow is a useless theory.
If a trolley is headed towards 5 strangers on one track, and you could save them by diverting the trolley to run over your boyfriend/girlfriend on the other track, would you do it?
Of course not! And that one example right there should be telling you that ‘Utilitarianism’ is a very dumb basis for founding all of ethics on.
And the fetish with ‘Utilitarianism’ is just one of many examples of what makes these ‘rationalists’ so very annoying. No common sense at all.
Ari
Apr 5 2017 at 12:33pm
I agree with this.
Luke Simpson
Apr 5 2017 at 2:25pm
I agree with DM a few responses above, and I imagine most academic philosophers would say the same thing as well. It’s practically par for the course to have a variety of options, all of which face apparently devastating counter-examples. We face this situation not only in the consequentialism/deontology/virtue ethics debate, but many other branches of philosophy as well.
That’s not to say that there aren’t any celebrated successes in philosophy or that we never make any progress. But regarding consequentialism as though it is clearly the best choice is epistemologically too reckless. The development of consequentialist ideas is a success and qualifies as important progress. Establishing that those ideas are correct isn’t a done deal.
Steve J
Apr 5 2017 at 2:30pm
mjgeddes – your example is telling me nothing. I guess you are telling me I should favor people I like over those I don’t know at a rate of 5 to 1. Is this one of those self-evident rules of deontology, like “do not steal”?
BH
Apr 5 2017 at 2:42pm
Scott Alexander should acknowledge that a huge chunk of academics still identify as Marxists. The academy has all sorts of weird selection effects and cultural craziness, so the fact that a lot of people in philosophy departments go for utilitarianism is not, in itself, a terribly meaningful response.
Jeff
Apr 5 2017 at 7:30pm
mjgeddes,
I think I would choose to save the five strangers rather than just saving a girlfriend.
Now if she were holding a cat, that might change things….
Kieran McCarthy
Apr 5 2017 at 8:01pm
At least according to the most recent SSC survey (which is probably the best current data source for the beliefs of this community), a little less than 47% subscribe to consequentialist ethical philosophies.
That’s about the same percentage of Americans who believe in ghosts!
Also, if you look at the same survey, the question about readers’ concerns about AI reveals a roughly uniform distribution of answers, with slightly more readers who are not at all or not very concerned than who are.
Also, I think for most of the sophisticated members of that community, the primary AI-catastrophe concern relates to the “alignment problem,” not to the problem of “hostile AI,” or any of your other examples. But, that said, I would agree that there is a bit too much emphasis on improbable scenarios in the community.
Either way, saying that “most people” in the rationalist community subscribe to any one ethical philosophy would appear to be untrue (though not by much).
mjgeddes
Apr 5 2017 at 8:25pm
Steve,
What the example should be telling you is that there is no one simple ethical calculation procedure that can give answers – instead, we’re actually balancing multiple concerns (2 or 3 major different concerns at least).
In the example, raw numbers of people have *some* importance (utilitarianism), but that is counter-balanced by considering duties to people you know (deontology) and personal virtues (virtue ethics).
Jonathan Gress-Wright
Apr 5 2017 at 11:30pm
BH,
If Marxists predominated in economics departments, you might have a point, but it seems rather that Marxist academics predominate in departments where actual knowledge of economics is irrelevant. Likewise, if economists were largely utilitarian, I could put that down to irrational cultural selection bias, but if philosophers seem to have settled on utilitarianism, I grant it more weight.
Of course, I can already think of some rebuttals to my points, but I’ll let you take over. 🙂
John Halstead
Apr 6 2017 at 8:39am
This is pretty common in anti-consequentialist thought, but I think it fails to engage with the stance of real-world consequentialists. It’s not as though consequentialists are unaware of these objections. Rather, they’ve decided, after sustained consideration, that the objections are not actually decisive. All moral views have very counter-intuitive implications; we have to figure out which counter-intuitive implications to accept. E.g., absolutist libertarianism arguably implies that all pollution should be zero (no cars, no industry), since pollution constitutes a non-consensual bodily harm.
To take the repugnant conclusion as an example: yes, this looks very counter-intuitive. But then it turns out all other population-ethics theories have counter-intuitive implications. Total utilitarianism arguably has a better rationale than all these rival theories.
mjgeddes
Apr 6 2017 at 8:59am
OK, so imagine 6 philosophers walk into a bar. There’s a pair of Deontologists, a pair of Consequentialists and a pair of Virtue ethicists.
Now after consuming too much alcohol, the philosophers announce to the whole bar they are going to ‘solve ethics once and for all’.
Each type of philosopher is a savant who, much of the time, gives excellent ethical advice according to his own philosophy; unfortunately, in certain ‘edge cases’, the advice will be crazy. After conferring for a while, the philosophers hit on a possible solution.
Now for each type of philosopher, one member of the pair agrees to act as an expert in giving ethical advice based on his philosophy, but the other member of the pair agrees to act as an expert in picking holes in the advice of a *different* type of philosopher.
And then the philosophers mixed into new pairs in order to argue. Let us call these new pairings ‘Adversarial Philosophy Networks’ (APNs).
Here’s the APN pairings:
Deontologist vs Consequentialist
Deontologist vs Virtue Ethicist
Consequentialist vs Virtue Ethicist
Often the arguers could agree, but sometimes the crazy ‘edge cases’ would appear, and then the disagreements raged. Here the philosophers agreed to the following procedure: a majority vote would be taken over the outcomes of all the APN arguments, and this would ‘patch’ each type of philosopher’s insanity whenever they encountered their edge cases.
And as the arguments of the philosophers raged, it turned out that every philosopher’s edge cases had a successful ‘patch’, and progress began to be made… and to the astonishment of the whole bar, progress started to accelerate, slowly at first, but more and more rapidly… until the insights were raining down like a raging torrent of enlightenment.
And as the night wore on the sun-rise came and ethics was solved.
Light Bulb Moment
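(The machinery in the story is just majority voting over independent verdicts. A minimal sketch, with invented toy theory-functions and a made-up case format – nothing here is a real ethical calculus:)

```python
# Three toy "savants," each a crude caricature of one ethical theory.
def deontologist(case):
    return not case["violates_duty"]

def consequentialist(case):
    return case["net_lives_saved"] > 0

def virtue_ethicist(case):
    return case["expresses_virtue"]

THEORIES = [deontologist, consequentialist, virtue_ethicist]

def permissible(case):
    # The 'patch': a majority vote out-votes any single theory's crazy
    # verdict on its own edge cases.
    return sum(theory(case) for theory in THEORIES) >= 2

# An edge case where raw consequentialism alone says "go ahead":
case = {"violates_duty": True, "net_lives_saved": 4, "expresses_virtue": False}
print(permissible(case))  # False - consequentialism is out-voted 2 to 1
```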
Keith Henson
Apr 6 2017 at 8:20pm
One of the strangest things I ran into in the last 20 years is a case where being rational is not the best strategy for a person’s genes.
The idea that a person and their genes could differ in what counts as rational fell out of a biology model looking into the conditions where war is a better strategy than starving in place. It turned out that in some circumstances going to war was rational from the genes’ viewpoint, but not for the people involved. Genes are in the game much longer than individual people. They have been selected to survive, and they build humans that reflect this.
Where there is a pending resource crisis and only half the local population will survive, it turns out that going to war (in this simple model) was 37% per war better for genes than starving.
The reason is the propensity of the victors to take the young women of a defeated group as booty, taking them (and their genes) as wives or extra wives for the warriors. From the genes’ viewpoint, this limits the downside risk of going to war, which would (on average) be lost half the time.
So if you wonder where the non-rational decisions to start and fight wars come from: they are expected, because they have been selected for.
Background here:
https://www.academia.edu/777381/Evolutionary_psychology_memes_and_the_origin_of_war
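(A back-of-the-envelope version of that comparison, with purely illustrative numbers; the 37% figure comes from the fuller model in the linked paper, not from this sketch:)

```python
# Gene's-eye expected payoffs, with illustrative assumed parameters.

def gene_payoff_starve(survival_share=0.5):
    # Resource crisis: only half the local population survives, so a
    # random individual's gene copies persist with probability ~0.5.
    return survival_share

def gene_payoff_war(p_win=0.5, female_share=0.5):
    win_payoff = 1.0            # winners take the resources and survive
    lose_payoff = female_share  # losers' young women (and their genes)
                                # are taken as wives by the victors
    return p_win * win_payoff + (1 - p_win) * lose_payoff

starve, war = gene_payoff_starve(), gene_payoff_war()
print(f"starve: {starve:.2f}, war: {war:.2f}, advantage: {war/starve - 1:.0%}")
# -> starve: 0.50, war: 0.75, advantage: 50% (under these toy numbers)
```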
Steve J
Apr 6 2017 at 10:34pm
mjgeddes – you can’t seriously believe utilitarianism cannot account for duties to people you know and personal virtues. Why do you think so many people are utilitarians? It can account for whatever the heck you want it to account for.
FeepingCreature
Apr 8 2017 at 11:10pm
Rationalist here – of course I flip the lever. (I mean, I might not in the moment, because damn, that’s a hard scenario, but I definitely hope I’d be strong enough to do it.)
Knowing nothing in advance, the odds are five times greater that my boyfriend will be in the 5-group than in the 1-group. Consequentially, I want the person at the lever to be committed to switching whether or not their actual boyfriend is in the 1-group, and if they’re rational they’ll agree with this. Well, I’m rational enough to agree, so…
This is just rule utilitarianism in the superrational implementation. The point is that, living in a world with trolley problems, everybody wants the person at the lever to switch it, and doing so does maximize lives saved.
Of course, this does not even faintly generalize, because scenarios with levers that demand one person’s action are usually just invitations to outright murder, and in the generic case you want the person at the “lever” to do nothing. But no, in this particular scenario the repugnant answer is also the correct one, IMO.
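(The ex-ante arithmetic, spelled out – assuming your loved one is equally likely to be any of the six people on the tracks:)

```python
# Compare the "always switch" and "never switch" policies from behind
# a veil: your loved one is uniformly one of the six people at risk.

from fractions import Fraction

on_main_track, on_side_track = 5, 1  # switching diverts onto the side track

def p_loved_one_survives(switch: bool) -> Fraction:
    survivors = on_main_track if switch else on_side_track
    return Fraction(survivors, on_main_track + on_side_track)

print(p_loved_one_survives(switch=True))   # 5/6
print(p_loved_one_survives(switch=False))  # 1/6
# Committing in advance to switch both maximizes lives saved (5 vs. 1)
# and gives your own loved one the better odds.
```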
Peter Gerdes
Apr 10 2017 at 12:00am
All the objections to utilitarianism you mention have been well known for decades and are either not problems at all (you didn’t think morality would be the one area of human endeavor without counterintuitive conclusions?) or have equally compelling responses.
I don’t expect you to take my word for it; merely observe that utilitarianism is still taken seriously by leading moral philosophers. This is despite the fact that utilitarianism’s greater simplicity and computational approach make it particularly difficult to write philosophical papers about. With almost any deontic theory, preference-satisfaction account, or virtue-ethics account, you can take a random social problem and wring a paper out of it: a Kantian analysis of racism or a virtue-ethics account of affirmative action can easily get you papers (even if they don’t add much to philosophy), but utilitarianism doesn’t have enough little knobs to turn to say anything beyond “do the computations.” Yet despite this publication bias it is still highly popular among philosophers.
This is because every other theory has MUCH worse problems. Hell, most moral theories consistently give totally incorrect results unless you intervene to fix them; e.g., Rawlsianism actually collapses into utilitarianism on the most reasonable assumptions (behind the veil, you want to maximize your expectation of utility – by definition – not your minimum possible value), and great gyrations are taken to avoid such interpretations.
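(A toy veil-of-ignorance choice to make that parenthetical concrete; the utility numbers are invented:)

```python
# You will occupy one of the two positions in the chosen society at
# random; compare Rawls's maximin rule with expected-utility choice.

society_a = [10, 10]   # equal, modest payoffs
society_b = [9, 100]   # slightly worse floor, far higher expectation

def expected(u):
    return sum(u) / len(u)

print(max([society_a, society_b], key=min))       # [10, 10] - maximin's pick
print(max([society_a, society_b], key=expected))  # [9, 100] - expected utility's pick
# An expected-utility maximizer behind the veil picks B: this is the
# claimed "collapse" of Rawlsianism into utilitarianism.
```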
Most importantly, utilitarianism behaves the way we expect any mind-independent theory to behave: it is simple, has relatively few knobs, and produces surprises. So if you are a moral realist at all, it gets the greatest weight of probability (and if you aren’t a realist, then who cares – utilitarianism is good enough).
Peter Gerdes
Apr 10 2017 at 1:08am
Wait – most of those aren’t even reasonable criticisms.
Difficulty of quantification/measurement/practical usefulness
This is not a reason to doubt the truth of utilitarianism, any more than the facts that QM is hard, that measurements range from difficult to impossible, and that particle physics has limited usefulness are reasons to doubt the truth of the standard model.
Utilitarianism is a theoretical claim as to what actions (or really, states of affairs) are right/desirable, not a heuristic for living your life. Indeed, the fact that some moral theories confuse those two points is a mark against them. There is nothing inconsistent about utilitarianism yielding the verdict that you should adopt some other, non-utilitarian heuristic for making choices.
higher/lower pleasures
Bentham and Mill were simply snobs who wanted their theory to be appealing to other snobs. Base pleasures are probably just as good – and so what?
Utilitarianism may have been developed by Bentham and Mill, but it is no more constrained by their views than calculus is by Newton’s.
Injustice/Motives/Particular moral obligations
Even pure mathematics yields deeply counterintuitive results, so why should morality be any different?
But having said that, I’ll take a greater expectation of utility over a just distribution any day. Of course, injustice does matter insofar as it reduces utility, so the truly disturbing results are avoided.
Why Utility?
Ultimately this comes down to a comparison of alternatives and a judgment of a priori plausibility. Give me a better option.
—
This didn’t even include any of the serious criticisms of utilitarianism (e.g. discounting and infinite expected utility).