As a professor and public speaker, I’ve spoken to a wide range of student groups. On reflection, my very favorite turns out to be: Effective Altruism. Indeed, I’ve had positive experiences with 100% of the EA groups I’ve encountered.
What’s so great about the Effective Altruists? They combine high knowledge, high curiosity, and high iconoclasm. When I ask EAs if they’ve heard of signaling, or the Non-Identity Problem, or pollution taxes, most of them say Yes. The ones who say No are eager to get up to speed. And if I defend a view that would shock a normal audience, EAs are more likely to be amused than defensive or hostile. They’re genuinely open to reasoned argument.
Though you might expect EAs to be self-righteous, they’re not. EA is a chill movement. While ethical vegans are greatly overrepresented in EA, they’re the kind of ethical vegans who seek dialogue on the ethical treatment of animals, not the kind of ethical vegans who seek to bite your head off.
Most EAs are official utilitarians. If they were consistent, they’d be Singerian robots who spent every surplus minute helping strangers. But fortunately for me, these self-styled utilitarians severely bend their own rules. In practice, the typical EA is roughly 20% philanthropist, 80% armchair intellectual. They care enough to try to make the world a better place, but EA clubs are basically debating societies. Debating societies plus volleyball. That’s utilitarianism I can live with.
Why do I prefer EA to, say, libertarian student clubs? First and foremost, libertarian student clubs don’t attract enough members. Since their numbers are small, it’s simply hard to get a vibrant discussion going. EA has much broader appeal. Anyone who likes the idea of “figuring out how to do the most good” fits in. Furthermore, to be blunt, EAs are friendlier than libertarians – and as I keep saying, friendliness works.
Moreover, while the best libertarian students hold their own against the best EA students, medians tell a different story. The median EA student, like the median libertarian student, like almost any young intellectual, needs more curiosity and less dogmatism. But the median EA’s curiosity deficit and dogmatism surplus is less severe.
The good news is that most EA clubs already contain some libertarians. And the best way to improve both movements is for the libertarians to regularly attend EA meetings. It’s a great chance to spread superior libertarian logos while absorbing superior EA ethos.
When I last spoke at the University of Chicago, one student defended education as a crucial promoter of social justice. In response, I argued that Effective Altruism is what the social justice movement ought to be. EAs know that before you can make the world a better place, you must first figure out how to make the world a better place. This in turn requires you to prioritize the world’s problems – and calmly assess how much human action can remedy each of them. Social justice activists imagine that these questions are easy – and as a result their movement has become one of the world’s major problems. Probably like the twentieth-worst problem on Earth, but still.
Perhaps the main reason why I get along so well with EAs is that their whole movement rests on a bunch of my favorite heresies. First and foremost: Good intentions often lead to bad results. EA exists because many good things sound bad, and many bad things sound good. The very existence of their movement is an attack on Social Desirability Bias and demagoguery. Furthermore, since EAs like to rank social problems by their severity and remediability, their movement is also a thinly-veiled attack on Action Bias and social stampedes. No, we shouldn’t do “all that we can” to fight Covid, or global warming, or anything, because resources are scarce, some problems fix themselves, many problems aren’t worth solving, and many cures are worse than the disease. Once you take these truisms for granted, fruitful conversation is easy. And fun.
READER COMMENTS
David Henderson
Jan 10 2022 at 11:19am
“not the kind of ethical vegans who seek to bite your head off.”
That’s good. Especially since they’re vegans. 🙂
Art K
Jan 10 2022 at 11:46am
This needs to be shouted out from the rooftops. More people need to know about EA.
Philo
Jan 10 2022 at 12:49pm
“If they were consistent, they’d be Singerian robots who spent every surplus minute helping strangers.” No, you are ignoring the knowledge problem and the capability problem. A utilitarian should expend vastly more effort on serving his own interests than on serving the interests of anyone else: he knows vastly more about his own situation, and about the means by which it might be improved, than he does about the circumstances of a random other person. And even if he knew as much about another person’s situation (which is impossible in practice), he would have severe difficulty in effectively bettering it, because the average other person is thousands of miles away from him (whereas, obviously, there is no spatial gap with himself).
Yet another consideration in favor of not bothering much about distant others is that people are naturally inclined to serve their own interests, while action on behalf of distant other people involves a mental strain.
Intelligent utilitarians would behave differently from egoists, but the behavioral difference would not be great, and would chiefly affect their conduct towards family, friends, and neighbors. They would be far from “Singerian robots.”
KevinDC
Jan 10 2022 at 2:10pm
I’ve read of a split between two camps of utilitarians that seems relevant here – one group called “actualists,” and the other called “possibilists.” The dispute is, in essence: should utilitarians take the most utility-maximizing action they could possibly take, or the most utility-maximizing action they would actually take, given the foibles of human nature? Possibilists are more idealistic, whereas actualists think utilitarianism needs to be tethered to facts about how people actually behave. If you make massive demands of people of the sort Bryan describes, most people respond by doing nothing at all. But if the requirements are milder, something like donating 10% of your income to effective charities, people are far more likely to participate. So the actualist would say that actualism is more utility-maximizing than possibilism.
I recall one utilitarian philosopher (I’m blanking on who) describing the difference with something like the following thought experiment:
You’re trying to decide what to do with your night. The best option is to stay home and study for the GRE, since you’ve been considering going to grad school. The second best option is to spend the evening at the pub with your friends, having fun and building social bonds. The least good option is to sit around on your couch binging Netflix. However, you also know that you’re a terrible study. Every single time you’ve decided to crack open the books, you get bored and idle and quickly end up watching Netflix anyway.
In this situation, when deciding whether to stay home or spend time with your friends, the actualist would say to see your friends, while the possibilist would say to stay home. For the same reason, the actualist utilitarian would say “Donate 10% of your income to a GiveWell charity,” while the possibilist would advocate something like what Bryan says. Bryan’s charge of inconsistency might hold water against possibilists, but it’s pretty weak against actualists.
Stefan Schubert
Jan 10 2022 at 3:56pm
Fwiw I’ve written a paper with Lucius Caviola on how utilitarians should behave in the real world, which is heavily inspired by effective altruism. It discusses some of the themes you mention.
https://forum.effectivealtruism.org/posts/q7WwTuZQWMqDEEoWM/virtues-for-real-world-utilitarians
Mark Z
Jan 10 2022 at 4:21pm
It seems pretty clear to me that almost anyone living in the first world could make others better off by giving away all of their property beyond what’s minimally necessary to survive. It doesn’t seem extraordinarily hard to find people who would derive greater utility from it; e.g., they can always use more malaria nets in Africa. It seems like an unconvincing excuse that one just can’t find people who would clearly be better off with your money than you.
The second point only applies to how one should try to influence others to behave, not how one should behave oneself. “Because I know people tend to be selfish, I should only try to convince them to give 10% of their income to poor people,” does not imply, “I’m only obligated to give 10% of my income to poor people.”
Philo
Jan 10 2022 at 8:08pm
On the second point: you and I, like other people, have certain natural tendencies. We can override them, but the effort required is a negative factor, which must be included in evaluating the total outcome.
I disagree with you on the first point.
Jacob
Jan 30 2022 at 2:44pm
You are forgetting one point of utilitarianism: one shouldn’t just care about directional utility but also about magnitude. There is an argument to be made, for example, that one should maximize one’s lifetime income and then give away that amount at death (or most of it upon retirement), on the basis that the aggregate amount would be larger. Or, for your malaria-net example, that investment in a malaria vaccine might have larger risk-adjusted returns in lives saved.
Aaron Stewart
Jan 10 2022 at 1:48pm
I’m curious if there’s any reasonable model of societal progress where this is not the asymptotic state of affairs. At any moment in time, there is some pool of potential changes we could make to how we do things. Some sound good and some sound bad. We can reason about whether any potential change is good or bad, but we can’t be sure until we’ve tried it. Social desirability bias leads us to try the “good-sounding” changes first, leaving a larger proportion of “bad-sounding” changes over time.
Obviously this is an over-simplification, but I don’t see how this wouldn’t happen, so long as the system for deciding what changes to make is complicated enough to suffer from social desirability bias in the first place.
Yes, I understand that you usually leave your keys on the counter or on your desk. But how many times are you going to check the counter before looking in places you don’t expect to find them?
Matthias
Jan 18 2022 at 3:48am
See https://equilibriabook.com/ for examples of good-sounding policies that are good, and still can’t be enacted.
Basically, two-sided coordination problems are hard.
J. Goard
Jan 11 2022 at 9:37am
Maybe it’s all the tabletop RPGs, but it’s never seemed to me like asserting that some description of moral good is correct is inconsistent with choosing to prioritize other things over moral goodness. It’s more like the concept of “financial responsibility” or “healthy lifestyle”: I adhere to these quite a bit, but I certainly don’t maximize them, and there are quite a few circumstances where I’ve gotten intellectual pleasure out of discussing some aspect of them without doing it.
I think I basically believed that (at least something closely approaching) veganism was moral low-hanging fruit for about a decade before I stopped eating animal products daily. I wasn’t “inconsistent”. I just had a correct partial description of moral goodness while not being a very good person. Now, I’m a much better person, and still seeking out moral improvement in some areas, but, even according to my own account of goodness, I don’t plan to become as good as I possibly could.