EconLog

FEATURED POST

Jan 10 2022

effective altruism, utilitarianism

The Good Group

By: Bryan Caplan

As a professor and public speaker, I’ve spoken to a wide range of student groups. On reflection, my very favorite turns out to be: Effective Altruism. Indeed, I’ve had positive experiences with 100% of the EA groups I’ve encountered.

What’s so great about the Effective Altruists? They combine high knowledge, high curiosity, and high iconoclasm. When I ask EAs if they’ve heard of signaling, or the Non-Identity Problem, or pollution taxes, most of them say Yes. The ones who say No are eager to get up to speed. And if I defend a view that would shock a normal audience, EAs are more likely to be amused than defensive or hostile. They’re genuinely open to reasoned argument.

Though you might expect EAs to be self-righteous, they’re not. EA is a chill movement. While ethical vegans are greatly overrepresented in EA, they’re the kind of ethical vegans who seek dialogue on the ethical treatment of animals, not the kind of ethical vegans who seek to bite your head off.

Most EAs are official utilitarians. If they were consistent, they’d be Singerian robots who spent every surplus minute helping strangers. But fortunately for me, these self-styled utilitarians severely bend their own rules. In practice, the typical EA is roughly 20% philanthropist, 80% armchair intellectual. They care enough to try to make the world a better place, but EA clubs are basically debating societies. Debating societies plus volleyball. That’s utilitarianism I can live with.

Why do I prefer EA to, say, libertarian student clubs? First and foremost, libertarian student clubs don’t attract enough members. Since their numbers are small, it’s simply hard to get a vibrant discussion going. EA has much broader appeal. Anyone who likes the idea of "figuring out how to do the most good" fits in. Furthermore, to be blunt, EAs are friendlier than libertarians – and as I keep saying, friendliness works.

Moreover, while the best libertarian students hold their own against the best EA students, medians tell a different story. The median EA student, like the median libertarian student, like almost any young intellectual, needs more curiosity and less dogmatism. But the median EA’s curiosity deficit and dogmatism surplus are less severe.

The good news is that most EA clubs already contain some libertarians. And the best way to improve both movements is for the libertarians to regularly attend EA meetings. It’s a great chance to spread superior libertarian logos while absorbing superior EA ethos.

When I last spoke at the University of Chicago, one student defended education as a crucial promoter of social justice. In response, I argued that Effective Altruism is what the social justice movement ought to be. EAs know that before you can make the world a better place, you must first figure out how to make the world a better place. This in turn requires you to prioritize the world’s problems – and calmly assess how much human action can remedy each of them. Social justice activists imagine that these questions are easy – and as a result their movement has become one of the world’s major problems. Probably like the twentieth-worst problem on Earth, but still.

Perhaps the main reason why I get along so well with EAs is that their whole movement rests on a bunch of my favorite heresies. First and foremost: Good intentions often lead to bad results. EA exists because many good things sound bad, and many bad things sound good. The very existence of their movement is an attack on Social Desirability Bias and demagoguery. Furthermore, since EAs like to rank social problems by their severity and remediability, their movement is also a thinly-veiled attack on Action Bias and social stampedes. No, we shouldn’t do "all that we can" to fight Covid, or global warming, or anything, because resources are scarce, some problems fix themselves, many problems aren’t worth solving, and many cures are worse than the disease. Once you take these truisms for granted, fruitful conversation is easy. And fun.


READER COMMENTS

David Henderson
Jan 10 2022 at 11:19am

"not the kind of ethical vegans who seek to bite your head off."

That’s good. Especially since they’re vegans.

Art K
Jan 10 2022 at 11:46am

This needs to be shouted from the rooftops. More people need to know about EA.

Philo
Jan 10 2022 at 12:49pm

"If they were consistent, they’d be Singerian robots who spent every surplus minute helping strangers." No, you are ignoring the knowledge problem and the capability problem. A utilitarian should expend vastly more effort on serving his own interests than on serving anyone else’s: he knows vastly more about his own situation, and about the means by which it might be improved, than he does about the circumstances of a random other person. And even if he knew as much about another person’s situation (which is impossible in practice), he would have severe difficulty in effectively bettering it, since the average other person is thousands of miles away from him (whereas, obviously, there is no spatial gap with himself).

Yet another consideration against bothering much about distant others is that people are naturally inclined to serve their own interests, while acting on behalf of distant others involves mental strain.

Intelligent utilitarians would behave differently from egoists, but the behavioral difference would not be great, and would chiefly affect their conduct towards family, friends, and neighbors. They would be far from "Singerian robots."

KevinDC
Jan 10 2022 at 2:10pm

I’ve read of a split between two camps of utilitarians that seems relevant here – one group called "actualists," the other called "possibilists." The dispute is, in essence: should utilitarians take the most utility-maximizing action they could possibly take, or the most utility-maximizing action they would actually take, given the foibles of human nature? Possibilists are more idealistic, whereas actualists think utilitarianism needs to be tethered to facts about how people actually behave. If you make massive demands of people of the sort Bryan describes, most people respond by doing nothing at all. But if the requirements are milder – something like donating 10% of your income to effective charities – people are far more likely to participate. So the actualist would say that actualism is more utility-maximizing than possibilism.

I recall one utilitarian philosopher (I’m blanking on who) describing the difference with something like the following thought experiment:

You’re trying to decide what to do with your night. The best option is to stay home and study for the GRE, since you’ve been considering going to grad school. The second-best option is to spend the evening at the pub with your friends, having fun and building social bonds. The least good option is to sit around on your couch bingeing Netflix. However, you also know that you’re terrible at studying: every single time you’ve decided to crack open the books, you’ve gotten bored and quickly ended up watching Netflix anyway.

In this situation, when deciding whether to stay home or spend time with your friends, the actualist would say to see your friends, while the possibilist would say to stay home. For the same reason, the actualist utilitarian would say "Donate 10% of your income to a GiveWell charity," while the possibilist would advocate something like what Bryan describes. Bryan’s charge of inconsistency might hold water against possibilists, but it’s pretty weak against actualists.

Aaron Stewart
Jan 10 2022 at 1:48pm

"EA exists because many good things sound bad, and many bad things sound good."

I’m curious whether there’s any reasonable model of societal progress where this is not the asymptotic state of affairs. At any moment in time, there is some pool of potential changes we could make to how we do things. Some sound good and some sound bad. We can reason about whether any potential change is good or bad, but we can’t be sure until we’ve tried it. Social desirability bias leads us to try the "good-sounding" changes first, leaving a larger proportion of "things that sound bad" over time.

Obviously this is an over-simplification, but I don’t see how this wouldn’t happen, so long as the system for deciding what changes to make is complicated enough to suffer from social desirability bias in the first place.

Yes, I understand that you usually leave your keys on the counter or on your desk. But how many times are you going to check the counter before you look in places you don’t expect to find them?
