Chicago Tribune
Joshua Pederson

Commentary: Somebody needs to save effective altruism from itself

Round up for charity?

In the last year or so, I’ve come across these words more and more often, frequently from cashiers or on payment keypads at stores. Would I like to give a little bit of money to cancer research or literacy efforts or the local animal shelter? My answer is forceful, and it’s the same every time.

No.

It’s not because I’m stingy, and it’s not because I don’t care about cancer patients, young readers or puppies. It’s because I’m what some call an effective altruist. Effective altruism, or EA, is a fancy name for a really simple idea: Those who can should give lots, and they should be very thoughtful when they do so.

Or at least I used to be an effective altruist. Now, I’m not sure, because some of the movement’s leaders are taking EA in distressing new directions and pushing work that threatens to undermine the very real good they have done in the past.

I learned about EA a few years ago when I read Peter Singer’s “The Most Good You Can Do.” In it, Singer urges us all to be more charitable — but to do so in ways that make our donations go as far as possible. One of his most memorable examples involves the Make-A-Wish Foundation’s decision to turn a 5-year-old leukemia patient into a superhero for a day. The stunt was heartwarming and headline-grabbing, but it cost thousands of dollars. Wouldn’t it be better, Singer asks, if that money went to an organization like the Malaria Consortium, which, for less than $10 per child, protects kids from a disease that kills at least 400,000 children a year?

Singer and others like him believe that there are quite a few organizations like the Malaria Consortium, charities that we can confidently say cheaply alleviate a lot of needless suffering and death. (A group called GiveWell provides an excellent list.) For effective altruists, supporting them is a no-brainer.

I’ve been teaching Singer for a half decade now in my ethics courses, and after reading him, many of my students totally buy in. I recently spoke to one who just finished a summer internship at a tony New York investment bank; she gave a substantial part of her salary to effective charities and plans to continue doing so as she climbs the corporate ladder. I admire her so much.

And yet in recent months, some of the movement’s key figures have been making new claims about EA — claims I fear will turn off more people than they attract. Chief among those figures is William MacAskill, a University of Oxford philosopher who sometimes describes himself as a co-founder of the effective altruism movement. MacAskill, who was recently the subject of a sprawling, ambivalent New Yorker profile, now considers himself a “longtermist.” Longtermists believe that we have a responsibility not only to people who are alive right now but also to all those humans who do not yet exist — the billions (or trillions?) who may one day be born.

So what, then, does that responsibility entail? Usually, it boils down to trying to avert “existential risks,” threats that may wipe out most or all humans. Some of the risks MacAskill addresses in his new book, “What We Owe the Future,” are urgent and widely agreed upon: climate change and future pandemics. Others, such as an artificial intelligence takeover, are more speculative.

This is all intellectually provocative, and it demands scholarly and governmental attention. But what it’s not is effective. The great benefit of EA is its promise that you can do real good — and make real change — immediately and confidently. You can cure the blind right now. You can prevent disease right now. You can save lives right now. There is suffering, and you can stop it. I know these pleas work because I’ve seen them work with my students — and with myself.

We can’t say the same thing about longtermism. As MacAskill himself admits, experts often can’t agree on the causes of existential risk — much less on how to fight it. This is especially true of artificial intelligence. Experts polled on the probability of a near-term AI catastrophe gave answers ranging from 0.1% to 95%, and descriptions of the likely origin of that catastrophe vary widely. If the smartest people in the world can’t identify the source of a threat, how are rank-and-file donors supposed to give money to stop it?

MacAskill acknowledges this and says that we should be educating AI experts. OK. But while we do so, hundreds of thousands of people die from malaria every year. Simply put, MacAskill’s not wrong, per se, but he does seem to have his priorities really screwed up. And he’s bringing others along with him, lobbying the United Nations and trying to bring tech billionaires and crypto bros under his sway. (That the famously unaltruistic Elon Musk is on board is particularly troubling.)

So I would urge MacAskill to correct course and refocus his attention on the pain that can be eliminated here and now. If he doesn’t, I may have to found a new movement based on his old ideas. Maybe I’ll call it ineffective altruism.

____

ABOUT THE WRITER

Joshua Pederson is an associate professor of humanities at Boston University and the author of “Sin Sick: Moral Injury in War and Literature.” Find him on Twitter @joshua_pederson.
