Fortune
Peter Vanham

Effective altruism's fundamental flaw

(Credit: Jack Guez—AFP/Getty Images)

Hello, and welcome back to Impact Report.

In 2023, one approach to impact had an Icarus-like rise and fall: effective altruism. Can the movement regain ground in 2024?

Effective altruism, in the words of one of its most well-known proponents, Oxford philosopher William MacAskill, is about “doing good, better.” It is both an academic field and a social movement, which “aims to find the best ways to help others, and put them into practice,” according to the movement’s official website. It focuses on problems that are “important, neglected, and tractable,” and aims to solve them.

So far, it sounds like it fits well with what this newsletter is about, right? We too care about having a positive impact on some of the world’s most challenging problems.

But the downfall of FTX and the governance failures at OpenAI, two of effective altruism’s corporate champions, between November 2022 and November 2023 made it clear that there are fundamental differences between the way this movement sees impact and the way we do at Fortune's Impact Report. In fact, when it comes to the role companies can and should play, the EA movement and this newsletter are in some ways at odds.

Take first the case of FTX and Sam Bankman-Fried. The crypto-billionaire-turned-convict and most of his leadership team at FTX were “devoted effective altruists,” according to last week’s must-read Financial Times essay on the movement. “It was what drew them together,” author Michael Lewis told the FT’s Martin Sandbu. “It was extremely serious to them.”

From my perspective, that sounds incongruous. I believe in the positive impact companies can have on society if they pursue a purpose greater than profits and transform their business models accordingly. Given the way FTX’s story ended, with the massive defrauding of customers, the company failed both of those litmus tests.

The FTX narrative demonstrates the first great shortcoming of effective altruism: its focus on how (business) people spend their money and time, not on how companies make their money. Isaac Getz, a professor (and my thesis advisor) at ESCP Business School and the author of an upcoming book on the concept of the altruistic enterprise, pointed out this flaw to me.

“Effective altruism is a step forward over pure philanthropy, but fundamentally it is the same approach,” he told me over the phone. “It is still ‘enlightened capitalists,’ just with a master’s in science. Instead of giving away compulsively or arbitrarily, they calculate the return on investment of their donations. But for the companies that have generated their funds, it is business as usual. Nothing changes about how they are operating and the potential damage to society they may cause.”

Which brings us to the second great shortcoming of the philosophy: Because it depends on individual contributions rather than market forces, EA often loses out in the end, even among its adherents. (And it certainly has those: EA “is quickly becoming the ideology of choice for Silicon Valley billionaires,” the FT wrote.) That is brought home by the case of effective altruism at OpenAI. At least two of the nonprofit board members who ousted, and then reinstated, Sam Altman had ties to EA, my colleague Jeff John Roberts reported in November.

But Altman and his OpenAI cofounder, Ilya Sutskever, also paid lip service to EA principles, leading to what could be described as an EA vs. EA showdown at OpenAI.

The OpenAI board opposed Altman in part because it saw him as too willing to pursue the riskier aspects of generative AI in pursuit of profit. That stance was in line with official effective altruism positions: The movement’s researchers put the chance of a “major AI disaster” in the next century at 4%, placing it in the same league as a nuclear disaster, and therefore among the most important risks to avoid.

Yet despite these concerns and their nominal power on the OpenAI board, the explicit effective altruists in the organization lost out to those focused more on the immediate business opportunities. This dog-eat-dog outcome is not surprising, Getz told me. “The moral of this story is: Let’s first make money, and then we’ll see how to best use it. It’s effective altruism in action.”

That brings us full circle. The fundamental flaw of EA is that it focuses only on how to spend money, not on how to earn it. “They found the magical formula for how to best spend donations. All you need to do is sign a check,” Getz concluded. “Yet it’s the companies that may pose the biggest risks to society, OpenAI included. Transforming them into a force for good is much harder.”

This isn't how I see the world; in fact, it's the exact opposite. What matters most for a company, I believe, is how it makes its money. How it spends that money is its shareholders’ prerogative. But it's the revenue model that holds the key to any contribution to a sustainable future. Do you agree?

More news below.

Peter Vanham
Executive Editor, Fortune
peter.vanham@fortune.com

This edition of Impact Report was edited by Holly Ojalvo.
