Fortune
Jeremy Kahn

Why the collapse of Sam Bankman-Fried's FTX has split A.I. researchers

Photo of Sam Bankman-Fried (Credit: Lam Yik—Bloomberg via Getty Images)

The biggest story in A.I. this week was the collapse of Sam Bankman-Fried’s cryptocurrency exchange FTX and his related trading firm Alameda Research. What does that have to do with A.I.? The answer is that Bankman-Fried was a major donor both to projects working on creating superpowerful A.I. and to those working on what’s known as “A.I. Safety.” And, for reasons I’ll get to shortly, the collapse of his empire could imperil important, cutting-edge research devoted to understanding the potential dangers of A.I.

First, we need to clear up some terminology, starting with “A.I. Safety,” which sounds like a completely neutral, uncontroversial term. Who wouldn’t want safe A.I. software? And you might think that the definition of A.I. “safety” would include A.I. that isn’t racist or sexist and isn’t used to abet genocide. All of which, by the way, are actual, documented concerns about today’s existing A.I. software.

Yet none of those concerns are what A.I. researchers generally mean when they talk about “A.I. Safety.” Instead, those things fall into the camp of “A.I. Ethics” (or “responsible A.I.”). A.I. Safety is something else entirely. The term usually refers to efforts to prevent superpowerful future A.I. from destroying humanity. Sometimes this is also known as “the alignment problem.”

(I personally think this split between Safety and Ethics is unfortunate and not very helpful to anyone who cares about software that is actually safe, in the most commonplace understanding of that word. Ideally, we want A.I. software that won’t result in Black people being wrongly arrested or imprisoned, won’t crash our autopiloted cars into overturned trucks, and also won’t decide to kill all humans on the planet. I don’t really see any of those things as a nice-to-have. What’s more, research into how to prevent A.I. from being racist ought to be at least somewhat useful in preventing A.I. from killing all of us—both are about getting A.I. systems to do the stuff we want them to do and not do the stuff we don’t want them to do. But no one asked me. And the schism between A.I. Safety and A.I. Ethics is increasingly entrenched and bitter. It has also gotten very caught up in racial and gender politics: most prominent researchers in A.I. Safety are white and male. Meanwhile, the A.I. Ethics community probably counts more people of color and women among its ranks than other areas of A.I. research.)

Bankman-Fried led the $580 million Series B venture capital round for Anthropic, a research lab formed mostly from a group that broke away from OpenAI, which is interested both in building powerful A.I. models and in figuring out how to prevent them from running amok. His FTX Future Fund, a philanthropic institution he established earlier this year, had already pledged $160 million to causes that included a lot of research into A.I. Safety, including prizes at prestigious machine learning conferences such as NeurIPS for teams that developed systems to spot dangerous emergent behaviors in neural networks or ways neural networks could be tricked by malicious humans into causing harm. The Future Fund had promised to give away as much as $1 billion per year in the future, with much of that going to similar endeavors.

On Friday, the Future Fund’s entire board of directors and advisers resigned, saying they had “fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund.” In their resignation statement, the advisers said they believed the fund would not be able to honor most of its current grants.

This is a big deal for A.I. not just because of the money lost for A.I. Safety research. It’s a big deal because the scandal at FTX has brought a lot of negative attention to Effective Altruism, the philosophical and social movement to which Bankman-Fried subscribed and which he said motivated his philanthropy. It turns out that a fair number of researchers at cutting-edge A.I. labs—such as OpenAI, Anthropic, DeepMind, and MIRI (the Machine Intelligence Research Institute)—and a good number of the tech billionaires funding those labs, are also believers in Effective Altruism, or at least share the movement’s belief that A.I. is one of the most consequential technologies mankind has ever developed: one that will either usher in a techno-utopia, or end in humanity’s extinction.

Bankman-Fried attributed his decision to get a job in finance, and later to get into cryptocurrency arbitrage, to an Effective Altruist doctrine known as “earning to give.” The idea was to encourage young people to pursue jobs in high-paying sectors so that they would have more money to give away to charity. Proponents calculated that this money, properly deployed, would do more good than any direct impact an individual might have as a social worker, a teacher, or a doctor.

Effective Altruism (known to its followers as EA) is dedicated to using rationalist principles in an attempt to maximize the benefit that people’s lives have for the rest of humanity. (EA is also a “community”—some critics would say cult—and people in the movement refer to themselves as “EAs,” using the term as a noun.) Will MacAskill, the philosopher who co-founded EA, and Toby Ord, another philosopher closely associated with the movement, have in recent years pushed it towards considering the lives of future humans as equally important as, if not more important than, the lives of those currently on the planet. The idea is that since there are likely to be many more humans in the future than there are now, the greatest good any EA could ever do is to save the entire species from an extinction-level event.

As a result, EA has increasingly encouraged people to look into ways to address “existential risks,” including pandemics, bioweapons, nuclear war, Earth-smashing asteroids, and, yes, powerful A.I. that is not “aligned” with humanity. (Controversially, EA has prioritized these issues above climate change, which it considers an important challenge, but not one likely to wipe out all of humanity. And since EAs use a “utility maximizing” logic in deciding where to put their money, they have tended to direct it towards existential risks above all else.) At the same time, the movement sees superpowerful A.I. (often referred to as artificial general intelligence, or “AGI”) as a potentially critical enabling technology for solving many of the other pressing problems facing the world. This belief explains why Bankman-Fried’s FTX Future Fund was focused on A.I. Safety. (It is also no coincidence that the Future Fund’s CEO and its advisory board were all prominent in EA.)

Now, some are wondering whether Bankman-Fried also used EA’s utilitarian philosophy to rationalize business practices that were at best unethical and possibly illegal. MacAskill has been at pains to rebut any such suggestions. “If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community,” MacAskill wrote on Twitter. He cited passages from his own writing and that of Ord in which they urged those interested in the movement not to rationalize causing near-term harm for the purpose of doing longer-term good, although they did so in part on utilitarian grounds—that the negative publicity associated with the near-term harm would ultimately do more damage to the larger cause.

EA has motivated many of its adherents, who include numerous students at top universities in the U.S., the U.K., and Europe, to work on both developing AGI and researching A.I. Safety. Perhaps as importantly, it has provided intellectual support to a number of technology investors who, while not necessarily EAs themselves, share many of its beliefs about AGI. These investors include Elon Musk, who co-founded OpenAI partly for these reasons; Sam Altman, the Silicon Valley bigwig who also co-founded OpenAI and currently serves as its CEO; Jaan Tallinn, an Estonian billionaire who made his initial fortune with Skype and has devoted much of his philanthropy to existential risk; and Dustin Moskovitz, the Facebook co-founder and billionaire, who is an EA adherent. Moskovitz’s family foundation, Open Philanthropy, has also given substantial grants to A.I. Safety research.

If EA now falls into disrepute, or collapses entirely, as some critics have suggested it will and as some of its adherents fear it might, A.I. Safety research may well suffer with it. Not only could there be a lot less money to spend on this important topic, but disillusionment with EA could dissuade talented students from entering the field.

To critics of A.I. Safety, who include many in the A.I. Ethics field, that is just fine. They would rather have money and talent focused on the threats posed today by existing A.I. systems than see resources lavished on hypothetical threats from a technology that doesn’t yet exist. I think these A.I. Ethics folks have a point—but only to a point. I am skeptical that AGI of the kind that could imperil civilization is imminent, and so I would hate to see money spent on A.I. Safety to the exclusion of A.I. Ethics. But again, I am not sure why these two fields have been set in opposition to one another. And I would hate to be wrong about AGI and wake up in a world where AGI did exist and no one had devoted any resources to thinking about how to avoid catastrophe.

And now here’s the rest of this week’s A.I. news.


Jeremy Kahn
@jeremyakahn

***
There's still time to apply to attend the world's best A.I. conference for business! Yes, Fortune's Brainstorm A.I. conference is taking place in person in San Francisco on December 5th and 6th. Hear from top executives and A.I. experts from Apple, Microsoft, Google, Meta, Walmart, Land O'Lakes, and more about how you can use A.I. to supercharge your company's business. We'll examine the opportunities and the challenges—including how to govern A.I. effectively and use it responsibly. Register here today. (And if you use the code EOAI in the additional comments field of the application, you'll receive a special 20% discount just for Eye on A.I. readers!)
