Fortune
Steve Mollman

Elon Musk warns ‘something scared’ OpenAI chief scientist Ilya Sutskever as CEO Sam Altman’s return fails to answer key questions

(Credit: Kirsty Wigglesworth - WPA Pool/Getty Images)

Elon Musk played a big role in persuading Ilya Sutskever to join OpenAI as chief scientist in 2015. Now the Tesla CEO wants to know what he saw there that scared him so much.

Sutskever, whom Musk recently described as a “good human” with a “good heart”—and the “linchpin for OpenAI being successful”—served on the OpenAI board that fired CEO Sam Altman two Fridays ago; indeed, Sutskever informed Altman of his dismissal. Since then, however, the board has been revamped and Altman reinstated, with investors led by Microsoft pushing for the changes.

Sutskever himself backtracked on Monday, writing on X, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI.” 

But Musk and other tech elites—including ones who mocked the board for firing Altman—are still curious about what Sutskever saw. 

Late on Thursday, venture capitalist Marc Andreessen, who has ridiculed “doomers” who fear AI’s threat to humanity, posted to X, “Seriously though — what did Ilya see?” Musk replied a few hours later, “Yeah! Something scared Ilya enough to want to fire Sam. What was it?”

That remains a mystery. The board gave only vague reasons for firing Altman, and not much has been revealed since.

'Such drastic action'

OpenAI’s mission is to develop artificial general intelligence (AGI) and ensure it “benefits all of humanity.” AGI refers to a system that can match humans when faced with an unfamiliar task. 

OpenAI’s unusual corporate structure placed a nonprofit board above the capped-profit company, allowing the board to fire the CEO if, for instance, it felt the commercialization of potentially dangerous AI capabilities was moving at an unsafe speed.

Early on Thursday, Reuters reported that several OpenAI researchers had warned the board in a letter of a new AI that could threaten humanity. After being contacted by Reuters, OpenAI acknowledged in an internal email a project called Q* (pronounced Q-Star), which some staffers felt might be a breakthrough in the company’s AGI quest. Q* reportedly can ace basic mathematical tests, suggesting an ability to reason, as opposed to ChatGPT’s more predictive behavior.

Musk has long warned of the potential dangers artificial intelligence poses to humanity, though he also sees its upsides and now offers a ChatGPT rival called Grok through his startup xAI. He cofounded OpenAI in 2015 and helped lure key talent including Sutskever, but he left a few years later on a sour note. He later complained that the onetime nonprofit—which he had hoped would serve as a counterweight to Google’s AI dominance—had instead become a “closed source, maximum-profit company effectively controlled by Microsoft.”

Last weekend, he weighed in on the OpenAI board’s decision to fire Altman, writing: “Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action.” 

When an X user suggested there might be a “bombshell variable” unknown to the public, Musk replied, “Exactly.”

After backtracking on Monday, Sutskever responded to Altman’s return on Wednesday by writing, “There exists no sentence in any language that conveys how happy I am.”
