Fortune
Prarthana Prakash

Even OpenAI CEO Sam Altman thinks people are going a little too crazy over A.I.: ‘It's wildly overhyped in the short-term’

Sam Altman (Credit: Win McNamee—Getty Images)

OpenAI’s tools, like the image generator DALL-E and the viral chatbot ChatGPT, have become the center of a growing conversation about what artificial intelligence can achieve. In just two months following its November 2022 launch, ChatGPT hit 100 million monthly active users, and A.I. references skyrocketed 77% during last year’s fourth-quarter earnings calls, as reported in March. Suffice it to say, people can’t stop talking about A.I.

A.I.'s current power goes beyond just its ability to excite the public's imagination. The technology has also caused big market movements, led to thousands of white-collar job losses, and sent Nvidia close to a $1 trillion valuation. It has people calling it an existential threat one moment and a savior the next.

But OpenAI CEO Sam Altman, who co-founded the company in 2015 when it was a non-profit, thinks that the frenzy surrounding A.I. is too much.   

"It's wildly overhyped in the short term," Altman said Thursday, referring to A.I., at an event held by Indian newspaper the Economic Times. "There's crazy stuff happening in Silicon Valley right now."

Altman's labeling of the current A.I. boom as overhyped adds to a groundswell of voices calling for a reconsideration of the technology amid the whirlwind of claims being made about what it can do. Hedge fund veteran Ken Griffin, for one, recently labeled A.I.'s hype dangerous.

“I do think the A.I. community is making a terrible mistake by being full of hype on the near-term implications of generative A.I.,” Griffin, founder of hedge fund Citadel, said on Tuesday. “I think they’re actually doing everybody a huge disservice with the level of hype they are creating.”

Overhyped...but undervalued

For Altman, the issue is not that A.I. is less powerful than people imagine, but that the public is focusing on short-term novelty instead of long-term potential. In fact, he thinks A.I. is likely undervalued in the long term because its full potential remains unknown.  

“If we really do make the progress that we think we’re going to make and we have this magical system that can just do anything you ask, no one knows how to think about that, no one knows how to value that, but whatever they're thinking is probably too low," Altman added. 

Despite the tug-of-war over what A.I. really is, Altman argued during the interview that A.I. is like any other disruptive technology, spurring sudden change at a rapid pace. But he said those changes will also do a lot of societal good along the way.

“I think with other job impacts, it’s just going to be surprising. But I think the world will get way wealthier, we’ll have a productivity boom and we will find a lot of new things to do,” Altman said.

OpenAI leaders point to jobs in fields like health and education as areas in which A.I. could potentially have a big impact. 

“Maybe the problem is we don’t have nearly enough people to do all the jobs we want to. We are in this massive crunch and if you can make way more ‘job-doing ability’ available, the world would consume a hundred times, a thousand times more,” Altman said. “I think we may really see that.” 

Regulation

But while Altman is at the forefront of a tech revolution that has been likened to inventions like the personal computer and has been declared “more profound” than electricity and fire, he has also been part of a growing call for regulations to govern the development of this powerful tech.

“I think this is a special moment where the globe can come together and get this right, and certainly we’d like to try to do that,” he said.

Just a few weeks ago, Altman testified before a Senate subcommittee and urged greater regulation of A.I.

Specifically, he called for measures that were flexible and adaptable, while creating the required safeguards. And late last month, he was among the many signatories of a letter that said the tech posed a “risk of extinction” for humanity akin to pandemics and nuclear warfare and that mitigating that risk should be a global priority.

That wasn’t the first time industry experts had called for firm regulatory action on A.I. In March, a group of academics, leaders, and tech executives called for a six-month pause on A.I. development to allow governments to create robust regulations that can keep up with the technology's rapid pace of development.
