The Street
Ian Krietzberg

OpenAI CEO Sam Altman's own AI tech is making him nervous

One of the strongest criticisms of artificial intelligence revolves around a simple question: if these models could actually be as dangerous as their creators are saying, why make them at all? 

This line of criticism, echoed in May by several students protesting one of OpenAI CEO Sam Altman's London presentations, applies to more than just the creation of a superintelligent AI, something OpenAI is intent on building. It also encompasses the many harms this technology is already causing and the harms it is poised to enact.

DON'T MISS: US Expert Warns of One Overlooked AI Risk

And, true to form, Altman himself is feeling "nervous" about the impact his technology will have. 

"I am nervous about the impact AI is going to have on future elections (at least until everyone gets used to it)," he said, noting some of the ways in which AI can cause election interference. "Personalized 1:1 persuasion, combined with high-quality generated media, is going to be a powerful force."

But despite his anxiety around this technology, neither he nor OpenAI seems intent on doing anything to address it.

"Although not a complete solution, raising awareness of it is better than nothing," he added. "We are curious to hear ideas, and will have some events soon to discuss more."

Many responded to Altman's apparent fears with a simple suggestion, one that mirrors some responses to his concerns about a superintelligent AGI: if it really is going to be as bad as you say, shut it down.


"The arguments against this being the case are plentiful. But if not then the easiest thing to do is shut down the company if the nervousness is so great," Steven Sinofsky, formerly the president of Microsoft's Windows division, tweeted. "The idea of both aggressively acting like a helpless arsonist/firefighter while also profiting is just super weird."

AI expert Margaret Mitchell, Hugging Face's chief ethics scientist, said the onus ought to be on the AI companies to simply "identify when AI-generated content is being shared."

OpenAI has not yet shared any plans it might have for protecting the coming election from AI interference, something AI expert Gary Marcus thinks is the most significant issue posed by these models. 

"The biggest immediate threat from AI is the threat to democracy from the wholesale production of compelling misinformation," he said

Altman, who in May signed a statement warning of the "extinction" risk posed by AI, said during a Senate hearing that same month that "if this technology goes wrong, it can go quite wrong."

