The Street
Ian Krietzberg

Elon Musk Shares His Unusual Vision For a Safer Form of AI

Elon Musk has long been a prominent voice in the AI world. But on July 12, he officially entered the sector, launching his new AI startup, xAI.

Musk has often discussed the importance of AI safety; several months ago, he added his weighty signature to an open letter calling for a six-month moratorium on the development of more powerful AI systems.


Just days after announcing the launch, Musk broke down the company's goals, as well as his views on AI safety, in a Twitter Spaces event on July 14.

"The goal is to build a good AGI with the overarching purpose of just trying to understand the universe," Musk said. "I think the safest way to build an AI is to make one that is curious and truth-speaking."

The term 'AGI' refers to artificial general intelligence: an AI system whose intelligence equals or exceeds that of humans.

"My theory behind a maximally curious, maximally truthful AI as being the safest approach is, I think to a superintelligence, humanity is much more interesting than not humanity," Musk said. To Musk, despite his interest in space, humans are the thing that makes Earth interesting. And if an AI system is designed to comprehend that humanity is the most interesting thing out there, it won't try to destroy it. 


"That kind of approach to growing an AI, and I think that is the right word for it, growing an AI, is to grow it with max ambition," Musk said, adding that he has been concerned about AI safety and regulation for a long time. He said that it should not be up to companies to act as they please when it comes to AI.  

"My view on safety is try to make it maximally truth-seeking, maximally curious," Musk said, turning to a video game reference from the Super Mario Bros. series to make his point. "I think this is important to avoid the inverse-morality problem. The Waluigi problem. If you make Luigi, you risk making Waluigi at the same time. That's what we're going to try to do here."

Musk said at a separate Twitter Spaces event earlier in the week that he expects AGI to be achieved in the next five to seven years.

xAI's team, which features talent from Google, Microsoft, OpenAI and Tesla, will be led by Musk and advised by Dan Hendrycks, the director of the Center for AI Safety.

The Center for AI Safety released a controversial statement in May that said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

OpenAI CEO Sam Altman, Microsoft co-founder Bill Gates and xAI co-founder Igor Babuschkin were among the statement's prominent signatories.

Musk, who co-founded OpenAI in 2015, first mentioned the idea of launching a rival AI company in April. He stepped down from OpenAI's board in 2018 and has since been critical of the company's relationship with Microsoft.

"It does seem weird that something can be a non-profit, open source and somehow transform itself into a for-profit, closed source," he said of OpenAI in a May interview with CNBC. "This would be like, let's say you found an organization to save the Amazon rainforest, and instead they become a lumber company, and chop down the forest, and sold it for money."
