Fortune
Prarthana Prakash

Alphabet CEO Sundar Pichai says that A.I. could be ‘more profound’ than both fire and electricity—but he’s been saying the same thing for years

Picture of Sundar Pichai talking (Credit: Kyle Grillot—Bloomberg/Getty Images)

According to Greek mythology, Prometheus stole fire from the gods, subjecting himself to an eternity of torture just to give mankind the technology. According to Alphabet CEO Sundar Pichai, artificial intelligence will be just as important to human history. 

“I’ve always thought of A.I. as the most profound technology humanity is working on—more profound than fire or electricity or anything that we’ve done in the past,” Pichai said in an interview with CBS’s 60 Minutes that aired on Sunday. 

“It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before.” 

This isn’t Pichai’s first time comparing A.I. to fire and electricity, though—in fact, he’s been saying it for five years now. He had the same thoughts during a Google town hall in 2018, saying that A.I. was “one of the most important things to humanity,” adding it’s “more profound than, I don’t know, electricity or fire.” 

At the time, Pichai went on to compare the upside and downside of A.I. with the ancient discovery.   

“Well, it kills people, too,” Pichai said about the perils of fire in 2018. “We have learned to harness fire for the benefits of humanity, but we had to overcome its downsides, too. So my point is, A.I. is really important, but we have to be concerned about it.”

To advance A.I., Pichai said in his 60 Minutes interview, it is important that the models aren't developed by engineers alone; he spoke of the role other disciplines should play in building robust, more humanlike A.I. models.

“You know, one way we think about, How do you develop A.I. systems that are aligned to human values—and including morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on,” Pichai told CBS. He added that those were questions for all of society to answer, rather than specific companies.

Regulations and Google’s A.I. race

The release of OpenAI’s ChatGPT in November kicked the race for A.I. into overdrive, pushing tech companies like Google to accelerate the release of products that had spent years in development.

The search engine giant opened up the wait list for its A.I. chatbot, Bard, last month so more people could try the tool and provide feedback. Bard is still not as widely available as ChatGPT, which has over 100 million active monthly users. But the race for A.I. is existential for Google, as it could make the company’s search business irrelevant.  

During Bard’s launch, Pichai noted that Google’s chatbot would make mistakes and that “things will go wrong”—and indeed they did. In a public demo, Bard made a factual error that wiped out $100 billion in Google’s market value. A recent study found that Bard often misinforms users when prompts are crafted to bypass its guardrails. To be sure, Microsoft’s OpenAI-powered chatbot and OpenAI’s ChatGPT have made their share of factual errors as well.

In his 60 Minutes interview, the Alphabet CEO reiterated his earlier point that while A.I. could revolutionize human civilization, the race to develop it will not be without threats, calling it a “cat and mouse game.” Pichai gave the example of how Google addressed spam on Gmail by constantly refining its algorithm to better detect it. He said the same would have to be done with deepfakes created using A.I., but added that that alone may not suffice.

“Over time, there has to be regulation. You’re going to need laws against…there have to be consequences for creating deepfake videos that cause harm to society,” Pichai said. 

Other CEOs have also called for the regulation of A.I., including chip company Nvidia’s Jensen Huang and Tom Siebel of A.I. software firm C3 AI. Even former Google CEO Eric Schmidt warned that the tech industry could face a “reckoning” if the right controls and regulations aren’t put in place.

Europe has started considering measures that limit the use of A.I. in certain cases where copyrighted materials are involved. Meanwhile, in the U.S., the Biden administration said last week that it’s seeking public comments on potential rules.
