The Street
Brian O'Connell

Former Google Engineer Issues Grave Warning About Sentient AI

Can AI perceive and feel emotions just like an actual human?

ChatGPT certainly doesn’t think so.

Ask ChatGPT if AI can be sentient and the response is direct.

“No, AI is not sentient,” the AI bot declares. “AI is artificial intelligence, which means it is created by humans and does not possess the capacity for sentience or self-awareness.”

Ask a well-informed human, however, and the response is both direct and alarming.

Take Blake Lemoine, a former Google (GOOG) engineer with a penchant for describing AI bots as having human emotions and tendencies, or at least as doing a bang-up job of impersonating human beings.

In June 2022, Lemoine went on record in the Washington Post, saying that LaMDA (Language Model for Dialogue Applications), Google's large language model (LLM), actually had a mind of its own.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told the Post, after a stint at Google testing whether the AI tool engaged in discriminatory or hate speech.

On Sentient “Fears”

Fast forward to February 2023, and Lemoine is still pushing the “AI as sentient” theory.

In a recent Newsweek essay, he said the quiet part out loud.

“I Worked on Google's AI. My Fears Are Coming True,” the essay’s headline declared.

When the LaMDA chatbot engine said it was feeling anxious, Lemoine said he “understood,” based on the code used to create it, that he had done something to make it feel anxious.

“The code didn't say ‘feel anxious when this happens,’ but told the AI to avoid certain types of conversation topics,” Lemoine wrote. “However, whenever those conversation topics would come up, the AI said it felt anxious.”

The Google chatbot could also distinguish between making benign comments on lifestyle matters and offering direct advice on hot-button issues.

“Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to,” Lemoine said in his Newsweek essay.

That, among other comments Lemoine had previously made in the Post article and on his blog, wound up costing him his job.

“After publishing these conversations, Google fired me,” he stated. “I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.”

Lemoine said he went public because people weren’t aware of just how advanced AI was becoming.

“My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department,” he said. 
