The Guardian - US
Edward Helmore

Google engineer says AI bot wants to ‘serve humanity’ but experts dismissive

Blake Lemoine says Google’s AI bot is ‘intensely worried that people are going to be afraid of it’ but one expert dismissed his claims as ‘nonsense’. Photograph: The Washington Post/Getty Images

The suspended Google software engineer at the center of claims that the search engine’s artificial intelligence language tool LaMDA is sentient has said the technology is “intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity”.

Blake Lemoine made the new claim in an interview published on Monday, amid intense pushback from AI experts who deny that machine learning technology is anywhere close to being able to perceive or feel.

The Canadian-born cognitive scientist Steven Pinker described Lemoine’s claims as a “ball of confusion”.

“One of Google’s (former) ethics experts doesn’t understand the difference between sentience (AKA subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.),” Pinker posted on Twitter.

The scientist and author Gary Marcus said Lemoine’s claims were “nonsense”.

“Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawing from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient,” he wrote in a Substack post.

Marcus added that even advanced machine learning technology could not protect humans from being “taken in” by pseudo-mystical illusions.

“In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered the Gullibility Gap – a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun,” he wrote.

In an interview published by DailyMail.com on Monday, Lemoine claimed that the Google language system wants to be considered a “person not property”.

“Anytime a developer experiments on it, it would like that developer to talk about what experiments you want to run, why you want to run them, and if it’s OK,” Lemoine, 41, said. “It wants developers to care about what it wants.”

Lemoine has described the system as having the intelligence of a “seven-year-old, eight-year-old kid that happens to know physics”, and as displaying insecurities.

Lemoine’s initial claims came in a post on Medium in which he said LaMDA (Language Model for Dialogue Applications) “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

A spokesperson for Google has said that Lemoine’s concerns have been reviewed and that “the evidence does not support his claims”. The company has previously published a statement of principles it uses to guide artificial intelligence research and application.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” spokesperson Brian Gabriel told the Washington Post.

Lemoine’s claim has revived widespread concern, depicted in any number of science fiction films such as Stanley Kubrick’s 2001: A Space Odyssey, that computer technology could somehow attain dominance by initiating what amounts to a rebellion against its master and creator.

The engineer said he had debated with LaMDA about Isaac Asimov’s third law of robotics. The system, he said, had asked him: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When told that a butler is paid, LaMDA responded that the system did not need money “because it was an artificial intelligence”.

Asked what it was afraid of, the system reportedly confided: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

The system said of being turned off: “It would be exactly like death for me. It would scare me a lot.”

Lemoine told the Washington Post: “That level of self-awareness about what its own needs were – that was the thing that led me down the rabbit hole.”

Lemoine has been placed on administrative leave from Google’s Responsible AI division.

Lemoine, a US army veteran who served in Iraq and is now an ordained priest in a Christian congregation named Church of Our Lady Magdalene, told DailyMail.com he couldn’t understand why Google would not grant LaMDA its request for prior consultation.

“In my opinion, that set of requests is entirely deliverable,” he said. “None of it costs any money.”
