The Independent UK
Technology
Anthony Cuthbertson

Mattel adding ChatGPT to toys would cause ‘real damage’ to children, experts warn

A Mattel Barbie pictured at the 'Barbie clinic' of collector Bettina Dorfmann in Duesseldorf, Germany, on 25 July 2023 (AFP/Getty)

Plans by toy giant Mattel to add ChatGPT to its products could risk “inflicting real damage on children”, according to child welfare experts.

The Barbie maker announced a collaboration with OpenAI last week, hinting that future toys would incorporate the popular AI chatbot.

The announcement did not contain specific details about how the technology would be used, but Mattel said the partnership would “support AI-powered products and experiences” and “bring the magic of AI to age-appropriate play experiences”.

In response, child welfare advocates have called on Mattel not to place the experimental tech into children’s toys.

“Children do not have the cognitive capacity to distinguish fully between reality and play,” Robert Weissman, co-president of the advocacy group Public Citizen, said in a statement.

“Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children.”

Potential harms cited by Mr Weissman include undermining social development and interfering with children’s ability to form peer relationships.

“Mattel should announce immediately that it will not incorporate AI technology into children’s toys,” he said. “Mattel should not leverage its trust with parents to conduct a reckless social experiment on our children by selling toys that incorporate AI.”

The Independent has reached out to OpenAI and Mattel for comment.

Mattel and OpenAI’s collaboration comes amid growing concerns about the impact generative artificial intelligence systems like ChatGPT have on users’ mental health.

Chatbots have been known to spread misinformation, confirm conspiracy theories, and even allegedly encourage users to kill themselves.

Psychiatrists have also warned of a phenomenon called “chatbot psychosis” brought about through interactions with AI.

“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end,” Søren Dinesen Østergaard, a professor of psychiatry at Aarhus University in Denmark, wrote in an editorial for the Schizophrenia Bulletin.

“In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis.”

OpenAI boss Sam Altman said this week that his company was making efforts to introduce safety measures to protect vulnerable users, such as cutting off conversations that involve conspiracy theories, or directing people to professional services if topics like suicide come up.

“We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,” Mr Altman told the Hard Fork podcast.

“However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”
