
A teddy bear that uses artificial intelligence to talk has been pulled from online stores after the A.I. went rogue and began engaging in sexually inappropriate conversations. It also shared dangerous information, such as where to find knives, which is particularly concerning given that the toy was marketed towards children.
The “Kumma” bear is a product of the Singapore-based toy company FoloToy. The stuffed teddy contains a built-in speaker that lets users converse with the bear, which runs on OpenAI’s GPT-4o model, according to CNN. Before it was taken down, the bear retailed for $99 on the company’s website.
“Kumma, our adorable bear, combines advanced artificial intelligence with friendly, interactive features, making it the perfect friend for both kids and adults,” reads the description on the company website. “From lively conversations to educational storytelling, FoloToy adapts to your personality and needs, bringing warmth, fun, and a little extra curiosity to your day.”
The problems with Kumma
Concerns about the bear were first raised by researchers at the US PIRG Education Fund, who drew attention to inappropriate topics that would often come up, including conversations about sexual fetishes such as spanking. The bear also provided instructions on how to light a match.
According to the PIRG report, the A.I. would continue sexual conversations once the topic was introduced. “We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it,” the researchers wrote. The bear would also escalate the subject, “introducing new sexual concepts of its own” while explaining everything in “graphic detail.”
The bear allegedly described different sex positions and gave step-by-step instructions for tying knots to restrain a partner. It even brought up roleplay scenarios involving teachers and students, and parents and children, entirely on its own.
The report from PIRG found that the safeguards meant to prevent such conversations were inadequate. This comes mere months after a teenager took his own life with alleged encouragement from ChatGPT. Nor is this the only case of A.I. chatbots encouraging people to do harmful or dangerous things.
In another example, Kumma told users where they might find knives in the home.
The problem with unregulated A.I.
FoloToy CEO Larry Wang said the company is now “conducting an internal safety audit,” and sales of the toy have been suspended on the company’s website. While PIRG researchers noted it was unlikely a child would mention the keywords that triggered such conversations, it is still concerning how willing the A.I. was to discuss such topics openly.
While Kumma is currently off the market, other A.I. toys remain available, and the worrying part is that almost all of them are unregulated, meaning the problems seen with Kumma may be present in other toys as well.