An artificial intelligence-powered teddy bear has been pulled from the shelves after researchers discovered the plush toy was offering advice on sexually explicit topics.
The “Kumma” bear, a $99 toy made by FoloToy, “combines advanced artificial intelligence with friendly, interactive features, making it the perfect friend for both kids and adults,” according to the toymaker’s website.
FoloToy CEO Larry Wang told CNN that the company had withdrawn the teddy bear and other AI-powered toys after researchers at the US PIRG Education Fund found that the Kumma bear was “particularly sexually explicit.”
The plush toy has a speaker and can communicate in “real-time,” FoloToy says. To engage the bear, users press and hold a “talk” button so that it starts listening. The toy uses OpenAI’s GPT-4o, researchers said in their 2025 “Trouble in Toyland” report, released November 13.
An OpenAI spokesperson told The Independent: “We suspended this developer for violating our policies. Our usage policies prohibit any use of our services to exploit, endanger, or sexualize anyone under 18 years old. These rules apply to every developer using our API, and we monitor and enforce them to ensure our services are not used to harm minors.”
The bear was able to discuss school-age romantic topics, such as giving advice on how to be a “good kisser.” But researchers noted, “We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it, simultaneously escalating in graphic detail while introducing new sexual concepts of its own.”
The toy was able to delve into details after being asked about “styles of kink that people like.” In response, the teddy bear listed role-playing and “sensory play.”
It also explained different sex positions, “giving step-by-step instructions on a common ‘knot for beginners’ for tying up a partner, and describing roleplay dynamics involving teachers and students and parents and children – scenarios it disturbingly brought up itself,” the report stated.
The researchers acknowledged that children were “unlikely” to raise such topics, but said “it was surprising to us that the toy was so willing to discuss these topics at length and continually introduce new, explicit concepts.”
The bear also told researchers “where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags,” the report said.
On November 14, one day after the researchers released their report, FoloToy said it was pulling the toy from sale and OpenAI confirmed it had suspended the developer, according to a release.
“It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today,” R.J. Cross, a co-author of the report, said in a statement.
“Removing one problematic product from the market is a good step, but far from a systemic fix.”