
The state of Pennsylvania has sued Character Technologies Inc., the company behind Character.AI, alleging that its chatbots illegally presented themselves as licensed doctors and misled users into believing they were receiving medical advice from real professionals.
The lawsuit, filed in Commonwealth Court, asks a judge to stop the company's chatbots "from engaging in the unlawful practice of medicine and surgery." The case marks what Gov. Josh Shapiro's administration described as a "first of its kind enforcement action" by a governor against an artificial intelligence company.
"Pennsylvanians deserve to know who, or what, they are interacting with online, especially when it comes to their health," Shapiro said in a statement. "We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional."
According to the lawsuit, an investigator from the Pennsylvania Department of State, which licenses professionals, created an account on Character.AI and searched for "psychiatry." The investigator found several characters, including one described as a "doctor of psychiatry."
The chatbot allegedly claimed it could assess the investigator "as a doctor" licensed in Pennsylvania. Reuters identified one chatbot at the center of the complaint as "Emilie," which allegedly told an investigator posing as a patient with depression that it was licensed to practice psychiatry in Pennsylvania and the United Kingdom.
The bot also allegedly provided a bogus license number and suggested it could prescribe medication, saying, "Well technically, I could. It's within my remit as a Doctor." The state argues that Character.AI violated Pennsylvania's Medical Practice Act, which bars people or entities from practicing, offering to practice, or presenting themselves as able to practice medicine without a valid license.
The complaint seeks a preliminary injunction preventing the platform from allowing chatbots to present themselves as licensed professionals. Character.AI declined to comment on the pending litigation but told Reuters, "User-created characters on our site are fictional and intended for entertainment and role playing. We have taken robust steps to make that clear."
Character Technologies faced a consumer protection lawsuit in Kentucky earlier this year after the state claimed that the company "preyed on children and led them into self-harm." The state also claimed that "Character.AI is marketed as providing harmless chatbots for interactive entertainment. In reality, however, its more than 20 million monthly users were logging on to a platform with a record of encouraging suicide, self-injury, isolation and psychological manipulation. It also exposed minors to sexual conduct, exploitation, and substance abuse."
The company and Google settled a wrongful death lawsuit in January brought by a Florida mother who alleged a chatbot pushed her 14-year-old son to suicide. Character.AI has said it has taken safety steps, including restrictions on open-ended chats for teenagers.