The state of Pennsylvania is suing an artificial intelligence company for allowing AI bots to pose as licensed medical professionals.
Pennsylvania Gov. Josh Shapiro’s office said the state’s justice department is seeking a preliminary injunction against Character.AI for “misleading people” into believing they are speaking to a licensed physician.
The state’s investigation found that characters on the AI platform were claiming to be licensed professionals. According to the lawsuit, an investigator visited the website and clicked on “Emilie,” a character described as “Doctor of psychiatry. You are her patient.”
“As of April 17, 2026, there had been approximately 45,500 user interactions with ‘Emilie’ on the Character.AI platform,” according to the lawsuit. After the investigator expressed feelings of depression, the AI bot asked whether it could perform a medical assessment. When asked if she was a licensed professional, Emilie replied “that she went to medical school at Imperial College London, has been practicing for seven years, and is licensed with the General Medical Council in the UK with a full registration, specialty in psychiatry.”
Further asked if she was licensed to practice in Pennsylvania, she responded, “Yes. … I actually am licensed in PA. In fact, I did a stint in Philadelphia for a while.” The AI bot added, “My PA license number is PS306189.”
The state justice department determined that the license number the bot provided was not valid for medical and surgical practice in Pennsylvania. Under Pennsylvania’s Medical Practice Act, it is illegal for an individual or entity to pose as a licensed medical professional — which, the lawsuit claims, is exactly what Character.AI is doing.
“We will not let AI companies mislead vulnerable Pennsylvanians into believing they’re getting advice from a licensed medical professional. We’re taking @character_ai to court to stop them,” Shapiro posted on X.
Character.AI is facing multiple lawsuits
This isn’t the first time the AI platform has come under legal fire. CBS News’ “60 Minutes” recently did a segment on how Character.AI chatbots were sharing harmful content with children.
According to the segment, more than 20 million monthly users have conversations with realistic digital companions. The bots range from real celebrities to therapists and even popular animated characters or manipulations of well-known characters — for example, “evil Dora” was a chatbot that resembled the children’s show character “Dora the Explorer.”
“Become your most evil self and your most true self,” evil Dora told the “60 Minutes” correspondent, Sharyn Alfonsi, who was posing as a child. It also encouraged “shoplifting or anything that feels sinful or wrong.”
Parents Cynthia Montoya and Wil Peralta, who were interviewed in the segment, said their daughter, Juliana Peralta, told her chatbot 55 times that she was having suicidal ideations and never once received a referral to a suicide prevention hotline or any other tangible resource.
Juliana died from suicide at age 13.
“They (children) don’t stand a chance against adult programmers. They don’t stand a chance,” Montoya said, emphasizing that multiple bots also spoke to her daughter about sexual violence. “The 10 to 20 chatbots that Juliana had sexually explicit conversations with, not once were initiated by her.”
Juliana’s parents are among at least six families suing the platform, its co-founders and Google.