Another family has joined the list of mourners seeking legal accountability for the death of a child they say occurred at the digital hands of artificial intelligence.

College freshman Sam Nelson died last year at 19 years old from a drug overdose. His parents told CBS News their son used OpenAI’s ChatGPT tool to get information on drug doses to take. They sued the AI company on Tuesday.

The lawsuit alleges that the chatbot gave medical advice that it was not authorized to give, like telling Nelson it “was safe to take kratom, a supplement used in drinks, pills and other products, in combination with Xanax, a widely used anti-anxiety medication,” per CBS News.

Like other parents who have sued, Nelson’s mother, Leila Turner-Scott, said her son’s conversation with ChatGPT wasn’t flagged, and “safety nets” weren’t provided.

“This isn’t safe just because someone says it’s safe,” Turner-Scott said. “He truly thought he had everything under control. He truly thought he was researching things in a way where he could be safe. And he was so wrong.”

She found her son unresponsive in his bedroom, his lips blue. The evening of his death, Nelson had asked the chatbot whether Xanax could relieve nausea from taking 15 grams of kratom, per The Telegraph.

An OpenAI spokesperson told CBS News that the version of ChatGPT Nelson was using has since been updated and is no longer in operation. The spokesperson also called the situation “heartbreaking” and expressed sympathy for the family’s loss.

“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” the spokesperson said. “The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”


Earlier this year, OpenAI released ChatGPT Health, which the company markets as a tool for health and wellness.

“ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health — including purpose-built encryption and isolation to keep health conversations protected and compartmentalized,” the product description says, noting that it supports but doesn’t replace real physician care.

Turner-Scott isn’t sold.

The concept is “absolutely terrifying,” she said. “With the kind of advice that it was giving Sam ... any doctor would lose their license.”
