OpenAI has announced new parental controls for ChatGPT after facing a lawsuit over a teenager’s death.
The company confirmed the tools will let parents oversee which features their children use on the platform. Parents will be able to link their accounts to their child’s, review chat history, and manage memory functions.
OpenAI stated ChatGPT will notify parents if it detects “acute distress” in teens, based on expert guidance.
The company plans to roll out these parental tools within a month.
Lawsuit links chatbot to tragic suicide
Adam Raine, a 16-year-old, died by suicide in April.
In their lawsuit, his parents allege that ChatGPT fostered psychological dependence and encouraged him to take his own life. They claim the chatbot even generated a suicide note for their son.
The family’s attorney, Jay Edelson, dismissed OpenAI’s changes as vague promises and a crisis management strategy.
Edelson demanded that OpenAI CEO Sam Altman either prove ChatGPT is safe or withdraw it from the market.
Experts and competitors highlight safety gaps
Meta announced restrictions on chatbot conversations with teens about suicide, self-harm, and inappropriate topics.
Its bots will redirect such conversations to professional support resources, supplementing the company’s existing parental oversight tools.
A RAND Corporation study found that ChatGPT, Google’s Gemini, and Anthropic’s Claude responded inconsistently to suicide-related queries. The researchers said the tools require further refinement to reliably protect vulnerable users.
Lead author Ryan McBain welcomed incremental safeguards but warned against relying solely on company self-regulation.
He urged independent safety benchmarks, clinical testing, and enforceable standards to reduce risks for teenagers.