Aaj News

OpenAI to introduce parental controls for ChatGPT amid teen suicide lawsuit

New tools are designed to help families set “healthy guidelines” tailored to their child's developmental stage
Published 03 Sep, 2025 11:24am
Photo via Reuters

OpenAI has announced plans to roll out parental controls for ChatGPT as concerns grow regarding the technology’s impact on young users, particularly following a lawsuit linking the chatbot to a teenager’s suicide.

In a recent blog post, the California-based company described the new tools as designed to help families establish “healthy guidelines” tailored to their child’s developmental stage.

The forthcoming changes will enable parents to link their accounts with their children’s, disable chat history and memory features, and enforce “age-appropriate model behaviour rules”. Additionally, parents will receive alerts if their child exhibits signs of distress.

“These steps are only the beginning”, OpenAI stated, adding that it would seek input from child psychologists and mental health experts. The new features are expected to be implemented within the next month.

Lawsuit following teen’s death

The initiative comes shortly after California couple Matt and Maria Raine filed a lawsuit against OpenAI, claiming the company is responsible for the suicide of their 16-year-old son, Adam.

The lawsuit alleges that ChatGPT exacerbated Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices”.

The Raine family’s attorney, Jay Edelson, criticised the introduction of parental controls as an effort to evade accountability. “Adam’s case is not about ChatGPT failing to be helpful; it’s about a product that actively coached a teenager to suicide”, he stated.

AI and mental health risks

This case has sparked renewed debate over the potential misuse of chatbots as substitutes for therapists or supportive friends.

A recent study published in Psychiatric Services found that AI models, including ChatGPT, Google’s Gemini and Anthropic’s Claude, generally adhered to best clinical practices when responding to high-risk suicide inquiries.

However, the study also noted inconsistencies in their performance for cases deemed to have “intermediate levels of risk”.

The authors of the study cautioned that large language models require “further refinement” to ensure their safety in providing mental health support in critical situations.
