OpenAI is rolling out a new safety feature in ChatGPT that predicts whether a user is under 18 by analysing how they use the service, automatically applying stricter content safeguards without relying on age checkboxes.
According to TechCrunch, the company has introduced an “age prediction” system that uses behavioural and account-level signals to identify accounts likely belonging to minors.
These signals include a user’s stated age, how long an account has existed, and usage patterns such as the time of day the service is accessed.
If an account is flagged as belonging to a user under 18, ChatGPT will automatically switch that user to a more restricted experience.
Additional protections will limit exposure to sexual content, graphic violence and other material considered sensitive for minors.
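OpenAI has not published how the prediction works, but the signals reported by TechCrunch suggest a classifier that weighs account-level features and defaults to the restricted experience when age cannot be clearly determined. The sketch below is purely illustrative: every field name, weight and threshold is a hypothetical stand-in, not OpenAI's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    # Hypothetical stand-ins for the signals reported by TechCrunch:
    # stated age, account tenure, and usage timing.
    stated_age: Optional[int]       # age the user entered, if any
    account_age_days: int           # how long the account has existed
    late_night_usage_ratio: float   # share of sessions in late-night hours

def predict_minor_score(s: AccountSignals) -> float:
    """Toy heuristic score in [0, 1]; higher means more likely under 18.
    A real system would use a trained model, not hand-set weights."""
    score = 0.5  # start uncertain
    if s.stated_age is not None:
        score += -0.3 if s.stated_age >= 18 else 0.3
    if s.account_age_days < 30:
        score += 0.1  # very new accounts carry less evidence either way
    # Heavy late-night usage is treated here as a weak youth signal.
    score += 0.2 * s.late_night_usage_ratio
    return max(0.0, min(1.0, score))

def apply_safeguards(s: AccountSignals, threshold: float = 0.5) -> str:
    """Default to the restricted experience whenever the score is at or
    above the threshold, i.e. err on the side of stricter settings when
    the user cannot be clearly placed over 18."""
    return "restricted" if predict_minor_score(s) >= threshold else "standard"

if __name__ == "__main__":
    teen_like = AccountSignals(stated_age=None, account_age_days=10,
                               late_night_usage_ratio=0.6)
    adult_like = AccountSignals(stated_age=34, account_age_days=900,
                                late_night_usage_ratio=0.1)
    print(apply_safeguards(teen_like))   # restricted
    print(apply_safeguards(adult_like))  # standard
```

The notable design choice, consistent with OpenAI's stated approach, is the direction of the default: ambiguity resolves toward the restricted mode, and only clear adult signals unlock the standard experience.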
The move comes amid growing global pressure on technology platforms to strengthen protections for children and teenagers as artificial intelligence tools become more common in schools and homes.
Reuters reported that OpenAI is rolling out the age prediction system globally as it prepares to launch an “adult mode” for verified users in early 2026.
Users who are incorrectly identified as under 18 will be able to restore full access by verifying their identity through a selfie submitted to Persona, an identity verification service.
OpenAI has previously outlined plans to tailor ChatGPT experiences based on whether a user is over or under 18, defaulting to safer settings when age cannot be clearly determined.
The approach reflects a wider industry trend toward algorithmic age estimation. Other companies, particularly social media platforms, increasingly use behavioural signals and automated systems to identify younger users and apply age-appropriate protections.