As concerns grow over AI’s impact on young people, OpenAI has rolled out a new safety feature for ChatGPT. The tool is designed to predict a user’s age and limit access to sensitive content if the account appears to belong to a minor.
OpenAI announced the new “age prediction” feature as part of its ongoing efforts to protect young users on ChatGPT.
The company said the system is intended to help identify minors and apply appropriate content constraints, especially around sensitive topics, without relying solely on self-reported age information.
Growing scrutiny over ChatGPT
OpenAI has faced increasing criticism in recent years over how ChatGPT affects children and teenagers.
Several teen suicides have been linked to interactions with the chatbot, while the company has also been criticized for allowing discussions of sexual topics with young users. Last April, OpenAI addressed a bug that enabled ChatGPT to generate erotic content for users under the age of 18.
How the age prediction system works
According to a blog post published Tuesday, the new feature uses an AI algorithm that evaluates “behavioral and account-level signals” to estimate a user’s age.
These signals include the user’s stated age, how long the account has existed, and typical activity times. OpenAI says this approach helps identify underage users more accurately than relying on age declarations alone.
If the system predicts that an account belongs to someone under 18, ChatGPT automatically applies existing content filters.
These filters are designed to restrict discussions involving sex, violence, and other potentially harmful topics, adding another layer of protection for younger users on the platform.
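The gating logic described above can be sketched as a toy heuristic. OpenAI has not published its actual model, so every signal name, threshold, and score in this example is an illustrative assumption, not the company's implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch only: OpenAI has not disclosed its age prediction
# algorithm. The signals and thresholds below are invented for illustration.

@dataclass
class AccountSignals:
    stated_age: int           # age the user entered at signup
    account_age_days: int     # how long the account has existed
    typical_active_hour: int  # most common hour of activity, 0-23

def predict_is_minor(signals: AccountSignals) -> bool:
    """Toy heuristic combining behavioral and account-level signals."""
    score = 0
    if signals.stated_age < 18:
        score += 2            # self-reported age is treated as a strong signal
    if signals.account_age_days < 30:
        score += 1            # very new accounts get less trust
    if 15 <= signals.typical_active_hour <= 21:
        score += 1            # after-school activity pattern
    return score >= 2

def apply_content_policy(signals: AccountSignals) -> str:
    # If the account looks like a minor's, existing content filters kick in;
    # adults flagged by mistake can lift them by verifying their age.
    return "restricted" if predict_is_minor(signals) else "full_access"
```

The point of the sketch is the flow, not the numbers: multiple weak signals are combined rather than trusting the self-reported age alone, and the under-18 prediction simply switches the account onto the existing restricted content policy.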
OpenAI acknowledged that the system may occasionally misidentify adult users as minors.
In such cases, users can restore full access by verifying their age. This process involves submitting a selfie to OpenAI’s identity verification partner, Persona, to confirm adulthood.