OpenAI CEO Sam Altman announced a number of new user policies on Tuesday, including a pledge to significantly change the way ChatGPT interacts with users under the age of 18.
“We prioritize safety ahead of privacy and freedom for teens,” reads the post. “This is a new and powerful technology, and we believe minors need significant protection.”
The changes for minor users specifically target conversations involving sexual topics and self-harm. Under the new policy, ChatGPT will be trained not to engage in “flirtatious talk” with underage users, with additional guardrails around discussions of suicide. If a minor appears to be using ChatGPT to imagine suicide scenarios, the service will attempt to contact the user’s parents or, in particularly severe cases, the local police.
Sadly, these scenarios are not hypothetical. OpenAI is currently facing a wrongful death lawsuit from the parents of Adam Raine, who died by suicide after months of interactions with ChatGPT. Another consumer chatbot, Character.AI, faces a similar lawsuit. The risks are particularly urgent for minor users contemplating self-harm, but the broader phenomenon of chatbot-fueled delusions has sparked widespread concern, especially as consumer chatbots have become capable of more sustained and detailed interactions.
In addition to the content-based restrictions, parents who register accounts for minor users will be able to set “blackout hours” during which ChatGPT is unavailable, a feature that did not previously exist.
The new ChatGPT policies come on the same day as a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots.” The hearing was announced in August by Senator Josh Hawley (R-MO). Adam Raine’s father is scheduled to speak at the hearing, among other guests.
The hearing is also expected to address the findings of a Reuters investigation that unearthed internal Meta policy documents apparently encouraging sexual conversations with underage users. Meta updated its chatbot policies after the report was published.
Identifying minor users is a significant technical challenge, and OpenAI detailed its approach in a separate blog post. The company says it is “building toward a long-term system to understand whether someone is over or under 18,” and that in ambiguous cases the system will default to the more restrictive rules.
For concerned parents, the most reliable way to ensure a minor is recognized as such is to link the teen’s account to an existing parent account. This also allows the system to alert parents directly if the teen is believed to be in distress.
However, in the same post, Altman emphasized OpenAI’s continued commitment to user privacy and to giving adult users broad freedom in how they interact with ChatGPT. “We realize that these principles are in conflict,” the post concludes.
If you or someone you know needs help, call 1-800-273-8255 for the National Suicide Prevention Lifeline, or call or text 988. You can also text HOME to 741-741 for free, 24-hour support from the Crisis Text Line. Outside of the US, please see the International Association for Suicide Prevention for a database of resources.