Sam Altman, CEO of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, USA, on Tuesday, September 23, 2025.
OpenAI CEO Sam Altman said Wednesday that the company is “not the world’s chosen morality police” following backlash over his decision to ease restrictions and allow content such as erotica in ChatGPT.
The artificial intelligence startup has expanded its safety controls in recent months as it faces increased scrutiny over how it protects users, especially minors.
But Altman said in a post on X on Tuesday that most restrictions could be “safely relaxed” because OpenAI has introduced new tools that can now alleviate “serious mental health issues.”
Altman announced Tuesday that, starting in December, ChatGPT would allow more content, including erotica, for “verified adults.”
Altman sought to clarify the move in a post on X on Wednesday, saying that OpenAI takes the “principle of treating adult users like adults” very seriously, but will still not allow “anything that is harmful to others.”
“Just as society distinguishes between other appropriate boundaries (e.g., R-rated movies), we want to do the same here,” Altman wrote.
The posts are at odds with comments Altman made during a podcast appearance in August, when he said he was “proud” that OpenAI had resisted adding engagement-boosting features such as “sexbot avatars” to ChatGPT.
“There are a lot of short-term things that really drive growth and revenue, but they’re very disconnected from long-term goals,” Altman said.
In September, the Federal Trade Commission launched an inquiry into OpenAI and other technology companies over concerns that chatbots like ChatGPT can harm children and adolescents. OpenAI is also facing a wrongful death lawsuit from a family who blamed ChatGPT for their teenage son’s suicide.
In the months since the inquiry and lawsuit, the company has taken several public steps to strengthen ChatGPT’s safety. It launched a suite of parental controls late last month and is building an age-prediction system that automatically applies teen-appropriate settings to users under 18.
OpenAI announced Tuesday that it has convened a council of eight experts to provide insight into how AI impacts users’ mental health, emotions, and motivation. Altman posted on the same day about the company’s goal of easing restrictions, sparking confusion and swift backlash on social media.
Altman said the post “exploded” much more than he expected.
His post also attracted the attention of advocacy groups such as the National Center on Sexual Exploitation, which called on OpenAI to reverse its decision to allow erotica on ChatGPT.
“Sexual AI chatbots are inherently dangerous and pose real mental health risks from their artificial intimacy, all within the context of poorly defined industry safety standards,” Hayley McNamara, NCOSE’s executive director, said in a statement Wednesday.
If you are having suicidal thoughts or are in distress, please contact the Suicide & Crisis Lifeline (988) for support and assistance from a trained counselor.