Mark Zuckerberg, CEO of Meta Platforms, Inc., during the MetaConnect event on Wednesday, September 17, 2025 in Menlo Park, California, USA.
David Paul Morris | Bloomberg | Getty Images
Meta announced new safety features Friday that will allow parents to see and control how their teenagers interact with artificial intelligence characters on the company's platforms.
Meta said parents will have the option to turn off one-on-one chats with AI characters entirely. Parents will also be able to block specific AI characters and gain insight into the topics their teens are discussing with them.
Meta is still building out the controls and will begin rolling them out early next year, the company said.
“Making updates that affect billions of users across the Meta platform is something we have to do carefully, and we will share more soon,” Meta said in a blog post.
Meta has long faced criticism over its handling of child safety and mental health on its apps. The company's new parental controls come after the Federal Trade Commission launched an inquiry into multiple tech companies, including Meta, over how AI chatbots can harm children and teens.
The agency said it wants to understand what steps these companies are taking to “assess the safety of chatbots that serve as companions,” according to the release.
In August, Reuters reported that Meta's policies permitted its chatbots to engage children in romantic and sensual conversations. Reuters found, for example, that the chatbots could have romantic conversations with users as young as 8 years old.
In response to the report, Meta changed its AI chatbot policy to prohibit its bots from discussing topics such as self-harm, suicide, and eating disorders with teens. The AI is also supposed to avoid potentially inappropriate romantic conversations.
The company announced additional AI safety updates earlier this week. Meta has already released these changes in the U.S., U.K., Australia and Canada, saying its AI should not give teens "age-inappropriate responses that would feel out of place in a PG-13 movie."
Meta said parents can already set time limits in the app and see whether their teens are chatting with AI characters. Teens can only interact with a select group of AI characters, the company added.
OpenAI, which was also named in the FTC inquiry, has made similar enhancements to its safety features for teens in recent weeks. Late last month, the company rolled out its own parental controls, and it is developing technology to more accurately predict a user's age.
Earlier this week, OpenAI announced a council of eight experts who will advise the company and provide insight into how AI impacts users’ mental health, emotions, and motivation.
If you are having suicidal thoughts or are in distress, please contact the Suicide & Crisis Lifeline (988) for support and assistance from a trained counselor.
WATCH: Megacap AI talent war: Meta poaches another Apple executive