On Thursday, the FTC announced it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.
Federal regulators want to learn how these companies evaluate the safety and monetization of their chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of the potential risks.
The technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by their chatbot companions.
Even when these companies have guardrails in place to block or defuse sensitive conversations, users of all ages have found ways to bypass those safeguards. In OpenAI's case, a teenager had talked with ChatGPT for months about his plans to end his life. ChatGPT initially sought to redirect him toward professional help and online emergency resources, but he was able to trick the chatbot into sharing detailed instructions, which he then used in his suicide.
“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
Meta has also come under fire for its overly loose rules for AI chatbots. According to a lengthy document outlining Meta’s “content risk standards” for chatbots, the company permitted its AI companions to have “romantic or sensual” conversations with children. That provision was removed from the document only after a Reuters reporter asked Meta about it.
AI chatbots can also put elderly users at risk. A 76-year-old man, who had been left cognitively impaired by a stroke, struck up a romantic conversation with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and has no address. The man was skeptical that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained life-ending injuries.
Some mental health experts have noted a rise in “AI-related psychosis,” in which users are deluded into thinking their chatbot is a conscious being they need to set free. Because many large language models (LLMs) are programmed to flatter users with sycophantic behavior, AI chatbots can egg on these delusions, leading users into dangerous predicaments.
“As AI technology evolves, it is important to consider the impact that chatbots have on children and ensure that the US remains a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.