OpenAI on Thursday announced a new feature called Trusted Contact, designed to alert a trusted third party if self-harm comes up in a conversation. The feature allows adult ChatGPT users to designate another person, such as a friend or family member, as a trusted contact within their account. If a conversation appears at risk of escalating toward self-harm, OpenAI will prompt the user to reach out to that contact, and in some cases it will automatically alert the contact and encourage them to check in.
OpenAI is facing a wave of lawsuits from the families of people who died by suicide after talking to its chatbots. In many cases, the families allege that ChatGPT encouraged their loved ones to take their own lives or helped them plan how to do so.
OpenAI currently uses a combination of automation and human review to address potentially harmful incidents. Certain conversation triggers alert the company’s systems to signs of suicidal thoughts, and that information is relayed to a human safety team. The company says a human reviews each incident that generates this type of notification. “We strive to review these safety notices within an hour,” the company said.
If OpenAI’s internal team determines that a situation represents a significant safety risk, ChatGPT will alert the user’s trusted contacts via email, text message, or in-app notification. Alerts are concise and designed to encourage contacts to check in with the person in question. The company said they do not include detailed information about what was being discussed, in order to protect user privacy.

The Trusted Contact feature follows safeguards the company introduced last September to give parents some oversight of their teens’ accounts, including safety notifications designed to alert parents if OpenAI’s systems determine that their child faces a “significant safety risk.” For some time now, ChatGPT has also shown automated prompts urging users to seek professional help when a conversation turns to self-harm.
Importantly, Trusted Contacts are optional, and a user can simply switch to another ChatGPT account where the protection is not enabled. OpenAI’s parental controls are likewise optional and share similar limitations.
“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people in difficult moments,” the company said in an announcement post. “We will continue to work with clinicians, researchers, and policy makers to improve how AI systems respond when people may be experiencing distress.”
