Mark Zuckerberg, CEO of Meta Platforms Inc., appears in Los Angeles Superior Court on Wednesday, February 18, 2026 in Los Angeles, California, USA.
Kyle Grillot | Bloomberg | Getty Images
Instagram announced Thursday that its parent company, Meta, will warn parents if their teens repeatedly search for suicide or self-harm terms. The move comes as Meta faces scrutiny across multiple trials.
“These alerts are designed to ensure parents are aware if their teen is repeatedly attempting to search for this content and provide them with the resources they need to support their teen,” the company said in a release.
The parental monitoring feature comes as the company faces allegations that the design and features of apps like Instagram are negatively impacting the mental health of young users.
Experts have likened the court cases, along with related litigation involving companies such as Google’s YouTube, TikTok, and Snap, to a “Big Tobacco” moment for the social media industry, as courts consider the alleged harms of their products and efforts to mislead the public about those effects.
The alerts will roll out next week in the U.S., U.K., Australia, and Canada.
The company said in a blog post that parents will receive an alert if their teen repeatedly searches for “phrases that encourage suicide or self-harm, phrases that suggest the teen wants to harm themselves, or terms such as ‘suicide’ or ‘self-harm'” over a “short period of time.”
The company called this the “right starting point” for finding the right threshold for sending an alert. Meta acknowledged that parents may receive warnings that do not indicate a real cause for concern, but said it will continue to listen to feedback about the feature.
Alerts are delivered to parents via email, text message, WhatsApp, or within Instagram.
To take advantage of the alert feature, both parents and teens must sign up for Instagram’s parental monitoring tool.
Parents who receive the alert will see a message explaining their teen’s Instagram search habits, and will also be given the option to view additional help resources, the company said.
Meta said it plans to eventually release similar parental alerts “for specific AI experiences,” which are intended to notify parents “when their teens attempt to engage in certain types of conversations related to suicide and self-harm with our AI.”
These upcoming AI-related parental alerts come in the wake of growing concerns that AI chatbots from various technology companies, including OpenAI and Meta, are engaging in questionable and potentially harmful conversations with users related to mental health.
Meta offers its own AI chatbot and is working on a new powerful AI model codenamed “Avocado” that is expected to debut later this year, CNBC reported.

Meta CEO Mark Zuckerberg testified in Los Angeles Superior Court last week as part of a trial in which a plaintiff claims he became addicted to social media apps such as Instagram when he was a minor.
In his testimony, Zuckerberg reiterated Meta’s position that Apple and Google, as the owners of mobile operating systems and their associated app stores, are better positioned than app makers to verify the ages of users.
Regarding age verification, the Federal Trade Commission announced Wednesday that it will not enforce measures under the Children’s Online Privacy Protection Act (COPPA) Rule against “certain website and online service operators” that collect user data that can be used to support age verification technology.
The FTC said the policy statement is part of a larger review of COPPA rules regarding age verification.
Legal filings released last week as part of a separate Meta-related trial in New Mexico detail internal messages in which employees discussed how the company’s encryption efforts make it harder to report child sexual abuse material to authorities.
Meta denies the charges in both the California and New Mexico cases.
Last week, CNBC reported that the National Parent Teacher Association would not renew its funding relationship with Meta, citing various legal challenges facing the company regarding children’s digital safety.
If you are having suicidal thoughts or are in distress, please contact the Suicide & Crisis Lifeline (988) for support and assistance from a trained counselor.
WATCH: The worst outcome of the Meta LA trial would be a structural change to the company’s apps.

