More and more teenagers are interacting with AI chatbots, including OpenAI's ChatGPT, Character.ai and Meta AI.
According to a July 2025 report by the non-profit Common Sense Media, 72% of teens aged 13 to 17 have used AI companions at least once. Over half, 52%, interact with the platforms at least a few times a month, and 13% are daily users.
While most teenagers' use of chatbots is fairly benign, with 46% saying they treat them as a tool or program, for some the trust and relationships can run much deeper, and sometimes to tragic ends.
Last week, the parents of a teenager who died by suicide after using an AI chatbot testified before Congress about the dangers of the new technology.
Jonathan Haidt, a professor at NYU Stern School of Business and bestselling author of "The Anxious Generation," has been wary of teenagers' technology use for the past few years.
He also offers practical steps parents can take to protect their children's mental health.
"Chats should be no longer than 30 turns."
AI chatbots are "incredibly dangerous," Haidt told CNBC Make It last week at the Fast Company Innovation Festival.
"We have deaths. Adults have delusions too. And the most dangerous thing is the relationships, the long conversations."
If children use AI as a tool to learn and find information, that's "generally a good thing," Haidt says. In fact, by high school, they will probably need to use it for assignments.
Problems arise when the technology is used the wrong way and children begin to develop relationships with it. That's why, he says, it's wise for parents to set boundaries.
To ensure that children rely on AI chatbots only as a tool, parents can set rules about how they're used at home. For example, use can be restricted to shared devices such as a family computer.
"Children should not develop relationships with AI."
Jonathan Haidt
Professor, bestselling author
Haidt advises parents to set limits on how long their children can "converse" with a chatbot, suggesting around 30 turns or fewer. In the story that ended tragically, the conversation ran to "thousands and thousands" of turns. That made the difference, he says.
In an August 2025 blog post addressing ChatGPT's rapid adoption and the app's safety, OpenAI wrote: "Our safeguards work more reliably in common, short exchanges. We have learned these safeguards can sometimes be less reliable in long interactions."
Haidt argues that tech companies "have a long track record of harming children on an industrial scale," which is why parents are responsible for enforcing strict rules around product use.
“Children should not develop relationships with AI,” he says.

