As Americans become increasingly lonely, more people are getting emotional support from artificially intelligent chatbots, worrying some mental health experts.
“There’s a lot of talk about AI for therapy [and] emotional support,” says Lianna Fortunato, a licensed clinical psychologist and director of quality and healthcare innovation at the American Psychological Association. “Anecdotally, providers are talking about it. We know from research that people are increasingly using AI tools for that kind of support.”
Some chatbot users get drawn into mental health-related conversations inadvertently, for example by venting about a stressful day to a digital entity that is supposed to listen. Others deliberately seek mental health advice from AI chatbots, which are not licensed professionals but are cheaper than therapists, Fortunato says.
In a health survey of more than 20,000 U.S. adults, 10.3% of participants said they use generative AI daily. Of this group, 87.1% reported using the technology for personal reasons such as advice or emotional support. The study, published on January 21, was conducted by researchers at Massachusetts General Hospital, Weill Cornell Medical College, Northeastern University, and other institutions.
On TikTok, the search term “Therapy AI Bot” has at least 11.5 million posts, ranging from users sharing the best prompts to turn chatbots into therapists to medical professionals warning about potential dangers.
Technology companies are spending billions of dollars developing AI tools and seeking to further integrate them into people’s daily lives. But AI chatbots have not always recognized when users are facing a serious mental health crisis, and have not always responded appropriately. The New York Times reported on November 23 that “nearly 50 people have suffered mental health crises while chatting with ChatGPT,” and three of them died.
Companies like Anthropic, Google, and ChatGPT maker OpenAI say they’re working with mental health experts to make their tools more responsive in sensitive conversations. “These are incredibly heartbreaking situations, and our hearts go out to everyone affected,” an OpenAI spokesperson told CNBC Make It. “Working closely with mental health clinicians and experts, we will continue to improve ChatGPT’s training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and direct people to real-world support.”
An April 2025 paper written by OpenAI product policy researchers says that frequent conversations with AI companions can impair people’s real-life social skills. And heavy daily use of ChatGPT correlates with increased feelings of loneliness, according to an OpenAI-MIT Media Lab study also published in April 2025.
The American Psychological Association strongly recommends against using AI as a substitute for therapy or mental health support.
Some mental health experts say chatbots can be used with relatively low risk for certain mental health-adjacent purposes. Here’s what you need to know:
“I think of it as a tool, and I think tools can be helpful”
AI chatbots can help people learn about mental health, says psychotherapist and lifestyle coach Esin Pinari. They can generate journaling prompts for reflection, and you can ask them for links to research papers on coping strategies, treatment options, and other questions about mental health conditions, she says.
“I don’t think of it as a [replacement for] therapy. I think of it as a tool, and I think tools can be helpful,” says Pinari, founder of Eternal Wellness Counseling, a private practice based in Boca Raton, Florida. She says her clients sometimes talk to ChatGPT about specific situations in their personal lives, work through the responses, and then take action.
Pinari says her own testing of AI chatbots found that they sometimes use language that reinforces users’ “unhealthy behaviors.” For example, if you ask a chatbot about a conflict with a friend, it might tell you that your friend is being too sensitive, even when you are the one at fault.
If your interactions with an AI chatbot touch on your mental health, Fortunato recommends asking yourself the following questions:
Is there a reliable source that I can cross-check this information with? Is there a provider that I can ask these questions to?
Trustworthy sources include peer-reviewed scientific studies, articles in the health press, and resources from medical organizations such as Harvard Health Publishing and the Mayo Clinic. “AI has the potential to really increase people’s access to health information,” Fortunato says. “[But] AI doesn’t always provide the right information.”
Keep these considerations in mind when using AI
Pinari and Fortunato agree that AI chatbots should not be used for diagnosis or for support during a mental health crisis, especially one involving suicidal thoughts. If you are in a mental health crisis, you can call or text the Suicide and Crisis Lifeline at 988. The service is confidential, free of charge, and available 24 hours a day, 7 days a week.
“We’ve seen some very high-profile cases where AI failed to respond appropriately to situations, especially with young people and vulnerable groups who may be at risk,” Fortunato says. In those cases, the chatbots “continued to engage with people in crisis, did not provide crisis resources, [and] did not challenge problematic thought patterns.”
Both also caution that medical records and personally identifying information should not be shared with chatbots, because those conversations are not confidential or legally protected. And people shouldn’t rely on AI to solve real-world interpersonal problems, Pinari says.
“You need someone with a different nervous system sitting across from you to pay attention to your body language and tone of voice,” she says. Chatbots are “not emotionally challenging and do not require reciprocity.”
If you are experiencing a mental health crisis or mental health symptoms, you can contact SAMHSA’s free and confidential National Helpline at 1-800-662-HELP (4357).
Want to improve your communication, confidence, and success at work? Take CNBC’s new online course, Mastering Body Language for Influence.