Artificial intelligence is reshaping the workplace and increasingly finding its way into the hands of teens and children.
From homework help to chatting with AI “friends,” tools like ChatGPT have free versions online that are easily accessible to young users. These AI chatbots are built on large language models (LLMs) and generate human-like responses, which has raised concerns among parents, educators, and researchers.
According to a 2024 study by Pew Research Center, 26% of U.S. teens ages 13 to 17 say they have used ChatGPT for schoolwork, double the percentage from the previous year. Chatbot awareness rose from 67% in 2023 to 79% in 2024.
Regulators also took notice. In September, the Federal Trade Commission ordered seven companies, including OpenAI, Alphabet, and Meta, to explain how their AI chatbots could affect children and adolescents.
In response to increased scrutiny, OpenAI announced the same month that it would launch a dedicated ChatGPT experience with parental controls for users under 18 and develop tools to better predict a user’s age. The company says the system automatically directs minors to a “ChatGPT experience with age-appropriate policies.”
Risks for children using AI chatbots
However, some experts are concerned that early exposure to AI, especially as today’s younger generations grow up with the technology, could have a negative impact on children and teens’ thinking and learning.
A 2025 pilot study by researchers at the MIT Media Lab investigated the cognitive costs of using LLMs when writing essays. Fifty-four participants between the ages of 18 and 39 were asked to write essays and were assigned to three groups: one group used an AI chatbot, another used a search engine, and a third relied solely on their own knowledge.
The convenience of having this tool now will come at a cost down the road, and that cost will likely accumulate.
Nataliya Kosmyna
Research scientist, MIT Media Lab
The study, which is still under peer review, found that brain connectivity “systematically decreases with the amount of external support.”
According to the study, “the brain-only group showed the strongest and most extensive network, the search engine group showed intermediate involvement, and the LLM assistance had the weakest overall (neural) connectivity.”
Ultimately, the study suggests that relying on AI chatbots can lead to “cognitive debt,” a pattern in which people lose ownership of their work and defer mental effort in the short term, which can impair creativity and leave users susceptible to manipulation in the long term.
“The convenience of having this tool now will come at a cost down the road, and that cost will likely accumulate,” said Nataliya Kosmyna, the research scientist who led the study at the MIT Media Lab. The findings also suggest that relying on LLMs can lead to “serious problems with critical thinking,” she added.
Children in particular may be at risk of negative effects on their cognitive function and development if they use AI chatbots too early. To reduce these risks, researchers agree that it is critical for everyone, especially young people, to develop skills and knowledge first, before relying on AI tools to complete tasks.
“Even if you don’t become an expert, develop your own skills (first),” Kosmyna said.
This makes it easier to spot inconsistencies and AI hallucinations (in which inaccurate or fabricated information is presented as fact), she added, and also “supports the development of critical thinking.”
“For young children…I think it’s really important to limit the use of generative AI because they need more opportunities to think critically and independently,” said Pilyoung Kim, a professor at the University of Denver and an expert in child psychology.
Kosmyna explained that there are privacy risks children may not be aware of, making it important to use these tools responsibly and safely. “We need to teach holistically, not just AI literacy, but also computer literacy,” she said. “We need really clear technical hygiene.”
Children are also more likely to anthropomorphize, or attribute human characteristics and behaviors to non-human beings, Kim said.
“We now have machines that speak like humans,” Kim said, which could leave children in a vulnerable position. “Simply praising these social robots can really change their behavior,” she added.
Protecting children in the age of AI
Now that a generation of AI natives is growing up with access to these tools, experts are asking themselves: what happens with long-term use?
“It’s too early (to know). Of course, no one is doing research on 3-year-olds, but it’s really important to keep in mind that we need to understand what’s going on in the brains of people who are using these tools at a very young age,” Kosmyna said.
“We’re seeing cases of AI psychosis. You know, we’re seeing cases where lives are being lost. We’re also seeing some severe depression… and that’s very alarming, sad and ultimately dangerous,” she added.
Kosmyna and Kim said regulators and technology companies share a responsibility to protect society and young people by putting appropriate guardrails in place.
Kim’s advice to parents is simple: keep communication with your kids open and monitor the AI tools they use, including what they type into the LLMs.
