Parents and children alike expect AI to play a big role in their future.
A new report by Common Sense Media, a nonprofit organization that helps families make informed decisions about media and technology, finds that 71 percent of parents and 60 percent of children and teens believe that by the time today's young people reach adulthood, they will rely so heavily on AI, especially large language models like ChatGPT and Gemini, that they won't be able to live without it.
In fact, 12- to 17-year-olds are already leaning on AI: 59 percent use it to find information and facts, the report found.
“Many children, including those in the age group surveyed, are turning to AI to help them with their schoolwork,” says Tiffany Zhu, assistant professor of global ethics and technology at Old Dominion University. “When people are looking for simple information, they ask an AI instead of typing their question into a search engine.”
It remains to be seen whether the changes will be positive. Here’s what experts say parents should keep in mind.
“Misinformation and bias” are still part of AI output
While much of the content generated by chatbots is benign, some of it can be problematic.
Michael Robb, head of research at Common Sense Media, said that as far as their algorithms go, “we don’t really know what’s going into the language models at scale,” adding, “misinformation and bias are certainly still part of the output.”
If not used carefully, these bots can also “encourage black-and-white thinking and impede (children’s) critical thinking skills,” Zhu says.
For example, researchers at Dartmouth College and Stanford University identified stereotypes of certain minorities in some chatbots. And Japanese researchers have found a pattern of hallucinations, or factually inaccurate output, in various bots as well.
Companies like OpenAI, Anthropic, and Google have investigated and acknowledged the various biases and hallucinations that their bots may exhibit.
“It’s worth double-checking and trying to understand where the information is coming from.”
“I generally believe that most of the responsibility for regulating and improving the design of popular AI tools should rest with AI companies and governments,” Zhu says.
But for parents who want to ensure their children use AI to their benefit, Robb recommends talking to them about these tools.
Tell kids that because chatbots can sometimes provide the wrong answers, “it’s worth double-checking and trying to understand where the information is coming from,” he says.
Many chatbots show where they are getting their information. Children can click those links to judge whether the source is trustworthy, he says. They can also search for the information elsewhere and see whether it matches.