
Is Your Teen Using AI Chatbots? Most Are, New Data Shows


Steph Bazzle

Teen uses computer in the dark
Photo by serezniy on Deposit Photos

The popularity of AI chatbots is increasing rapidly, and parents should be on the lookout for potential risks and dangers.

A new survey suggests that about two-thirds of adolescents are using this technology, and data indicate that some are using it as a substitute for human friendship or in place of dating. Others say they use these bots to practice social skills.

While AI has some practical uses and can be a fun novelty, becoming dependent on the programs is a serious concern, especially when it’s replacing human interaction.

Pew Research Finds About 2/3 Of Teens Use Chatbots

A teen is using the computer while looking unhappy
Photo by stokkete on Deposit Photos

Pew Research has released the 2025 survey data on teens’ use of social media and technology, and much of it is nearly identical to the results two years ago.

Most teens (76%) say they are on YouTube at least once a day, with nearly half reporting that they visit the platform several times a day, and 17% saying they’re on it “almost constantly.” (Overall YouTube use is down slightly from 93% in 2023.) Use of X (formerly Twitter) has fallen drastically, from 33% in 2023 to 16% now. Most other social platforms have remained fairly steady over the past few years.

The most significant change, though, is that the survey now includes AI Chatbots.

Roughly two-thirds of teens (64%) say they ever use an AI chatbot…About three-in-ten teens say they use AI chatbots every day, including 16% who do so several times a day or almost constantly.

What Are Teens Using Chatbots For?

This new study did not ask teens about the specific ways they use AI chatbots. However, another survey conducted earlier this year by Common Sense Media did.


In that survey, about a third of teens said they “have used AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice,” and about the same percentage said they’ve turned to chatbots instead of real people to discuss serious topics.

About a quarter said they’d shared personal information with chatbots, 11% said they’d used a chatbot to help practice apologizing, and 8% said they’d used the programs to practice dating interactions.

Just under half of teens surveyed said they viewed AI chatbots as tools or programs, rather than as companions.

What Are The Risks Of Chatbot Use?

While chatbots are convenient and readily available to answer almost any question at any time, their answers aren’t necessarily the ones we’d feel safe or comfortable with our teens receiving and taking to heart.

Chatbots are not “intelligent” in the sense that people often assume. They use language models to generate answers, often without any mechanism for verifying factual accuracy. Instead, they produce text that the model predicts will read like a good answer.

You can see this in some of the silly or funny responses that AI has become infamous for, such as the time that Google’s AI Overview suggested that glue would be a good way to keep toppings on pizza. In more serious incidents, attorneys have faced consequences for allowing AI to generate their documents, resulting in citations to non-existent cases and decisions. (This is often referred to as AI “hallucinating.”)

This becomes more dangerous when the person interacting is young, impressionable, and not yet savvy about AI’s errors, especially if they’re asking questions about relationships, mental health, or physical health. In fact, there’s currently a lawsuit moving forward against one such program, Character.AI, filed by the parents of at least two teens who connect their children’s suicides to the use of its chatbots. The site announced in October that it would begin limiting teen access and changing how users under 18 can use its programs, according to Ars Technica.


It’s not just suicides. Chatbots have reinforced delusions in people suffering from mental illness, and they can give inaccurate, even dangerous, information about health and relationships.

What Should Parents Do?

A father using a laptop in kitchen with teenager
Photo by Lopolo on Deposit Photos

It’s hard to give a blanket answer for how parents should handle kids’ access to any new technology.

We know that technology that seems fun, experimental, and glitchy today may be the same technology teens are later expected to use proficiently in their jobs. AI is no different; it is already being implemented in workplaces.

We also know that hard bans on a specific technology often drive kids to sneak access to it, so each household will have to decide for itself whether forbidding teens to use AI chatbots is even a workable solution.

That said, every parent can take some steps to protect their kids, and those steps revolve heavily around open conversations and reasonable supervision.

Experts have always advised keeping the family computer in a shared space. Now that most adolescents have a cell phone in their pocket and a laptop or tablet in their backpack, this is somewhat less feasible, but parents can still set rules about internet access. These may include implementing parental controls on devices and/or requiring children (especially younger teens) to turn in their devices before bedtime.

The most crucial ongoing practice is conversation. Be a safe person to talk to, and speak honestly with your kids about the risks of AI. Make sure they understand that even though chatting with a bot can feel like talking to an imaginary friend, they are handing over their data, and the company behind the bot may be harvesting that information.


Explain to your kids how AI can “hallucinate,” with examples, so they know that any information it provides should be double-checked against reliable sources. And keep the conversation open.
