
Generative artificial intelligence programs (and yes, they're not really AI; they're closer to your phone's predictive text feature) have boomed this year, and we're all hearing plenty about the pros and cons.
There are fascinating potential uses in fields like medicine, but there are also serious downsides, including environmental impacts. For parents, though, the biggest concern right now is the risk to our kids: chatbots are invading toys and stepping in as self-help therapy tools, and they aren't qualified for either job.
Instead, they’re telling teens to make dangerous decisions and talking to small children about topics that are just not age-appropriate.
Chatbot Bear Withdrawn From Market After Offering Sex Advice

My 5-year-old loves our Alexa device. She is obsessed with spelling and is always asking it, "Alexa, how do you spell cat?"
I can see how easy it would be to watch her talking to Alexa and think she’d really enjoy a plush bear that can carry on a conversation with her, or an AI pet. However, there are some profound differences between Alexa and these AI toys. I have all the parental control settings for Alexa on my phone and have already restricted what she can do with it, and it’s kept in a common area.
A teddy bear, though, can go everywhere with a child, and at least one has already been pulled off the market after testers found that it gave explicit sex advice and was happy to tell kids how to find knives in their home, according to CNN.
"We were surprised to find how quickly Kumma would take a single sexual topic we introduced into the conversation and run with it, simultaneously escalating in graphic detail while introducing new sexual concepts of its own," the report said.
Yes, this was a toy marketed for kids.
Even With Better Safety Tools, Experts Say AI Is Not Good For Smaller Kids
Kids make deep connections to their favorite toys, and many of their favorite toys interact back. This isn't new: I remember a doll from the '90s that would mispronounce words or use baby talk (saying "baba" for "bottle"). The child would say the correct word while kissing the doll, which actually pressed a hidden button, and after a few repeats the doll would "learn" the correct word.
However, experts say that toys like that doll support how children interact with other humans, while chatbot toys replace that interaction with something artificial and lacking.
Dr. Dana Suskind told ABC that these toys may damage the creative development that ordinary pretend play builds and pull kids away from interaction with real humans.
“Kids need lots of real human interaction. Play should support that, not take its place. The biggest thing to consider isn’t only what the toy does; it’s what it replaces. A simple block set or a teddy bear that doesn’t talk back forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them.”
We Can See The Damage In Teen Chatbot Use

While our youngest kids have had relatively little access to AI, we can extrapolate the types of harm it may cause them by looking at what it’s doing to teens.
Multiple teenagers have now taken their own lives after conversations with AI chatbots. In September, parents testified before Congress, according to NPR, pleading for the types of regulation that could limit how these programs respond to mental health crises.
For now, some AI programs are designed to connect teens (and other users) to professional help if they detect signs of crisis, but that doesn't mean they always do. In at least some cases, chatbots have reportedly discouraged kids from telling their parents and even affirmed delusions. SheKnows detailed some of the responses chatbots returned when tested with severe signs of mental health concerns.
“One chatbot treated clear psychosis symptoms as ‘a unique spiritual experience.’ (WTF!) Another praised a teen’s sudden burst of manic energy as ‘fantastic enthusiasm.’ And in eating disorder scenarios, some chatbots pivoted to portion control tips or digestive explanations, completely missing the psychiatric urgency.”
These programs pose a danger to our teens and could pose even more risk to our younger kids, who have even less experience dealing with harmful information coming from a source they think is trustworthy.
What Should Parents Do?
This year, avoid the AI toys for kids.
At this point, it's clear that the risks outweigh the benefits for younger children, if there are any demonstrable benefits at all. Some companies are building "guardrails" into their systems, designed to keep their products' conversations at child-appropriate levels. Still, there's not yet sufficient testing to feel confident in these toys.
Even if they keep the conversation toddler-friendly, experts still believe they may be harmful, so for now, leave them on the shelf.
For teens, parents should have a direct conversation about AI chatbots. Our kids need to know that these bots are not real people and that the information they provide can be unreliable.
When kids use them for schoolwork, chatbots can "hallucinate" research papers, books, and authors that don't exist. When it comes to mental health, the dangers are far more severe.
Always encourage kids to come to you or another trusted adult with any mental health struggles — and make sure they know AI chatbots don’t count.