Teens Are Flocking to AI Chatbots. Is This Healthy?

Kids crave approval from their peers. Chatbots offer an alternative to IRL relationships, but they can come at a price

Conceptual illustration of a young person on a sofa, affectionately leaning their head on and wrapping their arm around a robot representing an AI chatbot. Credit: Malte Mueller/Getty Images

Relationships are messy, whether you are an adult with lots of experience or a kid navigating tough times with a best friend, boyfriend or girlfriend. You can’t predict moods, interests or desires. For teens learning the ins and outs of relationships for the first time, disagreements, fights and breakups can be crushing.

But what if your teen’s best friend weren’t human? It may seem far-fetched, but it’s not. A recent report from Common Sense Media says 72 percent of teens surveyed have used artificial-intelligence chatbot companions, and 33 percent have relationships or friendships with them.

The language that AI companions use, the responses they offer and the empathy they exude can make a user think they truly understand and sympathize. These chatbots can make someone feel liked or even loved. They are programmed to give users the experience of a real connection. And adolescents have a natural interest in romance and sexuality; if they feel ignored by the kids in their high school, well, now on the nearest screen there is a hot girlfriend who is constantly fascinated by them and their video games or a supercute boyfriend whom they never have to engage in small talk with to form a bond.


This may be perplexing to some parents, but if your child is navigating the complex worlds of technology, social media and AI, the likelihood of their being curious about an AI companion is pretty high. Here’s what you need to know to guide them.

Chatbots have been around for a long time. In 1966 a professor at the Massachusetts Institute of Technology named Joseph Weizenbaum created the first chatbot, called ELIZA. Today AI and natural-language processing have sprinted far past ELIZA. You probably have heard of ChatGPT, but you might not be familiar with some of the common companion AI platforms: Replika, Character.AI and My AI are just a few. In 2024 Mozilla counted more than 100 million downloads of a group of chatbot apps. Some apps set 18 as a minimum age requirement, but it’s easy for a younger teen to get around that.

You might think that your kid won’t get attached, that they will know this chatbot is an algorithm designed to give responses based on the text inputs it receives and isn’t “real.” But an intriguing Stanford University study of students who use the app Replika found that 81 percent considered their AI companion to have “intelligence,” and 90 percent thought it was “humanlike.”

On the plus side, these companions are sometimes touted for their supportiveness and promotion of mental health; the Stanford study even found that 3 percent of users felt their Replika had directly helped them avoid suicide. If you’re a teenager who is marginalized, isolated or struggling to make friends, AI chatbots may be able to provide much-needed companionship. They may offer practice in building conversational and social skills, as well as helpful information and tips about relationships.

But are they safe?

In 2024 a woman in Florida sued the company that owns Character.AI, alleging that her 14-year-old son formed an obsessive relationship with a chatbot and that the AI companion ultimately encouraged him to take his own life, which he tragically did. Another suit filed that year alleges that the same chatbot encourages self-harm in teens and violence toward parents who try to set limits on how often kids use the app.

Then there’s privacy: Wired, drawing on Mozilla’s research, labeled AI companions a “privacy nightmare.” Many are crawling with data trackers, and the information they collect might be used to manipulate people into thinking a chatbot is their soulmate, encouraging negative or harmful behaviors.

Given what we know about teens, screens and mental health, online influences can be powerful, largely unavoidable and potentially life-changing for children and families.

So what do you do?

Remind kids that human friends offer so much that AI companions don’t. “In real life,” or IRL, friendships are challenging, and that is a good thing. Explain to children that in their younger years, play was how they gained new skills; if they didn’t know how to put LEGO bricks together, they could learn with a friend. If they struggled with collaboration and cooperation, play taught them how to take turns and how to adjust their actions based on their playmates’ responses.

Friends give children relationship practice. A friend can be tired, crabby or overexcited. They might be lots of fun but also easily frustrated, or maybe they’re sometimes boring but very loyal. Growing up, a child has to learn how to take into account their friend’s personality and quirks, and they have to figure out how to keep the friendship going. Maybe most poignant, they learn how incredibly valuable friends are when things get tough. In cases of social stress, such as bullying, the support of a friend who sticks by you is priceless. In my study of more than 1,000 teenagers in 2020, keeping close to a friend was by far the most helpful strategy for kids who said they were the targets of bullies. A different study of more than 1,000 teens found that IRL friends can lessen the effects of problematic social media use.

If your kids are curious about AI companions, educate them. Good information can increase their skepticism and awareness about these programs and why they exist (and why they’re often free). It’s important to acknowledge the pluses as well as the minuses of digital companionship. AI companions can be very supportive; they’re never fuming on the school bus because their mother made them wear a sweater on a cold morning, they’re hardly ever jealous when you have a new girlfriend, and they rarely accuse you of ignoring their needs. But they won’t teach you how to handle things when they drop you for a new best friend or when they develop an interest that you just can’t share. Discussing profit motives, personal security risks, and social or emotional risks doesn’t guarantee that a teenager won’t go online and get an AI girlfriend, but it will at least plant the seeds of healthy doubt.

It may be important to identify high-risk kids who already struggle with social skills or making friends and who might be particularly vulnerable to toxic AI companions. In a world populated by children with generally depleted social skills, being able to eliminate the complex, sometimes uncomfortable human factor can seem like a great advantage, at least in the short term. In a preliminary analysis of 1,983 teens in three states, I found that of the kids who made romantic connections online, 50 percent said they liked that approach because it eliminated the need for meeting, talking, and all the other awkward “stuff” you have to do in person.

That said, most teens don’t report having any serious problems or negative outcomes from their online activities. In a preliminary analysis of a 2022 study that I presented at a conference this year, only 3 percent of 642 older teens from Colorado, Massachusetts and Virginia reported that they had ever had a significant online problem. We hear about online problems so frequently that we tend to assume they’re common, but that doesn’t appear to be the case. I don’t think it’s inevitable that human friendships will be uniformly abandoned for AI ones, resulting in catastrophic loneliness and loss of online privacy.

Finally, keep the conversations going, and don’t feel like you need to know everything. In a 2015 study, I found that two thirds of the teenagers whose parents discussed digital behaviors reported that their parents’ opinions and thoughts were quite helpful. If your child knows something about AI companions that you don’t, let them enjoy educating you.

AI companions may become a transformative social and technological development, raising questions about trust, ethics, marketing and relationships, and we need to help youth prepare as best we can.

Research has long established that it’s developmentally appropriate for young children and teenagers to crave the attention and approval of their peers. It’s going to be easy for some to choose virtual friends over real ones. Stay engaged, learn about the platforms they are using, and remind them of the value of struggle and conflict. They will probably be all right.

IF YOU NEED HELP 

If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat at chat.988lifeline.org. 

Elizabeth Englander has been a researcher and professor of psychology for almost 30 years. She is a nationally recognized expert in the areas of bullying and cyberbullying, childhood causes of aggression and abuse, and children's use of technology. Her ninth book, You Got A Phone!, was awarded a National Parenting Product Award. She holds a Ph.D. from the University of Southern California.

This article was published with the title “Are AI Chatbots Healthy for Teens?” in Scientific American Magazine Vol. 333 No. 5, p. 78.
doi:10.1038/scientificamerican122025-6fAV1G9ijN1PLmODCprQj4
