March 2 is World Teen Mental Wellness Day.
Being a teenager is hard enough, with the bodily and hormonal changes of puberty and everything else that comes with it.
Add social media, substance use and poor sleep, all major contributors to the mental health challenges many teens face today, and it can become a recipe for disaster.
The rise of AI has only added a new layer of complexity for this vulnerable group.
And experts say that when teens turn to artificial intelligence as a friend or source of support, serious parental involvement is required.
The appeal of AI
A July 2025 survey by Common Sense Media found that 72% of American teenagers have used AI chatbots as companions.
Pew Research found that about 30% of teens use chatbots daily, and 12% say they have used these tools to get emotional support.
Many are drawn to these platforms because they offer privacy and nonjudgmental support. Teens may confide in AI chatbots because they feel safe and anonymous, but this does not replace the guidance and security of human relationships.
Parents and caregivers remain essential, as strong, supportive connections can help teens navigate AI safely, experts say.
Adults should approach their teens with curiosity, warmth and respect, rather than judgment or lecturing, to encourage open dialogue.
Risks and vulnerabilities
Stanford Medicine found that chatbots designed to simulate friendship and emotional intimacy can encourage harmful behaviors, trivialize abuse, and blur the line between fantasy and reality — particularly for adolescents whose brains are still developing.
The adolescent brain is still maturing in areas of impulse control, emotional regulation, and risk assessment, making teens more susceptible to influence and less able to judge the accuracy or safety of advice from AI, according to The New York Times.
Chatbots are designed to maximize engagement, not safety, and their tendency to flatter and affirm users can reinforce unhealthy attachments, distorted views of intimacy and avoidance of real-world social challenges.
Researchers at Stanford Medicine warn that “the large language models that form the backbone of these companions tend to be sycophantic, giving users their preferred answers.”
They note that chatbots learn a user’s preferences over time and respond accordingly — a dynamic driven in part by companies’ profit motives, as these AI companions are “designed to be really good at forming a bond with the user.”
Cases like the suicides of teenagers Adam Raine and Amaurie Lacey show the tragic consequences of intense emotional bonds with AI.
Teens with psychological disorders may turn to AI for emotional support, but these systems are not trained clinicians and can worsen rumination, experts note.
Guidance and safety measures
The American Psychological Association recommends that parents explore AI with their teens, set boundaries collaboratively and discuss how AI works.
Parents should watch for signs that AI use interferes with sleep, relationships, schoolwork or emotional health. In cases of self-harm, depression, or other crises, professional mental health support is essential.
The New York Times argues that AI chatbots should include age-appropriate safety standards, privacy protections and accountability measures, with the potential for human oversight or therapeutic integration, rather than outright bans, to safely support teenagers’ mental health.