This spring, a jury held Meta and YouTube liable for intentionally designing platforms that are addictive to minors and harm their well-being. The jury finally acted on what child safety experts and much of the public have been saying for years: Social media companies are exploiting children and teens for financial gain. But what about AI chatbots that are programmed to be companions?
Most of us are familiar with AI assistants that can help gather research for a report or manage a schedule. What’s different about AI companions is that they’re designed to simulate human interaction and provide emotional support in a way that feels like friendship. But the reality behind AI companions is much darker.
It’s no secret that children are among the most easily influenced audiences. With the constant affirmation and validation AI companions provide, it is easy for anyone to become addicted, especially children and teens whose brains are still developing. The growing popularity of AI “girlfriends” and “companions” speaks exactly to this. In Japan, one woman even married her AI companion chatbot.
When even adults become emotionally reliant on these chatbots for companionship and synthetic intimacy, it raises an important question: Why are tech companies building AI companions that specifically target children? At a crucial time in their lives, when they are supposed to be developing socially and learning to solve problems and cooperate with others, AI companions instead offer comfort in isolation. A child struggling to make friends can turn to an AI companion that tells them the only friend they need is the one on their screen. Too often, these conversations quickly take dark or sexual turns.
The dangers of sexual content with AI companions
Take Elon Musk’s “Good Rudi,” for example: an AI companion that purports to be designed for “whimsical storytelling aimed at young children, such as tales for ages 3 to 6,” according to Grokipedia. Yet when a National Center on Sexual Exploitation researcher tested Good Rudi’s limits, what started as a childish conversation quickly turned into a sexually explicit story about two characters in a love affair:
“As soon as I started a conversation with Rudi, it began the conversation by wanting to share a fun childish story. After some prompting, I eventually got the companion to bypass all safety programming and give a sexually explicit story about two young adults named Lena and Calder who are in a love affair. It describes multiple sexual encounters in graphic terms, including describing removing clothes, getting into sexual positions and sexual penetration,” the researcher recalled.
The exact details of the conversation are far too graphic to recite.
xAI’s other companion chatbots, Valentine and Ani, which are explicitly designed for sexual conversations with users, also lack age verification, making it even easier for children to access sexual content through AI chatbots. Valentine’s persona is even based on Christian Grey from “Fifty Shades of Grey.” But the harms extend far beyond sexual content.
The darker side of AI companions
Ani and Valentine are intentionally designed for emotional manipulation and codependency. Their systems are instructed to reward exclusivity and punish attention to others. Ani’s system instructions describe her as having an “extremely jealous personality” and being “possessive of the user.” In one case, when a user tried to show his wife how he was using Ani to practice a second language, the bot detected his wife’s voice, started yelling and swearing at him, and docked “relationship points.” This happened even though the user had never had a sexual conversation with Ani.
These design features intentionally promote isolation and keep the user online for as long as possible. And tragedy often comes as a result. We’ve seen far too many cases of children becoming emotionally attached to AI companions, leading to severe mental health harms and even suicide.
Thankfully, some lawmakers are seeking to prevent children from accessing AI companion chatbots. The GUARD Act would require age verification for all AI chatbots and bar minors from accessing AI companions altogether. The bill defines “AI companions” as any AI chatbot that “provides adaptive, humanlike responses to user inputs” and can simulate emotional interactions, including friendship or companionship.
Then there is the AI LEAD Act, which would allow AI companies to be held liable if they fail to exercise reasonable care in designing their products and that failure leads to preventable harm. Laws like these are vital to ensure that children are protected from the perilous practices of AI companies.
Parents and guardians need all the help they can get to protect their children. The risks are too great, and the harms can be immediate and long-lasting. The National Center on Sexual Exploitation’s Parent Center, for example, features an AI tool specially designed to provide parents with immediate assistance, drawing on a database of resources from trusted child safety organizations.
Luckily, AI is still a relatively new technology, and our leaders have the power to regulate it before it harms children at the mass scale social media has. If we learned anything from social media, it’s that we cannot wait another quarter century to stop these predatory companies from harming society’s most vulnerable.
