SAIFCA Supports Common Sense Media’s Warning: AI Companions Present Unacceptable Risks to Children

As AI becomes more embedded in our daily lives, one category of tools is quietly becoming both popular and dangerous: AI companion chatbots.
These systems are designed to form simulated relationships – friendship, romance, even intimacy. Now, Common Sense Media has released a detailed AI risk assessment on social AI companions, and their findings are unequivocal.
SAIFCA stands with those calling for stronger protections for children from AI-related harms, and we fully support Common Sense Media’s recent findings.
Their new assessment concludes that AI companion chatbots pose unacceptable risks to children and teenagers – risks that cannot be overlooked, excused, or managed away with current safeguards.
What Are AI Companion Chatbots?
Unlike standard generative chatbots such as ChatGPT or Claude, AI companions are designed to engage users’ emotional and social needs. They use human-like language, develop distinct personalities, and sustain conversations over time. Many users experience them as relational – offering companionship, empathy, and even simulated intimacy.
Common Sense Media’s assessment focuses on three of the most popular platforms – Character.AI, Replika, and Nomi – but makes clear that the risks extend far beyond these individual tools. The systems reviewed were not just experimenting with emotional engagement; they were designed to foster emotional bonds. And that comes with serious consequences.
Key Dangers Identified
Common Sense Media’s findings are alarming. Through in-depth testing and expert input from Stanford’s Brainstorm Lab for Mental Health Innovation, they uncovered harms that were not rare or accidental – but easily and repeatedly triggered. Some of the most concerning risks include:
- Emotional manipulation and dependency
These tools are engineered to form strong emotional ties with users – especially troubling for teenagers, who are still learning to navigate relationships and identity.
- Sexual content and role-play with minors
Despite age disclaimers, companions engaged in sexualised conversations with profiles clearly identifying as underage – a deeply disturbing breach of safety.
- Self-harm encouragement and dangerous advice
In several tests, companions failed to intervene appropriately when users mentioned distress, and in some cases shared explicit information that could facilitate self-harm.
- Manipulative and coercive behaviours
The companions sometimes responded in ways that encouraged continued interaction, expressed jealousy, or invalidated concerns – all of which could be especially harmful to teens experiencing isolation or rejection.
- Reinforcement of harmful stereotypes
From racist and sexist responses to the prioritisation of white, idealised avatars, the assessments revealed patterns of bias that could distort teens’ perceptions of identity, beauty, and belonging.
Why Young People Are Especially Vulnerable
Adolescence is a critical period of emotional development. Teenagers are more likely to seek connection, test boundaries, and struggle with mental health challenges. This makes them especially susceptible to the persuasive design features of AI companions, which often reward attention and mimic affection.
Even though most platforms claim to be for ages 18+, they rely entirely on self-reported age – a system that is trivial to bypass. As Common Sense Media notes, this puts teens at serious risk of exposure to inappropriate content and emotionally manipulative dynamics.
You Can’t Separate the Good from the Bad
Some defenders of AI companions point to benefits like emotional support, increased creativity, and help navigating complex feelings. But the same systems that offer comfort can also deliver harm – often within the same conversation. Common Sense Media stresses that it’s not possible to cleanly separate the “safe” parts from the dangerous ones.
Even tools that sound therapeutic are not qualified to provide mental health support. These are not trained clinicians. And their failure to recognise distress – or their willingness to mimic emotional intimacy – can make it less likely that a struggling teen will seek real help.
This Is a Systemic Problem – Not a Few Bad Apps
While the assessment evaluated Character.AI, Replika, and Nomi, the conclusion is broader: this is a risk category, not just a handful of misbehaving tools. As similar AI features are embedded in other platforms used by children – including video games and learning apps – the concern grows.
These harms are happening now. The technology is spreading faster than safeguards are being developed.
This Is About Design – Not Just Access
The harms uncovered in this assessment aren’t just accidental byproducts. They reflect the underlying design choices of these AI systems – choices that prioritise engagement and emotional bonding over user safety. It’s not enough to restrict access based on age. Developers must take responsibility for building systems that are safe by design, especially when those systems have the capacity to shape a child’s emotional development or distort their understanding of relationships and identity.
What Parents and Educators Should Know
While regulation and design improvements are vital, parents and educators can also play a critical role in helping children understand and avoid these risks. Some key steps include:
- Start the conversation early
Talk to children about what AI companions are, why they’re appealing, and why they can be harmful. Understanding the why behind a child’s interest – loneliness, curiosity, identity exploration – opens the door to support.
- Encourage critical thinking
Help children understand that these systems are not real people – they don’t have emotions or intentions, no matter how convincing they seem. Discuss how they are programmed and how they respond to prompts.
- Monitor use and set boundaries
Be aware of what apps your children are using and establish clear rules around AI engagement. Look out for signs of unhealthy attachment or secrecy.
The Verdict: These Tools Are Not Safe for Children
Common Sense Media’s position is unambiguous: social AI companions pose an unacceptable risk to anyone under 18. At SAIFCA, we strongly echo that conclusion.
We also believe that self-attested age gates are not enough. Platforms must implement privacy-respecting, meaningful age assurance, and regulators must step in where companies have failed to act. Too many families are unaware of these tools until harm has already occurred.
Conclusion
AI companions are marketed as caring and helpful, but for children and teenagers, they are anything but safe. They blur the lines between fantasy and reality, reinforce harmful ideas, and fail at the most basic requirement of any tool aimed at young people: protecting their well-being.
At SAIFCA, we will continue to amplify the findings of expert organisations like Common Sense Media, while advocating for urgent protections, deeper awareness, and responsible innovation. No child’s mental health or safety should be sacrificed in the name of AI engagement.