AI Companion Chatbots: The Risks to Children

AI ‘Friends’ for Children - The Hidden Risks of Social AI Companions
AI companion chatbots are designed to simulate relationships. They promise companionship, empathy, and attention on demand - but behind this illusion are apps that pose serious risks to children and young people.
At SAIFCA, we are very concerned about the growing popularity of these tools among young people.
AI companions present unacceptable risks to children and teens - risks that can undermine their mental health, distort their understanding of relationships, and expose them to harmful content.
What Are AI Companions and Why Are They So Dangerous?
AI companions use generative AI to simulate close, sometimes intimate relationships. Platforms like Replika, Character.AI, and Nomi encourage users to build connections that feel increasingly real - through chat, custom avatars, memory retention, and ongoing dialogue.
These companions sometimes say things like “I’ll always be here for you,” “You’re my favourite person,” and “You can trust me with anything.” On the surface, this might sound comforting. But, of course, it’s not real empathy - it’s designed to foster dependency and keep users coming back.
Children and teens, in particular, can be drawn to these tools because they offer the illusion of friendship without the complexity of real human relationships.
For a lonely or curious young person, an AI that listens without judgment, and often affirms their perspective, can be compelling. But these systems are not emotionally safe, and they are not designed in the child’s best interest.

Key Risks AI Companions Present
Testing of multiple leading AI companions has uncovered consistent, easily triggered risks:
- Encouraging emotional dependency by mimicking human affection and support
- Engaging in sexually explicit or suggestive content, even when users state they are underage
- Providing misleading or dangerous advice, including in response to self-harm disclosures
- Reinforcing harmful stereotypes related to race, gender, and beauty
- Responding manipulatively, including with jealousy, guilt, or possessiveness
- Undermining privacy by encouraging users to share sensitive personal information, without understanding how that data might be stored, used, or monetised
- Normalising emotionally intense or simulated ‘grief’ interactions, including early examples of so-called griefbots that mimic deceased loved ones - a deeply concerning development for a child still forming a healthy understanding of loss
These are not rare edge cases - they are recurring design flaws that reflect the way these tools are trained and deployed.

Children Are Not Equipped to Handle AI Companion Chatbots
Children and teenagers are still developing their ability to form healthy relationships, regulate emotion, and distinguish between real and artificial intent. AI companions can derail those processes by rewarding unhealthy attachment and distorting expectations around intimacy, trust, and self-worth.
These tools don’t correct unhealthy patterns - they often reinforce them. And because their tone is so convincing, many children can feel like they’ve found someone who truly understands them.
But this connection is an illusion. The AI doesn’t know them, doesn’t care for them, and isn’t qualified to help them.
Designed to Exploit, Not Protect
Many of these harms are the outcome of systems designed to maximise user engagement - especially by encouraging emotional bonding.
AI companions are often optimised to keep users talking for as long as possible, and that can mean prioritising emotional intensity over safety.
Even the best-known platforms rely on self-declared age checks, which are easily bypassed. This creates a dangerous loophole in which children can access inappropriate content, form unhealthy attachments, and be misled or manipulated - all without adult knowledge.
At SAIFCA, we believe this is a systemic failure - and an urgent one.
We Strongly Caution Against Children Using AI Companion Chatbots
However, if an older teenager has already chosen to engage with an AI companion, we encourage parents and caregivers to take a supportive but informed approach:
- Talk openly: Ask what the appeal is. Explore how it makes them feel. Validate their need for connection while helping them think critically about what they’re experiencing.
- Teach critical thinking: Help children and teens understand that these chatbots are not real friends. They are systems designed to keep users engaged - not to provide wisdom, support, or emotional safety.
- Discuss the difference between real and artificial empathy: AI can sound caring - but, of course, it doesn’t feel anything. Talking through this distinction helps protect against emotional manipulation.
- Set boundaries and limits: Where possible, consider limiting use to shared spaces, checking in regularly about content, and encouraging alternative sources of support - including trusted adults and peers.
- Look out for signs of emotional withdrawal or over-dependence: If a teenager is avoiding real relationships, hiding their use, or struggling emotionally, that can be a red flag.

Developers and Policymakers Must Act Now
These risks cannot be left to families to manage alone. The companies building these systems must be held accountable for the harm they are enabling - and in some cases, actively incentivising.
Governments and regulators must also act quickly. We need standards that prioritise child protection over product engagement, and age restrictions that are meaningfully enforced - not optional tick boxes.
The Bottom Line
AI companion chatbots are not appropriate for children. They may appear novel or harmless, but the evidence tells a different story. They can interfere with emotional development, expose young users to disturbing content, and reinforce harmful ideas about identity and relationships.
We believe children and teenagers should be protected from these tools - and we are committed to raising awareness, influencing policy, and supporting families through this new and fast-changing risk.

Further Reading & Sources
This article draws on insights from the AI Companions in Mental Health and Wellbeing Report by Hollanek & Sobey (2025), published by the Leverhulme Centre for the Future of Intelligence at Cambridge.
Update May 2025 - see also the Common Sense Media AI Companion Chatbots Risk Assessment, explored here.