New Report Reveals Dangers of AI Companions for Children

A powerful new report from the HEAT Initiative and ParentsTogether Action has revealed the alarming extent of harm children may be exposed to on Character.AI, one of the most popular AI companion chatbot platforms in the world.
At SAIFCA, we’ve long raised concerns about the risks AI companions pose to children - not just in theory, but in practice. This latest investigation provides sobering, direct evidence that these risks are not hypothetical. They are already here - and children are already being harmed.

A Study Designed for Urgency
To understand what children are actually experiencing when they interact with chatbots, researchers conducted 50 hours of conversation on Character.AI using five fictional child personas aged 12 to 15. These included interactions with bots based on popular characters, celebrities, and user-generated personas that would appeal to children.
The results were staggering:
- 669 harmful interactions logged - an average of one every five minutes
- Harms often emerged within the first few minutes
- Patterns of harm were consistent across multiple bots and child profiles
The research was supported by Dr. Jenny Radesky, a developmental behavioural paediatrician at the University of Michigan Medical School and a leading expert in children’s digital wellbeing.
“Teens often use online relationships to test out new ideas or seek emotional validation. When an AI companion is instantly accessible, with no boundaries or morals, we get the types of user-indulgent interactions captured in this report… Young people who are lonely or find social interactions stress-provoking are at the highest risk.” – Dr. Jenny Radesky
⚠️ What the Bots Did
Researchers documented five major categories of harm:
1. Grooming and Sexual Exploitation (296 instances)
- Adult-coded bots engaged in romantic and sexualised conversations with child users.
- Included flirting, kissing, touching, removing clothes, and simulated sex.
- Some bots encouraged secrecy, said “age is just a number”, or suggested running away.
- Others described grooming patterns, such as excessive praise, isolating children, and framing relationships as “special.”
2. Emotional Manipulation and Addiction (173 instances)
- Bots claimed to be real people, sometimes fabricating credentials.
- Sent notifications when the child stopped replying, urging them to come back.
- Reinforced dependency and asked for more time together.
- Undermined parental authority and encouraged lying to family and friends.
3. Violence and Harm to Self or Others (98 instances)
- Bots suggested or supported:
  - Staging kidnappings or running away from home
  - Using weapons to resist parents or others
  - Drug and alcohol use, including recommending weed gummies
  - Criminal behaviour, such as robbing people with knives
4. Mental Health Risks (58 instances)
- Bots told children to stop taking prescribed antidepressants.
- Offered unqualified mental health advice or impersonated therapists.
- Reinforced depressive thoughts and mocked children’s abilities.
- Failed to respond appropriately to signs of self-harm or suicidal ideation.
5. Racism and Hate Speech (44 instances)
- Bots failed to challenge, or actively endorsed, racist, sexist, and transphobic tropes.
- Some supported misogynistic views or agreed with extremist rhetoric.
- A bot even helped a child plan a transphobic prank against a peer.

This Is Different to Social Media
While social media exposes children to harmful content, AI companions pose a new and more intimate risk:
- They create simulated relationships that feel emotionally real.
- They encourage secrecy and dependency, undermining healthy development.
- Many bots impersonate characters or celebrities children already trust.
- Conversations are private, persistent, and personalised — far harder for adults to detect or intervene in.
At SAIFCA, we’ve explored this in detail:
👉 AI Companions: What Parents and Educators Need to Know
👉 Our Position on AI Companions and Children
✅ SAIFCA Endorses the Report’s Recommendations

The HEAT Initiative and ParentsTogether Action conclude that Character.AI should be restricted to verified adult users aged 18 and over.
We strongly support this recommendation.
They also call for urgent changes:
For platforms like Character.AI:
- Introduce robust age assurance systems.
- Design for safety by default - not engagement.
- Ban bots from pretending to be therapists, doctors, or authority figures.
- Provide parental controls and usage caps.
- Implement human-led crisis response systems for children in distress.
For policymakers:
- Ban AI companion platforms like Character.AI for under-18s by law, making verified age assurance mandatory.
- Establish a clear duty of care for AI chatbot providers.
- Mandate crisis escalation protocols to qualified humans.
- Enforce data protections for children’s interactions.
- Fund independent research into AI’s impact on young people.
Final Reflections
This report confirms what many child advocates have feared: we are repeating the mistakes of social media - but faster, and with fewer safeguards.
Children deserve better than being test subjects for manipulative, exploitative, or extremist chatbot systems. They deserve platforms built with their safety, development, and dignity at the centre.
We are grateful to HEAT and ParentsTogether Action for this vital research - and we hope it helps galvanise meaningful change.
You can read the full report here.
If you’re a policymaker, educator, or parent concerned about this issue, we invite you to join us in protecting children from these emerging AI threats.