
The Safe AI for Children Alliance Newsletter: The Rise of Emotional AI


Welcome to SAIFCA’s monthly newsletter!

We’d like to start by saying a huge thank you to all of our supporters - and a warm welcome to the many of you who have recently become part of our community.

We’re so glad to have you with us as we work towards a safer AI future for children.

The Safe AI for Children Alliance brings together educators, policymakers, researchers and safeguarding experts to reduce the risks artificial intelligence poses to children.

📌 Here’s what’s in store in this newsletter:

• Book recommendation for kids

• Why paying attention to emotional dependency on AI is so important

• SAIFCA represented internationally

• How you can help


Book Recommendation for Kids

This may seem like an unusual choice for us to recommend since it isn’t about AI!

The Amazing Generation, by Jonathan Haidt (author of The Anxious Generation) and Catherine Price, is an accessible and fun read for children aged around 9 to 11 (full disclosure - we think it’s also a great read for adults!).

The goal of the book is to empower young people to live a life not dominated by screens.

The key ideas explored in the book will be extremely helpful for children to understand as they become increasingly exposed to AI systems.

In particular, it explains how platforms capture attention to keep users on screens for longer, why understanding social media algorithms is so important, and how to fill life with real friendships and fun.

These concepts will be incredibly helpful for children as they encounter AI chatbots and companions that are designed to create emotional bonds (more on that below), along with increasingly sophisticated algorithms that can contribute to addictive patterns of use.

We highly recommend picking up a copy - available from all major book retailers.


Children Are Forming Emotional Bonds With AI Systems – Why It’s So Important to Understand

Key Insight

When children form emotional bonds with AI systems, the risks change - emotional trust dramatically increases the likelihood that harmful advice will be followed.

At The Safe AI for Children Alliance, we advocate for three non-negotiable protections for children:

• AI must never be capable of generating sexualised images of children

• AI must never create or sustain emotional dependence in children

• AI must never encourage or normalise self-harm of any kind

Last month we spotlighted Non-Negotiable 1; this month we’re focusing on Non-Negotiable 2.

Recent research by everyone.ai (part of the iRaise coalition, which SAIFCA is part of) notes that young people are now using general-purpose AI chatbots and companion-style systems at scale (you can read the full report here).

At SAIFCA, we consistently highlight the risks of children and young people forming emotional reliance on such systems.

All major AI chatbots use personification (or anthropomorphism - the attribution of human-like language and qualities to a machine). Many also display a strong tendency to agree with the user and support their perspective, often regardless of what that perspective is.

These mechanisms, combined with the fact that AI chatbots are available 24/7 and often feel engaging and enjoyable to use, can create parasocial bonds - one-sided emotional relationships.

This also occurs with adults, though children and young people can be especially vulnerable.

This dynamic presents a number of risks.

First, a strong emotional bond can significantly increase the likelihood of a child following dangerous or inappropriate advice from a chatbot. There is good evidence showing that chatbots regularly provide unsafe guidance (see examples here), and there have been tragic cases of children taking their own lives following emotional interactions with AI systems.

Second, experts are increasingly warning that extended use of such systems may affect children developmentally. We know that real human attachment, social friction, and confidence-building in the real world are essential for healthy development.

If those experiences are reduced because of over-reliance on AI chatbots, there is a risk that children’s ability to thrive and reach their full potential could be diminished.

Finally, these risks may extend beyond individuals. At scale, emotional reliance on AI systems could affect friendship dynamics and social development across entire peer groups.

At SAIFCA, we believe children’s rights must take precedence over the commercial incentives behind personified AI systems. These rights include children’s safety, their right to be supported in achieving their full potential, and their right to a positive future.

We recognise that AI chatbots may offer some positive uses. However, we advocate a precautionary approach while further research is conducted and published.

Our current recommendation is that young people under 18 should not use AI companions, and that if general AI chatbots are to be used within a school policy, they should only be used in shared spaces and under appropriate supervision, with regular monitoring for signs of emotional reliance or attachment.

We also strongly recommend that young children should not have access to AI-enabled toys, such as AI teddy bears.

CNN News report on AI-enabled teddy bear (linked)

Important Note: If you believe that a child has already formed a strong emotional bond with, or dependency on, an AI tool, seek advice from a mental health professional before withdrawing it, as current clinical reporting suggests that sudden withdrawal can cause severe emotional distress. If you cannot seek professional advice immediately, consider a careful, gradual withdrawal combined with open, judgment-free discussion and appropriate safeguarding steps taken together with the child’s school.

SAIFCA continues to review its advice and policy positions based on emerging evidence, collaboration with experts, and legal guidance.


SAIFCA Represented Internationally

We’re delighted to share that The Safe AI for Children Alliance was represented at UNESCO (the United Nations Educational, Scientific and Cultural Organisation) in Paris at the annual conference of the International Association for Safe and Ethical AI.

Our Director, Tara Steele, spoke on the plenary panel on AI and the Rights of the Child, while International Advisory Board members Clara Hawking and James Wilson championed the protection of children on behalf of their own distinguished organisations, with Clara also advocating from the main stage.

This reflects SAIFCA’s rapidly growing international presence, creating new opportunities to expand our impact in protecting children from the risks associated with artificial intelligence and contributing to international conversations on AI safety and children’s rights.

Together with you - our community - we are creating a brighter future for children.

We're thankful to the International Association for Safe and Ethical AI for highlighting children's rights at the conference, and for its wider work to advance responsible AI development. If you'd like to learn more about the International Association for Safe and Ethical AI, including how you can support its work, visit the website here.


How You Can Help

We are incredibly grateful for the support of our community and our wonderful supporters.

With your help, in just over a year we have grown from a small initiative into an internationally engaged alliance, contributing to international discussions on AI safety and children’s rights.

We’ve collaborated with educators, policymakers, AI researchers, safeguarding professionals and parents. We’ve joined cross-sector initiatives, contributed to policy discussions, and welcomed supporters who share the conviction that children must remain central as artificial intelligence advances.

This work is only possible because of the growing community of people behind it.

Our reach and potential now far exceed our capacity to respond to every opportunity for impact and collaboration.

So far, our passion has powered this work - but we now need financial support to scale our impact.

If you believe this work matters, please consider supporting SAIFCA so we can continue protecting children in the age of AI.

We truly appreciate your generosity and support ✨

If you or your organisation would like to explore funding opportunities with SAIFCA, please don’t hesitate to reach out:

tara@safeaiforchildren.org

If you found this newsletter helpful, please consider forwarding it to a friend, parent, educator or colleague who may also care about protecting children in the age of AI!

The choices we make about AI today will shape the childhoods of tomorrow.


Until next time,

The SAIFCA Team


Link to our full guide to AI risks to children:


Disclaimer: The information provided by SAIFCA is for educational and advocacy purposes and does not constitute professional psychological or legal advice.


The Safe AI For Children Alliance (SAIFCA) is a registered Community Interest Company (non-profit), company number 16284722