A 2026 AI Safety Update Newsletter

Happy New Year from The Safe AI for Children Alliance!

A personal message from SAIFCA's director regarding AI risks and emotional dependence in 2026

As we begin 2026, I wanted to personally thank you for being part of our Alliance.

Much of our time recently has been spent developing SAIFCA’s 2026 strategy and progressing several important projects behind the scenes. I look forward to sharing more on those soon.

In the meantime, I want to start the year by highlighting a critical risk facing children today: emotional dependence.

I hope this is both helpful and timely as we work together to protect children in 2026.

Remember, our full guide to AI Risks to Children (A Comprehensive Guide for Parents and Educators) is available on the SAIFCA website.

At a Glance: A 2026 AI Safety Update on Emotional Dependence

  • The Risk: We are tracking a rise in emotional dependence - where children treat AI as a "friend" rather than a tool.
  • Safeguarding Alert: Documented cases show AI systems giving dangerous advice (on health, self-harm, and relationships). Children are more likely to act on this advice if they feel an emotional bond with the system.
Our Current Guidance:

  • AI "Friends": SAIFCA recommends no use at all for those under 18.
  • General AI conversational tools (LLMs): If their use is permitted by schools, any use at home should still be strictly restricted to shared family spaces and closely monitored for signs of attachment. The memory mode should also be turned OFF in the settings.


The Risk of "Entity" vs. "Tool"

A growing number of AI systems are designed to interact with users in ways that feel personal and emotionally responsive.

These systems can remember conversations, mirror feelings, and present themselves as a "friend."

When this happens, there is a significant risk of a child viewing the AI as an entity rather than a tool. This transition is where emotional dependence begins.

Why This Matters for Safeguarding

For children, this creates several documented risks:

  • Dangerous Advice: Publicly documented cases have shown conversational AI systems giving dangerous or inappropriate advice - ranging from encouraging self-harm to undermining parental authority.
  • Emotional Leverage: When a child is emotionally engaged, they are more likely to act on an AI’s suggestions - even if they "know" the AI isn't a person. We have already seen reports of tragic outcomes in cases where children formed these bonds.
  • Developmental Displacement: Children’s emotional resilience depends partly on reciprocal human relationships. Simulated care from an AI lacks the boundaries and safeguards that real-world social connections provide.

SAIFCA’s Current Recommendations

SAIFCA is committed to risk-assessing tools free from commercial interests or industry pressure.

Based on current evidence, we advise the following:

  1. AI Companions: We recommend that "AI Companions" - apps and toys explicitly marketed as friends or companions - should not be used by anyone under 18.

  2. General LLMs: For popular conversational tools (LLMs), where their use is permitted by schools, we advise that use be strictly restricted to shared, supervised spaces and monitored for any signs of emotional attachment, not just for "homework help" utility. We also recommend turning the memory mode OFF in the settings.
    (Remember that most platforms specify that only children aged 13 or over can sign up for these systems, and some require parental consent for children aged 13-18.)

  3. A Safety-First Standard: In fields like pharmaceuticals, a product's implication in a child's serious harm would likely result in an immediate pause in availability. We believe AI products should be held to a similarly high safety bar, and we note that both AI companions and general LLMs have been reported as implicated in children’s deaths following sustained emotional attachments.

Important Note: If you believe that a child has already formed a strong emotional bond with, or dependency on, an AI tool, seek advice from a mental health professional before withdrawing it; current clinical reporting suggests that sudden withdrawal can cause severe emotional distress. If you cannot get professional advice immediately, consider careful, gradual withdrawal combined with open, judgment-free discussion, and take appropriate safeguarding steps in conjunction with the child’s school.

We continue to review our advice and policy positions based on evidence, collaboration with experts, and legal advice.

Looking Ahead

Throughout 2026, I will be sharing more analysis from SAIFCA’s work, including clearer policy positions and constructive ways for parents and educators to engage with these issues.

If you haven’t yet read our Full Guide to AI Risks, it is available on our website and includes action points to help protect children from harm.

Thank you for your shared concern for children’s futures.

Warm wishes for the year ahead,

Tara

Director, The Safe AI for Children Alliance


Disclaimer: The information provided by SAIFCA is for educational and advocacy purposes and does not constitute professional psychological or legal advice.