The Safe AI for Children Alliance Newsletter – January: The End of Assumed Image Safety

Welcome to SAIFCA’s January newsletter!

We’re so glad to have you with us on our mission to build a safer AI future for children – one that protects them from harm and ensures technology serves children’s wellbeing, not engagement metrics.

📌 In this edition:

  • A reminder of SAIFCA’s three Non-Negotiables for children’s AI safety
  • New evidence revealing the scale of AI-generated child sexual abuse imagery
  • Why the barrier to image misuse has collapsed – and what this means for children
  • Practical guidance for parents and schools to reduce risks now, including new recommendations
  • Two expert talks on AI chatbots and emotional dependency
  • How you can support SAIFCA’s independent work


At SAIFCA, our work is structured around three fundamental Non-Negotiables for children’s AI safety.

🛡️ The SAIFCA Non-Negotiables

Non-Negotiable 1: AI must never be capable of creating sexualised images of children

Non-Negotiable 2: AI must never be designed to make children emotionally dependent

Non-Negotiable 3: AI must never encourage children to harm themselves

Much of January has been focused on intense work behind the scenes – meeting with stakeholders, technologists, and policy makers – to ensure that progress on these protections is durable, enforceable, and centred on children’s real-world safety.


📍 Spotlight on Non-Negotiable 1

AI must never be capable of creating sexualised images of children

This month, investigations by the Internet Watch Foundation revealed that in 2025 there was a 🔺 26,362% increase 🔺 in photo-realistic AI-generated child sexual abuse material, often involving real and recognisable child victims.

This represents not just an increase in scale, but a fundamental shift in how easily children can be targeted.

As AI capabilities advance, criminals can now create this material with minimal technical knowledge, dramatically lowering the barrier to abuse.

In January, independent investigations uncovered sexualised images of girls that appear to have been generated using Grok, the AI tool accessible via the social media platform X. The Center for Countering Digital Hate estimates that Grok has been used to generate approximately 3 million sexualised images, including 23,000 that appear to depict children.

While we continue to push governments for Safe by Design legislation, parents and educators need practical guidance to reduce risks now, while such tools remain widely available.


What Parents Can Do

🟩 Limit photo sharing

Encourage children not to post photos in public online spaces, including social media. Once images are public, they can be accessed and misused by anyone.

🟩 Discuss consent and impact

Help children understand why images should never be edited or shared without permission, how devastating the consequences can be for the person targeted, and that creating or possessing sexualised deepfakes of children is a serious criminal offence in the UK and many other countries.

🟩 Model caution online

Adults should avoid posting identifiable photos of children publicly.

🟩 Reassure and protect

Make sure children know that if they ever see a sexualised deepfake, or if one is created of them, they can come to you without fear of losing devices or getting into trouble. Shame and fear are often the biggest barriers to reporting.

🟩 Build deepfake awareness

Help children understand that images they encounter online, including sexualised ones, may be AI-generated or AI-modified.


For Schools

🟢 Schools should have a clear and simple deepfake response plan for incidents involving students or staff. Further guidance is included in our Full Guide to AI Risks to Children (below), and we are currently developing more detailed response advice.

🟢 Until recently, we advised schools to carefully assess whether posting identifiable photos of children online remained appropriate, given the rapid evolution of AI tools.

🛑 New Recommendation for Schools

We now recommend that schools transition away from posting identifiable photos of students on public-facing websites and social media.

The bar to misuse has dropped dramatically, while the cost to a child remains devastatingly high. We suggest a move towards action shots (where faces are not clearly identifiable), group silhouettes, or password-protected galleries for parents.

You can read more about our new guidance here:


AI Risks Guide for Parents & Educators

Our full guide to AI risks to children remains available for free on the SAIFCA website.

Please help us extend the reach of the guide and protect more children by sharing it within your networks and on social media.


This month we have two recommendations that focus on the significant risks AI chatbots pose to children.

🎥 Children and Screens – “Ask the Experts”

This panel features four recognised experts, including SAIFCA Director Tara Steele, discussing the risks associated with AI companions and chatbots, and how parents and educators can better protect children.

🎥 Center for Humane Technology – Dr Zak Stein

A clear explanation of how emotionally engaging AI chatbots can foster dependency, and why this poses serious developmental and safeguarding risks.


Please Support Our Work

Over the coming month, SAIFCA will be represented in multiple global forums as we continue to raise awareness of AI-related risks to children.

Our work will be shared with law enforcement, youth organisations, and the International Association for Safe and Ethical AI at UNESCO House in Paris, among others.

Alongside this, we will continue working to advance understanding and action among policy makers and governments.

SAIFCA remains free of financial influence from “big tech”.

If you would like to support our work, please consider donating – your support enables us to continue this work independently and with integrity.

Thank you for being part of this effort to protect children in an era of rapidly advancing AI.

Warm wishes,

The SAIFCA Team


If you found this newsletter valuable, please share it...

👉 and invite someone you know to subscribe!

SAIFCA is growing fast, and we're ready to take things to the next level. If you or your organisation is interested in supporting us with funding, please reach out to tara@safeaiforchildren.org.
We're especially keen to connect with aligned funders. Please check our mission page to ensure your values resonate with ours. Thank you.

Sources referenced in this article:

  • Internet Watch Foundation statistics
  • Center for Countering Digital Hate statistics