Why SAIFCA Is Updating Our Guidance on Schools Sharing Children’s Photos Online

We're recommending that schools transition away from posting identifiable photos of students on public-facing websites and social media.

At SAIFCA, we aim to ensure that safeguarding advice evolves when the risk environment changes.

Until recently, we advised schools to weigh the risks and benefits carefully when deciding whether to share identifiable photos of children on public-facing websites and social media, while highlighting that the barrier to accessing technology capable of sexualising images had already fallen.

That guidance reflected a balance between recognising the rapid evolution of AI technologies and the importance of celebrating children’s achievements, as well as the role such images can play in building school community and trust.

We are now updating that guidance:

SAIFCA recommends that schools transition away from posting identifiable photos of students on public-facing websites and social media.

This change does not reflect a shift in values. It reflects a material shift in risk.


What Has Not Changed

Schools remain places of care, pride, and community. When schools share children’s achievements, creativity, and daily life online, they do so with genuine best intentions and with parental consent.

We recognise that photographs can help to:

  • Celebrate learning and progress
  • Build connection with families
  • Reflect school culture and values

The value of these practices has not changed.

The risk environment in which they occur has.


What Has Changed: The Risk Landscape Has Shifted

Advances in AI technology have fundamentally altered how images can be misused.

The Internet Watch Foundation has reported a 26,362% increase in reports of photo-realistic AI-generated child sexual abuse material in 2025.

This reflects the rapid escalation in realism, accessibility, and ease with which such material can now be produced using widely available AI tools. In many cases, real and recognisable child victims appear to be involved.

As AI capabilities advance, criminals can now create this material with minimal technical knowledge, dramatically lowering the barrier to abuse.

In January, sexualised images of girls were discovered that appeared to have been generated using Grok, the AI tool accessible via the social media platform X. The Center for Countering Digital Hate estimates that Grok has been used to generate approximately 3 million sexualised images, including 23,000 that appear to depict children.

Criminals are also using publicly available photos to create realistic fake sexual images of specific children, which are then used to extort victims through threats of distribution to family members, peers, or online networks.

Where misuse was once limited by technical knowledge, time, and effort, it is now fast, scalable, and accessible.


Safeguarding Is About Foreseeability, Not Intent

Our guidance on the use of identifiable photos of children online is grounded in whether harm is foreseeable.

Schools already apply this principle across safeguarding practice, including physical site security, online access controls, supervision ratios, and data protection.

As AI capabilities advance, the foreseeable misuse of publicly available child images has increased significantly. In safeguarding terms, this changes the duty of care.

Even where parental consent is given, and even where images are shared for positive reasons, consent does not prevent misuse by third parties once images are publicly accessible.


Updated Guidance: What SAIFCA Is Recommending

In light of these developments, SAIFCA now recommends that schools:

  • Transition away from posting identifiable photos of students on public-facing websites and social media
  • Review existing photo-sharing practices through the lens of AI-enabled misuse
  • Treat publicly accessible child images as a safeguarding issue, not solely a communications decision

This guidance is intended to reduce risk exposure while remaining proportionate and practical.


Practical Alternatives for Schools

We recognise that schools need workable, realistic alternatives, and many are already moving in this direction.

Options include:

  • Action shots where faces are not clearly identifiable
  • Silhouettes or rear-facing images
  • Password-protected galleries accessible only to parents and carers
  • Secure internal platforms rather than public social media

These approaches allow schools to share school life without unnecessarily increasing risk.


Looking Ahead

We will continue to support schools as they adapt to a rapidly changing technological environment in which children face new and emerging risks.

Schools are already navigating complex safeguarding responsibilities under pressure.

Our role at SAIFCA is to provide clear, evidence-informed guidance that prioritises children’s safety while remaining practical and fair.

As AI technology continues to evolve, safeguarding practices must evolve with it. At the same time, we continue to advance our work towards sensible regulation of AI technologies, so that parents and schools are not left managing risks created by systems that should not be publicly available in the first place.

Learn more about AI risks to children in our Full Guide for Parents and Educators

Sources referenced in this article:
IWF Statistics
Center for Countering Digital Hate Statistics

Please support our work...

SAIFCA remains free of financial influence from technology companies. If you would like to support our work, please consider donating. Your support enables us to continue this work independently and with integrity.

Thank you for being part of this effort to protect children in an era of rapidly advancing AI.
