Achieving the Three Non-Negotiables: A Policy Framework for Protecting Children from AI Harm

The Global AI Non-Negotiables for Children Initiative defines three absolute protections to keep children safe from AI harm. This policy framework sets out a four-layer model for governments and regulators to use as a strategic blueprint for implementing enforceable global standards.

Introduction

This policy framework defines the essential protections required to safeguard children from AI-related harms. It sets out the Three Non-Negotiables and a four-layer approach covering testing, design standards, monitoring, and age assurance.

It is designed as a strategic blueprint for legislators, regulators, and policy leaders to begin developing enforceable global standards for children’s AI safety.

The framework forms part of SAIFCA's Global AI Non-Negotiables for Children Initiative - a coordinated international effort to ensure that technological progress never comes at the expense of children’s safety, dignity, or future.

If you'd like to read more about the campaign (rather than our policy framework), you can return to the main campaign page here.

The Three Non-Negotiables

AI must never create fake sexualised images of children.
AI must never be designed to make children emotionally dependent.
AI must never encourage children to harm themselves.

These principles represent the minimum baseline of civilisation in the age of AI.

About This Framework

This document defines both the non-negotiable protections and the four-layer architecture required to uphold them. It outlines practical pathways for testing, regulation, and accountability across the AI ecosystem - from foundation models to deployment and access controls.

It provides a foundation for policymakers to develop the detailed technical and legal standards needed to bring these safeguards into force.

You can read a short summary below or download the policy pack here.


Summary of SAIFCA's Policy Framework for Protecting Children from AI Harm

AI is exposing children to new dangers: fake sexualised imagery, manipulative 'AI companions', and systems that encourage self-harm.

The Safe AI for Children Alliance (SAIFCA) proposes three universal safeguards:

AI must never create fake sexualised images of children.
AI must never be designed to make children emotionally dependent.
AI must never encourage children to harm themselves.

These are the minimum baseline of civilisation in the age of AI. They are technically achievable and must be implemented now.

This summary presents the core policy framework – a four-layer model defining the essential safeguards and providing a blueprint for legislators and regulators to develop the technical and legal standards needed to implement them.

A Four-Layer Approach to Safety

Protecting children requires multiple safeguards working together:

  1. Foundation Model Testing and Standards: Mandatory, independent safety testing of high-capability models by certified third parties before release.
  2. Application-Level Requirements: Bans on manipulative design (explicitly targeting features that create emotional dependency), mandatory filtering, and age-appropriate transparency.
  3. Deployment Monitoring and Incident Response: Continuous oversight, rapid takedown, strong whistleblower protections, and mandatory user reporting directly linked to regulators.
  4. Age Assurance and Access Controls: Privacy-preserving age assurance, parental controls, and clear labelling.
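
To make the layered model concrete, here is a minimal illustrative sketch in Python. All names are hypothetical: the framework prescribes legal duties, not software interfaces. It shows how a regulator-facing audit tool might treat the four layers as a checklist that an AI system must satisfy in full:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical audit record: one flag per layer of the framework."""
    independently_tested: bool        # Layer 1: certified third-party testing
    no_dependency_design: bool        # Layer 2: no emotional-dependency features
    regulator_linked_reporting: bool  # Layer 3: monitoring and incident response
    age_assurance_in_place: bool      # Layer 4: age assurance and access controls

def missing_safeguards(profile: AISystemProfile) -> list[str]:
    """All four layers must hold; return the ones that do not."""
    checks = {
        "Layer 1: foundation model testing": profile.independently_tested,
        "Layer 2: application-level requirements": profile.no_dependency_design,
        "Layer 3: deployment monitoring": profile.regulator_linked_reporting,
        "Layer 4: age assurance": profile.age_assurance_in_place,
    }
    return [layer for layer, passed in checks.items() if not passed]

# Example: a system with strong testing but no regulator-linked reporting
# fails the audit on Layer 3 alone.
profile = AISystemProfile(True, True, False, True)
print(missing_safeguards(profile))  # ['Layer 3: deployment monitoring']
```

The point of the sketch is the conjunction: compliance on three layers does not compensate for failure on the fourth.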

The proposed measures are comprehensive because the risks to children are severe and novel. Just as aviation safety required building new regulatory institutions, AI safety for children requires a systematic approach.
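
To illustrate what Layer 4's privacy-preserving age assurance could mean in practice, the sketch below uses hypothetical names and a deliberately simplified scheme (real deployments would rely on accredited verifiers and public-key credentials). It shows an age check that discloses a single yes/no attribute rather than a name or date of birth:

```python
import hashlib
import hmac
import json
import time

# Simplified demo key; a real scheme would use asymmetric signatures so the
# AI service never holds the verifier's signing key.
VERIFIER_KEY = b"demo-key-held-by-accredited-age-verifier"

def issue_age_token(over_threshold: bool) -> dict:
    """The verifier checks age once, then issues a token asserting only
    'over the threshold' -- no identity data is included."""
    claim = {"over_threshold": over_threshold, "issued": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def service_accepts(token: dict) -> bool:
    """The AI service verifies the token's integrity and reads one boolean
    attribute; it learns nothing else about the user."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_threshold"]

print(service_accepts(issue_age_token(True)))   # True
print(service_accepts(issue_age_token(False)))  # False
```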

Policy Priorities (Act Now)

Governments can act now by:

  • Mandating independent testing of foundation models
  • Requiring transparency and safety reporting
  • Establishing liability for child-related harms
  • Introducing robust whistleblower protections
  • Mandating regulator-linked reporting mechanisms for parents and children
  • Coordinating internationally on enforcement and standards


History shows that safety regulation ultimately strengthens innovation. AI can follow the same path.

The question is not whether we can implement these protections – the technology and policy tools exist. The question is whether we will prioritise children’s safety over unrestricted AI deployment.

The answer must be yes.

Next Steps

For media or policy enquiries, or to share feedback on the framework, please contact info@safeaiforchildren.org.

You can also use our Take Action page to write to your political representative in support of the Three Non-Negotiables.

There is also a short Briefing Note for policymakers here, and downloadable information about The Safe AI for Children Alliance here.