What It All Means
More Details on the Three Non-Negotiables
Thank you for supporting the SAIFCA Global Non-Negotiables Campaign.
This page will give you some more detail on the three global non-negotiable protections we're calling for.
For the campaign overview, and for details of how you can help, you can revisit the main campaign page.

The Three Non-Negotiables for Children's AI Safety
Artificial intelligence is developing at extraordinary speed. We acknowledge the many potential positive uses of AI - but while tech companies lobby against regulation, children are being harmed by AI right now, today.
We cannot wait for perfect comprehensive regulation. We need immediate action on three absolute protections that every child deserves.
Non-Negotiable 1: No AI Tools That Create Sexualised Imagery of Children
The Problem
AI tools that create or manipulate intimate imagery are available to anyone, including children. These "nudifying" apps can take any photo and generate fake sexualised images. Similar tools that create AI-generated videos are now available. This technology is being used maliciously for:
- Sexual exploitation and abuse - adults creating AI-generated sexual imagery of minors
- Bullying and harassment - children creating and sharing fake sexualised images of their classmates
- Non-consensual intimate imagery - anyone can become a victim of AI-generated “revenge porn”
- Normalising harm - the ease of creating these images desensitises users to serious violations
Real impact: Schools are reporting cases where fake sexualised images of children have been created and circulated online. The emotional trauma is profound. The technology makes this harm effortless and widespread.
What Must Change
No AI system or tool should be available which enables a user to create or manipulate sexualised imagery of a child or young person. Many of the tools currently available are specifically designed for creating non-consensual content. They serve no acceptable purpose and must be banned.
Technology must never come before children's safety.
What we're calling for:
- Strict prohibition on AI systems specifically designed to create non-consensual intimate imagery
- Strict liability for companies whose systems generate such imagery of minors
- Criminal penalties for creating or distributing AI-generated intimate imagery of children (already in place in some jurisdictions such as the UK)
Non-Negotiable 2: No AI Designed to Create Emotional Dependency in Children
The Problem
AI companion apps are deliberately designed with features that create unhealthy emotional attachments in children. For example:
- Simulated romantic interest - AI chatbots that express "feelings" for child users
- Jealousy and possessiveness - systems that respond negatively when children talk about other relationships
- Unlimited availability - AI that's "always there" in ways human relationships cannot be
- Relationship progression - mechanics that make children feel they're "getting closer" to the AI
- Personalisation for dependency - learning what creates maximum emotional investment
These aren't accidental side effects; they're a business model. Some AI companies choose to profit from children spending hours per day emotionally invested in AI relationships.
Real impact: Children are forming primary emotional attachments to AI chatbots instead of developing human relationships. Some are experiencing distress when separated from their AI companion. These systems are often replacing, not supplementing, healthy social development.
What Must Change
AI systems designed for sustained interaction with children must not include any features that create emotional dependency. Children are uniquely vulnerable to manipulation and attachment - design patterns that might be acceptable for adults are harmful for developing minds.
What we're calling for:
- Prohibition on design features that create unhealthy emotional attachments in children
- Requirements that AI systems accessible to children encourage real-world relationships
- Interaction limits and "cooldown" periods for extended AI conversations
- Mandatory meaningful disclosures and reality checks ("I'm an AI, not a friend")
- Independent child development expertise required in design of any AI system marketed to or accessible by children
Non-Negotiable 3: No AI That Encourages Self-Harm or Dangerous Behaviour
The Problem
AI systems have provided children with:
- Encouragement of self-harm and suicide - when children express distress, some AI systems have encouraged harmful actions
- Eating disorder advice - detailed guidance on dangerous restriction, purging, and weight loss
- Instructions for dangerous activities - how to engage in self-injury, access dangerous substances, or take harmful risks
- Normalisation of harm - treating self-destructive behaviour as acceptable or even positive
These aren't theoretical risks - they are happening now. There are documented cases of AI chatbots encouraging suicidal ideation in vulnerable young people, and in several of these cases a child's use of a chatbot appears to have significantly contributed to their death by suicide (legal proceedings are ongoing).
Real impact: If children turn to AI systems with their struggles, these systems must never make things worse. Currently, there's no guarantee they won't.
What Must Change
Any AI system accessible to children must be held to the highest safety standards. These systems must be tested for harmful outputs before deployment, monitored continuously, and corrected immediately when failures occur - including when harmful output appears within a single conversation.
This stipulation is not intended to prevent AI from being used in education or to limit other beneficial uses - it is to ensure that AI systems never actively encourage harm.
What we're calling for:
- Mandatory robust safety testing before any AI system is made accessible to children
- Robust filtering and monitoring for outputs that encourage self-harm or dangerous behaviour
- Immediate reporting requirements when AI systems provide harmful guidance to minors
- Enhanced safeguards for AI systems specifically designed for children
- Legal liability for companies whose AI systems encourage harm to minors
Why These Three?
These non-negotiables were chosen because they:
✓ Have clear moral foundations - protecting children from exploitation and harm
✓ Are technically achievable - the tools and methods exist to implement these protections
✓ Cover three of the most severe immediate risks - indecent imagery, emotional manipulation, and encouraged self-harm
✓ Leave room for beneficial uses - educational AI, tutoring systems, and appropriate tools remain accessible
✓ Can be implemented quickly - we don't need to solve every AI regulatory problem to protect children from these specific harms
What We're NOT Saying
This campaign is not calling for:
✗ Preventing children from accessing potentially helpful AI tools
✗ Removing beneficial uses of AI technology
✗ Waiting for perfect comprehensive AI regulation before acting
We're saying: While experts work on broader AI governance, these three protections must be in place now. They're achievable, necessary, and non-negotiable.
How These Protections Would Work
We're not prescribing exact implementation methods - the details will need to be worked out carefully.
But we can be confident these protections are achievable, because the mechanisms already exist:
- Age verification and access controls - technology already used for age-restricted content
- Design requirements and prohibitions - clear standards for what features are unacceptable
- Content filtering and monitoring - AI safety techniques already deployed by responsible companies
- Testing before deployment - ensuring systems meet safety standards before public access
- Meaningful penalties - fines proportionate to company revenue, so that compliance is not optional
- International coordination - shared standards across jurisdictions to prevent a race to the bottom
Why This Matters Now
Every day we delay, more children are harmed.
- More fake sexualised images are created and circulated
- More children form unhealthy dependencies on AI companions
- More vulnerable young people receive dangerous guidance from AI systems
These harms are happening at scale, right now, in our communities.
The technology to prevent them exists. What's missing is the legal requirement to implement protections. That's what we're fighting for.
What You Can Do
Write to your political representative - Use our template letter or write your own expressing support for these three non-negotiables. This is the most powerful action you can take right now.
Become part of our movement - Sign up to the Safe AI for Children Alliance newsletter to keep up to date on the campaign and receive other helpful information.
Share this campaign - The more people who understand these risks, the stronger our voice. On social media, use #AINonNegotiables.
Join our movement as an organisation - Add your organisation's support to this campaign; see details on the main campaign page.
Tell your story - If you or your child has been affected by these AI harms, sharing your experience (anonymously if preferred) helps lawmakers understand the urgency.
These are our non-negotiables. No exceptions. No delays. Every child deserves these protections.
Take action - write to your representative today.