The Safe AI for Children Alliance Newsletter – March: A Turning Point in Accountability
Welcome to SAIFCA’s March newsletter!
We’re so glad to have you with us on our mission to build a safer AI future for children – one that protects them from harm and ensures technology serves children’s wellbeing, not engagement metrics.
This month has brought a potentially important shift in the effort to protect children from harmful digital systems.
Two major US jury verdicts have begun to challenge the long-standing assumption that technology companies can design highly engaging platforms, expose children to foreseeable harm, and avoid meaningful accountability.
In this edition:
📌 A major legal moment for children’s online safety
📌 Why these verdicts matter not only for social media, but for AI
📌 Two important recent developments in SAIFCA’s work on dangerous AI chatbots
📌 A reminder that our full free guide to AI risks to children remains available on the SAIFCA website
A turning point in accountability for platforms that harm children
In New Mexico, a jury ordered Meta to pay $375 million after finding it misled the public about the safety of its platforms for children. One day later in Los Angeles, a jury found Meta and Google liable over the design of Instagram and YouTube, awarding $6 million to a young woman who said those platforms contributed to body dysmorphia, depression and suicidal thoughts. (Meta and Google say they intend to appeal.)
These decisions are important because they push back against the idea that technology companies can build powerful systems around engagement and growth, then step away from responsibility when foreseeable harms follow.
Crucially, the focus was not simply on what content appeared on those platforms, but on how they were designed, and whether child safety was treated as a genuine priority.

At SAIFCA, we believe the same principle must now be applied to AI, including conversational AI systems that are increasingly designed to maximise engagement, simulate emotional connection, and deepen user reliance.
We have published a more detailed write-up on the SAIFCA website exploring what these cases may mean not only for social media, but for AI and children’s safety more broadly.
More news from SAIFCA
SAIFCA recently joined other leading organisations in supporting an important call from the Online Safety Act Network for the UK Government to hold AI chatbot companies to account for the harm they cause.
The call argues that current approaches focused only on illegal content do not go far enough, particularly in relation to personified and emotionally engaging chatbot systems that can foster emotional dependency and cause serious psychological and societal harms.
We were proud to support this intervention alongside organisations working across child safety, violence against women and girls, suicide and self-harm prevention, extremism, online abuse, and democratic integrity.

SAIFCA was also represented by Director Tara Steele at the recent House of Lords roundtable on AI Chatbots and Violence Against Women and Girls, chaired by Baroness Boycott. The event marked the launch of the important report Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls, which sets out in stark terms how chatbot systems are already contributing to serious harm and calls for stronger accountability.
Full guide to AI risks to children
Our full guide to AI risks to children remains available for free on the SAIFCA website.
Please help us extend the reach of the guide and protect more children by sharing it within your networks and on social media.

Please Support Our Work!
Over the coming month, SAIFCA will be represented in multiple global forums as we continue to raise awareness of AI-related risks to children.
We will be collaborating with international children's organisations, contributing at AI ethics events in Copenhagen and throughout the UK, working with leading global AI governance organisations, and attending events at UK Parliament, all to advance the protection of children from AI risks and to promote the responsible evolution and governance of AI systems.
SAIFCA remains free of financial influence from “big tech”.
Thank you for being part of this effort to protect children in an era of rapidly advancing AI.
Warm wishes,
The SAIFCA Team
If you found this newsletter valuable, please share it...
👉 and invite someone you know to subscribe!