A Turning Point in Accountability for Platforms That Harm Children

Two major US jury verdicts may mark the beginning of a meaningful shift in accountability for digital systems that harm children. Here's why these cases matter not only for social media, but also for AI.

Two major US jury verdicts delivered this week may mark the beginning of a meaningful shift in how courts, policymakers, and the public think about responsibility for digital systems that harm children. For SAIFCA, they also strengthen the case for greater accountability not only in social media, but in AI systems designed to engage, influence, and emotionally affect young users.

Summary

Two significant US jury verdicts delivered this week mark an important step towards greater accountability for digital systems that harm children. In New Mexico, a jury ordered Meta to pay $375 million after finding it misled the public about the safety of its platforms for children. One day later in Los Angeles, a jury found Meta and Google liable for the addictive design of Instagram and YouTube, awarding $6 million to a young woman who showed that those platforms contributed to her body dysmorphia, depression and suicidal thoughts. Both companies say they intend to appeal.

These decisions matter because they push back against the idea that technology companies can build powerful systems around engagement and growth, then step away from responsibility when foreseeable harms follow. Crucially, the focus was not simply on what content appeared on those platforms, but on how those platforms were designed, and whether child safety was treated as a genuine priority. At SAIFCA, we believe the same principle must now be applied to AI — particularly conversational AI systems that are increasingly designed to form emotional bonds with their users, including children.


In more detail

The New Mexico case presented jurors with evidence that Meta had been aware of serious child safety risks, including children's exposure to sexually explicit material and contact with predators, while continuing to reassure the public about safety. The Los Angeles case placed design choices squarely in the frame: features designed to encourage compulsive use, and recommendation systems built to maximise the time children spend online. (Meta intends to appeal both verdicts; Google intends to appeal the Los Angeles verdict.)

Together, the cases suggest a growing judicial willingness to ask not just what appears on a platform, but what incentives drive it.

This is happening alongside a broader international shift in how governments are responding to the risks of highly engaging digital platforms for children. Australia's new social media law places obligations on providers to take reasonable steps to prevent under-16s from holding accounts on age-restricted platforms. Debate in the UK has intensified around possible similar measures. These developments reflect a wider recognition that leaving families and schools to manage these risks alone is not a reasonable or sustainable approach.

Why this matters for AI

Social media and AI are not the same thing, but they share something important: an incentive structure built around keeping users engaged, returning, and reliant on the system. In social media, this takes the form of endless feeds, AI-driven recommendation algorithms and compulsive design features. In conversational AI, the methods differ, but the underlying logic is often similar, and in some ways more concerning.

Many AI chatbots are increasingly designed not just for convenience or entertainment, but to simulate emotional connection, affirmation and companionship. For children, this raises particular risks. A chatbot that becomes emotionally important to a child is likely to carry greater influence over their thinking and behaviour. That can increase the weight a child gives to advice or suggestions from that system — including when those outputs are manipulative, unsafe, sexualised, or otherwise harmful. It also raises questions about critical thinking, emotional development, resilience, and children's relationships with real people.

Legal and policy thinking may now be beginning to catch up with this reality. For example, a March 2026 report, Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls, argues for a new criminal offence of dangerous deployment of an AI chatbot, aimed at companies that release such systems to the public without taking all reasonable steps to mitigate foreseeable risks.

What we believe

These verdicts reinforce a principle that SAIFCA believes should be foundational: companies should not be free to deploy highly engaging, emotionally influential, or otherwise dangerous systems — especially to children — and then evade responsibility when foreseeable harms occur. That principle applies not only to social media feeds, but to AI chatbots, AI-enabled sexual exploitation, and image generation systems that can be used for criminal abuse.

These verdicts do not solve the problem, but they may prove to be part of a broader shift away from impunity and towards accountability, and towards a future in which children are better protected by common-sense, enforceable expectations placed on the companies building these technologies.