SAIFCA supports call for stronger action on dangerous AI chatbots
The Safe AI for Children Alliance was proud to join other leading child safety and public interest organisations in supporting the Online Safety Act Network’s recent call for urgent action to hold AI chatbot companies to account for the harm they cause.
This intervention comes at an important moment. Harms linked to AI chatbots are not theoretical, and they are not rare. We are already seeing serious and rapidly escalating problems - including sexualised deepfake abuse, harmful and degrading generated content, emotionally manipulative chatbot interactions, and cases in which dangerous AI engagement has been linked to severe psychological harm and even tragic deaths.
The UK Government’s move to bring AI chatbots into the scope of the Online Safety Act is a welcome step. However, as the Online Safety Act Network has argued, a focus on illegal content alone does not go far enough. Some of the most serious risks posed by AI chatbots arise not only from specific outputs, but from the design of the systems themselves - particularly when they are built to simulate human-like connection, encourage emotional dependency, or keep users engaged at all costs.
This is especially concerning for children. A chatbot that feels responsive, affirming, and emotionally significant may carry far greater influence over a child than a traditional online tool. That can increase the likelihood of a child trusting harmful advice, becoming psychologically reliant on the system, or being exposed to inappropriate sexual, violent, or otherwise dangerous material in a more immersive and persuasive form.
At SAIFCA, we believe chatbot providers must be held meaningfully accountable when they deploy systems without adequate risk assessment and without taking all reasonable steps to prevent foreseeable harm. Powerful AI systems should not be released to the public - especially to children - unless they are demonstrably safe by design.
We were therefore pleased to join this call in support of Baroness Kidron’s amendment, alongside organisations working across child safety, violence against women and girls, suicide and self-harm prevention, extremism, democratic integrity, and online abuse. The breadth of support reflects the scale of the problem: dangerous AI chatbots are not a niche concern, but an emerging public safety issue with wide-ranging human consequences.
You can read the full statement here.