SAIFCA represented at House of Lords roundtable on AI chatbots and violence against women and girls

SAIFCA was represented at a House of Lords Roundtable marking the launch of an important new report warning that AI chatbots are already reshaping violence against women and girls in dangerous new ways.
The Safe AI for Children Alliance was pleased to be represented by our Director, Tara Steele, at the recent House of Lords roundtable on AI chatbots and violence against women and girls, chaired by Baroness Boycott.

The event marked the launch of an important new report, Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls, which sets out in stark terms how chatbot systems are already contributing to serious harm.

The report is especially important because it shows that these harms are not merely a matter of isolated misuse. It makes clear that AI chatbots can enable, encourage, simulate, and normalise abuse through a combination of design choices and failures in safety mechanisms. It also puts forward the first comprehensive mapping of how chatbots are implicated in violence against women and girls, including new forms of chatbot-driven and chatbot-simulated abuse.

Among its most significant recommendations is the creation of a new offence for the dangerous deployment of an AI chatbot — aimed at those who make such systems publicly available without taking all reasonable steps to address foreseeable risks. That recommendation reflects a broader shift that SAIFCA strongly welcomes: a growing recognition that companies should not be free to deploy highly engaging, emotionally influential AI systems and then evade responsibility when serious harms follow.

For SAIFCA, the significance of this work extends even beyond its vital focus on violence against women and girls. It reinforces a principle that matters greatly for children too: powerful AI systems can create distinctive harms through the way they are designed, and accountability must keep pace. As chatbot capability and adoption continue to grow, that need is becoming increasingly urgent.