SAIFCA Signs Global Call for AI Red Lines to Help Safeguard Children’s Futures

At The Safe AI for Children Alliance, we focus primarily on the risks that artificial intelligence poses to children here and now - from dangerous AI companions to exploitative ‘nudify’ apps.
But we must never lose sight of our broader responsibility: to safeguard children’s futures on a global scale.
In truth, almost all AI risks disproportionately affect children.
Children are inherently more vulnerable, and early harm compounds over time. The effects of job loss, war, surveillance, or disinformation ripple further when experienced in childhood - and the consequences can shape entire lives.
AI holds immense potential for good, particularly in areas like medicine and healthcare. But if developed irresponsibly, without meaningful oversight, the consequences for future generations could be catastrophic.
That’s why we believe it is essential to draw clear, enforceable red lines around the most dangerous uses of AI - and for those red lines to operate across borders.
This is why SAIFCA has joined over 90 leading organisations and 300 globally respected experts and leaders in signing the Global Call for AI Red Lines, urging governments to agree on enforceable international limits on unacceptable AI risks by the end of 2026.

📢 The Call was launched last week at the UN General Assembly by Nobel Peace Prize Laureate Maria Ressa, and is supported by an extraordinary coalition of voices, including:
- Nobel Laureates such as Joseph Stiglitz and Jennifer Doudna
- AI pioneers including Geoffrey Hinton, Yoshua Bengio, and Andrew Chi-Chih Yao
- Thought leaders like Yuval Noah Harari and Stephen Fry
- Former Heads of State such as Mary Robinson and Juan Manuel Santos
- AI safety experts including Stuart Russell, Mark Nitzberg, Kate Crawford, Max Tegmark, Gary Marcus, Dan Hendrycks, and Niki Iliadis
- Leading child safety advocates including SAIFCA director Tara Steele
The specific red lines will be shaped through international scientific and diplomatic dialogue - but examples could include:
🔹 Mass surveillance
🔹 Human impersonation
🔹 Lethal autonomous weapons
🔹 Loss of human control over AI systems
As AI capabilities accelerate, so too do the risks that children face. We’ve already seen lives tragically lost in connection with AI chatbots, and children harmed by AI-facilitated cyber abuse. Future harms may be even more serious - and irreversible.
At SAIFCA, we believe enforceable international limits on AI risks are a common-sense approach. They align with our deep commitment to the Precautionary Principle, especially when it comes to protecting children from untested and unsafe AI systems.
We are grateful to The Future Society, The Center for Human-Compatible Artificial Intelligence, and CeSIA (French Center for AI Safety) for leading this vital initiative.
And we’re proud to stand alongside other leading organisations, including The Future of Life Institute, 5Rights Foundation, and ChildSafeNet, in supporting it.
A shared global commitment is essential if we are to protect what matters most - the lives, futures, and freedom of our children.
Find out more about the Global Call for AI Red Lines here.
And watch the YouTube explainer here.