Our Position on Catastrophic Risk from AI

Why we believe long-term AI safety matters - even as we focus on protecting children today
SAIFCA is focused on protecting children from the real and immediate harms posed by artificial intelligence.
But we also believe that ignoring credible expert concerns about the potentially catastrophic consequences of advanced AI would be a disservice to future generations.
At The Safe AI for Children Alliance, our primary mission is to protect children from the real and growing risks posed by artificial intelligence. We focus on near-term harms - from unsafe AI companions to algorithmic manipulation - and on helping children thrive in an AI-shaped world.
But we also recognise a broader reality:
Many leading AI researchers and institutions warn that advanced AI may pose catastrophic or even extinction-level risks to humanity.
This page outlines where SAIFCA stands on that issue – and why our work remains vital no matter what the future holds.
(You may also find it helpful to read our Theory of Change.)
🌍 Why We Take Catastrophic Risk Seriously
Turing Award and Nobel Prize winners, along with hundreds of respected researchers, have signed public statements warning of the possibility of human extinction from unaligned, autonomous general intelligence.
“Mitigating the risk of extinction from AI should be a global priority.”
CAIS Statement on AI Risk
We do not believe such warnings should be dismissed.
In fact, given the expertise and credibility of those raising these warnings, we believe it would be irresponsible for us to dismiss them or to assert a strong counter-position.
Based on the level of expertise of those warning of catastrophic and extinction-level risks, and our own research, SAIFCA recognises that such risks from advanced AI are both real and credible. To avoid stating this openly would be, in our view, a disservice to children and to the principle of transparency we aim to uphold.
A small sample list of those warning of catastrophic and extinction-level risks from advanced autonomous general AI, along with brief details of their experience and achievements, is given at the bottom of this page.*1
🎓 Respecting Diverse Views
We acknowledge that not everyone agrees. Some credible experts argue that such risks are exaggerated or very far in the future. We welcome reasoned debate, while maintaining our above position and our support for the precautionary principle.*2
We do not, however, support reckless AI acceleration without adequate safeguards - an approach we consider ethically indefensible.
In short: we welcome thoughtful disagreement, but we reject reckless acceleration without accountability.
🌱 Why Our Work Matters Either Way
Whether or not advanced AI becomes an extinction-level threat, our mission remains meaningful and urgently needed.
If catastrophic and extinction-level risk theories are correct:
- We help normalise the conversation, enabling better public discourse and grassroots support for regulation.
- We amplify and support organisations focused on existential risk mitigation.
- We inspire and equip future safety-focused leaders, expanding the talent pool for technical and governance roles.
- We advocate for stronger regulation, which benefits both children today and society in the long term.
If catastrophic and extinction-level risk theories are incorrect:
- Our work still protects children from present-day dangers - including manipulation, exploitation, and developmental harms.
- Our strategy continues to strengthen digital literacy, critical thinking, and societal preparedness.
- Nothing is lost.*3 Everything still matters.
⏳ What About Timelines?
There is no consensus on how soon we may face these risks: some experts expect decades, others suggest much sooner, and some predict before 2030. We don’t take a fixed position on timing.
Instead, we focus on what we can do – right now.
- If timelines are short (1–5 years), our greatest contribution may be to strengthen and support existing safety efforts.
- If timelines are longer, we can grow a generation of informed, ethical, and capable leaders who will shape AI’s development and governance.
Even in the Worst Case
We do not believe catastrophic outcomes from AI are inevitable. But a few experts suggest that humanity may already be near – or even beyond – a “point of no return.”
💡 And even in that case, we would continue our work.
Protecting children from harm - in any timeline and under any circumstances - is always a cause worth fighting for.
*1) [List of names] (under edit, 26th May 2025)
*2) The Precautionary Principle: This principle asserts that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking the action. Its core tenet is that where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing measures to prevent severe harm.
How and Why It Applies in This Case:
a) Threat of Serious and Irreversible Harm: Leading AI researchers and institutions have warned of potentially catastrophic or even extinction-level risks from unaligned, autonomous general intelligence. These are threats of the gravest possible magnitude, posing severe and potentially irreversible damage to humanity.
b) Scientific Uncertainty: While the warnings are numerous and credible, there is still significant scientific uncertainty regarding the exact timelines, mechanisms, and probabilities of such catastrophic outcomes. Experts disagree on how soon these risks might materialise or precisely how they would unfold.
c) Justification for Preventive Action: In the face of such profound potential harms coupled with scientific uncertainty, the precautionary principle dictates that it would be irresponsible to wait for full scientific consensus or for the harms to manifest before taking preventive action. SAIFCA, by supporting this principle, asserts that proactive measures - such as advocating for stronger regulation, fostering safety-focused leadership, and promoting public discourse - are necessary now.
d) SAIFCA's Stance on "Reckless Acceleration": This principle underpins SAIFCA's firm stance against "reckless AI acceleration without adequate safeguards". It justifies our position that, despite ongoing debate, the potential severity of the risks demands a cautious approach to AI development and deployment, prioritising safety and robust governance from the outset rather than waiting until it might be too late. It provides a logical framework for why SAIFCA believes these risks must be taken very seriously, even as we focus on immediate harms to children.
*3) Unless one takes the position that there are no significant risks from advanced autonomous AI at all.
➕ Want to Know More?
- Read our Theory of Change to see how our work links short-term actions to long-term impact
- Visit our Mission page for a quick overview of our goals and principles
- Explore the Center for AI Safety's expert statement for more on this global concern, and read more about the most catastrophic risks
If you'd like to support our important work, please consider making a donation. Every contribution helps us expand our reach and impact.