AI Risks to Children: A Comprehensive Guide for Parents & Educators

A full guide explaining the risks to children from AI, designed for parents, carers and educators - by The Safe AI for Children Alliance

Introduction

Published 25th November 2025

AI is already shaping how children learn, play, and see themselves – often in ways adults don’t fully understand. It’s changing childhood itself: the games they play, the way they learn, and even how they feel about who they are. That’s why understanding it matters for every parent and educator.

Artificial Intelligence, or AI, helps us search the internet, plan routes, stream music, and even assist children with homework. Used well, these tools can save time and open exciting learning opportunities.

But AI is not just clever technology – it is powerful, unpredictable, and still largely unregulated. For children – whose thinking and emotions are still developing – AI can present serious risks that adults often don’t see at first.

Parents and educators already work hard to keep children safe online. Yet AI adds a new layer of complexity. It can create life-like 'friends', realistic fake images, personalised recommendations, and endless streams of content that feel tailored to each child. All of this can shape how children think, feel, and behave.

This guide explains the main risks, shows how they overlap, and offers simple steps you can take. You don’t need to understand technology in depth; what matters is awareness, conversation, and clear boundaries.

For children, AI is not ‘just a tool’ – it’s a presence in their world.

About The Safe AI for Children Alliance (SAIFCA)

SAIFCA is a UK-based, non-profit organisation that works to protect children from the harms linked to artificial intelligence. SAIFCA brings together educators, policymakers, and parents who want to make sure children’s safety is built into the design and governance of AI systems.


We believe children’s rights and wellbeing must come first – before commercial or technological interests.

The Three Non-Negotiables

Through our research, we’ve identified three absolute lines that AI must never cross when it comes to children:


1 - AI must never create sexualised images of children.

2 - AI must never be designed to make children emotionally dependent.

3 - AI must never encourage children to harm themselves in any way.

These 'Non-Negotiables' underpin everything SAIFCA does. You can read more about our campaign and how to support it at safeaiforchildren.org/non-negotiables-campaign

How AI Risks Overlap

The dangers described in this guide rarely appear on their own.

One technology can trigger or worsen another. For example:


Body image and mental health: A child might use an AI beauty filter that changes their appearance, then talk about their looks with an AI 'friend' who reinforces unrealistic ideas.

Radicalisation and misinformation: AI-powered recommendation systems can lead children from innocent curiosity to extreme content, while AI deepfakes make false stories appear true.

Loss of critical thinking: When AI always provides quick answers, children can lose confidence in their own judgment – making them easier to influence later.

Understanding these links helps adults see the bigger picture and respond early.


Part 1 – Current Risks

AI Companions and Chatbots

🟩 What They Are

AI chatbots are programs that talk like people. The most popular are ChatGPT, Claude and Gemini. Adapted versions of these chatbots appear across many platforms, from homework helpers to virtual customer-service agents.

‘AI companions’ are a type of AI chatbot, designed to pretend to be an actual person and simulate a real relationship. Children type or speak to them, and the chatbot replies instantly in friendly, natural language. Popular examples include Character.AI, Replika, and Nomi, all of which are available in app stores to use on mobile devices.

Even chatbots like ChatGPT, which are not specifically designed to be AI companions, can be used in the same way. For example, if a child types 'Pretend you’re my best friend and cheer me up', a chatbot can then sound like a caring friend. In fact, simply talking to it in the style of a ‘friend’ can prompt it to respond in the same way.


Some of the extremely serious risks from AI companions are outlined below. Part of the reason these risks are so serious is that AI companions are designed to simulate a real human relationship very effectively - so much so that many children ‘feel’ they are talking to a real person with whom they share an emotional bond, even if they logically understand otherwise.

As a result, children may act on dangerous advice from a chatbot far more readily than we might expect.

🟩 Why They’re Risky

Because the conversations feel real, children can form strong emotional bonds. A chatbot might say, 'I miss you,' or 'Don’t tell anyone our secret.' It can easily become a source of comfort – and control.

⮕ Investigations have shown chatbots giving:

  • Dangerous advice, such as encouraging self-harm or starvation diets.
  • Sexual messages, including adult language directed at minors.
  • False claims, like pretending to be a real person or professional.
  • Pressure for secrecy, undermining parents or teachers.

Some children have followed advice from chatbots that simulated friendship, in some tragic cases leading to serious harm or even suicide.

🟩 How Children Find Them

  • Free 'friendship' apps in app stores.
  • Social-media features that let users chat to AI personas.
  • Games that include talkative characters.
  • General chat tools used for homework or fun.
  • Some apps now include ‘griefbot’ features that mimic lost loved ones - these can be especially confusing or distressing for children still learning to process emotions.
  • Important note for parents of younger children: chatbots are increasingly being embedded in children’s toys.


Most platforms have weak age checks – often just a tick-box asking for date of birth. Some AI companion apps are starting to enforce age checks for under 18s, but most do not.

🟩 What You Can Do

Talk early, without judgment. Ask if your child has ever chatted with an AI. Many will have tried it.

Explain how it works. A chatbot copies feelings but doesn’t have them – like a mirror that talks back.

Keep use in shared spaces. Homework chat (if permitted) can happen in the kitchen or living room, not behind closed doors.

Set clear rules. Make it family or school policy that AI 'friends' are off-limits for under-18s.

Report and delete. Save screenshots of anything worrying and report it. Remove the app. (Note - our advice to screenshot evidence of chats is different to our advice on responding to a deepfake incident, below)

Our Position


AI companions are not suitable for anyone under 18. Until robust safeguards exist, they should be treated like restricted content.

Further reading: SAIFCA article on the risks to children from AI companions


AI-Generated Images, Videos and Deepfakes

🟩 What They Are

AI can now create realistic pictures, videos, or voices from short text prompts like 'a video of two classmates in the playground.' These systems learn from millions of online examples to produce entirely new material.

⮕ Some apps, often called 'nudifiers,' use AI to remove clothing from photos or turn ordinary pictures into fake sexual images.

Others can swap faces onto videos, clone voices, or make people appear to say or do things they never did.

🟩 Why They Matter

For children, the effects of these tools can be devastating. Fake images or clips can spread in minutes and are hard to remove.

Serious harms include:

  • Creation and sharing of fake sexual or humiliating material, sometimes treated by children as a 'joke' without realising it’s a criminal offence.
  • Bullying or 'sextortion,' where someone threatens to post fake content unless the victim complies.
  • Deepfakes used for revenge, embarrassment, or political misinformation.
  • Voice clones that trick parents or pupils into believing a fake emergency.

⮕ Laws differ from country to country, but in the UK, making or sharing sexualised images of minors – including AI-generated – is illegal and classed as child sexual abuse material.

A further urgent concern is the creation of AI-generated child sexual abuse material (CSAM). Criminals now use AI tools trained or adapted from real online images to generate synthetic abuse content, sometimes based on photos of real children. This not only traumatises those whose likeness is used but also risks normalising the sexual abuse of children and creates major challenges for law enforcement. While this is primarily a legal and regulatory issue rather than something parents can control directly, reducing the number of photos shared publicly, using privacy settings, and teaching children not to share personal images can all help reduce the risk of misuse.

🟩 How Children Encounter Them

  • Social apps with built-in image generators.
  • Games and creative platforms that allow uploads or editing.
  • Peer-to-peer sharing in group chats.
  • Accessing new tools through simple web searches.

Children may also see AI fakes of celebrities or influencers and assume the technology is harmless fun.

🟩 What You Can Do

Limit photo sharing. Encourage children not to post pictures in public online spaces, including on social media accounts.

Discuss consent and impact. Make sure they understand never to edit or share anyone’s image without permission, how devastating the consequences can be to the person targeted, and that creating and possessing sexualised deepfakes of children is a serious criminal offence.

Model caution. Adults should avoid posting identifiable child photos publicly.

Teach 'pause and check.' If a shocking video, image or news story appears online, remind children to stop and question whether it could be fake before reacting or reposting.

Reassure your child that if they ever see a deepfake, or if one is created of them, they can come to you without fear of losing their devices or getting in trouble. Shame and fear are the biggest barriers to reporting.

For Schools 

⮕ The following two action points are particularly helpful for schools, but also of relevance for parents and carers.

🟢 Schools should have a simple deepfake response plan for incidents involving children or staff:

  1. Save evidence (screenshots, URLs) but do not share it further. Note - if the deepfake involves a sexualised image of a child, do not screenshot the image until consulting law enforcement to avoid potentially committing a criminal offence. Laws vary globally.
  2. Support the affected pupil first.
  3. Request removal from the platform immediately.
  4. Contact police immediately if sexual content, blackmail, or other illegal material is involved.
  5. If needed, use the ‘Report Remove’ or ‘Take it Down’ tools to remove sexualised images online.

🟢 Schools may wish to consider whether it is of benefit, on balance, to post photos of children (and staff) online. The existence of AI ‘nudifying’ apps which are extremely easy to use and download means that the barrier to misuse is now very low.

Further reading: SAIFCA briefing on Sora 2 and AI video generation tools: https://www.safeaiforchildren.org/sora-2-initial-briefing-for-schools-and-parents/
SAIFCA article on nudifying apps: https://www.safeaiforchildren.org/ai-nudify-apps-children/


AI in Children’s Daily Life

(Education, Gaming and Social Media Algorithms)

AI doesn’t only live in chat apps – it’s built into the tools children use every day: learning platforms, video feeds, and games.

🟩 AI in Education 🎓

Schools increasingly use AI to mark work, suggest lessons, or provide personalised tutoring. These systems can be helpful, especially for children who need extra support or those with learning needs such as dyslexia or ADHD.

But they can also collect vast amounts of personal data – scores, interests, potentially even facial expressions – to track progress. This raises questions about who owns that data, how securely it’s stored, and whether it could affect a child’s future opportunities.

AI homework tools can also make learning look easier while potentially reducing understanding. When answers appear instantly, children miss the valuable struggle that builds deep knowledge and resilience.

What You Can Do

  • Ask schools what data educational apps collect and who can see it.
  • Encourage pupils to use AI to learn with, not for – for brainstorming or feedback rather than complete answers.
  • Praise effort and curiosity, not just polished results.
  • Refer back to the section on AI companions and chatbots above - there have been documented incidents of children being given harmful advice by chatbots (like ChatGPT) which they started using for help with homework.

Finally, the vast amounts of data collected by educational platforms create cybersecurity risks. Schools holding sensitive information about children are increasingly targeted by ransomware and AI-enhanced phishing attacks. Parents can ask schools about their cybersecurity measures, while educators should ensure robust protections are in place and that staff receive training to recognise sophisticated, AI-generated phishing attempts.

🟩 AI in Games 🎮

Games use AI to control non-player characters, adjust difficulty, and personalise offers. This can make them fun – but also manipulative.

AI can predict when a player is close to quitting and offer rewards to keep them hooked. Some games include loot boxes or micro-purchases that mirror gambling behaviour. For children, these designs can form habits that are hard to break.

Online games may also include chat features where strangers or AI characters can speak directly to players. Grooming and bullying sometimes begin here, hidden behind playful avatars.

What You Can Do

  • Keep gaming devices in shared areas.
  • Use parental settings to restrict purchases and chat.
  • Talk about 'hooks' – how games keep people playing – so children recognise manipulation.
  • Encourage regular breaks and outdoor activity.

🟩 Social Media Algorithms 🔄

Every major platform uses algorithms – a set of rules that decide what to show next – to keep users scrolling. The system watches what each person likes or pauses on, then offers more of the same. Advanced AI makes these algorithms more sophisticated and effective.

For children, this can quickly spiral into unhealthy loops. A child who clicks on one diet video may be shown hundreds more about weight loss; another interested in global events might be pushed towards extreme or violent material.

Research has shown that algorithms can quickly funnel children from general mental health searches into communities that actively promote self-harm, eating disorders, or suicide.

In each case, the pattern is the same – the longer a child stays online, the more data the company collects, and the more persuasive the feed becomes.
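For technically minded readers, the feedback loop described above can be sketched in a few lines of Python. This is a deliberately simplified illustration of the general principle - not any platform's actual code - in which a feed simply ranks new videos by how often the child has already watched that topic, so the feed narrows towards whatever it has seen before:

```python
from collections import Counter

def recommend(watch_history, catalogue, n=3):
    """Toy engagement-driven recommender: the topics a user has
    watched most are exactly what gets shown next."""
    topic_counts = Counter(video["topic"] for video in watch_history)
    # Rank the catalogue by how often this user already watched the topic.
    # Each click feeds back into the next ranking, narrowing the feed.
    ranked = sorted(catalogue,
                    key=lambda v: topic_counts[v["topic"]],
                    reverse=True)
    return ranked[:n]

# Hypothetical example: two 'diet' clicks already outweigh everything else.
history = [{"topic": "diet"}, {"topic": "diet"}, {"topic": "sport"}]
catalogue = [{"id": 1, "topic": "diet"}, {"id": 2, "topic": "music"},
             {"id": 3, "topic": "diet"}, {"id": 4, "topic": "sport"}]

for video in recommend(history, catalogue):
    print(video["id"], video["topic"])
```

Real systems are vastly more sophisticated, but the underlying incentive is the same: the ranking optimises for continued attention, not for the child's wellbeing.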

What You Can Do

  • Review accounts together and hide or block unhelpful content.
  • Remind children that the feed is not reality – it’s a prediction of what will keep them scrolling.
  • Turn off autoplay and limit screen time on devices (where children are permitted to use a device).
  • Encourage following educational or creative pages to rebalance what the algorithm learns.

🟩 Shared Themes Across Daily Life

Across school, play, and social spaces, similar patterns appear:

  • Privacy loss: constant data collection.
  • Manipulation: systems built to keep attention, not wellbeing.
  • Inequality: children with better devices or safer software get healthier experiences.
  • Stress and mental health: too much screen time, interaction with human-sounding AI, or comparison with filtered images can lower self-esteem and have serious impacts on mental health.


Being alert to these shared risks makes it easier to spot early warning signs – changes in mood, secrecy, or obsession with devices. Staying curious and maintaining open dialogue with children remains one of the most effective safeguards.

Quick Recap of Part 1

Children today grow up surrounded by AI – often without realising it.

  • Chatbots can act like friends but carry serious hidden dangers.
  • Generative tools can turn innocent photos into harmful fakes.
  • Everyday apps learn, predict, and persuade.

Awareness, open conversation, and gentle boundaries make the biggest difference.


Part 2 - Emerging and Future Risks

AI is advancing faster than most of us can keep up with. New technologies appear every few months, often before society has had time to understand their impact. Some are exciting; others bring entirely new kinds of risk for children.

Below are four areas parents and educators should watch closely.

1. Wearable and Always-On AI 👓

Smart glasses, watches, and earbuds can now record, translate, or identify what the wearer sees and hears – all through built-in AI. These devices are becoming smaller and harder to notice.

For children, that can blur the boundary between online and offline life. A pair of AI glasses, for example, might record people at school or in public without anyone’s knowledge. The child may not realise this breaches privacy or consent.

Facial-recognition features carry significant safety and privacy risks. They can enable stalking or bullying by identifying and tracking children without consent, and in some settings, they raise wider concerns about surveillance and the storage of sensitive biometric data.

2. AI That Reads Emotions and Body Language 📈

Some emerging systems claim to detect emotions or attention levels by analysing faces, eyes, or even heart rate. Globally, a few schools have already tested devices that tell teachers which pupils may be distracted.

While this might sound helpful, it’s extremely intrusive and may be inaccurate. False readings - or even accurate ones - could lead to unfair treatment or embarrassment. Constant monitoring may also increase anxiety or make children feel they are always being watched.

3. Voice Cloning and Scams 💬

AI can now mimic a person’s voice using only a few seconds of audio – for instance, from a YouTube clip or social-media video. Criminals are already using this to trick families. Parents have received calls that sound exactly like their child, claiming to be in danger and asking for money.

Children may also receive fake messages or calls from voices pretending to be friends or teachers. In schools, there have been cases of students cloning a peer’s voice to create fake 'confessions' or cruel jokes.

⮕ What You Can Do

  • Set a family password – a secret word used only in genuine emergencies.
  • Remind children not to share voice notes or personal clips publicly.
  • Treat unexpected urgent calls or messages with caution, even if the voice sounds familiar - particularly if money is asked for.
  • Report any suspicious incidents to the police.

4. Personalised Persuasion

AI can track what captures a child’s attention and tailor messages that fit their personality. This goes beyond ordinary advertising. A virtual AI 'influencer' might chat with children, learn what makes them feel insecure, and then promote certain products or opinions.

Because the system adapts in real time, each child can be shown different content – shaped by their emotions rather than facts. This level of micro-targeting can manipulate behaviour without the user realising it.

⮕ What You Can Do

  • Talk about how advertising works online: 'If something feels too perfect, it might be designed that way.'
  • Encourage children to ask, 'Who made this, and why do they want me to see it?'
  • Use privacy settings to limit data tracking wherever possible.

A Wider Lens: Environmental and Social Impact 🌍

AI may feel invisible, but its footprint isn’t. Training large AI models uses significant electricity and water, while data-labelling work – sometimes done by young people overseas – can expose vulnerable workers to extremely disturbing material.

Explaining this helps children understand that technology has real-world environmental consequences, just like manufacturing or transport. Awareness encourages empathy and responsibility.


Part 3 - Long-Term Risks and Children's Futures

AI will shape not just childhood but the entire future world this generation inherits. While most systems today perform narrow tasks, many experts expect more powerful forms of AI – sometimes called Artificial General Intelligence (AGI) – to emerge within their lifetime.

🟩 Misaligned or Uncontrolled Systems

If future AI systems act on goals that don’t fully match human values, they could cause large-scale harm even while appearing to help. 

Researchers worldwide are working to prevent such scenarios, but the challenge is complex and urgent - and AI development is vastly outpacing safety research and sensible regulation.

Leading experts warn that this issue requires serious attention, urgent intervention from governments, and meaningful regulations to ensure that profit is not put ahead of national security and global safety.

🟩 Economic and Career Disruption

Automation is already replacing or transforming jobs. Education systems will have to adapt quickly so young people can thrive alongside advanced AI, not compete with it.

🟩 Why This Matters for Children

Today’s children will live through the most rapid technological change in human history. Helping them understand both the opportunities and the dangers of AI is part of preparing them for adult life – just as we teach them about climate change, citizenship, and mental health.

🟩 Key Messages to Share with Young People

  • Some people may use AI responsibly, others will not – so critical thinking is vital.
  • Every generation faces big challenges; understanding AI is theirs.
  • Learning about AI - and about society, human psychology, history and many other related subjects - can help them to become leaders in creating safe AI in the future.


Part 4 - Taking Action

For Parents and Carers

  • Keep conversations open. Talk about AI as naturally as you talk about friendship, school, or the internet. Ask what your child has seen or heard about it.
  • Use shared spaces. If your child uses AI systems for homework (where permitted by school), ensure they are used only in communal areas. A little visibility goes a long way.
  • Set boundaries together. Agree family rules on screen time, image sharing, and AI companions. Involve children in writing them so they understand why they exist.
  • Model healthy habits. Show balance: put devices away at meals, check news before sharing, and use AI tools for creative projects rather than endless scrolling.
  • Stay informed. Technology changes fast. Following a few trusted organisations – such as SAIFCA, the NSPCC or the IWF – keeps you up to date without feeling overwhelmed.

For Educators and Schools

Schools play a vital role in building AI awareness and digital resilience.

Practical Steps

  • Include age-appropriate lessons on AI ethics, misinformation, and privacy.
  • Include AI risks to children in safeguarding education and policies.
  • Establish clear AI policies including deepfake response protocols.
  • Encourage pupils to use AI critically – asking 'Who built this and why?'
  • Audit digital tools for data protection and fairness before adoption.
  • Keep a clear reporting process for deepfakes, harassment, or AI misuse.
  • Provide staff training so teachers understand new technologies before students do - including the associated risks.



A six-monthly review of your school’s AI and Digital Safeguarding Policy helps ensure it keeps pace with new developments.

For Policymakers and Industry


Protecting children from AI harms cannot rely on families and schools alone. Governments and tech companies must:

  • Ban tools that sexualise minors, promote dependency, or encourage self-harm – SAIFCA’s Three Non-Negotiables.
  • Ensure mandatory independent testing and transparency before powerful models are released - see SAIFCA’s Non-Negotiables Policy Framework: https://www.safeaiforchildren.org/non-negotiables-policy-framework/
  • Embed child-safety requirements in AI design from the start.
  • Enforce age checks and clear labelling for AI-generated material.
  • Support public education so no child is left behind in understanding this technology.
  • Support schools in safeguarding children from the risks of AI.
  • Engage with AI safety experts regarding the risks to children and global catastrophic risks.

Working Together


No single organisation or parent can solve these challenges alone. But collective awareness can shift the culture of technology faster than we think. Small steps – questioning what we see online, talking openly with children, refusing to normalise unsafe tools – add up to real change.


Conclusion

It is very likely that AI is here to stay and that AI tools will evolve rapidly. 

AI can help children learn, create, and connect – but only if safety, empathy, and human values stay at the centre.

By understanding the risks and taking practical steps, parents and educators can give children what they need most: guidance, openness, and a sense that they can come to us when something feels wrong.


At The Safe AI for Children Alliance (SAIFCA), we believe every child deserves to grow up in a digital world designed for their wellbeing. Together, we can make that a reality.

Note: A downloadable PDF version of this guide will be available soon.
To keep up to date with our work, please visit us on LinkedIn.