Sora 2 and AI-Generated Video: An Initial Briefing for Schools and Parents

OpenAI's new AI video generation tool, Sora 2, was released on 30th September 2025 to users in the United States and Canada. Within days of launch, the platform demonstrated both impressive technological capabilities and serious potential for misuse - particularly in ways that may affect children and young people.
This initial briefing provides schools, parents and carers with essential information about Sora 2 and other similar AI-video generators, the risks they present, and practical steps to protect children from harm.
While Sora 2 is not yet available worldwide, it can still be accessed via workarounds, and similar tools are quickly becoming more widely available.
What is Sora 2?
Sora 2 is an AI-powered video generation tool that creates realistic videos from text descriptions. It also generates synchronised audio - dialogue, sound effects, and music - making its videos significantly more convincing than those of previous models.
The tool is accessible through a dedicated app with a social media-style feed where users can share their creations. It also includes a "cameo" feature that allows users to insert their own face and voice into AI-generated scenarios.
OpenAI has positioned Sora 2 as a creative tool for content creators. However, the technology's capacity to generate highly realistic videos raises immediate concerns about misuse, particularly involving children and young people.

How Sora 2 is Already Being Misused
Within the first week of release, Sora 2 was used to create deeply problematic content, including:
Deepfakes of Deceased Public Figures
Users have generated videos depicting deceased celebrities and historical figures in disrespectful or violent scenarios. Examples include videos of theoretical physicist Stephen Hawking being physically assaulted, skateboarding, or taking part in boxing matches. Other deceased figures, including Michael Jackson, Martin Luther King Jr., John F. Kennedy, Robin Williams, and Queen Elizabeth II, have had their likenesses similarly misused in videos that range from the absurd to the offensive.
These videos demonstrate how easily the technology can be used to manipulate someone's likeness without consent - and importantly, how difficult it is to protect the dignity of individuals who can no longer speak for themselves.
Copyright Infringement and Brand Misuse
The platform has been flooded with AI-generated videos featuring copyrighted characters from franchises including Pokémon, South Park, Nintendo properties, and major film studios. Whilst copyright violations may seem less immediately harmful than other forms of misuse, they show how readily users will ignore guidelines and override purported restrictions.
Lack of Effective Consent Controls
While living individuals should theoretically have control over whether their likeness can be used through Sora 2's "cameo" system, deceased individuals have no such protection. There is no mechanism for estates or families to opt out or request removal of content misusing a deceased person's image.
There are serious concerns about whether such tools could be misused to insert children's likenesses into inappropriate, harmful or abusive content.

What Guardrails Does OpenAI Say Are in Place?
OpenAI has stated that Sora 2 includes several safety measures:
- Age restrictions: Stronger protections for younger users, including limitations on mature output
- Content moderation: Restrictions on creating realistic videos of identifiable people without their consent
- Upload restrictions: Limited ability to upload photorealistic images of people
- Provenance signals: Technical markers, such as C2PA metadata, to help identify AI-generated content (a brief illustration of checking for these follows this list)
- Reporting systems: Mechanisms for users to report misuse
- Cameo verification: One-time identity verification for users who wish to appear in videos
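For technically minded staff, provenance metadata can sometimes be inspected directly. The sketch below is a minimal, illustrative example only: it assumes the open-source c2patool command-line utility from the Content Authenticity Initiative (github.com/contentauth/c2patool) is installed, and "video.mp4" is a placeholder file name. Crucially, the absence of provenance data does not prove a video is authentic, as such metadata is easily stripped when content is re-uploaded or screen-recorded.

```python
import subprocess

# Minimal sketch: check a downloaded video for C2PA provenance metadata.
# Assumes the open-source c2patool CLI (github.com/contentauth/c2patool)
# is installed and on the PATH; "video.mp4" is a placeholder file name.
result = subprocess.run(
    ["c2patool", "video.mp4"],
    capture_output=True,
    text=True,
)

if result.returncode == 0 and result.stdout.strip():
    # c2patool prints the provenance manifest when one is present
    print("C2PA provenance data found:")
    print(result.stdout)
else:
    # No manifest is NOT proof of authenticity: metadata is easily stripped
    print("No C2PA provenance data detected.")
```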
However, these guardrails have already been breached in significant ways. The prevalence of deepfakes featuring deceased public figures demonstrates that the system's protections are insufficient.
This consistent pattern - launch first, fix later - raises serious concerns about how quickly similar tools from other companies will appear, potentially with even fewer safeguards.
The Wider Landscape: What Comes Next
Sora 2 is not an isolated development. It represents the cutting edge of AI video generation technology, but other companies are developing similar tools. We can expect:
- Competing platforms with varying levels of safety measures
- Open-source alternatives that may have no content restrictions whatsoever
- Rapid proliferation across different jurisdictions, making enforcement challenging
- Integration into existing social media platforms, increasing accessibility
- Lower barriers to entry, making these tools available to younger users
This technology is unlikely to go away. Schools and parents need to prepare for a future where AI-generated video may become commonplace - and where distinguishing real from fake becomes increasingly difficult.
What This Means for Schools
Schools should expect AI-generated video to become an escalating safeguarding concern.

Children and young people are likely to encounter - and potentially create - this content. The risks include:
Bullying and Harassment
Students may create videos depicting classmates in embarrassing, violent, or sexual scenarios. Even if the video is clearly fake, the emotional harm to the victim is real. The ease of creating such content lowers the barrier for bullying behaviour.
Sexual Exploitation
AI image and video generation tools can be - and are being - used to create child sexual abuse material (CSAM). This may include:
- Face-swapping onto sexual content
- Creating entirely synthetic sexual imagery of real or AI-generated children
- 'Sextortion' attempts using AI-generated "evidence"
In the UK, creating or sharing such images constitutes a serious criminal offence, even when the images are AI-generated.
Misinformation and Reputational Harm
Students may create videos depicting teachers, school leaders, or other students saying or doing things they never did. This can damage reputations, undermine trust, and create serious safeguarding concerns.
It is also important to remember that AI-generated videos that do not depict real people can cause serious harm - for example, fabricated disasters or crime scenes.
Normalisation of Manipulation
When children grow up in an environment where video evidence can be easily fabricated, it erodes trust in authentic content and may normalise manipulative behaviour.
The Addictive Design Factor
Sora 2 is designed as a social media platform with a feed-based interface optimised for engagement. Like other social media platforms, it uses psychological techniques to encourage continued use. Schools should be aware that students may become absorbed in creating, viewing, and sharing AI-generated content, potentially to the detriment of their wellbeing, sleep, and academic work.
Recommendations for Schools
1. Update Safeguarding Policies
Ensure your safeguarding and online safety policies explicitly address AI-generated content, including synthetic images and videos. Make clear that creating or sharing harmful AI-generated content will be treated as seriously as sharing any other harmful content.
2. Include in Online Safety Education
Integrate information about AI video generation into your online safety curriculum. Help students understand:
- How AI video generation works
- Why consent matters - even for "fake" content
- The real emotional harm caused by AI-generated abuse
- How to critically evaluate video content
- The potential legal implications of creating harmful content
While it is essential to prepare the content and delivery of such education in a thoughtful and grounded way, it is equally important to respond in a timely manner to this escalating situation.
Parents and carers must also be made aware of these issues.
3. Establish Clear Response Protocols
Develop clear procedures for responding when AI-generated harmful content involving students (or staff) comes to light. This should include:
- Immediate steps to support the victim
- Assessment of whether the content may constitute a criminal offence
- When and how to involve law enforcement
- How to document evidence (noting that possessing certain images may itself be an offence - seek police guidance)
- Communication with parents and carers
4. Know When to Involve Law Enforcement
In the UK, schools should involve the police immediately if AI-generated content involves:
- Indecent images of children
- Serious harassment or threats
- Potential criminal offences under the Communications Act or Malicious Communications Act
Contact your local police safeguarding team or report through CEOP (Child Exploitation and Online Protection).
In other jurisdictions, familiarise yourself with local reporting requirements and maintain relationships with local law enforcement who understand online safety issues. If in doubt, ask law enforcement for guidance.
5. Create a Supportive Environment
Ensure students know they can report concerns about AI-generated content without fear of punishment. Victims of AI-generated abuse may feel shame or fear they will be blamed - make clear this is never their fault.
6. Work with Parents and Carers
Keep parents informed about emerging technologies and risks. Provide clear guidance on what AI-generated content is, why it matters, and how they can support their children. Consider hosting parent information sessions on AI and digital safety.
7. Stay Informed
This technology is evolving rapidly. Designate a member of staff to monitor developments in AI-generated content and update policies and practices accordingly. The Safe AI for Children Alliance (SAIFCA) and organisations such as the UK Safer Internet Centre provide regular updates.
Guidance for Parents and Carers
Have Open, Non-Judgmental Conversations
Talk with your children about AI-generated video before problems arise. Explain what the technology can do, why it can be harmful, and what your family values are regarding consent and respect for others.
If your child encounters or creates problematic content, respond calmly. Your reaction can determine whether they feel safe coming to you with problems in future.

Teach Critical Thinking
Help your children develop healthy scepticism about online content. Discuss questions like:
- Could this video be AI-generated?
- Who created it and why?
- How can we verify if something is real?
Set Clear Expectations
Make clear that creating AI-generated content depicting others without their consent is unacceptable in your family - and may be illegal. Ensure your children understand that "it's just AI" or "it's just a joke" doesn't make harmful content acceptable.
Consider Access Carefully
Think carefully about whether your child should have access to AI video generation tools. Consider:
- Their age and maturity
- Whether they understand consent and digital citizenship
- How you will monitor their use
- The potential for addictive use patterns
Many of these platforms set a minimum age of 13 or 18, though age verification is often minimal.
Be Aware of the Engagement Factor
AI video generation apps are designed to be engaging and may use the same psychological techniques as ‘traditional’ social media to encourage continued use. Be alert to signs your child is spending excessive time creating or consuming AI-generated content, particularly if it's affecting sleep, schoolwork, or wellbeing.
Know How to Get Help
If your child is targeted by AI-generated harmful content:
- Stay calm - Your response shapes how safe they feel
- Listen without judgment - This is not their fault
- Seek guidance on evidence - Contact law enforcement promptly for advice on whether and how evidence (such as screenshots) should be captured, as possessing some content may itself be an offence
- Report it - Contact police and report to the platform
- Use removal tools - The Take It Down Tool (takeitdown.ncmec.org) can help remove non-consensual intimate images from participating platforms ('Report Remove' is also available in the UK)
- Access support services - Seek professional support for your child if needed
What We Don't Yet Know
It's important to acknowledge significant uncertainties about AI-generated video content:
Long-Term Cognitive Effects
We don't yet know how growing up in an environment where video can be easily fabricated will affect children's:
- Ability to trust evidence
- Relationship with reality
- Development of critical thinking
- Sense of privacy and bodily autonomy
Societal Impacts
The broader societal effects of ubiquitous AI-generated video are unclear. We could see:
- Erosion of trust in institutions and media
- Changes in how evidence is treated in legal proceedings
- New forms of manipulation and fraud
- Shifts in how we think about truth and authenticity
Mental Health Implications
Being targeted by AI-generated deepfakes can cause serious psychological harm. However, we're still learning about:
- Long-term trauma from AI-generated abuse
- The psychological impact of consuming large amounts of synthetic media
- How the addictive design of these platforms affects young people's mental health
We urge further research into these emerging questions to ensure that ethical and psychological considerations are not left behind in the race for innovation.
Future Capabilities
AI video generation technology is advancing rapidly. Current limitations around video length, quality, and consistency will likely be overcome. We should prepare for a future where AI-generated video is indistinguishable from authentic footage.
Moving Forward
Sora 2 represents a significant development in AI technology, but it is likely just the beginning. Schools and parents need to prepare for a landscape where AI-generated video becomes commonplace.
Some of the most important protective factors are:
- Open communication between adults and children
- Strong digital citizenship education that emphasises consent, respect, and critical thinking
- Clear policies and procedures in schools and families
- Effective reporting and support systems for victims
- Ongoing advocacy for stronger laws and platform accountability
At the Safe AI for Children Alliance (SAIFCA), we continue to call for:
- Stronger regulation of AI-generated content tools, particularly regarding children
- Greater platform accountability for harmful content and safeguarding
- Better age verification systems
- Investment in detection technology to identify AI-generated content
- International cooperation on enforcement
- Mandatory safety testing before AI tools are released to the public
This is an evolving situation. We will continue to monitor developments and provide updated guidance as the landscape changes.

Resources and Support
UK
For Schools:
- UK Safer Internet Centre: www.saferinternet.org.uk
- CEOP (Child Exploitation and Online Protection): Report abuse at www.ceop.police.uk/Safety-Centre
- Internet Watch Foundation (IWF): www.iwf.org.uk
- NSPCC Learning: Resources for professionals at learning.nspcc.org.uk
- NSPCC Helpline: 0808 800 5000 | www.nspcc.org.uk
- Childline (for children): 0800 1111 | www.childline.org.uk
- Internet Matters: www.internetmatters.org
Reporting:
- Police: 101 (non-emergency) or 999 (emergency)
- CEOP Safety Centre: For concerns about online child sexual exploitation
US / International
Reporting:
- NCMEC CyberTipline: report.cybertip.org
- Take It Down Tool (NCMEC): Help removing intimate images | takeitdown.ncmec.org
This briefing was prepared by the Safe AI for Children Alliance (SAIFCA) on 9th October 2025 following the release of Sora 2. Given the rapidly evolving nature of AI technology, we recommend checking our website regularly for updates.
Note: This article discusses potential misuse of AI technology involving children. The content is intended solely for protective and educational purposes, in line with safeguarding practice.