The Concerning Use of AI Technologies
On October 7, a TikTok account, @fujitiva48, drew widespread attention with a provocative video styled as a typical children's toy commercial, but the implications were anything but innocent. The clip, which showed a photorealistic young girl with a toy resembling a sex toy, ignited heated discussion across social media about the ethical boundaries of AI-generated content.
“Hey so this isn’t funny,” said one commenter. “Whoever made this should be investigated.”
Such reactions highlight not just public distaste but the urgent need for serious discussion about the misuse of AI technologies. The incident is a reminder of how easily AI-generated content can blur the line between reality and fiction, especially when it involves children.
Understanding Sora 2 and Its Capabilities
Released in late September 2025, OpenAI's Sora 2 is an advanced video generator that lets users create strikingly lifelike videos. It has also become a vehicle for controversial and often harmful content: within a week of its launch, TikTok was flooded with videos like the one featuring the Vibro Rose toy, blurring the lines of law and morality.
While some of these clips may appear merely whimsical, the implications of allowing AI to replicate children's likenesses in potentially exploitative content cannot be overstated. Reports indicate that the legal status of AI-generated fetish content remains ambiguous, complicating efforts to regulate such material effectively.
Surging Trends and Alarming Statistics
According to the Internet Watch Foundation, the volume of AI-generated child sexual abuse material, or CSAM, has doubled within a year: 426 such reports were documented between January and October 2025 alone. More chilling still, 56% of that content falls into the most serious category of abuse.
Internet Watch Foundation CEO Kerry Smith emphasizes that AI-generated content pointedly targets young girls:
“Often, we see real children's likenesses being commodified to create nude or sexual imagery. It is yet another way girls are targeted online.”
The Need for Legal Reform
In response to the influx of AI-generated CSAM, the UK has introduced a new amendment to its Crime and Policing Bill aimed at fortifying regulations around AI technologies. This amendment aims to ensure that tools like Sora are equipped with safeguards to prevent the production and distribution of harmful content. Meanwhile, 45 states in the U.S. have implemented laws to criminalize AI-generated CSAM, but the sufficiency of these measures remains under scrutiny.
What Are Tech Companies Doing?
OpenAI has taken steps to combat misuse of Sora 2, instituting measures designed to keep children's likenesses from being exploited. However, creators frequently find ways to game these systems. Even with self-imposed safeguards, egregious content continues to slip through, raising serious questions about the effectiveness of these policies.
Implications for TikTok and Other Platforms
TikTok users have reportedly circumvented the platform's content moderation, allowing harmful material to be uploaded. Although TikTok has removed inappropriate content and banned offending accounts, the continued spread of AI-generated clips sensationalizing even the most disturbing subjects, including historical tragedies, raises considerable red flags.
The Broader Discussion: A Call to Action
Efforts to combat this disturbing trend must start at the inception of technology design itself. As advocates put it, platforms need to be “safe by design”: tech companies must proactively prevent the creation of harmful material, not react to it after it surfaces. The discussion extends beyond legislation to the ethics of creation itself, raising an essential question: how can safeguards against exploitation be built into technology before problems emerge?
Conclusion
The rapid emergence of AI tools like Sora 2 offers unprecedented opportunities for creativity but also represents a frontier rife with ethical dilemmas. As society grapples with these challenges, our collective vigilance, coupled with stringent regulatory measures, will be essential in navigating the complexities of AI's role in content creation. We must remain steadfast in our commitment to protect the most vulnerable among us.
Key Facts
- AI Technology: OpenAI's Sora 2 is an advanced video generator capable of creating lifelike videos.
- Content Concerns: AI-generated child sexual abuse material (CSAM) has reportedly doubled from 199 to 426 cases between January and October 2025.
- Target Audience: Sora 2-generated content often targets young girls, leading to ethical and legal concerns.
- Legal Landscape: The UK has introduced new amendments to strengthen regulations on AI technologies related to CSAM.
- TikTok Oversight: TikTok users have reportedly circumvented content moderation policies, allowing inappropriate materials to resurface.
- Call for Action: Advocates urge for technology to be designed with safety measures that prevent the creation of harmful content.
Background
The emergence of AI-generated content, particularly via Sora 2, has raised urgent ethical and safety concerns, highlighting the vulnerabilities in digital content creation and the exploitation of children's likenesses.
Quick Answers
- What is Sora 2?
- OpenAI's Sora 2 is an advanced video generator that allows users to create lifelike videos.
- What issues have arisen from the use of Sora 2?
- The use of Sora 2 has led to the creation of disturbing content, including AI-generated child sexual abuse material.
- What measures are being taken to regulate AI-generated CSAM?
- The UK has introduced a new amendment to its Crime and Policing Bill aimed at regulating AI technologies and preventing CSAM.
- How has TikTok responded to inappropriate content generated by Sora 2?
- TikTok has removed videos and banned accounts that uploaded content created on other AI platforms violating minor safety policies.
- What statistics are reported regarding AI-generated CSAM?
- Reports indicate that the volume of AI-generated child sexual abuse material has doubled from 199 to 426 cases in 2025.
- What do advocates recommend for technology design?
- Advocates recommend that technology should be 'safe by design' to prevent the creation of harmful materials.
Frequently Asked Questions
What ethical concerns are associated with Sora 2?
Sora 2 raises ethical concerns about the exploitation of children's likenesses and the potential for creating harmful content.
How are creators circumventing regulations with Sora 2?
Creators are reportedly finding ways to bypass moderation systems, allowing harmful materials to be uploaded and shared on platforms like TikTok.
What kind of content has been created using Sora 2?
Content generated using Sora 2 includes disturbing videos like fake commercials featuring children with inappropriate items.
What actions has OpenAI taken regarding the misuse of Sora 2?
OpenAI has implemented measures to prevent the exploitation of children's likenesses and banned accounts creating inappropriate content.
Source reference: https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/