Newsclip — Social News Discovery

The Dark Side of Sora 2: AI Children in Disturbing Videos

December 22, 2025
  • #AIethics
  • #ChildSafety
  • #DigitalContent
  • #Sora2
  • #TechRegulation
The Concerning Use of AI Technologies

On October 7, a TikTok account, @fujitiva48, drew widespread attention with a provocative video advertising a dubious toy for children. The clip parodied a typical children's commercial, but its implications were anything but innocent: it showed a photorealistic young girl holding a toy resembling a sex toy, igniting heated discussion across social media about the ethical boundaries of AI-generated content.

“Hey so this isn’t funny,” said one commenter. “Whoever made this should be investigated.”

Such reactions reflect not just distaste but the urgent need for a serious reckoning with the misuse of AI technologies. They are a reminder of how easily AI-generated content can blur the line between reality and fiction, especially when it involves children.

Understanding Sora 2 and Its Capabilities

OpenAI's Sora 2 is an advanced video generator that has become a tool for creators looking to push boundaries. Released in late September 2025, Sora 2 allows users to create strikingly lifelike videos, but it has also become a vehicle for controversial and often harmful content. Within a week of its launch, TikTok was flooded with videos like the one featuring the Vibro Rose toy, content that sits in a gray zone of both law and morality.

While some of these clips may appear merely whimsical, the implications of allowing AI to replicate children's likenesses in potentially exploitative content cannot be overstated. Reports indicate that the legal status of AI-generated fetish content remains ambiguous, complicating efforts to regulate such material effectively.

Surging Trends and Alarming Statistics

According to the Internet Watch Foundation (IWF), the volume of AI-generated child sexual abuse material, or CSAM, has doubled within a year: 426 such reports were documented between January and October 2025 alone. More chilling still, 56% of this content falls into the most serious category of abuse.

IWF CEO Kerry Smith emphasizes how this AI-generated content pointedly targets young girls:

“Often, we see real children's likenesses being commodified to create nude or sexual imagery. It is yet another way girls are targeted online.”

The Need for Legal Reform

In response to the influx of AI-generated CSAM, the UK has introduced an amendment to its Crime and Policing Bill that would strengthen regulation of AI technologies, requiring that tools like Sora be equipped with safeguards against the production and distribution of harmful content. Meanwhile, 45 U.S. states have passed laws criminalizing AI-generated CSAM, though whether these measures go far enough remains under scrutiny.

What Are Tech Companies Doing?

OpenAI has made strides to combat misuse of Sora 2, instituting measures designed to keep children's likenesses from being exploited. However, content creators frequently find ways to game these systems. Even with self-imposed safeguards in place, egregious content continues to slip through the cracks, raising serious questions about the effectiveness of these policies.

Implications for TikTok and Other Platforms

TikTok users have reportedly circumvented the platform's content moderation, allowing increasingly harmful material to be uploaded. Although TikTok has removed inappropriate content and banned offending accounts, the continued spread of AI-generated clips sensationalizing even the most disturbing subjects, including historical tragedies, raises considerable red flags.

The Broader Discussion: A Call to Action

Efforts to combat this disturbing trend must start at the inception of technology design itself. As articulated by advocates, we need platforms to be “safe by design.” Tech companies must proactively prevent the creation of harmful materials, not reactively respond to them once they surface. The discussion extends beyond legislation to the ethics of creation, raising essential questions: How can we equip technology with safeguards against exploitation before it emerges as a problem?

Conclusion

The rapid emergence of AI tools like Sora 2 offers unprecedented opportunities for creativity but also represents a frontier rife with ethical dilemmas. As society grapples with these challenges, our collective vigilance, coupled with stringent regulatory measures, will be essential in navigating the complexities of AI's role in content creation. We must remain steadfast in our commitment to protect the most vulnerable among us.

Source reference: https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/
