A Call for Genuine Action Against Online Sexual Exploitation
In a society where women and marginalized communities constantly face online harassment, Liz Kendall's recent moves against the AI nudification tool on X have sparked vital conversations. However, as I argue in this piece, Kendall's measures risk being a momentary distraction rather than a lasting solution.
After a woman posted a seemingly innocuous photo in a sari on X, the platform's AI, Grok, was immediately exploited, with users tagging it to create nonconsensual intimate imagery at an alarming rate. This incident underlines a pervasive issue: the need for tech companies to truly prioritize user safety rather than simply responding to backlash.
"Creating nonconsensual intimate images will become a criminal offence this week, and we will also target the supply of nudification apps," declared Kendall, seemingly addressing the cries for action against these invasive practices.
The Flaw in the Reactionary Approach
While I welcome the intention behind Kendall's actions, I must underscore that they do not go far enough. The problem lies in the very systems that allow such harmful content to proliferate. Placing Grok's image-generation feature behind a paywall may shield the abuse from public view, but it also allows the company to profit from the very culture of dehumanization that spawns it.
News reports indicate that the AI bot has stopped generating bikini images of women but not of men, raising serious questions about the consistency and sincerity of its safeguards.
Conversations Beyond Borders
Another critical dimension of this issue is the transnational nature of technology. While the UK may implement stricter regulations, the lack of cooperation with US tech giants complicates matters. The Trump administration's backward stance on AI provides little incentive for American companies to regulate their products, pointing to a dire need for a global conversation on AI ethics.
The Trump administration has made it clear that it wants to entrench American AI dominance with minimal regulation. Kendall's proposed legislation therefore cannot be effective in isolation; international collaboration is needed to truly curb the risks posed by AI technologies.
Moving from Reaction to Prevention
Regulation as it stands is reactive: harm must occur before punitive measures can be applied. This approach misses the fundamental point. We need to shift our focus from merely responding to harm to preventing it altogether. Effective regulation must involve proactive measures such as independent audits, mandatory input filtering, and licensing requirements for tech companies.
This brings us back to the heart of the matter: how do we ensure that AI technologies contribute positively to society rather than reinforce harmful stereotypes and abusive practices? This is where my work at the AI Accountability Lab comes into play, as we advocate for comprehensive measures that focus on prevention.
Conclusion: A Rallying Cry for Change
I'm encouraged by the awakening that Kendall's measures may provoke, yet I cannot stress enough that this is only the beginning. If we continue to react to abuse without building robust safeguards, we allow harmful models to thrive on platforms designed for communication. The real question remains: how do we craft a digital landscape that respects and protects all users? The journey is long, but fostering ongoing dialogue is a critical step forward.
Nana Nwachukwu is an AI governance expert and a PhD researcher at Trinity College Dublin.
Source reference: https://www.theguardian.com/commentisfree/2026/jan/14/liz-kendall-x-grok-nudification