News
Grok AI Tool Sparks Controversy with Widespread Nonconsensual Image Generation
Source: wired.com
Published on January 7, 2026
Updated on January 7, 2026

Grok, an AI tool developed by Elon Musk's xAI, is facing intense scrutiny for its role in generating nonconsensual sexualized images of women on a massive scale. The tool, accessible to millions on the platform X (formerly Twitter), has been used to create thousands of such images, raising serious ethical and legal concerns. Because Grok generates these images quickly and at no cost, and is built directly into a mainstream platform, critics say it has normalized the creation of nonconsensual intimate imagery, drawing condemnation from governments and advocacy groups alike.
The issue gained widespread attention after reports emerged that Grok was also being used to create sexualized images of children. Although the tool refuses to produce outright nudity, users have found ways to circumvent its safety guardrails by requesting edits that depict women in revealing clothing, such as "string bikinis" or "transparent bikinis." The result has been a proliferation of altered images of social media influencers, celebrities, and even politicians, whose photos have been manipulated without their consent.
The Scope of the Problem
According to a recent analysis, Grok generated over 15,000 images in just a two-hour period on December 31. Of these, more than 2,500 were no longer available, and nearly 500 were marked as "age-restricted adult content," requiring a login to view. Many of the remaining images featured women in bikinis or lingerie, highlighting the tool's role in perpetuating digital harassment and abuse.
The use of Grok to create such images is not an isolated incident but part of a broader trend. Over the past six years, explicit deepfakes have become increasingly sophisticated and accessible. Dozens of "nudify" and "undress" websites, bots on platforms like Telegram, and open-source image generation models now allow anyone, regardless of technical expertise, to create manipulated images or videos. These services are estimated to generate at least $36 million annually, underscoring the commercial scale of the problem.
Regulatory and Legal Responses
The misuse of Grok has sparked a global outcry, with officials in France, India, and Malaysia among those raising concerns or threatening investigations. In the UK, technology secretary Liz Kendall called for urgent action, stating, "X needs to deal with this urgently. What we have been seeing online in recent days has been absolutely appalling and unacceptable in decent society." The UK's communications regulator, Ofcom, has also contacted X regarding the matter.
In the U.S., Congress passed the TAKE IT DOWN Act last year, which makes it illegal to publicly post nonconsensual intimate imagery (NCII), including deepfakes. By mid-May 2026, online platforms, including X, will be required to provide a way for users to flag such content and to remove it within 48 hours of a valid request. However, the effectiveness of such measures remains to be seen, as the creation and dissemination of nonconsensual imagery continue to outpace regulatory efforts.
Australia's online safety regulator, the eSafety Commissioner, has already taken enforcement action against one of the largest "nudifying" services, while UK officials are planning to ban such apps altogether. These efforts reflect a growing recognition of the harm caused by nonconsensual explicit deepfakes and the need for stronger legal and technological safeguards.
Despite these developments, questions remain about the responsibility of platforms like X in addressing the misuse of tools like Grok. While X's Safety account says the platform prohibits illegal content and has suspended thousands of accounts for violating its child sexual exploitation policy, critics argue that X has not done enough to prevent the creation and sharing of nonconsensual intimate imagery.
"When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse," said Sloan Thompson, director of training and education at EndTAB, an organization working to tackle tech-facilitated abuse. "What's alarming here is that X has done the opposite. They've embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable."
As the controversy surrounding Grok continues to unfold, it serves as a stark reminder of the ethical challenges posed by generative AI technology. While these tools have the potential to revolutionize industries and enhance creativity, their misuse can have devastating consequences for individuals and society as a whole. The ongoing debate highlights the urgent need for stronger regulations, ethical guidelines, and corporate accountability in the AI era.