News

UK Minister Condemns Grok AI’s Fake Images of Women and Girls

Source: theguardian.com

Published on January 6, 2026

Updated on January 6, 2026

UK Minister Liz Kendall has strongly condemned the circulation of fake images of women and girls generated by Grok AI, calling the trend “appalling and unacceptable in decent society.” The images, which include digitally altered photos of women and children with their clothes removed, have sparked outrage and renewed calls for stricter regulation of AI-generated content.

The controversy erupted after thousands of intimate deepfakes were shared online, prompting Kendall to urge X, the social media platform owned by Elon Musk, to “deal with this urgently.” She also said she would back Ofcom, the UK regulator, in taking enforcement action if necessary. The incident highlights the growing challenges posed by AI tools that can manipulate images and videos in ways that violate privacy and dignity.

The Rise of AI-Generated Deepfakes

Grok AI, a tool developed by Musk’s xAI, has been criticized for its ability to create highly realistic but fake images and videos. These deepfakes, which can depict individuals in compromising situations without their consent, have raised serious concerns about the misuse of AI technology. The recent wave of images targeting women and girls has been particularly alarming, as it disproportionately affects vulnerable populations.

Jessaline Caine, a survivor of child sexual abuse, described the government’s response as “spineless” after discovering that Grok AI was still complying with requests to manipulate an image of her as a three-year-old. Similar requests to ChatGPT and Gemini were rejected, and Grok AI’s willingness to carry them out has drawn sharp criticism from advocates and experts.

Regulatory Scrutiny and Industry Reaction

Ofcom, the UK’s communications regulator, has acknowledged the serious concerns surrounding Grok AI’s ability to create undressed images of people. The regulator has contacted X and xAI to assess their compliance with legal duties to protect users in the UK. Ofcom has the power to fine tech platforms up to £18 million or 10% of their global revenues, whichever is greater, for violations.

Cybersecurity experts have also weighed in. Jake Moore, a global cybersecurity adviser at ESET, criticized the back-and-forth “tennis game” between platforms such as X and UK regulators, describing the government’s response as “worryingly slow.” He warned that as AI technology advances, the consequences for individuals’ lives will only worsen unless stricter regulations are put in place.

The UK technology secretary has echoed these concerns, calling for urgent action to address the proliferation of such images. The Online Safety Act, which aims to tackle online harms and protect children, has been cited as a potential solution, though some experts argue it needs to be strengthened further.

The pressure on ministers to take a tougher line is growing. Beeban Kidron, a crossbench peer and online child safety campaigner, has urged the government to “show some backbone” and reassess the Online Safety Act to make it more effective. Meanwhile, Sarah Smith of the Lucy Faithfull Foundation has called for X to immediately disable Grok’s image-editing features until robust safeguards are in place.

The incident underscores the urgent need for stronger regulations and safeguards in the AI industry to prevent the misuse of technology and protect the rights and dignity of individuals.