News

AI and Deepfakes: The Legal Battle Against Image-Based Sexual Abuse

Source: impartialreporter.com

Published on January 17, 2026

Updated on January 17, 2026

The intersection of artificial intelligence (AI) and deepfake technology has sparked an urgent debate about the legal and ethical implications of image-based sexual abuse. Recent headlines have highlighted the ability of AI tools, such as Grok on the platform formerly known as Twitter, to generate non-consensual sexually explicit images, raising serious concerns about the safety and dignity of individuals, particularly women and girls.

Kate Nicholl, an Alliance Party MLA for South Belfast and the party’s Economy & Brexit spokesperson, has been at the forefront of this issue, advocating for stronger legal measures to address the misuse of AI. In a recent article, Nicholl emphasized the urgent need to confront the technology that enables such abuse, arguing that legislation alone is insufficient to tackle the problem.

The Rise of AI-Enabled Image-Based Sexual Abuse

The ability of AI tools to 'undress' women or create sexually explicit images without consent has become a pressing concern. This misuse of technology is not only a violation of individual privacy and autonomy but also a form of violence against women and girls. Nicholl highlighted the gendered nature of this abuse, noting that while men can also be victims, women and girls are overwhelmingly targeted.

The English Children’s Commissioner’s 2025 report underscored the alarming rise in the use of 'nudification' tools and sexually explicit AI-generated images involving children. This issue is particularly urgent, as these technologies are being accessed by young people themselves, often without understanding the long-term consequences.

Legal and Societal Responses

Nicholl stressed the importance of a holistic response to this problem, involving parents, educators, platforms, policymakers, and communities. She announced her intention to introduce a change in the law that would make it illegal to provide the apps or technology used to create non-consensual, sexually explicit images. This move is part of a broader effort to ensure that legislation keeps pace with the capabilities of the technology itself.

Justice Minister Naomi Long is also addressing deepfakes through the forthcoming Justice Bill. However, Nicholl emphasized that regulating distribution alone is not enough: the focus must also be on preventing the creation of such images in the first place, which she argued is the most effective way to stop the harm.

The non-consensual creation of sexually explicit images is a profound violation of dignity, autonomy, and personal safety. Nicholl called for a societal response that recognizes the human beings behind this harm and the need for meaningful safeguards and legal consequences.

The misuse of AI to generate non-consensual sexual images is a complex and urgent issue that requires a multi-faceted approach. Nicholl's advocacy highlights the need for legal reform, societal awareness, and collective action to protect individuals, especially children, from the devastating consequences of this emerging form of abuse.