Aid Agencies Face Criticism for Using AI-Generated 'Poverty Porn' Images

Source: theguardian.com

Published on October 20, 2025 at 11:34 AM

What Happened

A disturbing trend is emerging: aid organizations are increasingly using AI-generated images of extreme poverty in their communications. These images, critics say, often perpetuate harmful stereotypes and raise serious ethical questions about consent and exploitation. Driven by budget cuts and a desire to sidestep the complexities of obtaining consent, some organizations are turning to readily available AI visuals, sparking outrage among global health professionals.

The Rise of Synthetic Suffering

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, has documented over 100 AI-generated images used in social media campaigns addressing hunger and sexual violence. These images frequently depict exaggerated, stereotypical scenes, such as children in muddy water or a tearful African girl in a wedding dress. Alenichev argues that such images represent a new form of “poverty porn.” Kate Kardol, an NGO communications consultant, echoed these concerns, recalling earlier debates about the ethical implications of exploiting poverty for shock value.

Cost vs. Consent: A Dangerous Trade-Off

According to Noah Arnold of Fairpicture, the shift towards AI imagery is fueled by two primary factors: cost and consent. With shrinking budgets, NGOs are seeking cheaper alternatives to traditional photography. Furthermore, obtaining informed consent from vulnerable populations can be challenging and time-consuming, making AI-generated images an appealing shortcut. Still, this convenience comes at a steep price: the potential to dehumanize and further marginalize the very people these organizations aim to help.

Platform Responsibility and the Echo Chamber of Bias

Joaquín Abela, CEO of Freepik, argues that the responsibility for using such images lies with the consumers, not the platforms that host them. He claims that Freepik attempts to curb biases by injecting diversity into its photo library. However, Alenichev warns that the proliferation of biased images can worsen the problem. These images can filter back into the internet and be used to train future AI models, amplifying existing prejudices. This creates a vicious cycle where algorithms learn to associate poverty with specific racial or ethnic groups, perpetuating harmful stereotypes.

The UN's Misstep and Subsequent Apology

Even the United Nations has stumbled into this ethical minefield. Last year, the UN posted a YouTube video featuring AI-generated “re-enactments” of sexual violence in conflict. The video, which included AI-generated testimony from a Burundian woman, was ultimately removed after The Guardian contacted the UN for comment. A UN Peacekeeping spokesperson acknowledged that the video showed an “improper use of AI” and posed risks to information integrity. This incident highlights the potential dangers of using generative models without careful consideration of their ethical implications.

Our Take

The allure of cheap and readily available AI imagery is undeniable, but the ethical costs are too high. While platforms like Freepik may attempt to mitigate biases, the responsibility ultimately lies with organizations to use these tools responsibly. The pursuit of impactful storytelling should never come at the expense of dignity and respect for vulnerable populations. A more ethical approach involves investing in authentic representation, prioritizing consent, and actively challenging harmful stereotypes.

Moving Forward: Towards Ethical AI Imagery

The use of AI-generated imagery in global health communications presents both challenges and opportunities. While it can be a cost-effective solution, it also risks entrenching the very stereotypes aid organizations should be dismantling. The key is to use these tools thoughtfully and ethically, prioritizing the dignity and accurate representation of the people they depict. Further debate is needed on the use of AI in representing vulnerable people.