Aid Agencies Face Criticism for Using AI-Generated 'Poverty Porn' Images

Source: theguardian.com

Published on October 20, 2025

Updated on October 20, 2025

AI-generated images depicting poverty raise ethical concerns

AI-Generated Images Spark Ethical Debate Among Aid Organizations

Aid organizations are increasingly turning to AI-generated images to depict extreme poverty in their communications, a trend that has sparked significant ethical concern. Critics argue that these images, often described as "poverty porn," perpetuate harmful stereotypes and raise questions about consent and exploitation. The shift is driven by budget constraints and the difficulty of obtaining consent from vulnerable populations, but critics say the convenience comes at the risk of dehumanizing the very people the organizations aim to help.

The Rise of Synthetic Suffering

Researcher Arsenii Alenichev of the Institute of Tropical Medicine in Antwerp has documented more than 100 AI-generated images used in social media campaigns about hunger and sexual violence. The images often depict exaggerated, stereotypical scenes, such as children in muddy water or a tearful African girl in a wedding dress. Alenichev warns that these visuals amount to a new form of "poverty porn," exploiting suffering for shock value. Kate Kardol, an NGO communications consultant, echoes these concerns, noting that debates over the ethics of such imagery are long-standing.

Cost vs. Consent: The Ethical Dilemma

Noah Arnold, from Fairpicture, highlights two primary factors driving the use of AI imagery: cost and consent. With shrinking budgets, NGOs are seeking cheaper alternatives to traditional photography. Obtaining consent from vulnerable populations can be challenging and time-consuming, making AI-generated images an appealing shortcut. However, this convenience raises serious ethical questions about the potential to dehumanize and further marginalize those depicted.

Platform Responsibility and the Echo Chamber of Bias

Joaquín Abela, CEO of Freepik, argues that responsibility for how such images are used lies with the consumers who buy them, not the platforms that host them. Freepik tries to mitigate bias by diversifying its photo library, but Alenichev warns that biased images can proliferate online and end up in the training data of future AI models. The result is a feedback loop: models trained on these images learn to associate poverty with particular racial and ethnic groups, then reproduce those associations in the images they generate.

The UN's Misstep and Subsequent Apology

Even the United Nations has faced criticism for using AI-generated images. Last year, the UN posted a YouTube video featuring AI-generated "re-enactments" of sexual violence in conflict, including AI-generated testimony from a Burundian woman. The video was removed after The Guardian raised concerns. A UN Peacekeeping spokesperson acknowledged the "improper use of AI" and the risk it posed to information integrity, underscoring the dangers of deploying generative models without careful ethical consideration.

The Ethical Path Forward

While AI-generated imagery offers a cost-effective alternative, it carries the risk of entrenching harmful stereotypes. Organizations must prioritize ethical use, investing in authentic representation and informed consent. The pursuit of impactful storytelling should never come at the expense of the dignity of the people depicted. Further debate is needed to ensure that AI is used responsibly in global health communications, balancing innovation with ethical obligations.