No Real Child Was Harmed:

Can AI-Generated Images Make Nonprofit Communications More Ethical?

The camera has often been a dire instrument. In Africa, as in most parts of the dispossessed, the camera arrives as part of the colonial paraphernalia, together with the gun and the bible, diarising events, the exotic and the profound, altering reality, introducing new impulses and confessions, cataloguing the converted and the hanged.
– Yvonne Vera

The more remote or exotic the place, the more likely we are to have full frontal views of the dead and dying. […] These sights carry a double message. They show a suffering that is outrageous, unjust and should be repaired. They confirm that this is the sort of thing which happens in that place. The ubiquity of those photographs, and those horrors, cannot help but nourish belief in the inevitability of tragedy in the benighted or backward—that is, poor—parts of the world.
– Susan Sontag, Regarding the Pain of Others

Does it reflect the reality on the ground? Does it accurately and appropriately convey the story we want to tell?

It is important to avoid images that reduce people to the circumstances of their suffering, and instead to portray them as empowered agents of change rather than as victims or passive actors.

There is a case to be made for the use of AI-generated imagery. Gülsüm Özkaya of IHH International Relief Foundation in Türkiye is writing her MA thesis on this very question, speaking directly with people in crisis-affected communities. In a recent discussion for the Humanitarian Leadership Academy’s Fresh Humanitarian Perspectives podcast, she shared that some research participants prefer real images while others prefer AI-generated ones.

Even where an image is not based on a living person, it may still be a composite drawn from real photographs in training data—meaning the notion of a ‘purely synthetic’ child image may not be clear-cut. 

When it comes to children specifically, generating photorealistic images raises ethical questions, operational grey areas, and risks. With photography, a child’s caregivers can provide informed consent and withdraw permission at any time. With an AI-generated image, once it enters circulation, there is no clear course of redress that I’m aware of if it is modified in inappropriate or harmful ways.

It is worth considering stylistic alternatives to photorealism – illustration, graphics and other non-realistic representations. Elrha has made a case for hand-drawn and cartoon imagery as a thoughtful middle ground. The right choice depends on context.

There is another point to consider: apart from the largest organisations, most nonprofits, particularly in low- and middle-income countries, are under constant pressure when deciding how to spend their limited funds. Investing in communications and marketing – for example, by hiring documentary photographers to take bespoke images for each campaign – means spending less on the work on the ground. It is also something that smaller organisations, such as grassroots groups, simply cannot afford, or at least not regularly. Without a budget for professional photography, it is much harder to take photos that will resonate, especially if any of the risk factors are present.

One of the ways nonprofits and charities navigate marketing imagery on scarce budgets is stock photos. While it might take some time to find the right one, there are now many good platforms that offer high-quality, realistic, diverse and inclusive stock imagery. This matters, because stock imagery is often generic and impersonal, frequently leaning into tropes and stereotypes. Still, this option can be appropriate – for example, when a nonprofit wants to post an article on a given topic, or shine a spotlight on something relevant that is going on, and does not have adequate imagery of its own. Stock visuals are comparable to AI-generated images in that they do not document the organisation’s own work and lack authenticity. But there are situations where the point of an image is not to document a particular situation but, for example, to allude to it more broadly. Of course, full disclosure and transparency are needed to ensure the audience is not misled or manipulated.

AI also comes with an advantage that stock imagery does not have: the ability to personalise an image or simply create it from a detailed description. This, combined with easy access and savings in time and effort, is where AI is clearly unparalleled. Alongside its constantly growing capabilities, this is one of the reasons it is already being used in nonprofit communications. David Girling and Deborah Adesina discussed this recently in a research project, Artificial Authenticity: The Rise of Images Generated by Artificial Intelligence (AI) in Charity and Development Communications, in which they analysed campaigns using AI by major organisations such as Amnesty International, Plan International and the World Wildlife Fund. One of their interesting observations about the response to these campaigns is that what drew the most comments was not the humanitarian issues at stake, but the AI-generated nature of the images attached to them. This was particularly visible in the case of WWF Denmark, which faced public scrutiny for its use of AI tools in a sustainability campaign, with its audience voicing concerns about a clash of values.

Image: WWF Denmark’s The Hidden Cost campaign, created using ChatGPT by AI designer Nikolaj Lykke Viborg.

Is there a way to mitigate this backlash, for example by increasing transparency about AI use and refining the output to obtain the best possible quality? Full disclosure – sharing not just a clear statement that AI was used, but ideally also adding alt text to images disclosing the tool and prompt used, or signposting your AI policy – is important to the ethical standing of the practice, but it is not in itself enough. Girling and Adesina point to a larger scepticism at play:

What is particularly striking is that labelling images as AI-generated did not help much. Even when organisations were transparent — 85% of images in the study were properly disclosed — audiences still shifted into a critical, sceptical mode. Rather than being moved by a cause, they became investigators, scrutinising images for flaws and questioning the ethics of the technology itself.

What, then, are the ethics of the technology itself?

While AI can propose innovative solutions, many people oppose the use of this technology altogether – in the nonprofit world and beyond. One commonly cited reason is that relying on new technologies to replace human labour threatens the livelihoods of professionals. To gauge this, I spoke with frontline journalist and photographer Thomas Noonan and asked him how he feels about the use of AI-generated images in nonprofit campaigns.

“I am firmly opposed to the use of AI-generated images for any reason whatsoever due to ecological and authorship rights,” he shared, noting that AI generation is essentially an imitation of someone else’s work.

His concerns reflect two of the common objections to generative AI: its ecological footprint and its relationship to authorship, consent, and cultural extraction. AI systems require significant energy and water to train and run, and the burden of that infrastructure is not evenly distributed. At the same time, many generative AI tools have been trained on vast collections of human-made images and artworks, often scraped without the knowledge, consent, compensation, or credit of the people who created them.

This is why AI-generated art is questionable not only for its economic impact on livelihoods, but also from a symbolic and artistic standpoint. While the technology is improving rapidly and can produce impressive results in some contexts, much of its content remains generic, derivative, or aesthetically hollow. Used at such a scale, synthetic imagery threatens the broader cultural ecosystem by crowding out original, human-made work and normalising a visual culture detached from lived experience, authorship, and accountability.

This does not mean that every use of generative AI is automatically unjustifiable. So-called AI for Good applications – generating medical research insights, improving disaster early-warning systems, or building accessibility tools – are certainly easier to justify than generating endless disposable images or low-quality automated slop.

“AI can absolutely be useful. It can solve real problems,” Anisa Abeytia acknowledges, as she tells me about her work with an NGO channelling generative AI to help women in Uganda access information about reproductive health. “But it comes with costs – environmental, social, political. If we’re using AI-generated images at scale, we also have to ask: what does that cost the planet? Does the purpose we want to implement AI for justify its use?”

That might be the core issue: people debate whether AI can have positive uses, but that is not the right question – we know it can. The question that matters more is whether each use is necessary, proportionate, and honest about the human, cultural, and environmental costs it carries.

The introduction of AI-generated images into nonprofit communications is a new and complicated challenge. Entering a field already marked by difficult questions around visibility, dignity, consent, and power, it makes those questions even harder to answer. And while the technology is proliferating at a rapid pace, public understanding of it – including the ethical dilemmas and possible repercussions – is not keeping up. This is why it is now imperative for nonprofits, charities, and humanitarian organisations to pay attention, develop digital and AI literacy, and think critically about how new technologies are used in their organisations.

The humanitarian sector has something important to offer here. It already works with governance, ethics, and accountability in complex, fast-changing environments. AI development isn’t so different in that sense. Instead of just adopting new technologies wholesale, or rejecting them outright, humanitarian organisations could help shape how they’re used – grounding them in principles like inclusion, participation, and responsibility. That’s where the conversation needs to start.

For better or worse, AI has already entered the sector, and its future likely depends on how we deal with it. It is therefore up to these organisations to spearhead responsible and ethical AI use, leading the conversations on concerns and boundaries.

While there is a case for AI-generated imagery in nonprofit communications, such use can only be ethical when it is transparent, carefully governed, culturally reviewed and never a shortcut to mislead or manipulate emotion. It also cannot replace photography in its documentary or artistic purpose – it does not evoke trust in the same way, and it will not show people the reality on the ground or evidence of the organisation’s work. But perhaps it can supplement photography, and there is a place for it in communications strategies. Such use makes sense when the aim is to reduce harm – to protect children’s privacy and avoid the extraction of real suffering, to represent sensitive issues without exposing their victims, or to allow smaller, resource-constrained organisations to communicate more effectively. As Ka Man Parkinson concisely encapsulates it,

Whatever the approach – photography or AI – the red line remains the same: are individuals, and children in particular, depicted with dignity? AI gives us more creative choices, but it does not reduce our ethical responsibilities. That means no reductive depictions, no stereotypes, no dehumanising portrayals. It means keeping a human-in-the-loop – ideally working with people from the communities being portrayed to ensure imagery is sensitive, accurate and respectful.

Author: Barbara Listek