Artificial intelligence is increasingly being used in heritage projects – from image reconstruction and immersive interpretation to automated archiving and digital engagement. For organisations working with sensitive histories, the challenge is not whether to use AI, but how to do so responsibly.
Heritage deals with memory, trauma, identity and trust. Used badly, AI risks oversimplifying complex histories or undermining confidence in the historical record. Used well, it can support understanding, empathy and engagement in ways that traditional interpretation sometimes cannot.
Over the past year, I have led communications and digital interpretation work on a National Lottery Heritage Fund–supported programme marking the 85th anniversary of the Sheffield Blitz. As part of that work, AI was used carefully and deliberately to help audiences understand a city that was devastated over two nights in December 1940.
The project offers some clear principles for using AI responsibly in heritage settings.
Why responsible use of AI matters in heritage
AI in heritage is often framed around innovation. But innovation alone is not a sufficient justification.
Heritage organisations have a duty of care to their audiences, particularly when dealing with war, loss and collective trauma. Trust is hard won and easily lost. Any use of AI must strengthen, not weaken, confidence in the authenticity and integrity of the story being told.
In the case of the Sheffield Blitz, the stakes were high. Survivors are still alive. Families continue to live with the legacy of destruction. The story is not abstract history – it is lived experience.
That context shaped every decision about how AI was used.
Start with the story, not the technology
One of the most common mistakes in digital heritage projects is beginning with the technology rather than the narrative.
The Sheffield Blitz work did not begin with a question about what AI could do. It began with a problem of understanding. Large parts of the city were destroyed and rebuilt. For many people, especially younger audiences, it is difficult to visualise what was lost.
AI was introduced as a tool to support that understanding, not as a replacement for archival research, oral history or scholarship. Reconstructions were built from historical maps, photographs, architectural records and first-hand accounts gathered over many years.
The story led the technology, not the other way around.
Be transparent about interpretation and reconstruction
A core principle of responsible AI use in heritage is transparency.
All AI-generated material within the Sheffield Blitz programme was clearly labelled and contextualised. Reconstructions were presented as interpretive tools, not definitive representations of the past. Audiences were told what evidence underpinned each image and where uncertainty remained.
This approach maintained trust. It also encouraged active engagement. Rather than passively consuming content, visitors asked questions about sources, accuracy and interpretation.
Transparency turns AI from a potential liability into an educational asset.
Avoid spectacle and sensationalism
AI makes it easy to create dramatic, emotionally charged visuals. In heritage contexts, that can quickly tip into spectacle.
The Sheffield Blitz interpretation deliberately avoided gratuitous imagery. The aim was not to recreate explosions or shock audiences, but to convey scale, loss and absence. Streets that no longer exist. Homes that were never rebuilt. Communities permanently altered.
In many cases, the most powerful moments came from restraint rather than visual intensity.
Responsible heritage communication uses technology to deepen empathy, not to manufacture drama.
Keep lived experience at the centre
No AI system can replace human testimony.
Throughout the Sheffield Blitz project, survivor voices remained central. Oral histories, diaries and interviews were the foundation of the work. AI outputs were designed to sit alongside those voices, not overshadow them.
This avoided one of the major risks of AI in heritage: smoothing complex, emotional histories into something too neat or emotionally distant.
History is not clean or consistent. Responsible use of AI must preserve that complexity.
Demonstrating impact, not just innovation
Funders increasingly expect digital heritage projects to demonstrate impact as well as creativity.
In Sheffield, AI supported clear engagement outcomes. It helped visitors visualise lost parts of the city, increased dwell time within the exhibition, and encouraged people to explore the city through a free Sheffield Blitz walking-tour app.
The work attracted broadcast news coverage from ITV News (both national and regional), including discussion of the project’s digital and AI elements.
Crucially, the technology supported learning, reflection and remembrance rather than distracting from them.
Restraint as a design principle
Perhaps the most important lesson from the Sheffield Blitz work is that responsible use of AI often involves restraint.
Not every heritage story needs reconstruction. Not every archive benefits from automation. Not every audience wants immersion.
In sensitive heritage contexts, doing less – but doing it well – is often the most ethical choice.
AI should be used where it adds clarity, empathy or understanding. If it does not serve those purposes, it should not be used at all.
Conclusion: AI as a tool for understanding, not replacement
AI will continue to shape how heritage is interpreted. The question for organisations is not whether to engage with it, but how to do so in a way that preserves trust, integrity and humanity.
The Sheffield Blitz programme demonstrated that when AI is grounded in evidence, guided by ethics and anchored in lived experience, it can deepen understanding rather than dilute it.
Responsible use of AI in heritage is not about chasing novelty. It is about respecting the past while communicating it clearly and honestly to the present.
