How we’ll use artificial intelligence
Definition
There are many definitions of AI. We use the term to refer to computer-driven automation systems that perform tasks that often involve prediction or decision-making. This includes technologies such as image and text recognition, large language models (LLMs) and tools that generate text, imagery, audio and video.
Our starting point for engaging with AI
We will always act lawfully, ethically and responsibly in our use of AI. As an arm’s length body of the UK Government, we look first and foremost to its guidance, including the ten principles set out in the Generative AI framework for HM Government. We commit to following these principles as we experiment with using AI to support the delivery of our strategy, Heritage 2033. As the largest funder for the UK’s heritage, entrusted with important decisions affecting many organisations and people, we have a responsibility to act with integrity and transparency in our use of AI.
In addition, as an employer and as a data controller, processor and publisher of grant data, we must think carefully about when and how we use new, powerful technologies. We believe that only by combining AI with human curiosity, intuition and problem-solving skills will we realise its potential benefits.
Our commitment to responsible AI, in line with our values
Ambitious
- We’ll be curious, imaginative and creative about what AI can do to help us deliver the ambitions set out in Heritage 2033.
- We’ll keep learning from others who are leading the way on AI to adapt how we work and to stretch our goals.
- We’ll make sure our staff are supported and empowered to deal with changes in AI technology, including through appropriate training.
Inclusive
- Our use of AI will be accompanied by effective and proportionate human oversight, which considers the different perspectives and lived experiences of our staff and stakeholders.
- Where we choose to use AI, we’ll do so in a way that is ethical, secure, robust and safe for everyone. We’ll look for opportunities to use AI to increase inclusion and accessibility. And we’ll assess the accuracy, reliability and consistency of AI tools we use and encounter.
- We’ll be conscious that biases and inequalities exist in the development and use of AI tools, and we’ll be active in identifying, communicating and addressing these.
Collaborative
- We’ll work with partners and peers to share expertise and experience to benefit customers and staff.
- We expect our partners and suppliers to take these values, and protections against the potential harms of AI technologies, as seriously as we do.
- We’ll consider the rights of all our stakeholders when using AI including applicants, funding partners, staff and suppliers.
Trusted
- We’ll ensure our use of AI complies with data protection legislation and respects the rights of individuals. We’ll work with AI tools only where we’re confident in their efficacy and accuracy and our ability to manage any risks they present.
- We’ll be transparent about where we’ve knowingly used AI tools, indicating where possible when and how content or data has been machine generated. We expect our stakeholders to do the same.
- We’ll seek to balance the negative environmental impact of using AI tools against their benefits.