
Whatever your political orientation, Prime Minister Keir Starmer was right to say that AI is the challenge for our generation.
How we respond to it and make it work will have a profound impact on the Britain we live in for decades to come.
Getting the foundations right will have a big bearing on how that future pans out.
The UK Government has been quietly doing some excellent work in the field for a number of years now. Their AI Playbook, which I’ll look at in this post, is a gold standard for any organisation. Boiled down, it has 10 jargon-free principles.
Transparency shines through this approach, as it does through so many well-written AI policies I’ve looked at. The question of transparency is a critically important one. In 2025, people are nervous about AI and maintaining trust matters.
I put it to you that no organisation can use AI on the quiet. It will come out. It is far better for you to pick when and where you shape this conversation. Do it at a time of your choosing. Don’t be left with an hour to draw up a statement in response to a media query based on an FOI request. You will curse that you didn’t do it sooner.
This is a comms issue and comms would be well served to help the organisation drive this conversation.
As the UK Government principles show, AI is not an IT issue. It is an IT, legal, information governance, senior leadership, equalities, HR, comms and frontline services issue.
So, here are some policies from a range of organisations.
Remember to communicate
What needs to be made clear is that things do not stop with publishing a policy as a pdf attached to some meeting notes. That’s not communicating. That’s shuffling paper.
In the charity sector, the RNLI have made a public appeal for staff to get involved with their national AI review. They have set up an AI review group to evaluate the best path forward. I love this as an approach. By casting the net wide they are encouraging voices to come forward, not just the usual suspects. That’s brilliant. It’s also the opposite of IT sitting in a room Googling to come up with a policy.
Why you need an AI policy
Much can go wrong with thoughtlessly applied AI. One nightmare scenario is the serious case review discussed over a Teams call where an AI meeting note-taker summarises the points. That personal data is later extracted by a bad actor and published online.
Or how about personal data about a member of your family being extracted from an AI tool?
Or people thinking you are up to no good because you use voice-generating software that doesn’t quite ring true.
The NHS is trusted by 80 per cent of the UK population to act in their best interests, while the UK Government, on 38 per cent, narrowly beats social media companies. Trust is best gathered in drips over time and can be lost quickly.
NINE examples of AI policies
1. UK Government AI Playbook
This is a magnificent piece of work that the UK Government have published. It does the job of an AI policy for the Civil Service. Even better than that, it also gives a template, or at the very least signposting, for how AI can be used in the public sector.
Principle 1: You know what AI is and what its limitations are.
Principle 2: You use AI lawfully, ethically and responsibly.
Principle 3: You know how to use AI securely.
Principle 4: You have meaningful human control at the right stage.
Principle 5: You understand how to manage the full AI lifecycle.
Principle 6: You use the right tool for the job.
Principle 7: You are open and collaborative.
Principle 8: You work with commercial colleagues from the start.
Principle 9: You have the skills and expertise that you need to implement and use AI.
Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place.
This should be part of your reading if you are looking to understand how AI can be made to work in the UK public sector. In particular, the emphasis is on transparency.
2. Friends of the Earth Harnessing AI for Environmental Justice
As a campaign group dedicated to protecting the environment, Friends of the Earth can be pulled in two ways on AI. Yes, it uses electricity and by doing so can harm the environment. But it can also be a useful tool.
I love what the campaign group have done with their approach. They’ve distilled it to seven main principles.
- Curiosity around AI creates opportunities for better choices.
- Transparency around usage, data and algorithms builds trust.
- Holding tech companies and governments accountable leads to responsible action.
- Including diverse voices strengthens decision making around AI.
- Sustainability in AI systems helps reduce environmental impact and protect natural ecosystems.
- Community collaboration in AI is key to planetary resilience.
- Advocating with an intersectional approach supports humane AI.
None of these will frighten the horses. By being broad principles, they are likely to be more adaptable and flexible. In a fast-moving environment this makes sense. Set out how to use ChatGPT all you like but what happens when a new tool is launched?
The charity also mark content they produce as being created with the help of AI and set out how it was used. For example, the downloadable version of this document includes an appendix on page 29 setting out how AI was used to produce it.
For example:
In addition to accepting low-level autocomplete suggestions from Google Docs and fixing spelling mistakes, we used a collaborative notebook through NotebookLM.
I love this approach. This is the most granular example I’ve come across. Maximum transparency like this will start a conversation about how AI is being used.
3. Watford Council AI policy
This is a good example of what a policy should look like. It also builds in a review every six months, which is absolutely on the money. The approach also asks council staff to set out what they are using and why they are using it. This is good to see. They point to the rather jargon-filled algorithmic transparency recording standard set out by the UK Government as a template to complete. This is not mandatory – yet – for the public sector, but I do admire the approach. Using it, you can set out what you are using and why.
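For illustration, here’s a minimal Python sketch of the kind of plain-English record an organisation might publish for each tool it uses. The field names and the example tool are my own paraphrase, not the official algorithmic transparency recording standard schema.

```python
# Illustrative only: a paraphrased sketch of the sort of fields a public
# transparency record for an AI tool might capture. Field names are mine,
# not the official Algorithmic Transparency Recording Standard schema.

ai_tool_record = {
    "tool_name": "Meeting note summariser",      # hypothetical example tool
    "what_it_does": "Drafts summaries of internal meetings for staff to review",
    "why_we_use_it": "Frees up officer time; every output is checked by a human",
    "data_it_touches": "meeting audio, staff names",
    "personal_data": "yes",
    "dpia_completed": "yes",                      # data protection impact assessment
    "human_in_the_loop": "Author reviews and edits every summary before sharing",
    "review_date": "2025-12-01",                  # six-monthly review cycle
    "contact": "digital.team@example.gov.uk",     # placeholder address
}

def publish_summary(record: dict) -> str:
    """Render the record as plain-English lines that could go on a public webpage."""
    lines = [f"{key.replace('_', ' ').capitalize()}: {value}" for key, value in record.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(publish_summary(ai_tool_record))
```

Something this simple, published somewhere the public can actually find it, does the "what we use and why" job without the jargon.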
I very much like the AI footnote that it sets out where content created with AI needs to be marked with the footnote:
Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content.
The curious journo in me wonders what tools were used and how in a granular way. Again, this is a pdf and I’d love to see how Watford Council communicate all this to the public in an accessible way.
I also like the requirement to test new APIs and plug-ins. How a tool deals with hate speech or discriminatory inputs seems a sensible thing to test, for example.
4. Leicester City Council Generative AI policy
Like Watford, Leicester also require a disclaimer to mark out content that has been created through AI. It also looks to capture the use of tools on council machines as well as on staff’s own devices. This is quite canny, as there are going to be many people using AI on the quiet.
It also requires a data protection impact assessment to be carried out before a tool can be used. I’d love to know if there is a central repository of the tools that are being used and why.
5. Russell Group of Universities AI policy
I’m not a fan of it being presented as a pdf. But the five points at the heart of it are simple to understand. In a nutshell, it encourages universities to support staff and students in using AI appropriately and effectively, to adapt teaching to include generative AI while maintaining academic rigour, and to work together in the field.
By the looks of things, this is a one-off declaration rather than a requirement to mark each piece of content when and where it is AI-created.
6. Cambridge University: How we use generative AI
This seems more public-facing. It’s presented as a webpage and in plain English. It sets out how the University will use generative AI.
It also rules several things out. The University won’t create images from scratch although there is one to illustrate the post. It also won’t create text entirely using AI. It rules out using deepfake videos and also using voice generators unless it is demonstrating how these tools can be used. But it will use AI for photo editing and tools like ChatGPT for inspiration.
Interestingly, this is posted under the ‘staff’ tab when the audience is also the public.
7. Humber and North Yorkshire Healthcare Partnership AI Governance Policy
This is a really thorough example that covers a lot of bases. It sets out what AI is and how it will be used. It’s robust. It sets out who is ultimately responsible, naming what it calls a Senior Information Risk Owner. A SIRO. God bless the NHS for its acronyms. Clearly, a great deal of work has gone into this and it looks like it hasn’t just been cobbled together on the hoof by IT.
However, it’s a pdf and I’d love to see what extra steps are being taken to communicate this to staff and to patients.
8. Bedfordshire Hospitals NHS Trust
This policy document covers the ground but also acknowledges the tactics needed in the scenario of drafting a letter to patients. Add a placeholder, it recommends. By the looks of it, the only stakeholders to have had some input come from within the organisation. I’m not convinced this is as transparent as it’s possible to be.
9. St George’s and Priorslee Parish Council AI policy
It’s fascinating to see a Parish Council also publishing an AI policy. It’s well thought through and covers the important bases. If a Shropshire parish of 11,000 souls can get its act together as an organisation, then what’s your excuse?
Creative commons credit: Line up buses in the 1980s by Les Chatfield.
I deliver training to help you make sense of the changing landscape ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.