GUIDANCE GUIDE: Here’s what I learned when I compared AI policies for police, the NHS, a charity and the Welsh, Scottish and UK Governments

Right now, we are firmly in the Wild West period of AI in the public sector.

We know it’s getting used, but we don’t have policies in place.

In summer 2025, I carried out research that showed almost half of public sector comms people are using AI without permission and almost 60 per cent of organisations don’t have a policy.

These were worrying numbers.

A policy means you have some guardrails and a licence to operate safely.

Yet, national policies increasingly exist

The picture towards the end of 2025 is that policies are starting to emerge.

Maybe people don’t know about them.

As more and more policies are published, I’ve taken a look at seven of the main documents to compare and contrast.

From the public sector:

NHS Confederation (October 2025), which operates in Northern Ireland, England and Wales

UK Government (February 2025)

Government Communications Service (August 2025)

Scottish Government (2024)

Welsh Government (July 2025)

National Police Chiefs’ Council (April 2025).

In addition, I’ve added the Friends of the Earth AI policy as a leading example of a charity approach.

But first…

The first thing to mention is that any review is subjective. This was a human review of the documents, and any organisation will frame its AI policy according to its own priorities.

I’ve tried to review the documents fairly. For example, if encouraging future research is mentioned directly in the principles, I’ve classed that as a ‘yes’. If it’s mentioned obliquely, I’ve classed it as ‘indirectly’. If it’s not mentioned in the principles, I’ve classed it as ‘no’.

It’s perfectly possible for a document to talk about something without stating it as a clear principle, and no criticism is implied where a principle isn’t adopted explicitly.

But what are the core areas of AI policies?

There are a few areas that shine through in all the approaches.

These are big-picture documents that don’t get into specifics such as recommending a particular tool. This makes sense. Who wants an outdated social media policy that mandates MySpace and Twitter?

Here are some key words.

Fairness shines through in the policies: AI must be used fairly for the people who will be affected by it.

Transparency also runs through the approaches. We need a dialogue with civil society and others on how we are using AI and, in addition, we need to be clear when content is AI-generated. On social media, we are obliged to mark AI content posted to channels such as YouTube, Facebook, Instagram or TikTok. But organisations should be equally clear about how they are using the tools.

Human oversight, in one form or another, is also demanded in all the policies’ principles except Friends of the Earth’s.

In other areas there is broad agreement on the importance of things such as working with commercial colleagues and encouraging curiosity.

While the documents often don’t say so explicitly, they set out paths to using AI safely in an organisation. For example, the College of Policing demands that tools be tested with academics or with other forces. That’s quite a high bar.

Elsewhere, there is less agreement on which factors are important. This is understandable. International co-operation is made explicit in the Friends of the Earth and Scottish Government principles, but not so much in others.

Where are the gaps?

In late 2025, there are a number of gaps in guidance for parts of the public sector.

In local government, there is no UK-wide set of principles. No bespoke framework is offered in England by the Ministry of Housing, Communities and Local Government, although the LGA has been making representations to the UK Government on AI issues.

In UK fire and rescue, the National Fire Chiefs Council has drawn up an ethical framework for AI with transparency at its core but, frustratingly, it has not been published online by them or by the UK Fire Standards Board, which may enforce it.

In the third sector, there is no universal guidance set out by the Charity Commission, nor is there in the UK housing sector.

Besides this, there are some grey areas. Transparency is mentioned, yet in training sessions it is the area most likely to be flagged as problematic. People see the principle, but those in comms are often alert to the potential for incoming criticism.

My argument here is that this is national guidance. It’s better to pick and choose how and when you have the conversation than to wait for AI use to be leaked through an FOI request, as it surely will be.

What about you?

All of this feels very top-down. In many ways, it really should. There should be leadership on this and a pathway to using AI safely. The encouraging thing is that there is. But what about you? Should you sit back and wait to be spoon-fed the central thinking?

I’d encourage you to take a different path.

For me, a healthy curiosity about innovation and doing what you can to lead your organisation towards the available guidance are critically important.

The future is out there; it’s just unevenly distributed. Making sure the decision makers in your organisation can find their way to it would be a wise use of time.

I deliver training to help you make sense of the changing landscape: ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

Creative commons credit: New and old road sign by Thoma.
