GUIDANCE GUIDE: Here’s what I learned when I compared AI policies for the police, the NHS, a charity and the Welsh, Scottish and UK governments

Right now, we are firmly in the Wild West period of AI in the public sector.

We know it’s getting used but we don’t have policies in place.

In summer 2025, I carried out research that showed almost half of public sector comms people are using AI without permission and almost 60 per cent of organisations don’t have a policy.

These were worrying numbers.

A policy means you have some guardrails and a licence to operate safely.

Yet, national policies increasingly exist

The picture towards the end of 2025 is that policies are starting to appear.

Maybe people don’t know about them.

As more and more policies are published, I’ve taken a look at seven of the main documents to compare and contrast them.

From the public sector:

NHS Confederation (October 2025), which operates in Northern Ireland, England and Wales

UK Government (February 2025)

Government Communications Service (August 2025)

Scottish Government (2024)

Welsh Government (July 2025)

National Police Chiefs’ Council (April 2025).

In addition, I’ve included the Friends of the Earth AI policy as a leading example of a charity approach.

But first…

The first thing to mention is that any review is subjective. This was a human review of the documents, and any organisation will frame its AI policy in accordance with its own priorities.

I’ve tried to review the documents fairly. For example, if encouraging future research is mentioned directly in the principles, I’ve classed that as a ‘yes’. If it’s mentioned obliquely, I’ve classed it as ‘indirectly’. If it’s not mentioned in the principles, I’ve classed it as ‘no’.

It’s perfectly possible for a document to cover a topic without stating it as a clear principle, and no criticism is implied where a principle isn’t adopted explicitly.

But what are the core areas of AI policies?

There are a few areas that shine through in all the approaches.

These are big-picture documents that don’t go into specifics such as recommending a particular tool. This makes sense. Who wants an outdated social media policy that mandates MySpace and Twitter?

Here are some of the key words.

Fairness runs through the policies, which demand that AI is used fairly for the people who will be affected.

Transparency also runs through the approaches. We need a dialogue with civil society and others on how we are using AI and, in addition, to be clear when content is AI-generated. On social media, we are obliged to mark what we post to channels such as YouTube, Facebook, Instagram or TikTok. But organisations should also be clear about how they are using the tools.

Human oversight in one form or another is also demanded in all the policies’ principles except Friends of the Earth’s.

In other areas there is broad agreement, including on the importance of working with commercial colleagues and encouraging curiosity.

While the documents often don’t spell it out explicitly, they do set out paths to using AI safely in an organisation. For example, the College of Policing demands testing of tools by academics or with other forces. That’s quite a high bar.

Elsewhere, there is less agreement on which factors are important. This is understandable. International co-operation is made explicit in Friends of the Earth’s and the Scottish Government’s principles but not so much in others.

Where are the gaps?

In late 2025, there are a number of gaps in guidance for parts of the public sector.

In local government, there is no national set of UK-wide principles. There is no bespoke framework offered in England by the Ministry of Housing, Communities and Local Government, although the LGA has been making representations to the UK Government on AI issues.

In UK fire and rescue, the National Fire Chiefs Council has drawn up an ethical framework for AI with transparency at its core but, frustratingly, it has not been published online by them or by the Fire Standards Board, which may enforce it.

In the third sector, there is no universal guidance set out by the Charity Commission, nor is there in the UK housing sector.

Besides this, there are some grey areas. Transparency is mentioned, yet in training it is the area most likely to be flagged as problematic. People see the principle, but in comms they are often alert to the potential for incoming criticism.

My argument here is that this is national guidance. It’s better to pick and choose how and when you have the conversation than to wait for AI use to be revealed through an FOI request, as it surely will be.

What about you?

All of this feels very top down. In many ways, it really should. There should be leadership on this and a pathway to using AI safely. The encouraging thing is that there is. But how about you? Should you sit back and wait to be spoon-fed the central thinking?

I’d encourage you to take a different path.

For me, a healthy curiosity in innovation and doing what you can to lead your organisation to the available guidance is critically important.

The future is out there; it’s just unevenly distributed. Making sure the decision makers in the organisation can find their way to the future would be a wise use of time.

I deliver training to help you make sense of the changing landscape: ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

Creative commons credit: New and old road sign by Thoma.

OPEN SHUT: How open and transparent should public sector comms be when using AI?

In training, one of the 10 UK Government principles for using AI gets the most attention.

Like the stone in the shoe on a long walk, the principle of being open and transparent gets the most thoughtful chin-stroking.

Now, in principle – that word again – everyone nods in agreement at the idea. Of course, we should be open. But exactly how? 

The UK Government’s AI Playbook sets out clearly how to work with this principle:

Where possible, you should engage with the wider civil society including groups, communities, and non-governmental, academic and public representative organisations that have an interest in your project. Collaborating with people both inside and outside government will help you ensure we use AI to deliver tangible benefits to individuals and society as a whole. Make sure you have a clear plan for engaging and communicating with these stakeholders at the start of your work.

But all of a sudden, when faced with how to apply this, people often start to feel uncertain. There’s nothing wrong with this. One of the gifts of a good public sector communicator is to spot the potholes in the road they are travelling down. With AI we are in new territory.

What do the public think of AI? 

There’s concern about what people will say if they find out you are using AI.

Maybe you’ve not fessed up to IT on how you are using it.

Polling data shows the public are uncertain about AI in many areas of life. While nine out of 10 people are positive about using AI for health diagnosis, less than a fifth are happy with its use in political advertising, according to the Ada Lovelace Institute.

Not only that, but the public sector doesn’t have masses of friends right now. Local government budgets have had billions of pounds stripped from them by Government. It’s a situation other parts of the public sector can recognise too, be it the NHS, fire and rescue, or central government.

So, what do we do?

Well, if you’re using AI you could go for the Ostrich approach and hope nobody spots you.

But if word gets out – and it will – you’ll be playing catch-up to all sorts of lurid rumours that what you secretly want is robot nurses in our hospitals or that AI will make everyone in the Town Hall redundant. To be fair, people have got legitimate concerns about AI and their jobs.

Far better to be, as the principle states, open and transparent.

What does open and transparent AI look like?

So, if the debate is not whether you should be transparent but how much, what should ‘how much’ look like?

The Scottish Government has been the first in the UK to have a register of projects and how AI is used. Interestingly, a trawl of the site shows the government comms team, insight team and marketing teams have acknowledged using the Brandwatch social listening tool.

Five minutes of searching Google News has not located a single story covering this fact.

So, it appears that being open and transparent would normalise the use of AI. This is how it should be. 

Can you dodge out of it? 

Now, you can say to yourself that you don’t have to worry about UK Government guidelines because you are not in the civil service. If that’s true, you won’t get a tap on the shoulder from a civil servant. But is that seriously good enough? 

Other parts of the public sector have been slower in getting their act together.

The tide of AI is rising far faster than the ability of policymakers to draw up sector-specific policy. 

If you’re bright, I’d urge you to apply your own thinking to it.

Granular or big picture?

If the Scottish Government example is big picture what about other tools? 

Well, if you are posting an AI-treated image to Facebook or Instagram then you need to mark it up as such. That’s been the case for some months. It’s the same on other platforms for images and video. 

I’ve not seen a requirement to mark as AI-generated text made with the help of a tool like ChatGPT or Copilot. In fact, LinkedIn encourages you to use AI tools to write the post. 

In local news, Reach plc has been using AI to generate reporting since 2023, with ‘Seven things to do in Newport’ one of the first acknowledged examples of using AI to generate content. It’s not marked as being written by AI. Since then, they have trimmed down writing times using AI tools.

Let’s not forget the curious case of the Bournemouth Observer, a site with fake journalists created by AI which closed down after being exposed by the Hold the Frontpage website. We don’t mind AI, the moral of the story appears to be, but we don’t like being misled.

How granular should you be?

You should be marking AI-assisted images and video as AI when you post to social media.

Should you also say if each individual post, web page or press release was created with AI? Or should you have a space on your team webpage that explains how AI is used? Or maybe, follow the Scottish model and have a sector-wide registry? 

These are questions that haven’t been resolved in the public sector.
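One lightweight option sits somewhere between labelling every individual post and a full national registry: a simple, published register of how your team uses AI. The sketch below is purely illustrative and my own assumption of what such an entry might contain; the field names and example content are not drawn from any of the policies discussed above.

```python
# Illustrative sketch only: a minimal team-level AI-use register.
# The field names and example entry are assumptions for the sake of
# the example, not taken from any of the policies above.
from dataclasses import dataclass, asdict
import json


@dataclass
class AIUseEntry:
    activity: str         # what the team does with the tool
    tool: str             # which AI tool is involved
    human_oversight: str  # who checks the output before publication
    disclosure: str       # how the use is flagged to the public
    owner: str            # who in the team is accountable


register = [
    AIUseEntry(
        activity="First drafts of routine press releases",
        tool="A generative text assistant (e.g. Copilot)",
        human_oversight="Edited and signed off by a press officer",
        disclosure="Noted on the team's 'how we use AI' web page",
        owner="Head of news",
    ),
]

# Publishing the register (as JSON, or simply as a plain web page built
# from it) is the transparency step.
print(json.dumps([asdict(entry) for entry in register], indent=2))
```

Something that small, kept up to date and published on the team webpage, answers the “how granular?” question without turning every single post into a disclosure exercise.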

Elsewhere, parts of the third sector are taking a lead on this by requiring ALL content to acknowledge the role AI has played in its creation.

For example: 

“Staff are free to use AI tools if they wish for their own work, but are asked to make it clear to others when they do so, including in any work we publish.” – Hannah Smith, Director of Operations, Green Web Foundation

Friends of the Earth, in the policy document setting out their seven principles for how they will use AI, included transparency. In keeping with the spirit of this, they set out in an appendix to the document how they used AI. Google Notebook was used, they say, along with spell-checking from Google Docs. They add:

We used generative AI to generate text sparingly and didn’t use it to generate images at all. A small number of paragraphs in this article started out as AI-generated based on our prompts as a form of placeholder text while we built up arguments in other sections. These placeholders were then deleted, rewritten, edited and otherwise remixed. The vast majority of the article was written and edited without any generative AI. 

The question should not be whether you should be transparent in using AI but how.

I deliver training to help you make sense of the changing landscape: ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

GCS GUIDE: Your guide rope for using generative AI in public sector comms is here

When I was a kid Dad would take us to see my Gran and Grandpa in the Lake District where he was born.

At one end of Derwentwater, he’d point up at a rock face where little human specks with orange helmets could be seen hundreds of feet up. Brown ropes trailed behind them. As we stopped and watched, those figures would carefully manoeuvre themselves and stretch an arm for a new grip on the crag in a slow-motion drama.

“They’re rock climbers,” Dad would tell us. “They’re all a bit mad.”

Years later I would go on my own reading binge of rock climbing memoirs. Climber Joe Simpson, whose climbing career was fired by such books, tells of overcoming deep fear through technique, clear thinking and process. Where any sane person would panic, a climber overcomes their terror by calmly going through a checklist and balancing the risk.

In the climbing community, there is special respect for the climber who turns round just short of the summit because they can see from their checklist that it is the only safe thing to do.

With AI, as with rock climbing, it is perfectly acceptable to be both terrified and excited at the same time.

Thankfully, good ropes, carabiners and orange helmets are now being supplied by UK Government. They can help keep communicators safe. Yes, it is scary. Yes, it can be done. There are risks and there is a checklist.

The latest piece of equipment to keep the AI Alpinist safe is the UK Government’s Government Communications Service generative AI policy. It is a profoundly useful addition to your rucksack of comms safety equipment.

When social media emerged in the public sector it was championed by a band of militant optimists. I’m proud to be one of them. Our mantra was that it was better to ask forgiveness than permission. With AI, I think it’s now more a case of demanding permission. These documents will help you climb safely.

This will keep you safe 

The most important thing about the GCS generative AI policy is that it will help keep you safe if you’ve got common sense.

At no stage is this a green light to go charging ahead with AI in any way you may dream of.

Firstly, the brass tacks. Generative AI basically means AI tools that help you create something, including text and images.

Let’s look at the key points. 

Always use generative AI in accordance with the latest official government guidance. 

This part of the policy ties itself safely, with a piece of climbing rope, to the UK Government Generative AI Framework. I’ve blogged on this here. By securing it against Government policy, this gives you an unbeatable scissors-paper-stone option.

Uphold factuality in our use of generative AI.

You can’t use AI to create something that misleads. This is a really important piece of policy equipment to be guided by. I can see it being useful to a communicator put under implied pressure to try and spin something that isn’t there. It’s also a public declaration of how to use it.

Engage with appropriate government organisations, strategic suppliers, technology providers, and civil society, around significant developments in generative AI, and the implications for its use in government communications.

It’s important that a dialogue is created and maintained to show the wider world how AI is being used. There is no doubt that uncertainty creates fear and misinformation, which can damage a hard-won reputation.

Continue to review reputable research into the public’s attitudes towards generative AI and consider how this policy should evolve in response.

Again, this is important to root the work in a wider discussion and debate. For example, the Ada Lovelace Institute has been a beacon of common sense in the field. Their 2023 research on what people in the UK think about AI should be part of your reading list.

Government Communications may use generative AI where it can drive increasingly effective, and engaging government communications, for the public good, in an ethical manner.

This is an absolute winner of a paragraph. Print it out and memorise it. It is the starting pistol, the green light, the permission granted and the opening of the door. In days to come people will look at this and be baffled that there was a time before this technology. 

Interestingly, the document refers to first draft text, visuals or audio. It can also be a useful tool in making content more accessible. Note that this isn’t waving through the final draft sight unseen. To borrow the title of the CIPR document, humans are still very much needed in this process.

Government communications aims to build trust in our approach through acting transparently.

In this section, GCS say that permission will be sought before using a human as an avatar. In plain language, an avatar is a computer-generated representation of a person. This can be entirely off-the-shelf and created using some of the tools that are already available. The problem is that off-the-shelf avatars can have an American accent or come over as insincere.

What this particular line also tackles is seeking the permission of people to have their likeness converted into an avatar. This could be useful for HR to create a training avatar that talks you through processes. Tools such as veed.io can do this, although the cost is price on application.

The benefit of having a human avatar is clear. If you’re in the Black Country, a Black Country accent will land better with the local audience. It can also speed up and cut the cost of training video production. While I can see this working in HR if it is clearly marked as AI, I’m really not sold on the idea of an avatar spokesperson tackling a thorny issue.

We will clearly notify the public when they are interacting with a conversational AI service rather than a human.

This is essential. People have mixed views about AI and feel far happier when they are told they are speaking to a robot. This chimes with EU regulation and, to me, is common sense. We generally don’t mind talking to a customer service live chat pop-up if it’s marked as AI and asks some basic questions a human operator can then use to help you.

Government communications will not apply generative AI technologies where it conflicts with our values or principles.

This makes sense but it’s probably worth spelling out. Just because you can doesn’t mean you should.

Government communications will not use generative AI to deliver communications to the public without human oversight, to uphold accuracy, inclusivity and mitigate biases. 

Again, humans are involved with this process. 

A useful template for communicators

Of course, this is handy if you are a government communicator, but it’s also useful if you are elsewhere in the public sector or even the third sector.

I’m sure a huge amount of hard work has gone into this. It would be daft not to take advantage of the learning. Tying what you are looking to do in your own team to these principles, or basing your own version on them, is common sense.

Huge credit for those involved with this.

I deliver training that now has the elements of AI that you need: ESSENTIAL COMMS SKILLS BOOSTER and ESSENTIAL VIDEO SKILLS REBOOTED.

TRUST WARS: Yes, the public sector should be clear on how they use AI

When I was a kid I’m sure I was delivered a lecture on how a reputation was so hard to build and so easy to lose. 

Maybe it was for something pretty minor although – full disclosure – me and eight of my mates were suspended for a day in the sixth form for drinking alcohol on Cannock Chase while we were running an orienteering checkpoint.

I told my kids this a few years ago and they were both – again, full disclosure – ASTOUNDED. 

Reputation and trust also apply to public sector institutions. In the UK, trust in the pillar of Government is sparse, with the Edelman Trust Barometer at a 12-year low and just 31 per cent of people in the UK trusting Government institutions.

Trust is also easy to lose and hard to build. Look at the issue of Kate Middleton’s photoshopped Mother’s Day picture.

Never mind misleading photoshop, AI can demolish trust in an institution overnight.   

What made me reflect on the issue of identifying AI content was the Scottish Government’s bold announcement that all public sector bodies north of the border will be required to register their use of AI for projects. Importantly, this logs projects rather than all AI use. At the moment, the register is voluntary but it will be the first in the UK to become mandatory.

What’s on the Scottish AI Registry now? 

A quick look at the AI Register shows just three projects. Included in this list are a tool that shows how vulnerable children may be to exploitation and a virtual concierge assistant to help you choose the right tool for blind or deaf people to take part in civic society.

The benefit of being transparent

Back in the day, Tom Watson MP was a junior Minister responsible for the Civil Service (full disclosure: Tom was a very approachable contact when I was assistant chief reporter in the Express & Star’s Sandwell office). 

One weekend, Tom crowdsourced what should be in the first draft of the civil service social media guidance. This included a suggestion to declare your connection to the Civil Service when using social media on a civil service matter. I’ve always thought this broad approach was a good idea.

If you’re declaring how you are using AI, this can only build trust. There is no ‘gotcha’ moment, though there may be a debate about the methods you use. But if you can’t justify it, should you even be using it?

Why setting out how you use AI is a good idea

For me, yes, a comms team should set out how AI is used.

Indeed, Meta already requires that content created with AI is labelled. So, images and video created with AI tools need to be identified. But so too must text that’s been shaped with a tool like ChatGPT and posted to a Meta platform such as Facebook, WhatsApp, Threads or Instagram.

Not only this, but in the UK uploading deepfakes without the owner’s consent is already a crime. I cannot sensibly think of a time when a public sector comms team would create such a deepfake without the subject’s permission. However, the state of political campaigning in America is another thing entirely.

I’d be interested to hear what others think. 
