TEXT BOOK: Nine examples of an AI policy and what you can learn from them

Regardless of political orientation, Prime Minister Keir Starmer was right in saying that AI is the challenge for our generation.

How we respond to this and make it work will make a profound impact on the Britain we live in in decades to come.

How we get the foundations right will have a big bearing on how the future will pan out.

UK Government has been quietly doing some excellent work in the field for a number of years now. Their AI Play Book which I’ll look at in the post is a gold standard for any organisation. Boiled down, it has 10 principles which are jargon-free. 

Transparency shines through this approach as it does through so many well written AI policies I’ve looked at. The question of transparency is a critically important one. In 2025, people are nervous about AI and maintaining trust is important.

I put it to you that no organisation can use AI on the quiet. It will come out. It is far better for you to pick when and where you shape this conversation. Do it at a time of your choosing. Don’t be left with an hour to draw-up a statement in response to a media query based on an FOI request. You will curse that you didn’t do it sooner.

This is a comms issue and comms would be well served to help the organisation drive this conversation.

As the UK Government principles show, AI is not an IT issue. It is IT, legal, information governance, senior leadership team, equalities, HR, comms and frontline services issue.

So, here are some policies from a range of organisations.

Remember to communicate

What needs to be made clear is that things do not stop with publishing a policy as a pdf as an attachment to some meeting notes. That’s not communicating. That’s shuffling paper. 

In the public sector, the RNLI have made a public appeal for staff to get involved with their national AI review. They have set-up an AI review group to evaluate the best path forward. I love this as an approach. By casting the net wide they are encouraging voices to come forward not the usual suspects. That’s brilliant. It’s also the opposite of IT in a room Googling to come up with a policy.

Why you need an AI policy 

Much can go wrong with thoughtlessly applied AI. One nightmare scenario is the serious case review discussed over a Teams call where a meeting note taker summarises the points. That personal data is later extracted by a bad actor and published online.

Or how about personal data on a member of your family also being extracted from an AI tool.

Or people thinking you are up to no good because you use voice-generating software that doesn’t quite ring true.

The NHS is trusted by 80 per cent of the UK population to act in your best interest while UK Government on 38 per cent narrowly beats social media companies. Trust is best gathered in drips over time and can be lost quickly.

NINE examples of AI policies

  1. UK Government AI Play Book

This is a magnificent piece of work that UK Government have published. This does the job of an AI policy for the Civil Service. Even better than that, it also gives a template or at the very least signposting for how AI can be used in the public sector.

Principle 1: You know what AI is and what its limitations are.

Principle 2: You use AI lawfully, ethically and responsibly.

Principle 3: You know how to use AI securely.  

Principle 4: You have meaningful human control at the right stage.

Principle 5: You understand how to manage the full AI lifecycle. 

Principle 6: You use the right tool for the job.

Principle 7: You are open and collaborative.

Principle 8: You work with commercial colleagues from the start.

Principle 9: You have the skills and expertise that you need to implement and use AI

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place.

This should be part of your reading if you are looking to understand how AI can be made to work in the UK public sector. In particular, the emphasis is on transparency.

2. Friends of the Earth Harnessing AI for Environmental Justice

As a campaign group to save the environment Friends of the Earth can be pulled in two ways on AI. Yes, it uses electricity and by doing so can be harmful to the environment. But it can also be a useful tool. 

I love what the campaign group have done with their approach. They’ve distilled it to seven main principles.

  1. Curiosity around AI creates opportunities for better choices.
  2. Transparency around usage, data and algorithms builds trust.
  3. Holding tech companies and governments accountable leads to responsible action.
  4. Including diverse voices strengthens decision making around AI.
  5. Sustainability in AI systems helps reduce environmental impact and protect natural ecosystems.
  6. Community collaboration in AI is key to planetary resilience.
  7. Advocating with an intersectional approach supports humane AI.

None of these will frighten the horses. By being broad principles, they are likely to be more adaptable and flexible. In a fast-moving environment this makes sense. Set out how to use ChatGPT all you like but what happens when a new tool is launched?

The charity also mark content they produce as being created with the help of AI and also set out how it was used. For example, the downloadable version of this on page 29 gives an appendix to set out how AI was used to produce it.

For example: 

In addition to accepting low-level autocomplete suggestions from Google Docs and fixing spelling mistakes, we used a collaborative notebook through NotebookLM.

I love this approach. This is the most granular exemple I’ve come across. Maximum transparency like this will start a conversation on how AI is being used

3. Watford Council AI policy

This is a good example of what a policy should look like. It also builds into it a review every six months which is absolutely on the money. The approach also asks council staff to set out what they are using and why they are using it. This is good to see. They point to the rather jargon-filled algorithmic transparency recording standard set out by UK Government as a template to complete. This is not mandatory – yet – for the public sector but I do admire the approach. Using it, you can set out what you are using and why you are using it.

I very much like the AI footnote that it sets out where content created with AI needs to be marked with the footnote:

Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content. 

The curious journo in me wonders what tools were used and how in a granular way. Again, this is a pdf and I’d love to see how Watford Council communicate all this to the public in an accessible way.

I also like the requirement to test new APIs and plug-ins. How it deals with hate or discriminatory inputs seems sensible, for example.

4. Leicester City Council Generative AI policy

Like Watford, Leicester also require a disclaimer to mark out content that has been created through AI. It also looks to capture the use of tools on council machines as well as staff’s own devices being used for AI. This is quite canny as there’s going to be many people using AI on the quiet. 

It also requires a data protection impact assessment to be carried out before a tool can be used. I’d love to know if there is a central repository of the tools that are being used and why.

5. Russell Group of Universities AI policy

I’m not a fan of it being presented as a pdf. But the five points at the heart of this are simple to understand. In a nutshell, this encourages Universities to support staff and students to use AI so it can be used appropriately and effectively. That teaching should be adapted to include generative AI while academic rigour is maintained and Universities work together in the field.

By the looks of things, this is a one off declaration rather than marking each piece of content when and where it is AI-created.

6. Cambridge University: How we use generative AI

This seems more public-facing. It’s presented as a webpage and in plain English. It sets out how the University will use generative AI. 

It also rules several things out. The University won’t create images from scratch although there is one to illustrate the post. It also won’t create text entirely using AI. It rules out using deepfake videos and also using voice generators unless it is demonstrating how these tools can be used. But it will use AI for photo editing and tools like ChatGPT for inspiration.

Interestingly, this is posted in the ‘staff’ tab when an audience is also the public.

7. Humber and North Yorkshire Healthcare Partnership AI Governance Policy

This is a really thorough example that covers a lot of bases. It sets out what it is and how it will be used. It’s thorough and robust. It sets out who is ultimately responsible. It names what it calls a Senior Information Risk Owner. A SIRO. God bless the NHS for it’s acronyms. Clearly, a great deal of work has gone into this and this looks like it’s not just been cobbled-together on the hoof by IT. 

However, it’s a pdf and I’d love to see what extra steps are being made to communicate this to staff and to patients. 

8. Bedfordshire Hospitals NHS Trust

This policy document covers the ground but also acknowledges the tactics needed in the scenario of drafting a letter to patients. Add a placeholder, it recommends. By the looks of it, the only stakeholders to have some input have come from within the organisation. I’m not convinced this is as transparent as its possible to be.

9. St George’s and Priorslee Parish Council AI policy 

It’s fascinating to see a Parish Council also publishing an AI policy. It’s well thought through and covers the important bases. If a Shropshire parish of 11,000 souls can manage as an organisation to get their act together then what’s your excuse?

Creative commons credit: Line up buses in the 1980s by Les Chatfield.

I deliver training to help you make sense of the changing landscape ESSENTIAL AI FOR PUBLIC SECTOR COMMSESSENTIAL COMMS SKILLS BOOSTERESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

TRUST WARS: Yes, the public sector should be clear on how they use AI

When I was a kid I’m sure I was delivered a lecture on how a reputation was so hard to build and so easy to lose. 

Maybe it was for something pretty minor although – full disclosure – me and eight of my mates were suspended for a day in the VI form for drinking alcohol on Cannock Chase while we were running an orienteering checkpoint. 

I told my kids this a few years ago and they were both – again, full disclosure – ASTOUNDED. 

Reputation and trust also applies to public sector institutions. In the UK, trust in the pillar of Government is sparse with the Edelman Trust Barometer running a 12-year low with just 31 per cent of people in the UK having trust in Government institutions.  

Trust is also easy to lose and hard to build. Look at the Kate Middleton’s photoshopped Mother’s Day picture issue.

Never mind misleading photoshop, AI can demolish trust in an institution overnight.   

What made me reflect on the issue of identifying AI content was Scottish Government’s bold announcement that all public sector bodies north of the border will be required to register their use of AI for projects. Importantly, this logs projects rather than all AI use. At the moment, the register is voluntary but is the first in the UK to become mandatory.

What’s on the Scottish AI Registry now? 

A quick look at the AI Register shows just three projects. Included in this list is a tool that shows how vulnerable children may be to exploitation and a virtual concierge assistant to help you choose the right tool for blind or deaf people to take part in civic society. 

The benefit of being transparent

Back in the day, Tom Watson MP was a junior Minister responsible for the Civil Service (full disclosure: Tom was a very approachable contact when I was assistant chief reporter in the Express & Star’s Sandwell office). 

One weekend, Tom crowdsourced what should be in the first draft of the civil service social media guidance. This included a suggestion to declare your connection to the Civil Service as you used social media in connection with a civil service matter. I’ve always thought this broad approach was a good idea.

If you’re declaring how you are using AI this can only build trust. There is no ‘gotcha’ moment but there may be a debate about the methods you use. But if you can’t justify it then should you even be using it? 

Why setting out how you use AI is a good idea

For me, yes a comms team should set out how AI is used. 

Indeed, Meta has already required that content created with AI are labelled. So, images and video created with AI tools need to be identified. But so too must text that’s been shaped with a tool like ChatGPT and posted to a Meta platform such as Facebook, WhatsApp, Threads or Instagram.

Not only this but in the UK, uploading deepfakes without the owners’ consent is already a crime. I cannot sensibly think of a time when a public sector comms team would create such a deepfake without the subject’s permission. However, the state of political campaigning in America is another thing entirely.  

I’d be interested to hear what others think. 

AI OMG: Strengths and weaknesses of Open AI’s Sora text to video for the public sector

Every week I’m reading, listening and updatunbg my knowledge on AI tools that public sector comms people can use.

Up till now I’ve not been that impressed by the video production tools I’ve come across.

They can be clunky and tend to miss the point.

However, OpenAIs new tool Sora looks truly astonishing.

It takes text prompts and turns them into video.

First, I’d like to show you some and then I’d like to weigh-up the pros and cons.

Example 1: a Tokyo street

In this clip, the prompt is quite detailed.

Prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.

It’s amazing isn’t it?

Example 2: A spaceman in a knitted motorbike helmet

While the first example hung back from the subject the second goes close in.

Prompt: A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.

Again, astounding.

Example 3: Reflections in a train window

I’m sure that some things are easier to produce than not. The difficult of replicating reflections in a window I’d imagine is towards the top of the hardest list.

Prompt: Reflections in the window of a train traveling through the Tokyo suburbs.

And it achieves the look beautifully.

Example 4: Grandma’s birthday

While the other examples have dealt with people in different ways this looks at a group.

Prompt: A grandmother with neatly combed grey hair stands behind a colorful birthday cake with numerous candles at a wood dining room table, expression is one of pure joy and happiness, with a happy glow in her eye. She leans forward and blows out the candles with a gentle puff, the cake has pink frosting and sprinkles and the candles cease to flicker, the grandmother wears a light blue blouse adorned with floral patterns, several happy friends and family sitting at the table can be seen celebrating, out of focus. The scene is beautifully captured, cinematic, showing a 3/4 view of the grandmother and the dining room. Warm color tones and soft lighting enhance the mood..Prompt: A grandmother with neatly combed grey hair stands behind a colorful birthday cake with numerous candles at a wood dining room table, expression is one of pure joy and happiness, with a happy glow in her eye. She leans forward and blows out the candles with a gentle puff, the cake has pink frosting and sprinkles and the candles cease to flicker, the grandmother wears a light blue blouse adorned with floral patterns, several happy friends and family sitting at the table can be seen celebrating, out of focus. The scene is beautifully captured, cinematic, showing a 3/4 view of the grandmother and the dining room. Warm color tones and soft lighting enhance the mood.

Weakness: Simulating complex interactions between objects and multiple characters is often challenging for the model, sometimes resulting in humorous generations.

Interestingly, OpenAI have also set out the weaknesses of such an approach.

Conclusion

The quality of the images are astounding in their quality. They look like video wheras previous tools didn’t quite ring true.

The visual clues you may look for, like reflections on windows, easily confound the brain.

That’s real, isn’t it?

Only, it isn’t.

Right now OpenAI are pulling a blinder by teasing amazing content but regulating the use of the product. People are talking but not able to use it right now but this will change.

As we can’t use it we can’t see how hard it is to experiment with good content.

The pitfalls of Sora AI video

For the public sector, the flaw isn’t yet cost or even a pathway to start using it. UK Government have released some guidelines to encourage the use of it.

I feel like looking a gifthorse in the mouth when I say this but the issue for the public sector maybe that right now the content is too generic.

A campaign for a commercial could do something enlightening. Filmmakers I suspect will make something useful with this.

I’ve seen an AI how to video made by a council neasr me with a generic English accent and I hated it for its insincerity. I was left feeling played.

One issue with web content for a council, NHS Trust, police force or fire and rescue is that generic content doesn’t do so well. As I’ve blogged this week people pictures work really well. They are both real and of people. They also capture the area. So, generic shots of Tokyo, yes. Shots of Dudley in the West Midlands, probably not.

Right now, shooting your own content of people and landmarks tops it. But can AI-made content be used to supplement it? We wear futuristic artist impressions of new developments. Will we go for AI-made content of a new town centre development? Or what a new hospital ward would look like? I’m guessing yes.

Of course, such is the onward pace of AI this hurdle may well become surmountable. What I’ve just written may seen laughable quite quickly, I accept that. An interface with Google Street View and Google Photos could be one way to do that, I’m speculating. But wouldn’t Google be building their own equivalent?

Oh, heck, my head hurts.

COMPUTER WORLD: What AI tools like ChatGPT mean for PR and communications

Bear with me, I’m going to open gently with a story before I move to the central point because I think the central point is almost too large to grasp.

When I worked in local government there was a man in charge of committee clerks. He was a grey haired man, always approachable and always helpful. He made sure committee meetings ran smoothly and in accordance with the laws and constitution. He was a deep well of information and anecdote.

I remember being in his office one day and he pointed at the grey filing cabinet in the corner of the room.

“Back in the day,” he said. “That’s where all the archives were stored. Every set of agenda papers, every minute, every decision. It was all there. There used to be a queue of people asking to check things and we’d have someone who would check things for them.”

His tone darkened.

“Then the internet came along and someone in their infinite wisdom decided that was better.”

At first, I thought he was joking. He wasn’t.

As the gatekeeper of that information he was an important man. He still was important. But something told me he missed being that gatekeeping librarian. For the first 20 years of his career he was the internet as far as constitutional matters were concerned.

The internet meant that anyone could do it when previously just one could. There was many winners and one loser.

Something is going on with Chat GPT

I don’t think I’m overselling it to say that something is happening right now that is truly revolutionary and I’m not sure if we’ve got our heads around it.

In late 2022, the Microsoft-backed AI chatbot ChatGPT was released. It plugs into 20 years of internet knowledge to produce solutions to tasks given it. Google finds you the links to help you piece together the solution eventually. ChatGPT finds the solution and gives you it with a bow tied around it.

I’m coming to the main point.

If you have earned your living from the knowledge economy your job is about to be turned upside down.

What you’ve spent years working and studying for can be replicated in seconds by ChatGPT.

As an experiment, I gave it a few tasks.

I used it to create a tenancy agreement under the law of England & Wales that favoured the tenant here.

A Dad’s Army scene that involves Captain Mainwaring, Sgt Wilson and a Tech Bro from Shoreditch looking to move to Warmington-on-Sea here

I also asked it to write a communications strategy for a charity that looks to communicate with young people and to set out the channels here.

Other people have used it for far greater tasks.

Like writing a Seinfeld script, a tool for debugging code, design meal plans, as a rival for Google search, writing a piano piece in the style of Mozart, writing verse in the style of Shakespeare on climate destruction or a poem about electrons written in iambic pentameter.

Looking at what I asked it to write it looks as though it’s about 75 per cent there.

It looks as though it was written by a human and it makes sense.

The thing is, AI looks to improve itself constantly. These are the baby steps. Far more powerful tools are expected in the next few years.

What ChatGPT and AI tools mean

There is a school of thought that says that we are moving overnight from being information creators to information curators.

The most extreme of predictions are that potentially everyone who has a career in the knowledge economy can be replaced. Why pay for five £40,000-a-year professionals, the argument goes, when two using AI can do the job?

AI companies that have written about the industry insist that AI is not to be feared. They’re here to help, they say, not replace. There’s part of me that’s not so sure they even believe that themselves.  

Many schools and Universities who have started to wake up to the threat have moved to ban ChatGPT from work submitted here and here. This prompts the idea of an arms race between AI essay writers like ChatGPT and software that can detect AI writing. Internet Q&A site Slack Overflow have also banned ChatGPT for providing answers that are not reliable.

What does ChatGPT and AI tools mean for comms and PR?

On the face of it a tool like ChatGPT is a threat. It can produce what you do to an increasingly good standard. That’s dangerous, surely? Well, partly, yes and partly no.

If we step aside from the shock of seeing the outline of a comms plan being produced by a robot we need to ask ourselves the question ‘then what?’.

A comms plan on its own is an attachment that sits on a hard drive. On its own, it won’t produce and post content. Right now, that will need some human involvement. Sure, ChatGPT could help to produce the rough content but right now it still needs shaping and scheduling. 

What’s coming out of an AI tool is not 100 per cent fool proof. So, there’s still need for humans.

Right now, tools such as ChatGPT can be a help to the day-to-day. It’ll be fascinating to see where they take is in two, five and ten years. 

Exit mobile version