TEXT BOOK: Nine examples of an AI policy and what you can learn from them

Regardless of political orientation, Prime Minister Keir Starmer was right in saying that AI is the challenge for our generation.

How we respond to this and make it work will have a profound impact on the Britain we live in for decades to come.

Getting the foundations right will have a big bearing on how the future pans out.

UK Government has been quietly doing some excellent work in the field for a number of years now. Their AI Playbook, which I’ll look at in this post, is a gold standard for any organisation. Boiled down, it has 10 principles which are jargon-free.

Transparency shines through this approach, as it does through so many well-written AI policies I’ve looked at. The question of transparency is a critically important one. In 2025, people are nervous about AI and maintaining trust matters.

I put it to you that no organisation can use AI on the quiet. It will come out. It is far better for you to pick when and where you shape this conversation. Do it at a time of your choosing. Don’t be left with an hour to draw up a statement in response to a media query based on an FOI request. You will curse that you didn’t do it sooner.

This is a comms issue and comms would be well served to help the organisation drive this conversation.

As the UK Government principles show, AI is not just an IT issue. It is an IT, legal, information governance, senior leadership, equalities, HR, comms and frontline services issue.

So, here are some policies from a range of organisations.

Remember to communicate

What needs to be made clear is that things do not stop with publishing a policy as a pdf attached to some meeting notes. That’s not communicating. That’s shuffling paper.

The RNLI, for example, have made a public appeal for staff to get involved with their national AI review. They have set up an AI review group to evaluate the best path forward. I love this as an approach. By casting the net wide they are encouraging new voices to come forward, not just the usual suspects. That’s brilliant. It’s also the opposite of IT in a room Googling to come up with a policy.

Why you need an AI policy 

Much can go wrong with thoughtlessly applied AI. One nightmare scenario is a serious case review discussed over a Teams call where an AI meeting note-taker summarises the points. That personal data is later extracted by a bad actor and published online.

Or how about personal data on a member of your family being extracted from an AI tool?

Or people thinking you are up to no good because you use voice-generating software that doesn’t quite ring true.

The NHS is trusted by 80 per cent of the UK population to act in their best interests, while UK Government, on 38 per cent, narrowly beats social media companies. Trust is best gathered in drips over time and can be lost quickly.

NINE examples of AI policies

  1. UK Government AI Playbook

This is a magnificent piece of work from UK Government. It does the job of an AI policy for the Civil Service. Even better, it also gives a template, or at the very least signposting, for how AI can be used in the public sector.

Principle 1: You know what AI is and what its limitations are.

Principle 2: You use AI lawfully, ethically and responsibly.

Principle 3: You know how to use AI securely.  

Principle 4: You have meaningful human control at the right stage.

Principle 5: You understand how to manage the full AI lifecycle. 

Principle 6: You use the right tool for the job.

Principle 7: You are open and collaborative.

Principle 8: You work with commercial colleagues from the start.

Principle 9: You have the skills and expertise that you need to implement and use AI.

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place.

This should be part of your reading if you are looking to understand how AI can be made to work in the UK public sector. The emphasis on transparency, in particular, stands out.

2. Friends of the Earth Harnessing AI for Environmental Justice

As a campaign group dedicated to saving the environment, Friends of the Earth can be pulled in two directions on AI. Yes, it uses electricity and by doing so can be harmful to the environment. But it can also be a useful tool.

I love what the campaign group have done with their approach. They’ve distilled it to seven main principles.

  1. Curiosity around AI creates opportunities for better choices.
  2. Transparency around usage, data and algorithms builds trust.
  3. Holding tech companies and governments accountable leads to responsible action.
  4. Including diverse voices strengthens decision making around AI.
  5. Sustainability in AI systems helps reduce environmental impact and protect natural ecosystems.
  6. Community collaboration in AI is key to planetary resilience.
  7. Advocating with an intersectional approach supports humane AI.

None of these will frighten the horses. By being broad principles, they are likely to be more adaptable and flexible. In a fast-moving environment this makes sense. Set out how to use ChatGPT all you like, but what happens when a new tool is launched?

The charity also marks content it produces as being created with the help of AI and sets out how it was used. For example, the downloadable version includes an appendix on page 29 setting out how AI was used to produce it.

For example: 

In addition to accepting low-level autocomplete suggestions from Google Docs and fixing spelling mistakes, we used a collaborative notebook through NotebookLM.

I love this approach. This is the most granular example I’ve come across. Maximum transparency like this will start a conversation on how AI is being used.

3. Watford Council AI policy

This is a good example of what a policy should look like. It also builds in a review every six months, which is absolutely on the money. The approach also asks council staff to set out what they are using and why they are using it. This is good to see. They point to the rather jargon-filled Algorithmic Transparency Recording Standard set out by UK Government as a template to complete. This is not mandatory – yet – for the public sector but I do admire the approach. Using it, you can set out what you are using and why.
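To give a flavour of what completing such a record involves, here’s a minimal sketch in Python. The field names below are my own shorthand for the kind of information the standard asks for, not the official ATRS schema, and the tool, council and contact details are all invented.

```python
# Illustrative only: field names are shorthand, not the official
# Algorithmic Transparency Recording Standard schema. All details invented.
transparency_record = {
    "tool_name": "Meeting minutes summariser",
    "organisation": "Example Borough Council",
    "purpose": "Summarise recordings of public meetings into draft minutes",
    "rationale": "Frees up officer time; every draft is reviewed by a human",
    "data_used": "Audio of public meetings only; no personal case data",
    "human_oversight": "A democratic services officer signs off each draft",
    "contact": "ai-queries@example.gov.uk",
}

# Print a plain-English summary: the 'what you are using and why' the policy asks for.
for field, value in transparency_record.items():
    print(f"{field.replace('_', ' ').capitalize()}: {value}")
```

Even a simple published record like this answers the two questions residents will actually ask: what are you using, and why?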

I very much like the requirement it sets out that content created with AI needs to be marked with a footnote:

Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content. 

The curious journo in me wonders, in granular detail, what tools were used and how. Again, this is a pdf and I’d love to see how Watford Council communicate all this to the public in an accessible way.

I also like the requirement to test new APIs and plug-ins. How it deals with hate or discriminatory inputs seems sensible, for example.

4. Leicester City Council Generative AI policy

Like Watford, Leicester also require a disclaimer to mark out content that has been created through AI. It also looks to capture the use of tools on council machines as well as on staff’s own devices. This is quite canny, as there are going to be many people using AI on the quiet.

It also requires a data protection impact assessment to be carried out before a tool can be used. I’d love to know if there is a central repository of the tools that are being used and why.

5. Russell Group of Universities AI policy

I’m not a fan of it being presented as a pdf. But the five points at the heart of it are simple to understand. In a nutshell, it encourages universities to support staff and students in using AI so it can be used appropriately and effectively, to adapt teaching to include generative AI while maintaining academic rigour, and to work together in the field.

By the looks of things, this is a one-off declaration rather than marking each piece of content when and where it is AI-created.

6. Cambridge University: How we use generative AI

This seems more public-facing. It’s presented as a webpage and in plain English. It sets out how the University will use generative AI. 

It also rules several things out. The University won’t create images from scratch, although there is one to illustrate the post. It also won’t create text entirely using AI. It rules out deepfake videos and voice generators, unless it is demonstrating how these tools can be used. But it will use AI for photo editing and tools like ChatGPT for inspiration.

Interestingly, this is posted under the ‘staff’ tab when the audience is also the public.

7. Humber and North Yorkshire Healthcare Partnership AI Governance Policy

This is a really thorough example that covers a lot of bases. It sets out what AI is and how it will be used. It’s robust. It sets out who is ultimately responsible, naming what it calls a Senior Information Risk Owner. A SIRO. God bless the NHS for its acronyms. Clearly, a great deal of work has gone into this and it looks like it hasn’t just been cobbled together on the hoof by IT.

However, it’s a pdf and I’d love to see what extra steps are being taken to communicate this to staff and to patients.

8. Bedfordshire Hospitals NHS Trust

This policy document covers the ground but also acknowledges the tactics needed in the scenario of drafting a letter to patients. Add a placeholder, it recommends. By the looks of it, the only stakeholders to have had input come from within the organisation. I’m not convinced this is as transparent as it’s possible to be.

9. St George’s and Priorslee Parish Council AI policy 

It’s fascinating to see a Parish Council also publishing an AI policy. It’s well thought through and covers the important bases. If a Shropshire parish of 11,000 souls can get its act together as an organisation, then what’s your excuse?

Creative commons credit: Line up buses in the 1980s by Les Chatfield.

I deliver training to help you make sense of the changing landscape: ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

AI OMG: Strengths and weaknesses of OpenAI’s Sora text-to-video for the public sector

Every week I’m reading, listening and updating my knowledge on AI tools that public sector comms people can use.

Up till now I’ve not been that impressed by the video production tools I’ve come across.

They can be clunky and tend to miss the point.

However, OpenAI’s new tool Sora looks truly astonishing.

It takes text prompts and turns them into video.

First, I’d like to show you some examples and then weigh up the pros and cons.

Example 1: a Tokyo street

In this clip, the prompt is quite detailed.

Prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.

It’s amazing, isn’t it?

Example 2: A spaceman in a knitted motorbike helmet

While the first example hung back from the subject, the second goes in close.

Prompt: A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.

Again, astounding.

Example 3: Reflections in a train window

I’m sure that some things are easier to produce than others. The difficulty of replicating reflections in a window is, I’d imagine, towards the top of the hardest list.

Prompt: Reflections in the window of a train traveling through the Tokyo suburbs.

And it achieves the look beautifully.

Example 4: Grandma’s birthday

While the other examples have dealt with people in different ways, this looks at a group.

Prompt: A grandmother with neatly combed grey hair stands behind a colorful birthday cake with numerous candles at a wood dining room table, expression is one of pure joy and happiness, with a happy glow in her eye. She leans forward and blows out the candles with a gentle puff, the cake has pink frosting and sprinkles and the candles cease to flicker, the grandmother wears a light blue blouse adorned with floral patterns, several happy friends and family sitting at the table can be seen celebrating, out of focus. The scene is beautifully captured, cinematic, showing a 3/4 view of the grandmother and the dining room. Warm color tones and soft lighting enhance the mood.

Interestingly, OpenAI have also set out the weaknesses of such an approach. For example:

Weakness: Simulating complex interactions between objects and multiple characters is often challenging for the model, sometimes resulting in humorous generations.

Conclusion

The images are astounding in their quality. They look like real video, whereas previous tools didn’t quite ring true.

The visual clues you may look for, like reflections on windows, easily confound the brain.

That’s real, isn’t it?

Only, it isn’t.

Right now OpenAI are pulling a blinder by teasing amazing content while regulating the use of the product. People are talking but not able to use it yet, but this will change.

As we can’t use it, we can’t see how hard it is to produce good content.

The pitfalls of Sora AI video

For the public sector, the flaw isn’t yet cost or even a pathway to start using it. UK Government have released some guidelines to encourage its use.

I feel like I’m looking a gift horse in the mouth when I say this, but the issue for the public sector may be that right now the content is too generic.

A commercial campaign could do something enlightening with it. Filmmakers, I suspect, will make something useful with this.

I’ve seen an AI how-to video made by a council near me with a generic English accent and I hated it for its insincerity. I was left feeling played.

One issue with web content for a council, NHS Trust, police force or fire and rescue service is that generic content doesn’t do so well. As I’ve blogged this week, people pictures work really well. They are both real and of people. They also capture the area. So, generic shots of Tokyo, yes. Shots of Dudley in the West Midlands, probably not.

Right now, shooting your own content of people and landmarks tops it. But can AI-made content be used to supplement it? We’re used to futuristic artist impressions of new developments. Will we go for AI-made content of a new town centre development? Or of what a new hospital ward would look like? I’m guessing yes.

Of course, such is the onward pace of AI that this hurdle may well be surmounted. What I’ve just written may seem laughable quite quickly, I accept that. An interface with Google Street View and Google Photos could be one way to do it, I’m speculating. But wouldn’t Google be building their own equivalent?

Oh, heck, my head hurts.

ROBOT COMMS: What public sector comms people need to know about artificial intelligence

There’s this amazing clip from late 1970s Blue Peter where the presenters are demonstrating the first commercial mobile phone.

John Noakes stays in the studio while Peter Purves heads into the Blue Peter garden and whips out from under his mac an over-the-shoulder plastic phone. You can hear the smugness in his voice as he dials his colleague.

Ladies and gentlemen, the mobile phone.

And so, Artificial Intelligence – or AI – will become as normal as texting or taking a selfie is now. This is not sci-fi fantasy but what is happening today. Just under four million Google Home and Amazon Alexa devices have been sold in the UK, say researchers voicebot.ai. That’s by far the largest number in Europe.

But, what is Artificial Intelligence?

In 2018, most people don’t know what Artificial Intelligence is. But what they do know is that it sounds scary. In a nutshell, it’s computers that learn. The dictionary definition is computer systems that can complete tasks normally requiring human intelligence, such as visual recognition, speech recognition and decision making.

To get you started, I’d suggest taking six minutes to watch the HubSpot animation that makes it as Blue Peter as possible without a trip into the Italian Sunken Garden:

Artificial Intelligence can be very scary, can’t it?

AI at home is still the preserve of early adopters. My video skills colleague Steven has had one for months. When he asks Google to do something, it often even does the thing he’s asked it to do.

Me? I’m in several camps. I want to know more, as it is going to shape the world we live in, but I’m dubious. I’m not thrilled by the idea of a swarm of killer drones. I’m not that thrilled that the top search result for military drones is the Chinese mail-order giant Alibaba. Robotics researcher Peter Haas talks about the lack of ethics in the field in his TED Talk.

Me? I’m more struck by the rather excellent @internetofshit, which RTs accounts of Teslas being stranded in the desert as they can only re-start with a mobile phone signal. Or the lift that can’t be used because of a system update.

In that context, AI is very, very scary indeed. But that’s not where AI is right now.

Artificial Intelligence is here, baby. Right here

Of course, it’s not all swarms of drones with machine guns. In fact, it’s hardly that at all. Former CIPR President Stephen Waddington has been leading some superb work to look at where AI is in PR. I simply cannot recommend his work enough.

Through the #AIinPR project, Stephen and around 20 volunteers have collated an open list of tools that already have elements of AI in them. The results are truly surprising. More than 150 such tools have been identified.

The full Google sheet with the findings can be found here.

What’s striking about the list is how commonplace the tools are. Link shortener bitly, for example, has been a staple for the best part of a decade. Mailchimp, If This Then That and Canva are staples of my working day. Yours too, maybe.

So, if AI is also day-to-day, doesn’t that mean that AI is already having an impact on PR and comms?

The answer to that is ‘yes.’

How much AI is affecting you… and will affect you

Again, Stephen Waddington’s inspired research is useful to map the next steps. His work leads into CIPR’s excellent ‘Humans Still Required’ report by Canadian academic Jean Valin. This sets out how much of PR is already AI-affected. At the moment, 12 per cent of PR work can potentially be done by AI. That’s things like evaluation, data processing, programming and curation.

But it starts getting even more interesting when looking at the future. The figure rises to 36 per cent by 2023. There’s a whole range of areas that can be maximised, from stakeholder analysis to reputation monitoring. Areas like ethics, law and career management stay outside the long reach of the robots.

Other research from the University of Oxford puts PR managers as the 67th safest job in a list of more than 700.

All this is striking. But where does it affect you?

The future is already quietly colliding with the present, out of view. There is no one single moment but a series of moments. It’s already happening. There is no announcement to close 100 pits but 100,000 decisions to use different software that can help you do your job more easily.

AI will come not through the organisation but through suppliers. In all likelihood, this won’t be driven by individual teams writing code but by an arms race between providers. A to-be-invented Google tool, for example. Or the news management software company that adds AI elements to its existing press release management system.

Content will be written by AI. News agency Press Association are experimenting with distributing news stories written by AI. If news stories can be written this way, then press releases and other content can be too. But that’ll be through a supplier doing the hard yards and pricing it.

At first, AI knowledge will be outsourced. Given the rapid developments in the sector and the fact that existing public sector teams are busy enough already, there isn’t the headspace. Advice from outside will be important at the start. It’ll be as much about efficiencies as it is about delivering a better job.

Sit back, but don’t sit back. Others will be doing the hard yards to make this work. But don’t sign your future away. A baseline understanding of AI is needed. You won’t need to know how to code. But you will need to know how that code can affect you and, most importantly of all, you’ll need to know the ethics and the law of it. For the public sector, this is going to be tricky. Right now, there isn’t a publicly-accepted code of ethics for AI. But there are broader approaches that can govern it across the sector. Like GDPR, for example.

Leaders will have to lead to bring teams along. AI is and can be scary. It is different. Yes, it can mean fewer people doing the job. But the tasks it may replace are likely to be the routine in comms and PR rather than the big ticket. You won’t be sending a robot along the corridor to the crisis meeting with emergency planning to discuss the three-day-old fire. You will be automating the fire’s evaluation.

The risk of ‘computer says no’ IT teams. PR and comms outsource AI knowledge at their peril. From fear or ignorance, there is a temptation to look to IT for answers. But with many IT teams being blockers and still struggling with social media even 10 years on, this isn’t a strategy to take. You need to know some of the basics yourself to work out what can and can’t be done.

Data-driven decisions. Often public sector comms can be driven by personality, politicians and practice. One of the great achievements of the UK’s Government Communication Service is to move away from comms that’s just churning stuff out for the sake of it. But other teams and other organisations still shoot from the hip. In an emergency, there is nothing better than working at speed on the hoof. That skill will stay hugely valuable. But there feels like a clash between this and the more data-driven, strategic approach of AI. It’ll be interesting to see how this works itself out.

Reputational damage and lots of it. The application of bad AI in parts of the sector will be keenly felt. The self-driving car delivering meals on wheels to the wrong house. The very idea of self-driving cars delivering meals on wheels in the first place. This will all be bread and butter. The benefits of AI won’t be celebrated but the disasters absolutely will be. There is a huge role for comms in explaining – and warning about – the delivery.

‘Hey Google, what time does the tip close?’ Websites are useful but cumbersome things. Your organisation will not prosper if it can’t work with tools like Alexa. One idea kicking around is a box in the kitchen that talks to the local council website and flashes the colour of the right bin that needs to go out the night before, as sketched below. That’s AI right there, that is.
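To make that idea concrete, here is a minimal Python sketch of what such a kitchen box might do behind the scenes. The council API endpoint, its response shape and the bin-to-colour mapping are all invented for illustration; no such standard endpoint exists, so treat this as a thought experiment rather than a working integration.

```python
import json
import urllib.request

# Hypothetical endpoint and response shape, invented for illustration.
COUNCIL_API = "https://api.example-council.gov.uk/bins/next-collection"

# Map the council's bin types to the colour the kitchen box would flash.
BIN_COLOURS = {"recycling": "blue", "garden": "green", "general": "grey"}

def next_bin_colour(postcode: str) -> str:
    """Ask the (hypothetical) council API which bin goes out next."""
    url = f"{COUNCIL_API}?postcode={postcode.replace(' ', '+')}"
    with urllib.request.urlopen(url) as response:
        # Assumed response, e.g. {"bin_type": "recycling", "date": "2024-05-02"}
        data = json.load(response)
    return BIN_COLOURS.get(data["bin_type"], "grey")

if __name__ == "__main__":
    # A voice assistant skill would call this and flash the light the night before.
    print(f"Tonight, put out the {next_bin_colour('DY1 1AA')} bin.")
```

The AI is the voice assistant understanding the question; the plumbing underneath is just your organisation’s data made machine-readable.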

Learning. Ever learning. The comms person who thinks they’ve learned everything is the one who will be replaced. This is not remotely a bold statement. We’re seeing it. If the only skill you have is writing press releases, that’s not something you’ll be getting a new job with. But a range of skills and a willingness to learn gives you a chance of a career. AI just underlines this. Stephen Waddington’s advice to learn, read and keep learning is valuable.

Open the pod bay doors, Hal. In 2001: A Space Odyssey the human is faced down by HAL the computer, who refuses to open the pod bay doors. That moment is the nightmare scenario for humans. It’s the moment when computers take control. But I’m genuinely not seeing this in comms and PR just yet. HAL refusing to write the Facebook update? Probably not. R2-D2 software running the alerts and producing the reports for you? Then next week producing machine-learnt better reports? Absolutely.

Pic credit: Robot by Alexander Svensson / Flickr.