GCS GUIDE: Your guide rope for using generative AI in public sector comms is here

When I was a kid, Dad would take us to see my Gran and Grandpa in the Lake District, where he was born.

At one end of Derwentwater, he’d point up at a rockface where little human specks with orange helmets could be seen hundreds of feet up, brown ropes trailing behind them. As we stopped and watched, those figures would carefully manoeuvre themselves and stretch an arm for a new grip on the crag in a slow-motion drama.

“They’re rock climbers,” Dad would tell us. “They’re all a bit mad.”

Years later, I went on my own reading binge of rock climbing memoirs. The climber Joe Simpson, whose climbing career was fired by such books, tells of overcoming deep fear through technique, clear thinking and process. Where any sane person would panic, a climber overcomes their terror by calmly going through a checklist and balancing the risk.

In the climbing community, there is special respect for the climber who turns round just short of the summit because their checklist tells them it is the only safe thing to do.

With AI, as with rock climbing, it is perfectly acceptable to be both terrified and excited at the same time.

Thankfully, good ropes, carabiners and orange helmets are now being supplied by the UK Government. They can help keep communicators safe. Yes, it is scary. Yes, it can be done. There are risks and there is a checklist.

The latest piece of equipment to keep the AI alpinist safe is the UK Government’s Government Communication Service (GCS) generative AI policy. It is a profoundly useful addition to your rucksack of comms safety equipment.

When social media emerged in the public sector, it was driven by a band of militant optimists. I’m proud to be one of them. Our mantra was that it was better to ask forgiveness than permission. With AI, I think it’s now more a case of demanding permission. These documents will help you climb safely.

This will keep you safe 

The most important thing about the GCS generative AI policy is that it will help keep you safe if you’ve got common sense.

At no stage is this a green light to go charging ahead with AI in any way you may dream of.

First, the brass tacks. Generative AI basically means AI tools that help you create something. This includes text and images.

Let’s look at the key points. 

Always use generative AI in accordance with the latest official government guidance. 

This part of the policy ropes itself safely to the UK Government Generative AI Framework. I’ve blogged on this here. By securing your work against Government policy, it gives you an unbeatable hand of scissors, paper, stone.

Uphold factuality in our use of generative AI.

You can’t use AI to create something that misleads. This is a really important piece of policy equipment to be guided by. I can see it being useful to a communicator put under implied pressure to spin something that isn’t there. It’s also a public declaration of how the tools will be used.

Engage with appropriate government organisations, strategic suppliers, technology providers, and civil society, around significant developments in generative AI, and the implications for its use in government communications.

It’s important that a dialogue is created and maintained to show the wider world how AI is being used. There is no doubt that uncertainty creates fear and misinformation, which can damage a hard-won reputation.

Continue to review reputable research into the public’s attitudes towards generative AI and consider how this policy should evolve in response.

Again, this is important to root the work in a wider discussion and debate. For example, the Ada Lovelace Institute have been a beacon of common sense in the field. Their 2023 research on what people in the UK think about AI should be part of your reading list.

Government Communications may use generative AI where it can drive increasingly effective, and engaging government communications, for the public good, in an ethical manner.

This is an absolute winner of a paragraph. Print it out and memorise it. It is the starting pistol, the green light, the permission granted and the opening of the door. In days to come people will look at this and be baffled that there was a time before this technology. 

Interestingly, the document refers to first draft text, visuals or audio. It can also be a useful tool in making content more accessible. Note that this isn’t waving through the final draft sight unseen. To borrow the title of the CIPR document, humans are still very much needed in this process.

Government communications aims to build trust in our approach through acting transparently.

In this section, GCS say that permission will be sought before using a human as an avatar. In plain language, an avatar is a computer-generated representation of a person. This can be entirely off-the-shelf, created using some of the tools already available. The problem is that off-the-shelf avatars can have an American accent or come across as insincere.

What this particular line also tackles is seeking the permission of people to have their likeness converted into an avatar. This could be useful for HR to create a training avatar that talks you through processes. Tools such as veed.io can do this, although the cost is price on application.

The benefit of having a human avatar is clear. If you’re in the Black Country, a Black Country accent will land better with the local audience. It can also speed up and cut the cost of training video production. However, I can only see this working in HR if it is clearly marked as AI.

I’m really not sold on the idea of an avatar spokesperson tackling a thorny issue. 

We will clearly notify the public when they are interacting with a conversational AI service rather than a human.

This is essential. People have mixed views about AI and feel far happier when they are told they are speaking to a robot. This chimes with EU regulation, which to me is common sense. We generally don’t mind talking to a customer service live chat pop-up if it’s marked as AI and asks some basic questions a human operator can then use to help you.

Government communications will not apply generative AI technologies where it conflicts with our values or principles.

This makes sense but it’s probably worth spelling it out. Just because you can doesn’t mean you should.

Government communications will not use generative AI to deliver communications to the public without human oversight, to uphold accuracy, inclusivity and mitigate biases. 

Again, humans are involved with this process. 

A useful template for communicators

Of course, this is handy if you are a government communicator, but it’s also useful if you are in the wider public sector or even the third sector.

So much hard work I’m sure has gone into this. It would be daft not to take advantage of the learning. To tie what you are looking to do in your own team to these principles or to base your own version on them is common sense. 

Huge credit for those involved with this.

I deliver training that now includes the elements of AI that you need: ESSENTIAL COMMS SKILLS BOOSTER and ESSENTIAL VIDEO SKILLS REBOOTED.

AI OMG: Strengths and weaknesses of OpenAI’s Sora text-to-video for the public sector

Every week I’m reading, listening and updating my knowledge on AI tools that public sector comms people can use.

Up till now I’ve not been that impressed by the video production tools I’ve come across.

They can be clunky and tend to miss the point.

However, OpenAI’s new tool Sora looks truly astonishing.

It takes text prompts and turns them into video.

First, I’d like to show you some examples and then weigh up the pros and cons.

Example 1: A Tokyo street

In this clip, the prompt is quite detailed.

Prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.

It’s amazing, isn’t it?

Example 2: A spaceman in a knitted motorbike helmet

While the first example hung back from the subject, the second goes in close.

Prompt: A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.

Again, astounding.

Example 3: Reflections in a train window

I’m sure that some things are easier to produce than others. The difficulty of replicating reflections in a window, I’d imagine, is towards the top of the list.

Prompt: Reflections in the window of a train traveling through the Tokyo suburbs.

And it achieves the look beautifully.

Example 4: Grandma’s birthday

While the other examples have dealt with people in different ways, this one looks at a group.

Prompt: A grandmother with neatly combed grey hair stands behind a colorful birthday cake with numerous candles at a wood dining room table, expression is one of pure joy and happiness, with a happy glow in her eye. She leans forward and blows out the candles with a gentle puff, the cake has pink frosting and sprinkles and the candles cease to flicker, the grandmother wears a light blue blouse adorned with floral patterns, several happy friends and family sitting at the table can be seen celebrating, out of focus. The scene is beautifully captured, cinematic, showing a 3/4 view of the grandmother and the dining room. Warm color tones and soft lighting enhance the mood.

Interestingly, OpenAI have also set out the weaknesses of such an approach. For example:

Weakness: Simulating complex interactions between objects and multiple characters is often challenging for the model, sometimes resulting in humorous generations.

Conclusion

The images are astounding in their quality. They look like real video, whereas previous tools didn’t quite ring true.

The visual cues you may look for, like reflections in windows, easily confound the brain.

That’s real, isn’t it?

Only, it isn’t.

Right now, OpenAI are pulling a blinder by teasing amazing content while restricting use of the product. People are talking about it but can’t yet use it. That will change.

As we can’t use it, we can’t see how hard it is to experiment with it and produce good content.

The pitfalls of Sora AI video

For the public sector, the flaw isn’t yet cost or even the lack of a pathway to start using it. The UK Government has released guidelines to encourage its use.

I feel like I’m looking a gift horse in the mouth when I say this, but the issue for the public sector may be that right now the content is too generic.

A commercial campaign could do something enlightening with it. Filmmakers, I suspect, will make something useful with it too.

I’ve seen an AI how-to video made by a council near me with a generic English accent, and I hated it for its insincerity. I was left feeling played.

One issue with web content for a council, NHS trust, police force or fire and rescue service is that generic content doesn’t do so well. As I’ve blogged this week, pictures of people work really well. They are real, they are of people and they capture the area. So, generic shots of Tokyo, yes. Shots of Dudley in the West Midlands, probably not.

Right now, shooting your own content of people and landmarks tops it. But can AI-made content be used to supplement it? We’re already used to futuristic artists’ impressions of new developments. Will we go for AI-made content of a new town centre development? Or of what a new hospital ward would look like? I’m guessing yes.

Of course, such is the onward pace of AI that this hurdle may well become surmountable. What I’ve just written may seem laughable quite quickly, I accept that. An interface with Google Street View and Google Photos could be one way to do it, I’m speculating. But wouldn’t Google be building their own equivalent?

Oh, heck, my head hurts.

COMPUTER WORLD: What AI tools like ChatGPT mean for PR and communications

Bear with me, I’m going to open gently with a story before I move to the central point because I think the central point is almost too large to grasp.

When I worked in local government, there was a man in charge of committee clerks. He was grey-haired, always approachable and always helpful. He made sure committee meetings ran smoothly and in accordance with the law and the constitution. He was a deep well of information and anecdote.

I remember being in his office one day and he pointed at the grey filing cabinet in the corner of the room.

“Back in the day,” he said. “That’s where all the archives were stored. Every set of agenda papers, every minute, every decision. It was all there. There used to be a queue of people asking to check things and we’d have someone who would check things for them.”

His tone darkened.

“Then the internet came along and someone in their infinite wisdom decided that was better.”

At first, I thought he was joking. He wasn’t.

As the gatekeeper of that information he was an important man. He was still important. But something told me he missed being that gatekeeping librarian. For the first 20 years of his career, he was the internet as far as constitutional matters were concerned.

The internet meant that anyone could do it when previously just one person could. There were many winners and one loser.

Something is going on with ChatGPT

I don’t think I’m overselling it to say that something is happening right now that is truly revolutionary and I’m not sure if we’ve got our heads around it.

In late 2022, the AI chatbot ChatGPT, built by the Microsoft-backed OpenAI, was released. It plugs into 20 years of internet knowledge to produce solutions to the tasks it is given. Google finds you the links to help you piece together the solution eventually. ChatGPT finds the solution and hands it to you with a bow tied around it.

I’m coming to the main point.

If you have earned your living from the knowledge economy your job is about to be turned upside down.

What you’ve spent years working and studying for can be replicated in seconds by ChatGPT.

As an experiment, I gave it a few tasks.

I used it to create a tenancy agreement under the law of England & Wales that favoured the tenant here.

A Dad’s Army scene that involves Captain Mainwaring, Sgt Wilson and a tech bro from Shoreditch looking to move to Warmington-on-Sea here.

I also asked it to write a communications strategy for a charity that looks to communicate with young people and to set out the channels here.
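If you want to go a step beyond the chat window, the same kind of task can be scripted against OpenAI’s API. Here is a minimal sketch, and to be clear it is my own illustration rather than anything from the tools or guidance mentioned above; it assumes the official openai Python package (v1 or later), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name.

from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# Ask for a first-draft comms strategy; the prompt mirrors the experiment above
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; use whatever your organisation has approved
    messages=[
        {"role": "system", "content": "You are a public sector communications officer."},
        {"role": "user", "content": "Draft an outline communications strategy for a charity "
                                    "that wants to reach young people, and suggest the channels."},
    ],
)

# This is a first draft only: a human still needs to check, shape and schedule it
print(response.choices[0].message.content)

Even scripted, the output is just that: a first draft that needs human hands on it before it goes anywhere near the public.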

Other people have used it for far greater tasks.

Like writing a Seinfeld script, debugging code, designing meal plans, rivalling Google search, writing a piano piece in the style of Mozart, writing verse in the style of Shakespeare on climate destruction, or composing a poem about electrons in iambic pentameter.

Looking at what I asked it to write, it seems to be about 75 per cent there.

It looks as though it was written by a human and it makes sense.

The thing is, AI looks to improve itself constantly. These are the baby steps. Far more powerful tools are expected in the next few years.

What ChatGPT and AI tools mean

There is a school of thought that says that we are moving overnight from being information creators to information curators.

The most extreme predictions are that potentially everyone who has a career in the knowledge economy could be replaced. Why pay for five £40,000-a-year professionals, the argument goes, when two using AI can do the job?

AI companies that have written about the industry insist that AI is not to be feared. They’re here to help, they say, not replace. There’s part of me that’s not so sure they even believe that themselves.  

Many schools and universities that have started to wake up to the threat have moved to ban ChatGPT from submitted work here and here. This prompts the idea of an arms race between AI essay writers like ChatGPT and software that can detect AI writing. The internet Q&A site Stack Overflow has also banned ChatGPT for providing answers that are not reliable.

What do ChatGPT and AI tools mean for comms and PR?

On the face of it, a tool like ChatGPT is a threat. It can produce what you do to an increasingly good standard. That’s dangerous, surely? Well, partly yes and partly no.

If we step aside from the shock of seeing the outline of a comms plan being produced by a robot, we need to ask ourselves the question ‘then what?’.

A comms plan on its own is an attachment that sits on a hard drive. It won’t produce and post content by itself. Right now, that needs some human involvement. Sure, ChatGPT could help produce the rough content, but it still needs shaping and scheduling.

What comes out of an AI tool is not 100 per cent foolproof. So, there’s still a need for humans.

Right now, tools such as ChatGPT can be a help with the day-to-day. It’ll be fascinating to see where they take us in two, five and ten years.
