
When I was a kid, Dad would take us to see my Gran and Grandpa in the Lake District, where he was born.
At one end of Derwentwater, he'd point up at a rockface where little human specks in orange helmets could be seen hundreds of feet up, brown ropes trailing behind them. As we stopped and watched, those figures would carefully manoeuvre themselves and stretch an arm for a new grip on the crag in a slow-motion drama.
“They’re rock climbers,” Dad would tell us. “They’re all a bit mad.”
Years later I went on my own reading binge of rock climbing memoirs. Climber Joe Simpson, whose own climbing career was fired by such books, tells of overcoming deep fear through technique, clear thinking and process. Where any sane person would panic, a climber overcomes their terror by calmly working through a checklist and balancing the risk.
In the climbing community, there is special respect for the climber who turns round just short of the summit because they can see from their checklist that it is the only safe thing to do.
With AI, as with rock climbing, it is perfectly acceptable to be both terrified and excited at the same time.
Thankfully, good ropes, carabiners and orange helmets are now being supplied by the UK Government. They can help keep communicators safe. Yes, it is scary. Yes, it can be done. There are risks, and there is a checklist.
The latest piece of equipment to keep the AI Alpinist safe is the Government Communications Service (GCS) generative AI policy. It is a profoundly useful addition to your rucksack of comms safety equipment.
When social media emerged in the public sector, it was driven by a band of militant optimists. I'm proud to be one of them. Our mantra was that it was better to ask forgiveness than permission. With AI, I think it's now more a case of demanding permission. These documents will help you climb safely.
This will keep you safe
The most important thing about the GCS generative AI policy is that it will help keep you safe if you’ve got common sense.
At no stage is this a green light to go charging ahead with AI in any way you may dream of.
Firstly, the brass tacks. Generative AI basically means tools that use AI to help you create something, including text and images.
Let’s look at the key points.
Always use generative AI in accordance with the latest official government guidance.
This part of the policy ties itself safely by a length of climbing rope to the UK Government Generative AI Framework. I've blogged on this here. Securing it against Government policy gives you an unbeatable scissors-paper-stone option.
Uphold factuality in our use of generative AI.
You can't use AI to create something that misleads. This is a really important piece of policy equipment to be guided by. I can see this being useful to a communicator put under implied pressure to spin something that isn't there. It's also a public declaration of how the technology will be used.
Engage with appropriate government organisations, strategic suppliers, technology providers, and civil society, around significant developments in generative AI, and the implications for its use in government communications.
It's important that a dialogue is created and maintained to show the wider world how AI is being used. There is no doubt that uncertainty breeds fear and misinformation, which can damage a hard-won reputation.
Continue to review reputable research into the public’s attitudes towards generative AI and consider how this policy should evolve in response.
Again, this is important to root the work in a wider discussion and debate. For example, the Ada Lovelace Institute have been a beacon of common sense in the field. Their 2023 research on what people in the UK think about AI should be part of your reading list.
Government Communications may use generative AI where it can drive increasingly effective, and engaging government communications, for the public good, in an ethical manner.
This is an absolute winner of a paragraph. Print it out and memorise it. It is the starting pistol, the green light, the permission granted and the opening of the door. In days to come people will look at this and be baffled that there was a time before this technology.
Interestingly, the document refers to first draft text, visuals or audio. It can also be a useful tool in making content more accessible. Note that this isn't waving through the final draft sight unseen. To borrow the title of the CIPR document, humans are still very much needed in this process.
Government communications aims to build trust in our approach through acting transparently.
In this section, GCS say that permission will be sought before using a human as an avatar. In plain language, an avatar is a computer-generated representation of a person. It can be entirely off-the-shelf, created using some of the tools already available. The problem is that off-the-shelf avatars can have an American accent or come over as insincere.
What this line also tackles is seeking people's permission before their likeness is converted into an avatar. This could be useful for HR, to create a training avatar that talks you through processes. Tools such as veed.io can do this, although the cost is price on application.
The benefit of having a human avatar is clear. If you're in the Black Country, a Black Country accent will land better with the local audience. It can also speed up and cut the cost of training video production. However, I can only see this working in HR if the content is clearly marked as AI.
I’m really not sold on the idea of an avatar spokesperson tackling a thorny issue.
We will clearly notify the public when they are interacting with a conversational AI service rather than a human.
This is essential. People have mixed views about AI and feel far happier when they are told they are speaking to a robot. This chimes with EU regulations and, to me, is common sense. We generally don't mind talking to a customer service live chat pop-up, if it's marked as AI, asking some basic questions that a human operator can then use to help us.
Government communications will not apply generative AI technologies where it conflicts with our values or principles.
This makes sense, but it's probably worth spelling out. Just because you can doesn't mean you should.
Government communications will not use generative AI to deliver communications to the public without human oversight, to uphold accuracy, inclusivity and mitigate biases.
Again, humans are involved with this process.
A useful template for communicators
Of course, this is handy if you are a government communicator, but it's also useful if you are in the wider public sector or even the third sector.
So much hard work, I'm sure, has gone into this. It would be daft not to take advantage of the learning. Tying what you are looking to do in your own team to these principles, or basing your own version on them, is common sense.
Huge credit for those involved with this.
I deliver training that now includes the AI elements you need: ESSENTIAL COMMS SKILLS BOOSTER and ESSENTIAL VIDEO SKILLS REBOOTED.