TRUST WARS: Yes, the public sector should be clear about how it uses AI

When I was a kid I'm sure I was given a lecture on how a reputation is so hard to build and so easy to lose.

Maybe it was for something pretty minor although – full disclosure – eight of my mates and I were suspended for a day in the sixth form for drinking alcohol on Cannock Chase while we were running an orienteering checkpoint.

I told my kids this a few years ago and they were both – again, full disclosure – ASTOUNDED. 

Reputation and trust also apply to public sector institutions. In the UK, trust in Government is scarce: the Edelman Trust Barometer records a 12-year low, with just 31 per cent of people in the UK trusting Government institutions.

Trust is also easy to lose and hard to build. Look at the row over Kate Middleton's photoshopped Mother's Day picture.

Never mind misleading Photoshopping: AI can demolish trust in an institution overnight.

What made me reflect on the issue of identifying AI content was the Scottish Government's bold announcement that all public sector bodies north of the border will be required to register their use of AI for projects. Importantly, this logs projects rather than all AI use. At the moment the register is voluntary, but it will be the first in the UK to become mandatory.

What's on the Scottish AI Register now?

A quick look at the AI Register shows just three projects. They include a tool that shows how vulnerable children may be to exploitation and a virtual concierge assistant to help you choose the right tool for blind or deaf people to take part in civic society.

The benefit of being transparent

Back in the day, Tom Watson MP was a junior Minister responsible for the Civil Service (full disclosure: Tom was a very approachable contact when I was assistant chief reporter in the Express & Star’s Sandwell office). 

One weekend, Tom crowdsourced what should go into the first draft of the civil service social media guidance. This included a suggestion that you declare your connection to the Civil Service whenever you used social media on a civil service matter. I've always thought this broad approach was a good idea.

If you declare how you're using AI, this can only build trust. There is no 'gotcha' moment, but there may be a debate about the methods you use. And if you can't justify those methods, should you even be using them?

Why setting out how you use AI is a good idea

For me, yes: a comms team should set out how it uses AI.

Indeed, Meta already requires that content created with AI is labelled. So, images and video created with AI tools need to be identified. But so too must text that's been shaped with a tool like ChatGPT and posted to a Meta platform such as Facebook, WhatsApp, Threads or Instagram.

Not only this, but in the UK uploading deepfakes without the subject's consent is already a crime. I cannot sensibly think of a time when a public sector comms team would create such a deepfake without the subject's permission. However, the state of political campaigning in America is another thing entirely.

I’d be interested to hear what others think. 
