
One of the striking things from a UK Government survey earlier this year was a word cloud in which the largest word was ‘scary.’
Even advocates of AI, and I'd include myself in that, have moments where they find themselves blindsided by something. But that's to be expected, isn't it? Any innovation has moments of excitement and worry.
So, with that in mind, I thought I'd write something about what a public sector comms person needs to know about audio and AI.
One other reason for writing this is that so much of the stuff out there is generic AI think pieces. They come from the tech press, academia, the sector or geeks. Nothing wrong with that, but they don't have the public sector filter that has to be layered on. What can be used safely? What can't?
Some basic AI terms
As with any field, there’s some jargon with this.
Artificial Intelligence – or AI – is the ability of computers to carry out tasks normally done by humans.
Generative AI is the ability of AI systems to come up with images, text, video, code or ideas. It can do this because it draws on vast amounts of data to produce new content rapidly.
Large Language Models – or LLMs – are the technology behind tools like Copilot, ChatGPT, Claude and Meta AI, generative AI tools that can help you produce new content. They are trained on vast amounts of data.
Avatar. This is a human-like head and shoulders that can deliver the message you want to give through video.
That’s the jargon. Here are some things that public sector people need to factor in.
What public sector people need to bear in mind
Trust. Working with police, the number one issue for them is not ‘can we?’ but ‘should we?’
Trust is hard earned and easily lost. Right now, people are suspicious of AI and are not totally won over by it. But drill into that and it’s clear that it depends on what AI is being used for. Data from the Ada Lovelace Institute and Alan Turing Institute survey in 2025 shows that almost 90 per cent of people in the UK are happy to see AI used for cancer diagnosis, for example. The same survey shows we are least happy about mental health chatbots, with only about a third fine with them.
Be open and transparent. The UK Government AI playbook is a magnificent document which I recommend everyone read. It makes clear that transparency is essential. Tell people that you are using AI and how you are using it. The Scottish Government, for example, has a webpage that sets out what tools are being used and how.
Have a policy. Right now, it often feels like the Wild West, with AI being used without too much in the way of oversight. Have a policy for your organisation. If needed, cut and paste huge swathes of the UK Government playbook.
GDPR. This is absolutely critical. How you use personal data is very much your responsibility. If you want to add someone to an email list, you need their permission. You also need to tell people exactly what you’ll do with their data.
Information governance. This is how a public sector organisation looks after data legally, ethically and securely. Unless you know how the unpublished data you are uploading is being stored, don’t do it.
The three potential areas where AI is used in public sector comms
AI can have uses in many places across the organisation you work for.
Explaining what service areas are doing with AI. For example, NHS Grampian has been leading a trial which uses AI to help consultants diagnose breast cancer. The trial showed diagnosis times reduced from 14 days to three, and accuracy improved. Being able to explain this to the outside world demands a basic knowledge of AI for public sector comms.
Idea generation. A recent survey I carried out showed 76 per cent of public sector comms people using AI for this purpose. That’s the largest category, beating spell check and grammar into second place.
Content generation. This means creating content, be that words, images, video or audio.
Okay, so how about audio?
This is where the impressive website blurb hits the reality of working in the public sector.
Meeting note takers. Big red klaxon. Unless the tool you are using is secure, don’t use a meeting notetaker. At a stroke this knocks out a number of AI tools. The notetaker silently sitting in on a serious case review can have the data it is collecting accessed without anyone knowing. Don’t do it.
Chatbots. The comms team may be responsible for the website, and chatbots may be relevant here. In particular, users could use their voice to chat with a bot installed on the website. That’s fine, but where would the data be going? Is it secure? An insecure off-the-shelf tool may look great on its own website but could land you in trouble.
A generic voice in video or your web content. There are several platforms that can give you a generic voice or avatar. HeyGen and veed.io are a couple. TikTok has had this in its editing app for a long time. But just because you can doesn’t mean you should. I’ve seen the occasional use of these tools for public-facing video content and I’ve never been wholly convinced. A Home Counties voice delivering a message to people in the Black Country is not the best option, for me.
Translations. There are websites that assure you they can translate your video into different languages. Right now, avoid them unless you have the output checked by a native speaker. They are nowhere near 100 per cent accurate, and delivering a public health message that is 20 per cent wrong is a problem.
An AI rendition of someone’s voice. Now, this is where it gets interesting. You can add someone’s voice to an AI tool, but you’d need their permission. For me, you’d need their permission for each use, and you also run the risk of that person withdrawing their consent. The benefit of a local voice talking to you is obvious, but this route is more problematic.
AI music. Tools like suno.ai can create music from the lyrics you provide, in the style you suggest. So, the lost 80s synth-pop classic that celebrates recycling really is possible.
With trust an issue, I can see a potential role for in-house training videos. However, I can’t see public-facing videos on contentious subjects flying too well.
Conclusion
Just because you can doesn’t mean you should. Be open and clear when you are using an AI voice tool. Make sure any data that is collected is secure. The obstacles in the path of public sector comms aren’t there to block use but to channel you towards using it safely.
Picture credit: Sony Walkman By Retired electrician – Own work, CC0.
I deliver training to help you make sense of the changing landscape ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.