AI TOOL: How the Generative AI Framework for HM Government can help comms people

UK Government has released a hugely useful document that sets a path for comms teams and others to use AI safely.

The Generative AI Framework for HM Government is a 74-page document published by the Central Digital and Data Office. It sets out exactly how you can and can't use generative AI. In other words, tools like ChatGPT that create text, audio, video and images.

What's also striking is the commitment to update the document as our collective understanding changes and evolves. That's really good to see, as it means the framework won't be preserved in aspic.

Here’s what they say.

The 10 common principles to guide the safe, responsible and effective use of generative AI in government organisations

Principle 1: You know what generative AI is and what its limitations are

This encourages people to learn about AI to understand what it can do, what it can't do and what the risks are. Generative tools are designed to be plausible rather than accurate.

Principle 2: You use generative AI lawfully, ethically and responsibly

This puts a responsibility on you to act within the law, whether that be copyright or data protection. It also makes the point that AI should not replace strategic decision making.

The principle also draws on the AI regulation white paper's fairness principle, which states that AI systems should not undermine the legal rights of individuals and organisations, and that they should not discriminate against individuals or create unfair market outcomes.

Principle: Fairness

Definition and explanation

AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law.

Fairness is a concept embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people.

Regulators may need to develop and publish descriptions and illustrations of fairness that apply to AI systems within their regulatory domain, and develop guidance that takes into account relevant law, regulation, technical standards, and assurance techniques.

Regulators will need to ensure that AI systems in their domain are designed, deployed and used considering such descriptions of fairness. Where concepts of fairness are relevant in a broad range of intersecting regulatory domains, we anticipate that developing joint guidance will be a priority for regulators.

A pro-innovation approach to AI regulation, UK Government, 2023

Principle 3: You know how to keep generative AI tools secure  

This talks about the importance of only allowing AI tools to use the data you want them to, and not giving them free rein across areas where sensitive personal data is stored. It recommends checks to guard against malicious use and to make sure tools are not leaking data.

Principle 4: You have meaningful human control at the right stage

This talks about the need for humans in the process. Someone is needed to review what the outputs are producing, as well as the tools and data that were fed into them in the first place.

Principle 5: You understand how to manage the full generative AI lifecycle 

This looks at the importance of understanding key terms such as AI drift, which describes a tool losing focus and deviating from its original purpose. It also covers hallucinations, where fake newspaper stories or academic research, for example, can be conjured up to prove a point or argument.

Principle 6: You use the right tool for the job

This looks at the importance of selecting the right tool for the job. It encourages the use of generative AI when it is the best placed tool. In order to do this, it implicitly encourages the user to learn and experiment in safe spaces. How else would you know what the best tool is if you don't know how to use them?

Principle 7: You are open and collaborative

This encourages people to work with other parts of Government who are experimenting in the field. 

Principle 8: You work with commercial colleagues from the start

This encourages working with commercial colleagues from the start to understand the limitations of generative AI tools. It shouldn't just be technical people in Government feeding into that decision making.

Principle 9: You have the skills and expertise that you need to build and use generative AI

Using generative AI needs skills, such as the ability to ask a good question – also known as a prompt. Prompt engineering – refining the questions you ask – is one such skill that's needed.

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place

There needs to be governance of the AI process. You need to understand the risks and mitigate them early in the process.

Conclusion

UK Government has been keen to develop the UK as a place where AI innovation takes place. This document is a useful tool for making sure AI is used responsibly, in a way that people inside and outside the organisation can be reassured by.

The 10 principles serve as an anchor point for responsible AI use.

You can use them in the rest of the public sector, though you'll probably have to explain them. What you can do is point to a trusted organisation as the basis for what you are doing.

Of course, if you’re not in the UK you’ll have to look at your own home government’s approach.

Trust is the absolute issue when it comes to adopting AI. There is suspicion of AI in the wider population, and using tools that people don't understand with no safeguards in place is not only reckless but also career limiting.

One dilemma does face me. People in the comms and PR community are not especially keen on AI, and there is not the space and capacity for people to learn. There are no Google Fridays that allow self-discovery and experimentation. With that in mind, learning under your own steam is to be encouraged, no matter how difficult.
