STOCK CLIPS: How to build a B-roll library for a public sector comms team

In the words of Mark Zuckerberg, video has been the prime way people consume content online for several years.

You may be used to image libraries of visual assets, but how do you create a B-roll library of shots that can be reused in future content?

B-roll is the name for the supporting footage that illustrates your film. Maybe that’s buses in the town centre, summer in the park or social care staff talking to a client.

Now, I don’t think every film should feature it, but it is certainly useful from time to time.

Here are some ideas for you and some pitfalls to avoid.

GDPR and shooting for the public sector 

Firstly, anything you shoot in the public sector is subject to GDPR. So, under the ICO’s rules you need ‘explicit permission’ from people if they are recognisable.

If you are conducting an interview, then get their permission and explain exactly how the footage will be used. That’s what’s meant by the ‘explicit’. 

So, the resident saying: ‘I think Dudley Council is great’ can be used for the video of the park event the council stepped in to save. However, the quote can’t be re-used for the budget cuts video unless the speaker agrees.

My colleague Julia suggests using something like Google Forms to create a form which can then be adapted for each job. Each form URL can then be run through a tool like qr-code.io to generate a QR code you can take with you while you are out and about. The interviewee then uses the QR code from their phone to access the link. Smart.

This avoids the issue of building up a sheaf of paper that then gets rained on or left in the car.

For big events, the ICO suggests a catch-all permission sign by the gate to the park informing people that Dudley Council is filming a video for social media that celebrates the park fun day. People can contact a steward if they want to opt out and can be given a coloured lanyard. That way the videographer knows to avoid them on the day or in the edit.

With children, make sure you get that explicit permission.

Interestingly, journalism isn’t covered by GDPR. The Councillor accused of punching the bus driver can’t tick a box saying ‘no publicity’ as they arrive at court. Nor should they. 

Shooting B-roll 

Most B-roll you’ll want to shoot probably won’t have people who are identifiable. It can be things like the park in summertime or buses running through the town centre. This stock footage can then be repurposed in future for other film projects.

There’s an obvious advantage for shooting your own B-roll. 

Whatever you do shoot is likely to have local landmarks or be recognisable. Where I live, the buses are National Express West Midlands. If I see London buses illustrating a film about subsidised buses through Quarry Bank all credibility in the film has gone.

Do get into the habit of shooting B-roll when you are out and about.

Here’s what to do:

  • Shoot 20 seconds of landscape footage
  • Shoot 20 seconds of vertical footage
  • Shoot some alternative perspectives of the same in landscape and vertical.

Creating your own B-roll library

The first thing to do is save the file with the right keywords.

So, a file name “Transport_Halesowen_bus_station_landscape” may work for the landscape shot of the Queensway bus station and “Transport_Halesowen_bus_station_vertical” will work for the upright.

Remember to use the same system of labelling for all your B-roll.
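If you want to keep that labelling system consistent across a whole team, a tiny helper can enforce it. Here’s a minimal sketch in Python; the function name and the four-part category/location/subject/orientation order are my own assumptions rather than anything prescribed:

```python
def broll_filename(category: str, location: str, subject: str, orientation: str) -> str:
    """Build a consistent B-roll file name from its four parts,
    e.g. Transport_Halesowen_bus_station_landscape."""
    parts = [category, location, subject, orientation]
    # Replace spaces so each part stays a single underscore-joined token
    return "_".join(p.strip().replace(" ", "_") for p in parts)

print(broll_filename("Transport", "Halesowen", "bus station", "landscape"))
# Transport_Halesowen_bus_station_landscape
```

Run the same helper over every clip before upload and the landscape and vertical versions of a shot will only ever differ in the final part of the name.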

Now that’s been shot, where do you store it?

Well, there are commercial providers I’ve looked at, but they start from around £4k a year and your budgets may not stretch to them.

I’m not totally convinced they are needed.

A perfectly workable alternative is to use Google Drive or Microsoft’s OneDrive. Create folders for the subjects you’d like: parks, transport, social care or whatever works best.
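The folder structure itself is just a handful of top-level subject folders. As a minimal sketch of the idea in Python (the `B-roll` root folder name is my own assumption; the subject names come from the text):

```python
from pathlib import Path

# Create a B-roll root with one folder per subject area
for subject in ["Parks", "Transport", "Social care"]:
    Path("B-roll", subject).mkdir(parents=True, exist_ok=True)
```

In practice you’d just create these folders by hand in Drive or OneDrive; the point is to fix the subject list once and stick to it.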

Here’s an example of the areas to save as part of your file name.

Having a file naming system like this will help you recover it again. Feel free to change, adapt or simplify. 

Here’s an example of wide and vertical B-roll. This is St John’s church, Halesowen, outside my office.

And here’s the landscape shot. This was shot straight after the upright.

I’ve kept the low murmur of the churchyard as audio in these cases, but there is an argument for removing sound from general shots like these. Not everyone remembers to adjust sound levels on each clip in the edit.

Spending time creating B-roll

There may be an argument for commissioning a videographer to create some B-roll for you. If you can’t do that, you may want to devote some time to gathering footage you know you are likely to re-use. That time spent can very quickly pay for itself.

If you are out and about filming, then once you’ve posted your video, take a few minutes to add your individual shots to the library.

B-roll libraries

Depending on who you are, B-roll libraries could be an option. They can be quite generic and the danger would be to use some footage which clearly wasn’t from the area you are talking about. A few years ago, a designer for Birmingham City Council famously used a shot of Birmingham, Alabama in some literature to wide Brummie derision. This is a risk you need to be aware of if you go down this path.

They can also be quite pricey. 

A good tip would be to make a search for what content is available. 

Here’s an example of bus B-roll from Pixabay. It was found using the search ‘UK bus’. Closer examination indicates it was shot in Liverpool.


Under the terms and conditions of Pixabay, you don’t have to credit the website or even the person who uploaded it, although they say they encourage it. You can also give the uploader a few quid, too. Again, that’s optional.

Almost all footage in B-roll libraries is landscape, so if you were looking to create in vertical you’d need to import the clip as a cutaway and layer it over in the edit.

I’ve had a look at different B-roll providers and made some test searches across UK, UK regional and UK rural options. The cost and quality varied.

Most libraries allow their assets to be downloaded and stored within your own libraries. Double check. 

Of course, you also need to be alive to the fact that a slick drone shot of a town centre at night you are using in a film to illustrate your council’s night time economy may lead to questions. Like ‘how much was that drone?’ 

External B-roll libraries I’ve looked at aren’t strong on people content. So, if you are making content for the NHS or social care, there may not be much there for you.

Lastly, be alert to the fact that some external libraries may have AI-generated footage. This may or may not be in line with your AI policy.


I deliver training to help you make sense of the changing landscape ESSENTIAL AI FOR PUBLIC SECTOR COMMS, ESSENTIAL COMMS SKILLS BOOSTER, ESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

BUCKLE UP: What does AI mean for the future of PR skills? New research is out

Okay, I’ll strike a bargain with you: there’s some good news and some bad news about the skills needed in 2030.

First, the bad news. Research from academics working with Microsoft has been published and it doesn’t look pretty for public relations. It’s 24th on the list of 200,000 jobs most exposed to being replaced by AI.

Here’s the list of top 40:

So, what’s in the bottom 40 which are predicted to be the least likely to be replaced by AI? 

The answer to that is dredge operators, bridge and lock tenders, water treatment plant system operators, motor boat operators and foundry coremakers. In short, they are practical jobs that are not part of the knowledge economy.

Optimism

So, what optimism can be found from this?

In the cycle of shock, denial, bargaining and acceptance here’s denial.

Public relations is 24th on the chart. Queen’s ‘Don’t Stop Me Now’ only reached 24 when it was first released and that still has value.

The good news is that not every study and forecast appears bleak for communications people. The World Economic Forum’s 2025 forecast for the labour market not only doesn’t see decline, it sees some growth in the number of people employed.

Social media strategists are towards the top of this image, with media and communications also showing growth. That’s shown by the blue column.

Communications doesn’t show as much growth as AI specialists, but it’s heartening to see a more positive picture here. Before you ask, the jobs at the bottom are bank tellers and postal workers. You can see the full World Economic Forum Future of Jobs 2025 report here.

The report shows a third of UK’s core skills will change by 2030. Compared to other countries, that’s quite tame. That figure is almost at the bottom of the league table for the amount of change expected.

Now for bargaining and acceptance

On the Microsoft chart, as an English & History graduate I don’t see how historians are second. This is a job of interrogating files, papers, documents, interviewing and evaluating.

So, could the research be wrong? 

As with any forecasting there are elements of prediction and guesswork. It may come to pass and it may not. This is also research that Microsoft have a big hand in. 

It’s also important to remember we are on a hype cycle with AI. Research firm Gartner came up with this way of looking at technology: new ideas are followed by a peak of inflated expectations and then the fall into the trough of disillusionment. From that we can recover to the slope of enlightenment, where we start to work out what’s usable and what’s not. After this, if we are lucky, we reach the plateau of productivity.

Here it is visually:

This approach maps onto the dotcom bubble. At first, every start-up was going to change the world and then we realised it wouldn’t. Then we realised that Google and Amazon were really quite good. Then we hit the plateau of productivity.

UK Government talks of a £45 billion saving, trimming back six per cent of Government spending by harnessing AI. I’m not yet convinced by this. They are putting a lot of faith in hyped technology. Who knows if they are backing a winner or just Boo.com.

Yet, this also leads into a third truth. With technology we often over-estimate what the short term will look like and then under-estimate the impact long term. This is known as Amara’s Law. 

So, what does the future look like?

To be optimistic, one thing that AI doesn’t have is the lived experience, common sense and political nous that public relations, particularly in the public sector, needs. Many of the starter positions have long gone from comms teams. When I started, there was an employee working two and a half days a week scanning newspaper cuttings before circulating them to senior people. That went in the first wave of cuts following the Banking Crisis.

Yet, the public sector will only go as fast with AI as the public will let it. When I talk to police communicators, trust is the number one issue. Blow trust with a community and the police are in real trouble. At various points in history, when that goes a breakdown in law and order can take place. Think Toxteth, 1981 for example.

In short, the predicted pace of change does not factor in these very human variables.

The reality is that the world is changing rapidly and standing still isn’t going to work for many people. As communicators, we absolutely have to be prepared for it. 

Is the public sector ready for it? If I’m honest, I think many comms people would like it to go away.

When I post AI content, it performs about a third as well as that on more conventional comms. If you’ve read down to here, thank you. You could well be among the people who find a future with AI. There are those who don’t engage who simply won’t.

TEXT BOOK: Nine examples of an AI policy and what you can learn from them

Regardless of political orientation, Prime Minister Keir Starmer was right in saying that AI is the challenge for our generation.

How we respond to this and make it work will have a profound impact on the Britain we live in for decades to come.

How we get the foundations right will have a big bearing on how the future will pan out.

UK Government has been quietly doing some excellent work in the field for a number of years now. Their AI Playbook, which I’ll look at in this post, is a gold standard for any organisation. Boiled down, it has 10 principles which are jargon-free.

Transparency shines through this approach, as it does through so many well-written AI policies I’ve looked at. The question of transparency is a critically important one. In 2025, people are nervous about AI and maintaining trust is important.

I put it to you that no organisation can use AI on the quiet. It will come out. It is far better for you to pick when and where you shape this conversation. Do it at a time of your choosing. Don’t be left with an hour to draw up a statement in response to a media query based on an FOI request. You will curse that you didn’t do it sooner.

This is a comms issue and comms would be well served to help the organisation drive this conversation.

As the UK Government principles show, AI is not an IT issue. It is an IT, legal, information governance, senior leadership team, equalities, HR, comms and frontline services issue.

So, here are some policies from a range of organisations.

Remember to communicate

What needs to be made clear is that things do not stop with publishing a policy as a pdf attached to some meeting notes. That’s not communicating. That’s shuffling paper.

In the charity sector, the RNLI have made a public appeal for staff to get involved with their national AI review. They have set up an AI review group to evaluate the best path forward. I love this as an approach. By casting the net wide they are encouraging voices to come forward, not just the usual suspects. That’s brilliant. It’s also the opposite of IT in a room Googling to come up with a policy.

Why you need an AI policy 

Much can go wrong with thoughtlessly applied AI. One nightmare scenario is the serious case review discussed over a Teams call where a meeting note taker summarises the points. That personal data is later extracted by a bad actor and published online.

Or how about personal data about a member of your family being extracted from an AI tool?

Or people thinking you are up to no good because you use voice-generating software that doesn’t quite ring true.

The NHS is trusted by 80 per cent of the UK population to act in their best interest, while UK Government on 38 per cent narrowly beats social media companies. Trust is best gathered in drips over time and can be lost quickly.

NINE examples of AI policies

1. UK Government AI Playbook

This is a magnificent piece of work from UK Government. It does the job of an AI policy for the Civil Service. Even better, it also gives a template, or at the very least signposting, for how AI can be used in the public sector.

Principle 1: You know what AI is and what its limitations are.

Principle 2: You use AI lawfully, ethically and responsibly.

Principle 3: You know how to use AI securely.  

Principle 4: You have meaningful human control at the right stage.

Principle 5: You understand how to manage the full AI lifecycle. 

Principle 6: You use the right tool for the job.

Principle 7: You are open and collaborative.

Principle 8: You work with commercial colleagues from the start.

Principle 9: You have the skills and expertise that you need to implement and use AI.

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place.

This should be part of your reading if you are looking to understand how AI can be made to work in the UK public sector. In particular, the emphasis is on transparency.

2. Friends of the Earth Harnessing AI for Environmental Justice

As a campaign group set up to save the environment, Friends of the Earth can be pulled two ways on AI. Yes, AI uses electricity and by doing so can be harmful to the environment. But it can also be a useful tool.

I love what the campaign group have done with their approach. They’ve distilled it to seven main principles.

  1. Curiosity around AI creates opportunities for better choices.
  2. Transparency around usage, data and algorithms builds trust.
  3. Holding tech companies and governments accountable leads to responsible action.
  4. Including diverse voices strengthens decision making around AI.
  5. Sustainability in AI systems helps reduce environmental impact and protect natural ecosystems.
  6. Community collaboration in AI is key to planetary resilience.
  7. Advocating with an intersectional approach supports humane AI.

None of these will frighten the horses. By being broad principles, they are likely to be more adaptable and flexible. In a fast-moving environment this makes sense. Set out how to use ChatGPT all you like but what happens when a new tool is launched?

The charity also mark content they produce as being created with the help of AI and also set out how it was used. For example, the downloadable version of this on page 29 gives an appendix to set out how AI was used to produce it.

For example: 

In addition to accepting low-level autocomplete suggestions from Google Docs and fixing spelling mistakes, we used a collaborative notebook through NotebookLM.

I love this approach. This is the most granular example I’ve come across. Maximum transparency like this will start a conversation on how AI is being used.

3. Watford Council AI policy

This is a good example of what a policy should look like. It also builds into it a review every six months which is absolutely on the money. The approach also asks council staff to set out what they are using and why they are using it. This is good to see. They point to the rather jargon-filled algorithmic transparency recording standard set out by UK Government as a template to complete. This is not mandatory – yet – for the public sector but I do admire the approach. Using it, you can set out what you are using and why you are using it.

I very much like the requirement it sets out that content created with AI needs to be marked with this footnote:

Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content. 

The curious journo in me wonders what tools were used and how in a granular way. Again, this is a pdf and I’d love to see how Watford Council communicate all this to the public in an accessible way.

I also like the requirement to test new APIs and plug-ins. How it deals with hate or discriminatory inputs seems sensible, for example.

4. Leicester City Council Generative AI policy

Like Watford, Leicester also require a disclaimer to mark out content that has been created through AI. It also looks to capture the use of tools on council machines as well as staff’s own devices being used for AI. This is quite canny as there’s going to be many people using AI on the quiet. 

It also requires a data protection impact assessment to be carried out before a tool can be used. I’d love to know if there is a central repository of the tools that are being used and why.

5. Russell Group of Universities AI policy

I’m not a fan of it being presented as a pdf. But the five points at the heart of this are simple to understand. In a nutshell, it encourages universities to support staff and students to use AI so it can be used appropriately and effectively, says teaching should be adapted to include generative AI while academic rigour is maintained, and calls for universities to work together in the field.

By the looks of things, this is a one-off declaration rather than marking each piece of content when and where it is AI-created.

6. Cambridge University: How we use generative AI

This seems more public-facing. It’s presented as a webpage and in plain English. It sets out how the University will use generative AI. 

It also rules several things out. The University won’t create images from scratch although there is one to illustrate the post. It also won’t create text entirely using AI. It rules out using deepfake videos and also using voice generators unless it is demonstrating how these tools can be used. But it will use AI for photo editing and tools like ChatGPT for inspiration.

Interestingly, this is posted under the ‘staff’ tab when the audience is also the public.

7. Humber and North Yorkshire Healthcare Partnership AI Governance Policy

This is a really thorough example that covers a lot of bases. It sets out what it is and how it will be used. It’s thorough and robust. It sets out who is ultimately responsible. It names what it calls a Senior Information Risk Owner. A SIRO. God bless the NHS for its acronyms. Clearly, a great deal of work has gone into this and it looks like it’s not just been cobbled together on the hoof by IT.

However, it’s a pdf and I’d love to see what extra steps are being made to communicate this to staff and to patients. 

8. Bedfordshire Hospitals NHS Trust

This policy document covers the ground but also acknowledges the tactics needed in the scenario of drafting a letter to patients. Add a placeholder, it recommends. By the looks of it, the only stakeholders to have had input come from within the organisation. I’m not convinced this is as transparent as it’s possible to be.

9. St George’s and Priorslee Parish Council AI policy 

It’s fascinating to see a Parish Council also publishing an AI policy. It’s well thought through and covers the important bases. If a Shropshire parish of 11,000 souls can manage as an organisation to get their act together then what’s your excuse?

Creative commons credit: Line up buses in the 1980s by Les Chatfield.


OPEN SHUT: How open and transparent should public sector comms be when using AI?

In training, one of the 10 UK Government principles for using AI gets most attention.

Like the stone in the shoe on a long walk, the principle of being open and transparent gets the most thoughtful chin-stroking.

Now, in principle – that word again – everyone nods in agreement at the idea. Of course, we should be open. But exactly how? 

The UK Government’s AI Playbook sets out clearly how to work with this principle:

Where possible, you should engage with the wider civil society including groups, communities, and non-governmental, academic and public representative organisations that have an interest in your project. Collaborating with people both inside and outside government will help you ensure we use AI to deliver tangible benefits to individuals and society as a whole. Make sure you have a clear plan for engaging and communicating with these stakeholders at the start of your work.

But all of a sudden when faced with how to apply this people often start to feel uncertain. There’s nothing wrong with this. One of the gifts of a good public sector communicator is to spot the potholes in the road they are travelling down. With AI we are in new territory.

What do the public think of AI? 

There’s concern about what people will say if they find out you are using AI.

Maybe you’ve not fessed up to IT on how you are using it.

Polling data shows the public are uncertain about AI in many areas of life. While nine out of 10 people are positive about using AI for health diagnosis, less than a fifth are happy with it being used for political advertising, according to the Ada Lovelace Institute.

Not only that, but the public sector doesn’t have masses of friends right now. Local government budgets have had billions of pounds stripped from them by Government. It’s a situation other parts of the public sector can recognise too, be it the NHS, fire and rescue, or central government.

So, what do we do?

Well, if you’re using AI you could go for the Ostrich approach and hope nobody spots you.

But if word gets out – and it will – you’ll be playing catch-up to all sorts of lurid rumours of how what you secretly want are robot nurses in our hospitals or that AI will make everyone in the Town Hall redundant. To be fair, people have got legitimate concerns about AI and their jobs.

Far better to be, as the principle states, open and transparent.

What does open and transparent AI look like?

So, if the debate is not whether you should be transparent but how much, what should ‘how much’ look like?

Scottish Government have been the first in the UK to have a registry of projects and how AI is used. Interestingly, a trawl of the site shows the government comms team, insight team and marketing teams have acknowledged using the Brandwatch social listening tool.

Five minutes of searching Google News has not located a single story covering this fact.

So, it appears that being open and transparent would normalise the use of AI. This is how it should be. 

Can you dodge out of it? 

Now, you can say to yourself that you don’t have to worry about UK Government guidelines because you are not in the civil service. If that’s true, you won’t get a tap on the shoulder from a civil servant. But is that seriously good enough? 

Other parts of the public sector have been slower in getting their act together.

The tide of AI is rising far faster than the ability of policymakers to draw up sector-specific policy. 

If you’re smart, I’d urge you to put your own thought into it.

Granular or big picture?

If the Scottish Government example is big picture what about other tools? 

Well, if you are posting an AI-treated image to Facebook or Instagram then you need to mark it up as such. That’s been the case for some months. It’s the same on other platforms for images and video. 

I’ve not seen a requirement to mark as AI-generated text made with the help of a tool like ChatGPT or Copilot. In fact, LinkedIn encourages you to use AI tools to write the post. 

In local news, Reach plc has been using AI since 2023 to generate reporting, with this ‘Seven things to do in Newport’ piece one of the first acknowledged examples of using AI to generate content. It’s not marked as being written by AI. Since then, they have trimmed down writing times using AI tools.

Let’s not forget the curious case of the Bournemouth Observer, a site with fake journalists created by AI which closed down after being exposed by the Hold the Frontpage website. We don’t mind AI, the moral of the story appears to be, but we don’t like being misled.

How granular should you be?

You should be marking AI-assisted images and video as AI when you post to social media.

Should you also say if each individual post, web page or press release was created with AI? Or should you have a space on your team webpage that explains how AI is used? Or maybe, follow the Scottish model and have a sector-wide registry? 

These are questions that haven’t been resolved in the public sector.

Elsewhere, parts of the third sector are taking a lead on this by requiring ALL content to acknowledge the role AI has played in its creation.

For example: 

Staff are free to use AI tools if they wish for their own work, but are asked to make it clear to others when they do so, including in any work we publish. Hannah Smith, Director of Operations, Green Web Foundation

Friends of the Earth, in the policy document setting out their seven principles for how they will use AI, included transparency. In keeping with the spirit of this, they set out in an appendix to the document how they used AI. NotebookLM was used, they say, along with spell-checking from Google Docs. They add:

We used generative AI to generate text sparingly and didn’t use it to generate images at all. A small number of paragraphs in this article started out as AI-generated based on our prompts as a form of placeholder text while we built up arguments in other sections. These placeholders were then deleted, rewritten, edited and otherwise remixed. The vast majority of the article was written and edited without any generative AI. 

The question should not be if you should be transparent in using AI but how.

I deliver training to help you make sense of the changing landscape ESSENTIAL AI FOR PUBLIC SECTOR COMMSESSENTIAL COMMS SKILLS BOOSTERESSENTIAL MEDIA RELATIONS and ESSENTIAL VIDEO SKILLS REBOOTED.

30 Days of human comms #62: The JustPark app social advert

An elderly couple going shopping may be one of the greatest social videos I’ve ever seen.

The pair go with their son and use the JustPark app as a way to locate and pay for a parking space.

The video is shot POV by someone close to the family, so the couple, Michael and Teresa, behave naturally.

The couple look in their late 60s and get into the car. There’s a minor squabble about the number of times Teresa took her driving test. She says two. Her husband Michael says four.

They drive along and Teresa struggles with the idea of using a stranger’s drive to park.

“That’s trespass, isn’t it?” she asks worriedly.

They park. It’s fine. She’s converted.

You can see it here:

It’s a beautiful film filled with warmth and humanity.

You can’t fail to like them both.

It’s also a fast edit with a voiceover intro from what is possibly Michael and Teresa’s son.

It’s human, it’s not AI-generated and it’s filled with the rough edges of people’s relationships. There’s a feeling Teresa’s driving test has been discussed before. You are entering privileged space.

So, could this be replicated?

Well, maybe Michael and Teresa can’t come to your [Insert service here] but what it does open the field up to is for people to capture an experience.

So, what does the family make of the trip to the leisure centre? Open with the excitement of getting into the car, the drive, the heading into the baths, the post-swim drink, the children’s feedback in the car on the way home, maybe.

All this needs the consent of participants, of course.

But spend a couple of minutes on options and there’s a whole vista of experiences.

A shopping trip to the town centre, testing a smoke alarm, putting out the recycling, or whatever.

I love that this captures the lived experience rather than the corporate message or the slick marketing that falls flat in comparison.

Bravo.

AI: What does the rash of Gareth Southgate deepfakes tell us?

As public sector communicators we can expect deepfakes to try and throw elections… but as satire?

In the final days of Euro 2024 a rash of fake videos involving England manager Gareth Southgate were posted. But rather than derail a campaign, these were so obviously fakes that it’s worth asking what the point was.

Here’s one after England beat Holland.

Would an England manager celebrate drug use? Of course not.

The quality of the deepfake isn’t the best. The audio doesn’t quite sync with the video and it’s fairly easy to spot the joins.

So why did they bother?

For the LOLs, of course. It’s a chance to spoof and to post satire. In the 19th century, ‘Punch’ cornered the market with cartoons. Just lately, Photoshop satire has even been part of official political campaigns, such as this spoof of Rishi Sunak’s bad mortgage advice.

Of course, it’s tempting to disregard AI deepfakes like this.

But not so long ago there was a Gareth Southgate-related clip that really had me guessing. This was an interview with an Arsenal defender who had ruled himself out of being picked by the England manager.

This time it was believable and skilfully edited with cutaways in the style of a Sky Sports package. I’ll confess I went looking online for corroboration.

That video is this…

So, what does this mean?

The technology is there and getting better, but bad actors need to root what they do in reality for it to really cut through.

LONG READ: It was the memes wot won it and other lessons for communicators from Labour’s General Election win

‘A new dawn has broken, has it not?’ Tony Blair famously said as he addressed supporters at the Royal Festival Hall in London as the new Prime Minister.

The Labour operation had deliberately waited until the first golden rays of the morning sun had reached over the Thames to brighten the shot that framed the 1997 Labour landslide.

Their approach of message discipline and news management became the textbook for how to communicate.

Yet, everything changes, as Take That once pointed out. In 2007, The Sun sold 3.1 million copies and the News of the World shifted 3.5 million. Today, one doesn’t exist and the other no longer publishes circulation figures. 

In 2024, The Sun’s intervention to support Labour with a lame football-themed front page that called for a new manager was met with general indifference and a shrug. It was a bulletin from another era.

Eighty per cent of the UK population has a social media account where we graze our entertainment, and news will come and find us if it’s important enough. The very idea of cycling to the paper shop to find out what’s happened belongs in the 20th century.

My General Election from a different perspective

In 2024, free of political restriction, I volunteered to work on Labour’s campaign in Halesowen. It was professionally eye-opening.

My first General Election was that Blair Labour triumph of 1997, which I covered as a reporter for the Halesowen News. Labour fought and won that Black Country seat that fringes the Worcestershire countryside.

As a reporter, the phone would be ringing with calls from candidates most days in the six months beforehand. In the last six weeks, we would have a theme and invite the candidates to tell us what they’d do to handle crime, the NHS, jobs and other perennials. One week, we even got each candidate to submit an example of handwriting with their permission to a retired company director who was the UK head of a graphoanalyst society.

In 2024, the Halesowen News, no longer based in the town, featured the Labour candidate a handful of times. Print media was an afterthought to the campaign.

This was the meme election 

But if it wasn’t local media driving the debate, what was? I think I’ve got a meme that can answer that.

Memes are sharable pieces of content that can make an observation, crack a joke or make a point. Agree? Hit like. Disagree? Fall into the trap and start an argument that will boost the original post with the algorithm.

Both Labour and the Conservatives used memes as the sharp spear point of their election message. Activists were signed up to spread local-themed and national messages across their networks.

Politics has long moved on from 19th century beer-laced election festivals to hustings to newspapers to the mobile phone that you scroll through. Had Blair, Churchill or Attlee been campaigning today they would have been all across the meme.

The Conservatives had an app while Labour had a website with downloadable imagery. 

But for all the officially-shaped content there was also a blizzard of combative unofficial content that would never have got past the approval process. Reform had a downloadable profile picture that was all about spreading the branding.

The Sun boasted in 1992 that ‘it was The Sun wot won it.’ In 2024, if there was one thing more important than another maybe it was the meme election.

But…

This was also the anti-meme meme election 

Need a message? Here, have one. Then move on to the next thread. To counter that there’s the anti-meme meme. You’re making this point? Here’s a meme that pricks your balloon.

There was plenty of this in the meme wars that raged across the internet and in particular in community Facebook groups and Nextdoor.

Back in 2019, I ran some research that showed big ‘P’ politics did not sit well in community Facebook groups in the run-up to the General Election which Boris Johnson won for the Conservatives. However, small ‘p’ politics was often fine and was the angle from which to approach the national issue. The person complaining about not being able to get a GP appointment was the wedge to start talking about which side of the argument you’d favour.

As the campaign went on, it was clear that even more subtle ways to get past the admin gatekeepers were needed. In particular, I was impressed by the Stourbridge resident who offered the olive branch that this really was all about love and used the highly incendiary shot of Matt Hancock kissing an aide in breach of COVID regulations.

Nextdoor was particularly adept at getting its algorithm to throttle content that mentioned elections.

This was the AI election (sort of) 

In 1924, the Daily Mail printed the Zinoviev letter. This quoted an emissary from the newly-formed Soviet Union that spoke in support of the Labour Party which was knocking on the gates of Downing Street. It alarmed Middle England. It was a fake. But the public didn’t know this until after the election.

A century later, there was no AI-generated equivalent that pointed an accusing finger at a Labour leader poised to take power in the last few days. This doesn’t mean there wasn’t AI if you went looking for it.

The big warning that AI was going to flood our timelines with misinformation and disinformation didn’t land this time.

What we did see was a lot of ‘patriotic’ right wing AI art of Reform’s Nigel Farage, and more racist content that was also called out. It was clearly artificial. But both the Conservatives and Labour also created memes that showed opposition figures in unreal scenarios. Labour used Rees-Mogg’s face to show what it would look like to wake up next to him after five more years of a Conservative government.

Was the Rees-Mogg image made with AI or just Photoshop? I don’t know. It’s hard to tell. But that’s just it. It’s supposed to be hard to tell. It was definitely not real.

There were also fake TikTok videos of leading politicians playing and commentating on Minecraft. To be really effective in their manipulation, fakes have to carry a grain of truth. These couldn’t have pulled the wool over anyone’s eyes. So, harmless then? Yes, largely. But it does nothing for building up politics as a worthwhile and noble profession.

More worryingly, The Guardian pointed to one example of AI tools being used to manipulate audio recorded on a Ring camera. The footage captured Labour supporters calling to deliver a leaflet. The candidate who posted it alleged a racial slur. An analysis of the recording showed anomalies.

This was the TikTok election

Big noise was made about the role TikTok would play. The Guardian have written this excellent, balanced piece which puts the platform into context. Ten per cent of people get their news from TikTok, and that could mean a traditional broadcaster such as the BBC, a new entrant like ‘Oh, God What Now?’ or The News Agents, or a citizen journalist.

Aside from that, the parties themselves were active in creating content specifically for TikTok that looked and felt unlike video from other places.

This was the podcast approach election

I heard an episode of Radio 4’s ‘The Westminster Hour’ during the campaign. It was dreadful. Set piece lines to take deployed against each other by rival MPs not yet famous enough to have won their spurs.

The only Leaders’ debate that looked anything other than painful was Sky News’s, in front of an audience whose laughter stripped away the lacquer of pre-prepared answers. I cannot think that the set piece interview as it stands has any life left. It has been sanitised to death and buried with a green pharmacy cross on its grave.

Yet, the informal podcast approach taken by programmes such as Electoral Dysfunction, with Beth Rigby, Tory Ruth Davidson and Labour MP Jess Phillips, or the genre-defining The Rest is Politics, with Alastair Campbell and Rory Stewart, is far more engaging.

It was the clipped-up news election 

So, if print media is largely irrelevant, does this mean that journalism is dead? Of course it doesn’t. It just means that the news will find people in clips seen online by far more people than watched the original.

Keir Starmer’s stumble on Bangladeshi immigration cost his Party a big chunk of the Bangladeshi vote across Britain, for example. Even local journalism turned into sharable content.

And finally

If you think all this is just political communication and it won’t affect you as a communicator think again. Political campaigns, as I’ve said many times, are a petri dish for innovation. 

The memes played a role but so did other factors. I can focus on the digital element but the door knocking, data gathering and get the vote out operation was all part of it.

With a degree of imagination, I can see several of these approaches working across the public sector. History shows that new tools at the bow wave in an election often become firmly part of the toolkit.

BIG LIST: In 2024, there are now 14,106 marketing tools that use AI

The martech map has long been a good yardstick of the pace of change for AI.

The 2024 edition has been published at martechmap.com.

While it articulates visually the maze of tools that are out there it also works as an interactive map that checks and logs new tools by category.

Martech – short for marketing technology – lists applications and websites with added Artificial Intelligence.

Move your mouse across the map on the website and you can see a range of categories: PR, content marketing, display and programmatic advertising, mobile marketing, CRMs and tools for print.

You can also find out basic information as your mouse hovers as well as click through to the site.

So, there’s a list of almost 80 PR-specific tools that use AI. This includes mainstream platforms such as Cision, Vuelio and pr.com’s press release tool.

There are also more than 300 for video, including YouTube, but also tools that cover skills such as animation, voiceovers and text-to-video.

It’s as close to a search engine for AI tools as I’ve seen which is going to make experimentation easier.

It’s also important to note that the 14,106 figure represents 27.8 per cent growth year-on-year. Fewer than three per cent of vendors have withdrawn their services in the past 12 months.

There are also hundreds that work with OpenAI’s ChatGPT tool.

You can find out more about how to safely use generative AI tools within UK Government parameters here.

TRUST WARS: Yes, the public sector should be clear on how they use AI

When I was a kid I’m sure I was delivered a lecture on how a reputation was so hard to build and so easy to lose. 

Maybe it was for something pretty minor although – full disclosure – me and eight of my mates were suspended for a day in the VI form for drinking alcohol on Cannock Chase while we were running an orienteering checkpoint. 

I told my kids this a few years ago and they were both – again, full disclosure – ASTOUNDED. 

Reputation and trust also apply to public sector institutions. In the UK, trust in Government is scarce, with the Edelman Trust Barometer at a 12-year low: just 31 per cent of people in the UK trust Government institutions.

Trust is also easy to lose and hard to build. Look at Kate Middleton’s Photoshopped Mother’s Day picture.

Never mind misleading Photoshop, AI can demolish trust in an institution overnight.

What made me reflect on the issue of identifying AI content was the Scottish Government’s bold announcement that all public sector bodies north of the border will be required to register their use of AI for projects. Importantly, this logs projects rather than all AI use. At the moment the register is voluntary, but it will be the first in the UK to become mandatory.

What’s on the Scottish AI Register now?

A quick look at the AI Register shows just three projects. Included in the list are a tool that shows how vulnerable children may be to exploitation and a virtual concierge assistant to help choose the right tool for blind or deaf people to take part in civic society.

The benefit of being transparent

Back in the day, Tom Watson MP was a junior Minister responsible for the Civil Service (full disclosure: Tom was a very approachable contact when I was assistant chief reporter in the Express & Star’s Sandwell office). 

One weekend, Tom crowdsourced what should be in the first draft of the civil service social media guidance. This included a suggestion to declare your connection to the Civil Service as you used social media in connection with a civil service matter. I’ve always thought this broad approach was a good idea.

If you’re declaring how you are using AI this can only build trust. There is no ‘gotcha’ moment but there may be a debate about the methods you use. But if you can’t justify it then should you even be using it? 

Why setting out how you use AI is a good idea

For me, yes a comms team should set out how AI is used. 

Indeed, Meta already requires that content created with AI is labelled. So, images and video created with AI tools need to be identified. But so too does text that’s been shaped with a tool like ChatGPT and posted to a Meta platform such as Facebook, WhatsApp, Threads or Instagram.

Not only this, but in the UK uploading deepfakes without the subject’s consent is already a crime. I cannot sensibly think of a time when a public sector comms team would create such a deepfake without the subject’s permission. However, the state of political campaigning in America is another thing entirely.

I’d be interested to hear what others think. 

SCHEDULE QUESTIONS: Questions to ask if you’re thinking about using a social media management tool

A couple of months back I was invited to become a partner of a social media management tool. After brief reflection I declined in part because I didn’t want to be hobbled in the independent advice I give.

Are social media management tools a good idea? For me, maybe. But not always. There are some tough questions to ask before you sit through the sales pitch and commit on the dotted line.

Q: Will posting from your social media management tool inhibit my reach?

At a time when algorithms are making life hard it’s important to squeeze every ounce of advantage. Back in 2011, Facebook confirmed that third party tools would harm social media reach. However, the picture today is far less clear. You can find posts from several social media management tools assuring people that’s no longer the case. But there is a real lack of independent academic research in this area. A trawl of Google Scholar, which searches academic papers, draws a blank.

Q: Will the social media management tool have access to all of the new tools and functionality the platform develops?

Famously, Facebook as a platform is in permanent Beta. This is a geek way of saying that it is never ever finished. It will always evolve. New tools are constantly emerging. Some are tested in a handful of key territories, refined, rolled out or dropped. 

There’s no realistic way that every tool is passported through to every social media management platform. After a successful trial, Meta, X, Google or whoever are likely to attach a price tag to the latest must-have tool. Meta’s own Business Manager doesn’t have every available tool, so why would third parties?

This is a key question because platforms want new tools to work. Therefore, using new tools is often likely to be rewarded. Posting natively using them is likely to have the advantage.

Q: Will the social media management tool monitor who is posting from the account? 

This is something the tool can usually do well. An integration with the platform means that certain people can post and some can’t. When those people post they leave a trail behind them. You can see who did it. This can be useful. Business Manager on Meta can also grant and restrict access to Meta tools centrally. LinkedIn has a similar approach. It may be worth comparing and contrasting what the social media management tool can do against what is freely available. Is this valuable to you?

Q: Does the social media management tool have AI skills?

This is a new front in the battle. Many platforms say they use AI to help you write content. Whether or not using AI to create content is a good idea is one for you to reflect on. The research would urge you not to blindly rely on it. A clumsily-written post can undo plenty of hard work. However, using AI to create content can make the most of limited time so long as checks and balances are in place. Dilemma time.

If you are in the public sector, the issue of trust and AI is very much one you need to be aware of. The Scottish Government has taken the lead by requiring every public sector organisation to register how it is using AI. So, north of the border, for transparency, if you are using AI tools on social media you need to declare it.

Q: Will the social media management tool evaluate everything?

Often, social media management tools are really good at producing a report on demand with pie charts, numbers and other useful things. There’s no doubt these can look visually attractive. But are they the complete answer? For me, they’re part of the journey. Clicks, reach and numbers are useful as a broad metric. They can help you refine and learn from content. But will it tell you how many clicked on the job ad? How many ended up applying and how many are still engaged in the process? No. You need HR for that. Their data is the acid test as to whether or not your recruitment campaign is a success. There is no substitute for this.

In football terms, looking at individual metric data alone will tell you how far the player ran and how many passes were completed. It won’t tell you the final score. That’s the bigger picture.

Q: Will the social media management tool encourage corner cutting?

One thing I’ve learned from the last few years closely researching algorithms and effectiveness is that social media is changing and evolving. All social media algorithms penalise links. Why? Because the longer you spend on the platform the more attractive you are to advertisers. Does ITV tell people to go to the BBC mid-way through a peaktime drama? The heck it does. 

The most effective content is created bespoke for a specific platform and tells the story on the platform. On Facebook, that can mean posting a link in the comments; on X, formerly Twitter, building a thread of posts to tell the whole story.

With that in mind, I’ve lost count of the times people guiltily admit to cutting and pasting the same content everywhere just to get the thing out of the door. We’ve all done it. I have in the past.

Posting the same content is also a bad idea. The most I’ve seen is 17 identical posts in a 24-hour period. Reader, the algorithm did not reward them. Fewer crafted posts are more effective than repetition.  

Q: Can you use the social media management tool on a mobile phone?

If you’re out and about and some news breaks can you use your mobile to post an update? 

Q: Can you hit pause on the social media management tool in an emergency?

Or if a key piece of news happens, can you quickly pause what you are about to send?     

Q:  How much will the social media management tool cost? 

This is a big one.

How much? 

If the questions you ask lead to answers you are prepared to pay for then knock yourself out. 

But balance the figure you have against the free tools the platforms already provide.   

Good luck.