
When social media was new, it was like a blank page with no rules on it, and gradually we wrote the guidebook.
It feels like the same thing is happening with AI as we start to get to grips with it.
Someone asked a really on-the-money question at an LGComms session I delivered on AI in the public sector. On reflection, it needs tackling at length.
If I remember it right, the question was this: what do we do when ChatGPT throws out incorrect information about us?
It’s a really good question.
This week, BBC research questioned the accuracy of news when it is run through AI assistants like Siri, Bixby or Cortana.
Basically, an AI assistant responds to your voice or to text to carry out commands. Around a third of people in the UK were using these tools in early 2025, and the figure is likely to grow.
Half of AI answers about the news have ‘significant issues’
In the study, the BBC found that:
- 51 per cent of all AI answers to questions about the news were judged to have significant issues of some form.
- 19 per cent of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.
- 13 per cent of the quotes sourced from BBC articles were either altered or didn’t actually exist in that article.
People may think they can trust what they are hearing from AI assistants but they clearly can’t.
A new metric is emerging for AI tools
Once, readership and ABC figures showed how many people were reading newspapers and magazines. How many column inches of coverage you produced was a significant measurement.
Then came the internet, and where you sat in the search rankings was coveted. The SEO industry was born.
Now it seems a new measurement is being born, one that maps how you appear in AI tools. ‘Share of Model’ is the term being used to show whether you are cutting through to AI tools like ChatGPT or Copilot.
This path is leading us to a fundamental shift.
It looks like we could be at the start of a new era in communication. We’ve had the print and the electronic eras, and the tectonic plates are now shifting from the digital to the generative AI (GenAI) era.
– Tom Roach, Marketing Week, 2024.
As the wonderfully titled Brilliant Noise agency observed in a recent newsletter, brands that dominate AI-generated answers could win market share simply by being the name the AI remembers.
That’s such a fascinating point.
But it leads us back to the question about how we deal with inaccurate content generated by AI.
Challenging AI content produced by news sites
When I was asked at the LGComms session how to deal with news sites using AI tools to publish inaccurate information, it struck me as tricky.
On reflection, this is probably as easy as it’s ever going to get.
Under old school rules, people challenged inaccuracies in the paper by ringing up the journalist. The Editors’ Code of Practice meant journalists were obliged to give house room to your objections.
Today, Reach plc and others are using AI tools to generate stories. Or rather ‘content’, as perhaps we should call it. How easy would it be to contact the out-of-town business park or back bedroom in Guildford where these content producers now sit? I’d say it would be pretty tricky, and I’m not sure those AI-wielding uber journos would give a stuff about your objection anyway.
This leads to the need to challenge inaccuracies in public using social channels. In other words, calling out that publication directly.
That’s the easy bit.
Sometimes, people are hesitant to do this for fear it would jeopardise a relationship. That’s a fair point. But the question that needs to be asked is this: what relationship do you now have?
Quite how you challenge inaccurate content produced by one man’s Siri or one woman’s Bixby I have no idea.
Welcome to 2025: there are a lot of blank spaces that need filling.
How are you filling those gaps?