How to ensure AI doesn’t sabotage your customer experience in 2025

How to ensure AI doesn’t sabotage your customer experience in 2025

By Paul Rilstone (pictured), Vice President Australia and New Zealand, Kore.ai

 

Are you pondering the benefits of implementing AI-powered virtual agents in your business but worried about the possibility of them exhibiting bias or  ‘hallucinating’ when they’re on the job and doing damage to your organisation in the process?

If you answered in the affirmative, you’re far from alone. It’s a reasonable concern for many businesses, particularly those which supply complex products and services.

The term ‘AI hallucinations’ is used to refer to the inaccurate, misleading, inappropriate responses AI models can generate on occasion.

When AI gets it wrong

There are a number of reasons why this phenomenon occurs. Chief among them is limitations in the data used to train the AI model. In the absence of the right information, the model may opt to ‘fill in the gaps’ with irrelevant, inaccurate or inappropriate content, perhaps as a consequence of drawing on unreliable source material.

An example of this that’s already passed into the annals is a Microsoft computer generated travel article on Ottawa, Canada, which listed the city’s food bank as a ‘cannot miss’ tourist attraction, adjuring visitors to head there with an empty stomach!

AI models that have been over-optimised for coherency can also be problematic.  Authoritative and ultra-convincing they may sound but it doesn’t preclude them from being just plain wrong.

And then there’s the issue of user prompt complexity. Have someone ask an AI model a question in convoluted, obscure or figurative language and there’s an increased likelihood it will generate a response that’s correspondingly unclear.

Lastly, the absence of real time generation and verification capabilities can be problematic in some settings. Like The Wind in the Willows author Kenneth Grahame’s ‘clever men at Oxford [who] know all there is to be knowed’ an AI model may be in possession of an extraordinary plethora of facts and figures. But if its training was completed six months ago, it won’t have the capacity to reference or validate something that happened a day or two earlier.

Protecting your customer experience

In the face of these risks, it’s understandable some businesses may hold concerns about introducing AI-powered models into their contact centres or other customer facing settings.

In recent times, the former has become the beating heart of every organisation: the first, and often the only, point of contact for customers. The experiences they have when they call, email or message with queries, concerns or complaints now shape their perceptions of the organisation, for better or worse.

Poorly trained virtual agents can jeopardise a positive brand image painstakingly built up over time and negate any efficiencies and cost savings the adoption of AI technology may have promised to deliver.

Taking a smarter approach to AI

That’s why it pays to partner with a vendor that specialises in the development of safe, secure and ‘empathetic’ virtual agents; one that can ameliorate the risk to your organisation hallucinatory AI models can potentially pose.

With the right AI engine in place, you’ll be able to ‘teach’ your virtual agents to steer conversations away from areas in which they’re poorly trained. You’ll also have the ability to put high tech guardrails in place to prevent their interactions with customers heading into controversial or toxic territory.

And, just as human agents become more valuable over time courtesy of the experience and insights they acquire on the job, you’ll be able to capitalise on your virtual agents’ learnings. Keep training and retraining them regularly and they’ll be able to deal with an ever-increasing array of queries and issues.

Choose an AI engine with latest generation security features and you’ll also be able to detect malicious prompt injections from hackers seeking to ‘jailbreak’ the AI model used to train your agents by forcing the latter to hallucinate sensitive business data.

Harnessing the power of AI to drive your business forward

Smartly deployed, AI can generate exponential productivity and efficiency improvements across the enterprise in general and in the contact centre in particular – even as it elevates customer experience and satisfaction.

The right technology platform and vendor can help you mitigate the risk posed by hallucinatory AI models and enable you to deploy virtual agents that provide the same stand-out customer experience as their human counterparts. If turning this transformative technology into an asset for your organisation is a priority in 2025, it’s worth investing time and effort into doing it well.