Making Sure Quality Monitoring Evaluation Forms Do Their Job

Verint Team July 12, 2013

Recently I participated in a virtual panel discussion with six other vendor representatives addressing best practices in quality assurance and analytics.

In prior years, this session was organized as sixty ideas in sixty minutes: each vendor would offer one idea or best practice in under a minute, in rapid-fire succession from panel member to panel member.

This year the format changed and each vendor was permitted to show and talk through five to seven slides. I thought the session was fascinating for two reasons.

First, while each presenter was supposed to provide ideas and best practices, many provided a miniaturized version of their company’s marketing messaging. Presumably, they feel that best practices lie wholly within their software products.

Second, almost nobody spoke about the critical role the evaluation form itself plays in best practices. In truth, a really effective evaluation form is a best practice.

One of the presenters commented on constructing evaluation questions to help ensure tight calibration among multiple evaluators. The justification is that scorers must be closely aligned, so that multiple assessors score the same interaction in nearly the same way.

While this is an admirable goal, it occurred to me that many organizations seem to be pursuing ease of calibration and low assessment effort at the expense of developing truly effective agents. And with some vendors suggesting that agent quality assessments can be completely automated using speech analytics, the pursuit of lower-cost evaluations threatens to undermine what quality monitoring is supposed to be all about: employee skill and knowledge development.

I’ll venture a guess that most quality assessment forms are organized around the structure of the interaction with a focus on:

  1. the beginning – involving the opening, greetings and identification;
  2. the middle – involving problem identification and information gathering; and,
  3. the end – involving information and/or solution presentation, checks and confirmation, and closing.

The focus on structure and compliance with “interaction requirements” lends itself to simple yes/no checkboxes, which makes the evaluation process faster and the calibration process simpler.
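To make that concrete, here is a minimal sketch of what such a structure-focused form looks like as data. Everything here is illustrative (the section names and items are invented, not drawn from any particular QM product); the point is that when every item is a yes/no checkbox, scoring reduces to counting:

```python
# Illustrative sketch of a structure-focused evaluation form:
# every item is a yes/no checkbox, so scoring reduces to counting.

CHECKLIST_FORM = {
    "beginning": [
        "Used approved greeting",
        "Verified caller identity",
    ],
    "middle": [
        "Restated the customer's problem",
        "Gathered required account information",
    ],
    "end": [
        "Presented a solution or next step",
        "Confirmed customer agreement",
        "Used approved closing",
    ],
}

def score(answers: dict[str, list[bool]]) -> float:
    """Percentage of checklist items marked 'yes' across all sections."""
    checks = [c for section in answers.values() for c in section]
    return 100.0 * sum(checks) / len(checks)

# Two evaluators ticking the same boxes will land on the same score,
# which is exactly why this style of form calibrates so easily.
print(round(score({
    "beginning": [True, True],
    "middle": [True, False],
    "end": [True, True, True],
}), 1))  # 85.7
```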

That said, I wish it were true that adhering to the finer points of interaction structure produced satisfied customers. Once upon a time, having your call answered relatively promptly and dealing with a polite, scripted agent was considered good service.

Not anymore.

What customers react to is what can be called “communication competency.” Contact centers often refer to the same concepts as “soft skills”: active listening, emotional alignment, interpretive understanding, adaptability, and linguistic and/or writing skills.

I think we can all agree that when we encounter a truly great interaction, one that perhaps starts very badly and ends very positively, the reversal is often achieved because the agent used outstanding soft skills layered over excellent hard skills such as product or service knowledge and screen navigation. So, clearly, soft skills are important.
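By contrast, a soft-skill item resists a checkbox. Here is a hypothetical sketch of a rubric-style item scored on a behaviorally anchored 1–5 scale; the dimensions and anchor wordings are invented for illustration, not a published standard:

```python
# Hypothetical soft-skill rubric: each dimension is rated 1-5 against
# behavioral anchors rather than answered yes/no. The anchors below
# are illustrative examples only.

SOFT_SKILL_RUBRIC = {
    "active_listening": {
        1: "Talked over the customer; missed stated needs",
        3: "Acknowledged the issue; restated some key points",
        5: "Paraphrased the issue accurately and confirmed understanding",
    },
    "emotional_alignment": {
        1: "Tone flat or mismatched to the customer's state",
        3: "Tone generally appropriate to the customer's state",
        5: "Recognized frustration early and visibly defused it",
    },
}

def rubric_score(ratings: dict[str, int]) -> float:
    """Average rating across soft-skill dimensions, on the 1-5 scale."""
    return sum(ratings.values()) / len(ratings)

print(rubric_score({"active_listening": 4, "emotional_alignment": 5}))  # 4.5
```

Notice that the anchors are judgments about behavior, not observable yes/no facts; that is exactly where the calibration difficulty comes from.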

But most centers shy away from attempting to assess soft skills, because doing so is inherently judgmental and makes scorer calibration very difficult.

To which I say: Yup, it’s judgmental.

We get judged all the time in many areas of life. At times these judgments may be in error or difficult to justify, and they may result in disputes.

Here’s a suggestion: embrace the dispute as an additional learning opportunity, one in which the employee and the evaluator go through the evaluation together.

It’s also time to recognize that “communication competency” is a key skill to develop in customer-facing employees across the enterprise. Figuring out who is doing well and who seems to be struggling in relating to customers on an emotional basis is fundamental to achieving operational excellence. My advice: Stop avoiding judgment-type assessments simply because you don’t want to deal with disputes.

And yes, it’s more difficult to get tight scoring among multiple scorers. Perhaps it should be more difficult, and perhaps it should require more conversation among the scorers; that conversation is what enables good soft-skill judgment calls.
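One way to keep those conversations grounded is to measure agreement directly rather than argue about it. Cohen’s kappa is a standard chance-corrected agreement statistic; the sketch below (with invented ratings for two hypothetical scorers) shows how a team might track whether calibration sessions are actually tightening soft-skill scoring over time:

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Chance-corrected agreement between two raters:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    # Proportion of calls on which the two raters gave the same rating.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's rating frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative data: two evaluators rating the same ten calls on a
# 1-5 soft-skill scale.
scorer_1 = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
scorer_2 = [3, 4, 3, 5, 4, 3, 3, 2, 5, 4]
print(round(cohens_kappa(scorer_1, scorer_2), 2))  # 0.59
```

A rising kappa across successive calibration sessions is evidence that the extra conversation is working.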