Regardless of whether you’re tempted by Donald Trump’s ambition to “Make America Great Again,” you are probably hoping to make contact center metrics great again!
Yes, they represent valuable benchmarking devices, but metrics are not meant to be blanket, empty “scores.” They are designed to inform performance against crucial contact center, customer experience and business objectives. Their value comes entirely from their relevance in context.
In support of the campaign to make contact center metrics great again, we’re proud to reveal six metrics compatible with today’s aim of omni-channel, effortless, customer-centric experiences. Some replace existing, tired contact center “metrics.” Some work in tandem with those metrics. Some bundle numerous key performance indicators together to create “super metrics.”
Applying them will help you shift from how your contact center is “scoring” to how it is actually performing.
1) True Resolution Score
Thought leaders routinely debate the right approach to first contact resolution. Who decides whether a problem has been resolved? What constitutes the “first contact”?
Since resolution is so fundamentally important to the customer support experience, why not introduce a metric that not only mitigates the ambiguity of “FCR” but adds elements that truly define whether a contact center is solving problems?
Why not introduce true resolution score?
True resolution score is calculated from a few key factors (a rough scoring sketch follows this list):
• Time to resolution: how long did it take from the moment the customer identified the problem to the moment the customer accepted a “resolution”?
• Hoops to resolution: how many channels and/or agents did a customer span (either concurrently or consecutively) before accepting a “resolution”?
• Chapters to resolution: how many disparate contacts (completely separate calls/interactions) were required before the customer accepted a “resolution”?
• Legitimacy of resolution: did the customer call back regarding a similar issue (if so, how long after receiving the initial “resolution”)?
• Resonance of resolution: how likely was the customer to return to the specific channel/agent that “solved” his problem?
• Value of resolution: did the customer do future business with the brand (where applicable)?
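There is no single, standard formula for blending these factors. As a purely illustrative sketch, a contact center could normalize each factor to a 0-1 scale and weight it according to its own priorities; the weights, field names and normalization caps below are assumptions, not a prescribed calculation.

```python
# Hypothetical sketch: blending the factors above into a 0-100 "true resolution score".
# All weights, field names and normalization caps are illustrative assumptions.

def true_resolution_score(case, weights=None):
    weights = weights or {
        "time": 0.25, "hoops": 0.15, "chapters": 0.15,
        "legitimacy": 0.20, "resonance": 0.15, "value": 0.10,
    }
    # Normalize each factor so 1.0 is ideal and 0.0 is worst (caps are assumptions).
    time_score = max(0.0, 1 - case["hours_to_resolution"] / 48)        # 48-hour cap
    hoops_score = max(0.0, 1 - (case["agents_and_channels"] - 1) / 4)  # 5+ hand-offs = 0
    chapters_score = max(0.0, 1 - (case["separate_contacts"] - 1) / 3)
    legitimacy_score = 0.0 if case["called_back_same_issue"] else 1.0
    resonance_score = case["likelihood_to_return"]   # already 0-1, e.g. from a survey
    value_score = 1.0 if case["did_future_business"] else 0.0

    components = {
        "time": time_score, "hoops": hoops_score, "chapters": chapters_score,
        "legitimacy": legitimacy_score, "resonance": resonance_score, "value": value_score,
    }
    return 100 * sum(weights[k] * components[k] for k in weights)

# Example: resolved in 6 hours, 2 hand-offs, 1 contact, no callback, likely to return, repurchased.
print(round(true_resolution_score({
    "hours_to_resolution": 6, "agents_and_channels": 2, "separate_contacts": 1,
    "called_back_same_issue": False, "likelihood_to_return": 0.8, "did_future_business": True,
})))
```

The specific numbers matter less than the principle: every ingredient of a “true” resolution, not just a binary FCR flag, shows up in the score.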
2) Social Promoter Score
In addition to some methodological issues, critics of Net Promoter Score often point to two weaknesses:
• It presumes customer advocacy is a universal objective. That is not always true – there are numerous scenarios in which a customer, no matter how satisfied, would have no reason or desire to actually promote the product to friends.
• It is a theoretical measure. Instead of asking whether customers are recommending a brand, it asks whether they would be likely to do so.
The former is not so much an “objection” as a concern that some business leaders lack good judgment. Net Promoter Score obviously carries less significance in some contexts, but there is no reason to assume business leaders cannot make their own, accurate determinations about its relevance.
The latter is, however, a point worth addressing. More importantly, it is one we can address.
Thanks to the rise of social media, we have a way of measuring actual customer advocacy. By tracking whether customers are actually recommending (or condemning) the brand on social networks, we gain vivid insight into how they behave as advocates.
No, the measure is not exhaustive. Many customers possess private social media accounts, and many customer advocates promote the brand through other means. Neither set of customers can be as readily tracked.
No, the measure does not eliminate the value of Net Promoter Score. While NPS may be theoretical when it comes to advocacy, it often does serve as a practical barometer of customer satisfaction. It is probably safe to conclude, for instance, that a customer who declares himself highly likely to recommend the brand is satisfied with his experience.
It does, however, offer another dimension to existing performance measurement. It also provides a sanity check, of sorts, on the “theoretical” NPS data. Given the ubiquity of digital media, most companies inspiring customer advocacy should see an abundance of positive posts on social.
If you are not seeing many, it is probably safe to conclude you can better engage, satisfy and motivate your base of supporters.
If customer advocacy represents a notable objective, social promoter score represents a valuable way to assess performance.
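To make the idea concrete, here is a minimal sketch of how the score might be tallied once a social listening tool has classified tracked brand mentions as recommendations, condemnations or neutral posts. The labels and data shape are assumptions for illustration, not part of any vendor’s actual output.

```python
# Hypothetical sketch: a "social promoter score" from tracked brand mentions.
# Assumes mentions have already been classified (e.g., by a social listening tool)
# as "recommend", "condemn" or "neutral"; the labels and data shape are assumptions.

def social_promoter_score(mentions):
    if not mentions:
        return 0.0
    promoters = sum(1 for m in mentions if m == "recommend")
    detractors = sum(1 for m in mentions if m == "condemn")
    # Net percentage, in the spirit of NPS's promoters-minus-detractors.
    return 100 * (promoters - detractors) / len(mentions)

print(social_promoter_score(["recommend", "recommend", "neutral", "condemn"]))  # 25.0
```

Unlike survey-based NPS, the inputs here are actual public behavior, which is exactly the dimension the traditional score misses.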
3) Self-Service Efficacy Rate
Cognizant that good self-service platforms are easily accessible and actually solve problems, many businesses have begun to measure self-service utilization and abandonment rates. The former reflects whether customers are actually taking advantage of the self-service platform, while the latter speaks to the platform’s success in solving problems.
We can do better.
Why not blend these measures – and some other factors – into a single “self-service efficacy rate?”
This all-encompassing metric should account for certain elements of a customer-centric approach to self-service (one way to combine them is sketched after the list):
• Utilization rate – how many customers are making use of the self-service platform?
• Self-service as a “first choice” rate – how many customers specifically opted to use the self-service platform before attempting to contact a live agent?
• Abandonment rate – how many customers “gave up” in their quest to self-serve?
• Live agent reduction rate – to what extent did the number of costly live agent interactions decrease in conjunction with the self-service option?
• Escalation rate – how many customers actually chose to escalate to a live agent (since escalated issues are presumably too important to simply be ignored, this tells us whether self-service can solve real problems)?
• Live agent accessibility rate – how much time did it take to transition from a self-service “dead end” to an agent who knows the customer and the issue (this speaks to the integration between channels)?
• Time to resolution – how long did it take the customer to achieve his goal (either within the self-service platform or through escalation)?
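There is no official recipe for combining these components. The sketch below assumes an equally weighted average of a few of the rates, with hypothetical counts and field names; a real contact center would choose its own components and weights.

```python
# Hypothetical sketch: an equally weighted "self-service efficacy rate" built from
# a few of the component rates above. Field names and weighting are assumptions.

def self_service_efficacy_rate(stats):
    utilization = stats["self_service_sessions"] / stats["total_customers_with_issues"]
    first_choice = stats["chose_self_service_first"] / stats["self_service_sessions"]
    completion = 1 - stats["abandoned_sessions"] / stats["self_service_sessions"]
    containment = 1 - stats["escalated_to_agent"] / stats["self_service_sessions"]
    rates = [utilization, first_choice, completion, containment]
    return 100 * sum(rates) / len(rates)

print(round(self_service_efficacy_rate({
    "total_customers_with_issues": 1000,
    "self_service_sessions": 600,
    "chose_self_service_first": 480,
    "abandoned_sessions": 90,
    "escalated_to_agent": 120,
}), 1))
```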
4) Process Personalization Rate
Personalization is a central tenet of today’s customer experience. How do we measure that personalization?
Since expectations vary greatly among customers and industries, we cannot establish a standardized metric for “personal connection.” There is no universal score that will determine whether a business used a customer’s name or asked about the customer’s family with the desired frequency.
We can, however, establish a standardized metric for the personalization of the process. This metric reflects a form of personalization that is essential in every customer engagement scenario, and it accounts for three time-based factors (sketched in rough form below):
• Time to identification: How long does it take the agent (or relevant virtual/automated system) to recognize the customer (name, basic profile details)?
• Time to qualification: How long does it take the agent to determine the specific product (or account, etc) about which the customer is engaging?
• Time to action: How long does it take the agent to determine the specific issue or information the customer needs addressed?
Businesses that perform optimally in these areas are demonstrating recognition of – and thus appreciation for – their customers. In most cases, they will also achieve resolution with minimal customer effort: another top priority for today’s contact centers.
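One hedged way to turn the three timings into a single rate is to count the share of interactions in which identification, qualification and action all happen within target thresholds. The thresholds and field names below are illustrative assumptions, not standards.

```python
# Hypothetical sketch: "process personalization rate" as the share of interactions in
# which identification, qualification and action all happen within target times.
# The 30/60/120-second thresholds are illustrative assumptions.

TARGETS = {"seconds_to_identify": 30, "seconds_to_qualify": 60, "seconds_to_action": 120}

def process_personalization_rate(interactions):
    personalized = sum(
        1 for i in interactions
        if all(i[step] <= limit for step, limit in TARGETS.items())
    )
    return 100 * personalized / len(interactions)

interactions = [
    {"seconds_to_identify": 12, "seconds_to_qualify": 40, "seconds_to_action": 90},
    {"seconds_to_identify": 45, "seconds_to_qualify": 80, "seconds_to_action": 200},
]
print(process_personalization_rate(interactions))  # 50.0
```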
5) Proactive Performance Score
Quickly and appropriately responding to customer issues is great.
Addressing them before the customer attempts to make contact – or, better yet, before the customer realizes he needs to make contact – is even better.
Indeed, proactive support is a major goal in the age of customer centricity.
A metric assessing your success in providing proactive care should, accordingly, be a major part of your performance management strategy.
One opportunity is to use a “proactive performance score.” The goal is to determine the extent to which issues could have been solved (or were solved) before customers introduced them to the contact center.
Possible measurement approaches include the percentage of inbound interactions concerning issues that the business could have pre-emptively addressed, the percentage of customers who called about issues after receiving a proactive solution/notice, or the time between the business learning of a problem and reporting that problem to customers.
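As a rough sketch of the first approach, a business could flag each inbound contact concerning an issue it already knew about (and could have pre-empted) and report the inverse share as its score. The flag and field name are assumptions for illustration.

```python
# Hypothetical sketch: score the first approach above by flagging inbound contacts
# about issues the business could have pre-emptively addressed. Field name is an assumption.

def proactive_performance_score(contacts):
    preventable = sum(1 for c in contacts if c["could_have_been_preempted"])
    # Higher score = fewer contacts that proactive care should have prevented.
    return 100 * (1 - preventable / len(contacts))

contacts = [
    {"could_have_been_preempted": True},
    {"could_have_been_preempted": False},
    {"could_have_been_preempted": False},
    {"could_have_been_preempted": False},
]
print(proactive_performance_score(contacts))  # 75.0
```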
6) Agent Engagement Assessment
Customer satisfaction score is often seen as a gold standard for contact center performance. Indeed, one would be hard-pressed to label a contact center successful if it is not delivering an ample degree of customer satisfaction.
The downside to CSAT is that it measures an outcome rather than a driver. It tells you whether customers are being satisfied, not how different aspects of the contact center are contributing to that satisfaction.
As we work to make metrics great again, we need to focus on the key determinants of customer satisfaction.
As adages like “happy agents = happy customers” suggest, employees represent one such determinant. If they are happy, well-trained and customer-centric, they are more likely to satisfy customers.
Measuring agent engagement is, therefore, an integral part of contact center performance strategy.
An agent engagement assessment score will account for numerous factors that reflect an agent’s competency in the role, concern for the customer and connection to the business. Items to calculate include the following (a rough blending sketch follows the list):
• Accuracy rate: how often does the agent present incorrect, irrelevant or suboptimal information (measured via analytics and/or customer complaints and callbacks)?
• Adherence to procedure: to what extent does the agent follow the established script or protocol?
• Microphone metric: how often does the agent share customer feedback with other agents, supervisors and/or the business at large?
• Voice of the agent: what is the frequency – and tone – with which the agent communicates his own feedback?
• Retention rate: how many agents stay with the business over a given period of time?
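As with the other composite metrics, there is no canonical formula. A minimal sketch, assuming each item has already been normalized to a 0-1 scale and weighted to the business’s own priorities, might look like this; every weight and field name is an assumption.

```python
# Hypothetical sketch: blending the items above into a 0-100 agent engagement score.
# Each input is assumed to be pre-normalized to 0-1; weights and names are illustrative.

WEIGHTS = {
    "accuracy": 0.30,    # 1 - error rate, from analytics/complaints/callbacks
    "adherence": 0.25,   # share of interactions following script/protocol
    "microphone": 0.15,  # how often customer feedback is shared, scaled 0-1
    "voice": 0.10,       # frequency/tone of the agent's own feedback, scaled 0-1
    "retention": 0.20,   # team-level retention over the period
}

def agent_engagement_score(components):
    return 100 * sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

print(round(agent_engagement_score({
    "accuracy": 0.95, "adherence": 0.90, "microphone": 0.6,
    "voice": 0.7, "retention": 0.85,
})))
```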