“I think we need to change how we’re calculating our customer satisfaction scores in the call center, because if our scores were really 90%, we wouldn’t be getting such poor reviews in third-party reports.”
I put my head in my hands as the executive at the front of the room continued his presentation to leadership.
“Don’t get me wrong, voice calls still get the highest scores of all our contact channels, but I think we need to review how we calculate the scores across them all.”
This exec was reaffirming his belief that high contact center CSAT scores should translate to real customer satisfaction, and that differences in scores across contact channels denote only a customer’s preference for a given channel.
My experiences as both a consumer and a customer support executive scream just how wrong and short-sighted this sentiment is.
Managing scores IS NOT managing satisfaction or loyalty.
As a consumer, I can think back on several times I’ve had a great call with an agent at a company and then received (via either phone or a follow-up email) an invitation to respond to a “brief” survey. In almost every instance, my “high customer satisfaction” was limited very specifically to the representative I spoke to. At best, I found myself neutral toward the company (great, glad that’s done; now I can get on with my day), and at worst I found myself irritated with them.
Why did I have to call in the first place?
Why couldn’t I have done that on the web or in the IVR?
Why wasn’t that information easy to find elsewhere without having to call in and wait on hold?
Yes, my high scores were focused on the questions that asked about the representative, but my answers to the questions about overall satisfaction with the company (as well as the very popular NPS question, “How likely are you to recommend us?”) were still higher than I truly felt, simply because I was trying to reward a great representative.
The company and the voice/call channel got a high survey score, but that wasn’t truly a measure of my satisfaction with them.
Stop listening to scores and start listening to customers.
Your contact center scores have their place. They are good measuring sticks for how your representatives are performing individually and great for comparing them to their peers, but don’t expect them to move the needle on your real customer satisfaction and loyalty. To do that, you’ll need to listen to what your customers are saying and anticipate their needs.
Being good in the support channel a customer actually wants is better than being great in a support channel they just settled for.
The other mistake by that CS exec was his strategic decision NOT to focus on the channels that weren’t achieving the same 90% satisfaction scores.
Just as that 90% score doesn’t automatically translate into real customer satisfaction, it isn’t automatically “better” than the 80% or even 70% you may be achieving elsewhere, including in self-help. The further removed a survey is from reflecting on a good representative, the more “real” its score and feedback will be.
Don’t try to measure a digital or self-help channel against that call center agent. Rather, focus on the feedback, usage, and success metrics and build an improvement process that shows your customers you’re listening.
Ten to fifteen years ago, the push for automation and self-help centered on cost savings. Today, it’s as much (or more) about meeting customer expectations and innovating intelligently.
In a world where cars drive themselves and Alexa turns on your lights, making a customer wade through touch-tone options just to wait on hold, even for a highly trained phone agent, doesn’t cut it. Implement a strategy centered on listening to and understanding your customers, and true customer satisfaction will be your reward.