
Evolving Your Customer Experience Survey Strategy

This is part 2 of a 2-part blog discussing the ways organizations can measure customer experience and satisfaction. In Part 1, we discussed the use of NPS as the default metric to measure customer experience and why its usage may do more harm than good when it isn’t used in conjunction with other measures. If you missed part 1, you can catch up here.

Now, onto part 2 where we look at alternative methods companies should consider when it comes to measuring customer experience.

Alternative #1: Customer Satisfaction Surveys (CSAT)

The flexibility afforded by CSAT surveys is both their biggest strength and their biggest weakness. We can create questions that gauge satisfaction on a scale. For example, "On a scale of 1 to 5, how satisfied are you with the ability of our agent to solve your problem today?" Closed-ended questions that use a fixed scale are easier to trend and track but provide no insight into the drivers behind ratings.

Alternatively, we can ask open-ended questions that don’t limit a consumer’s response. For example, "How can we improve your experience with the company?" The challenge here is that open-ended questions are harder to quantify and understand.

The good news is that emerging AI tools that gauge sentiment and identify trends in open-ended answers are making these questions easier to turn into actionable insights, while ensuring we don’t box customers into closed-ended questions based on a predetermined scale.

Open-ended questions also let us learn more about the drivers behind a given customer’s satisfaction or dissatisfaction, allowing us to model the behaviors that make customers happy and correct those that don’t. This is one way in which CSAT surveys, particularly ones with open-ended questions, prove more effective than NPS: they help us understand the why behind a customer interaction gone right or wrong, where NPS only lets us identify that a problem may exist based on a rating taken at a point in time.

Another advantage of CSAT over NPS is that it generates granular insights. With NPS it is often hard to understand what specific weight a given interaction had on a declining or increasing NPS score. All we can do is trend customer ratings over time and try to isolate events that occurred between ratings that may have had an impact.

Pro-tip: If you want to turbocharge the granular insights you get from CSAT surveys, this is a good time to take advantage of their flexible nature and use closed-ended questions based on a pre-defined scale to calculate a CSAT score. To calculate your CSAT score, take the number of satisfied customers (those with a rating of 4 or 5 on a 5-point scale) and divide by the total number of survey participants.
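To make the arithmetic concrete, here is a minimal sketch of that calculation (the 4-or-5-on-a-5-point-scale threshold follows the rule of thumb above; the sample ratings are invented for illustration):

```python
def csat_score(ratings, satisfied_threshold=4):
    """CSAT score: percentage of respondents who are 'satisfied',
    i.e. rated at or above the threshold (4 or 5 on a 5-point scale)."""
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

# Example: 7 of 10 respondents rated a 4 or 5
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(f"CSAT score: {csat_score(ratings):.0f}%")  # → CSAT score: 70%
```

Because the score is a simple percentage of "satisfied" responses, it is easy to trend week over week, which is exactly what makes the closed-ended portion of a CSAT survey valuable.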

A best practice for CSAT surveys is a combination of closed- and open-ended questions: the closed-ended questions yield a definitive, easy-to-trend score, while the open-ended questions generate insight into the drivers behind the ratings given in response to the closed-ended questions.

I’m Convinced We Are Starting CSAT Surveys ASAP

We hate to be the bearer of bad news, but CSAT surveys are not a panacea for all of the shortcomings of NPS, nor are they the definitive measure of customer experience.

In fact, Fred Reichheld, the creator of NPS, found that “60 to 80 percent of customers who ultimately defect had said they were satisfied or even very satisfied the last time they participated in a [CSAT] survey.”

Another study by the Corporate Executive Board, which we explore further below, found that 20% of “satisfied” customers said they intended to leave the company in question while 28% of “dissatisfied” customers intended to stay. There is a quantifiable disconnect between CSAT and customer loyalty.

Another challenge of CSAT is that, unlike NPS, there is no standard question, so both the questions asked and the scale used to measure CSAT vary considerably from company to company. This makes benchmarking and comparisons difficult, both within your industry and against your competitors.

Alternative #2: Customer Effort Score (CES)

CES is the new kid on the block, created in 2010. Part of the rise in its popularity is that it is very effective at predicting customer loyalty. The Corporate Executive Board conducted a study of more than 75,000 people who had interacted over the phone with contact-center representatives or through self-service channels such as the web, voice prompts, chat, and e-mail. Their findings are what is driving much of the excitement behind CES.

CES is 1.8 times better at predicting customer loyalty than customer satisfaction and 2 times better than Net Promoter Score.

Another strength of CES is that it generates more quantifiable, actionable, and specific data. It relies on closed-ended questions and asks customers to provide a rating on a fixed scale. For example, “Overall, how easy was it to solve your problem today?” with responses on a scale of 1 to 7, where 1 means quite difficult and 7 means super easy.
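The post doesn’t prescribe how to aggregate CES responses; a common convention (an assumption here, not something the study above specifies) is to report the simple mean of the 1–7 ratings, as in this sketch:

```python
def ces_score(responses):
    """Average effort rating on a 1-7 scale, where higher = easier.
    Reporting CES as a simple mean is a common convention; it is an
    assumption here, not a rule prescribed by the post."""
    return sum(responses) / len(responses)

# Invented sample ratings from a post-interaction survey
responses = [7, 6, 4, 7, 5, 6]
print(f"CES: {ces_score(responses):.1f} / 7")  # → CES: 5.8 / 7
```

Keeping the scale fixed across surveys is what makes the resulting number comparable across channels (chatbot vs. website vs. phone).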

You can tailor this question to specific methods of interacting to gauge where customers are finding it difficult to interact with your company. Questions like “How easy did our chatbot make solving your problem?” or “How easy was using our website to find information?” allow companies to uncover detailed and specific pain points in the customer journey.

Finally, The Perfect Measure

You had to know this was coming by now. There are issues with CES as well. Its use of closed-ended questions only lets us find out where people are stuck in their customer journey, but not why.

For example, if we ask “How easy was using our website to find information?” and get a score of 1, we don’t know what contributed to this low score. Only by asking follow-up questions might we learn that the website doesn’t let users find information in their native language, making it difficult to use. In practice, this means that to gather any real insight from CES surveys you need to follow up with additional questions.

We are also reliant on people’s perceived effort. This problem isn’t specific to CES, but your rating of 4 (not very easy) may be my 7 (super easy), or vice versa. Ratings depend on people’s comfort level with a given channel (some people don’t mind picking up the phone to fix a problem; others prefer self-service) and on their familiarity with your products or offerings, which makes finding their own answers on your website easier for some than for others.

There Is No Perfect Measure

This should be your primary takeaway. Anyone who tells you there is a single definitive measure of customer experience is likely missing critical context around their measure of choice. A holistic approach that combines NPS, CSAT, and CES is necessary if you truly want to understand what your customers think about the experience your company is creating. The question you need to answer is when to use each measure. Here is some guidance to get you started:

NPS – a great measure for gauging how your customers feel about you holistically, but not a great fit for finding pain points in your customer journey or creating clear, actionable insight into what is driving good and bad customer experiences. A good rule of thumb is quarterly NPS surveys, since NPS is best suited to gathering relational rather than transactional feedback.

CSAT – a better fit for transactional feedback, so the ideal time to deploy a CSAT survey is immediately after a customer interaction. Survey makeup matters: a combination of closed- and open-ended questions ensures you not only get data you can trend by calculating CSAT scores, but also gather the insight you need to understand the underlying drivers behind those scores.

To operate CSAT with open-ended questions effectively at scale, you will need to invest in technology that mines customer sentiment and surfaces insights (word clouds and natural language processing are good methods for generating these).
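As a toy illustration of what mining open-ended answers looks like at its simplest, here is a word-frequency sketch (the starting point for a word cloud). Real deployments would use proper NLP and sentiment models; the stopword list and sample responses below are invented for illustration:

```python
import re
from collections import Counter

# Tiny, illustrative stopword list; production NLP libraries ship
# much more complete ones.
STOPWORDS = {"the", "a", "to", "was", "is", "and", "i", "my", "it", "of"}

def top_themes(responses, n=3):
    """Count non-stopword word frequency across open-ended answers to
    surface recurring themes. A word cloud is this data, visualized."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

responses = [
    "The agent solved my billing issue quickly",
    "Billing portal was confusing",
    "Long wait, but billing question answered",
]
print(top_themes(responses))  # 'billing' recurs in all three answers
```

Even this crude count surfaces "billing" as a recurring theme, which is exactly the kind of driver a closed-ended score alone would never reveal.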

CES – another good fit for transactional feedback, but it must be deployed more selectively: only at resolution points in a customer journey. Asking how easy it was to solve a problem when the customer’s problem hasn’t been solved is a recipe for failure. Some good touchpoints to consider for CES surveys: after a product purchase, after a customer subscribes to a service, and after resolution of a help desk ticket.

Don’t Forget the Most Important Step

If you don’t have an effective plan to “close the loop,” sending surveys is a waste of resources. Who is empowered at your company to act on the feedback received? Do you have a cross-disciplinary team to address feedback?

Assigning a score to an individual or department to fix is a fool’s errand. Did you know the Corporate Executive Board study mentioned above found that 57% of inbound calls came from customers who went to the company’s website first? Your contact center leader may be the one who can determine that customers are dissatisfied with your website and self-service tools, but likely isn’t the leader empowered to change those tools. By embracing a holistic approach, Cisco Consumer Products increased its self-service rate from 30% to 84% of contacts.

Still confused on what measures to use or when? Or how to create more convenient experiences that drive better CES and CSAT scores? Contact us here to continue the conversation.

