The top three methods for measuring your customer experience’s effectiveness are Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), and Customer Effort Score (CES), according to Holly Chessman in Medium.
They’re all types of surveys, administered to customers at specific points in the customer journey. Here’s what they have in common, how they differ, and best practices for using them.
(Customer Satisfaction Score)
Asks the customer: “How good did that interaction feel to you?”
Brands usually ask the CSAT questions about a specific interaction, such as a live chat or purchase.
You give customers five options for each question:
- Very satisfied
- Somewhat satisfied
- Neither satisfied nor dissatisfied
- Somewhat dissatisfied
- Very dissatisfied
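If you want to score these responses yourself, one common convention (not spelled out in this article) is to report CSAT as the share of respondents who chose one of the two “satisfied” options. A minimal Python sketch with made-up responses:

```python
from collections import Counter

# Hypothetical batch of CSAT responses on the five-point scale above.
responses = [
    "Very satisfied", "Somewhat satisfied", "Very satisfied",
    "Neither satisfied nor dissatisfied", "Somewhat dissatisfied",
]

counts = Counter(responses)

# Score CSAT as the share of respondents who picked a "satisfied" option.
satisfied = counts["Very satisfied"] + counts["Somewhat satisfied"]
csat = satisfied / len(responses) * 100

print(f"CSAT: {csat:.0f}%")  # 3 of 5 responses were satisfied -> 60%
```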
Pro-tip: Don’t mistake good CSAT scores for overall customer satisfaction. They’re meant to tell you how the interaction went, not how the customer feels about your brand overall. If you’re measuring CSAT scores for one type of interaction but not another, you’re missing valuable data.
(Net Promoter Score)
Asks the customer: “How likely are you to recommend my brand to someone else?”
Gives the customer a 0-10 scale. Sometimes it gives the customer the opportunity to elaborate as well.
The theory behind NPS is that a stated propensity to refer your brand to a friend or colleague correlates strongly with customer satisfaction and loyalty.
It’s a quick, easy, and inexpensive way to get a baseline metric. It tells you at a glance whether your customers feel better or worse about your brand over time.
It also helps you figure out who wants to champion your brand.
Here’s how to calculate it:
While a 0-10 scale is traditional, you can also use a seven-point scale (1-7). Some studies suggest people are less likely to pick randomly on a seven-point scale than on an 11-point one: fewer options are less overwhelming, and the less familiar scale may throw people off guard just enough to think before answering, which can make their answers more accurate.
Your Net Promoter Score is: (promoters – detractors) / (total respondents) x 100. In other words, your percentage of promoters minus your percentage of detractors.
Responses in hand, add up your “promoters,” “passives,” and “detractors.”
If you’re using a 0-10 scale, people who select
- 9 or 10 are promoters
- 7-8 are passives
- 0-6 are detractors
If you’re using a 7-point scale, people who select
- 7 are promoters
- 5-6 are passives
- 1-4 are detractors
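The bucketing rules above are easy to encode. A small Python sketch (the function name and signature are mine, not from any particular survey tool):

```python
def nps_bucket(score: int, scale_max: int = 10) -> str:
    """Classify one NPS response using the cutoffs above.

    scale_max=10 is the traditional 0-10 scale;
    scale_max=7 is the alternative seven-point (1-7) scale.
    """
    if scale_max == 10:
        if score >= 9:
            return "promoter"   # 9 or 10
        if score >= 7:
            return "passive"    # 7-8
        return "detractor"      # 0-6
    if scale_max == 7:
        if score == 7:
            return "promoter"   # 7 only
        if score >= 5:
            return "passive"    # 5-6
        return "detractor"      # 1-4
    raise ValueError("expected scale_max of 10 or 7")

print(nps_bucket(9))     # promoter
print(nps_bucket(8))     # passive
print(nps_bucket(6, 7))  # passive on the seven-point scale
```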
Next you need to know what percentage of your customers each group comprises.
Some survey software will automate some or all of this process for you. If you want to calculate your NPS manually:
- Export responses from your questionnaire/survey into a spreadsheet.
- Divide respondents into detractors, passives, and promoters.
- Add up the total responses from each.
- Divide each group’s total by the total number of survey responses to get each group’s percentage. (Or use a percentage calculator to make it easier.)
- Subtract the percentage total of detractors from the percentage total of promoters.
This is your NPS!
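The manual steps above — bucket the responses, take percentages, subtract — boil down to a few lines of Python. This sketch assumes the traditional 0-10 scale:

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 responses, following the
    manual steps above: bucket, take percentages, subtract."""
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    # Percentage of promoters minus percentage of detractors.
    return round((promoters - detractors) / total * 100)

# 70 promoters, 20 passives, and 10 detractors out of 100 responses.
scores = [10] * 70 + [7] * 20 + [3] * 10
print(net_promoter_score(scores))  # 60
```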
Scores range from −100 (meaning everyone is a detractor) to +100 (meaning everyone is a promoter).
Any positive score is a decent Net Promoter Score, and an NPS of +50 is excellent.
Say you get 100 responses to your survey:
- 10 people answered 0-6, so 10/100 = 10% are detractors
- 20 answered 7 or 8, so 20/100 = 20% are passives
- 70 answered 9 or 10, so 70/100 = 70% are promoters
Subtract detractors from promoters. 70% – 10% = 60%
A Net Promoter Score is always shown as an integer and not a percentage.
So your NPS is 60. Baller!
Beware of measuring NPS and then calling it a day. NPS “can capture an overall trend in customer loyalty, but the results won’t tell you WHY that trend is happening.”
It’s tempting to try to find the spot where most customers are most happy and put the NPS question there. For example, Airbnb asks NPS in the post-hosting survey flow. As Simon.ouderkirk put it in an illuminating Support Driven Slack conversation, “I’m $600 richer, I’ve already answered eight questions, sure I’ll answer one more!”
Sure, these answers might please upper management, but that’s not the right time and place for that question. As Mary_m pointed out in Support Driven, NPS is “most useful if it is non-transactional (that is, sent at a random time rather than tied directly to a particular experience), which then tells you more about how folks feel about the brand day-to-day. For particular experiences you want satisfaction or effort scores.”
“YAAAAAS,” Sarahleeyoga concurred. “NPS is about overall value – not just value of a specific interaction.”
(Customer Effort Score)
Asks the customer: “How much work did it take to get this thing done?”
Like CSAT, you ask it after an interaction. But like NPS, it’s a measure of customer loyalty. In fact, according to Chessman, “CES has been proven to be the best indicator of customer loyalty.”
That’s right. How easy it is to buy from you has a bigger impact on whether someone buys from you again than quality, price, or any other factor. “In fact, according to CEB research, 94% of customers who have a low-effort service experience will buy from that same company again.”
(Things in Between)
Okay, I just made that up. But there are other ways to measure customer experience that don’t fit neatly into any of these categories. For instance, at the bottom of Trello’s support emails customers can rate the support they received with a click of a button.
It looks like this:
(How cute is the ¯\_(ツ)_/¯ instead of “okay?”)
The Trello team can see reports to get an overview of how things are going by time period and by agent.
Trello digs into the negative reactions in order to make necessary changes, such as changing the wording of their auto-response email. Trello uses Help Scout to insert the poll, customize the responses, and do the reporting.
Pro-tip: While flexibility is great, one problem with custom surveys is that you can’t benchmark your results across different companies as easily. Be sure you’re also taking your customers’ temperature in a way that’s industry-standard.
Every measure has its pros and cons, and none will give you the complete picture. That’s why you should use more than one. Sarah Chambers has a great post on how and why to combine and layer NPS data with other customer metrics to get a more complete picture of your user base.
Ryan_thielen captures both CSAT and NPS: “Our CSAT is on every ticket (we have about a 40% response rate – mainly for support) – and we do NPS quarterly for both our power users (key stakeholders – mainly for customer success metric) and a random selection of users (for product).”
While you want to get a complete, accurate picture, you also want to be careful not to wear your customers out with too many surveys.
Lastly, be sure you’re taking action on whatever data you’re collecting. There are lots of ways to do this, including incentivizing agents, determining how well experiments are working, following up with angry customers, and changing your processes in light of survey data.
What do you use, and why? Let me know in the comments!