SurveyGizmo Blog

From survey design all the way through taking action, you'll find tried and tested survey knowledge on each and every page of our blog.

CSAT, NPS & SUS: Find the Right Customer Satisfaction Metric

Andrea Fryrear Aug 23, 2016 0 Comments

Would you like to find a way to more than double your customer lifetime value? It turns out that all you have to do is increase customer satisfaction levels slightly.

Statistical analysis of customer satisfaction data from over 20,000 customer surveys showed that a customer who rates themselves "Totally Satisfied" contributes 2.6 times as much revenue as one who rates themselves only "Somewhat Satisfied."

Of course, before you can start increasing customer satisfaction, you've got to measure it first. There are multiple options for pulling this data from your customer base, including Customer Satisfaction Score (CSAT), Net Promoter Score (NPS)®, and the System Usability Scale (SUS).

In this article we'll take a look at all three and offer insights into when each metric can make the biggest impact.


Customer Satisfaction Metrics: NPS® and CSAT

The battle between Customer Satisfaction Score (CSAT) and Net Promoter Score® (NPS) has been in full swing since NPS® was introduced in 2003.

The former is designed to measure how satisfied customers are with various aspects of their experience with a product or service, while NPS® measures how likely an individual would be to suggest a product or service to a friend.

Proponents of NPS® claim that it's a more accurate indicator of customer loyalty and long-term growth, while its detractors argue that its single-question format is limiting and not a reliable indicator of whether someone would actually recommend something.

Of course, there's no rule that says you have to take sides in this debate. Let's take a closer look at CSAT and NPS® and the situations in which each one's strengths really shine.


Example CSAT Questions

CSAT surveys are helpful because they can be used to measure specific components of the customer experience.

You can ask how satisfied someone is with your customer service, with their first experience using a piece of software, or with the payment process.


The questions themselves represent a snapshot of a single customer's experience, meaning that when they're administered as one-off surveys, they measure only short-term satisfaction.

By asking the same customer similar questions throughout their tenure with your product, however, you can get a look at changing satisfaction levels over time.

This data can be highly actionable, but it can also be biased. In a very insightful article, Zendesk argues:

While viewing customer satisfaction this way is useful, there's still one problem. Even though you're looking at satisfaction over time, you're missing a piece of the puzzle because this view gives extra weight to your most vocal customers...and to your most active customers, who keep coming back and are likely to be fairly happy.

To counteract this potential bias, add NPS into the mix.


The NPS® Question

NPS® surveys consist of a single question: How likely are you to recommend [your product or service] to a friend or colleague?


Answers are arranged on an 11-point scale that runs from 0 ("Not at all likely") to 10 ("Very likely").

Those people who answer on the “Not likely” end of the spectrum from 0 to 6 are marked as “Detractors.” Those whose answers fall in the 7 or 8 spots are called “Passives,” and those who answer on the “Very Likely” end (9 or 10) are called “Promoters.”

To get from these numbers to the actual NPS® score, take your total percentage of Promoters and subtract the percentage of Detractors. You can report the result as a percentage (43%) or as a whole number (43), depending on what makes the most sense for you.
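As a sketch, the classification and subtraction described above might look like this in Python (the function name is illustrative):

```python
def nps(ratings):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)   # 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # 0 through 6
    # Passives (7 or 8) count toward the total but cancel out of the score.
    return round(100 * (promoters - detractors) / total)

# 5 promoters, 3 passives, 2 detractors: 50% - 20% = 30
ratings = [10, 9, 9, 10, 9, 7, 8, 8, 3, 6]
print(nps(ratings))  # 30
```

Note that because Detractors are subtracted, the score can range from -100 (all Detractors) to 100 (all Promoters).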

For more actionable data, we suggest following up this scale question with an open text field that allows respondents to give you qualitative feedback.

Open text analysis can be time consuming, but you're going to learn far more about the reasoning behind your NPS® score. Whether positive or negative, there is always something to learn from asking for additional input.

(To see how easy it is, start with SurveyGizmo's NPS question type.)


Choosing Between CSAT and NPS®

If you have the ability to measure both satisfaction and likelihood to promote, using both metrics is certainly preferable.

But if you need to choose one, base your decision both on the larger goals for your product and the type of action you're prepared to take to influence these scores.

Do you already know of a few areas that could use some improvement? Then you may want to track CSAT scores around those features while you make changes.

Are you seeing a decline in new or returning customers? NPS® may be able to help you understand why people aren't sharing your product or service with their friends.

Keep in mind that CSAT tends to work best in the short term, while NPS® is more of a long term play.

The latter requires a baseline NPS® score to measure your progress against, so establish that baseline first and then measure consistently over time to see any fluctuations.


Measuring a Product's Usability with SUS

The System Usability Scale, also known as SUS, is a simple 10-item questionnaire designed to measure people's perceptions of usability.

Unlike NPS® and CSAT, this set of questions focuses solely on how your users feel about their experience using the product itself.

For this reason, it's often used in conjunction with one of the other satisfaction scales rather than as the sole customer satisfaction metric.

Although originally created 25 years ago to be administered after usability tests on some of the first software applications, it's now commonly used for all sorts of products, including hardware, websites, cell phones, and other physical products.


Who Should Use the SUS Scale

SUS offers a statistically valid, reliable means of measuring usability, which is otherwise a nebulous concept that's difficult to pin down.

Any customer satisfaction or customer experience team that deals with a computerized system can benefit enormously from implementing the SUS scale.

SUS scores, however, are not diagnostic tools. They won't necessarily tell you which particular aspects of your system are difficult for users to navigate.

Instead, they're designed to help measure and track the overall ease of use for a piece of software.

The System Usability Scale is also very quick and inexpensive to implement, making it ideal for customer-focused teams that don't have large budgets.

Finally, if you have a small sample size (i.e. not a lot of customers), SUS is a great choice. Even with a small data set, its results have consistently proven valid and useful.


SUS Questions: 5-Point Scale

Although it's ten questions long, the SUS survey uses simple scale questions that are quick and easy for respondents to follow:

  1. I think that I would like to use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

Each of these questions asks users to rate their agreement on a five-point scale from "Strongly Disagree" to "Strongly Agree."



Scoring and Interpreting SUS Responses

It can be challenging at first to turn responses to the above ten questions into a SUS score, but once you've got the process down you'll be able to plug each new response into the formula easily.

It works like this:

  • Each question receives a score between 1 and 5, based on a respondent's level of agreement.
  • For the odd numbered questions, which are all positive statements, subtract 1 from the score.
  • Even numbered questions are negative statements. Subtract their value from 5.
  • Take all these adjusted results and add them together.
  • Multiply this total by 2.5.

These calculations will give you your score on a scale of 0-100, but it's not a percentage result.

Think of it more like an academic grade. The average result is 68, so if you're at or above this you're on the right track.
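As a sketch, the scoring steps above can be written in Python (the function name is illustrative):

```python
def sus_score(responses):
    """Score one SUS questionnaire: ten answers, each from 1 to 5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    adjusted = 0
    for item, answer in enumerate(responses, start=1):
        if item % 2 == 1:
            adjusted += answer - 1   # odd items: positive statements
        else:
            adjusted += 5 - answer   # even items: negative statements
    return adjusted * 2.5            # scale the 0-40 total to 0-100

# Strongly agreeing with every positive item and strongly
# disagreeing with every negative one yields a perfect score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A respondent who answers "Neutral" (3) on every item lands at exactly 50, below the average of 68, which is a good reminder that SUS scores aren't percentages.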


Correlations Between Usability and Customer Satisfaction

Just as CSAT and NPS® can give insight into different parts of the customer's life cycle, combining SUS and NPS® can do a lot to fill in gaps in your data.

Jeff Sauro (and many others) has found a strong positive correlation between Net Promoter Score and SUS results (.61, to be exact). This means that SUS scores can explain roughly 37% of the variability in NPS results.

For those of you keeping track at home, that means you'll need a SUS score of at least 88 to earn a promoter.
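The "variability explained" figure here is simply the square of the correlation coefficient, which is easy to check:

```python
# r is the reported correlation between SUS and NPS results.
r = 0.61
print(round(r ** 2, 2))  # 0.37
```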

This interrelationship is just one example of how these customer satisfaction metrics work together.

As with any survey question, you need to understand the outcomes you're hoping for, then choose the metric (or metrics) that will get you the most actionable data without stretching your internal resources too thin or fatiguing your customer base through over-surveying.
