10 Things to Know about the Microsoft NSAT Score

Jeff Sauro, PhD

We love writing about measures at MeasuringU.

We write about measures we’ve created (SUPR-Q®), industry standards (SUS, NPS, and TAM), emerging industry standards (UMUX-Lite), and lesser-known ones (lostness).

Jim Lewis and I also have a chapter dedicated to questionnaires in Quantifying the User Experience.

We often encounter a new measure when working with clients, either because their organization already uses it or because they’ve read about one and want to try it.

One we recently encountered is the Microsoft NSAT score. There isn’t much written about the NSAT that I could find, but I did find a document from Microsoft IT [.doc] that provides some guidance, along with a few articles describing its usage, in case your organization is considering it (or you’ve heard about it). Here are ten things to know about this measure of satisfaction.

1. NSAT stands for net satisfaction. Satisfaction as a measure has been around for decades and predates Microsoft; it’s likely called the Microsoft NSAT because of how the company scores it. You have no doubt answered questions about your satisfaction with a product or experience, and we’ve written about satisfaction before. The NSAT is based on a single question, phrased something like the following: “Thinking about your experience in the last 3 months, rate your overall satisfaction with [product].”

2. It uses a four-point Likert-type scale. The response scale consists of four satisfaction points, with the most favorable response, “Very Satisfied,” shown on the left and “Very Dissatisfied” on the right.
For single items, more points tend to increase reliability (and even validity). So having only four points is on the low side for a single item, but it does at least capture intensity, unlike a three-point scale.

There is a slight left-side or top-down bias with response scales: respondents tend to pick the left- or top-most choices slightly more often (possibly because it requires less effort to select). This makes responses slightly more favorable, so if you use this question and have any external benchmarks, be sure to match the orientation, with Very Satisfied shown on the left (and at a minimum, be consistent in how you present it).

3. “Don’t Know” and “Not Applicable” are recommended and excluded from scoring. While the four-point scale doesn’t have a neutral option, it’s advised to add two other options for respondents: Don’t Know and Not Applicable (N/A). It’s unclear why a participant wouldn’t know whether they were satisfied with a product, but I suspect it’s because this question might be asked at various stages in a product journey, possibly alongside other questions. If possible, a better approach is to not show the question to participants it doesn’t apply to, which has the same effect on the calculation (their responses are excluded).

4. It uses net scoring. The “net” is derived by taking the Very Satisfied responses as a percentage of the total (top-box) and subtracting the two dissatisfied responses as a percentage of the total (bottom-two-box). Somewhat Satisfied responses aren’t factored into the net (they count only toward the total). An unusual twist in the scoring is that Microsoft adds 100 to this net percentage to avoid negative scores. It’s unclear why, but I suspect it’s because of a misplaced belief that you can’t have negative percentages, or simply because people have an aversion to them. This bounds the score from 0 to 200. The result also seems to be expressed as a number rather than a percentage, another topic that seems to elicit misplaced strong opinions.

An example NSAT score of 103 is shown below, computed from hypothetical responses to a product survey:

Very Satisfied: 100
Somewhat Satisfied: 250
Somewhat Dissatisfied: 50
Very Dissatisfied: 36
Don't Know: 35 (excluded from scoring)
Not Applicable: 5 (excluded from scoring)

Total Responses (excluding Don't Know and N/A): 436

Percent Very Satisfied: 100/436 = 23%

Percent Dissatisfied: (50+36)/436 = 20%

Net Satisfied: 23-20+100 = 103
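To make the arithmetic concrete, here’s a minimal Python sketch of the scoring described above. The function name and structure are our own (Microsoft’s internal tooling isn’t public), and it assumes Don’t Know and N/A responses have already been excluded:

```python
def nsat(very_sat, somewhat_sat, somewhat_dis, very_dis):
    """Compute an NSAT score (0-200) from raw response counts.

    Assumes "Don't Know" and "Not Applicable" responses have
    already been excluded, so they never enter the total.
    """
    total = very_sat + somewhat_sat + somewhat_dis + very_dis
    pct_very_sat = very_sat / total                  # top box
    pct_dissat = (somewhat_dis + very_dis) / total   # bottom two boxes
    # Net percentage, shifted by 100 to keep the score non-negative.
    return round((pct_very_sat - pct_dissat) * 100) + 100

# The worked example above: 100, 250, 50, and 36 scored responses.
print(nsat(100, 250, 50, 36))  # 103
```

Note that this version rounds the net directly rather than rounding each percentage first (as the worked example does); the two approaches agree here but can differ by a point in edge cases.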

5. It’s similar to other “net” scoring systems. This type of net scoring isn’t unique to the NSAT. You’ll notice the similarity to the top-two-box (9s and 10s) minus bottom-seven-box (0 to 6) approach of the NPS. It’s also done in the Forrester Customer Experience Index (CxPI). The CxPI uses three questions with five-point scales and subtracts the bottom-two-box percentage from the top-two-box percentage (we’ll cover it in a future article).
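All of these follow the same pattern: the percentage in some set of favorable responses minus the percentage in some set of unfavorable ones. Here’s a small, generic Python sketch of that pattern; the function name and example data are our own, for illustration only:

```python
from collections import Counter

def net_score(responses, top, bottom):
    """Generic net score: percentage in `top` minus percentage in `bottom`."""
    counts = Counter(responses)
    total = sum(counts.values())
    pct_top = sum(counts[v] for v in top) / total
    pct_bottom = sum(counts[v] for v in bottom) / total
    return (pct_top - pct_bottom) * 100

# NPS on its 0-10 scale: promoters (9-10) minus detractors (0-6).
nps = net_score([10, 9, 8, 7, 3, 10, 6], top={9, 10}, bottom=set(range(7)))

# A CxPI-style item on a 5-point scale: top-two-box minus bottom-two-box.
cx = net_score([5, 4, 4, 3, 2, 1], top={4, 5}, bottom={1, 2})
```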

6. Top-box scoring may better predict behavior. The advantage of net scoring is that it differentiates the extreme responders from the more tepid ones. We’ve found good evidence that the more extreme responders are a better predictor of behavior, such as recommending and purchasing. For example, de Haan and colleagues (2015) found that top-box satisfaction was the best predictor of customer retention (slightly better than the NPS).

7. One item can be sufficient for simple constructs. Like the NPS, the NSAT uses only a single item to measure satisfaction. In our earlier research, we’ve seen good evidence that simple constructs such as ease, intent to recommend, and satisfaction can be adequately captured with a single item. Adding more items increases reliability, but the gain is modest, and the compactness of a single item is often enough. Having only four points, however, makes the measure rather coarse; either a few more points or additional items would likely increase its discriminating power.

8. It’s not necessarily “better” or “worse” than the NPS. One article we encountered claimed that NSAT was “better” than the NPS. Unfortunately, as is often the case online, the only rationale provided for why it’s better than the Net Promoter Score is “Humans aren’t very good with numbers and I think you’ll see more accurate results with a system like NSAT.”

We’ve shown that, in general, more scale points are better than fewer. A more effective way to gauge which measure is “better” is to see which one better predicts an outcome such as purchasing, retention, or referrals (this is what we mean by validity). We’ve found mixed results: in some cases satisfaction is the better predictor, and in others the NPS is. There are benefits to measuring both. If your organization has settled on one, there’s little to gain in switching from one to the other, especially if you have existing data and benchmarks.
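As an illustration of what such a validity check might look like, here’s a Python sketch that correlates each measure with a later behavioral outcome. The data are entirely hypothetical, and a real analysis would use far more respondents and a proper model (e.g., logistic regression):

```python
import numpy as np

# Hypothetical paired data for ten respondents: satisfaction (1-4),
# likelihood to recommend (0-10), and whether each later repurchased.
sat = np.array([4, 3, 2, 4, 1, 3, 4, 2, 3, 4])
ltr = np.array([9, 7, 4, 10, 2, 8, 9, 5, 6, 10])
repurchased = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])

# Point-biserial correlation (Pearson r with a binary outcome) as a
# simple index of predictive validity for each measure.
r_sat = np.corrcoef(sat, repurchased)[0, 1]
r_ltr = np.corrcoef(ltr, repurchased)[0, 1]
print(f"Satisfaction vs. repurchase: r = {r_sat:.2f}")
print(f"Recommend intent vs. repurchase: r = {r_ltr:.2f}")
```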

9. Satisfaction is related to, but different from, the NPS. Satisfaction more closely measures people’s feelings and beliefs, whereas the Net Promoter Score gauges behavioral intention (intent to recommend). The two are related, because you’re more likely to recommend a product or company when you’re satisfied. Consequently, there’s nothing wrong with measuring both, and it can be advantageous to understand how they differ.

10. There are possible, but inaccessible, benchmarks. One advantage of using an existing measure is that you can ideally compare your score to existing norms or benchmarks, which tell you whether your score is good, average, or bad. Apparently such benchmarks exist for the NSAT, but I have found only a few examples. Microsoft likely has internal benchmarks for its products, but until you start collecting your own data, don’t count on having much to compare against.
