The Net Promoter Score (NPS) is widely used by organizations, often to make high-stakes decisions about whether a brand, product, or service has improved or declined.

Net Promoter Scores are often tracked on dashboards, and any changes (for better or worse) can have significant consequences: adding or removing features, redirecting budgets, even impacting employee bonuses. Random sampling error, however, can often explain many of the changes in Net Promoter Scores, and organizations don’t want to be fooled by randomness. Two approaches to help differentiate the signal (meaningful changes) from the noise of sampling error are confidence intervals and significance tests.

We recently described three methods for computing confidence intervals for Net Promoter Scores. (For full computational details, see “Confidence Intervals for Net Promoter Scores.”) One was based on adjusted-Wald estimates, one was based on trinomial means, and one used bootstrapping. The adjusted-Wald method had accurate coverage and—especially for smaller sample sizes—produced more precise (narrower) intervals.
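For reference, an adjusted-Wald interval for a single NPS can be sketched in Python. This is our illustration, not the cited article's code; it reuses the add-3-to-n, add-¾-to-promoters-and-detractors adjustment described later for the significance test, which may differ in detail from the confidence-interval formulation.

```python
import math
from statistics import NormalDist

def adjusted_wald_ci(pro, pas, det, confidence=0.95):
    """Adjusted-Wald confidence interval for one NPS (illustrative sketch).

    Assumes the same adjustment as the significance test described in this
    article: add 3 to n and 0.75 to both promoters and detractors.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = pro + pas + det + 3
    p_pro = (pro + 0.75) / n
    p_det = (det + 0.75) / n
    score = p_pro - p_det
    # Variance of the difference of two proportions from one multinomial sample
    se = math.sqrt((p_pro + p_det - score ** 2) / n)
    return score - z * se, score + z * se
```

For small samples with an NPS near zero (e.g., 11 promoters, 16 passives, 8 detractors), the interval is wide and spans zero.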

After completing our research on NPS confidence intervals, we worked out a method for testing the statistical significance of the difference between two NPS datasets based on the formula for the adjusted-Wald confidence interval. (For computational details, see “How to Statistically Compare Two Net Promoter Scores.”) Our preliminary evaluation of that method was promising, but it was limited to one test case.

For this article, we compare this new statistical test with two other methods using real-world NPS data that we collect every two years in UX surveys of consumer applications.

Comparison of the Three Test Methods

The Methods

The three methods we compared were:

  • Adjusted-Wald: For each set of NPS data, add a constant of 3 to the sample size (n), ¾ to the number of detractors, and ¾ to the number of promoters. Then compute the standard error based on the variance of the difference in two proportions. Combine the standard errors for each adjusted NPS to get the standard error of the difference, and then divide the difference in the two adjusted NPS by that standard error to get a Z-score. Use the Z-score to determine the two-tailed p-value for the test.
  • Trinomial means: For each set of NPS data, assign −1 to detractors, 0 to passives, and +1 to promoters. Use the means and standard errors for each NPS to compute a t-score. Use the t-score to determine the two-tailed p-value for the test.
  • Randomization test: For each set of data, assign −1 to detractors, 0 to passives, and +1 to promoters. With at least 1,000 iterations (preferably more), use a randomization test to shuffle the data and recompute the difference in the NPS, storing each difference in an array. The two-tailed p-value for the test is the percentage of differences in the array where the absolute value is greater than the absolute value of the observed difference.
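The three methods above can be sketched in Python. This is an illustrative implementation based on the descriptions in this article, not the authors' code: the multinomial variance formula, the normal approximation used to convert the trinomial t-score to a p-value, and the ≥ convention in the randomization test are our assumptions.

```python
import math
import random
from statistics import NormalDist

def adjusted_wald_test(pro1, pas1, det1, pro2, pas2, det2):
    """Z-test on the difference of two adjusted-Wald NPS estimates.

    Adjustment: add 3 to n and 0.75 to both promoters and detractors.
    """
    def adj(pro, pas, det):
        n = pro + pas + det + 3
        p_pro = (pro + 0.75) / n
        p_det = (det + 0.75) / n
        score = p_pro - p_det
        # Variance of the difference of two proportions from one sample
        var = (p_pro + p_det - score ** 2) / n
        return score, var
    s1, v1 = adj(pro1, pas1, det1)
    s2, v2 = adj(pro2, pas2, det2)
    z = (s1 - s2) / math.sqrt(v1 + v2)
    return 2 * (1 - NormalDist().cdf(abs(z)))

def trinomial_test(pro1, pas1, det1, pro2, pas2, det2):
    """Two-sample test on trinomial means (-1/0/+1 coding).

    The t-score is converted to a p-value with a normal approximation,
    which is close to the t distribution at these sample sizes.
    """
    def stats(pro, pas, det):
        n = pro + pas + det
        mean = (pro - det) / n
        # Sample variance of the -1/0/+1 codes (sum of squares = pro + det)
        var = (pro + det - n * mean ** 2) / (n - 1)
        return mean, var, n
    m1, v1, n1 = stats(pro1, pas1, det1)
    m2, v2, n2 = stats(pro2, pas2, det2)
    t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
    return 2 * (1 - NormalDist().cdf(abs(t)))

def randomization_test(pro1, pas1, det1, pro2, pas2, det2,
                       iters=10000, seed=1):
    """Shuffle the pooled -1/0/+1 codes and count how often the shuffled
    NPS difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    x = [1] * pro1 + [0] * pas1 + [-1] * det1
    y = [1] * pro2 + [0] * pas2 + [-1] * det2
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = x + y
    count = 0
    for _ in range(iters):
        rng.shuffle(pooled)
        a, b = pooled[:len(x)], pooled[len(x):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / iters
```

For the first comparison in Table 2 (Video Editor A vs. Music Service A), this sketch reproduces the published adjusted-Wald p-value of 0.67.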

The Datasets

Every two years we research select business and consumer software, most recently in 2020. As part of that research, we collected likelihood-to-recommend ratings and computed the NPS for each product. Table 1 shows the sample sizes; number of detractors, passives, and promoters; and the NPS for 17 of the most recently evaluated consumer products (with data from over 1,000 respondents). The NPS ranged from −53% to 43%, and the sample sizes ranged from 29 to 111. We organized the products into three groups based on their sample sizes (small: 29–35; medium: 49–50; and large: 101–111) and paired them so we would have three comparisons in each sample-size group with variation in the magnitude of the NPS differences (|d| in Table 1).

| Product | Pro | Pas | Det | n | NPS | Product | Pro | Pas | Det | n | NPS | |d| |
| Video Editor A | 11 | 16 | 8 | 35 | 9% | Music Service A | 13 | 7 | 13 | 33 | 0% | 9% |
| Language App | 16 | 11 | 8 | 35 | 23% | Video Editor B | 9 | 11 | 10 | 30 | −3% | 26% |
| Tax Prep | 14 | 7 | 8 | 29 | 21% | Email A | 6 | 6 | 19 | 31 | −42% | 63% |
| Music Service B | 22 | 15 | 13 | 50 | 18% | PDF Program | 19 | 17 | 14 | 50 | 10% | 8% |
| Browser A | 29 | 12 | 8 | 49 | 43% | Music Service B | 22 | 15 | 13 | 50 | 18% | 25% |
| Finance App | 13 | 20 | 17 | 50 | −8% | Email B | 7 | 9 | 33 | 49 | −53% | 45% |
| Browser B | 46 | 33 | 32 | 111 | 13% | App Suite B | 47 | 39 | 22 | 108 | 23% | 10% |
| App Suite A | 53 | 34 | 14 | 101 | 39% | Slides App | 48 | 24 | 30 | 102 | 18% | 21% |
| Email C | 61 | 30 | 16 | 107 | 42% | Word Processor | 39 | 38 | 32 | 109 | 6% | 36% |

Table 1: Nine planned comparisons of 17 real-world NPS datasets (Music Service B appears twice) (Pro = number of promoters, Pas = number of passives, Det = number of detractors, |d| = absolute difference).
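Each NPS in Table 1 is simply (promoters − detractors) / n; for example, the Tax Prep row can be recomputed from its counts:

```python
# Recomputing one row of Table 1 (Tax Prep) from its counts
pro, pas, det = 14, 7, 8
n = pro + pas + det          # sample size
nps = (pro - det) / n        # NPS as a proportion
print(n, round(100 * nps))   # 29 and 21 (i.e., NPS = 21%)
```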


Table 2 shows the results (p-values) for the three methods and nine comparisons.

| Comparison | n1, n2 | |d| | Adjusted-Wald | Trinomial Means | Randomization Test |
| Video Editor A/Music Service A | 35, 33 | 9% | 0.67 | 0.67 | 0.77 |
| Language App/Video Editor B | 35, 30 | 26% | 0.20 | | |
| Tax Prep/Email A | 29, 31 | 63% | | | |
| Music Service B/PDF Program | 50, 50 | 8% | 0.63 | 0.63 | 0.72 |
| Browser A/Music Service B | 49, 50 | 25% | | | |
| Finance App/Email B | 50, 49 | 45% | | | |
| Browser B/App Suite B | 111, 108 | 10% | 0.33 | 0.33 | 0.35 |
| App Suite A/Slides App | 101, 102 | 21% | | | |
| Email C/Word Processor | 107, 109 | 36% | | | |

Table 2: Results (p-values) for the three significance testing methods across nine NPS comparisons.

We expected that small differences would generally not be statistically significant and that large differences generally would, with larger sample sizes having more power than smaller sample sizes. The patterns of p-values in Table 2 were consistent with this expectation. For example, differences of 36 and 45 percentage points were statistically significant at respective sample sizes around 100 and 50 per product, while differences of 8 to 10 points weren’t statistically significant at any of the sample sizes, even those over 100.

The p-values produced by the adjusted-Wald and trinomial-means methods were surprisingly close. They were identical for seven of the nine comparisons; for the remaining two, the adjusted-Wald p-value was .01 larger in one case and .01 smaller in the other.

The p-values for the randomization test were about .10 larger than those of the other methods when sample sizes were small to medium and the NPS difference was less than 10%. For larger differences and sample sizes, its p-values were closer to those of the other methods, but they matched only when both the sample size and the NPS difference were large, or when the NPS difference was very large (63%).

Even though the p-values for the Randomization Test method tended to be larger than the others when sample sizes or the NPS differences were smaller, all three methods produced consistent decisions regarding whether a difference was statistically significant (p < .05 or p < .10) for these nine comparisons.

Summary and Discussion

Using real-world NPS data to make nine comparisons, we evaluated three methods of testing the differences between two NPS datasets. Data came from surveys of 17 consumer applications and 1,079 respondents.

We found that a new method based on the formula for adjusted-Wald NPS confidence intervals worked as well as a method based on NPS trinomial means. Both of these methods had better performance than a randomization test, especially when sample sizes and the NPS differences were smaller. The randomization test makes fewer assumptions about the distribution of the NPS scores, but that seems to make it more conservative than the other methods.

When only a test of significance is needed, the method based on NPS trinomial means works surprisingly well. Because randomization tests are more conservative in many cases and otherwise produce similar results, we do not recommend routinely using them to assess NPS differences.

Based on these results, we recommend that UX researchers who need to conduct a test of significance on pairs of NPS scores use the adjusted-Wald method, especially when used in conjunction with adjusted-Wald confidence intervals.

