It can be helpful, though, to understand what drives the animosity. Like many things with significant impact, there’s more than one reason people hate the Net Promoter Score. Here are some of the more common reasons we’ve observed.
1. It displaces other efforts or measures.
A primary reason for the antipathy is that the Net Promoter Score displaces other efforts that are perceived as more worthwhile. The effort involved in collecting, analyzing, and prioritizing NPS findings may interfere with design or product improvements and can crowd out other measures, notably satisfaction, effort, delight, or other attitudinal metrics. This can be frustrating to businesses and academics who have made significant efforts in validating, promoting, and selling other measures.
2. It’s used when it seems irrelevant.
One of Jeff’s first experiences with the Net Promoter Score was having to ask government accountants if they’d recommend their accounting software to a friend. What seemed like a good question for TurboTax users didn’t seem relevant for a B2B enterprise where word of mouth plays a minor role in decision making. Certainly, the NPS seemed less relevant than other questions that focused on user satisfaction or effort. Surprisingly, though, the NPS correlated highly with the other questions that seemed more relevant.
3. It suffers from bad word of mouth.
There are many places online where people have expressed their frustration with the NPS. Influential voices in CX and UX who have come out strongly against the NPS have likely increased some people’s distrust in the measure. Some have told us that though they’ve used the NPS for years, they felt so shamed by others for using it that they’ve abandoned years of historical NPS measurement to move to an alternative metric such as satisfaction.
4. Some practitioners think it’s a poor measure.
People can and should question the validity and reliability of a measure, especially one that's so consequential to many decisions. Common questions we've addressed that bear on its quality as a measure include:
- Does it predict future growth?
- Is it better than satisfaction?
- Is it reliable?
- Is the designation of promoters justified?
- Is the detractor designation justified?
- Is one item sufficient to measure a construct?
- Is the NPS scoring system wacky?
- Is it really the only measure?
- Should it be asked in all the ways the organization is asking it?
We’ve reviewed the data and are convinced it’s not the ONLY measure and certainly not always the best measure. However, even when it isn’t the very best, it’s usually one of the best when used in the right context.
5. It’s affected by Campbell’s Law.
The Net Promoter Score is used in some organizations to determine bonuses or promotions. If it weren't already high stakes at a company level, this level of accountability makes it personally high stakes. But the more a measure is used for important decision-making, the more it's subject to corruption pressures. So, we've added a corollary to Campbell's Law: Mismanagement leads to mismeasurement. In particular, mismanagement occurs when poor governance of how NPS data is collected is combined with tying pay and promotions to it.
6. It may be overused.
In some ways, the NPS is a victim of its success. It’s so widely used that hardly a day seems to go by without customers being asked by one or more service providers to rate the likelihood that they would recommend the service to others. It’s one thing to answer this question occasionally, but at some point, customers can start to get frustrated by the perceived demands on their time.
7. When misused, it can cause customer resentment.
Campbell’s Law (above) describes how mismanaging NPS measurement leads to mismeasurement, largely driven by people who benefit from its corruption. Another corollary of Campbell’s Law is that customer pressure leads to increased customer resentment. In other words, when the corruption of an NPS measurement program involves pressuring customers to provide only the highest score, customers who are not inclined to recommend the service will resent the pressure. Furthermore, the enterprise will not receive early warning signals of customer dissatisfaction.
8. UX designers might not get credit for improved designs.
Some of the most vocal critics of the NPS come from the UX design community. We have heard concerns from UX designers that the highly discrete scoring system of the NPS can obscure improvements in the likelihood-to-recommend (LTR) ratings that are the basis of the NPS. For example, suppose the mean LTR for the first version of a service was 2, and after redesign, it was 5. The improvement is clear, but both means are in the NPS Detractor range of 0 to 6, so there would likely be no difference in the NPS of the two versions. For this reason, it makes sense to track LTR means along with the NPS. The mean LTR is more sensitive to change, while the NPS provides information about extreme responders that is critical to its connection to recommendation behavior, and the same data feed both metrics.
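The scenario above can be made concrete with a short sketch. This is a minimal illustration (not a production implementation) that computes both metrics from the same 0–10 LTR ratings, using the standard NPS buckets (Detractors 0–6, Passives 7–8, Promoters 9–10); the sample ratings are hypothetical values chosen so that both versions fall entirely in the Detractor range.

```python
def nps_and_mean(ltr_ratings):
    """Return (NPS as a percentage, mean LTR) for a list of 0-10 ratings.

    NPS = % Promoters (9-10) minus % Detractors (0-6); Passives (7-8)
    count toward the total but neither bucket.
    """
    n = len(ltr_ratings)
    promoters = sum(1 for r in ltr_ratings if r >= 9)
    detractors = sum(1 for r in ltr_ratings if r <= 6)
    nps = 100 * (promoters - detractors) / n
    mean_ltr = sum(ltr_ratings) / n
    return nps, mean_ltr

# Hypothetical before/after samples, all in the Detractor range:
before = [1, 2, 2, 3, 2]   # mean LTR = 2.0
after  = [4, 5, 5, 6, 5]   # mean LTR = 5.0

print(nps_and_mean(before))  # (-100.0, 2.0)
print(nps_and_mean(after))   # (-100.0, 5.0)
```

Both versions score an NPS of −100 because every rating falls at or below 6, yet the mean LTR moves from 2.0 to 5.0, which is exactly the kind of design improvement that tracking only the NPS would hide.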
Another concern is the distance along the causal chain from leading to lagging measures. Leading measures, such as those associated with detecting and eliminating usability problems, improving task completion rates, and increasing ratings of task ease, are directly affected by UX design activity. Lagging measures, such as the NPS and corporate profit, are often far removed from the work of the UX designer. There is, however, a growing body of evidence that demonstrates, for example, significant statistical connections between prototypical usability metrics and perceived usability, between perceived usability and LTR, and between standardized UX metrics and future purchasing.
Summary and Discussion
People hate the Net Promoter Score for multiple reasons:
- It displaces other efforts or measures.
- It’s used when it seems irrelevant.
- It suffers from bad word of mouth.
- People think it’s a poor measure.
- It’s affected by Campbell’s Law.
- It may be overused.
- When misused, it can cause customer resentment.
- UX designers might not get credit for improved designs.
Rather than hating the NPS, it would be more pragmatic to understand the forces driving these negative attitudes and how to deal with them.
For example, when someone says that the NPS is terrible or worthless, don’t accept the statement uncritically. Is there any data supporting the claim that the NPS is bad, or is the evidence just that a competitive metric might be as good as the NPS? Do the arguments make sense? Can you find anyone offering counterarguments?
If the criticism is that a different metric should be used, ask yourself whether the person making this claim might have an incentive to do so. Are they selling or licensing the competitive metric? Have they made major investments in validating the competitive metric? Are they selling books promoting the competitive metric? Have they made public statements against the NPS that might now be embarrassing to retract?
If recommendation behavior isn’t relevant in a given context (even though the NPS tends to correlate with other more relevant metrics), then it might be reasonable to move away from complete reliance on the NPS. Consider concurrently collecting one or more competitive metrics (e.g., customer satisfaction, UMUX-Lite). Once you have established statistical relationships between the NPS and the competitor, you might stop NPS measurement.
If you think the NPS is a poor measure, ask yourself why. Is it because you read a critical tweet from a prominent UX personality, or is it based on comprehensive knowledge of the body of NPS research? There has been a lot of research conducted on the NPS over the past ten years—much of that in the past five. We at MeasuringU have done plenty of work to understand the measurement properties of the NPS. After all the research we’ve reviewed and conducted, we don’t think the NPS is a poor measure. It isn’t always the best measure of loyalty, but it’s often a good one.
If you’re concerned about the possibility of corruption in an NPS measurement process, keep Campbell’s Law in mind. If you switch from one high-stakes measure to another, you’re unlikely to avoid corruption. Instead, focus on management practices that take measurement out of the hands of those who would benefit from its corruption, and favor objective over subjective business metrics to guide pay, bonus, and promotion decisions.
To detect changes in likelihood-to-recommend (LTR) that may be too subtle to register as a change in NPS, you should also track and report the mean LTR (and your UX designers and researchers will appreciate it). To detect subtle changes in a design that may have a material impact on an experience, you will likely need intermediary measures such as completion rates and perceived ease that are more sensitive (but also impact the NPS indirectly).
Finally, to avoid customer frustration and resentment, avoid overusing the NPS, and put governance processes in place to penalize those who would pressure customers to give only high NPS ratings.