Has the Net Promoter Score Been Discredited in the Academic Literature?

Jeff Sauro, PhD

The Net Promoter Score is ubiquitous and may be a victim of its own success.

According to the Wall Street Journal, in 2018 the NPS was cited more than 150 times in earnings conference calls by 50 S&P 500 companies.

And its usage seems to be increasing.

The NPS, like all measures, has its shortcomings and no shortage of vocal critics.

But as we and others investigated many of the criticisms, we found that some were legitimate, others were overblown (such as the number of points), and some were just wrong and misguided.

But overblown and inaccurate criticisms are the sort of thing you may encounter in polemic blogs and certainly on Twitter.

What about the academic, and usually peer-reviewed, literature?

The same Wall Street Journal article that cited the popularity of the NPS also cited three published studies that “have questioned the whole idea”:

Two 2007 studies analyzing thousands of customer interviews said NPS doesn’t correlate with revenue or predict customer behavior any better than other survey-based metrics.

A 2015 study examining data on 80,000 customers from hundreds of brands said the score doesn’t explain the way people allocate their money. “The science behind NPS is bad,” said Timothy Keiningham, a marketing professor at St. John’s University in New York, and one of the co-authors of the three studies. He said the creators of NPS haven’t provided peer-reviewed research to support their original claims of a strong correlation to growth. “When people change their net promoter score, that has almost no relationship to how they divide their spending.”

Those three studies were co-authored by Timothy Keiningham, a decorated part-time professor and part-time industry professional.

And another distinguished marketing professor, Roland Rust, said that “NPS [is] now widely discredited in the academic literature.”

Or as one vocal critic of the NPS more crudely put it: the NPS has been “debunked in many smart research papers.”

But has it been discredited and debunked?

Reading academic papers is not something people generally like to do. Part of the reason why is it’s not easy. First, you have to find them across disparate journals. Then, if you find them, most papers are behind paywalls, requiring either an academic subscription or a payment of $30+ for a single paper. That’s true even when the research was funded by taxpayers, written for free, and reviewed for free.

And once you get the papers, they are long and often full of confusing statistics and jargon and have been written in the passive voice (see what I did there?).

Fortunately, many authors make their papers freely available online. I encourage you to read for yourself what the published research says. Most papers don’t offer clear conclusions but are nuanced and limited. A lot of papers have been written about the NPS. While I haven’t read them all, I have read many of them. Below I’ve summarized and provided key takeaways from academic articles that are almost all peer reviewed. These aren’t the only academic articles on NPS, but they are some of the most commonly cited and influential ones.

You’ll see that the takeaway from these papers is not that the NPS has been discredited or summarily debunked (like the link between autism and vaccination) but, rather, it has been qualified—with some claims being replicated and others not.

Any summary will lose information, and I’ve called out elements I found most relevant in assessing the claim that the NPS has been discredited. You may disagree, and I encourage you to read the full papers. I’ve had to read these papers multiple times to fully understand the nuances in the methods and metrics reported. Please let me know if you see any discrepancies. Hopefully this (lengthy) summary will help others when researching the Net Promoter Score claims.

Here are the summaries and key takeaways in chronological order, because many later papers build on and reference earlier ones. Within each summary I link to more details and relevant notes I’ve taken on each paper for more context.

Below is a table listing each paper with a key takeaway and a link to its full summary. The article is long (because there is a lot of literature) and is followed by a discussion.

Marsden et al. (2005): Replicated Reichheld’s findings, but correlated the NPS with historical growth
Morgan & Rego (2006): Didn’t use the NPS (but called it NPS); other metrics predicted growth better
Grisaffe (2007): NPS is not the only measure of loyalty; using historical growth can’t establish causation
Keiningham et al. (2007a): NPS correlates with satisfaction; not always the best metric
Keiningham et al. (2007b): NPS correlates with satisfaction; not always the best metric
Pingitore et al. (2007): NPS not always the best metric; requires larger sample sizes
Sharp (2008): Negative commentary on Reichheld and criticism of using historical growth rates with the NPS
Schneider et al. (2008): NPS correlates with historical growth; changing scales didn’t matter much
East et al. (2011): NPS and ACSI highly correlate, but neither accounts for non-customers
Kristensen & Eskildsen (2011): NPS segments didn’t align with intent-to-use insurance segments
van Doorn et al. (2013): NPS predicted future growth, but no differently than other metrics
Pollack & Alexandrov (2013): NPS correlates with stated repurchase intent, but less than a three-item measure
de Haan et al. (2015): NPS correlated with two-year retention, about the same as top-box satisfaction
Keiningham et al. (2015): Transformed satisfaction was a better predictor of SOW, but the authors didn’t test the NPS the same way
Zaki et al. (2016): Dismisses the NPS, but the criticism applies to all attitude measures
Fiserova et al. (2017): More satisfied customers were more likely to become promoters
Fiserova et al. (2018): NPS predicted a large UK retailer’s store sales
Mecredy et al. (2018): NPS correlated with future growth for one New Zealand company
Korneta (2018): NPS correlated with growth in the Polish transportation industry


Advocacy Drives Growth (2005)

Marsden et al. (2005) at the London School of Economics provided the first study to independently corroborate Reichheld’s findings, using sales growth for retail banks, car manufacturers, mobile phone networks, and supermarkets in the UK.

Takeaway: The authors found that the NPS correlated (r = .48) with sales growth data from 2003 and 2004 across the four industries, which replicated Reichheld’s finding of the NPS correlating to growth. However, like Reichheld, they correlated the NPS to historical growth, which is a common and legitimate criticism that we also raised.

The Value of Different Customer Satisfaction and Loyalty Metrics in Predicting Business Performance (2006)

Morgan and Rego’s (2006) paper is one of the most often-cited papers by later authors. It’s also one of the most fatally flawed.

The authors make a strong claim that recommendation intentions (net promoters) and behavior (average number of recommendations) have little or no predictive value. This claim is counter to the influential work by Ajzen and Fishbein, who found that behavioral intentions were a good predictor of future behavior especially when aggregated and focused on specific behaviors.

Morgan and Rego used a comprehensive analysis of ACSI (American Customer Satisfaction Index) data from 80 publicly traded companies during 1994–2000 and examined the impact of firm-level attitudinal metrics on future business metrics.

The problem is in the attitudinal measures they use. While variations on the Likelihood to Recommend item appear in the literature (usually with 4, 5, 10, or 11 points), theirs is substantially different. They used two ACSI items:

  1. Have you discussed your experiences with [brand or company x] with anyone?
  2. Have you formally or informally complained about your experiences with [brand or company x]?

Their “Net Promoter Score” was calculated as the percent discussing minus the percent complaining. Other papers note this shortcoming (e.g., Keiningham, 2007a), and it raises serious questions about their findings. They also created a top-two-box satisfaction score, which they found inferior to the mean of the ACSI. However, their top-two boxes actually used four boxes (responses 7–10 on the ten-point scale). This dilutes the effects from extreme responders.
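For contrast, here is how the conventional NPS is computed from a single 0–10 Likelihood-to-Recommend item, next to the discuss-minus-complain score Morgan and Rego netted instead. A minimal sketch with illustrative data (not from the paper):

```python
def nps(ltr_ratings):
    """Conventional Net Promoter Score from 0-10 Likelihood-to-Recommend
    ratings: % promoters (9-10) minus % detractors (0-6), from -100 to 100."""
    n = len(ltr_ratings)
    promoters = sum(1 for r in ltr_ratings if r >= 9)
    detractors = sum(1 for r in ltr_ratings if r <= 6)
    return 100 * (promoters - detractors) / n

def discuss_minus_complain(discussed, complained):
    """Morgan & Rego's quasi-NPS: percent who discussed the brand minus
    percent who complained (two parallel lists of booleans)."""
    n = len(discussed)
    return 100 * (sum(discussed) - sum(complained)) / n

ratings = [10, 9, 9, 8, 7, 6, 3]
print(round(nps(ratings), 1))  # 3 promoters, 2 detractors of 7 -> 14.3
```

The contrast makes the criticism concrete: the two scores net entirely different inputs (a graded intention versus two yes/no behaviors), so calling the second one “NPS” invites confusion.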

For their quasi-NPS question, none of the correlations were statistically significant (p < .05). The authors in no uncertain terms concluded that there’s no relationship between NPS and business metrics. Their study also found no correlation between recommendation behaviors and the two attitudinal customer-satisfaction metrics. This is surprising given other studies (e.g., East, 2011; Keiningham et al., 2007a) and our analysis.

In a later paper, Morgan and Rego (2008) showed that their version of the NPS correlated very highly with the more familiar version. However, given the authors found very similar correlations with other satisfaction measures, it’s hard to know whether using the LTR item would have also resulted in significant correlations for predicting future growth metrics.

Takeaway: The NPS questions used in this analysis differed substantially from the Likelihood to Recommend item, making the findings questionable. They did show that satisfaction correlated with future growth rates (explaining between 1% and 16% of the variation), and a single-item satisfaction measure correlated very highly (r = .9) with the average ACSI scores (see later papers on single items). This unfortunate result is also a reminder that the peer review process doesn’t catch all problems, even major problems.

Questions About the Ultimate Question (2007)

In 2007, Grisaffe wrote a critical examination of the NPS without providing any additional empirical evidence. But the paper was balanced and not dismissive. Grisaffe raised some important questions about NPS validation, including Reichheld’s use of historical growth rates. However, the author saw the Likelihood to Recommend item as a measure of behavioral intention, and behavioral intention tends to predict behavior. It’s one measure of word of mouth (WOM), but not necessarily the only measure. He argued that one number is helpful, like a temperature reading, but not enough to diagnose a problem. Finally, he also asked whether NPS is an outcome (a measure of loyalty) or a cause of loyalty, something later authors question (e.g., Pollack & Alexandrov, 2013) and address (e.g., Fiserova et al., 2018).

Takeaway: In this critical essay of the NPS (no empirical data), the author argues that Likelihood to Recommend is a valid measure of loyalty, but it’s not the only measure, and it may be a consequence and not cause of loyalty. Using historical data to link growth rates doesn’t help establish a link between intent to recommend and sales.

The Value of Different Customer Satisfaction and Loyalty Metrics in Predicting Customer Retention, Recommendations, and Share-of-Wallet (2007)

Keiningham and colleagues published three articles between 2007 and 2008. This is the first of the two 2007 articles cited in the 2019 WSJ article. In Keiningham et al. (2007a), the authors reviewed several metrics in retail banking, mass-merchant retail, and Internet Service Providers (ISPs) across two years.

The authors concluded that recommend intention does provide insight into future recommend behavior, but not as a sole indicator. They based this conclusion on the NPS (and all measures) having modest correlations, on recommend intention not always being the best predictor of their loyalty metrics (retention, share of wallet, and recommendations), and on how multiple measures are better at predicting retention and recommending behavior.

They were critical of using a single item for measuring growth and showed that there is a small statistical increase in predictive ability when using multiple items (R2 increases from .1% to 1.6%). Perhaps unintentionally, they may have given credence to Reichheld’s claim that multiple measures added “insignificant predictive advantage.” The authors in a later paper advocate for a single-item measure of satisfaction (converted to relative ranking); see Keiningham et al. (2015) below.

Takeaway: The authors found that a five-point version of the NPS was a nominally (but not statistically) better predictor than satisfaction of future recommendations, intent to purchase, and retention. The NPS was not always the best predictor of their loyalty metrics (retention, share of wallet, and recommendations), and multiple measures were slightly (but not statistically) better at predicting the retain and recommend variables.

A Longitudinal Examination of Net Promoter and Firm Revenue Growth (2007)

Keiningham and colleagues’ second paper (2007b) received the Marketing Science Institute/H. Paul Root Award.

In the study, the authors used a ten-point version of the NPS with the endpoints labeled “very high probability/very low probability.”

Using data from 17 Norwegian firms and reconstructing the NPS vs. ACSI data from Reichheld, the authors showed that the NPS isn’t always the best predictor of future growth metrics (e.g., for revenue growth). The NPS was the best or second-best predictor in two of five industries they studied. However, given the small sample sizes within each industry (between three and five companies), even large differences in correlations were not statistically different.

Note: Another version of the study was published under the title “A Holistic Examination of Net Promoter” (Keiningham et al., 2008) in a different journal with the same data but slightly different prose, so be sure not to treat these as independent sources.

Takeaway: Their analysis showed that the NPS correlates highly with other metrics, most notably customer satisfaction, and sometimes it’s the best predictor, but not always. This does contradict Reichheld’s assertion that satisfaction doesn’t correlate with growth, but it is similar to our analysis and others’. It showed that satisfaction and the NPS provide comparable predictive abilities. This paper is a good example of what other papers generally show: the NPS isn’t always clearly the best predictor of growth metrics, but even when it’s not, it’s a close contender. I’d call this a qualification of the claim rather than a discreditation.

The Single-Question Trap (2007)

Morgan and Rego are back in Pingitore et al. with a 2007 paper that acknowledged the merits of asking Likelihood to Recommend as a measure. They warned that businesses may mistakenly think that it is the ONLY measure that predicts financial performance.

The authors examined JD Power data from 2005–2006 in multiple industries: auto insurers (24 companies), full-service investment (21 firms), airlines (10 carriers), and rental car companies (9 brands). They used a mix of metrics (e.g., delighted, satisfaction), including 4-point and 11-point versions of the NPS. They correlated these to a mix of business metrics (retention rates, customer acquisition costs, change in revenue, and self-reported share of wallet).

Note that Gina Pingitore was the chief research officer at JD Power (a firm with a competing interest to Satmetrix and the NPS); see Discussion and Takeaway below.

The authors cautioned about the much larger sample sizes needed when using “net” scoring (something we discussed earlier too). However, it could be that what is lost in not using the mean is worth shedding, and the comparable performance here suggests not much is lost.
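The sample-size concern can be made concrete with the standard error of a net score. Because the NPS is a difference of two proportions from the same sample, it has more sampling variance than either proportion alone. A minimal sketch using the standard multinomial variance formula (the proportions and sample size are illustrative, not from the paper):

```python
import math

def nps_standard_error(p_promoters, p_detractors, n):
    """Standard error of NPS = p_promoters - p_detractors for a sample
    of n respondents, treating responses as one multinomial draw over
    promoter/passive/detractor categories."""
    nps = p_promoters - p_detractors
    variance = (p_promoters + p_detractors - nps**2) / n
    return math.sqrt(variance)

# Example: 40% promoters, 20% detractors, n = 200
se = nps_standard_error(0.40, 0.20, 200)
print(round(2 * se * 100, 1))  # rough 95% margin of error, in NPS points
```

Even at n = 200, the rough margin of error is over ±10 NPS points, which is why detecting modest NPS changes requires notably larger samples than tracking a mean does.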

Takeaways: Their 4-point and 11-point versions of the NPS correlated with historical business metrics but they weren’t always the best. In some cases, the NPS was the best; in some cases, it was the worst; and in some cases, it performed better than a multi-item measure. Using only four points for one NPS item may have confounded their findings.

Net Promoter Score Fails the Test (2008)

Sharp (2008) wrote an essay critical of not only the Net Promoter Score but also of earlier work by Reichheld on customer defection, including the maxim that it costs five times more to attract than retain a customer. He called Reichheld’s claims about defection rates shockingly misleading.

Sharp then laid into Reichheld’s peddling of the myth of the NPS. He pointed out (as we, Grisaffe, and others did) the problem with using historical growth rates. He then called Reichheld’s work snake oil and fake science. This work was not peer-reviewed and it’s more of a commentary (or polemic). See possible conflicts below. This paper was cited by Keiningham et al. (2015) as part of the “large body of evidence” refuting the NPS.

Takeaway: Sharp’s very critical essay of Reichheld raised the concern that historical growth rates can’t prove that the NPS is a leading indicator of growth.

Measuring Customer Satisfaction and Loyalty: Improving the “Net-Promoter” Score (2008)

Famed Stanford survey researcher Jon Krosnick, along with others, examined the issue with the number of scale points of the NPS. It is cited in the Wikipedia entry for the NPS.

They described two studies in which they manipulated scale points and labels to see what would best predict self-reported recommendations (collected in the same study). In study 1, they learned from 2,226 US panel participants that the original 11-point scale and a 7-point partially labeled scale with “neutral” points predicted the historical reported recommendations rate better than a fully labeled scale. Interestingly, they also found “liking” and satisfaction to be better predictors of past recommendations, but liking was generally better for noncustomers.

The authors found a correlation between the NPS and historical growth rates (similar to Marsden, 2005, and Keiningham et al., 2007a) in both studies. They showed that changes in the number of scale points (5, 7, or 11), cut-points for detractors, and labels (fully vs. end labeled) can affect the correlations to growth metrics (external validity) for detractors, but often the differences were modest. Like Grisaffe (2007) and Pollack and Alexandrov (2013), they speculated that different measures, such as likelihood of recommendation, satisfaction, and liking, are interrelated and might be acting within causal chains.

Takeaway: Similar to other studies, they found that NPS and multiple metrics correlated with historical growth rates, and changing the number of scale points and labels improved correlations, but not by much.

The NPS and the ACSI: A Critique and an Alternative Metric (2011)

East et al. (2011) compared the ACSI and NPS and argued that both have shortcomings, as neither includes ex-users and never users of the brand. This is more of a criticism of whom to survey instead of what to ask people. This is an extension of an earlier study (East, 2008), which made similar arguments and used overlapping data.

East and colleagues also took exception to Reichheld’s claim that “detractors are responsible for 80% to 90% of a company’s negative word of mouth.” In our analysis of negative comments, we did find that the bulk of negative comments from current users came from detractors.

To show that more negative word of mouth comes from noncustomers or never customers, East and colleagues drew a convenience sample, collecting data from 2,254 participants across ten studies in the UK, and asked participants to recall the number of times they gave positive “advice” or received negative advice for a number of industries, including supermarkets, coffee shops, and skin care products. Participants also noted whether they were current, former, or never users of the brand.

Takeaway: The authors showed that the NPS and ACSI are highly correlated (r = .99, .74, and .92) but take exception to the NPS’s accounting of negative comments. They did corroborate Reichheld’s finding that promoters accounted for the largest share of positive comments, but they found that most negative comments about a brand came from former customers or never customers. In short, their criticism of the NPS was applied to the ACSI and would apply to any multi-item measure that asks only current customers. So their main criticism here is a suggestion to sample more than current customers to better gauge negative WOM.

Is the Net Promoter Score a Reliable Performance Measure? (2011)

This paper by Kristensen and Eskildsen (2011) examined the relationship between the NPS and customer retention in the insurance industry in Denmark.

The authors didn’t look at revenue or sales growth, but rather correlated the 11-point NPS to 3,400 respondents’ stated intentions to continue using their current insurance provider, with 1 = No, Absolutely Not to 10 = Yes, Absolutely.

While there is a strong correlation between the NPS and stated intent, the authors found that their clusters didn’t align with Reichheld’s designations of promoters (9–10s), passives (7–8s), or detractors (0–6s). They found three clusters: 0–4 for detractors, 5–7 for passives, and 8–10 for promoters. Curiously, they used this as proof that NPS “fails the validity test.” The authors conclude dismissively, “the best we can say about NPS is that it is a mistake! We hope that companies will realize this before too much harm is done.”
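The disagreement is only over where to draw the cut points on the same 0–10 scale, which can be expressed as two classification rules. A minimal sketch (the cut-point pairs encode Reichheld’s segments and the clusters this paper found):

```python
def classify(rating, cuts=(6, 8)):
    """Classify a 0-10 rating as detractor/passive/promoter.
    Default cuts are Reichheld's (0-6 / 7-8 / 9-10); pass cuts=(4, 7)
    for the Kristensen & Eskildsen clusters (0-4 / 5-7 / 8-10)."""
    low, mid = cuts
    if rating <= low:
        return "detractor"
    if rating <= mid:
        return "passive"
    return "promoter"

print(classify(8))               # 'passive' under Reichheld's cuts
print(classify(8, cuts=(4, 7)))  # 'promoter' under the paper's clusters
```

A rating of 8 flips from passive to promoter depending on the rule, so the two schemes will produce different Net Promoter Scores from identical data; whether that amounts to a failed “validity test” or just different segmentation is the judgment call at issue.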

Interestingly, they claimed that one shortcoming of the NPS is its 11 points, and that a “10-point scale is by far the most efficient” but provided no citation to back that claim (see our analysis of 10- vs. 11-point scales).

The authors expressed concern about transforming 11 points into 3, which increases the margin of error and reduces the signal in the noise (see also Pingitore et al., 2007). While true (as we’ve reported), the loss might be worth it, as others have shown that top-box measures are often superior to measures that use the mean.

Takeaway: The authors found different clusters for promoters, passives, and detractors when correlating the NPS with stated intent to continue as an insurance customer. Even though this deviates from Reichheld’s correlation with revenue growth, they dismissed the NPS as a mistake.

Satisfaction as a Predictor of Future Performance: A Replication (2013)

Van Doorn et al. (2013) examined 46 companies’ data from 2008 in the Dutch banking, insurance, utilities, and telecom industries, totaling 11,967 respondents. They used the standard NPS item (0 to 10 scale), a single five-point satisfaction item, top-box satisfaction, a five-item satisfaction index, and a four-item loyalty intention measure to predict future (2008–2010) performance.

They also included an alternative NPS that used 8–10 for promoters and 0–5 for detractors, because “Dutch respondents may give lower evaluations than US respondents (an 8 is already a high grade in the Netherlands).”

They found that the 2008 NPS correlated with future sales growth (r = .46) and sales margins (r = .22) but not with cash flow. All other measures (except the four-item loyalty intention measure) had similar correlations (e.g., r = .42 to r = .493 for sales growth).

Interestingly, their top-box satisfaction item performed as well (r = .493) as their five-item satisfaction item (r = .481), lending credence to both top-box scoring and the adequacy of single items.

They found that all customer metrics (except for loyalty intentions) were significantly related to single-year sales growth in 2008 and two-year sales growth in 2008–2009.

Takeaway: They found that the NPS predicted future growth metrics (corroborating our findings and Reichheld’s claim), but both versions of the NPS weren’t statistically different than other metrics. They concluded “there is no single best metric.”

Nomological Validity of the Net Promoter Index Question (2013)

Pollack and Alexandrov (2013) investigated the nomological validity of the NPS, which is the extent to which a measurement instrument correlates in theoretically predictable ways with measures of different but related constructs.

The authors examined the NPS first at the customer level and saw customer satisfaction as an antecedent, repurchase intentions as a consequence, and word of mouth as a correlate.

The authors conducted a survey of 159 respondents at a Midwest university for attitudes toward banking, cell phone services, and hairdressers/barbers. The authors asked the 11-point LTR, a three-item WOM (with seven-point scales), three purchase-intent scales, and three satisfaction scales. They found strong correlations between customer satisfaction and repurchase intention and the NPS.

Note that the authors in this paper used a relatively small sample size of students and did not correlate the attitudinal data to actual purchases. Instead they used only stated purchase intent collected from the same respondents in the same survey, likely inflating the correlations. They found that the three-item measure of WOM was a better predictor of purchase intentions, but not by much.

Takeaway: The authors showed that the NPS correlates (r = .88) with stated repurchase intentions (not actual purchases) but isn’t as strong a predictor as a three-item measure of word-of-mouth (r = .95).

The Predictive Ability of Different Customer Feedback Metrics for Retention (2015)

De Haan and colleagues argued that studies critical of metrics like the NPS are hard to interpret because they use different dependent variables, research settings, methodologies, units of analysis, and so on.

The authors analyzed data from 6,649 respondents, who filled out 8,924 firm evaluations for 93 Dutch firms across 18 industries. Respondents answered several CFM (customer feedback metric) questions, including a single seven-point satisfaction item, the 11-point NPS item and the single five-point Customer Effort Score (CES).

Roughly 15% of respondents (1,308) answered a follow-up survey two years later and indicated which firms they were still doing business with. Approximately 27% of customers no longer did business—“churned”—after the two-year period.

They found that top-two-box satisfaction had the highest correlation with two-year retention (r = .184), which was very similar to the conventionally reported NPS (r = .170). The length of the relationship was the best predictor (r = .199), meaning longer relationships led to more retention. CES had a negative correlation (r = -.073) and was the only measure the authors did not recommend using.

They concluded that there is no single best metric to predict customer retention across industries, and in contrast to Keiningham et al. (2007a), monitoring NPS does not seem to be wrong in most industries.

Takeaway: The authors found that the NPS correlated modestly (r = .17) with two-year retention, slightly lower than a single top-two-box satisfaction item (r = .18). Both NPS and satisfaction correlated highly with each other across 19 industries (r = .97). Contrary to Pingitore et al. (2007), they argued that using a top-box scoring approach offered superior results over using the mean, and a single-item measure was adequate, although combining measures like satisfaction and NPS may offer some predictive benefit.

Perceptions Are Relative: An Examination of the Relationship Between Relative Satisfaction Metrics and Share of Wallet (2015)

Keiningham and colleagues are back in 2015 with another paper. The authors discussed the importance of using a relative satisfaction measure to predict a measure called Share of Wallet (SOW). SOW is how people allocate their spending within a category, say, between two or three grocery stores. This paper only tangentially references the NPS, but because it was cited by the WSJ, I’ve included it.

The authors conducted a longitudinal study across six months with around 80,000 customers in 15 countries. The analysis, however, included only the participants who answered the follow-up questions six months later, reducing the final count to 1,138 US customers answering satisfaction questions about grocery stores, airlines, drugstores, and several other categories.

Respondents used a ten-point satisfaction scale (1 = completely dissatisfied and 10 = completely satisfied), and raw satisfaction scores were converted to ranks. For example, if a respondent rated Brand A 7 on satisfaction and Brand B 5, the ratings become 1 and 2. The authors here use a single-item measure and argue that satisfaction is “concrete and singular in nature.”

The authors used several transformations of the change in satisfaction rankings to predict the change in SOW. Their best-fitting transformation used a Zipf transformation and had a correlation of r = .403. The raw relative rank in satisfaction without the transformation had an r = .285. The raw satisfaction score had a low correlation (r = .066) with change in SOW as did recommend intention (r = .065) and the Net Promoter Score (r = .067).
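The paper’s exact transformation isn’t reproduced here, but the basic idea of converting a respondent’s raw satisfaction scores into within-respondent relative ranks, then applying a Zipf-style (inverse-rank) weight, can be sketched as follows. The 1/rank weighting and tie handling are illustrative assumptions, not the authors’ formula:

```python
def satisfaction_to_ranks(scores):
    """Convert one respondent's raw satisfaction scores per brand into
    relative ranks (1 = most satisfied). Tied scores share a rank here
    for simplicity; the paper's tie handling may differ."""
    ordered = sorted(set(scores.values()), reverse=True)
    return {brand: ordered.index(s) + 1 for brand, s in scores.items()}

def zipf_weight(rank):
    # Illustrative Zipf-style weight: inversely proportional to rank.
    return 1 / rank

scores = {"Brand A": 7, "Brand B": 5, "Brand C": 9}
ranks = satisfaction_to_ranks(scores)
print(ranks)  # {'Brand A': 2, 'Brand B': 3, 'Brand C': 1}
```

The key point survives the simplification: ranking discards the absolute level of satisfaction (a 7 and a 9 become simply second and first), which is why a raw score and its rank-transformed version can correlate so differently with share of wallet.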

The authors did not report how they collected the NPS or recommend intention; however, a clarification was provided by Bruce Cooil (personal communication, May 17, 2019) who stated that the NPS was computed using the Reichheld three-point classification of promoter, passive, and detractor, and it correlated highly with the other metrics. The authors firmly concluded that “changes in a customer’s Net Promoter classification similarly has almost no correlation to changes in share of wallet.”

However, they did not apply the same transformations to the NPS that they used on satisfaction, so the conclusion doesn’t follow from the data. That is, the correlation of raw satisfaction with the change in share of wallet was virtually identical to that of the Net Promoter Score (r = .066 vs. r = .067). Given the high correlation shown in this paper and other studies, it’s likely that a transformed NPS would provide results comparable to transformed satisfaction. This was something the reviewers should have caught.

There’s no doubt that Tim Keiningham is not a fan of the NPS (see potential conflicts below), and that comes through in this paper. The authors claimed that there is a wide body of scientific evidence casting doubt on the NPS. That wide body includes their earlier paper (Keiningham et al., 2007b), Morgan and Rego (2006) (which the authors themselves had earlier discounted), and the opinion piece by Sharp (2008). They also advocated using a single-item measure of satisfaction (transformed), whereas earlier they claimed multi-item measures are superior (as others have claimed).

Takeaway: The authors firmly concluded (as cited in the WSJ article) that “changes in a customer’s Net Promoter classification similarly has almost no correlation to changes in share of wallet.” However, the authors did not see whether transforming NPS as they did with satisfaction would have a higher correlation in predicting changes in SOW. This limits the generalizability of their (negative) claims about the NPS from this study.

The Fallacy of the Net Promoter Score (2016)

Zaki et al.’s working paper in 2016 with the provocative title examined a longitudinal customer dataset from 2012–2015, which included attitudinal data (NPS ratings and qualitative customer comments), behavioral data (transactional data), and demographic data across multiple touchpoints from 3,000 customers. The data came from a large, international B2B asset-heavy service organization providing both products and services.

Loyal customers were classified as those who are currently spending frequently (which deviates from other definitions of loyal customers).

The authors didn’t compare the NPS to other measures they collected, such as satisfaction. Rather, the authors showed the value in analyzing verbatim comments to understand why people might discontinue service or reduce their spending.

Takeaway: Using data from one large B2B service organization, the authors showed that verbatim comments collected after the NPS item can help identify possible churning customers better than the NPS alone. The authors neither compared the NPS to other attitudinal metrics such as satisfaction nor correlated it with company growth. The article maybe should be titled “The Fallacy of Relying Only on Self-Reported Metrics and How to Analyze Verbatim Responses.”

What Drives Customer Propensity to Recommend a Brand? (2017)

Fiserova et al. in 2017 examined data from DFS, a leader in the UK retail upholstery market. They examined NPS data from 2,773 respondents and monthly store sales to understand the key drivers of NPS. While the authors helped shed light on the key drivers of promoters, they did not attempt to link promoting behavior to sales growth.

Takeaway: The authors found that the more satisfied the customers were, the more likely they were to become a promoter (again showing a hierarchy between satisfaction and loyalty).

The Effect of the Net Promoter Score on Sales: A Study of a Retail Firm Using Customer and Store-Level Data (2018)

In a follow-up, Fiserova et al. (2018), using the same data as the 2017 study (meaning this is not an independent result), again examined DFS, the leader in the UK living room furniture market. To see whether the NPS had an effect on future sales, they analyzed NPS survey data at three time points (postpurchase, postdelivery, and six months), investigating 728 observations across 96 stores over a four-year period in the UK (August 2011–July 2015). Note that this paper received a best paper award.

Takeaway: The authors found that one UK store’s NPS had a statistically significant (non-linear) effect on store sales from five to nine months after purchase. A one-point increase in the NPS across all UK stores corresponded to approximately a 0.5% increase in the company’s annual revenue (a bigger impact than opening a new store).

Are Promoters Valuable Customers? (2018)

Mecredy et al. (2018) examined the sales of a B2B company (near monopoly status) in New Zealand. They examined the NPS data from 2,785 customers over five years and connected these to actual purchase records. They looked to see whether the NPS was a leading or lagging indicator of growth (a point raised by Grisaffe among others).

Takeaway: The company’s NPS correlated with past (r = .85), current (r = .70), and future (r = .91) revenue when looking at all customers. Promoters spent on average 13% more than passives and 46% more than detractors, but growth was found to come from expanding sales to all customers, not just promoters. The authors concluded that even if the NPS is no better a predictor than other customer satisfaction measures, the literature suggests it may still be superior from a practical perspective if it is easier to apply.

Net Promoter Score, Growth, and Profitability of Transportation Companies (2018)

Korneta (2018) examined concurrently collected NPS and financial data from 34 Polish transportation companies between 2014 and 2016, yielding NPS scores from 76 observations.

Takeaway: The author found that the NPS had modest but statistically significant correlations (r = .25 to .36) with concurrently collected business metrics in the Polish transportation industry.

Discussion and Takeaways

Across 22 academic papers, we can draw a few conclusions:

The Net Promoter Score is not always the “best” metric. Across multiple studies, different researchers provided examples of where the NPS was not the best predictor of business revenue. However, in the cases when the NPS was not the best predictor, it was close. Maybe this isn’t much of a surprise given that Reichheld’s original article noted it wasn’t always the best metric (the best or second best in 11 of 14 industries and not in monopolies or DB software). It’s safe to say that the NPS will not always be the best metric, but it will be one of the best.

Multiple Versions of the LTR cloud the issue. Several studies used different versions of the Likelihood to Recommend item and scored the NPS differently. The number of points fluctuated between 4 and 11 and the scale labels changed. The most egregious was the two-item version from Morgan and Rego (2006). Pingitore used a four-point version and Keiningham used a five-point version. Given our earlier research, it’s likely that most of these will correlate highly with the commonly used 11-point LTR; however, often the differences between the best and second-best predictors in some studies were small, and these differences in scale construction confound the findings.
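As context for these scoring differences, the standard NPS computation on the commonly used 11-point (0–10) Likelihood to Recommend item can be sketched as follows (a minimal illustration; the function and variable names are mine, not from any of the studies reviewed):

```python
def nps(ratings):
    """Compute the Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Standard classification: promoters rate 9-10, passives 7-8, detractors 0-6.
    NPS = %promoters - %detractors, so the score ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 4 promoters, 3 passives, 3 detractors out of 10 respondents -> NPS = 10
print(nps([10, 10, 9, 9, 8, 7, 7, 6, 5, 2]))  # 10.0
```

When a study uses a 4- or 5-point item instead, these cut points no longer apply directly, which is exactly why differences in scale construction confound cross-study comparisons.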

Are the headlines a big part of the problem? Headlines grab clicks and readership. This is certainly the case for the Harvard Business Review, which has featured rather bold headlines over the years for the Net Promoter Score (“The One Number You Need to Grow”) and for other measures such as the CES (“Stop Trying to Delight Your Customers”). But this also extends to the criticisms of the NPS from other sources (“CEOs Embrace a Dubious Metric” and “NPS Considered Harmful”). Reading beyond the headlines of course reveals more nuance (usually). In the original NPS article, while the headline screams “BEST METRIC EVER ALWAYS,” the prose offers more qualified findings (“best or second best predictor in 11 of 14 industries”). Although Reichheld and Bain have clearly sold this metric as superior, and while that can be the case in some (and even many) industries, it isn’t always the case (as they acknowledged in the original paper). I encourage you to read beyond the headlines—you’ll find most papers are much more nuanced.

Single items are likely adequate. Some authors have criticized the NPS as being inadequate because it is a single item, and multiple items provide a more reliable picture. In general, more items will be more reliable; there’s little debate on that. But that’s not the point: the question is whether what is gained is worth it. Is the additional reliability (and validity) gained from additional items worth the cost in respondents’ time and executives’ comprehension? Even one of the most prominent critics of the NPS, Tim Keiningham, seems to have changed his mind on this. His 2007 paper made the argument for a multi-item measure, while his 2015 paper showed a single-item measure of satisfaction was adequate (and possibly even superior) after transformation.

What is easy gets used. A major tenet of the User Experience profession is to reduce the burden on users. Simpler interfaces get used more. The Net Promoter Score is simple and easy to understand. It is itself an example of the Technology Acceptance Model: people will adopt things (including measures) that meet their needs and are easy to use. There is legitimate concern about what the Net Promoter Score can and can’t do. It certainly can’t tell you what to fix (but neither can other similar measures), for example, but there is reasonable evidence that it can provide a glimpse into the future of growth (in many but not all industries). It’s not necessarily better than satisfaction or other measures, but it’s not necessarily worse. Simplicity often wins, though. This was a point raised by Mecredy et al. (2018): “Even if NPS is no better a predictor than other customer satisfaction measures, literature suggests it may still be superior from a practical perspective if it is easier to apply.”

NPS has similar predictive abilities to satisfaction. One of the boldest but least supported claims we found from reviewing the literature was that satisfaction did not correlate with growth. Several studies, including our own, have shown correlations between several measures of satisfaction and different growth metrics. Satisfaction isn’t always the best predictor (it wasn’t in our analysis), but sometimes it is, and it is almost always highly correlated with the NPS. I suspect that the original claims of the inadequacy of satisfaction measures (throwing the ACSI and several researchers who supported it under the bus) created the antipathy that continues to this day.

NPS has similar predictive abilities to the ACSI. The study by East (2011), our own analysis, and the study by Keiningham et al. (2007a) showed that the ACSI and NPS correlate highly with each other. This not only suggests that they may offer comparable predictive ability but also means there’s little for a company to gain by switching from ACSI to NPS or from NPS to ACSI.

Likelihood to Recommend is an important (but not the only) loyalty measure. Even critics of the Net Promoter Score agree that people’s intent to recommend and their actual recommend behavior are important things to measure to understand loyalty (and word of mouth). The critics take issue with the NPS being the only measure you collect and the claim that it is always superior to other measures, notably satisfaction and intent to purchase. Our review shows the NPS is comparable to other measures. Keiningham et al. (2008) said, “Recommend intention is not, by any means, a useless metric or remotely a poor one. In fact, it is an extremely useful tool… we only question that it is the ‘only’ metric of true value.”

Competing theories and books on loyalty. There are many competing theories on how to define loyalty and multiple books that purport to describe a better measure. In the same presentation [pdf] in which Roland Rust said the NPS was discredited, he also said Reichheld was trying to sell the NPS. And that’s absolutely true. In fact, you could say Reichheld has oversold his work on the NPS; it’s become a huge success. But titles like “The Ultimate Question” and “The One Number You Need to Grow” make bold claims, and bold claims require strong evidence. The use of these bold claims and a dearth of third-party (peer-reviewed) evidence have no doubt contributed to the inevitable backlash from other academics and marketers who prefer their own theories. That success has crowded out other prominent marketing theories, such as the work by Rust and by Keiningham and colleagues.

Critics may have conflicting interests. While the merits of articles should stand on their own, it’s important to know that some of the authors are not dispassionate observers. Just as we would expect studies conducted by Bain or Reichheld to support their claims, similar conflicts may come from critics too. For example, Tim Keiningham has advocated for another measure, the Wallet Allocation Rule, which itself has appeared in the Harvard Business Review; he has also served as the global chief strategy officer at IPSOS Loyalty, a competitor to Satmetrix. Gina Pingitore was the head of research at JD Power, another competitor of Satmetrix. And Byron Sharp has his own ideas about how to measure loyalty (and his share of critics). This is similar to findings in keyboard research, where results from studies of nonstandard keyboard designs were better when conducted by the inventor than by an independent researcher.

Likely a causal chain from satisfied to loyal. There is some compelling evidence that, while satisfied customers might not recommend, customers who recommend are most likely satisfied with the company. This is supported directly by Fiserova et al. (2017), who found that more satisfied customers were more likely to become promoters. If possible, measure both satisfaction and intent behaviors such as intent to recommend and intent to repurchase.

Most qualify rather than discredit. Multiple studies have brought the lofty NPS claims down to earth by showing that the NPS often correlates with satisfaction (often in a causal chain), and both can be good (or poor) predictors of future growth at the firm or industry level. If the studies showed consistently poor or no predictive ability compared to satisfaction, I’d say it was discredited. Instead I see the NPS as qualified (not disqualified). The main claim that was discredited was that there was no correlation between satisfaction and future growth, but that doesn’t discredit the entire measure.

The references below link to article summaries and, where possible, to the published papers.


de Haan, E., Verhoef, P. C., & Wiesel, T. (2015). The predictive ability of different customer feedback metrics for retention. International Journal of Research in Marketing, 32(2), 195–206

East, R. (2008). Measurement deficiencies in the Net Promoter Score. Sydney, Australia: ANZMAC.

East, R., Romaniuk, J., & Lomax, W. (2011). The NPS and the ACSI: A critique and an alternative metric. International Journal of Market Research, 53, 327.

Fiserova, J., Pugh, G. T., Stephenson, A., & Dimos, C. (2017). What drives customer propensity to recommend a brand? British Academy of Management Annual Conference, University of Warwick, UK.

Fiserova, J., Pugh, G. T., Stephenson, A., & Dimos, C. (2018). The effect of the Net Promoter Score on sales: A study of a retail firm using customer and store-level data. British Academy of Management Annual Conference, Bristol Business School, UK.

Grisaffe, Douglas B. (2007). Questions about the ultimate question: Conceptual considerations in evaluating Reichheld’s Net Promoter Score (NPS). Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 20, 36–53.

Keiningham, T. L., Cooil, B., Aksoy, L., Andreassen, T. W., & Weiner, J. (2007a). The value of different customer satisfaction and loyalty metrics in predicting customer retention, recommendation, and share-of-wallet. Managing Service Quality, 17(4), 361–384.

Keiningham, T.L., Cooil, B., Andreasson, T. W., & Aksoy, L. (2007b). A longitudinal examination of “Net Promoter” and firm revenue growth. Journal of Marketing, 71(3), 39–51.

Keiningham, T., Aksoy, L., Cooil, B., Andreassen, T. W., & Williams, L. (2008). A holistic examination of Net Promoter. Journal of Database Marketing & Customer Strategy Management, 15, 79–90.

Keiningham, T. L., Cooil, B., Malthouse, E. C., Buoye, A., Aksoy, L., De Keyser, A., & Larivière, B. (2015). Perceptions are relative: An examination of the relationship between relative satisfaction metrics and share of wallet. Journal of Service Management, 26(1), 2–43.

Korneta, P. (2018). Net promoter score, growth, and profitability of transportation companies. International Journal of Management and Economics, 54(2), 136–148.

Kristensen, K., & Eskildsen, J. (2011). Is the Net Promoter Score a reliable performance measure? 2011 IEEE International Conference on Quality and Reliability, Bangkok, Thailand, 249–253.

Marsden, P., Samson, A., & Upton, N. (2005). Advocacy drives growth. Brand Strategy, 198, 45–47.

Mecredy, P., Wright, M. J., & Feetham, P. (2018). Are promoters valuable customers? An application of the net promoter scale to predict future customer spend. Australasian Marketing Journal, 26(1), 3–9.

Morgan, N. A., & Rego, L. L. (2006). The value of different customer satisfaction and loyalty metrics in predicting business performance. Marketing Science, 25(5), 426–439.

Morgan, N. A., & Rego, L. L. (2008). Rejoinder: Can behavioral WOM measures provide insight into the Net Promoter concept of customer loyalty? Marketing Science, 27, 533–534. doi.org/10.1287/mksc.1080.0375.

Pingitore, G., Morgan, N. A., Rego, L. L., Gigliotti, A., & Meyers, J. (2007). The single-question trap. Marketing Research, 19(2), 9–13.

Pollack, B., & Alexandrov, A. (2013). Nomological Validity of the Net Promoter Index question. Journal of Services Marketing, 27, 118–129. doi.org/10.1108/08876041311309243.

Schneider, D., Berent, M., Thomas, R., & Krosnick, J. (2008, June). Measuring customer satisfaction and loyalty: Improving the “Net-Promoter” Score. Annual Conference of the World Association for Public Opinion Research (WAPOR), Berlin, Germany.

Sharp, B. (2008). Net promoter score fails the test. Marketing Research, 20(4), 28–30.

van Doorn, J., Leeflang, P. S. H., & Tijs, M. (2013). Satisfaction as a predictor of future performance: A replication. International Journal of Research in Marketing, 30(3), 314–318.

Zaki, M., Kandeil, D. A., Neely, A., & McColl-Kennedy, J. R. (2016). The fallacy of the Net Promoter Score: Customer loyalty predictive model. University of Cambridge.


Thanks to Jim Lewis for providing comments on this article.
