For quantifying the user experience of a product, app, or service, we recommend using a mix of study-level and task-based UX metrics. In an earlier article, we provided a comprehensive guide to task-based metrics. Tasks can be included as part of usability tests or UX benchmark studies. They involve having a representative set of users
The Net Promoter Score is ubiquitous, with many large organizations using it as a key metric. But despite its widespread adoption, there are vocal critics. It’s been called snake oil, deceptive, fake science, and harmful. In our webinar series and on our website, we’ve addressed several aspects of the NPS, including the enmity toward it.
In a famous Harvard Business Review article published in 2003, Fred Reichheld introduced the Net Promoter Score (NPS). The NPS uses a single likelihood-to-recommend (LTR) question (“How likely is it that you would recommend our company to a friend or colleague?”) with 11 scale steps from 0 (Not at all likely) to 10 (Extremely likely).
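The scoring convention behind the LTR question is well established: respondents rating 9 or 10 are promoters, 7 or 8 are passives, and 0 through 6 are detractors, and the NPS is the percentage of promoters minus the percentage of detractors. As a minimal sketch (the function name and structure are ours, not from Reichheld's article), that calculation might look like:

```python
def net_promoter_score(ratings):
    """Return the NPS (as a percentage) for a list of 0-10 LTR ratings.

    Promoters rate 9-10, detractors rate 0-6; passives (7-8) count
    toward the sample size but neither add to nor subtract from the score.
    """
    if not ratings:
        raise ValueError("ratings must be non-empty")
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Example: 5 promoters, 3 passives, and 2 detractors out of 10
# respondents yield an NPS of (50% - 20%) = 30.
print(net_promoter_score([10, 10, 9, 9, 9, 8, 7, 7, 6, 0]))
```

Because passives drop out of the numerator but not the denominator, two samples with the same mean LTR rating can produce different Net Promoter Scores.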
We recently described how to compare two Net Promoter Scores (NPS) statistically using a new method based on adjusted-Wald proportions. In addition to comparing two NPS, researchers sometimes need to compare one NPS with a benchmark. For example, suppose you have data that the average NPS in your industry is 17.5%, and you want to
As with all research methods, many things can go wrong in surveys, from problems with sampling to mistakes in analysis. To draw valid conclusions from your survey, you need accurate responses. But participants may provide inaccurate information: they could forget the answers to questions or simply answer them incorrectly. One common reason respondents answer survey
Five-star reviews. Whether you’re rating a product on Amazon, a dining experience on Yelp, or a mobile app in the App or Play Store, the five-star rating system is ubiquitous. Does the familiarity of stars make for a better rating system than traditional numbered scales? We recently reported a comparison between standard
There’s no shortage of opinions about the Net Promoter Score. There’s nothing wrong with an opinion. It’s just better when there’s data backing it. Unfortunately, when it comes to opinions about the Net Promoter Score, especially how the underlying question is displayed, the opinions are often based on anecdotes and out-of-context “best practices.” At MeasuringU,
Numbers are universally understood across cultures, geography, and languages. But when those numbers are applied to sentiments (for example, satisfaction, agreement, or intention), do people respond universally, or does a 4 on a five-point scale elicit different reactions based on culture or geography? Many international organizations use similar sets of measures (such as satisfaction or
In our earlier article, Jim Lewis and I reviewed the published literature on labeling scales. Despite some recommendations and “best practice” wisdom, we didn’t find that fully labeled scales were measurably superior to partially labeled scales across the 17 published studies that we read. In reviewing the studies in more detail, we found many had
In an earlier article, we reviewed five competing models of delight. The models differed in their details, but most shared the general idea that delight involves an unexpected positive experience; in other words, delight is, for the most part, a pleasant surprise. However, there is disagreement about whether surprise is actually necessary for delight.