NPS

Over half of U.S. households have a pet. And for many people, they are more than pets; they are family. But pets are also a big business. It’s estimated that pet-related products and services account for $70 billion annually in the U.S. alone. And a lot of that spending happens online. To better understand the pet website user experience, we conducted two studies. The first

Read More

Survey response options come in all sorts of shapes, sizes, and now, colors. The number of points, the addition of labels, the use of numbers, and the use of positive or negative tone are all factors that can be manipulated. These changes can also affect responses, sometimes modestly, sometimes a lot. There is some concern that long response scales (more than three points) are hard

Read More

Around a quarter of Americans change jobs each year. For most, that job search happens online. Job-related websites are a multibillion-dollar business with plenty of competition. They have made finding and applying for jobs easier and more accessible. However, the process isn’t without issues. Job descriptions can be misleading, and the application process can be cumbersome. To better understand the job-searching user experience

Read More

In an earlier article, we examined the relationship between NPS and future company growth. We found the Net Promoter Score had a modest correlation with future growth (r = .35) in the 14 industries we examined. In the 11 industries that had a positive correlation, the average correlation with two-year revenue growth was higher at r = .44. This ranged from a high of r
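As a rough illustration of the kind of statistic reported in this excerpt, here is a minimal Python sketch that computes a Pearson r between industry-level NPS and two-year revenue growth. The numbers are hypothetical placeholders, not the data from the study.

```python
# Pearson correlation between industry-level NPS and two-year revenue growth.
# All numbers below are hypothetical placeholders, not the study's data.
from statistics import correlation  # Python 3.10+

nps = [12, 25, 31, 40, 48, 55, 62]                    # mean NPS per industry
growth = [0.02, 0.05, 0.04, 0.09, 0.11, 0.10, 0.15]   # two-year revenue growth

r = correlation(nps, growth)
print(f"r = {r:.2f}")
```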

Read More

In an earlier article, we examined the folk wisdom that three-point scales are superior to scales with more response options, such as five, seven, ten, or eleven points. Across twelve published studies we found little to suggest that three-point scales were better and, in fact, found evidence that they performed much worse than scales with more points. Almost all

Read More

The Net Promoter Score introduced a new language of loyalty. At center stage are the promoters and detractors. These designations are given to respondents based on how they answer the How Likely Are You to Recommend (LTR) question. But what is the justification for the designations? Were they arbitrarily created? Do they just sound good for executives? How much faith should we put in
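For reference, the conventional scoring behind those designations buckets LTR responses of 9–10 as promoters, 7–8 as passives, and 0–6 as detractors, with the NPS being the percentage of promoters minus the percentage of detractors. A minimal Python sketch, using made-up responses:

```python
# Conventional NPS scoring: 0-6 = detractor, 7-8 = passive, 9-10 = promoter.
# NPS = % promoters - % detractors.
def nps(ltr_responses):
    n = len(ltr_responses)
    promoters = sum(1 for r in ltr_responses if r >= 9)
    detractors = sum(1 for r in ltr_responses if r <= 6)
    return 100 * (promoters - detractors) / n

# Made-up responses: 5 promoters, 3 passives, 2 detractors out of 10.
print(nps([10, 9, 9, 8, 7, 6, 10, 3, 9, 8]))  # -> 30.0
```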

Read More

For measuring the user experience, I recommend using a mix of task-based and study-level measures that capture both attitudes (e.g., SUS, SUPR-Q, SEQ, and NPS) and actions (e.g., completion rates and task times). The NPS is commonly collected by organizations, and therefore by UX organizations too (often because they are told to). Its popularity has inevitably brought skepticism. And rightfully so. After all, the NPS was touted as
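To make the "actions" side concrete, here is a minimal sketch of two task-level metrics of the kind mentioned above: a binary completion rate and a geometric mean task time (one common way to summarize skewed time data). The data are hypothetical.

```python
# Two task-level "action" metrics: binary completion rate and geometric
# mean task time. Data are hypothetical, for illustration only.
from statistics import geometric_mean

completed = [1, 1, 0, 1, 1, 1, 0, 1]            # 1 = success, 0 = failure
times_sec = [42, 61, 120, 35, 58, 47, 150, 39]  # task times in seconds

completion_rate = sum(completed) / len(completed)
print(f"Completion rate: {completion_rate:.0%}")                  # 75%
print(f"Geometric mean time: {geometric_mean(times_sec):.0f} s")
```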

Read More

Does better usability lead to more revenue? What about positive word of mouth? Is it tied to revenue growth? Can UX metrics for usability and intent to recommend track future revenue growth? Many UX researchers who work for software companies or on software products collect UX metrics. In fact, we strongly advocate for it. As part of implementing a plan to improve UX,

Read More

Public officials don’t care much about what the general public thinks. Voting is the only way ordinary people can have a say in government. How much do you agree with those two statements? If the order in which those items were presented were switched, would it affect how you responded? While most UX and customer research doesn’t involve sensitive topics, does the order in which

Read More