Rating Scales

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. Among those debates, perhaps the most popular question concerns the "right" number of response options to use. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows

Read More

UX research and UX measurement can be seen as an extension of experimental design. At the heart of experimental design lie variables. Earlier we wrote about different kinds of variables. In short, dependent variables are what you get (outcomes), independent variables are what you set, and extraneous variables are what you can’t forget (to account for). When you measure a user experience using metrics—for example,

Read More
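
As a rough illustration of that get/set/forget taxonomy (a hypothetical study with made-up names and scores, not an example from the article), the sketch below tags each variable's role in a two-design comparison: the design assignment is what you set, the SUS score is the outcome you get, and the participant's device type is the extraneous variable to account for.

```python
# Hypothetical example: variable roles in a simple two-design UX study.
# All names and numbers are illustrative, not from the article.
from dataclasses import dataclass

@dataclass
class Observation:
    design: str       # independent variable: what you set (Design A vs. Design B)
    sus_score: float  # dependent variable: what you get (the outcome metric)
    device: str       # extraneous variable: what you can't forget to account for

observations = [
    Observation("A", 72.5, "desktop"),
    Observation("A", 65.0, "mobile"),
    Observation("B", 80.0, "desktop"),
    Observation("B", 77.5, "mobile"),
]

# Compare the outcome across levels of the independent variable, splitting by
# the extraneous variable so it doesn't masquerade as a design effect.
for design in ("A", "B"):
    for device in ("desktop", "mobile"):
        scores = [o.sus_score for o in observations
                  if o.design == design and o.device == device]
        mean = sum(scores) / len(scores)
        print(f"Design {design} on {device}: mean SUS = {mean:.1f}")
```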

Human: Computer, can you recognize speech? Computer: I think you said, can you wreck a nice beach? Both the quality of synthesized speech and the capability of communicating with a computer using your voice have come a long way since the debut of this technology in the 1970s. One of the most famous synthetic voices was the one used by Stephen Hawking. Some of its

Read More

In our earlier article, Jim Lewis and I reviewed the published literature on labeling scales. Despite some recommendations and "best practice" wisdom, we didn't find that fully labeled scales were measurably superior to partially labeled scales across the 17 published studies that we read. In reviewing the studies in more detail, we found many had confounding effects when comparing full labeling with partial labeling—meaning

Read More

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations

Read More
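
To make the random-versus-systematic distinction concrete, here is a minimal simulation (assumed values, not from the article): zero-centered random error inflates variability but leaves the mean roughly intact, while a systematic shift biases the mean itself.

```python
# Illustrative simulation (assumed values, not from the article): how random
# vs. systematic measurement error affects a five-point rating scale.
import random
import statistics

random.seed(1)
true_ratings = [4.0] * 1000  # every respondent's "true" attitude is 4 on a 1-5 scale

def clamp(x, lo=1.0, hi=5.0):
    return max(lo, min(hi, x))

# Random error: zero-centered noise. The mean stays near 4; variability grows.
noisy = [clamp(r + random.gauss(0, 0.5)) for r in true_ratings]

# Systematic error: e.g., a confusing item that pushes everyone down half a point.
biased = [clamp(r - 0.5 + random.gauss(0, 0.5)) for r in true_ratings]

print(f"random error:     mean={statistics.mean(noisy):.2f}  sd={statistics.stdev(noisy):.2f}")
print(f"systematic error: mean={statistics.mean(biased):.2f}  sd={statistics.stdev(biased):.2f}")
```

Running it shows the noisy mean staying near the true value of 4 while the biased mean lands near 3.5, with a similar spread in both cases: the systematic error shows up as unmodeled bias, not extra variability.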

While customer satisfaction may be thought of as one concept, there isn't a single "official" way to measure it. By one estimate, there are more than 40 different customer satisfaction scales described in the published literature. That is, in part, a consequence of how common satisfaction is as a measure. Satisfaction is measured on more than just brands, products, and features. It's used to

Read More

Should you label all points on a scale? Should you include a neutral point? What about labeling neutral points? How does that affect how people respond? These are common questions when using rating scales, and they've also been asked about the Net Promoter Score: What are the effects of having a neutral label on the 11-point Likelihood to Recommend (LTR) item used to compute the

Read More

In an earlier article, I described the PURE methodology. PURE stands for Practical Usability Rating by Experts. Evaluators familiar with UX principles and heuristics decompose tasks into small steps and rate each step based on a pre-defined scoring system (called a rubric), as shown in Figure 1 (a scoring rubric for PURE). The PURE method is analytic. It's not based on

Read More
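
As a rough sketch of those mechanics (the 1-to-3 step ratings and the roll-up rules below are simplified assumptions for illustration, not the published PURE rubric), an evaluator's per-step ratings can be rolled up into a per-task friction score:

```python
# Simplified, illustrative PURE-style scoring. The 1-3 step ratings and the
# roll-up rules here are assumptions for this sketch, not the published rubric.
from typing import Dict, List

def score_task(step_ratings: List[int]) -> Dict[str, int]:
    """Roll step ratings (1 = low friction ... 3 = high friction) up into a task score."""
    assert all(r in (1, 2, 3) for r in step_ratings), "each step is rated 1, 2, or 3"
    return {
        "task_score": sum(step_ratings),   # more steps and harder steps raise the score
        "worst_step": max(step_ratings),   # flag the most problematic step
        "n_steps": len(step_ratings),
    }

# Hypothetical task decomposed into four steps by an evaluator.
print(score_task([1, 1, 3, 2]))
# {'task_score': 7, 'worst_step': 3, 'n_steps': 4}
```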

It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions gives researchers better options for assessing respondents' opinions. But the wrong type of question can fail to capture what's intended, confuse respondents, or even lead to incorrect decisions.

Read More

A good measure of customer loyalty should be valid, reliable, and sensitive to changes in customer attitudes. For the most part, the Net Promoter Score achieves this (although it does have its drawbacks). One area where the Net Promoter Score falls short is how its scoring approach adds "noise" to the customer loyalty signal. The process of subtracting detractors from promoters may be "executive friendly" but

Read More
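
For reference, the scoring approach the excerpt critiques is standard NPS arithmetic: responses of 9-10 on the 0-10 Likelihood-to-Recommend item count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. The sketch below (with made-up responses) shows how that bucketing discards information within each category.

```python
# Standard NPS arithmetic (not specific to the article): promoters are 9-10,
# passives 7-8, detractors 0-6 on the 11-point Likelihood-to-Recommend item.
from typing import List

def nps(ltr_responses: List[int]) -> float:
    """Return the Net Promoter Score: %promoters - %detractors, from -100 to 100."""
    n = len(ltr_responses)
    promoters = sum(1 for r in ltr_responses if r >= 9)
    detractors = sum(1 for r in ltr_responses if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Made-up responses: a 0 and a 6 count identically, and moving a respondent
# from 7 to 8 changes nothing; the kind of information loss the excerpt alludes to.
print(nps([10, 9, 9, 8, 7, 6, 3, 0]))  # 3 promoters, 3 detractors -> 0.0
```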