Rating Scale

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations

Read More

Question wording in a survey can impact responses. That shouldn’t be much of a surprise. Ask a different question and you’ll get a different answer. But just how different the response ends up being depends on how a question has changed. Subtle differences can have big impacts; alternatively, large differences can have little impact. It’s hard to predict the type and size of impact on

Read More

Survey response options come in all sorts of shapes, sizes, and now, colors. The number of points, the addition of labels, the use of numbers, and the use of positive or negative tone are all factors that can be manipulated. These changes can also affect responses, sometimes modestly, sometimes a lot. There is some concern that long response scales (more than three points) are hard

Read More

In an earlier article, we examined the folk wisdom that three-point scales were superior to those with more, such as five, seven, ten, or eleven response options. Across twelve published studies we found little to suggest that three-point scales were better than scales with more points and, in fact, found evidence to show that they performed much worse than scales with more points. Almost all

Read More

Five-point scales are the best. No, seven points. Never use a ten-point scale. Eleven points “pretend noise is science.” You never need more than three points. Few things seem to elicit more opinions (and misinformation) in measurement than the “right” number of scale points to use in a rating scale response option. For example, here is a discussion on Twitter by Erika Hall making the

Read More

It seems like each year introduces a new measure or questionnaire. Like a late-night infomercial, some are even touted as the next BIG thing, like the NPS was. New questionnaires and measures are a natural part of the evolution of measurement (especially measuring difficult things such as human attitudes). It’s a good thing. I’ll often help peer review new questionnaires published in journals and conference

Read More

It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions provides better options to the researcher to assess the opinion of the respondent. But the wrong type of question can fail to capture what’s intended, confuse respondents, or even lead to incorrect decisions.

Read More

What does 4.1 on a 5-point scale mean? Or 5.6 on a 7-point scale? Interpreting rating scale data can be difficult in the absence of an external benchmark or historical norms. A popular technique used often by marketers to interpret rating scale data is the so-called “top box” and “top-two box” scoring approach. For example, on a 5-point scale, such as the one shown in Figure

Read More
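To make the top-box idea concrete, here is a minimal sketch (the ratings below are invented, not from the article) of how top-box and top-two-box percentages are computed from 5-point responses:

```python
# Hypothetical 5-point ratings (1 = lowest, 5 = highest) -- invented data.
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 3]

# Top box: proportion of respondents selecting the highest point (5).
top_box = sum(r == 5 for r in ratings) / len(ratings)

# Top-two box: proportion selecting one of the two highest points (4 or 5).
top_two_box = sum(r >= 4 for r in ratings) / len(ratings)

print(f"Top box: {top_box:.0%}")          # → 30%
print(f"Top-two box: {top_two_box:.0%}")  # → 60%
```

Collapsing the scale this way trades granularity for interpretability: a single "percent favorable" number is easier to communicate than a mean of 4.1, though it discards information about the rest of the distribution.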

Know your data. When measuring the customer experience, one of the first things you need to understand is how to identify and categorize the data you come across. It's one of the first things covered in our UX Boot Camp and it's something I cover in Chapter 2 of Customer Analytics for Dummies. Early consideration of your data types enables you to do the following:

Read More

The Single Ease Question (SEQ) is a 7-point rating scale to assess how difficult users find a task. It's administered immediately after a user attempts a task in a usability test. After users attempt a task, ask them this simple question: Overall, how difficult or easy was the task to complete? Use the seven-point rating scale format below. Labels and values: We typically label

Read More
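As a rough sketch of how SEQ data might be summarized (the responses below are invented, not from the article), the per-task score is simply the mean across participants:

```python
# Hypothetical SEQ responses on the 7-point scale
# (1 = very difficult, 7 = very easy) -- invented data.
seq_responses = [6, 7, 5, 6, 4, 7, 6]

mean_seq = sum(seq_responses) / len(seq_responses)
print(f"Mean SEQ: {mean_seq:.2f}")  # → 5.86
```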