Rating Scale

Cases spike, home prices surge, and stock prices tank: we read headlines like these daily. But what is a spike and how much is a surge? When does something crater versus tank or just fall? Headlines are meant to grab our attention. They often communicate the dramatic story the author wants to tell rather than what the data say. It isn’t easy to write headlines.

We typically recommend small sample sizes (5–10) for iterative usability testing meant to find and fix problems (formative evaluations). For benchmark or comparative studies, where the focus is on detecting differences or estimating population parameters (summative evaluations), we recommend larger sample sizes (20–100+). Usability testing can be used both to uncover problems and to assess the experience, and many usability tests play both roles simultaneously, formative and summative.
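
The small-sample recommendation for formative testing follows from a simple probability argument: if each participant independently encounters a given problem with probability p (a common simplifying assumption, not a claim from this excerpt), the chance of seeing that problem at least once in n participants is 1 − (1 − p)^n. A minimal sketch of that calculation:

```python
# A minimal sketch of the reasoning behind small formative sample sizes.
# Assumes each participant independently encounters a given problem with
# probability p -- a simplifying assumption for illustration.
import math

def discovery_rate(p: float, n: int) -> float:
    """Probability a problem affecting proportion p of users
    is seen at least once among n participants: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

def sample_size_for(p: float, target: float) -> int:
    """Smallest n whose discovery probability reaches `target`."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# With p = 0.31 (an often-cited average problem frequency), five users
# uncover roughly 84% of problems -- the classic formative heuristic.
print(f"{discovery_rate(0.31, 5):.2f}")   # 0.84
print(sample_size_for(0.31, 0.80))        # 5
```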

Decisions should be driven (or at least informed) by data. Raw data becomes information when it is accurate and placed in a context that supports good decision-making. The pandemic has brought a plethora of COVID-related data dashboards, which are meant to provide information that helps the public and public officials make better decisions. With the pressure to report data…

There are a lot of ways to display multipoint rating scales by varying the number of points (e.g., 5, 7, 11) and by labeling or not labeling those points. There’s variety not only in how rating scales are displayed but also in how you score the responses. Two typical scoring methods we discussed earlier are reporting the raw mean of responses and using top-box scoring. We’ve also shown…
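
To make the two scoring methods named here concrete, this small sketch contrasts the raw mean with top-box (and top-two-box) percentages on a five-point item; the responses are made up for illustration:

```python
# Contrast two common scoring methods for a five-point rating item:
# the raw mean vs. top-box (and top-two-box) percentages.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]  # 1 = low, 5 = high (illustrative data)

raw_mean = sum(responses) / len(responses)
top_box = sum(r == 5 for r in responses) / len(responses)      # share choosing 5
top_two_box = sum(r >= 4 for r in responses) / len(responses)  # share choosing 4 or 5

print(f"Raw mean:    {raw_mean:.2f}")     # 3.70
print(f"Top box:     {top_box:.0%}")      # 30%
print(f"Top-two box: {top_two_box:.0%}")  # 70%
```

The two methods answer different questions: the mean summarizes the whole distribution, while top-box reports only the proportion giving the most favorable response.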

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. And among those debates, perhaps the most popular question is the “right” number of response options to use for rating scales. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows…

There’s a lot of conventional wisdom floating around the Internet about rating scales: what you should and shouldn’t do, and best practices for points, labels, and formats. It can be hard to differentiate science-backed conclusions from mere practitioner preferences. In this article, we’ll answer some of the more common questions that come up about rating scales, examine claims by reviewing the literature, and summarize the…

One of these things is not like the other. That’s the theme of a segment on the long-running US TV show Sesame Street. As children, we learn to identify similarities and differences. And after seeing a group of things that look similar, we tend to remember the differences. Why? Well, one theory describes something called the isolation effect, or the Von Restorff effect. The name…

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations…
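
The distinction between random and systematic error matters because only one of them shrinks with sample size. A quick illustrative simulation (not from the article; the “true” score and bias values are hypothetical) shows random noise averaging out while a constant bias survives any sample size:

```python
# Illustrative simulation: random error shrinks as the sample grows,
# but a constant (systematic) bias survives averaging untouched.
# TRUE_SCORE and BIAS are hypothetical values chosen for illustration.
import random

random.seed(1)
TRUE_SCORE = 4.0   # hypothetical "true" attitude on a 1-7 scale
BIAS = 0.5         # systematic shift, e.g., from a leading question

for n in (10, 100, 10_000):
    noisy = [TRUE_SCORE + random.gauss(0, 1) for _ in range(n)]
    biased = [TRUE_SCORE + BIAS + random.gauss(0, 1) for _ in range(n)]
    print(f"n={n:>6}: random-error mean = {sum(noisy)/n:.2f}, "
          f"biased mean = {sum(biased)/n:.2f}")
# The random-error mean converges to 4.0; the biased mean converges
# to 4.5 no matter how large the sample gets.
```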

Question wording in a survey can impact responses. That shouldn’t be much of a surprise. Ask a different question and you’ll get a different answer. But just how different the response ends up being depends on how a question has changed. Subtle differences can have big impacts; alternatively, large differences can have little impact. It’s hard to predict the type and size of impact on…

Survey response options come in all sorts of shapes, sizes, and now, colors. The number of points, the addition of labels, the use of numbers, and the use of positive or negative tone are all factors that can be manipulated. These changes can also affect responses, sometimes modestly, sometimes a lot. There is some concern that long response scales (more than three points) are hard…
