Rating Scales

Human: Computer, can you recognize speech? Computer: I think you said, can you wreck a nice beach? Both the quality of synthesized speech and the capability of communicating with a computer using your voice have come a long way since the debut of this technology in the 1970s. One of the most famous synthetic voices was the one used by Stephen Hawking. Some of its

Read More

In our earlier article, Jim Lewis and I reviewed the published literature on labeling scales. Despite some recommendations and “best practice” wisdom, we didn’t find that fully labeled scales were measurably superior to partially labeled scales across the 17 published studies that we read. In reviewing the studies in more detail, we found many had confounding effects when comparing full labeling with partial labeling—meaning

Read More

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations
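To make that distinction concrete, here is a minimal simulation sketch (illustrative, not from the article): random error is zero-mean noise that inflates variability but leaves the average response near the true value, while systematic error shifts every response in the same direction and biases the measurement.

import random
from statistics import mean

random.seed(1)
true_score = 5.0   # the attitude we are trying to measure
n = 10_000

# Random error: zero-mean noise around the true score.
random_only = [true_score + random.gauss(0, 1) for _ in range(n)]

# Systematic error: e.g., a confusing response scale that pushes
# every respondent about one point higher than intended.
systematic = [true_score + 1.0 + random.gauss(0, 1) for _ in range(n)]

print(round(mean(random_only), 2))  # close to 5.0: noisier but unbiased
print(round(mean(systematic), 2))   # close to 6.0: an unmodeled bias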

Read More

While customer satisfaction may be thought of as one concept, there isn’t a single “official” way to measure it. By one estimate, there are more than 40 different customer satisfaction scales described in the published literature. That, in part, is a consequence of how common satisfaction is as a measure. Satisfaction is measured for more than just brands, products, and features. It’s used to

Read More

Should you label all points on a scale? Should you include a neutral point? What about labeling the neutral point? How does that affect how people respond? These are common questions when using rating scales, and they’ve also been asked about the Net Promoter Score: What are the effects of having a neutral label on the 11-point Likelihood to Recommend (LTR) item used to compute the

Read More

In an earlier article, I described the PURE methodology. PURE stands for Practical Usability Rating by Experts. Evaluators familiar with UX principles and heuristics decompose tasks into small steps and rate each step based on a pre-defined scoring system (called a rubric), as shown in Figure 1.

Figure 1: Scoring rubric for PURE.

The PURE method is analytic. It’s not based on
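As a rough illustration (not taken from the article), the sketch below tallies a PURE task score under the approach described: each small step gets a rubric rating, here assumed to be 1–3 with hypothetical labels, and the task score is taken as the sum of the step ratings.

from typing import Dict, List

# Hypothetical 1-3 rubric labels; the actual rubric is the one in Figure 1.
RUBRIC = {1: "low friction", 2: "moderate friction", 3: "high friction"}

def pure_task_score(step_ratings: List[int]) -> Dict[str, int]:
    """Sum an evaluator's per-step ratings for one task."""
    if not all(r in RUBRIC for r in step_ratings):
        raise ValueError("each step rating must come from the 1-3 rubric")
    return {"task_score": sum(step_ratings), "worst_step": max(step_ratings)}

# Example: a five-step task rated by one evaluator.
print(pure_task_score([1, 1, 2, 1, 3]))  # {'task_score': 8, 'worst_step': 3}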

Read More

It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions gives the researcher better options for assessing respondents’ opinions. But the wrong type of question can fail to capture what’s intended, confuse respondents, or even lead to incorrect decisions.

Read More

A good measure of customer loyalty should be valid, reliable, and sensitive to changes in customer attitudes. For the most part, the Net Promoter Score achieves this (although it does have its drawbacks). One area where the Net Promoter Score falls short is in how its scoring approach adds “noise” to the customer loyalty signal. The process of subtracting detractors from promoters may be “executive friendly” but
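For reference, here is a minimal sketch of the standard scoring approach the article refers to: promoters rate 9–10 and detractors rate 0–6 on the 0–10 Likelihood-to-Recommend item, and the NPS is the percentage of promoters minus the percentage of detractors. The sample data are made up and chosen to show how the subtraction can hide differences between samples.

from typing import List

def net_promoter_score(ltr_ratings: List[int]) -> float:
    """Compute NPS from 0-10 Likelihood-to-Recommend ratings."""
    n = len(ltr_ratings)
    promoters = sum(1 for r in ltr_ratings if r >= 9)
    detractors = sum(1 for r in ltr_ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Two made-up samples with very different response mixes but the same NPS.
print(net_promoter_score([10, 9, 7, 7, 6]))   # 20.0
print(net_promoter_score([10, 9, 9, 2, 3]))   # 20.0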

Read More

You've seen them. You've answered them. It seems like everyone has an opinion about them. Here are five things to know about the famous Likert scale. (One for each response option!) The Likert scale was developed by and named after psychologist Rensis Likert. The now ubiquitous Likert scale consists of multiple items. Participants are asked to rate their level of agreement with items that describe a
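As a small illustrative sketch (not from the article), a multi-item Likert scale is typically scored by averaging or summing the item-level agreement ratings for a respondent. The function and data below are hypothetical and assume any reverse-worded items have already been recoded.

from typing import List

def likert_scale_score(item_responses: List[int], points: int = 5) -> float:
    """Score a multi-item Likert scale as the mean of its items."""
    if not all(1 <= r <= points for r in item_responses):
        raise ValueError(f"responses must fall between 1 and {points}")
    return sum(item_responses) / len(item_responses)

# One respondent's agreement ratings on a hypothetical four-item scale
# (1 = strongly disagree to 5 = strongly agree).
print(likert_scale_score([4, 5, 3, 4]))  # 4.0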

Read More

There are some interesting known differences between men and women in the psychological literature. For example, women tend to be better judges of emotion when looking at faces for just 0.2 seconds [PDF]! And across many measures of ability, while both men and women tend to exhibit overconfidence, men are generally more overconfident than women, and this is especially the case when men do

Read More