Rating Scale

It seems like each year introduces a new measure or questionnaire. Like a late-night infomercial, some are even touted as the next BIG thing, like the NPS was. New questionnaires and measures are a natural part of the evolution of measurement (especially measuring difficult things such as human attitudes). It’s a good thing. I’ll often help peer review new questionnaires published in journals and conference

Read More

It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions provides better options to the researcher to assess the opinion of the respondent. But the wrong type of question can fail to capture what’s intended, confuse respondents, or even lead to incorrect decisions.

Read More

What does 4.1 on a 5-point scale mean? Or 5.6 on a 7-point scale? Interpreting rating scale data can be difficult in the absence of an external benchmark or historical norms. A popular technique used often by marketers to interpret rating scale data is the so-called “top box” and “top-two box” scoring approach. For example, on a 5-point scale, such as the one shown in Figure

Read More
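The top-box and top-two-box calculation described in the excerpt above is straightforward to script. Here is a minimal Python sketch (the responses, the function name, and the 5-point scale size are illustrative assumptions, not taken from the article):

```python
from collections import Counter

def box_scores(responses, scale_max=5):
    """Compute top-box and top-two-box proportions for rating scale data.

    Top-box counts only the most favorable response (e.g., 5 on a 5-point
    scale); top-two-box counts the two most favorable (4 or 5).
    """
    n = len(responses)
    counts = Counter(responses)
    top_box = counts[scale_max] / n
    top_two_box = (counts[scale_max] + counts[scale_max - 1]) / n
    return top_box, top_two_box

# Hypothetical responses from 20 participants on a 5-point satisfaction item.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 5, 5, 1, 4, 3, 5, 4, 2, 5, 4]
top, top_two = box_scores(responses)
print(f"Top-box: {top:.0%}, Top-two-box: {top_two:.0%}")
```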

Know your data. When measuring the customer experience, one of the first things you need to understand is how to identify and categorize the data you come across. It's one of the first things covered in our UX Boot Camp and it's something I cover in Chapter 2 of Customer Analytics for Dummies. Early consideration of your data types enables you to do the following:

Read More

The Single Ease Question (SEQ) is a 7-point rating scale to assess how difficult users find a task. It's administered immediately after a user attempts a task in a usability test. After users attempt a task, ask them this simple question: Overall, how difficult or easy was the task to complete? Use the 7-point rating scale format below. Labels and values: We typically label

Read More
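As a quick illustration of how SEQ responses might be summarized, here is a minimal Python sketch. The responses are made up, and the roughly 5.5 benchmark is a commonly cited historical average rather than anything stated in the excerpt, so treat both as assumptions:

```python
from statistics import mean

# Hypothetical SEQ responses (1 = Very Difficult, 7 = Very Easy) collected
# immediately after each participant attempted the task.
seq_responses = [6, 7, 5, 6, 4, 7, 6, 5, 6, 7]

avg_seq = mean(seq_responses)

# Roughly 5.5 is often cited as a historical average SEQ score; substitute
# your own norms if you have them (the exact benchmark is an assumption here).
BENCHMARK = 5.5
status = "above" if avg_seq > BENCHMARK else "at or below"
print(f"Mean SEQ: {avg_seq:.2f} ({status} the ~5.5 benchmark)")
```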

What makes a successful website? There are some obvious metrics like revenue, traffic and repeat visitors. But these are outcome measures. They don't tell you why revenue or traffic is higher or lower. Key drivers of these outcomes are how the users perceive and interact with your website. Selling a product that has demand or information that is valuable is of course essential. But it's rare

Read More
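A first pass at identifying key drivers like those described above is to relate attribute ratings to an outcome measure. The sketch below is a simplified illustration (the attribute names, the data, and the correlation-only approach are my own assumptions; a fuller key driver analysis would typically use multiple regression to account for overlapping attributes):

```python
import numpy as np

# Hypothetical survey data: attribute ratings and an outcome (likelihood to
# return), one value per respondent. Column names are illustrative only.
attributes = {
    "ease_of_use":   [6, 5, 7, 4, 6, 5, 7, 3, 6, 5],
    "trust":         [5, 4, 6, 4, 5, 5, 6, 3, 5, 4],
    "visual_appeal": [4, 5, 5, 3, 6, 4, 5, 4, 5, 4],
}
likelihood_to_return = [6, 4, 7, 3, 6, 5, 7, 2, 6, 4]

# Simple first pass: correlate each attribute with the outcome and compare.
for name, ratings in attributes.items():
    r = np.corrcoef(ratings, likelihood_to_return)[0, 1]
    print(f"{name}: r = {r:.2f}")
```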

It's fine to compute means and statistically analyze ordinal data from rating scales. But just because one rating is twice as high as another does not mean users are really twice as satisfied. When we use rating scales in surveys, we're translating intangible fuzzy attitudes about a topic into specific quantities. Overall, how satisfied are you with your cell-phone service? Very Unsatisfied 1 2 3

Read More
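To make the point concrete, here is a small Python sketch (with made-up ratings) that computes a mean on ordinal data while flagging the interpretation caveat from the excerpt:

```python
from statistics import mean, median
from collections import Counter

# Hypothetical 5-point satisfaction ratings
# (1 = Very Unsatisfied, 5 = Very Satisfied).
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4, 3, 5, 4]

# Computing a mean on ordinal data is fine for comparing groups or tracking
# change over time.
print(f"Mean: {mean(ratings):.2f}, Median: {median(ratings)}")
print("Distribution:", dict(sorted(Counter(ratings).items())))

# But avoid ratio statements: a product scoring 4.0 is not "twice as
# satisfying" as one scoring 2.0, because the scale points are ordered labels,
# not equal, absolute quantities of satisfaction.
```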

Few things tend to generate more heated debate than the format of response options used in surveys. Right in the middle of that debate is whether the number of options should be odd or even. Odd numbered response scales include a neutral response whereas even ones do not. Research generally shows that including a neutral response will affect the distribution of responses and sometimes lead

Read More

Rating scales are used widely. Ways of interpreting rating scale results also vary widely. What exactly does a 4.1 on a 5-point scale mean? In the absence of any benchmark or historical data, researchers and managers look at so-called top-box and top-two-box scores (boxes refer to the response options). For example, on a five-point scale, counting the number of respondents that selected the most

Read More
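Because a top-box or top-two-box score is just a proportion, it also comes with sampling error. As an extension of the excerpt above (the counts and the adjusted-Wald interval are my own illustration, not part of the article), you can attach a confidence interval to it:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion."""
    p_adj = (successes + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical result: 14 of 20 respondents chose the top-two boxes (4 or 5).
low, high = adjusted_wald_ci(14, 20)
print(f"Top-two-box: 70% (95% CI roughly {low:.0%} to {high:.0%})")
```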

Items in questionnaires are typically worded neutrally so as not to state concepts in the extreme. They are like an even-tempered friend—they have opinions but aren't overly optimistic or chronically pessimistic about things. What happens when items in a questionnaire or survey are worded in the extreme? Two years ago we tried a little experiment at the annual UPA conference to find out. We wanted

Read More