Rating Scale

In UX research, both studies and surveys contain a lot of questions. Getting those questions right can go a long way in improving the clarity and quality of the findings. For example, we’ve recently written about how to make survey questions clearer. And while there are many stories of how the change of a single word in a survey question can lead to different results


The Net Promoter Score (NPS) is widely used by organizations. It’s often used to make high-stakes decisions on whether a brand, product, or service has improved or declined. Net Promoter Scores are often tracked on dashboards, and any changes (for better or worse) can have significant consequences: adding or removing features, redirecting budgets, even impacting employee bonuses. Random sampling error, however, can often explain many
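One generic way to see how much random sampling error alone can move an NPS is bootstrap resampling. This sketch is my own illustration (not necessarily the method the article goes on to describe), using made-up likelihood-to-recommend ratings:

```python
import random

def nps(ratings):
    """NPS from 0-10 ratings: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

def bootstrap_nps_interval(ratings, n_boot=5000, seed=1):
    """Approximate 95% interval for NPS by resampling with replacement."""
    rng = random.Random(seed)
    scores = sorted(
        nps([rng.choice(ratings) for _ in ratings]) for _ in range(n_boot)
    )
    return scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]

# Hypothetical sample of 10 LTR ratings
data = [10, 9, 9, 8, 7, 6, 10, 5, 9, 8]
low, high = bootstrap_nps_interval(data)
```

With small samples like this, the resulting interval is typically very wide, which is exactly why a dashboard "change" in NPS can often be explained by sampling error alone.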


The Net Promoter Score (NPS) is a popular business metric used to track customer loyalty. It uses a single likelihood-to-recommend (LTR) question (“How likely is it that you will recommend our company to a friend or colleague?”) with 11 scale steps from 0 (Not at all likely) to 10 (Extremely likely). In NPS terminology, respondents who select 9 or 10 on the LTR question are
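The standard NPS calculation sketched above (promoters rate 9-10, detractors 0-6, NPS = % promoters minus % detractors) can be written as a few lines of Python:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 likelihood-to-recommend (LTR) ratings.

    Promoters select 9 or 10, detractors select 0-6; NPS is the
    percentage of promoters minus the percentage of detractors,
    so it ranges from -100 to +100.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Example: 5 promoters, 3 passives, 2 detractors out of 10
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # → 30.0
```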


Happy New Year from all of us at MeasuringU! 2020 was a crazy year, but we still managed to post 48 new articles and continued improving MUIQ, our UX testing platform. We hosted our seventh UX Measurement Bootcamp, this time virtually. The change of format was a challenge, but it was fantastic to work with attendees from all over the world. The topics we wrote


Like pictures and pixels on a screen, words are a type of user interface. Complex language, like complex software, can lead to misunderstanding, so words should communicate effectively while being easy to understand. The solution, to paraphrase William Zinsser, is to use words that are simple and concise—a guideline that also applies to UX questionnaires. This brings us to the UMUX-Lite, an increasingly popular UX
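The UMUX-Lite's simplicity carries over to its scoring. One common approach (a simple 0-100 rescaling of the two seven-point items; there is also a regression-adjusted variant not shown here) looks like this:

```python
def umux_lite(ease, usefulness, points=7):
    """Rescale the two UMUX-Lite items (1 to `points`) to a 0-100 score.

    Simple parsimonious scoring: shift each item to start at 0,
    sum, and divide by the maximum possible total.
    """
    raw = (ease - 1) + (usefulness - 1)
    return 100 * raw / (2 * (points - 1))

print(umux_lite(6, 7))  # ≈ 91.7
```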


The two-sample t-test is one of the most widely used statistical tests, assessing whether mean differences between two samples are statistically significant. It can be used to compare two samples of many UX metrics, such as SUS scores, SEQ scores, and task times. The t-test, like most statistical tests, has certain requirements (assumptions) for its use. While it’s easy to conduct a two-sample t-test using
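A minimal sketch of the two-sample t statistic, using Welch's form (which drops the equal-variance assumption) with made-up SUS scores:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Two-sample t statistic and Welch-Satterthwaite degrees of freedom.

    Welch's version does not assume the two samples share a variance,
    which makes it a safe default for comparing UX metrics.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2**2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical SUS scores from two design variants
t, df = welch_t([80, 75, 85, 70, 90], [60, 65, 55, 70, 50])
print(f"t = {t:.2f}, df = {df:.1f}")  # → t = 4.00, df = 8.0
```

The p-value then comes from the t distribution with `df` degrees of freedom (e.g., via `scipy.stats.t.sf`), which the standard library does not provide.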


Cases spike, home prices surge, and stock prices tank: we read headlines like these daily. But what is a spike and how much is a surge? When does something crater versus tank or just fall? Headlines are meant to grab our attention. They often communicate the dramatic story the author wants to tell rather than what the data say. It isn’t easy to write headlines.


We typically recommend small sample sizes (5–10) for conducting iterative usability testing meant to find and fix problems (formative evaluations). For benchmark or comparative studies, where the focus is on detecting differences or estimating population parameters (summative evaluations), we recommend using larger sample sizes (20–100+). Usability testing can be used to uncover problems and assess the experience. Many usability tests will play both roles simultaneously, formative


Decisions should be driven (or at least informed) by data. Raw data is turned into information by ensuring that it is accurate and has been put into a context that promotes good decision-making. The pandemic has brought a plethora of COVID-related data dashboards, which are meant to provide information that helps the public and public officials make better decisions. With the pressure to report data


There are a lot of ways to display multipoint rating scales by varying the number of points (e.g., 5, 7, 11) and by labeling or not labeling those points. There’s variety not only in how rating scales are displayed but also in how you score the responses. Two typical scoring methods we discussed earlier are reporting the raw mean of responses and using top-box scoring. We’ve also shown
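The two scoring methods mentioned above, the raw mean and top-box scoring, can be sketched in a few lines (the function name and data are mine):

```python
from statistics import mean

def score_summary(responses, scale_max=5, top_box_size=1):
    """Raw mean and top-box percentage for multipoint rating responses.

    Top-box counts responses in the highest `top_box_size` scale points;
    e.g., top-two-box on a 5-point scale uses top_box_size=2.
    """
    threshold = scale_max - top_box_size + 1
    top = sum(1 for r in responses if r >= threshold)
    return mean(responses), 100 * top / len(responses)

ratings = [5, 4, 4, 3, 5, 2, 4, 5]          # hypothetical 5-point ratings
m, top1 = score_summary(ratings)            # top-box: only 5s count
_, top2 = score_summary(ratings, top_box_size=2)  # top-two-box: 4s and 5s
# m → 4.0, top1 → 37.5, top2 → 75.0
```

The two methods can rank the same products differently, which is one reason the choice of scoring method matters.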