
Scatterplot Jitter—Why and How?

Scatterplots are powerful tools for visualizing data, especially when data is continuous and unbounded (or nearly so). For example, Figure 1 shows the relationship between concurrently collected System Usability Scale (SUS) and UX-Lite® data for 40 consumer software products.

Figure 1: Example of scatterplot of concurrently collected SUS and UX-Lite data.

Examination of the scatterplot …
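The basic jittering idea can be sketched as follows: shift each point by a small uniform random offset before plotting so that overlapping points separate visually. This is a minimal illustration with made-up data and an illustrative jitter amount, not the article's specific procedure.

```python
import random

def jitter(values, amount=0.5, seed=0):
    """Return a jittered copy of values: each point is shifted by a
    uniform random offset in [-amount, +amount].
    Fixing the seed keeps the plot reproducible."""
    rng = random.Random(seed)
    return [v + rng.uniform(-amount, amount) for v in values]

# Overlapping SUS scores (hypothetical data for illustration).
sus = [72.5, 72.5, 85.0, 85.0, 90.0]
jittered = jitter(sus, amount=1.0)
```

The jittered values would then be passed to the plotting call in place of the raw values (for one axis, or both).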


Understanding Different Types of 100-Point Scales

You’re 25% complete. Still a ways to go. You got a score of 90 out of 100 on a math test. Not bad. You got 1475 on the SAT—the 95th percentile. Awesome! Only 40% of users completed the task. Not great. The average score on a seven-point scale was 5.2. Hmm. Is that good? One …
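One common way to put a multipoint rating on a familiar 100-point footing is linear (min–max) rescaling. This is a sketch of that standard interpolation, not necessarily the method the article recommends:

```python
def to_100_point(score, scale_min=1, scale_max=7):
    """Linearly rescale a rating (here, from a seven-point scale
    with endpoints 1 and 7) to a 0-100-point scale."""
    return (score - scale_min) / (scale_max - scale_min) * 100

to_100_point(5.2)  # a mean of 5.2 on a seven-point scale maps to 70
```

The rescaled number is only as interpretable as the benchmark you compare it to; 70 on this scale is not the same thing as 70% task completion or the 70th percentile.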


Does the NPS Properly Measure Recommending Against a Brand?

Is happy the opposite of sad? Is dissatisfied the opposite of satisfied? Is discourage the opposite of recommend? And is not recommending the same as recommending against? When computing the Net Promoter Score (NPS), people who rate the 0–10-point likelihood-to-recommend item high (a 9 or 10) are categorized as promoters, and those who give low …
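For reference, the conventional NPS computation uses the standard cutoffs (promoters 9–10, detractors 0–6, passives 7–8). A minimal sketch:

```python
def nps(ratings):
    """Net Promoter Score: percentage of promoters (ratings of 9-10)
    minus percentage of detractors (ratings of 0-6) on the 0-10
    likelihood-to-recommend item."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / n

nps([10, 9, 8, 7, 6, 2])  # 2 promoters, 2 detractors out of 6 -> 0.0
```

Note that passives (7–8) affect the score only through the denominator, which is part of what makes questions about "recommending against" interesting.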


Top Box, Top-Two Box, Bottom Box, or Net Box?

One box, two box, red box, blue box … Box scoring isn’t just something they do in baseball. Response options for rating scale data are often referred to as boxes because, historically, paper-administered surveys displayed rating scales as a series of boxes to check, like the one in Figure 1.

Figure 1: Illustration of “boxes” …
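The box scores named in the title can be sketched as simple proportions. The cutoffs below are the usual conventions for a five-point item (top box = highest response, net box = top minus bottom), offered as an illustration rather than the article's specific recommendations:

```python
def box_scores(ratings, scale_max=5):
    """Common box scores for a five-point item: proportions selecting
    the top box, the top-two boxes, and the bottom box, plus a
    net box score (top minus bottom)."""
    n = len(ratings)
    top = sum(r == scale_max for r in ratings) / n
    top_two = sum(r >= scale_max - 1 for r in ratings) / n
    bottom = sum(r == 1 for r in ratings) / n
    return {"top": top, "top_two": top_two,
            "bottom": bottom, "net": top - bottom}

box_scores([5, 5, 4, 3, 1])  # top 40%, top-two 60%, bottom 20%, net 20%
```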


Are Click Scales More Sensitive than Radio Button Scales?

Response scales are a basic type of interface. They should reflect the attitude of the respondent as precisely as possible while requiring as little effort as possible to answer. When collecting data from participants, some wished they could have picked a value between two numbers, for example, 5.5 or 6.5, which can be done with …


Do the Interior Labels of the SMEQ Affect Its Scores?

When trying to measure something abstract and multi-faceted like “User Experience,” you should be open to considering and assessing different measurement approaches. Some are popular and others are obscure. At MeasuringU, we’ve found that even when we don’t necessarily recommend a measure or method, we can often adapt aspects of it and apply them to …


Comparing SEQ and Click SMEQ Sensitivity

Capturing someone’s attitude as precisely as possible with as little effort as possible … that’s the goal of post-task metrics collected in usability tests. Organizations devote time and money to testing products with users, not to watching users spend time reading and answering questions. Organizations want to understand if people think an experience is difficult …


Sample Sizes for Comparing Rating Scale Means

Are customers more satisfied this quarter than last quarter? Do users trust the brand less this year than last year? Did the product changes result in more customers renewing their subscriptions? When UX researchers want to measure attitudes and intentions, they often ask respondents to complete multipoint rating scale items, which are then compared with …
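A sketch of the textbook z-approximation for the sample size needed per group when comparing two independent rating scale means (the standard deviation, difference, and defaults below are illustrative, and the article's recommended procedure may differ, e.g., by using t-based adjustments):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd, diff, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference of
    `diff` between two independent means with standard deviation `sd`
    (two-sided test; the z-approximation slightly understates the
    t-based answer)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diff ** 2)

n_per_group(sd=1.2, diff=0.5)  # roughly 90 per group
```

The quadratic dependence on sd/diff is why small expected differences, or noisy scales, drive sample sizes up so quickly.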


Sample Sizes for Rating Scale Confidence Intervals

Sample size computations can seem like an art. Some assumptions are involved when computing sample sizes, but it should be more math than magic. A key ingredient needed to cook up a sample size estimate is the standard deviation. You need yeast to make bread, and you need a measure of variability to make an …
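The role of the standard deviation can be seen in the textbook z-approximation for the sample size that yields a confidence interval of a desired half-width (a sketch with illustrative inputs, not necessarily the article's exact procedure, which may iterate with t-values):

```python
from math import ceil
from statistics import NormalDist

def n_for_margin(sd, margin, confidence=0.95):
    """Approximate sample size for a confidence interval around a mean
    with half-width `margin`, given standard deviation `sd`
    (z-approximation; iterating with t adds a few participants)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sd / margin) ** 2)

n_for_margin(sd=1.0, margin=0.25)  # about 62
```

Halving the desired margin roughly quadruples the required sample size, which is why the variability estimate matters so much.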
