
Do the Interior Labels of the SMEQ Affect Its Scores?

When trying to measure something abstract and multi-faceted like “User Experience,” you should be open to considering and assessing different measurement approaches. Some are popular and others are obscure. At MeasuringU, we’ve found that even when we don’t necessarily recommend a measure or method, we can often adapt aspects of it and apply them to

Read More »

Comparing SEQ and Click SMEQ Sensitivity

Capturing someone’s attitude as precisely as possible with as little effort as possible … that’s the goal of post-task metrics collected in usability tests. Organizations devote time and money to testing products with users, not to watching users spend time reading and answering questions. Organizations want to understand if people think an experience is difficult,

Read More »

Sample Sizes for Comparing Rating Scale Means

Are customers more satisfied this quarter than last quarter? Do users trust the brand less this year than last year? Did the product changes result in more customers renewing their subscriptions? When UX researchers want to measure attitudes and intentions, they often ask respondents to complete multipoint rating scale items, which are then compared with

Read More »
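For the kind of quarter-over-quarter comparison described in the entry above, the per-group sample size depends on the standard deviation of the rating scale and the smallest difference worth detecting. The sketch below is only an illustration using the standard two-sample normal approximation, not the article's procedure; the helper name n_per_group and the example values for s, d, alpha, and power are assumptions.

```python
# Sketch: approximate respondents needed per group to compare two rating scale means.
# Assumed inputs: s = standard deviation, d = smallest difference worth detecting,
# alpha = two-tailed significance level, power = 1 - beta.
import math
from scipy.stats import norm

def n_per_group(s, d, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)  # critical z for a two-tailed test (1.96 at alpha = .05)
    z_beta = norm.ppf(power)           # z corresponding to the desired power (0.84 at 80%)
    return math.ceil(2 * ((z_alpha + z_beta) * s / d) ** 2)

# Example: detect a 0.5-point difference on a 7-point item with an assumed s of 1.5.
print(n_per_group(s=1.5, d=0.5))  # about 142 per group
```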

Sample Sizes for Rating Scale Confidence Intervals

Sample size computations can seem like an art. Some assumptions are involved when computing sample sizes, but the process should be more math than magic. A key ingredient needed to cook up a sample size estimate is the standard deviation. You need yeast to make bread, and you need a measure of variability to make an

Read More »
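As a rough companion to the teaser above: the textbook sample size formula for estimating a mean within a margin of error d is n = (z · s / d)^2, where s is an assumed standard deviation. The snippet below is a sketch of that generic formula (using the z approximation rather than iterating on t), not the article's method; the helper name n_for_margin and the example values are assumptions.

```python
# Sketch: sample size for estimating a rating scale mean within a margin of error d.
import math
from scipy.stats import norm

def n_for_margin(s, d, confidence=0.95):
    z = norm.ppf(1 - (1 - confidence) / 2)  # 1.96 for 95% confidence
    return math.ceil((z * s / d) ** 2)

# Example: assumed s of 1.5 on a 7-point item, desired margin of +/- 0.25 points.
print(n_for_margin(s=1.5, d=0.25))  # about 139 respondents
```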

How to Estimate the Standard Deviation for Rating Scales

The standard deviation is the most common measure of variability. It’s less intuitive than measures of central tendency such as the mean, but it plays an essential role in analysis and sample size planning. The standard deviation is a key ingredient when building a confidence interval and can be easily computed from a sample of

Read More »
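To make the "computed from a sample" point concrete, here is a minimal sketch of the usual sample standard deviation calculation (the n − 1 denominator); the rating responses are invented for illustration.

```python
# Sketch: sample standard deviation (n - 1 denominator) from a set of rating responses.
import statistics

responses = [5, 6, 7, 4, 6, 5, 7, 6, 3, 5]  # hypothetical 7-point ratings

mean = statistics.mean(responses)
sd = statistics.stdev(responses)  # sample (n - 1) standard deviation

print(f"mean = {mean:.2f}, sd = {sd:.2f}")  # mean = 5.40, sd = 1.26
```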

The Variability and Reliability of Standardized UX Scales

In an earlier article, we examined a large dataset of rating scale data. After analyzing over 100,000 individual responses from 4,048 multipoint items across 25 studies, we reported the typical standard deviations for five-, seven-, and eleven-point items. We found that the average standard deviation tended to be around 25% of the maximum range of

Read More »
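The "25% of the maximum range" finding quoted above lends itself to a quick planning heuristic when no pilot data are available. The helper below is only a sketch of that heuristic under one reading (maximum range taken as the number of points minus one), not a formula from the article; the name sd_estimate is hypothetical.

```python
# Sketch: rough standard deviation estimate as 25% of an item's maximum range,
# where the maximum range is taken as (number of points - 1).
def sd_estimate(num_points, proportion=0.25):
    max_range = num_points - 1  # e.g., a 7-point item spans 1 to 7, a range of 6
    return proportion * max_range

for points in (5, 7, 11):
    print(f"{points}-point item: estimated sd ~ {sd_estimate(points):.2f}")
# 5-point ~ 1.00, 7-point ~ 1.50, 11-point ~ 2.50
```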

How Variable Are UX Rating Scales? Data from 100,000 Responses

When working with UX metrics (e.g., rating scale data), you need to consider both the average and the variability of the responses. People have different experiences with interfaces, and sometimes they interpret rating scale items differently. This variability is typically measured with the standard deviation. The standard deviation is a key ingredient in computing

Read More »
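The excerpt above notes that the standard deviation is a key ingredient in downstream computations; one common example is a confidence interval around a rating scale mean. The sketch below shows a generic t-based 95% interval with invented responses; it is an illustration, not the analysis from the article.

```python
# Sketch: t-based 95% confidence interval around the mean of rating scale responses.
import statistics
from scipy.stats import t

responses = [4, 5, 6, 7, 5, 6, 4, 7, 6, 5, 3, 6]  # hypothetical 7-point ratings
n = len(responses)
mean = statistics.mean(responses)
sd = statistics.stdev(responses)

margin = t.ppf(0.975, n - 1) * sd / n ** 0.5  # two-tailed 95%, n - 1 degrees of freedom
print(f"95% CI for the mean: {mean - margin:.2f} to {mean + margin:.2f}")
```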

Evaluation of Three SEQ Variants

The Single Ease Question (SEQ®) is a single seven-point item that measures the perceived ease of task completion. It is commonly used in usability testing. Since its introduction in 2009 [PDF], some researchers have made variations to its design. Figure 1 shows the version that we currently use. In 2022, we decided to test some

Read More »

Does Changing the Number of Response Options Affect Rating Behavior?

"Changing the number of response options in a survey might confuse participants." Over the years, we've heard variations on this concern articulated in a number of different ways by clients and fellow researchers. Surveys and unmoderated UX studies commonly contain a mix of five-, seven-, and eleven-point scales. That leads some to express concern. Why are

Read More »

Difficult–Easy or Easy–Difficult: Does It Matter?

The seven-point Single Ease Question (SEQ®) has become a standard in assessing post-task perceptions of ease. We developed the SEQ over a decade ago after our research showed it performed comparably to or better than other single-item measures. It is an extension of an earlier five-point version that Tedesco and Tullis (2006 [PDF]) found performed best

Read More »