What Do You Gain from Larger-Sample Usability Tests?

We typically recommend small sample sizes (5–10) for conducting iterative usability testing meant to find and fix problems (formative evaluations). For benchmark or comparative studies, where the focus is on detecting differences or estimating population parameters (summative evaluations), we recommend using larger sample sizes (20–100+). Usability testing can be used to uncover problems and assess the

Read More »
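As a rough illustration of why a handful of users is often enough for problem discovery, here is a minimal Python sketch of the standard 1 − (1 − p)^n discovery formula; the problem frequencies and sample sizes are hypothetical, not figures from the article.

```python
# Minimal sketch (not from the article): probability of observing a usability
# problem at least once in a formative study, using the standard
# 1 - (1 - p)^n discovery formula. Problem frequencies are hypothetical.

def discovery_probability(p: float, n: int) -> float:
    """Chance that a problem affecting proportion p of users is seen at least once with n users."""
    return 1 - (1 - p) ** n

for p in (0.10, 0.31, 0.50):      # hypothetical problem frequencies
    for n in (5, 10, 20):         # candidate sample sizes
        print(f"p={p:.2f}, n={n:>2}: {discovery_probability(p, n):.0%}")
```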

Sample Size Recommendations for Benchmark Studies

One of the primary goals of measuring the user experience is to see whether design efforts actually make a quantifiable difference over time. A regular benchmark study is a great way to institutionalize the practice of quantifying those differences. Benchmarks are most effective when done at regular intervals (e.g., quarterly or yearly) or after significant design changes.

Read More »

Approximating Task Completion When You Can’t Observe Users

If users can’t complete a task, not much else matters. Consequently, task completion is one of the fundamental UX measures and one of the most commonly collected metrics, even in small-sample formative studies and studies of low-fidelity prototypes. Task completion is usually easy to collect, and it’s easy to understand and communicate. It’s typically coded as a binary measure: 1 for success and 0 for failure.

Read More »
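For readers who want to see the arithmetic, here is a minimal sketch assuming the common binary coding convention (1 = success, 0 = failure) and an adjusted-Wald (Agresti-Coull) interval, a popular choice for small-sample completion rates; the outcome data are hypothetical.

```python
# Minimal sketch (hypothetical data): completion rate from binary task codes
# plus an adjusted-Wald (Agresti-Coull) 95% confidence interval.
from math import sqrt

outcomes = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical binary task-completion codes
z = 1.96                               # ~95% confidence

n = len(outcomes)
successes = sum(outcomes)

# Adjusted-Wald: add z^2/2 successes and z^2 trials before computing the interval.
n_adj = n + z ** 2
p_adj = (successes + z ** 2 / 2) / n_adj
margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)

print(f"Completion rate: {successes / n:.0%}")
print(f"95% CI (adjusted Wald): {max(0, p_adj - margin):.0%} to {min(1, p_adj + margin):.0%}")
```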

Cultural Effects on Rating Scales

Numbers are universally understood across cultures, geography, and languages. But when those numbers are applied to sentiments (for example, satisfaction, agreement, or intention), do people respond universally, or does a 4 on a five-point scale elicit different reactions based on culture or geography? Many international organizations use similar sets of measures (such as satisfaction or

Read More »

Using Task Ease (SEQ) to Predict Completion Rates and Times

Our attitudes both reflect and affect our actions. What we think affects what we do, and what we do affects what we think. It’s not a perfect relationship, of course. What people say or think doesn’t always directly correspond to actions in easily predictable ways. Understanding and measuring user attitudes early and often can provide

Read More »

10 Things To Know About The Single Ease Question (SEQ)

The Single Ease Question (SEQ) is a 7-point rating scale used to assess how difficult users find a task. It’s administered in a usability test immediately after a user attempts a task. Ask them this simple question: Overall, how difficult or easy was the task to complete? Use the seven-point scale, with endpoints labeled Very Difficult (1) and Very Easy (7).

Read More »
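Here is a minimal sketch of how the SEQ might be administered and scored, assuming the standard anchors of 1 (Very Difficult) and 7 (Very Easy); the responses are hypothetical.

```python
# Minimal sketch (hypothetical responses): administering and scoring the SEQ.
# The scale runs from 1 (Very Difficult) to 7 (Very Easy).

SEQ_PROMPT = "Overall, how difficult or easy was the task to complete?"
SCALE = range(1, 8)  # 7-point scale

responses = [6, 7, 5, 6, 4, 7, 6]  # hypothetical post-task ratings

assert all(r in SCALE for r in responses), "SEQ responses must be 1-7"

mean_seq = sum(responses) / len(responses)
print(SEQ_PROMPT)
print(f"Mean SEQ across {len(responses)} participants: {mean_seq:.1f}")
```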

Do Users Fail A Task And Still Rate It As Easy?

Have you ever watched a user perform horribly during a usability test only to watch in amazement as they rate the task as very easy? I have, and for as long as I’ve been conducting usability tests, I’ve heard of this contradictory behavior from other researchers. Such occurrences have led many to discount the

Read More »