Comparison of SEQ With and Without Numbers

Over the past few months, we’ve conducted several studies with different versions of the seven-point Single Ease Question (SEQ®), a popular task-level metric for perceived ease of use. As we’ve seen in other research on rating scales, response means tend to be rather stable despite often salient changes to formatting. In our earlier SEQ research, we found …

Read More »

Comparing Two SEQ Item Wordings

We use the seven-point Single Ease Question (SEQ®) frequently in our practice, as do many other UX researchers. One reason for its popularity is the body of research that started in the mid-2000s with comparisons of the SEQ to other short measures of perceived ease of use, the generation of a normative SEQ database, and …

Read More »

Is UX Data Normally Distributed?

If you took an intro to stats class (or if you know just enough to be dangerous), you probably recall two things: something about Mark Twain’s “lies, damned lies …,” and that your data needs to be normally distributed. Turns out both are only partly true. Mark Twain did write the famous quote, but he …

Read More »
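The “partly true” point about normality rests on the Central Limit Theorem: raw UX data such as task times are usually right-skewed, but the sample means we actually compare across designs are approximately normal even at modest sample sizes. Here’s a minimal Python simulation of that idea (the lognormal population and the specific numbers are our illustrative assumptions, not data from the article):

```python
import random
import statistics

# Simulate right-skewed "task times" (lognormal), like raw UX timing data.
random.seed(1)
population = [random.lognormvariate(3, 0.9) for _ in range(100_000)]

# Draw many small samples and record each sample's mean.
sample_means = [statistics.mean(random.sample(population, 20))
                for _ in range(5_000)]

# The raw data are skewed (mean well above median), but the sample means
# cluster symmetrically around the population mean: the Central Limit
# Theorem at work, which is what mean-based comparisons rely on.
print(f"population mean / median: {statistics.mean(population):.1f} / "
      f"{statistics.median(population):.1f}")
print(f"mean of 5,000 sample means: {statistics.mean(sample_means):.1f}")
```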

A Guide to Task-Based UX Metrics

When quantifying the user experience of a product, app, or website, we recommend using a mix of study-level and task-based UX metrics. While it’s not always feasible to assess a task experience (because of challenges with budgets, timelines, or access to products and users), observing participants attempt tasks can help uncover usability problems, informing designers …

Read More »
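To make the task-based side concrete, here’s a small Python sketch of how the common task-level trio (completion rate, task time, SEQ) might be tallied for one task. The data and the completers-only convention for time are illustrative assumptions, not prescriptions from the article:

```python
import statistics

# Hypothetical per-participant results for one task:
# (completed?, task time in seconds, SEQ rating from 1 to 7)
results = [
    (True, 42.1, 6), (True, 55.4, 5), (False, 90.0, 2),
    (True, 38.7, 7), (True, 61.2, 6), (False, 84.3, 3),
]

completion_rate = sum(done for done, _, _ in results) / len(results)
times_of_completers = [t for done, t, _ in results if done]  # one common convention
seq_scores = [s for _, _, s in results]

print(f"completion rate: {completion_rate:.0%}")  # 67%
print(f"mean time (completers): {statistics.mean(times_of_completers):.1f}s")
print(f"mean SEQ: {statistics.mean(seq_scores):.2f}")
```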

Difficult–Easy or Easy–Difficult: Does It Matter?

The seven-point Single Ease Question (SEQ®) has become a standard for assessing post-task perceptions of ease. We developed the SEQ over a decade ago after our research showed it performed comparably to or better than other single-item measures. It is an extension of an earlier five-point version that Tedesco and Tullis (2006) found performed best …

Read More »

Why Collect Task- and Study-Level Metrics?

In Quantifying the User Experience, we recommend using a mix of task-level and study-level metrics, especially in benchmarking studies. But what, exactly, are task-level and study-level metrics, how do they differ, and why should you collect both? In this article, we’ll explore this common practice of collecting both types of metrics to understand the …

Read More »

Five Reasons to Use Open-Ended Questions

Despite the ease with which you can create surveys using software like our MUIQ platform, selecting specific questions and response options can be a bit more involved. Most surveys contain a mix of closed-ended (often rating scales) and open-ended questions. We’ve previously discussed 15 types of common rating scales and have published numerous articles in …

Read More »

What Do You Gain from Larger-Sample Usability Tests?

We typically recommend small sample sizes (5–10) for iterative usability testing meant to find and fix problems (formative evaluations). For benchmark or comparative studies, where the focus is on detecting differences or estimating population parameters (summative evaluations), we recommend larger sample sizes (20–100+). Usability testing can be used to uncover problems and assess the …

Read More »
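The small-sample recommendation for formative testing rests on the classic binomial problem-discovery model: the chance of observing a problem at least once among n participants is 1 - (1 - p)^n, where p is the proportion of users the problem affects. A quick sketch of the arithmetic (the 31% average problem-occurrence rate is the oft-cited figure from the discovery literature):

```python
# Binomial problem-discovery model: probability that a usability problem
# affecting a proportion p of users is seen at least once among n participants.
def discovery_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With an average problem-occurrence rate of about 31%, five participants
# surface roughly 85% of such problems; returns diminish quickly, which is
# why formative samples stay small while summative (benchmark) studies
# need more precision, and therefore more people.
for n in (5, 10, 20):
    print(f"n = {n:2d}: {discovery_probability(0.31, n):.0%} of problems seen")
```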

Sample Size Recommendations for Benchmark Studies

One of the primary goals of measuring the user experience is to see whether design efforts actually make a quantifiable difference over time. A regular benchmark study is a great way to institutionalize this kind of measurement. Benchmarks are most effective when done at regular intervals (e.g., quarterly or yearly) or after significant design …

Read More »
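For benchmark studies, the required sample size usually falls out of the precision you want: pick a tolerable margin of error d around a mean and solve n ≈ (z·s/d)² for n, where s is the standard deviation. A hedged Python sketch using the normal approximation (the SEQ standard deviation and target margin below are illustrative assumptions):

```python
import math
from statistics import NormalDist

def sample_size_for_margin(sd: float, margin: float,
                           confidence: float = 0.95) -> int:
    """Smallest n (normal approximation) so the confidence-interval
    half-width around a mean is at most `margin`.

    Exact planning would iterate with the t distribution, since the
    t critical value itself depends on n.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g., 1.96 for 95%
    return math.ceil((z * sd / margin) ** 2)

# Illustrative: SEQ means with a standard deviation of about 1.5 points,
# targeting a 95% confidence interval no wider than +/- 0.5 points.
print(sample_size_for_margin(sd=1.5, margin=0.5))  # -> 35 participants
```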