Evaluation of Three SEQ Variants

The Single Ease Question (SEQ®) is a single seven-point item that measures the perceived ease of completing a task. It is commonly used in usability testing. Since its introduction in 2009 [PDF], researchers have introduced variations of its design. Figure 1 shows the version we currently use. In 2022, we decided to test some

Read More »

Does Changing the Number of Response Options Affect Rating Behavior?

Changing the number of response options in a survey might confuse participants. Over the years, we’ve heard variations on this concern articulated in a number of different ways by clients and fellow researchers. Surveys and unmoderated UX studies commonly contain a mix of five-, seven-, and eleven-point scales, which leads some to express concern. Why are

Read More »

Comparison of SEQ With and Without Numbers

Over the past few months, we’ve conducted several studies with different versions of the seven-point Single Ease Question (SEQ®), a popular task-level metric for perceived ease-of-use. As we’ve seen with other research on rating scales, response means tend to be rather stable despite often salient changes to formatting. In our earlier SEQ research, we found

Read More »

Comparing Two SEQ Item Wordings

We use the seven-point Single Ease Question (SEQ®) frequently in our practice, as do many other UX researchers. One reason for its popularity is the body of research that started in the mid-2000s with the comparison of the SEQ to other similar short measures of perceived ease-of-use, the generation of a normative SEQ database, and

Read More »

Is UX Data Normally Distributed?

If you took an intro to stats class (or if you know just enough to be dangerous), you probably recall two things: something about Mark Twain’s “lies, damned lies …,” and that your data needs to be normally distributed. Turns out both are only partly true. Mark Twain did write the famous quote, but he

Read More »

A Guide to Task-Based UX Metrics

When quantifying the user experience of a product or app, we recommend using a mix of study-level and task-based UX metrics. While it’s not always feasible to assess a task experience (because of challenges with budgets, timelines, or access to products and users), observing participants attempt tasks can help uncover usability problems, informing designers

Read More »

Difficult–Easy or Easy–Difficult: Does It Matter?

The seven-point Single Ease Question (SEQ®) has become a standard in assessing post-task perceptions of ease. We developed the SEQ over a decade ago after our research showed it performed comparably to or better than other single-item measures. It is an extension of an earlier five-point version that Tedesco and Tullis (2006 [PDF]) found performed best

Read More »

Why Collect Task- and Study-Level Metrics?

In Quantifying the User Experience, we recommend using a mix of task-level and study-level metrics, especially in benchmarking studies. But what, exactly, are task-level and study-level metrics, how do they differ, and why should you collect them both? In this article, we’ll explore this common practice of collecting both types of metrics to understand the

Read More »

Five Reasons to Use Open-Ended Questions

Despite the ease with which you can create surveys using software like our MUIQ platform, selecting specific questions and response options can be a bit more involved. Most surveys contain a mix of closed-ended (often rating scales) and open-ended questions. We’ve previously discussed 15 types of common rating scales and have published numerous articles in

Read More »