
Are People Who Agree to Think Aloud Different?

In an earlier article, we showed that only about 9% of panel participants will eventually complete a study in which they are asked to think aloud. That is, if you need ten usable think-aloud videos, expect to invite around 111 participants. On the surface, this means you’ll need to plan for a lot of people

Read More »

Comparison of SEQ With and Without Numbers

Over the past few months, we’ve conducted several studies with different versions of the seven-point Single Ease Question (SEQ®), a popular task-level metric for perceived ease of use. As we’ve seen with other research on rating scales, response means tend to be rather stable despite often salient changes to formatting. In our earlier SEQ research, we found

Read More »

Comparing Two SEQ Item Wordings

We use the seven-point Single Ease Question (SEQ®) frequently in our practice, as do many other UX researchers. One reason for its popularity is the body of research that started in the mid-2000s with the comparison of the SEQ to other similar short measures of perceived ease of use, the generation of a normative SEQ database, and

Read More »

A Guide to Task-Based UX Metrics

When quantifying the user experience of a product, app, or experience, we recommend using a mix of study-level and task-based UX metrics. While it’s not always feasible to assess a task experience (because of challenges with budgets, timelines, or access to products and users), observing participants attempt tasks can help uncover usability problems, informing designers

Read More »

Why Collect Task- and Study-Level Metrics?

In Quantifying the User Experience, we recommend using a mix of task-level and study-level metrics, especially in benchmarking studies. But what, exactly, are task-level and study-level metrics, how do they differ, and why should you collect them both? In this article, we’ll explore this common practice of collecting both types of metrics to understand the

Read More »

How to Determine Task Completion

Task completion is one of the fundamental usability metrics. It’s the most common way to quantify the effectiveness of an interface. If users can’t do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users are completing a task often isn’t as easy. The

Read More »

When to Provide Assistance in a Usability Test

One of the fundamental principles behind usability testing is to let the participants actually use the software, app, or website and see what problems might emerge. By simulating use and not interrupting participants, you can detect and fix problems before users encounter them, get frustrated, and stop using and recommending your product. So while there’s

Read More »

How Reliable Are Self-Reported Task Completion Rates?

Did you do that task correctly? Unmoderated testing provides many benefits, the most notable of which is the ability to quickly collect metrics from a large and geographically diverse sample of participants. A common metric collected in usability tests is the task completion rate. It’s often called the fundamental usability metric because if users can’t complete

Read More »

The High Cost of Task Failure on Websites

It’s often called web surfing or web browsing, but it probably should be called web doing. While there is still plenty of time to kill using the web, in large part, we’re all trying to get things done. Purchasing, reserving, comparing, and communicating: Internet behavior is largely a goal-directed activity. If a website doesn’t help

Read More »