“Does What I Need It to Do”: Assessing an Alternate Usefulness Item

The UMUX-Lite is a two-item standardized questionnaire that, since its publication in 2013, has been increasingly adopted by researchers who need a concise UX metric. Figure 1 shows the standard version with its Perceived Ease-of-Use (“{Product} is easy to use”) and Perceived Usefulness (“{Product}’s capabilities meet my requirements”) items.

Read More »
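As a rough illustration of how the two items can combine into a single metric, here is a hedged sketch of one commonly described UMUX-Lite scoring approach: average the two 7-point agreement ratings and rescale to 0–100. The exact formula (and regression-adjusted variants) is an assumption here, not taken from the excerpt above.

```python
def umux_lite_score(ease: int, usefulness: int) -> float:
    """Rescale the mean of two 7-point ratings to a 0-100 scale.

    Assumed scoring sketch: (item1 + item2 - 2) / 12 * 100,
    i.e., the item mean linearly mapped from [1, 7] to [0, 100].
    """
    for item in (ease, usefulness):
        if not 1 <= item <= 7:
            raise ValueError("Items use a 1-7 agreement scale")
    return (ease + usefulness - 2) / 12 * 100

print(umux_lite_score(7, 7))  # 100.0
print(umux_lite_score(4, 4))  # 50.0
```

With both items at the scale midpoint (4), the rescaled score lands at 50; straight 7s map to 100.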

A Decision Tree for Picking the Right Type of Survey Question

Crafting survey questions involves thinking first about the content and then about the format (form follows function). Earlier, we categorized survey questions into four content types (attribute, behavior, ability, or sentiment) and four format classes (open-ended, closed-ended static, closed-ended dynamic, or task-based). As with any taxonomy, there are several ways to categorize response options (e.g.,

Read More »

Statistical Hypothesis Testing: What Can Go Wrong?

Making decisions with data inevitably means working with statistics and one of its most common frameworks: Null Hypothesis Significance Testing (NHST). Hypothesis testing can be confusing (and controversial), so in an earlier article we introduced the core framework of statistical hypothesis testing in four steps: Define the null hypothesis (H0). This is the hypothesis that

Read More »
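The four NHST steps mentioned above can be sketched in code with a one-sample t test. The benchmark value (68) and the sample scores below are made-up illustration values, and the critical value is hard-coded rather than looked up from a distribution table.

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, h0_mean):
    """Return the t statistic for H0: population mean == h0_mean."""
    n = len(sample)
    return (mean(sample) - h0_mean) / (stdev(sample) / math.sqrt(n))

# Step 1: Define the null hypothesis, e.g., the mean score equals a
#         benchmark of 68 (an assumed value for this sketch).
# Step 2: Collect data (fabricated example scores).
scores = [72, 75, 70, 78, 74, 69, 76, 73]
# Step 3: Compute the test statistic.
t = one_sample_t(scores, 68)
# Step 4: Compare against the two-tailed critical value
#         (t ≈ 2.365 for df = 7, alpha = .05).
print(round(t, 2), t > 2.365)
```

Here the observed t exceeds the critical value, so under this framework the null hypothesis would be rejected at the .05 level.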

Classifying Survey Questions into Four Content Types

In architecture, form follows function. In survey design, question format follows content. Earlier we described four classes of survey questions. These four classes are about the form, or format, of the question (e.g., open- vs. closed-ended). But before you can decide effectively on the format, you need to choose the content of the question and

Read More »

Do Too Many Response Options Confuse People?

Advice on rating scale construction is ubiquitous on the internet and in the halls of organizations worldwide. The problem is that much of the advice is based not on solid data but rather on conventional wisdom and what’s merely thought to work. Even published papers and books on survey design can present a perspective that

Read More »

Four Types of Potential Survey Errors

When we conduct a survey, we want the truth, even if we can’t handle it. But standing in the way of our dreams of efficiently collected data revealing the unvarnished truth about customers, prospects, and users are the four horsemen of survey errors. Even a well-thought-out survey will have to deal with the inevitable challenge

Read More »

Exploring Another Alternate Form for the UMUX-Lite Usefulness Item

When thinking about user experiences with websites or software, what is the difference between capabilities and functions? Is there any difference at all? In software engineering, a function is code that takes inputs, processes them, and produces outputs (such as a math function). The word capability doesn’t have a formal definition, but it most often

Read More »

From Functionality to Features: Making the UMUX-Lite Even Simpler

Like pictures and pixels on a screen, words are a type of user interface. Complex language, like complex software, can lead to misunderstanding, so words should communicate effectively while being easy to understand. The solution, to paraphrase William Zinsser, is to use words that are simple and concise—a guideline that also applies to UX questionnaires.

Read More »

Simplifying the UMUX-Lite

It seems like every few years a new standardized UX measure comes along. Standardization of UX measurement is a good thing for researchers and practitioners. Having common methods and definitions helps with objectivity, generalization, economy, and professional communication. At MeasuringU, we pay a lot of attention to the continuing evolution of standardized UX measurement. The

Read More »

What Do You Gain from Larger-Sample Usability Tests?

We typically recommend small sample sizes (5–10) for conducting iterative usability testing meant to find and fix problems (formative evaluations). For benchmark or comparative studies, where the focus is on detecting differences or estimating population parameters (summative evaluations), we recommend using larger sample sizes (20–100+). Usability testing can be used to uncover problems and assess the

Read More »
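One way to see what larger samples buy in summative studies is to look at how the margin of error of an estimate (e.g., a task completion rate) shrinks roughly with 1/√n. The sketch below uses the simple Wald interval for a proportion as an assumed illustration; small-sample practice often favors adjusted intervals instead.

```python
import math

def wald_margin(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% Wald confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error for an assumed 80% completion rate at different sample sizes.
for n in (10, 30, 100):
    print(n, round(wald_margin(0.8, n), 3))
```

Going from n = 10 to n = 100 cuts the margin of error by about a factor of three (√10 ≈ 3.16), which is why benchmark and comparative studies call for the larger sample sizes noted above.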