The Anatomy of a Survey Question

We’ve written extensively about question types, the elements of good and bad writing, why people forget, and common problems with survey questions. But how do you get started writing questions? Few professionals we know have taken a formal course in survey development and instead rely on their experiences or best practices. Despite being called questions, …


“Does What I Need It to Do”: Assessing an Alternate Usefulness Item

The UMUX-Lite is a two-item standardized questionnaire that, since its publication in 2013, has been increasingly adopted by researchers who need a concise UX metric. Figure 1 shows the standard version with its Perceived Ease-of-Use (“{Product} is easy to use”) and Perceived Usefulness (“{Product}’s capabilities meet my requirements”) items. Figure 1: Standard …


A Decision Tree for Picking the Right Type of Survey Question

Crafting survey questions involves thinking first about the content and then about the format (form follows function). Earlier, we categorized survey questions into four content types (attribute, behavior, ability, or sentiment) and four format classes (open-ended, closed-ended static, closed-ended dynamic, or task-based). As with any taxonomy, there are several ways to categorize response options (e.g., …)


Quant or Qual Research? 27 Words to Help You Decide

When approaching a UX research project, one of the first things to consider is the method, and UX research has many methods. Methods can be categorized as quantitatively focused (e.g., A/B tests) or qualitatively focused (e.g., interviews). Most UX research methods can collect both qualitative and quantitative data. For example, surveys often collect both closed-ended …


Seven Reasons People Misinterpret Survey Questions

As with all research methods, many things can go wrong in surveys, from problems with sampling to mistakes in analysis. To draw valid conclusions from your survey, you need accurate responses, but participants may provide inaccurate information: they could forget the answers or simply answer incorrectly. One common reason respondents answer survey …


Exploring Another Alternate Form for the UMUX-Lite Usefulness Item

When thinking about user experiences with websites or software, what is the difference between capabilities and functions? Is there any difference at all? In software engineering, a function is code that takes inputs, processes them, and produces outputs (such as a math function). The word capability doesn’t have a formal definition, but it most often …


Nine Words to Watch for When Writing Survey Questions

In UX research, both studies and surveys contain a lot of questions. Getting those questions right can go a long way in improving the clarity and quality of the findings. For example, we’ve recently written about how to make survey questions clearer. And while there are many stories of how the change of a single …


Five Reasons to Use Open-Ended Questions

Despite the ease with which you can create surveys using software like our MUIQ platform, selecting specific questions and response options can be a bit more involved. Most surveys contain a mix of closed-ended (often rating scales) and open-ended questions. We’ve previously discussed 15 types of common rating scales and have published numerous articles in …


Rating Scale Best Practices: 8 Topics Examined

Rating scales have been around for close to a century. It’s no wonder there are many questions about best practices and pitfalls to avoid. And like any topic that’s been around for that long, there are urban legends, partial truths, context-dependent findings, and just plain misconceptions about the “right” and “wrong” way to use and …


Are Sliders More Sensitive than Numeric Rating Scales?

Sliders are a type of visual analog scale that can be used with many online survey tools such as our MUIQ platform. The literature on their overall effectiveness is mixed (Roster et al., 2015). On the positive side, evidence indicates that sliders might be more engaging to respondents. On the negative side, evidence also indicates …
