
Validating a Tech Savviness Metric for UX Research

Some participants in usability studies complete a task effortlessly, while others struggle with the same task. In retrospective UX surveys, some respondents report having an easy time using a website and strongly recommend it, while others report a much poorer website experience. Why? What explains the discrepancy between experiences, especially when the

Read More »

How to Use the Finite Population Correction

What is the impact if you sample a lot of your population in a survey? Many statistical calculations—for example, confidence intervals, statistical comparisons (e.g., the two-sample t-test), and their sample size estimates—assume that your sample is a tiny fraction of your population. But what if you have a relatively modest population size (e.g., IT decision-makers

Read More »
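A quick, hedged illustration of the idea in the excerpt above (not taken from the article): the standard finite population correction, sqrt((N − n)/(N − 1)), shrinks the margin of error when a sample is a sizable fraction of a modest population. The population size, sample size, standard deviation, and confidence level below are assumed example values.

    import math

    N = 500    # assumed population size (e.g., IT decision-makers)
    n = 100    # assumed sample size
    sd = 1.2   # assumed sample standard deviation
    z = 1.96   # critical value for a ~95% confidence interval

    se = sd / math.sqrt(n)              # usual standard error (assumes a huge population)
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction factor

    print(f"Margin of error without FPC: {z * se:.3f}")        # 0.235
    print(f"Margin of error with FPC:    {z * se * fpc:.3f}")  # 0.211

In this assumed example, sampling 20% of the population reduces the margin of error by roughly 10%, which is why the correction matters when the sampling fraction is not tiny.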

A Taxonomy of Common UX Research Methods

User experience research has a wide variety of methods. On the one hand, that's good because there's usually a method for whatever research question you need to answer. On the other hand, it's hard to keep track of all these methods. Some methods, such as usability testing, are commonly used and have been around for decades.

Read More »

Does Changing the Number of Response Options Affect Rating Behavior?

Changing the number of response options in a survey might confuse participants. Over the years, we’ve heard variations on this concern articulated in a number of different ways by clients and fellow researchers. Surveys and unmoderated UX studies commonly contain a mix of five-, seven-, and eleven-point scales. That leads some to express concern. Why are

Read More »

Sample Sizes for Comparing SUS Scores

Microsoft Word is a widely used word processing program, part of the Microsoft Office suite. While its dominance has recently been challenged by Google Docs, Word still leads on features, providing many that Google’s offering lacks. But adding features can also add bloat, making common tasks harder as users

Read More »

UX-Lite Usefulness Update

Can an experience be useful without meeting your needs? The UX-Lite™ is a new questionnaire that evolved from the SUS and the UMUX-Lite. It has only two items, one measuring perceived Ease and one measuring perceived Usefulness, as shown in Figure 1. Because the verbal complexity of the original Usefulness item stands in stark contrast

Read More »

Sample Sizes for a SUS Score

Despite its age and the availability of other UX measures such as the UX-Lite™ and SUPR-Q®, the ten-item System Usability Scale (SUS) is still a very popular measure. It’s used widely in benchmark tests of software products to generate an overall score of perceived usability. We regularly collect SUS scores for dozens of consumer and

Read More »
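For readers unfamiliar with how the SUS produces its 0–100 score of perceived usability, here is a minimal sketch of the standard scoring rule (the ratings fed to it are hypothetical, not data from the article): odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5.

    def sus_score(responses):
        """Standard SUS scoring for ten responses, each on a 1-5 scale."""
        if len(responses) != 10:
            raise ValueError("SUS requires exactly ten item responses")
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs. 2,4,6,8,10
            for i, r in enumerate(responses)
        ]
        return sum(contributions) * 2.5

    # Hypothetical ratings from one respondent for items 1-10:
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0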

Completion Times and Preference for Sliders vs. Numeric Scales

In earlier articles, we investigated the effects of manipulating item formats on rating behaviors—specifically, we compared sliders to traditional five-point and eleven-point radio button numeric scales. For those analyses, we collected data from 212 respondents (U.S. panel agency, late January 2021) who used radio buttons and sliders to rate online shopping websites (e.g., Amazon) and

Read More »

Comparing Select-All-That-Apply with Two Yes/No Item Formats

We conduct a lot of surveys and unmoderated studies at MeasuringU® using our MUIQ® platform. One of the first steps in these studies involves both screening participants (ensuring you have the right people) and characterizing them (collecting enough information, such as prior experience, for further analysis). While there are over a dozen

Read More »