Blog

UMUX-Lite

From Functionality to Features: Making the UMUX-Lite Even Simpler

Like pictures and pixels on a screen, words are a type of user interface. Complex language, like complex software, can lead to misunderstanding, so words should communicate effectively while being easy to understand. The solution, to paraphrase William Zinsser, is to use words that are simple and concise—a guideline that also applies to UX questionnaires.

Read More »
Randomization

What a Randomization Test Is and How to Run One in R

The two-sample t-test is one of the most widely used statistical tests, assessing whether the difference between two sample means is statistically significant. It can be used to compare two samples of many UX metrics, such as SUS scores, SEQ scores, and task times. The t-test, like most statistical tests, has certain requirements (assumptions) for its… (a minimal R sketch of the randomization-test alternative follows this teaser)

Read More »
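
The linked article walks through the method in detail; as a rough sketch of the idea, the R snippet below runs a simple two-sample randomization test. The SUS-like scores and the number of resamples are illustrative assumptions, not data from the article.

```r
# Minimal sketch of a two-sample randomization (permutation) test.
# The scores below are made-up SUS-like values for two designs (illustrative only).
set.seed(123)
group_a <- c(72.5, 80.0, 65.0, 90.0, 77.5, 85.0, 70.0, 82.5)
group_b <- c(60.0, 67.5, 72.5, 55.0, 70.0, 62.5, 75.0, 65.0)

observed_diff <- mean(group_a) - mean(group_b)
pooled <- c(group_a, group_b)
n_a <- length(group_a)

# Build the null distribution by repeatedly reshuffling group membership
n_resamples <- 10000
resampled_diffs <- replicate(n_resamples, {
  shuffled <- sample(pooled)
  mean(shuffled[1:n_a]) - mean(shuffled[-(1:n_a)])
})

# Two-sided p-value: proportion of reshuffled differences at least as extreme as observed
mean(abs(resampled_diffs) >= abs(observed_diff))
```

Because the reference distribution is built from the observed scores themselves, this approach sidesteps the normality assumption the t-test relies on.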
Change Verbs

From Soared to Plummeted: Can We Quantify Change Verbs?

Cases spike, home prices surge, and stock prices tank: we read headlines like these daily. But what is a spike and how much is a surge? When does something crater versus tank or just fall? Headlines are meant to grab our attention. They often communicate the dramatic story the author wants to tell rather than…

Read More »
Rating Scales

Rating Scale Best Practices: 8 Topics Examined

Rating scales have been around for close to a century. It’s no wonder there are many questions about best practices and pitfalls to avoid. And like any topic that’s been around for that long, there are urban legends, partial truths, context-dependent findings, and just plain misconceptions about the “right” and “wrong” way to use and…

Read More »
Sliders

Are Sliders More Sensitive than Numeric Rating Scales?

Sliders are a type of visual analog scale that can be used with many online survey tools such as our MUIQ platform. The literature on their overall effectiveness is mixed (Roster et al., 2015). On the positive side, evidence indicates that sliders might be more engaging to respondents. On the negative side, evidence also indicates…

Read More »

Latin and Greco-Latin Experimental Designs for UX Research

During the fall in the northern hemisphere, leaves change colors, birds fly south, and the temperature gets colder. Do the birds change the color of the leaves, and does their departure make the temperature colder? What if you gave participants two versions of a rating scale, with the first having responses ordered from strongly disagree… (a minimal counterbalancing sketch in R follows this teaser)

Read More »
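
As a rough illustration of the counterbalancing idea behind these designs, the R snippet below builds a cyclic Latin square. The four condition labels are illustrative placeholders, not the actual designs discussed in the article.

```r
# Minimal sketch of a cyclic Latin square for counterbalancing presentation order.
# The condition labels are placeholders (e.g., four rating-scale versions).
conditions <- c("A", "B", "C", "D")
k <- length(conditions)

# Row i is the presentation order for participant group i;
# each condition appears exactly once in every row and every column.
latin_square <- t(sapply(0:(k - 1), function(shift) conditions[((0:(k - 1)) + shift) %% k + 1]))
latin_square
```

Because every condition appears once in each presentation position across the groups, order effects can be separated from condition effects.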

Improving the Prediction of the Number of Usability Problems

Paraphrasing the statistician George Box, all models are wrong, some are useful, and some can be improved. In a recent article, we reviewed the most common way of modeling problem discovery, which is based on a straightforward application of the cumulative binomial probability formula: P(x ≥ 1) = 1 − (1 − p)^n. Well, it’s straightforward if you like… (a quick numerical check of the formula follows this teaser)

Read More »
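
As a quick check of the formula quoted in the teaser, the snippet below evaluates P(x ≥ 1) = 1 − (1 − p)^n in R; the values of p and n are illustrative assumptions, not figures from the article.

```r
# Cumulative binomial discovery formula: P(x >= 1) = 1 - (1 - p)^n
p <- 0.31  # assumed probability that a single user encounters the problem
n <- 5     # assumed number of users tested
1 - (1 - p)^n  # probability the problem is observed at least once (about 0.84)
```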

Revisiting the Evidence for the Left-Side Bias in Rating Scales

Are people more likely to select response options that are on the left side of a rating scale? About ten years ago, we provided a brief literature review of the published evidence, which suggested that this so-called left-side bias not only existed but also was detected almost 100 years ago in some of the earliest…

Read More »

The UX of Vacation Rental Websites

The COVID-19 pandemic has led to significant changes in how people have vacationed in 2020. To get away from it all without spending time in crowded places, vacationers have turned to vacation rental websites and have planned longer stays. For example, Airbnb recently reported a year-to-year doubling of long-term (>28 days) rentals and a shift…

Read More »

Simplifying the UMUX-Lite

It seems like every few years a new standardized UX measure comes along. Standardization of UX measurement is a good thing for researchers and practitioners. Having common methods and definitions helps with objectivity, generalization, economy, and professional communication. At MeasuringU, we pay a lot of attention to the continuing evolution of standardized UX measurement…

Read More »

How to Code Errors in Unmoderated Studies

Errors can provide a lot of diagnostic information about the root causes of UI problems and the impact such problems have on the user experience. The frequency of errors—even trivial ones—also provides a quantitative description of the performance of a task. The process of observing and coding errors is more time-consuming and dependent on researcher…

Read More »