What Is a Typical No-Show Rate for Moderated Studies?

You have grand research plans. Prototype ready. Stakeholders scheduled to observe. The time comes, and the participant is a no-call no-show. No-shows are a fact of life when conducting UX research. They happen in both remote and in-person research, so you need to plan for them. While over-recruiting makes sense, you also don’t want to over-over-recruit and


Do UX Certifications Pay Off?

What is the value of UX certification? How do you learn the methods UX professionals use? A few university programs offer courses in UX or related fields. Certainly, nothing beats hands-on experience conducting your own usability test on a prototype, running a survey, and interviewing stakeholders. But how do you demonstrate to others (employers in


Confirming the Perceived Website Clutter Questionnaire (PWCQ)

Poor layout, irrelevant ads, overwhelming videos: websites can be cluttered. Clutter can lead to a poor user experience, and poor experiences repel users. So how does one measure clutter? Earlier, we did a deep dive into the literature to see how clutter was first defined and then measured. We found the everyday concept of clutter


Building a Website Clutter Questionnaire

Clutter, clutter everywhere, nor any questionnaire to measure. In a previous article, we described our search for a measure of perceived clutter in academic literature and web posts, but we were left unquenched. We found that the everyday conception of clutter includes two components that suggest different decluttering strategies: the extent to which needed objects


In Search of a Clutter Metric for Websites

A disorganized closet. A messy bedroom. Clutter can make a space feel stressful and make it hard to find things. But it’s not just your mother talking about clutter. We often use the same language to describe digital spaces like websites. In our UX research practice, we have frequently encountered users and designers criticizing website


12 Things to Know About Using the TAC-10 to Measure Tech Savviness

How do you measure tech savviness? UX researchers are, of course, not in the business of assessing individual performance. But differences in individual technical abilities certainly have an impact on performance. A good measure of tech savviness can help researchers target levels of tech savviness in recruiting (e.g., low, high, or a mix) and classify


Is the UX-Lite Predictive of Future Behavior?

It’s hard to call a product or app successful if people don’t use it. But how will you know if people will use a product and continue to use it? There’s a strong need to understand technology adoption and usage. The first step in predicting and understanding why people do or don’t adopt tech is


How to Score and Interpret the UX-Lite

Is the product useful? Is it easy to use? Numerous variables affect whether we purchase, use, and adopt a new technology. But two consistent contributors are whether it does what we want it to do (usefulness) and if it’s easy to use (usability). These apply to consumer and business products. This “model” of tech adoption
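As a rough illustration of the idea in this teaser (not necessarily the exact method the full article describes), a common convention for UMUX-Lite-style scores is to rescale each of the two items — usefulness and ease of use — from its Likert range to 0–100 and average them. A minimal sketch, assuming five-point items:

```python
def ux_lite_score(usefulness, ease, points=5):
    """Rescale two Likert items (1..points) to 0-100 and average them.

    This illustrates one common scoring convention for two-item
    usefulness/ease measures; the article's exact scheme may differ.
    """
    for item in (usefulness, ease):
        if not 1 <= item <= points:
            raise ValueError("items must be within the scale range")

    def rescale(x):
        # Map 1..points linearly onto 0..100.
        return (x - 1) / (points - 1) * 100

    return (rescale(usefulness) + rescale(ease)) / 2

# Example: usefulness = 4, ease = 5 on five-point scales
print(ux_lite_score(4, 5))  # 87.5
```

Averaging the rescaled items weights usefulness and ease equally, matching the teaser's framing of the two as joint contributors to adoption.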


The Evolution of the Single Ease Question (SEQ)

The primary driving forces of evolution are variation, competition, and natural selection. In the domain of rating scales, variants are developed and tested to see which variant has the best measurement properties, and the winner of that competition survives to appear in future studies. The Single Ease Question (SEQ®) is a single seven-point question asked
