UMUX-lite

When thinking about user experiences with websites or software, what is the difference between capabilities and functions? Is there any difference at all? In software engineering, a function is code that takes inputs, processes them, and produces outputs (such as a math function). The word capability doesn’t have a formal definition, but it most often appears in the phrase Capability Maturity Model, a formal model

Read More
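
The excerpt above describes the software-engineering sense of a function: code that takes inputs, processes them, and produces outputs. As a minimal sketch of that idea (the example and the mean_rating name are ours for illustration, not something from the article):

```python
# A minimal illustration of "function" in the software-engineering sense:
# inputs in, processing in the middle, outputs back. Example is illustrative only.

def mean_rating(ratings: list[float]) -> float:
    """Input: a list of ratings. Processing: sum and divide. Output: the mean."""
    return sum(ratings) / len(ratings)

print(mean_rating([4, 5, 3, 5]))  # 4.25
```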

Happy New Year from all of us at MeasuringU! 2020 was a crazy year, but we still managed to post 48 new articles and continued improving MUIQ, our UX testing platform. We hosted our seventh UX Measurement Bootcamp, this time virtually. The change of format was a challenge, but it was fantastic to work with attendees from all over the world. The topics we wrote

Read More

Like pictures and pixels on a screen, words are a type of user interface. Complex language, like complex software, can lead to misunderstanding, so words should communicate effectively while being easy to understand. The solution, to paraphrase William Zinsser, is to use words that are simple and concise—a guideline that also applies to UX questionnaires. This brings us to the UMUX-Lite, an increasingly popular UX

Read More
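
For readers new to the UMUX-Lite mentioned above, here is a minimal scoring sketch, assuming the common two-item format (a capabilities item and an ease-of-use item) rated on seven-point agreement scales and linearly rescaled to 0–100. The function name and sample values are ours; the article may discuss other variants.

```python
# Hedged sketch (ours, not from the article): scoring a two-item UMUX-Lite,
# assuming two 1-7 agreement items whose sum is linearly rescaled to 0-100.

def umux_lite_score(capabilities: int, ease_of_use: int) -> float:
    """Rescale the sum of two 1-7 item scores to a 0-100 scale."""
    for item in (capabilities, ease_of_use):
        if not 1 <= item <= 7:
            raise ValueError("Items must be on a 1-7 scale.")
    return (capabilities + ease_of_use - 2) / 12 * 100

print(umux_lite_score(6, 5))  # 75.0
```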

Cases spike, home prices surge, and stock prices tank: we read headlines like these daily. But what is a spike and how much is a surge? When does something crater versus tank or just fall? Headlines are meant to grab our attention. They often communicate the dramatic story the author wants to tell rather than what the data say. It isn’t easy to write headlines.

Read More

Rating scales have been around for close to a century. It’s no wonder there are many questions about best practices and pitfalls to avoid. And like any topic that’s been around for that long, there are urban legends, partial truths, context-dependent findings, and just plain misconceptions about the “right” and “wrong” way to use and interpret rating scales. We’ve been researching and conducting our own

Read More

It seems like every few years a new standardized UX measure comes along. Standardization of UX measurement is a good thing for researchers and practitioners. Having common methods and definitions helps with objectivity, generalization, economy, and professional communication. At MeasuringU, we pay a lot of attention to the continuing evolution of standardized UX measurement. The UMUX-Lite is a relatively recent measure that has been attracting

Read More

There are a lot of opinions about the best formats for agreement scales. Sometimes those opinions are strongly held and can lead to lengthy, heated discussions within research teams. When format differences affect measurement properties, those discussions may be time well spent, but when the formats don’t matter (or matter very little), the time is wasted. That’s why we have an ongoing research goal to

Read More

Somewhat agree, very satisfied, extremely likely. The labels used on the points of rating scales can affect responses in often unpredictable ways. What’s more, certain terms can get lost in translation when writing surveys for international usage. Some terms may have subtly different meanings, possibly making cross-cultural comparisons problematic. While numbers are universally understood and don’t need translation, does “5” on a seven-point scale properly

Read More

Five-star reviews. Whether you're rating a product on Amazon, a dining experience on Yelp, or a mobile app in the App or Play Store, you can see that the five-star rating system is ubiquitous. Does the familiarity of stars offer a better rating system than traditional numbered scales? We recently reported a comparison of standard five-point linear numeric scales with 0–100-point slider scales made with UMUX-Lite

Read More
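
When response formats differ, as with the five-point scales and 0–100 sliders mentioned above, one common way to compare them is to linearly rescale both to 0–100 before comparing means. The helper below and the sample responses are ours for illustration; they are not the article's analysis or data.

```python
# Hedged sketch (ours, not the article's analysis): put responses from
# different formats on a common 0-100 footing via linear interpolation.

def rescale_to_0_100(value: float, low: float, high: float) -> float:
    """Linearly rescale a response from its native [low, high] range to 0-100."""
    return (value - low) / (high - low) * 100

five_point = [4, 5, 3, 4, 5]       # illustrative responses on a 1-5 scale
slider = [72, 88, 55, 64, 90]      # illustrative responses on a 0-100 slider

five_point_rescaled = [rescale_to_0_100(x, 1, 5) for x in five_point]
slider_rescaled = [rescale_to_0_100(x, 0, 100) for x in slider]  # unchanged

print(sum(five_point_rescaled) / len(five_point_rescaled))  # 80.0
print(sum(slider_rescaled) / len(slider_rescaled))           # 73.8
```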

There are many ways to format rating scales. Recently we have explored labeling neutral points, labeling all or some response options, altering the number of response options, and comparing agreement vs. item-specific endpoint labels. Each of these formatting decisions has a variety of opinions and research, both pro and con, in the scientific literature at large. Our controlled studies on these topics in the context of

Read More