Measurement

What gets measured gets managed. It’s more than a truism for business executives; it’s also essential for the user experience professional. In business, and in UX research in particular, you don’t want to bring focus to a wrong or flawed measure. It can lead to poor decisions and misaligned effort. In an earlier article, I discussed the differences between the most common variables in

Read More

How much did you spend on Amazon last week? If you had to provide receipts or proof of purchase, how accurate do you think your estimate would be? In an earlier article, we reported on the first wave of findings from a longitudinal UX study. We found that attitudes toward the website user experience tended to predict future purchasing behavior. In general, customers of websites

Read More

We talk a lot about measurement at MeasuringU (hence our name). But what’s the point in collecting UX metrics? What do you do with study-level metrics such as the SUS, NPS, or SUPR-Q? Or task-level metrics such as completion rates and times? To understand the purpose of UX measurement, we first need to understand the purpose of measurement itself. But settling on a definition of measurement

Read More

For measuring the user experience, I recommend using a mix of task-based and study-level measures that capture both attitudes (e.g., SUS, SUPR-Q, SEQ, and NPS) and actions (e.g., completion rates and times). The NPS is commonly collected by organizations, and therefore by UX teams (often because they are told to). Its popularity has inevitably brought skepticism. And rightfully so. After all, the NPS was touted as

Read More
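The excerpt above mentions action measures such as completion rates. As a quick illustration of how a small-sample completion rate is often summarized, here is a minimal Python sketch of an adjusted-Wald confidence interval; the success and trial counts are hypothetical and not taken from the article.

```python
# A minimal sketch (hypothetical numbers, not from the article): summarize a
# small-sample completion rate with an adjusted-Wald confidence interval.
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Return (observed rate, lower, upper) for a 95% adjusted-Wald interval."""
    n_adj = trials + z**2                      # adjusted sample size
    p_adj = (successes + z**2 / 2) / n_adj     # adjusted proportion
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return successes / trials, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

rate, low, high = adjusted_wald_ci(successes=9, trials=12)
print(f"Completion rate: {rate:.0%} (95% CI roughly {low:.0%} to {high:.0%})")
```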

It seems like each year introduces a new measure or questionnaire. Like a late-night infomercial, some are even touted as the next BIG thing, like the NPS was. New questionnaires and measures are a natural part of the evolution of measurement (especially measuring difficult things such as human attitudes). It’s a good thing. I’ll often help peer review new questionnaires published in journals and conference

Read More

The SUPR-Q (Standardized User Experience Percentile Rank Questionnaire) is a standardized questionnaire that measures the quality of the website user experience. It’s an 8-item instrument that’s gone through multiple rounds of psychometric validation and is used by hundreds of organizations around the world. Here’s a list of 10 essential things to know about the SUPR-Q. 1. It’s derived from research and refined across studies. Instead

Read More
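Since “percentile rank” is part of the SUPR-Q’s name, here is a minimal sketch of the general percentile-rank idea: comparing a study’s mean score to a normative mean and standard deviation using a normal approximation. The normative values below are made up, and this is not the SUPR-Q’s official scoring procedure.

```python
# A minimal sketch of the general percentile-rank idea (NOT the official
# SUPR-Q scoring): compare a study's mean score to a normative mean and
# standard deviation using a normal approximation. Norms here are made up.
from statistics import NormalDist

def percentile_rank(score, norm_mean, norm_sd):
    """Percentage of the normative distribution falling at or below `score`."""
    return 100 * NormalDist(mu=norm_mean, sigma=norm_sd).cdf(score)

print(f"{percentile_rank(score=4.1, norm_mean=3.9, norm_sd=0.4):.0f}th percentile")
```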

Your car doesn’t start on some mornings. Your computer crashes at the worst times. Your friend doesn’t show up to your dinner party. If something or someone isn’t reliable, it’s not only a pain but it makes your life less effective and less efficient. And what is true for people and products is true for measurement. The wording of items and the response options we

Read More
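The excerpt above is about the reliability of measurement. One common way to quantify the internal-consistency reliability of a set of questionnaire items is coefficient alpha (Cronbach’s alpha); below is a minimal Python sketch with made-up response data, offered as an illustration rather than anything from the article.

```python
# A minimal sketch of one common reliability statistic for multi-item
# questionnaires: coefficient alpha (Cronbach's alpha). Responses are made up.
import numpy as np

def cronbach_alpha(responses):
    """responses: a respondents-by-items NumPy array of numeric answers."""
    k = responses.shape[1]                          # number of items
    item_variances = responses.var(axis=0, ddof=1)  # per-item variance
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

data = np.array([
    [5, 4, 5],
    [3, 3, 4],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
])
print(f"alpha = {cronbach_alpha(data):.2f}")
```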

Benchmarking is an essential part of a plan to systematically improve the user experience. A regular benchmark study is a great way to show how design improvements may or may not be improving the user experience of websites and products. After you’ve decided you’re ready to conduct a benchmark, you’ll need to consider whether to conduct it internally within your company or outsource all or

Read More

The Net Promoter Score (NPS) is a popular metric for measuring customer loyalty. For many companies, it’s THE metric that matters. With such wide usage across different industries, departments, and companies of various sizes, it’s no surprise that many questions and controversies arise. Some are systemic—should the NPS be used as a key metric?—and some are trivial—should the NPS be treated as a percentage? In

Read More
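For readers new to the metric, the standard NPS calculation is the percentage of promoters (ratings of 9 or 10 on the 0–10 likelihood-to-recommend item) minus the percentage of detractors (ratings of 0 through 6). A minimal Python sketch with hypothetical ratings:

```python
# A minimal sketch of the standard NPS calculation: percent promoters
# (ratings of 9 or 10) minus percent detractors (0 through 6) on the
# 0-10 likelihood-to-recommend item. Ratings below are hypothetical.
def net_promoter_score(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]
print(f"NPS = {net_promoter_score(ratings):.0f}")  # 10 here; usually reported without a % sign
```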

Surveys often suffer from having too many questions. Many items are redundant or don’t measure what they’re intended to measure. Even worse, survey items are often the result of “design by committee,” with more items added over time to address someone’s concerns. Let’s say an organization uses the following items in a customer survey: Satisfaction, Importance, Usefulness, Ease of use, Happiness, Delight, Net Promoter

Read More
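One simple way to spot the kind of redundancy described above is to examine how strongly responses to the items correlate; very highly correlated items are candidates for trimming. A minimal Python sketch with hypothetical item names and ratings (not data from the article):

```python
# A minimal sketch of one way to look for redundant survey items: examine
# how strongly responses to the items correlate. Item names and ratings
# below are hypothetical; very high correlations flag candidates to trim.
import numpy as np

items = ["Satisfaction", "Usefulness", "Ease of use", "Delight"]
responses = np.array([   # rows = respondents, columns = items (1-7 ratings)
    [7, 6, 6, 7],
    [5, 5, 4, 5],
    [6, 6, 5, 6],
    [3, 4, 3, 3],
    [6, 5, 6, 6],
    [4, 4, 4, 4],
])

corr = np.corrcoef(responses, rowvar=False)  # item-by-item correlation matrix
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        print(f"{items[i]} vs. {items[j]}: r = {corr[i, j]:.2f}")
```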