Metrics

UX research pulls many terms, methods, and conventions from other fields. Selecting a method is an important first choice in measuring the user experience, but an important next step is understanding the variables you’ll have to deal with when designing a study or drawing conclusions. Variables are things that change; they can be controlled and measured. It sounds simple enough, but there are actually different types

Read More
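The excerpt above is cut off before it names the types of variables. As a rough illustration, using the standard research distinction between independent (manipulated), dependent (measured), and controlled (held constant) variables, here is a minimal sketch with a made-up study; this is not necessarily the article's exact breakdown.

```python
# Hypothetical study setup illustrating common variable types in UX research.
# The categories below are standard research terminology, not the article's own list.
study_variables = {
    "independent": {"design_version": ["A", "B"]},          # what the researcher manipulates
    "dependent": ["completion_rate", "task_time", "SUS"],   # what gets measured
    "controlled": ["task set", "device type", "recruiting criteria"],  # held constant
}

for role, variables in study_variables.items():
    print(f"{role}: {variables}")
```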

We talk a lot about measurement at MeasuringU (hence our name). But what’s the point of collecting UX metrics? What do you do with study-level metrics such as the SUS, NPS, or SUPR-Q? Or task-level metrics such as completion rates and time? To understand the purpose of UX measurement, we need to understand the fundamental purpose of measurement. But settling on a definition of measurement

Read More
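The excerpt above mentions study-level metrics such as the SUS. As a concrete example of how a questionnaire becomes a number, here is a minimal sketch of standard SUS scoring (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5); the responses are hypothetical.

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten 1-5 item responses.

    Standard scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled
    by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (an odd item)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical respondent data, not from the article.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```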

Happy New Year from all of us at MeasuringU! In 2019 we posted 46 new articles and added significant new features to MUIQ (our UX testing platform), including think-aloud videos with picture-in-picture and an advanced UX metrics dashboard. We hosted our seventh UX Measurement Bootcamp, and MeasuringU Press published Jim Lewis’s book, Using the PSSUQ and CSUQ in User Experience Research and Practice. The topics

Read More

It was another busy year at MeasuringU. We posted 50 new articles, added new features to MUIQ (our UX testing platform), hosted our 6th UX Bootcamp, and released the book Benchmarking the User Experience. We also moved into a new, bigger space in Denver’s Cherry Creek neighborhood. It’s three times the size of our old space, with state-of-the-art labs, and we hosted UX Book Club this

Read More

Businesses are full of metrics. Increasingly, those metrics quantify the user experience (which is a good thing). Collecting consistent and standardized metrics allows organizations to better understand the current user experience of websites, software, and apps. It also allows teams to track changes over time and compare against competitors and industry benchmarks. The idea of quantifying experiences is still new for many people, which is one

Read More

A benchmark study tells you where a website, app, or product falls relative to some meaningful comparison. This comparison can be to an earlier version, the competition, or an industry standard. Benchmark studies are often called summative evaluations because the emphasis is less on finding problems and more on quantitatively assessing the current experience. To quantify, you need metrics, and UX benchmark studies can have quite

Read More
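Quantifying a benchmark usually means pairing each metric with a measure of precision. As one common approach for small-sample binary data like task completion (an illustration, not necessarily the article's own example), here is a minimal sketch of an adjusted-Wald confidence interval; the sample numbers are hypothetical.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald confidence interval for a completion rate (binomial proportion).

    Adds z^2 pseudo-trials (half of them successes) before applying the usual
    Wald formula, which behaves better than the plain Wald interval at small n.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical benchmark: 9 of 12 participants completed the task.
low, high = adjusted_wald_ci(9, 12)
print(f"completion rate 75%, 95% CI roughly {low:.0%} to {high:.0%}")
```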

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost-prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup? There are several ways to compare the agreement or

Read More
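The excerpt above is cut off where it starts to list ways to compare agreement. One simple option (an assumption for illustration, not necessarily what the article covers) is to correlate the same task metric collected in both setups and look at the average absolute difference; the numbers below are made up.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-task completion rates for the same five tasks run in a
# moderated lab study and on an unmoderated platform (not data from the article).
lab = [0.90, 0.75, 0.60, 0.85, 0.50]
unmoderated = [0.85, 0.70, 0.55, 0.90, 0.45]

# Pearson correlation indicates whether tasks rank similarly across setups;
# the mean absolute difference indicates how far apart the values are.
r = correlation(lab, unmoderated)
mean_abs_diff = sum(abs(a - b) for a, b in zip(lab, unmoderated)) / len(lab)
print(f"r = {r:.2f}, mean absolute difference = {mean_abs_diff:.2f}")
```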

UX metrics are a mix of attitudes (what people think) and actions (what people do). To fully measure the user experience, you need to measure both. UX metrics are also influenced by more than an interface. Users have preconceived notions about companies, and these notions affect both how they think and what they do when they interact with a brand, whether in a store or online. Brand attitudes

Read More

It was another busy year on MeasuringU.com, with 50 new articles, a new website, a new unmoderated research platform (MUIQ), and our 5th UX Bootcamp. In 2017, over 1.2 million people viewed our articles. Thank you! The most common topics we covered included usability testing, benchmarking, the 3Ms (methods, metrics, and measurement), and working with online panels. Here’s a summary of the articles I

Read More

Task completion is one of the fundamental usability metrics. It’s the most common way to quantify the effectiveness of an interface. If users can’t do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users completed a task often isn’t that easy. The ways to determine task completion will vary based on the

Read More
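As the excerpt above notes, deciding whether a task was actually completed isn't always straightforward. One common approach in unmoderated studies (shown here as an illustration, not as the article's prescription) is to check a post-task verification answer against a set of accepted answers; the question and responses below are hypothetical.

```python
def task_completed(answer: str, accepted_answers: set[str]) -> bool:
    """Mark a task complete if the participant's verification answer matches
    one of the accepted answers (ignoring case and surrounding whitespace)."""
    return answer.strip().lower() in accepted_answers

# Hypothetical responses to the verification question "What was the shipping cost?"
accepted = {"$4.99", "4.99"}
responses = ["$4.99", " 4.99 ", "free", "$5.99"]
completion_rate = sum(task_completed(r, accepted) for r in responses) / len(responses)
print(f"completion rate: {completion_rate:.0%}")  # 50%
```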