Comparison of UX Metrics in Moderated vs. Unmoderated Studies

Usability Metrics

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost-prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup?

Comparing Metrics

There are several ways to compare the agreement or
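As a minimal sketch of one such comparison (not the article's actual analysis), you could correlate per-task metrics gathered under both setups. The completion rates below are invented purely for illustration.

```python
# Hedged sketch: correlate hypothetical per-task completion rates measured
# in a moderated lab study and on an unmoderated platform.
from scipy.stats import pearsonr

moderated =   [0.90, 0.72, 0.55, 0.81, 0.64]   # hypothetical lab results, 5 tasks
unmoderated = [0.86, 0.70, 0.48, 0.84, 0.60]   # hypothetical unmoderated results, same tasks

r, p_value = pearsonr(moderated, unmoderated)
print(f"Agreement between setups: r = {r:.2f} (p = {p_value:.3f})")
```

A high correlation suggests the two setups rank tasks similarly, though it says nothing about absolute differences in the rates themselves.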

Read More

Task completion is one of the fundamental usability metrics. It’s the most common way to quantify the effectiveness of an interface. If users can’t do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users have completed a task often isn’t as easy as it sounds. The ways to determine task completion will vary based on the
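For instance, in an unmoderated study completion is often inferred from a validation question. Here is a small hypothetical sketch of that kind of scoring; the task answer, accepted values, and normalization rule are assumptions for illustration, not taken from the article.

```python
# Hedged sketch: score task completion (1 = pass, 0 = fail) by matching a
# participant's answer to a validation question against accepted values.
ACCEPTED_ANSWERS = {"$1,299", "1299", "1,299"}   # hypothetical correct answers

def score_completion(answer: str) -> int:
    """Return 1 if the normalized answer matches an accepted value, else 0."""
    normalized = answer.strip().lower().replace("$", "").replace(",", "")
    accepted = {a.lower().replace("$", "").replace(",", "") for a in ACCEPTED_ANSWERS}
    return int(normalized in accepted)

responses = ["$1,299", "1299", "I couldn't find it", "1,299 ", "999"]
scores = [score_completion(r) for r in responses]
print(scores)                     # [1, 1, 0, 1, 0]
print(sum(scores) / len(scores))  # completion rate = 0.6
```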

Read More

Many factors, including features and price, influence whether customers recommend software products. But usability consistently tops the list of key drivers of customer loyalty. Typically, usability accounts for between 30% and 60% of the "why" when customers do or don't recommend products. A positive experience leads more customers to recommend a product. A negative experience, predictably, causes customers to actively discourage others from buying a
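As a rough, hypothetical illustration of how such a share might be estimated, a key-driver-style regression of likelihood-to-recommend on driver ratings reports how much variance the drivers explain. The data below are fabricated and the two-driver model is an assumption, not the article's analysis.

```python
# Hedged sketch: regress simulated recommendation ratings on two driver ratings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
usability = rng.normal(5, 1.5, n)   # hypothetical usability ratings
price     = rng.normal(4, 1.5, n)   # hypothetical price-satisfaction ratings
recommend = 0.8 * usability + 0.3 * price + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([usability, price]))
model = sm.OLS(recommend, X).fit()
print(model.rsquared)   # total variance in recommendation explained by the drivers
print(model.params)     # weights: constant, usability, price
```

In practice, the per-driver share of the "why" would come from standardized coefficients or a relative-weights analysis rather than the overall R² alone.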

Read More

Quantifying the user experience is the first step to making measured improvements. One of the first questions with any metric is "what's a good score?" Like in sports, a good score depends on the metric and context. Here are 10 benchmarks with some context to help make your metrics more manageable.

1. Average Task Completion Rate is 78%: The fundamental usability metric is task completion.

Read More

“Everything should be 1 click away.” “It takes too many clicks!” For as long as there have been websites, it seems there’s been a call to reduce the number of clicks to improve the user experience. This was especially the case after Amazon released its one-click purchase button in 1999. Executives, product managers, and developers have all at one point joined the call

Read More

Completion rates are the fundamental usability metric: A binary measure of pass and fail (coded as 1 or 0) provides a simple metric of success. If users cannot complete a task, not much else matters with respect to usability or utility.

Easy to understand: They are easy to collect and easy to understand for both engineers and executives. You don't need to be a statistician
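To show how easy the collection and summary can be, here is a small sketch (with hypothetical pass/fail data) that computes a completion rate and an approximate 95% adjusted-Wald confidence interval, one common choice for the small samples typical of usability tests.

```python
# Hedged sketch: completion rate plus an adjusted-Wald 95% confidence interval.
import math

outcomes = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # 1 = completed, 0 = failed (hypothetical)
x, n = sum(outcomes), len(outcomes)
z = 1.96                                     # ~95% confidence

p_adj = (x + z**2 / 2) / (n + z**2)          # adjusted proportion
margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))

print(f"Completion rate: {x / n:.0%}")
print(f"95% CI (adjusted Wald): {max(0, p_adj - margin):.0%} to {min(1, p_adj + margin):.0%}")
```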

Read More

If a user can't find the information, does it exist? The inability of users to find products, services, and information is one of the biggest problems and opportunities for website designers. Knowing users’ goals and what top tasks they attempt on your website is an essential first step in any (re)design. Testing and improving these task experiences is the next step. On most websites a

Read More

It depends (you saw that coming). Context matters in deciding what a good completion rate is for a task; however, knowing what other task completion rates are can be a good guide for setting goals. An analysis of almost 1,200 usability tasks shows that the average task-completion rate is 78%.

The Fundamental Usability Metric

A binary task completion rate is one of the most fundamental
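As a hypothetical example of using that 78% figure as a reference point, an exact binomial test can check whether an observed completion rate is credibly above it. The sample data below are invented for illustration.

```python
# Hedged sketch: compare an observed completion rate to the 78% average.
from scipy.stats import binomtest

successes, trials = 17, 19        # hypothetical: 17 of 19 users completed the task
benchmark = 0.78                  # average completion rate cited above

result = binomtest(successes, trials, p=benchmark, alternative="greater")
print(f"Observed rate: {successes / trials:.0%}")
print(f"p-value (rate > {benchmark:.0%}): {result.pvalue:.3f}")
```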

Read More