Comparison of UX Metrics in Moderated vs. Unmoderated Studies

Usability Metrics

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost-prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup? Comparing Metrics: There are several ways to compare the agreement or…
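One common way to quantify agreement between study types is to correlate the same metric, collected the same way, across both. The sketch below computes a Pearson correlation between per-task completion rates from a moderated and an unmoderated run of the same tasks; all the data values are hypothetical, invented for illustration.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-task completion rates for the same five tasks,
# run once in a moderated lab and once on an unmoderated platform.
moderated = [0.90, 0.72, 0.85, 0.60, 0.95]
unmoderated = [0.88, 0.65, 0.80, 0.55, 0.97]

print(round(pearson_r(moderated, unmoderated), 3))
```

A correlation near 1 suggests the two setups rank tasks similarly, even if the absolute rates differ; a low correlation flags tasks where the collection method itself may be changing behavior.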


Task completion is one of the fundamental usability metrics. It's the most common way to quantify the effectiveness of an interface. If users can't do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users have completed a task often isn't easy. The ways to determine task completion will vary based on the…
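As a concrete illustration of coding completion, the snippet below combines a behavioral criterion (reaching a confirmation page) with a verification-question check before coding success as 1 or 0. The record fields and the correct answer are hypothetical, made up for this sketch, not from any actual study.

```python
# Hypothetical unmoderated-study records: each row is a participant's
# end state plus their answer to a verification question.
records = [
    {"reached_confirmation": True, "answer": "24.99"},
    {"reached_confirmation": True, "answer": "19.99"},
    {"reached_confirmation": False, "answer": "24.99"},
]

CORRECT_ANSWER = "24.99"  # hypothetical task-validation value

# Code a task as complete (1) only if BOTH criteria pass:
# the user reached the confirmation page AND answered correctly.
completions = [
    1 if r["reached_confirmation"] and r["answer"] == CORRECT_ANSWER else 0
    for r in records
]
rate = sum(completions) / len(completions)
print(completions, rate)
```

Requiring both criteria guards against false positives (a user who stumbled onto the confirmation page) and false negatives (a user who found the answer but closed the tab early).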


Many factors, including features and price, influence whether customers recommend software products. But usability consistently tops the list of key drivers of customer loyalty. Typically, usability accounts for between 30% and 60% of the "why" when customers do or don't recommend products. A positive experience leads more customers to recommend a product. A negative experience, predictably, causes customers to actively discourage others from buying a…


Quantifying the user experience is the first step to making measured improvements. One of the first questions with any metric is "What's a good score?" As in sports, a good score depends on the metric and context. Here are 10 benchmarks with some context to help make your metrics more manageable. 1. Average Task Completion Rate is 78%: The fundamental usability metric is task completion.
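To judge an observed completion rate against a benchmark like 78%, one common approach is to put a confidence interval around the observed rate; for the small samples typical of usability tests, the adjusted-Wald interval is a reasonable choice. This is a minimal sketch using hypothetical data (9 of 12 users completing a task).

```python
from math import sqrt

def adjusted_wald_ci(successes, n, z=1.96):
    """95% adjusted-Wald CI: add z^2/2 successes and z^2 trials, then Wald."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical result: 9 of 12 users completed the task (75% observed).
lo, hi = adjusted_wald_ci(9, 12)
print(f"{lo:.2f} to {hi:.2f}")
```

If the benchmark (0.78) falls inside the interval, the observed rate is statistically indistinguishable from it at this sample size; a wide interval is itself a signal that more participants are needed before drawing conclusions.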


Everything should be 1 click away. It takes too many clicks! For as long as there have been websites, it seems there's been a call to reduce the number of clicks to improve the user experience. This was especially the case after Amazon released its one-click purchase button in 1999. Executives, product managers, and developers have all, at one point, joined the call…


Completion rates are the fundamental usability metric: a binary measure of pass and fail (coded as 1 or 0) provides a simple metric of success. If users cannot complete a task, not much else matters with respect to usability or utility. Completion rates are also easy to understand: they are easy to collect and easy to interpret for both engineers and executives. You don't need to be a statistician…


If a user can't find the information, does it exist? The inability of users to find products, services, and information is one of the biggest problems and opportunities for website designers. Knowing users' goals and what top tasks they attempt on your website is an essential first step in any (re)design. Testing and improving these task experiences is the next step. On most websites a…


It depends (you saw that coming). Context matters in deciding what a good completion rate is for a task; however, knowing what other task-completion rates are can be a good guide for setting goals. An analysis of almost 1,200 usability tasks shows that the average task-completion rate is 78%. The Fundamental Usability Metric: A binary task-completion rate is one of the most fundamental…


In usability testing, we ask users to complete tasks and often ask them to rate how difficult or easy each task was. Does it matter when you ask this question? What happens if we interrupt users during the task instead of asking after the task experience is over? Almost ten years ago, researchers at Intel asked 28 users to complete various tasks across websites…
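One simple way to compare ratings gathered during a task with ratings gathered afterward is a two-sample t statistic on the group means. The sketch below uses invented 7-point ease ratings (not the Intel data) and Welch's formulation, which doesn't assume equal variances between the two groups.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances (n-1)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 7-point ease ratings (7 = very easy), eight users per group:
# one group rated the task mid-task, the other rated it after finishing.
during = [4, 5, 3, 4, 5, 4, 3, 5]
after = [5, 6, 5, 6, 7, 5, 6, 6]

print(round(welch_t(during, after), 2))
```

A large-magnitude t statistic here would suggest the timing of the question shifts the ratings; the p-value would then come from the t distribution with Welch-Satterthwaite degrees of freedom (omitted to keep the sketch short).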
