Comparison of UX Metrics in Moderated vs. Unmoderated Studies

Usability Metrics

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost-prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup?

Comparing Metrics

There are several ways to compare the agreement or

Read More
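
One common way to compare agreement between a moderated and an unmoderated study is to pair the same tasks across both and look at correlation as well as absolute differences. The sketch below illustrates that idea with hypothetical task-level completion rates; the data and the choice of metrics are assumptions for illustration, not figures from the article.

```python
# A minimal sketch (hypothetical data): compare moderated vs. unmoderated
# results by pairing the same five tasks and computing Pearson r plus the
# mean absolute difference between the two studies.
from math import sqrt

moderated   = [0.85, 0.70, 0.92, 0.60, 0.78]   # hypothetical completion rates
unmoderated = [0.80, 0.65, 0.95, 0.55, 0.74]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(moderated, unmoderated)
mean_abs_diff = sum(abs(a - b) for a, b in zip(moderated, unmoderated)) / len(moderated)
print(f"Pearson r = {r:.2f}, mean absolute difference = {mean_abs_diff:.3f}")
```

A high correlation with a small absolute difference would suggest the two setups are telling the same story at the task level.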

Task completion is one of the fundamental usability metrics. It’s the most common way to quantify the effectiveness of an interface. If users can’t do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users have completed a task often isn’t as easy as it sounds. The ways to determine task completion will vary based on the

Read More
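
As a rough illustration of how completion might be determined in an unmoderated study, the sketch below codes each attempt as 1 or 0 from two hypothetical success criteria (reaching a target page and giving a correct validation answer). The criteria, URLs, and answers are invented; real rules differ by task and platform.

```python
# A minimal sketch, assuming hypothetical success criteria: an attempt is
# coded 1 (complete) only if the participant reached the target page AND
# gave the correct validation answer; otherwise 0.
def code_completion(attempt, target_url="/checkout/confirmation", correct_answer="$42.99"):
    reached_target = attempt.get("final_url") == target_url
    answered_right = attempt.get("answer", "").strip() == correct_answer
    return 1 if (reached_target and answered_right) else 0

attempts = [
    {"final_url": "/checkout/confirmation", "answer": "$42.99"},  # success
    {"final_url": "/checkout/confirmation", "answer": "$49.99"},  # wrong answer
    {"final_url": "/cart", "answer": "$42.99"},                   # gave up mid-flow
]
print([code_completion(a) for a in attempts])   # [1, 0, 0]
```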

Many factors, including features and price, influence whether customers recommend software products. But usability consistently tops the list of key drivers of customer loyalty. Typically, usability accounts for between 30% and 60% of the "why" when customers do or don't recommend products. A positive experience leads more customers to recommend a product. A negative experience, predictably, causes customers to actively discourage others from buying a

Read More
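
To make the "share of the why" idea concrete, the sketch below runs a simple one-predictor regression of likelihood-to-recommend on a usability score and reports R-squared, the proportion of variance explained. The data and the use of SUS as the usability measure are assumptions for illustration only, not the article's dataset or method.

```python
# A rough illustration with invented data: regress likelihood-to-recommend
# (0-10) on a usability score and see how much variance (R^2) it explains.
from statistics import mean

usability_sus = [62, 70, 75, 80, 85, 90, 55, 68, 77, 95]   # hypothetical SUS scores
ltr           = [ 5,  6,  7,  8,  8, 10,  4,  6,  8, 10]   # likelihood to recommend

def simple_regression(x, y):
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    predictions = [intercept + slope * a for a in x]
    ss_res = sum((b - p) ** 2 for b, p in zip(y, predictions))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

slope, intercept, r2 = simple_regression(usability_sus, ltr)
print(f"LTR = {intercept:.2f} + {slope:.3f} * SUS;  R^2 = {r2:.2f}")
```

An R-squared in the 0.30 to 0.60 range is what the "30% to 60% of the why" framing refers to.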

Quantifying the user experience is the first step to making measured improvements. One of the first questions with any metric is "What's a good score?" Like in sports, a good score depends on the metric and context. Here are 10 benchmarks with some context to help make your metrics more manageable.

1. Average Task Completion Rate is 78%: The fundamental usability metric is task completion.

Read More
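
As a worked example of judging a score against a benchmark like 78%, the sketch below computes an adjusted-Wald (Agresti-Coull) 95% confidence interval around an observed completion rate from a small, hypothetical sample.

```python
# Adjusted-Wald (Agresti-Coull) interval for a completion rate, then a
# comparison against the 78% benchmark (sample numbers are hypothetical).
from math import sqrt

successes, n = 9, 12          # 9 of 12 participants completed the task
z = 1.96                      # ~95% confidence
n_adj = n + z ** 2
p_adj = (successes + z ** 2 / 2) / n_adj
margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
low, high = p_adj - margin, p_adj + margin

print(f"Observed: {successes / n:.0%}; 95% CI: {low:.0%} to {high:.0%}")
# If the interval contains 78%, the data can't distinguish this task from the benchmark.
```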

Everything should be 1 click away. It takes too many clicks! For as long as there have been websites, it seems there's been a call to reduce the number of clicks to improve the user experience. This was especially the case after Amazon released its one-click purchase button in 1999. Executives, product managers, and developers at one point have all joined the call

Read More

Completion rates are:

The fundamental usability metric: A binary measure of pass and fail (coded as 1 or 0) provides a simple metric of success. If users cannot complete a task, not much else matters with respect to usability or utility.

Easy to understand: They are easy to collect and easy to understand for both engineers and executives. You don't need to be a statistician

Read More
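
A minimal sketch of that binary coding, using invented participant data: each attempt is recorded as 1 (pass) or 0 (fail), and per-task results roll up into a report that both engineers and executives can read at a glance.

```python
# Roll binary pass/fail codes up into per-task completion rates
# (participants and task names are hypothetical).
study = {
    "Find return policy":      [1, 1, 0, 1, 1, 1, 0, 1],
    "Change shipping address": [1, 0, 0, 1, 1, 0, 1, 1],
    "Apply promo code":        [1, 1, 1, 1, 0, 1, 1, 1],
}
for task, codes in study.items():
    rate = sum(codes) / len(codes)
    print(f"{task:<25} {sum(codes)}/{len(codes)}  ({rate:.0%})")
```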

If a user can't find the information, does it exist? The inability of users to find products, services, and information is one of the biggest problems and opportunities for website designers. Knowing users' goals and what top tasks they attempt on your website is an essential first step in any (re)design. Testing and improving these task experiences is the next step. On most websites a

Read More

It depends (you saw that coming). Context matters in deciding what a good completion rate is for a task; however, knowing what other task-completion rates are can be a good guide for setting goals. An analysis of almost 1,200 usability tasks shows that the average task-completion rate is 78%.

The Fundamental Usability Metric

A binary task-completion rate is one of the most fundamental

Read More
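
One way to use the 78% average as a reference point is an exact binomial check: how likely would the observed result be if the task's true completion rate really were 0.78? The sketch below uses hypothetical numbers and is only one of several reasonable approaches.

```python
# Exact binomial lower-tail probability against the 78% average
# (observed counts are hypothetical).
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

observed, n = 7, 15            # 7 of 15 participants completed the task
p_lower_tail = binom_cdf(observed, n, 0.78)
print(f"P(<= {observed} completions out of {n} if the true rate were 78%) = {p_lower_tail:.3f}")
```

A small probability suggests the task is performing below the benchmark; a larger one means the sample can't rule out an average task.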