Unmoderated Research

If users can’t complete a task, not much else matters. Consequently, task completion is one of the fundamental UX measures and one of the most commonly collected metrics, even in small-sample formative studies and studies of low-fidelity prototypes. Task completion is usually easy to collect, and it’s easy to understand and communicate. It’s typically coded as a binary measure (success or failure) dependent on a participant…
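To illustrate the binary coding described above, here is a minimal Python sketch (with made-up outcome data, not figures from the article) that computes a completion rate and an adjusted-Wald 95% confidence interval, a common choice for small-sample binary data:

```python
from math import sqrt

# Hypothetical task outcomes: 1 = success, 0 = failure (made-up data).
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

def completion_rate_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) 95% CI for a completion rate."""
    n_adj = n + z**2                        # add z^2 "pseudo-trials"
    p_adj = (successes + z**2 / 2) / n_adj  # and z^2/2 "pseudo-successes"
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

successes, n = sum(outcomes), len(outcomes)
low, high = completion_rate_ci(successes, n)
print(f"Completion rate: {successes / n:.0%} (95% CI {low:.0%} to {high:.0%})")
```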

Read More

One of these things is not like the other. That’s the theme of a segment on the long-running US TV show Sesame Street. As children, we learn to identify similarities and differences. And after seeing a group of things that look similar, we tend to remember the differences. Why? Well, one theory describes something called the isolation effect, or the von Restorff effect. The name…

Read More

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost-prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup?

Comparing Metrics

There are several ways to compare the agreement or…
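One simple way to assess agreement (a sketch of one possible comparison, not necessarily the article’s method) is to correlate the same metric collected in both settings and check for a systematic offset. The paired SUS scores below are invented for illustration:

```python
from statistics import correlation, mean  # correlation() needs Python 3.10+

# Hypothetical paired SUS means for five products, each measured
# once in the lab and once on an unmoderated platform (made-up data).
lab         = [68.0, 72.5, 61.0, 80.0, 75.5]
unmoderated = [70.0, 74.0, 63.5, 78.0, 77.0]

# Pearson correlation: do the two settings rank the products similarly?
r = correlation(lab, unmoderated)

# Mean difference: does one setting score systematically higher?
offset = mean(u - l for u, l in zip(unmoderated, lab))

print(f"Pearson r = {r:.2f}; unmoderated averages {offset:+.1f} SUS points vs. lab")
```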

Read More

Small design changes can have large consequences for website purchases. But detecting these small differences (e.g., 2%–10% changes) in behaviors and attitudes has generally not been feasible with traditional lab-based testing because of the time and cost of recruiting participants and facilitating sessions. With unmoderated testing, organizations can now collect data from hundreds to thousands of participants quickly and from around the world to…
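To see why such large samples are needed, here is a standard two-proportion sample size estimate in Python (a generic textbook formula, not the article’s calculation; the 5% baseline conversion rate and the 2-point lift are assumptions):

```python
from math import ceil, sqrt

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group to detect p1 vs. p2
    (two-sided alpha = .05, power = .80)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical purchase-conversion rates: 5% baseline vs. a 2-point lift.
print(n_per_group(0.05, 0.07))  # about 2,210 participants per group
```

Even a modest lift requires thousands of participants per condition, far beyond what a lab study can practically recruit but well within reach of an unmoderated platform.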

Read More

To understand problems on a website, nothing quite beats watching users. The process provides a wealth of information both about what users can or can’t do and what might be causing problems in an interface. The major drawback to watching users live or reviewing session recordings is that it takes a lot of focused time. 5 to 20 participants (the typical sample size in moderated studies) isn’t…

Read More