Statistical Significance

When we wrote Quantifying the User Experience, we put confidence intervals before tests of statistical significance. We generally find fluency with confidence intervals easier to achieve, and more valuable, than fluency with formal hypothesis testing. We also teach confidence intervals in our workshops on statistical methods. Most people, even non-researchers, have been exposed to the concept of margins of error; political polls routinely report them.
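
The link between margins of error and confidence intervals can be made concrete with a quick calculation. The sketch below (Python with scipy; all data values are hypothetical, not from the book) computes a t-based 95% confidence interval around a sample mean and a poll-style margin of error for a proportion using the normal approximation.

```python
# Minimal sketch: 95% confidence intervals and a poll-style margin of error.
# All data values are hypothetical illustrations.
import math
from scipy import stats  # assumes scipy is installed

# --- CI around a mean (t-distribution), e.g., a small set of ratings ---
scores = [68, 72, 55, 80, 62, 75, 70, 66]        # hypothetical scores
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
t_crit = stats.t.ppf(0.975, df=n - 1)            # two-sided 95% critical value
moe_mean = t_crit * sd / math.sqrt(n)
print(f"Mean = {mean:.1f}, 95% CI = [{mean - moe_mean:.1f}, {mean + moe_mean:.1f}]")

# --- Margin of error for a proportion (normal approximation), poll-style ---
p, n_poll = 0.52, 1000                           # hypothetical poll result
z_crit = stats.norm.ppf(0.975)
moe_prop = z_crit * math.sqrt(p * (1 - p) / n_poll)
print(f"Proportion = {p:.0%}, margin of error = ±{moe_prop:.1%}")
```

The interval and the margin of error answer the same question from two directions: the margin of error is simply the half-width of the confidence interval around the estimate.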
