Statistical Significance

When we wrote Quantifying the User Experience, we put confidence intervals before tests of statistical significance. We generally find fluency with confidence intervals easier to achieve, and of more value, than fluency with formal hypothesis testing. We also teach confidence intervals in our workshops on statistical methods. Most people, even non-researchers, have been exposed to the concept of a margin of error: political polls routinely report one.
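
To make the idea concrete, here is a minimal Python sketch (not code from the article) of a confidence interval for a small-sample task completion rate, using the adjusted-Wald (Agresti-Coull) method, a common choice when samples are small. The function name and the 7-of-10 example data are illustrative assumptions.

```python
import math
from statistics import NormalDist

def adjusted_wald_ci(successes: int, n: int, confidence: float = 0.95):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion.

    Adds z**2 / 2 successes and z**2 trials before applying the usual
    Wald formula, which keeps coverage accurate at small sample sizes.
    """
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95%
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)  # the "margin of error"
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical example: 7 of 10 users completed a task
low, high = adjusted_wald_ci(7, 10)
print(f"95% CI for the completion rate: {low:.0%} to {high:.0%}")  # ~39% to 90%
```

The wide interval in the example shows why this framing is useful: rather than a binary significant/not-significant verdict, it communicates how much uncertainty a small sample leaves around the observed rate.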
