Confidence Intervals

Every estimate we make from a sample of customer data contains error. Confidence intervals tell us how much faith we can have in our estimates by quantifying the most likely range for the unknown value we're estimating. For example, if we observe 27 out of 30 users (90%) completing a task, we can be 95% confident that between 74% and 97% of all real-world users will be able to complete it.

Read More
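To make the example above concrete, here is a minimal Python sketch of one common way to compute such an interval, assuming the adjusted-Wald (Agresti-Coull) method for a binomial completion rate; the function name adjusted_wald_ci is illustrative, not from the article. Run on 27 completions out of 30 attempts, it reproduces roughly the 74% to 97% range quoted above.

```python
from statistics import NormalDist

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a completion rate."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95%
    n_adj = n + z ** 2                                   # add z^2 "pseudo-trials"
    p_adj = (successes + z ** 2 / 2) / n_adj             # adjusted proportion
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 27 of 30 users completed the task (90% observed completion rate)
low, high = adjusted_wald_ci(27, 30)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 74% to 97%
```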

Confidence intervals are your frenemies. They are one of the most useful statistical techniques you can apply to customer data, yet at the same time they can be perplexing and cumbersome. But confidence intervals provide an essential understanding of how much faith we can have in our sample estimates, from any sample size, from 2 to 2 million. They provide the most likely range for the unknown value we're estimating.

Read More

You don't need a PhD in statistics to understand and use confidence intervals. Because we almost always sample a fraction of the users from a larger population, there is uncertainty in our estimates. Confidence intervals are an excellent way of understanding the role of sampling error in the averages and percentages that are ubiquitous in user research. Confidence intervals tell you the most likely range for the unknown population average or percentage.

Read More
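As an illustration of sampling error around an average, here is a hedged sketch of a t-based confidence interval for a sample mean; the scores below are hypothetical rating data used only to show the calculation, and the example assumes SciPy is available for the t critical value.

```python
from math import sqrt
from statistics import mean, stdev
from scipy import stats  # used only for the t critical value

def t_confidence_interval(sample, confidence=0.95):
    """Confidence interval around a sample mean using the t-distribution,
    which accounts for the extra uncertainty of small samples."""
    n = len(sample)
    m, sd = mean(sample), stdev(sample)                  # sample mean and SD
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    margin = t_crit * sd / sqrt(n)
    return m - margin, m + margin

# Hypothetical questionnaire scores from 10 participants (illustrative only)
scores = [72.5, 80.0, 65.0, 90.0, 77.5, 85.0, 70.0, 62.5, 87.5, 75.0]
low, high = t_confidence_interval(scores)
print(f"Mean = {mean(scores):.1f}, 95% CI: {low:.1f} to {high:.1f}")
# e.g., Mean = 76.5, 95% CI roughly 70 to 83
```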

Adding confidence intervals to completion rates in usability tests will temper both excessive skepticism and overstated usability findings. Confidence intervals make testing more efficient by quickly revealing unusable tasks with very small samples. Detailed examples and downloadable calculators are available. Are you Lacking Confidence? You just finished a usability test. You had 5 participants attempt a task in a new version of

Read More
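To illustrate how a very small sample can still flag an unusable task, here is a short sketch reusing the same adjusted-Wald approach as above with a hypothetical result of 0 completions out of 5 attempts; the resulting 95% interval tops out near 50%, which already rules out a high completion rate.

```python
from statistics import NormalDist

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Adjusted-Wald interval for a completion rate (same sketch as above)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical small-sample result: 0 of 5 participants completed the task
low, high = adjusted_wald_ci(0, 5)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 0% to 49%
# Even with only 5 users, the interval rules out completion rates much
# above ~50%, flagging the task as a likely problem.
```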