Confidence Intervals

Every estimate we make from a sample of customer data contains error. Confidence intervals tell us how much faith we can have in our estimates by quantifying the most likely range for the unknown value we're estimating. For example, if we observe 27 out of 30 users (90%) completing a task, we can be 95% confident that between 74% and 97% of all real-world users would complete it.

Read More
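The 27-of-30 example above can be reproduced with an adjusted-Wald (Agresti-Coull) interval, a common choice for binomial completion rates; this is a sketch, and the function name is illustrative:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion.

    Adds z**2/2 to the successes and z**2/2 to the failures before
    computing an ordinary Wald interval, which performs well even
    at small sample sizes.
    """
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    # Clamp to [0, 1] since a proportion can't fall outside that range
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald_ci(27, 30)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 74% to 97%
```

Running this with 27 successes out of 30 recovers the 74%-to-97% interval quoted in the excerpt.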

Confidence intervals are your frenemies. They are one of the most useful statistical techniques you can apply to customer data. At the same time, they can be perplexing and cumbersome. But confidence intervals provide an essential understanding of how much faith we can have in our sample estimates, from any sample size, from 2 to 2 million. They provide the most likely range for the unknown value being estimated.

Read More

You don't need a PhD in statistics to understand and use confidence intervals. Because we almost always sample a fraction of the users from a larger population, there is uncertainty in our estimates. Confidence intervals are an excellent way of understanding the role of sampling error in the averages and percentages that are ubiquitous in user research. Confidence intervals tell you the most likely range for the unknown population average or percentage.

Read More
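For averages, such as rating-scale responses, the same idea can be sketched with a normal-approximation interval around the sample mean; the ratings below are made up, and for very small samples a t critical value would be the more accurate choice:

```python
from statistics import NormalDist, mean, stdev

def mean_ci(values, confidence=0.95):
    """Normal-approximation confidence interval for a sample mean."""
    n = len(values)
    m = mean(values)
    se = stdev(values) / n ** 0.5          # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g., 1.96 for 95%
    return m - z * se, m + z * se

ratings = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3]   # hypothetical 5-point ratings
low, high = mean_ci(ratings)
print(f"mean = {mean(ratings):.1f}, 95% CI: {low:.2f} to {high:.2f}")
```

Even with only ten responses, the interval makes the sampling error around the 3.9 average explicit rather than hiding it behind a single point estimate.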

Adding confidence intervals to completion rates in usability tests will temper both excessive skepticism and overstated usability findings. Confidence intervals make testing more efficient by quickly revealing unusable tasks with very small samples. Detailed examples and downloadable calculators are available. Are you Lacking Confidence? You just finished a usability test. You had 5 participants attempt a task in a new version of

Read More
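One way to see how quickly a very small sample can flag an unusable task is with an adjusted-Wald interval on the completion rate; the 0-of-5 result and the 70% benchmark below are illustrative assumptions:

```python
import math

def completion_ci(passed, n, z=1.96):
    # Adjusted-Wald interval: add z**2/2 successes and z**2/2 failures,
    # then compute an ordinary Wald interval on the adjusted proportion.
    n_adj = n + z**2
    p_adj = (passed + z**2 / 2) / n_adj
    m = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - m), min(1.0, p_adj + m)

# Hypothetical: 0 of 5 participants complete the task
low, high = completion_ci(0, 5)
print(f"95% CI: {low:.0%} to {high:.0%}")  # prints 95% CI: 0% to 49%
```

With no completions in five attempts, the upper bound already sits well below a common 70% completion benchmark, so the task can be flagged as unusable without running a larger study first.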