Confidence

Every estimate we make from a sample of customer data contains error. Confidence intervals tell us how much faith we can have in our estimates by quantifying the most likely range for the unknown value we're estimating. For example, if we observe 27 out of 30 users (90%) completing a task, we can be 95% confident that between 74% and 97% of all real-world users would be able to complete the same task.

Read More
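To make the arithmetic in that first excerpt concrete, here is a minimal Python sketch of an adjusted-Wald (Agresti-Coull) interval for a binomial proportion, one common choice for small-sample completion rates; the function name adjusted_wald_ci is chosen here for illustration, and with 27 of 30 completions it returns roughly 74% to 97%, the same range quoted above.

```python
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion.

    Adds z^2 to the trial count and z^2/2 to the success count before
    applying the usual Wald formula, which behaves better for small
    samples and extreme proportions than the plain Wald interval.
    """
    n_adj = trials + z ** 2
    p_adj = (successes + (z ** 2) / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 27 of 30 users completed the task: observed rate 90%,
# 95% interval of roughly 74% to 97%.
low, high = adjusted_wald_ci(27, 30)
print(f"95% CI: {low:.0%} to {high:.0%}")
```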

Statistically significant. It's a phrase that's packed with both meaning and syllables. It's hard to say and harder to understand. Yet it's one of the most common phrases heard when dealing with quantitative methods. While the phrase statistically significant represents the result of a rational exercise with numbers, it has a way of evoking as much emotion: bewilderment, resentment, confusion, and even arrogance (for those

Read More

Are you sure you did that right? When we put effort into making a purchase online, finding information, or attempting tasks in software, we want to know we're doing things right. Having confidence in our actions and their outcomes is an important part of the user experience. That's why we ask users how confident they are that they completed a task in a usability test.

Read More

There are some interesting known differences between men and women in the psychological literature. For example, women tend to be better judges of emotion when looking at faces for just 0.2 seconds [pdf]! And across many measures of ability, while both men and women tend to exhibit overconfidence, men are generally more overconfident than women, and this is especially the case when men do

Read More