Sample Size

Wondering about the origins of the sample size controversy in the usability profession? Here is an annotated timeline of the major events and papers that continue to shape this topic.

The Pre-Cambrian Era (Up to 1982)

It's the dawn of Usability Evaluation, and the first indications of diminishing returns in problem discovery are emerging. 1981: Alphonse Chapanis and colleagues suggest that observing about five to

Read More

Will users purchase an upgrade? What features are most desired? Will they recommend the product to a friend? Part of measuring the user experience involves directly asking users what they think via surveys. The Web has made surveys easy to administer and deliver. It hasn't made the question of how many people you need to survey any easier, though. One common question is "How many

Read More
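The "how many people do I need to survey" question usually comes down to the margin of error you can tolerate. As a rough illustration of the standard normal-approximation formula n = z²·p(1−p)/e² (the function name and defaults below are my own, not a calculator from the article):

```python
import math

def survey_sample_size(margin_of_error, confidence_z=1.96, proportion=0.5):
    """Sample size needed to estimate a proportion within a given margin of error.

    Uses the normal approximation n = z^2 * p(1 - p) / e^2, with p = 0.5 as
    the most conservative (largest variance) assumption.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence with a +/-5% margin of error:
print(survey_sample_size(0.05))  # 385
```

Halving the margin of error roughly quadruples the required sample, which is why the tolerable precision, not the population size, drives survey cost.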

With usability testing, it used to be that we had to make our best guess as to how users actually interacted with software outside a contrived lab setting. We didn't have all the information we needed. Knowing what users did was, in a sense, a puzzle with a lot of missing pieces. Web analytics provides us with a wealth of data about actual usage we just never had

Read More

Usability tests are conducted on samples of users taken from a larger user population. In usability testing it is hard enough to recruit and test users, let alone select them randomly from the larger user population. Samples in usability studies are almost always convenience samples. That is, we rely on volunteers to participate in our test (a convenience to us). Volunteers, even paid volunteers, are

Read More

Nielsen derives his "five users is enough" formula from a paper he and Tom Landauer published in 1993. Before Nielsen and Landauer, James Lewis of IBM proposed a very similar problem detection formula in 1982 based on the binomial probability formula.[4] Lewis stated that: The binomial probability theorem can be used to determine the probability that a problem of probability p will occur r times

Read More
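The binomial machinery behind both formulas is small enough to sketch directly. The function names here are mine; p = 0.31 is the average problem occurrence rate commonly cited from Nielsen and Landauer's 1993 paper:

```python
from math import comb

def p_at_least_once(p, n):
    """Chance that a problem with occurrence probability p is seen
    at least once in a sample of n users: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

def p_exactly_r(p, n, r):
    """Lewis's binomial form: probability that the problem occurs
    exactly r times among n users."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# With p = 0.31, five users surface a given problem about 84% of the time:
print(round(p_at_least_once(0.31, 5), 2))  # 0.84
```

Note that `p_at_least_once(p, n)` is just `1 - p_exactly_r(p, n, 0)`, which is why the two formulations are so similar.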

One of the biggest and usually first concerns leveled against any statistical measure of usability is that the number of users required to obtain "statistically significant" data is prohibitive. People reason that one cannot, with any reasonable level of confidence, employ quantitative methods to determine product usability. The reasoning continues something like this: "I have a population of 2500 software users. I plug my population

Read More
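One way to see why the population size in that line of reasoning matters far less than people expect is the finite population correction. This is a generic sketch (the function name is mine, and n0 = 385 is the usual 95%/±5% proportion sample size), not a calculation from the article:

```python
from math import ceil

def fpc_adjusted(n0, population):
    """Apply the finite population correction to an infinite-population
    sample size n0: n = n0 / (1 + (n0 - 1) / N)."""
    return ceil(n0 / (1 + (n0 - 1) / population))

# Population size has little effect until it gets close to n0:
print(fpc_adjusted(385, 2500))       # 334
print(fpc_adjusted(385, 1_000_000))  # 385
```

A population of 2500 shaves the requirement only modestly, and a million-person population changes nothing: the margin of error, not the population, dominates.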

We already saw how a manageable sample of users can provide meaningful data for discrete binary data like task completion. With continuous data like task times, the sample size can be even smaller. The continuous calculation is a bit more complicated and involves something of a Catch-22. Most want to determine the sample size ahead of time, then perform the testing based on the results of

Read More
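The Catch-22 is that the t critical value depends on the degrees of freedom (n − 1), which you don't know until you've picked n. A common workaround is to start from a z-based guess and iterate until t and n agree. This sketch uses a first-order approximation to the t quantile so it stays standard-library only; all names are my own:

```python
from math import ceil
from statistics import NormalDist

def t_approx(z, df):
    # First-order Cornish-Fisher approximation to the t critical value
    return z + (z ** 3 + z) / (4 * df)

def sample_size_continuous(std_dev, margin_of_error, confidence=0.95):
    """Iteratively solve n = (t * s / e)^2, where t itself depends on n
    (the Catch-22: the critical value needs df = n - 1)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = ceil((z * std_dev / margin_of_error) ** 2)  # initial z-based guess
    for _ in range(10):  # iterate until the estimate stabilizes
        t = t_approx(z, n - 1)
        new_n = ceil((t * std_dev / margin_of_error) ** 2)
        if new_n == n:
            break
        n = new_n
    return n

# e.g. s = 30 seconds, desired margin of +/-10 seconds:
print(sample_size_continuous(30, 10))
```

The iteration typically settles within two or three passes, because t shrinks toward z as n grows.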