Sample Size

Will users purchase an upgrade? What features are most desired? Will they recommend the product to a friend? Part of measuring the user experience involves directly asking users what they think via surveys. The Web has made surveys easy to administer and deliver. It hasn't made the question of how many people you need to survey any easier, though. One common question is "How many

Read More
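The usual starting point for the "how many people" question is the margin-of-error calculation for a proportion. The sketch below illustrates that standard calculation, not necessarily the article's own method; the 95% confidence level, 5% margin, and p = 0.5 planning value are assumptions chosen for the example.

```python
import math
from statistics import NormalDist

def survey_sample_size(margin_of_error, confidence=0.95, p=0.5):
    """Smallest n whose normal-approximation confidence interval for a
    proportion has a half-width no larger than margin_of_error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95%
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(survey_sample_size(0.05))  # 385 respondents for +/-5% at 95% confidence
```

With p = 0.5 as the most conservative planning value, a ±5% margin at 95% confidence works out to 385 respondents; a more precise prior estimate of p shrinks that number.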

With usability testing, it used to be that we had to make our best guess as to how users actually interacted with software outside a contrived lab setting. We didn't have all the information we needed. Knowing what users did was, in a sense, a puzzle with a lot of missing pieces. Web analytics provides us with a wealth of data about actual usage that we just never had

Read More

Usability tests are conducted on samples of users taken from a larger user population. In usability testing it is hard enough to recruit and test users, let alone select them randomly from that larger population. Samples in usability studies are almost always convenience samples. That is, we rely on volunteers to participate in our test (a convenience to us). Volunteers, even paid volunteers, are

Read More

Nielsen derives his "five users is enough" formula from a paper he and Tom Landauer published in 1993. Before Nielsen and Landauer, James Lewis of IBM proposed a very similar problem-detection formula in 1982, based on the binomial probability formula.[4] Lewis stated that the binomial probability theorem can be used to determine the probability that a problem of probability p will occur r times

Read More
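The binomial machinery the excerpt describes can be sketched directly: the probability that a problem with per-user probability p occurs exactly r times in n users, plus the familiar 1 − (1 − p)^n chance of detecting it at least once. The p = 0.31 and n = 5 values below are the commonly cited illustration behind the "five users" claim, not numbers from this excerpt.

```python
from math import comb

def p_exactly_r(p, n, r):
    """Binomial probability that a problem with per-user probability p
    occurs in exactly r of n users."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

def p_at_least_once(p, n):
    """Chance of seeing the problem at least once: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# With the often-cited average problem probability of 0.31 and five users:
print(p_exactly_r(0.31, 5, 0))   # ~0.156: all five users miss the problem
print(p_at_least_once(0.31, 5))  # ~0.844: the basis of the "five users" claim
```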

One of the biggest, and usually the first, concerns leveled against any statistical measures of usability is that the number of users required to obtain "statistically significant" data is prohibitive. People reason that one cannot, with any reasonable level of confidence, employ quantitative methods to determine product usability. The reasoning continues something like this: "I have a population of 2500 software users. I plug my population

Read More
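The population-size worry the excerpt sets up can be checked with the finite population correction. This is a sketch of that standard adjustment, not whatever calculator the quoted reasoning plugs into: the 2,500-user population comes from the excerpt, while the margin, confidence level, and p = 0.5 are illustrative assumptions.

```python
import math
from statistics import NormalDist

def fpc_sample_size(N, margin_of_error=0.05, confidence=0.95, p=0.5):
    """Sample size for a proportion with the finite population correction."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n0 = z ** 2 * p * (1 - p) / margin_of_error ** 2  # infinite-population n
    return math.ceil(n0 / (1 + (n0 - 1) / N))         # shrink for finite N

for N in (2500, 250_000, 25_000_000):
    print(N, fpc_sample_size(N))  # 334, 384, 385: N quickly stops mattering
```

Note how little the required sample grows as the population goes from 2,500 to 25 million; the population size is rarely what drives the sample size.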

We already saw how a manageable sample of users can provide meaningful results for discrete-binary data like task completion. With continuous data like task times, the sample size can be even smaller. The continuous calculation is a bit more complicated and involves something of a Catch-22. Most want to determine the sample size ahead of time, then perform the testing based on the results of

Read More
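One reading of the Catch-22 the excerpt mentions: the sample-size formula for a mean needs a standard deviation estimate (available only after some testing) and a t critical value (which depends on the very n being solved for), so the calculation is iterated until it stabilizes. The sketch below assumes scipy for the t quantile; the 20-second standard deviation and ±10-second margin are made-up pilot values.

```python
import math
from scipy.stats import t

def continuous_sample_size(sd, margin_of_error, confidence=0.95, max_iter=100):
    """Iterate n = (t * sd / e)^2 until n stabilizes, since the t critical
    value itself depends on n through its degrees of freedom."""
    n = 2  # a t interval needs at least two observations
    for _ in range(max_iter):
        t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        n_new = max(math.ceil((t_crit * sd / margin_of_error) ** 2), 2)
        if n_new == n:
            break
        n = n_new
    return n

# Assumed pilot values: sd of 20 seconds, mean wanted within +/-10 seconds
print(continuous_sample_size(sd=20, margin_of_error=10))  # 18 with these inputs
```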