Sample Size

Will users purchase an upgrade? What features are most desired? Will they recommend the product to a friend? Part of measuring the user experience involves directly asking users what they think via surveys. The Web has made surveys easy to administer and deliver. It hasn't, however, made the question of how many people you need to survey any easier. One common question is "How many

Read More
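The usual starting point for survey sample sizes is the margin-of-error formula for a proportion, n = z²·p(1−p)/e². A minimal sketch (the function name and the conservative default p = 0.5 are my own choices, not from the article):

```python
import math

def survey_sample_size(margin_of_error, z=1.96, p=0.5):
    """Respondents needed to estimate a proportion within +/- margin_of_error.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative (largest-sample) assumption about the true proportion.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# A +/-5% margin of error at 95% confidence:
print(survey_sample_size(0.05))  # -> 385
```

This is where the familiar "about 400 respondents" rule of thumb for national polls comes from.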

With usability testing it used to be that we had to make our best guess as to how users actually interacted with software outside a contrived lab setting. We didn't have all the information we needed. Knowing what users did was, in a sense, a puzzle with a lot of missing pieces. Web analytics provides us with a wealth of data about actual usage we just never had

Read More

Usability tests are conducted on samples of users taken from a larger user population. In usability testing it is hard enough to recruit and test users, let alone select them randomly from the larger user population. Samples in usability studies are almost always convenience samples. That is, we rely on volunteers to participate in our test (a convenience to us). Volunteers, even paid volunteers, are

Read More

One question I get a lot is, "Do you really only need to test with 5 users?" There are a lot of strong opinions about the magic number 5 in usability testing and much has been written about it (e.g., see Lewis 2006). As you can imagine, there isn't a fixed number of users that will always be the right number (us quantitative folks love

Read More

Nielsen derives his "five users is enough" formula from a paper he and Tom Landauer published in 1993. Before Nielsen and Landauer, James Lewis of IBM proposed a very similar problem-detection formula in 1982 based on the binomial probability formula.[4] Lewis stated that: The binomial probability theorem can be used to determine the probability that a problem of probability p will occur r times

Read More
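The binomial logic behind the "five users" formula reduces, for the case of seeing a problem at least once, to 1 − (1 − p)ⁿ. A short sketch; the p = 0.31 example value is the average problem frequency commonly cited from Nielsen and Landauer's data, used here only for illustration:

```python
def discovery_probability(p, n):
    """Chance that a problem occurring with per-user probability p
    is seen at least once across n test users: 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# With p = 0.31 and n = 5 users, roughly 84% of such problems
# are expected to surface at least once:
print(round(discovery_probability(0.31, 5), 2))  # -> 0.84
```

Note the result depends heavily on p: a problem affecting only 10% of users has just a 41% chance of appearing with five users.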

One of the biggest, and usually first, concerns leveled against any statistical measures of usability is that the number of users required to obtain "statistically significant" data is prohibitive. People reason that one cannot, with any reasonable level of confidence, employ quantitative methods to determine product usability. The reasoning continues something like this: "I have a population of 2500 software users. I plug my population

Read More
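The "I have a population of 2500" worry can be checked with the finite population correction, which adjusts an infinite-population sample size downward. A sketch under my own assumptions (the 385-user starting point is the standard ±5%, 95%-confidence figure for a proportion):

```python
import math

def fpc_sample_size(n0, population):
    """Apply the finite population correction to an
    infinite-population sample size n0: n = n0 / (1 + (n0 - 1)/N)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = 385  # +/-5% margin of error at 95% confidence, infinite population
print(fpc_sample_size(n0, 2500))  # -> 334
```

The correction helps a little for small populations, but the point stands either way: the sample size needed is driven by the desired precision, not by the population size.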

We already saw how a manageable sample of users can provide meaningful data for discrete-binary data like task completion. With continuous data like task times, the sample size can be even smaller. The continuous calculation is a bit more complicated and involves something of a Catch-22. Most want to determine the sample size ahead of time, then perform the testing based on the results of

Read More
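For continuous measures like task time, the textbook sample-size formula is n = (z·s/e)², where s is the standard deviation and e the desired margin of error. The Catch-22 mentioned above is that s is only known after testing, so it must be guessed or taken from a pilot. A minimal sketch with illustrative numbers of my own choosing:

```python
import math

def continuous_sample_size(sd, margin_of_error, z=1.96):
    """Users needed to estimate a mean (e.g., task time) within
    +/- margin_of_error, given an estimated standard deviation sd.
    sd must come from a pilot study or a prior guess (the Catch-22)."""
    return math.ceil((z * sd / margin_of_error) ** 2)

# Pilot sd of 20 seconds, want the mean task time within +/- 10 seconds:
print(continuous_sample_size(20, 10))  # -> 16
```

In practice the z-value would be swapped for a t-value and the estimate iterated once the real data arrive, but the z-based version shows the shape of the calculation.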