Sample Size

Timing, luck and perseverance all play a role in making a successful product. But so does observing and understanding your customers' problems. The number of customers you need to observe will depend on how common customer behaviors are and how certain you need to be. Building a successful product means building something that customers want or need and are willing to pay for. It's not

Read More
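
As a rough illustration of the reasoning in the excerpt above (sample size depends on how common a behavior is and how certain you need to be), the required number of observations follows from the binomial formula: if a behavior occurs with probability p, the chance of seeing it at least once in n customers is 1 − (1 − p)^n. A minimal sketch; the function name and example values are illustrative, not from the article:

```python
import math

def sample_size_to_observe(p: float, confidence: float = 0.85) -> int:
    """Smallest n such that P(behavior seen at least once) >= confidence,
    where p is the probability any one customer exhibits the behavior."""
    # Solve 1 - (1 - p)^n >= confidence  =>  n >= ln(1 - confidence) / ln(1 - p)
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# A common behavior (p = 0.30) needs far fewer observations than a rare one (p = 0.05).
print(sample_size_to_observe(0.30))  # -> 6
print(sample_size_to_observe(0.05))  # -> 37
```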

If you're familiar with usability testing, then you're familiar with the magic number 5. Five users will, on average, find most of the problems that affect one-third or more of your users. If problems are less common, then you will need to test more users to find and fix them. On many high-traffic websites, usability problems affect fewer than 1 out of 10

Read More
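
The "magic number 5" claim in this excerpt can be checked directly: the chance that at least one of n test users encounters a problem affecting a proportion p of all users is 1 − (1 − p)^n. A short sketch with illustrative values:

```python
def p_discovered(p: float, n: int) -> float:
    """Probability that a problem affecting proportion p of users
    is seen at least once in a test with n users."""
    return 1 - (1 - p) ** n

# Problems affecting a third of users are very likely found with 5 users...
print(f"{p_discovered(1/3, 5):.0%}")   # ~87%
# ...but rarer problems (p = 0.10) usually need a larger sample.
print(f"{p_discovered(0.10, 5):.0%}")  # ~41%
```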

Wondering about the origins of the sample size controversy in the usability profession? Here is an annotated timeline of the major events and papers that continue to shape this topic. The Pre-Cambrian Era (Up to 1982): It's the dawn of usability evaluation, and the first indications of diminishing returns in problem discovery are emerging. 1981: Alphonse Chapanis and colleagues suggest that observing about five to

Read More

Will users purchase an upgrade? What features are most desired? Will they recommend the product to a friend? Part of measuring the user experience involves directly asking users what they think via surveys. The Web has made surveys easy to administer and deliver. It hasn't made the question of how many people you need to survey any easier, though. One common question is "How many

Read More
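
The "how many people do I need to survey" question in the excerpt above is typically answered with a margin-of-error calculation. A minimal sketch, assuming the common normal-approximation formula n = z²·p·(1 − p)/e² with the conservative proportion p = 0.5; the function name and numbers are illustrative:

```python
import math

def survey_sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for estimating a proportion within +/- margin_of_error,
    at the confidence level implied by z (1.96 ~ 95%). p = 0.5 is the
    most conservative (largest-sample) assumption."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(survey_sample_size(0.05))  # -> 385 respondents for +/-5%
print(survey_sample_size(0.03))  # -> 1068 respondents for +/-3%
```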

With usability testing, it used to be that we had to make our best guess as to how users actually interacted with software outside a contrived lab setting. We didn't have all the information we needed. Knowing what users did was, in a sense, a puzzle with a lot of missing pieces. Web analytics provides us with a wealth of data about actual usage that we just never had

Read More

Usability tests are conducted on samples of users taken from a larger user population. In usability testing, it is hard enough to recruit and test users, let alone select them randomly from the larger user population. Samples in usability studies are almost always convenience samples. That is, we rely on volunteers to participate in our test (a convenience to us). Volunteers, even paid volunteers, are

Read More

Nielsen derives his "five users is enough" formula from a paper he and Tom Landauer published in 1993. Before Nielsen and Landauer, James Lewis of IBM proposed a very similar problem-detection formula in 1982, based on the binomial probability formula.[4] Lewis stated that the binomial probability theorem can be used to determine the probability that a problem of probability p will occur r times

Read More
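
Lewis's statement in the excerpt can be written out explicitly: the binomial probability that a problem of probability p occurs exactly r times among n users is C(n, r)·p^r·(1 − p)^(n − r), and the probability it occurs at least once is 1 − (1 − p)^n. A sketch of both quantities, with illustrative values not drawn from the papers cited:

```python
from math import comb

def p_exactly(p: float, n: int, r: int) -> float:
    """Binomial probability that a problem of probability p
    occurs exactly r times among n users."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

def p_at_least_once(p: float, n: int) -> float:
    """Probability the problem occurs one or more times: 1 - P(r = 0)."""
    return 1 - (1 - p) ** n

print(f"{p_exactly(0.25, 5, 2):.3f}")     # ~0.264
print(f"{p_at_least_once(0.25, 5):.3f}")  # ~0.763
```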

One of the biggest, and usually first, concerns levied against any statistical measure of usability is that the number of users required to obtain "statistically significant" data is prohibitive. People reason that one cannot, with any reasonable level of confidence, employ quantitative methods to determine product usability. The reasoning continues something like this: "I have a population of 2500 software users. I plug my population

Read More
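
The excerpt breaks off as the skeptic plugs a population of 2500 into a sample-size calculator. The standard counterpoint (an assumption here, since the article's own calculation is cut off) rests on the finite population correction: required sample size grows only slowly with population size. A minimal sketch using the standard correction n_adj = n / (1 + (n − 1)/N) applied to an infinite-population estimate:

```python
import math

def fpc_sample_size(n_infinite: float, population: int) -> int:
    """Adjust an infinite-population sample size n_infinite
    for a finite population of the given size."""
    return math.ceil(n_infinite / (1 + (n_infinite - 1) / population))

# 95% confidence and a +/-5% margin needs ~385 from an infinite population;
# a population of only 2500 barely changes that.
print(fpc_sample_size(385, 2500))    # -> 334
print(fpc_sample_size(385, 10_000))  # -> 371
```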