Usability Testing

Paraphrasing the statistician George Box, all models are wrong, some are useful, and some can be improved. In a recent article, we reviewed the most common way of modeling problem discovery, which is based on a straightforward application of the cumulative binomial probability formula: P(x ≥ 1) = 1 − (1 − p)^n. Well, it’s straightforward if you like playing around with these sorts of formulas like Jim and…

Read More
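The binomial discovery formula in the excerpt above is easy to sketch in code. This is a minimal illustration, not the authors' own tooling; the value p = 0.31 is an often-cited average per-user discovery probability from the early problem-discovery literature, used here only as an example input.

```python
# Cumulative binomial discovery model: P(x >= 1) = 1 - (1 - p)^n,
# the chance that a problem with per-user discovery probability p
# is observed at least once across n test users.

def discovery_probability(p: float, n: int) -> float:
    """Probability a problem affecting a proportion p of users
    is seen at least once in a sample of n users."""
    return 1 - (1 - p) ** n

# With the often-cited average p = 0.31 and a five-user test:
print(round(discovery_probability(0.31, 5), 3))  # 0.844
```

This reproduces the familiar result that a five-user test has roughly an 85% chance of exposing a problem of average discoverability.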

Finding and fixing problems in an interface is one of the fundamental priorities of a formative usability test. But how many users should you test with? And how many usability problems are there to be uncovered? These questions have been discussed and debated for decades. Early work on problem discovery suggested that the first few users will uncover most of the common problems. This isn’t—or…

Read More
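The "how many users?" question above has a standard quantitative answer: rearrange the binomial discovery formula to solve for n. The sketch below is an assumption-laden illustration (the function name and example inputs are mine, not from the article), showing the smallest sample that reaches a desired discovery likelihood.

```python
import math

def sample_size_for_discovery(p: float, goal: float) -> int:
    """Smallest n such that 1 - (1 - p)^n >= goal: the number of users
    needed to see a problem of per-user probability p at least once
    with the desired likelihood. Standard rearrangement:
    n = ceil(ln(1 - goal) / ln(1 - p))."""
    return math.ceil(math.log(1 - goal) / math.log(1 - p))

# To have a 90% chance of observing a problem that affects 20% of users:
print(sample_size_for_discovery(0.20, 0.90))  # 11
```

Note how quickly the required sample grows as p shrinks: rarer problems need disproportionately more users, which is the crux of the decades-long debate the excerpt mentions.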

The fundamental goal of usability testing is to produce highly usable products and services. That’s an uncontroversial statement. Where things can get a bit confusing is how different approaches to usability testing have different ways of achieving that goal. In earlier articles, we described the different types of usability tests, but many types still share common goals. In this article, we’ll first revisit the…

Read More

Usability testing is expensive. At least that has been the perception. But the idea that usability is a nice-to-have ideal that only big companies such as IBM or Microsoft can afford has fortunately evolved. While technology has improved and gotten cheaper, it’s the technique that’s become more accessible and accepted. The discount-usability movement helped emphasize the effectiveness of low-cost methods and smaller sample sizes to find and fix usability…

Read More

Usability tests don’t have to be expensive or require a lot of technology. The real value is not in the equipment or technology but in the technique. Usability testing is not a focus group. Nor is usability testing a product demo. You shouldn’t lead participants through a product as if it were a demo and ask them if they “like” something. Uncovering problems users encounter…

Read More

You don’t need a dedicated usability lab to conduct a usability test. But if you or your organization conducts more than the occasional usability test, which it probably should (another topic in itself), you may want to consider setting up a dedicated usability lab. Having a dedicated space for testing is a hallmark of organizations with high UX maturity. Organizations rated as mature in UX…

Read More

Uncovering usability problems is at the heart of usability testing. Problems and insights can be uncovered from observing participants live in a usability lab, using heuristic evaluations, or watching videos of participants. But if you change the person looking for the problems (the evaluator), do you get a different set of problems? The Evaluator Effect: It’s been 20 years since two influential papers (Jacobsen, Hertzum,…

Read More
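The evaluator effect described above is commonly quantified with the any-two agreement measure (the proportion of problems reported by either evaluator that both reported). The snippet below is a minimal sketch of that measure; the evaluator names and problem IDs are hypothetical, not data from the studies cited.

```python
def any_two_agreement(a: set, b: set) -> float:
    """Any-two agreement between two evaluators' problem sets:
    |A intersect B| / |A union B|. Returns 1.0 for two empty sets
    (trivial perfect agreement)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical problem IDs reported by two evaluators who watched
# the same usability sessions:
eval_a = {"P1", "P2", "P3", "P5"}
eval_b = {"P2", "P3", "P4", "P6", "P7"}
print(round(any_two_agreement(eval_a, eval_b), 2))  # 0.29
```

Low values like this (well under 0.5) are the evaluator effect in a nutshell: two competent evaluators watching the same sessions often report substantially different problem sets.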

Finding and fixing problems encountered by participants through usability testing generally leads to a better user experience. But not all participants are created equal. One of the major differentiating characteristics is prior experience. People with more experience tend to complete more tasks successfully, do so more quickly, and generally have a more positive attitude about the experience than inexperienced people. But does testing with experienced users lead to uncovering…

Read More

While UX research may be a priority for you, it probably isn’t for your participants. And participants are a pretty important ingredient in usability testing. If people were predictable, reliable, and always did what they said, few of us would make a living improving the user experience! Unfortunately, people don’t always show up when they say they will for your usability test, in-depth interview,…

Read More

Many researchers are familiar with the Hawthorne Effect, in which people act differently when observed. It was named when researchers found that workers at the Hawthorne Works factory performed better not because of increased lighting but because they were being watched. This observer effect happens not only with people but also with particles. In physics, the mere act of observing a phenomenon (like subatomic particle movement)…

Read More