Usability Testing

Paraphrasing the statistician George Box, all models are wrong, some are useful, and some can be improved. In a recent article, we reviewed the most common way of modeling problem discovery, which is based on a straightforward application of the cumulative binomial probability formula: P(x ≥ 1) = 1 − (1 − p)^n. Well, it’s straightforward if you like playing around with these sorts of formulas like Jim and
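The cumulative binomial discovery formula can be sketched in a few lines of Python. The function name below is illustrative (not from the article); it returns the chance that a problem with per-user detection probability p is seen at least once across n users.

```python
def p_discovered(p: float, n: int) -> float:
    """Probability a problem with per-user detection probability p
    is observed at least once in a test with n users:
    P(x >= 1) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# e.g., a problem that affects 31% of users, tested with 5 users
print(round(p_discovered(0.31, 5), 3))  # 0.844
```

With one user, the formula reduces to p itself, which is a quick sanity check on the model.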


Finding and fixing problems in an interface is one of the fundamental priorities of a formative usability test. But how many users should you test with? And how many usability problems are there to be uncovered? These questions have been discussed and debated for decades. Early work on problem discovery suggested that the first few users will uncover most of the common problems. This isn’t—or
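The "how many users?" question can be approached by inverting the binomial discovery model mentioned above: solving 1 − (1 − p)^n ≥ goal for n. This is a sketch with illustrative names, not the article's own code.

```python
import math

def users_needed(p: float, goal: float) -> int:
    """Smallest n such that a problem with per-user detection
    probability p is seen at least once with probability >= goal,
    i.e. n >= log(1 - goal) / log(1 - p)."""
    return math.ceil(math.log(1 - goal) / math.log(1 - p))

# e.g., an 85% chance of observing a problem that affects 31% of users
print(users_needed(0.31, 0.85))  # 6 under this model's assumptions
```

Note that this only covers problems at or above the chosen p; rarer problems need substantially larger samples, which is part of the long-running debate the excerpt describes.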


The fundamental goal of usability testing is to produce highly usable products and services. That’s an uncontroversial statement. Where things can get a bit confusing is how different approaches to usability testing have different ways of achieving that goal. In earlier articles we have described the different types of usability tests, but many types still share common goals. In this article, we’ll first revisit the


Usability testing is expensive. At least that has been the perception. But the idea that usability is a nice-to-have ideal that only big companies such as IBM or Microsoft can afford has fortunately evolved. While technology has improved and gotten cheaper, it’s the technique that’s become more accessible and accepted. The discount-usability movement helped emphasize the effectiveness of low-cost, smaller sample sizes to find and fix usability


Usability tests don’t have to be expensive or require a lot of technology. The real value is not in the equipment or technology but in the technique. Usability testing is not a focus group. Nor is usability testing a product demo. You shouldn’t lead participants through a product as if it were a demo and ask them if they “like” something. Uncovering problems users encounter


You don’t need a dedicated usability lab to conduct a usability test. But if you or your organization conducts more than the occasional usability test, which it probably should (another topic in itself), you may want to consider setting up a dedicated usability lab. Having a dedicated space for testing is a hallmark of organizations with high UX maturity. Organizations rated as mature in UX


Uncovering usability problems is at the heart of usability testing. Problems and insights can be uncovered from observing participants live in a usability lab, using heuristic evaluations, or watching videos of participants. But if you change the person looking for the problems (the evaluator), do you get a different set of problems? The Evaluator Effect It’s been 20 years since two influential papers (Jacobsen, Hertzum,


Finding and fixing problems encountered by participants through usability testing generally leads to a better user experience. But not all participants are created equal. One of the major differentiating characteristics is prior experience. People with more experience tend to complete more tasks successfully, do so more quickly, and generally have a more positive attitude about the experience than inexperienced people. But does testing with experienced users lead to uncovering


While UX research may be a priority for you, it probably isn’t for your participants. And participants are a pretty important ingredient in usability testing. If people were predictable, reliable, and always did what they said, few of us would make a living improving the user experience! Unfortunately, people don’t always show up when they say they will for your usability test, in-depth interview,


Many researchers are familiar with the Hawthorne Effect, in which people act differently when observed. The effect was named after the Hawthorne Works factory, where researchers found that workers performed better not because of increased lighting but because they were being watched. This observer effect happens not only with people but also with particles. In physics, the mere act of observing a phenomenon (like subatomic particle movement)
