Usability Testing

They're the stuff of movies, TV shows, and usability labs. One-way mirrors (or two-way mirrors, depending on who you ask) are an enduring symbol of interrogation, psychology experiments, focus groups, and usability tests. This special piece of glass is brightly lit on one side so that observers on the darker side can inconspicuously watch the people on the other. The technology is simple and actually quite old with a

Read More

"I'd like you to think aloud as you use the software." Having participants think aloud as they use an interface is a cornerstone technique of usability testing. It's been around for much of the history of user research to help uncover problems in an interface. Despite its popularity, there is surprisingly little consistency on how to properly apply the think aloud technique. Because of that,

Read More

One of the best ways to make metrics more meaningful is to compare them to something. The comparison can be the same data from an earlier time point, a competitor, a benchmark, or a normalized database. Comparisons aid interpretation both in customer research specifically and in data analysis generally. For example, we're often interested in customers' brand attitudes both before and after

Read More
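To make the idea of comparing a metric to a benchmark concrete, here is a minimal sketch that tests a hypothetical set of post-redesign SUS scores against an assumed earlier benchmark mean using a one-sample t-test. The scores, the benchmark value of 68, and the use of SciPy are all assumptions for illustration, not details from the article.

```python
# Illustrative sketch (not from the article): comparing a usability metric
# to an earlier benchmark mean with a one-sample t-test.
from scipy import stats

# Hypothetical SUS scores collected after a redesign (assumed data).
sus_scores = [72.5, 80.0, 65.0, 77.5, 85.0, 70.0, 75.0, 82.5, 67.5, 90.0]

BENCHMARK = 68.0  # hypothetical earlier benchmark mean to compare against

t_stat, p_value = stats.ttest_1samp(sus_scores, BENCHMARK)
mean_score = sum(sus_scores) / len(sus_scores)

print(f"Sample mean: {mean_score:.1f} vs. benchmark {BENCHMARK}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```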

As if the Net Promoter Score didn't already stir up enough strong opinions about whether it's the "right" metric for organizations, now there's a new controversy: how to display it. In case you're unfamiliar with it, the Net Promoter Score (NPS) is a popular measure of customer loyalty. It's derived by asking customers a single question: How likely are you to recommend a

Read More
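For readers unfamiliar with the metric, the standard NPS calculation takes responses to that 0-10 likelihood-to-recommend question and subtracts the percentage of detractors (ratings of 0-6) from the percentage of promoters (ratings of 9-10). A minimal sketch, with invented responses for illustration:

```python
# Minimal sketch of the standard NPS calculation:
# promoters rate 9-10, detractors rate 0-6, passives rate 7-8.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical likelihood-to-recommend responses (0-10 scale).
responses = [10, 9, 8, 7, 6, 10, 9, 5, 3, 10]
print(f"NPS: {net_promoter_score(responses):.0f}")  # 5 promoters, 3 detractors -> NPS 20
```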

Observing just a few users interact with a product or website can reveal a wealth of information about what's working and what isn't. But to loosely quote Lord Kelvin, when we can measure something and express it in numbers, we understand and manage it better. Measuring usability allows us to better understand how changes in usability affect customer satisfaction and loyalty. Usability can and

Read More

While we often talk about usability tests as if there were only one type, the truth is there are several varieties of usability tests. Each type addresses different research goals. Don't confuse the five usability testing types with the interface type or the testing modes. Interface types are mobile (website or apps), desktop (software or website), or a physical device (like a thermostat). Testing

Read More

Speeders are survey participants who finish too quickly, even impossibly quickly. How much do they affect the quality of online research? This question becomes increasingly relevant as online research proliferates, including unmoderated usability studies, because a growing share of data comes from paid panel participants. With in-person studies, we can see each participant's engagement level. With data collected remotely, we need another way to determine whether

Read More
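One common way to operationalize "too quickly" (a screening heuristic assumed here for illustration, not necessarily the article's method) is to flag respondents whose completion time falls below some fraction of the median time:

```python
# Illustrative sketch of one common way to flag "speeders": respondents
# whose completion time is below a fraction of the median time.
# The 0.5 cutoff and the sample times are assumptions for illustration.
from statistics import median

completion_seconds = {  # hypothetical respondent IDs -> seconds to finish
    "r01": 610, "r02": 540, "r03": 95, "r04": 720,
    "r05": 480, "r06": 130, "r07": 655, "r08": 590,
}

cutoff = 0.5 * median(completion_seconds.values())
speeders = [rid for rid, secs in completion_seconds.items() if secs < cutoff]

print(f"Median time: {median(completion_seconds.values()):.0f}s, cutoff: {cutoff:.0f}s")
print(f"Flagged speeders: {speeders}")
```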

In the fast-paced world of Agile development, where it's difficult to find time to get data from users, unmoderated remote testing gives us a way to quickly collect feedback on interface design. For example, I recently worked with a web-app product team to determine whether users find their new file manager easier to use than the previous one. We started at 10 a.m., and wanted

Read More
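As a rough sketch of how such a comparison might be analyzed (assumed data and method, not the team's actual analysis), one could compare post-task ease ratings for the new and previous file manager with an independent-samples t-test:

```python
# Illustrative sketch (assumed data and method, not from the article):
# comparing post-task ease ratings (e.g., SEQ, 1-7 scale) between the new
# and previous file manager with an independent-samples t-test.
from scipy import stats

new_design = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]  # hypothetical ratings
old_design = [4, 5, 3, 5, 4, 6, 4, 5, 3, 4]  # hypothetical ratings

t_stat, p_value = stats.ttest_ind(new_design, old_design)
print(f"New mean: {sum(new_design)/len(new_design):.1f}, "
      f"old mean: {sum(old_design)/len(old_design):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```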

How are you reading this page? Are you at work? At home? Are you checking your phone or email as you read? Are you eating? Are pets or family members nearby? Although we rarely interact with websites or software in isolation or without distractions, for decades when we spoke of usability testing we pictured a quiet room with a two-way mirror hiding the observers, who communicated

Read More