Usability Testing

Many researchers are familiar with the Hawthorne Effect, in which people act differently when they know they're being observed. It was named after the Hawthorne Works factory, where researchers found workers performed better not because of increased lighting but because they were being watched. This observer effect happens not only with people but also with particles. In physics, the mere act of observing a phenomenon (like subatomic particle movement)

Read More

Facilitating a usability test is a skill. With enough of the right practice, you’ll get better at facilitating and will run more effective usability test sessions. A solid foundation in both the theory and the practical application of facilitating a usability test will help you become a capable facilitator. To help, here are ten resources for both beginner and intermediate usability test facilitators. 1. Read about

Read More

One of the fundamental principles behind usability testing is to let the participants actually use the software, app, or website and see what problems might emerge. By simulating use and not interrupting participants, you can detect and fix problems before users encounter them, get frustrated, and stop using and recommending your product. So while there’s good reason to shut up and watch users, should a

Read More

The fundamental idea behind usability testing is that the interface creator is not the user. We can broaden the idea of an interface to encompass more than websites and software, as Don Norman famously illustrated in his book The Design of Everyday Things. An interface is the point where people and systems interact. An interface can be words, images, light switches, door handles, or complex

Read More

Content is king, whether it’s for books, movies, audiobooks, news sites, or entertainment websites. When you have good content, people will come and stay. But if people can’t find the content, or there’s too much friction in the experience, you’ll likely lose your audience even with killer content. An increasing number of consumers now subscribe to a premium content service like Netflix, Hulu, or

Read More

Facilitation is a valuable skill for measuring the user experience. A good facilitator ensures sessions run smoothly, makes participants comfortable, and extracts the right data even with the most difficult scenarios, stakeholders, or participants. Joe Dumas and Beth Loring wrote a great guidebook that is an essential read for anyone interested in facilitating a usability session. Even though it's almost a decade old, it's still

Read More

Having participants think aloud is a valuable tool in UX research. It's primarily used to understand participants' mental processes, which can ultimately uncover problems with an interface. It has a rich history in the behavioral sciences that dates back over a century. Despite its value, it's not without controversy. Some research has shown that, depending on the activity, having participants think aloud can

Read More

They're the stuff of movies, TV shows, and usability labs. One-way mirrors (or two-way mirrors, depending on who you ask) are an enduring symbol of interrogation, psychology experiments, focus groups, and usability tests. This special piece of glass works because one side is brightly lit, allowing people on the darker side to inconspicuously observe what's happening on the lit side. The technology is simple and actually quite old, with a

Read More

"I'd like you to think aloud as you use the software." Having participants think aloud as they use an interface is a cornerstone technique of usability testing. It's been around for much of the history of user research to help uncover problems in an interface. Despite its popularity, there is surprisingly little consistency on how to properly apply the think aloud technique. Because of that,

Read More

One of the best ways to make metrics more meaningful is to compare them to something. The comparison can be the same data from an earlier time point, a competitor, a benchmark, or a normalized database. Comparisons help in interpreting data, both in customer research specifically and in data analysis more generally. For example, we're often interested in customers' brand attitudes both before and after
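For instance, one common way to make such a comparison concrete is to test a sample of SUS scores against the frequently cited SUS average of 68. The sketch below is a minimal illustration using hypothetical scores and a one-sample t-test; the data, sample size, and choice of benchmark are assumptions for demonstration, not figures from the article.

```python
# Minimal sketch: comparing a usability metric to a benchmark (hypothetical data).
from scipy import stats

sus_scores = [72.5, 65.0, 80.0, 77.5, 70.0, 85.0, 62.5, 75.0]  # hypothetical SUS ratings
benchmark = 68  # commonly cited average SUS score

# One-sample t-test: is the observed mean reliably different from the benchmark?
result = stats.ttest_1samp(sus_scores, popmean=benchmark)
mean_score = sum(sus_scores) / len(sus_scores)

print(f"Mean SUS = {mean_score:.1f}")
print(f"t = {result.statistic:.2f}, two-sided p = {result.pvalue:.3f}")
```

The same pattern extends to the other comparisons mentioned above: a paired t-test for before-and-after measurements from the same participants, or an independent-samples t-test when comparing against a competitor's scores.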

Read More