Usability Testing

In an ideal world, users would be involved in every stage of product development, including requirements gathering, iterative prototype testing, and post-release testing. However, there are a lot of reasons why testing with users doesn't happen. Among the most common are: Time: Running moderated test sessions takes time to plan and conduct. A developer, product manager, or single user researcher with too many priorities


Which product is the most usable? One of the primary goals of a comparative study is to understand which product or website performs the best or worst on usability metrics such as completion rates or perceptions of usability. Comparisons can be made between competitive products or alternate design concepts. When conducting a comparative usability study, a number of variables make the setup more complicated than
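Comparisons of completion rates between two products are commonly made with a two-proportion test. Below is a minimal sketch of a pooled two-proportion z-test; the function and the sample data are illustrative assumptions, not figures from any study discussed here.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for comparing two task-completion rates."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pool the two samples to estimate a common proportion under H0.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical example: 18 of 20 users complete the task on Product A
# versus 12 of 20 on Product B.
z = two_proportion_z(18, 20, 12, 20)
```

With |z| greater than about 1.96, the difference would be statistically significant at the conventional 0.05 level (two-tailed).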


I received the following email last week about an upcoming change in the learning management system used at the university where I'm an adjunct professor: "Blackboard has grown to become an essential tool for teaching since it was first adopted in 2000. Over the past few years, however, users have become increasingly dissatisfied with Blackboard from both ease-of-use and technical perspectives." Emails like this are


It's that time of year again: March Madness. The Madness in March comes from the NCAA college basketball tournament, with unanticipated winners and losers and dozens of games packed into the final days of March. It's also the time of year when a lot of people start working directly with probability, whether they know it or not. Individuals and groups of colleagues around the US


Seeing is believing. Observing just a handful of users interact with a product can be more influential than reading pages of a professionally done report or polished presentation. But what if a stakeholder only has time to watch two or just one of the users in a usability study? Are there circumstances where watching some users is worse than watching no users at all? Watching


A lot of planning goes into a usability test. Part of good planning means being prepared for the many things that can go wrong. Here are the ten most common problems we encounter in usability testing and some ideas for how to avoid or manage them when they inevitably occur. Users don't show up: No-shows are a fact of life for usability testing. We


While there are books written on measuring usability, it can be easy to get overwhelmed by the details and intimidated by the thought of having to deal with numbers. If I had to use five words to describe some best practices and some core principles of measuring usability, here they are. 1. Multi-method: There are a number of methods to measure and improve the user


A key principle of usability testing is that users should simulate actual usage as much as possible. That means using realistic tasks that represent users' most common goals on the website or app they'll be working with 'out in the wild.' Usability testing is inherently contrived, but we still want to provide as realistic a testing environment as possible. For public-facing websites this is


After I conducted my first usability test in the 1990s, I was struck by two things: just how many usability problems are uncovered, and how some problems repeat after observing just a few users. In almost every usability test I've conducted since then, I've continued to see this pattern. Even after running 5 to 10 users in a moderated study, there are usually too many
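The pattern of problems repeating after just a few users is often described with the standard binomial problem-discovery model: if a problem affects a proportion p of users, observing n users is expected to uncover 1 - (1 - p)^n of such problems. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
def discovery_rate(p, n):
    """Expected proportion of usability problems found after n users,
    where each problem affects a proportion p of users.
    Model: 1 - (1 - p)**n (binomial problem-discovery formula)."""
    return 1 - (1 - p) ** n

# Hypothetical example: for problems affecting 31% of users,
# observing 5 users is expected to uncover about 84% of them.
rate = discovery_rate(0.31, 5)
```

Note the model assumes a single, known occurrence probability p; in practice, problems vary in how frequently they occur, which is one reason a handful of users still leaves problems undiscovered.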
