Usability Problems

Paraphrasing the statistician George Box, all models are wrong, some are useful, and some can be improved. In a recent article, we reviewed the most common way of modeling problem discovery, which is based on a straightforward application of the cumulative binomial probability formula: P(x ≥ 1) = 1 − (1 − p)^n. Well, it's straightforward if you like playing around with these sorts of formulas like Jim and

Read More
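The discovery formula quoted in the excerpt above is simple enough to sketch in a few lines of code. This is an illustrative implementation only (the function name is ours, not from the article): it computes the chance of observing a problem at least once in a sample of n users, assuming each user independently encounters it with probability p.

```python
def discovery_probability(p: float, n: int) -> float:
    """Cumulative binomial discovery: P(x >= 1) = 1 - (1 - p)^n.

    p: probability that a single user encounters the problem
    n: number of users observed
    """
    return 1 - (1 - p) ** n


# Example: a problem affecting 31% of users has roughly an 84% chance
# of being seen at least once in a five-participant test.
print(round(discovery_probability(0.31, 5), 2))
```

Running the example reproduces the familiar result behind the "five users" heuristic: for p = 0.31, five users surface the problem about 84% of the time.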

Finding and fixing usability problems in an interface leads to a better user experience. Beyond fixing problems with current functionality, participant behavior can also reveal important insights into needed new features. These problems and insights are often best gleaned from observing participants interacting with a website, app, or hardware device during actual or simulated use (that is, during a usability test). With the advent of remote testing platforms like

Read More

Finding and fixing usability problems is fundamental to improving the user experience. To meet that goal, how common problems are (their frequency) and how impactful they are (their severity) should be treated independently. While it's generally straightforward to count how many times you observe a problem in a usability test, assigning severity ratings to problems can be more challenging. Most usability tests will uncover

Read More

Finding and fixing usability problems in an interface leads to a better user experience. We often think of usability testing as the only method for evaluating the usability of a website or application. There are, however, other methods that can help uncover usability problems. These methods can be broken down into empirical (usability testing, surveys, and analytics) or inspection methods (expert review, heuristic evaluation, cognitive

Read More

Ever wonder why you keep encountering the same usability problems on the websites and apps you use? Sure, many organizations don't conduct usability tests on their products, but many do. So what explains the persistence of such problems? Finding and fixing usability problems is one of the most effective ways to improve the user experience on websites, applications, and hardware. But just because a problem is

Read More

Seeing is believing. Observing just a handful of users interact with a product can be more influential than reading pages of a professionally done report or polished presentation. But what if a stakeholder only has time to watch two or just one of the users in a usability study? Are there circumstances where watching some users is worse than watching no users at all? Watching

Read More

After I conducted my first usability test in the 1990s, I was struck by two things: just how many usability problems are uncovered, and how some problems repeat after observing just a few users. In almost every usability test I've conducted since then, I've continued to see this pattern. Even after running 5 to 10 users in a moderated study, there are usually too many

Read More

If only one out of 1000 users encounters a problem with a website, then it's a minor problem. If that sentence bothered you, it should. It could be that that single problem resulted in one visitor's financial information inadvertently being posted to the website for the world to see. Or it could be a slight hesitation with a label on an obscure part of a

Read More

Traveling is hard. Traveling with children is especially hard. A number of things can go wrong, making the trip difficult or even impossible. Some problems are nuisances (sick and/or hungry kids, delayed flights, or the wrong sized rental car), while other problems will lead to failure, which in this case means not reaching your destination on time (forgetting your passport or sleeping in and missing

Read More

Errors happen and unintended actions are inevitable. They are a common occurrence in usability tests and are the result of problems in an interface and imperfect human actions. It is valuable to have some idea about what these are, how frequently they occur, and how severe their impact is. First, what is an error? Slips and Mistakes: Two Types of Errors It can be helpful

Read More