Usability Problem

While humans are error prone, many of the unintended actions, delays, frustrations, and confusions come from interfaces:

Pushing instead of pulling a door
Not being able to set the time on a clock
Unintentionally voting for the wrong candidate on a ballot
Turning on the wrong burner on a stove
Not understanding an error message

A usability problem is anything in a product or website

Read More

A lot happens when you observe a user during a usability test. There's the interface, the utterances, the body language, and the metrics. This rich experience can result in a long list of usability issues. Issues can range from cosmetic (a user finds the type in ALL CAPS a bit much) to catastrophic (data loss), and can also include suggestions (a new feature) or positives (finding defaults helpful). These

Read More

If you ask independent usability evaluators to run a usability test and report the problems found, you'll get largely different lists of problems. While there are many causes for the differences, one major reason is that evaluators disagree on what constitutes a problem. Usability is often at odds with security and business interests: what's best for the user may not be best for the organization.

Read More

Let's imagine you are testing five users as part of an iterative testing approach to find and fix problems. During the test, only one user encounters a problem with logging in. Fixing this particular problem would take a lot of effort, and the small sample size is met with skepticism from the overburdened and overcommitted development team. They say "We really don't know whether

Read More
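
One way to reason about that skepticism is to put a confidence interval around the observed rate. Here's a minimal Python sketch, assuming an adjusted Wald interval (the function and numbers are illustrative, not taken from the post above):

```python
from math import sqrt

def adjusted_wald(x, n, z=1.96):
    """Adjusted Wald confidence interval for a proportion; it behaves
    well at the small sample sizes typical of usability tests."""
    p_adj = (x + z**2 / 2) / (n + z**2)          # adjusted proportion
    se = sqrt(p_adj * (1 - p_adj) / (n + z**2))  # adjusted standard error
    lower = max(0.0, p_adj - z * se)
    upper = min(1.0, p_adj + z * se)
    return lower, upper

# 1 of 5 users hit the login problem.
low, high = adjusted_wald(1, 5)
print(f"95% CI for the occurrence rate: {low:.2f} to {high:.2f}")
```

The interval runs from roughly 2% to 64%, so a problem seen by just one of five users could still plausibly affect a majority of users.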

Nielsen derives his "five users is enough" formula from a paper he and Tom Landauer published in 1993. Before Nielsen and Landauer, James Lewis of IBM proposed a very similar problem-detection formula in 1982, based on the binomial probability formula.[4] Lewis stated that the binomial probability theorem can be used to determine the probability that a problem of probability p will occur r times

Read More
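
The binomial formulation Lewis describes is easy to make concrete. Here's a short sketch; the p = 0.31 average occurrence rate often quoted alongside the five-user guideline is used purely for illustration:

```python
from math import comb

def prob_exactly_r(p, n, r):
    """Binomial probability that a problem with occurrence
    probability p is seen by exactly r of n users."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

def prob_at_least_once(p, n):
    """Chance the problem surfaces at least once: 1 - (1 - p)^n."""
    return 1 - (1 - p)**n

# With p = 0.31 and n = 5 users, roughly 84% of such
# problems are seen at least once.
print(f"{prob_at_least_once(0.31, 5):.3f}")  # 0.844
```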