Usability Problem

While humans are error prone, many of the unintended actions, delays, frustrations, and confusions come from interfaces:

- Pushing instead of pulling a door
- Not being able to set the time on a clock
- Unintentionally voting for the wrong candidate on a ballot
- Turning on the wrong burner on a stove
- Not understanding an error message

A usability problem is anything in a product or website

Read More

A lot happens when you observe a user during a usability test. There's the interface, the utterances, the body language, and the metrics. This rich experience can result in a long list of usability issues. Issues can range from cosmetic (a user finds the type in ALL CAPS a bit much) to catastrophic (data loss); they can also include suggestions (a new feature) or positives (finding defaults helpful). These

Read More

If you ask independent usability evaluators to run a usability test and report the problems found, you'll get largely different lists of problems. While there are many causes for the differences, one major reason is that evaluators disagree on what constitutes a problem. Usability is often at odds with security and business interests: what's best for the user may not be best for the organization.

Read More

Let's imagine you are testing five users as part of an iterative testing approach to find and fix problems. During the test, only one user encounters a problem with logging in. Fixing this particular problem would take a lot of effort, and the small sample size is met with skepticism from the overburdened and overcommitted development team. They say, "We really don't know whether

Read More
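The skepticism in that excerpt can be quantified. Below is a minimal sketch (my own illustration, not code from the article) that applies the adjusted-Wald confidence interval, a common recommendation for binomial proportions at small sample sizes, to 1 problem occurrence out of 5 users; the function name and the 95% level are choices made for this example.

import math

def adjusted_wald(successes, n, z=1.96):
    # Adjusted-Wald interval: add z^2/2 pseudo-successes and z^2/2
    # pseudo-failures before computing the classic Wald interval.
    # It behaves much better than plain Wald at small n.
    p_adj = (successes + z**2 / 2) / (n + z**2)
    se = math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return max(0.0, p_adj - z * se), min(1.0, p_adj + z * se)

low, high = adjusted_wald(1, 5)  # 1 of 5 users hit the login problem
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 2% to 64%

With 1 of 5 users affected, the 95% interval runs from roughly 2% to 64%: the data alone cannot say whether the login problem afflicts a rare user or most users, which is exactly the development team's objection.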

Nielsen derives his "five users is enough" formula from a paper he and Tom Landauer published in 1993. Before Nielsen and Landauer, James Lewis of IBM proposed a very similar problem-detection formula in 1982, based on the binomial probability formula.[4] Lewis stated: The binomial probability theorem can be used to determine the probability that a problem of probability p will occur r times

Read More
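The quote is truncated, but the formula it describes is the standard binomial one. Here is a short sketch (my own illustration, not code from the cited papers) of that formula alongside the discovery formula 1 - (1 - p)^n on which Nielsen's five-user claim rests.

from math import comb

def prob_occurs_r_times(p, n, r):
    # Binomial probability that a problem with per-user
    # probability p is seen exactly r times among n users.
    return comb(n, r) * p**r * (1 - p)**(n - r)

def prob_discovered(p, n):
    # Probability the problem is seen at least once in n users.
    return 1 - (1 - p)**n

# With p = 0.31 (Nielsen and Landauer's average problem frequency),
# five users surface a given problem at least once ~84% of the time.
print(prob_discovered(0.31, 5))         # ~0.84
print(prob_occurs_r_times(0.31, 5, 1))  # ~0.35, exactly one of five

The often-quoted claim that five users find about 85% of usability problems comes from rounding this 1 - (1 - p)^n calculation at p = 0.31.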