Errors

Errors can provide a lot of diagnostic information about the root causes of UI problems and the impact such problems have on the user experience. The frequency of errors—even trivial ones—also provides a quantitative description of the performance of a task. The process of observing and coding errors is more time-consuming and dependent on researcher judgment than recording task completions or task times. Consequently, errors …

Read More
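
The excerpt above treats error frequency as a quantitative description of task performance, which implies tallying coded errors per participant and per task. Below is a minimal Python sketch of one way such a tally might be summarized; the task names, counts, and the use of an adjusted Wald (Agresti-Coull) interval for the proportion of participants committing at least one error are illustrative assumptions, not taken from the article.

import math

# Hypothetical coded data: each inner list holds the number of errors
# observed for one participant on one task.
errors_per_task = {
    "Task 1": [0, 1, 0, 2, 0, 0, 1, 0],
    "Task 2": [1, 3, 0, 1, 2, 1, 0, 2],
}

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) interval for a proportion."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

for task, counts in errors_per_task.items():
    n = len(counts)
    mean_errors = sum(counts) / n                  # average errors per participant
    with_error = sum(1 for c in counts if c > 0)   # participants making at least one error
    lo, hi = adjusted_wald_ci(with_error, n)
    print(f"{task}: {mean_errors:.2f} errors/participant; "
          f"{with_error}/{n} made an error (95% CI {lo:.0%} to {hi:.0%})")

Reporting both the mean error count and the proportion of participants affected keeps the metric interpretable even when a single participant commits many errors.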

Errors happen and unintended actions are inevitable. They are a common occurrence in usability tests and are the result of problems in an interface and imperfect human actions. It is valuable to have some idea about what these are, how frequently they occur, and how severe their impact is. First, what is an error?

Slips and Mistakes: Two Types of Errors

It can be helpful …

Read More
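
The second excerpt distinguishes slips from mistakes and asks how frequent and how severe errors are. A small Python sketch of how coded observations might be tallied by type and severity follows; the CodedError structure, the slip/mistake labels, and the 1-to-3 severity scale are assumptions made for illustration, not the article's coding scheme.

from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedError:
    task: str
    kind: str      # "slip" (right intention, wrong action) or "mistake" (wrong intention)
    severity: int  # hypothetical scale: 1 = minor, 2 = moderate, 3 = severe

# Hypothetical observations coded during a usability test session.
observations = [
    CodedError("Checkout", "slip", 1),
    CodedError("Checkout", "mistake", 3),
    CodedError("Search", "slip", 2),
    CodedError("Search", "slip", 1),
    CodedError("Checkout", "mistake", 2),
]

by_kind = Counter(o.kind for o in observations)          # frequency by error type
severe = [o for o in observations if o.severity >= 3]    # errors with the worst impact

print("Error frequency by type:", dict(by_kind))
print("Severe errors:", [(o.task, o.kind) for o in severe])

Separating type from severity in the coding keeps the two questions in the excerpt (how often do errors occur, and how much do they hurt) answerable from the same data.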