PURE

Methods evolve and adapt. The same is true of UX methods, many of which have evolved from earlier methods in disparate fields and date back decades. The usability profession itself can trace its roots to the industrial revolution. The think aloud protocol, one of the signature methods of usability testing, can trace its roots to psychoanalysis, with influence from Freud, Wundt, and Skinner dating back over

Read More

The PURE (Practical Usability Rating by Experts) method is an analytic technique that identifies potential problems users may encounter with an interface. In a PURE evaluation, evaluators familiar with UX principles and heuristics break down tasks into small steps. They then rate each step, from 1 to 3, based on a pre-defined rubric. The higher the score, the more difficult the experience. As we continue to

Read More
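
To make the scoring concrete, here is a minimal sketch in Python of how PURE-style arithmetic could be rolled up. It is an illustration, not the published method: it assumes each evaluator's 1–3 step ratings are summed into a task-level score and that scores from multiple evaluators are averaged, and the example task and ratings are hypothetical.

```python
# Minimal, illustrative sketch of PURE-style scoring (not the published method).
# Assumptions beyond the excerpt above: each evaluator rates every step of a
# task from 1 (easy) to 3 (difficult), a task score is the sum of its step
# ratings, and the task scores from multiple evaluators are averaged.

from statistics import mean

def task_score(step_ratings):
    """Sum one evaluator's 1-3 step ratings into a task-level score."""
    if any(r not in (1, 2, 3) for r in step_ratings):
        raise ValueError("Each step rating must be 1, 2, or 3.")
    return sum(step_ratings)

def pure_task_score(ratings_by_evaluator):
    """Average the task-level scores across evaluators."""
    return mean(task_score(steps) for steps in ratings_by_evaluator)

# Hypothetical example: a five-step task rated by two evaluators.
evaluator_ratings = [
    [1, 1, 2, 3, 1],  # evaluator A: step 4 rated as difficult
    [1, 2, 2, 3, 1],  # evaluator B
]
print(pure_task_score(evaluator_ratings))  # 8.5 -- higher means more difficult
```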

In an earlier article, I described the PURE methodology. PURE stands for Practical Usability Rating by Experts. Evaluators familiar with UX principles and heuristics decompose tasks into small steps and rate each step based on a pre-defined scoring system (called a rubric), as shown in Figure 1.

Figure 1: Scoring rubric for PURE.

The PURE method is analytic. It's not based on

Read More