Figure 4: Relationship between SEQ scores, completion rates, and task times.

Our attitudes both reflect and affect our actions. What we think affects what we do, and what we do affects what we think. It's not a perfect relationship, of course: what people say or think doesn't always directly correspond to actions in easily predictable ways. Still, understanding and measuring user attitudes early and often can provide a good indication of likely future behaviors. Attitudes can be…


The Single Ease Question (SEQ) is a seven-point rating scale used to assess how difficult users find a task. It's administered immediately after a user attempts a task in a usability test. Once the user has attempted the task, ask this simple question: Overall, how difficult or easy was the task to complete? Use the seven-point rating scale format below. Labels and values: we typically label…
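As a rough illustration (not from the article), here is a minimal Python sketch of how SEQ responses might be recorded and averaged for a single task, assuming the common convention that 1 = Very Difficult and 7 = Very Easy:

# Minimal sketch: summarizing SEQ ratings collected after each task attempt.
# Assumes responses are integers on the 1-7 scale (1 = Very Difficult, 7 = Very Easy).
from statistics import mean, stdev

def mean_seq(responses):
    """Return the average SEQ rating for one task."""
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("SEQ responses must be on the 1-7 scale")
    return mean(responses)

# Example: ratings from ten participants attempting the same task
ratings = [6, 7, 5, 6, 4, 7, 6, 5, 6, 7]
print(f"Mean SEQ: {mean_seq(ratings):.2f} (SD {stdev(ratings):.2f})")

Averaging the seven-point ratings this way yields a single ease score per task that can be compared across tasks or examined alongside completion rates and task times, as in Figure 4 above.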


Have you ever watched a user perform horribly during a usability test, only to watch in amazement as they rate the task as very easy? I have, and for as long as I've been conducting usability tests, I've heard about this contradictory behavior from other researchers. Such occurrences have led many to discount the collection of satisfaction data altogether. In fact, I've often heard…
