SEQ

One of the primary goals of measuring the user experience is to see whether design efforts actually make a quantifiable difference over time. A regular benchmark study is a great way to institutionalize the idea of quantifiable differences. Benchmarks are most effective when done at regular intervals (e.g., quarterly or yearly) or after significant design or feature changes. A UX benchmark is something akin to

Read More

If users can’t complete a task, not much else matters. Consequently, task completion is one of the fundamental UX measures and one of the most commonly collected metrics, even in small-sample formative studies and studies of low-fidelity prototypes. Task completion is usually easy to collect, and it’s easy to understand and communicate. It’s typically coded as a binary measure (success or fail) dependent on a participant

Read More
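As a rough sketch of the binary coding described above (the data here is invented), a completion rate is just the proportion of successes; for small samples, an adjusted-Wald interval is one common way to put a confidence interval around it:

```python
import math

# Hypothetical results from a small-sample study:
# 1 = task success, 0 = failure (binary success/fail coding).
results = [1, 1, 0, 1, 1, 0, 1, 1]

n = len(results)
x = sum(results)
completion_rate = x / n

# Adjusted-Wald 95% interval: add z^2/2 successes and z^2 trials,
# then compute the usual Wald interval on the adjusted proportion.
z = 1.96
p_adj = (x + z**2 / 2) / (n + z**2)
margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
low, high = max(0.0, p_adj - margin), min(1.0, p_adj + margin)

print(f"Completion rate: {completion_rate:.0%}")  # 75%
print(f"95% CI: {low:.0%} to {high:.0%}")
```

The adjustment pulls the estimate toward 50% and widens the interval, which behaves better than the plain Wald interval when n is small.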

Numbers are universally understood across cultures, geography, and languages. But when those numbers are applied to sentiments (for example, satisfaction, agreement, or intention), do people respond universally, or does a 4 on a five-point scale elicit different reactions based on culture or geography? Many international organizations use similar sets of measures (such as satisfaction or the Net Promoter Score) to compare countries and regions. If


Read More

Our attitudes both reflect and affect our actions: what we think affects what we do, and what we do affects what we think. It’s not a perfect relationship, of course. What people say or think doesn’t always directly correspond to actions in easily predictable ways. Still, understanding and measuring user attitudes early and often can provide a good indication of likely future behaviors. Attitudes can be

Read More

The Single Ease Question (SEQ) is a 7-point rating scale used to assess how difficult users find a task. It's administered immediately after a user attempts a task in a usability test: ask them this simple question: Overall, how difficult or easy was the task to complete? Use the seven-point rating scale format below. Labels and values: We typically label

Read More
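A minimal sketch of scoring the SEQ (the responses here are invented): each participant gives one rating on the 7-point scale, and results are typically summarized with the mean:

```python
# Hypothetical SEQ responses from seven participants on the 7-point
# scale described above (higher values = easier task).
ratings = [6, 7, 5, 6, 7, 4, 6]

mean_seq = sum(ratings) / len(ratings)
print(f"Mean SEQ: {mean_seq:.2f}")
```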

Have you ever watched a user perform horribly during a usability test, only to watch in amazement as they rate the task as very easy? I have, and for as long as I've been conducting usability tests, I've heard of this contradictory behavior from other researchers. Such occurrences have led many to discount the collection of satisfaction data altogether. In fact, I've often heard

Read More