Satisfaction

In an earlier article, we examined the relationship between NPS and future company growth. We found the Net Promoter Score had a modest correlation with future growth (r = .35) in the 14 industries we examined. In the 11 industries that had a positive correlation, the average correlation with two-year revenue growth was higher at r = .44. This ranged from a high of r

Read More

There is more to a job than just the pay. The type of work you do and the people you work with have a lot to do with a sense of satisfaction. Consequently, job satisfaction has been measured extensively for decades in many industries. To gauge how satisfied UX practitioners are with their jobs, the UXPA has been asking respondents a job satisfaction question since

Read More

Customer satisfaction is a staple of company measurement. It’s been used for decades to understand how customers feel about a product or experience. Poor satisfaction measures are an indication of unhappy customers, and unhappy customers generally won’t purchase again, leading to poor revenue growth. But is satisfaction the wrong measure for most companies? That’s certainly the claim Fred Reichheld has made and advocated the Net

Read More

By far the most common and fundamental measure of customer attitudes is customer satisfaction. Customer satisfaction is a measure of how well a product or service experience meets customer expectations. It's a staple of customer analytic scorecards as a barometer of how well a product or company is performing. You can measure satisfaction on everything from a brand, a product, a feature, a website, or

Read More

Asking questions immediately after a user attempts a task complements task-performance data such as task times and completion rates. Post-task satisfaction data is a bit different from the questionnaires asked after a usability test (such as the SUS).  There is a strong correlation (r > .6) between post-task ratings and post-test ratings: knowing one predicts about 36% of the variance in the other[pdf]. However, even this relatively
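The 36% figure in the excerpt above follows from squaring the correlation coefficient; a quick sketch of that arithmetic:

```python
# Shared variance between two measures is the square of their correlation.
# r = .6 is the lower bound cited for post-task vs. post-test ratings.
r = 0.6
shared_variance = r ** 2  # proportion of variance one rating explains in the other
print(f"{shared_variance:.0%}")  # prints 36%
```

This is the standard coefficient-of-determination interpretation of r, not a calculation specific to any one questionnaire.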

Read More

In a usability test you typically collect some type of performance data: task times, completion rates, and perhaps errors or conversion rates. It is also a good idea to use some type of questionnaire that measures the perceived ease of use of an interface. This can be done immediately after a task using a few questions (post-task questionnaires). It can also be done after the usability testing

Read More