Questionnaire

Surveys often suffer from having too many questions. Many items are redundant or don’t measure what they intend to measure. Even worse, survey items are often the result of “design by committee” with more items getting added over time to address someone’s concerns. Let's say an organization uses the following items in a customer survey: Satisfaction, Importance, Usefulness, Ease of use, Happiness, Delight, and Net Promoter.
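When items overlap this much, their responses tend to correlate very highly. As a rough illustration (not from the article, and using made-up 1–5 ratings from hypothetical respondents), a few lines of Python can flag item pairs whose answers track each other almost perfectly:

# Rough sketch with hypothetical ratings: flag survey items whose responses
# correlate so highly that one of them is probably redundant.
from statistics import correlation      # Python 3.10+
from itertools import combinations

responses = {                           # made-up 1-5 ratings, five respondents
    "Satisfaction": [5, 4, 4, 3, 5],
    "Usefulness":   [5, 4, 4, 3, 5],    # identical pattern -> r = 1.0
    "Ease of use":  [4, 5, 3, 4, 4],
}

for a, b in combinations(responses, 2):
    r = correlation(responses[a], responses[b])
    note = "  <- possibly redundant" if abs(r) > 0.8 else ""
    print(f"{a} vs. {b}: r = {r:.2f}{note}")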

Read More

The effectiveness of surveys starts with the quality of the questions. Question writing is an art and a science. You need to balance your needs and the needs of the organization commissioning the survey against the burden on the respondents. Here's a summary of 12 useful guidelines we use, pulled from books and articles: 1. Keep questions short, but not too short. Succinct writing and good

Read More

Questionnaires are an effective way to gauge sentiments toward constructs like usability, loyalty, and the quality of the website user experience. A standardized questionnaire is one that has gone through psychometric validation. That means the items used in the questionnaire have been shown to:
1. Offer consistent responses (reliability)
2. Measure what they are intended to measure (validity)
3. Differentiate between good
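As a concrete (hypothetical) illustration of the reliability piece, Cronbach's alpha is one common way to estimate how consistently a set of items measures the same construct. The short Python sketch below uses made-up ratings; it is not data or code from the article:

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total scores))
from statistics import pvariance

ratings = [          # hypothetical respondents (rows) x items (columns), 1-5
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(ratings[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*ratings)]  # variance of each item
total_var = pvariance([sum(row) for row in ratings])   # variance of summed scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")               # closer to 1 = more consistent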

Read More

Was a task difficult or easy to complete? Performance metrics are important to collect when improving usability, but perception matters just as much. Asking a user to respond to a questionnaire immediately after attempting a task provides a simple and reliable way of measuring task-performance satisfaction. Questionnaires administered at the end of a test, such as the SUS, measure test-level satisfaction. There are numerous questionnaires to
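The SUS mentioned above is a good example of how such questionnaires turn responses into a metric: ten 5-point items are rescored and summed to a 0–100 value. A minimal sketch of the standard SUS scoring, using made-up responses, looks like this:

# Standard SUS scoring: odd-numbered items contribute (response - 1), even-numbered
# items contribute (5 - response); the sum is multiplied by 2.5 for a 0-100 score.
def sus_score(responses):
    assert len(responses) == 10, "SUS has 10 items rated on a 1-5 scale"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 5, 1]))   # hypothetical respondent -> 85.0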

Read More

Have you ever watched a user perform horribly during a usability test, only to watch in amazement as they rate the task as very easy? I have, and for as long as I've been conducting usability tests, I've heard about this contradictory behavior from other researchers. Such occurrences have led many to discount the collection of satisfaction data altogether. In fact, I've often heard

Read More