Questionnaires

Many researchers are familiar with the SUS, and for good reason. It's the most commonly used and widely cited questionnaire for assessing the perceived ease of using a system (software, website, or interface). Despite being short (10 items), the SUS has a fair amount of redundancy given that it measures only one construct (perceived usability). While some redundancy is good for improving reliability, shorter questionnaires are

Read More
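The excerpt above notes that the SUS has 10 items measuring a single construct. For readers who want to work with SUS data, the standard published scoring rule (not spelled out in the excerpt) is: odd-numbered, positively worded items contribute (response − 1), even-numbered, negatively worded items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal Python sketch (function name is illustrative):

```python
def sus_score(responses):
    """Compute a SUS score from ten responses on a 1-5 agreement scale.

    Standard scoring: odd-numbered (positively worded) items contribute
    (response - 1); even-numbered (negatively worded) items contribute
    (5 - response). The sum (0-40) is multiplied by 2.5 for a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("The SUS has exactly 10 items")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Strong agreement with positive items, strong disagreement with negative
# items yields the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Neutral responses (all 3s) land at the scale midpoint of 50, which is useful as a quick sanity check on a scoring implementation.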

In a usability evaluation it's good practice to measure both how users perform on realistic tasks and what they think about the usability of the interface. But what exactly DO you ask the users? "Is this usable?" … "Is the interface easy to use?" ... "Did you like using the app?" While you can cobble together a few questions yourself or with your product team,

Read More

There isn't a usability thermometer to tell us how usable an interface is. We observe the effects and indicators of bad interactions and then improve the design. There isn't a single silver-bullet technique or tool that will uncover all problems. Instead, practitioners are encouraged to use multiple techniques and triangulate to arrive at a more complete set of problems and solutions. Triangles, of course, have

Read More

There is a long tradition of including items in questionnaires that are phrased both positively and negatively: "This website was easy to use." "It was difficult to find what I needed on this website." The major reason for alternating item wording is to minimize extreme response bias and acquiescent bias. However, some recent research [pdf] Jim Lewis and I conducted found little evidence for these biases. We found

Read More
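When a questionnaire alternates positive and negative item wording as described above, negatively worded items are typically reverse-coded before responses are combined, so that a higher number always means a more favorable response. A minimal sketch of that common analysis step (the function name and the 5-point default are assumptions for illustration):

```python
def reverse_code(response, scale_min=1, scale_max=5):
    """Reverse-code a negatively worded item so that higher values
    always indicate a more favorable response.

    On a 1-5 scale, 1 maps to 5, 2 maps to 4, and so on.
    """
    return scale_max + scale_min - response

# "It was difficult to find what I needed on this website."
# A response of 2 (disagree) becomes 4 in the favorable direction:
print(reverse_code(2))  # 4
```

The same formula works for any symmetric scale (e.g., `scale_max=7` for a 7-point scale), which is why it is usually written in terms of the scale endpoints rather than hard-coded.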