Questionnaires

When done well, surveys are an excellent method for collecting data quickly from a geographically diverse population of users, customers, or prospects. In an earlier article, I described 15 types of the most common rating scale items and when you might use them. While rating scales are an important part of a survey, they aren’t the only part. Another key ingredient to a successful survey

Read More

What does 4.1 on a 5-point scale mean? Or 5.6 on a 7-point scale? Interpreting rating scale data can be difficult in the absence of an external benchmark or historical norms. A popular technique often used by marketers to interpret rating scale data is the so-called “top box” and “top-two box” scoring approach. For example, on a 5-point scale, such as the one shown in Figure

Read More
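To make the top-box idea concrete, here is a minimal Python sketch (not from the article; the ratings are made-up illustrative data) that computes top-box and top-two-box percentages for a set of 5-point responses:

```python
# Hypothetical 5-point satisfaction ratings (1 = very dissatisfied, 5 = very satisfied)
ratings = [5, 4, 3, 5, 2, 4, 4, 1, 5, 3]

n = len(ratings)
top_box = sum(1 for r in ratings if r == 5) / n      # share choosing the top response option
top_two_box = sum(1 for r in ratings if r >= 4) / n  # share choosing one of the top two options

print(f"Top-box: {top_box:.0%}")          # 30% for this made-up sample
print(f"Top-two-box: {top_two_box:.0%}")  # 60% for this made-up sample
```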

How satisfied are you with your life? How happy are you with your job or your marriage? Are you extroverted or introverted? It’s hard to capture the fickle nature of attitudes and constructs in any measure. It can be particularly hard to do that with just one question or item. Consequently, psychology, education, marketing, and user experience have a long history of recommending multiple items

Read More
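One practical payoff of using multiple items is that you can form a composite score and estimate its internal-consistency reliability. The sketch below is illustrative only (the three-item data are hypothetical, and Cronbach's alpha is used here as one common reliability estimate, not something prescribed by the article):

```python
import numpy as np

# Hypothetical responses: 5 respondents x 3 items on a 5-point agreement scale
items = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 3],
])

composite = items.mean(axis=1)  # one score per respondent from multiple items

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)

print("Composite scores:", composite)
print(f"Cronbach's alpha: {alpha:.2f}")
```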

Many researchers are familiar with the SUS, and for good reason. It’s the most commonly used and widely cited questionnaire for assessing the perception of the ease of using a system (software, website, or interface). Despite being short—10 items—the SUS has a fair amount of redundancy given it only measures one construct (perceived usability). While some redundancy is good to improve reliability, shorter questionnaires are

Read More
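As a reminder of how the ten SUS items roll up into a single number, here is a short sketch of the standard SUS scoring procedure: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0 to 100 score. The example responses are hypothetical.

```python
def sus_score(responses):
    """Compute a 0-100 SUS score from ten responses on 1-5 scales.

    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... are items 1, 3, ...
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical respondent
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```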

There isn't a usability thermometer to tell us how usable an interface is. We observe the effects and indicators of bad interactions and then improve the design. There isn't a single silver-bullet technique or tool that will uncover all problems. Instead, practitioners are encouraged to use multiple techniques and triangulate to arrive at a more complete set of problems and solutions. Triangles of course have

Read More

There is a long tradition of including items in questionnaires that are phrased both positively and negatively, for example, "This website was easy to use" and "It was difficult to find what I needed on this website." The major reason for alternating item wording is to minimize extreme response bias and acquiescent bias. However, some recent research [PDF] that Jim Lewis and I conducted found little evidence for these biases.

Read More
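When positively and negatively worded items like the two above are combined into one score, the negatively worded items are typically reverse-scored first so that a higher number always means a more favorable response. A minimal sketch, assuming a 5-point agreement scale and hypothetical responses:

```python
# Hypothetical 5-point responses (1 = strongly disagree, 5 = strongly agree)
responses = {
    "This website was easy to use.": 4,
    "It was difficult to find what I needed on this website.": 2,
}
negatively_worded = {"It was difficult to find what I needed on this website."}

SCALE_MAX = 5
adjusted = [
    (SCALE_MAX + 1 - score) if item in negatively_worded else score
    for item, score in responses.items()
]
overall = sum(adjusted) / len(adjusted)  # both items now point in the same direction
print(adjusted, overall)  # [4, 4] 4.0
```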