Questionnaires

In an earlier article, we reviewed five competing models of delight. The models differed in their details, but most shared the general idea that delight involves an unexpected positive experience; in short, delight is a pleasant surprise. However, there is disagreement about whether you actually need surprise to be delighted. And if you don’t need surprise, then delight is really

Read More

The NASA TLX is a multi-item questionnaire developed in the 1980s by Sandra Hart. NASA is, of course, the US-based space agency famous for “one giant leap for mankind.” TLX stands for Task Load Index, and the questionnaire is a measure of perceived workload. If you conduct mostly digital UX research for consumers (websites and software), you may not have used the NASA TLX, but as interfaces
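In practice, TLX scoring is often simplified: the Raw TLX (RTLX) variant averages the six subscale ratings and skips the original pairwise-weighting step. A minimal sketch with hypothetical ratings (the subscale names are the TLX's; the numbers are made up):

```python
# Raw TLX (RTLX): average the six NASA TLX subscale ratings (0-100 scale),
# skipping the pairwise weighting used in the original procedure.
# Ratings below are hypothetical, for illustration only.
ratings = {
    "Mental Demand": 70,
    "Physical Demand": 10,
    "Temporal Demand": 55,
    "Performance": 40,
    "Effort": 65,
    "Frustration": 30,
}

raw_tlx = sum(ratings.values()) / len(ratings)  # simple unweighted mean
print(raw_tlx)  # → 45.0
```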

Read More

When done well, surveys are an excellent method for collecting data quickly from a geographically diverse population of users, customers, or prospects. In an earlier article, I described the 15 most common types of rating scale items and when you might use them. While rating scales are an important part of a survey, they aren’t the only part. Another key ingredient of a successful survey

Read More

What does 4.1 on a 5-point scale mean? Or 5.6 on a 7-point scale? Interpreting rating scale data can be difficult in the absence of an external benchmark or historical norms. A popular technique, often used by marketers to interpret rating scale data, is the so-called “top box” and “top-two box” scoring approach. For example, on a 5-point scale, such as the one shown in Figure
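Top-box and top-two-box scores are simply the proportion of respondents selecting the highest (or two highest) response options. A minimal sketch, using made-up 5-point responses rather than the figure’s data:

```python
# Top-box and top-two-box scoring for 5-point rating scale responses.
# The responses below are hypothetical, for illustration only.
responses = [5, 4, 3, 5, 2, 4, 5, 1, 4, 3]

# Proportion selecting the single highest point (5)
top_box = sum(r == 5 for r in responses) / len(responses)
# Proportion selecting either of the top two points (4 or 5)
top_two_box = sum(r >= 4 for r in responses) / len(responses)

print(f"Top box: {top_box:.0%}")          # → Top box: 30%
print(f"Top-two box: {top_two_box:.0%}")  # → Top-two box: 60%
```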

Read More

How satisfied are you with your life? How happy are you with your job or your marriage? Are you extroverted or introverted? It’s hard to capture the fickle nature of attitudes and constructs in any measure. It can be particularly hard to do that with just one question or item. Consequently, psychology, education, marketing, and user experience have a long history of recommending multiple items

Read More

Many researchers are familiar with the SUS, and for good reason. It’s the most commonly used and widely cited questionnaire for assessing the perception of the ease of using a system (software, website, or interface). Despite being short (10 items), the SUS has a fair amount of redundancy, given that it measures only one construct (perceived usability). While some redundancy is good for improving reliability, shorter questionnaires are
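For context, the standard SUS scoring rule converts the ten 1–5 responses to a 0–100 score: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5. A minimal sketch with a hypothetical respondent:

```python
# Standard SUS scoring: ten 1-5 responses mapped to a 0-100 score.
def sus_score(responses):
    """responses: ratings for items 1-10, in order, each on a 1-5 scale."""
    assert len(responses) == 10
    total = 0
    for item, r in enumerate(responses, start=1):
        if item % 2 == 1:          # odd items are positively worded
            total += r - 1
        else:                      # even items are negatively worded
            total += 5 - r
    return total * 2.5

# Hypothetical respondent's ratings, for illustration only:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # → 85.0
```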

Read More

There isn't a usability thermometer to tell us how usable an interface is. We observe the effects and indicators of bad interactions and then improve the design. There isn't a single silver-bullet technique or tool that will uncover all problems. Instead, practitioners are encouraged to use multiple techniques and triangulate to arrive at a more complete set of problems and solutions. Triangles, of course, have

Read More

There is a long tradition of including items in questionnaires that are phrased both positively and negatively: “This website was easy to use.” “It was difficult to find what I needed on this website.” The major reason for alternating item wording is to minimize extreme response bias and acquiescence bias. However, some recent research [PDF] that Jim Lewis and I conducted found little evidence for these biases.
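When a questionnaire does mix item wording, negatively worded items are reverse-scored so all items point the same direction before averaging. A minimal sketch with hypothetical responses on a 5-point scale:

```python
# Reverse-scoring a negatively worded item on a 1-5 agreement scale,
# so it can be averaged with positively worded items.
SCALE_MAX = 5

def reverse(r, scale_max=SCALE_MAX):
    """Map a rating to its mirror on the scale: 1<->5, 2<->4, 3 stays 3."""
    return scale_max + 1 - r

# "This website was easy to use."           -> positively worded, keep as-is
# "It was difficult to find what I needed." -> negatively worded, reverse-score
easy = 4        # hypothetical response
difficult = 2   # low agreement with a negative item is a good sign

aligned = [easy, reverse(difficult)]
print(sum(aligned) / len(aligned))  # → 4.0
```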

Read More