Surveys

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. Among those debates, perhaps the most popular question is the “right” number of response options to use. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows

Read More

UX research and UX measurement can be seen as an extension of experimental design. At the heart of experimental design lie variables. Earlier we wrote about different kinds of variables. In short, dependent variables are what you get (outcomes), independent variables are what you set, and extraneous variables are what you can’t forget (to account for). When you measure a user experience using metrics—for example,
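To make the distinction concrete, here is a minimal, hypothetical sketch in Python (the variable names and numbers are invented for illustration, not taken from the article): the independent variable is the design version you set, the dependent variable is the task time you get, and the device is an extraneous variable you account for.

```python
# Hypothetical UX experiment: which variable plays which role.
from dataclasses import dataclass

@dataclass
class Observation:
    design_version: str   # independent variable: what you set (A or B)
    device: str           # extraneous variable: what you can't forget to account for
    task_time_sec: float  # dependent variable: what you get (the outcome)

observations = [
    Observation("A", "desktop", 41.2),
    Observation("A", "mobile", 63.8),
    Observation("B", "desktop", 35.4),
    Observation("B", "mobile", 58.9),
]

# Comparing design A vs. B without separating desktop from mobile
# would let the extraneous variable (device) distort the comparison.
for version in ("A", "B"):
    for device in ("desktop", "mobile"):
        times = [o.task_time_sec for o in observations
                 if o.design_version == version and o.device == device]
        mean_time = sum(times) / len(times)
        print(f"Design {version} on {device}: mean task time {mean_time:.1f}s")
```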

Read More

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations
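As a rough illustration of that point, the hypothetical simulation below (all numbers invented) adds purely random noise to a set of identical “true” ratings and then adds a constant systematic shift as well; the random noise mainly widens the spread around the true mean, while the systematic shift moves the estimate itself, producing the kind of unmodeled bias described above.

```python
# Hypothetical simulation: random vs. systematic measurement error on a rating scale.
import random
import statistics

random.seed(1)
true_scores = [4.0] * 500  # assume every respondent's "true" rating is 4 on a 5-point scale

# Random error: symmetric noise around the true value.
random_error = [t + random.gauss(0, 0.8) for t in true_scores]

# Systematic error: e.g., a confusing item that pushes everyone up half a point,
# plus the same random noise.
systematic_error = [t + 0.5 + random.gauss(0, 0.8) for t in true_scores]

print("True mean:            ", statistics.mean(true_scores))
print("Random error mean:    ", round(statistics.mean(random_error), 2),
      " sd:", round(statistics.stdev(random_error), 2))
print("Systematic error mean:", round(statistics.mean(systematic_error), 2),
      " sd:", round(statistics.stdev(systematic_error), 2))
# The random-error mean stays near 4 (more variability, little bias);
# the systematic-error mean sits near 4.5 (a bias that averaging does not remove).
```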

Read More

In an earlier article, we reviewed five competing models of delight. The models differed in their details, but most shared the general idea that delight involves an unexpected positive experience. In other words, delight is, for the most part, a pleasant surprise. However, there is disagreement on whether you actually need surprise to be delighted. And if you don’t need surprise, then delight is really

Read More

Small changes can have big impacts on rating scales. But to really know what the effects are, you need to test. Differences in labels, number of points, colors, and item wording can often have unexpected effects on survey responses. In an earlier analysis, we compared the effects of using a three-point recommend item compared to an eleven-point recommend item. In that study, we showed how a
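To illustrate what such a test might look like (a sketch with made-up responses, not the actual study data), one approach is to randomly assign respondents to one item format or the other, rescale both formats to a common 0 to 100 range, and compare the means:

```python
# Hypothetical comparison of a 3-point vs. an 11-point recommend item (invented data).
import statistics

three_point = [1, 2, 2, 3, 3, 3, 2, 1, 3, 2]     # responses on a 1-3 scale
eleven_point = [6, 9, 10, 7, 8, 10, 5, 9, 8, 7]  # responses on a 0-10 scale

def rescale(value, low, high):
    """Linearly rescale a response to 0-100 so the two formats are comparable."""
    return (value - low) / (high - low) * 100

a = [rescale(v, 1, 3) for v in three_point]
b = [rescale(v, 0, 10) for v in eleven_point]

diff = statistics.mean(b) - statistics.mean(a)
print(f"3-point mean (0-100):  {statistics.mean(a):.1f}")
print(f"11-point mean (0-100): {statistics.mean(b):.1f}")
print(f"Observed difference:   {diff:.1f} points")
# In practice you'd follow this with a significance test (e.g., a two-sample t-test)
# and an adequate sample size rather than eyeballing the difference.
```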

Read More

It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions gives researchers more options for assessing respondents’ opinions. But the wrong type of question can fail to capture what’s intended, confuse respondents, or even lead to incorrect decisions.

Read More

We are all bombarded with surveys asking us to provide feedback on everything from our experience on a website to our time at the grocery store. Many of us also create surveys. They're an indispensable method for collecting data quickly. Done well, they can be one of the most cost-effective ways to:

Understand the demographics of your customers
Assess brand attitudes
Benchmark perceptions of

Read More

Surveys are a relatively quick and effective way to measure customers' attitudes and experiences along their journey. Not all customer experience surveys are created equal. Depending on the goals, the types of questions and the length will vary. While customer experience surveys can take on any form, it can be helpful to think of them as falling into these seven categories. 1. Relationship/Branding: Customers' attitudes toward a

Read More

The best way to measure the user experience is by observing real users attempting realistic tasks, typically through a usability test or direct observation of users. You may not always be able to do so, though, because of limitations in time, costs, availability of products, or difficulty in finding qualified users. Many large companies have dozens or even thousands of products for both internal

Read More

Which design do you prefer? Which product would you pick? Which website do you find more usable? A cornerstone of customer research is asking about preferences. A common scenario is to show participants a set of designs, or two or more websites, and then ask which design or site they prefer. While customer preferences often don't match customer performance, it's good practice to collect both
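As a small illustration of the preference half of that advice (hypothetical counts; the adjusted-Wald interval is just one common choice for proportions from small samples), the sketch below summarizes how many participants preferred one design and the uncertainty around that proportion:

```python
# Hypothetical preference summary: 14 of 20 participants preferred design A.
import math

prefer_a, n = 14, 20
z = 1.96  # ~95% confidence

# Adjusted-Wald (Agresti-Coull) interval: add z^2/2 successes and z^2/2 failures.
n_adj = n + z**2
p_adj = (prefer_a + z**2 / 2) / n_adj
margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)

low, high = max(0.0, p_adj - margin), min(1.0, p_adj + margin)
print(f"Observed preference for A: {prefer_a / n:.0%}")
print(f"~95% CI: {low:.0%} to {high:.0%}")
# Pairing this with a performance metric (e.g., completion rate or task time)
# shows whether what people say they prefer matches how they actually perform.
```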

Read More