Surveys

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. Perhaps the most popular question in those debates is the “right” number of response options to use. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows

Read More

UX research and UX measurement can be seen as an extension of experimental design. At the heart of experimental design lie variables. Earlier we wrote about different kinds of variables. In short, dependent variables are what you get (outcomes), independent variables are what you set, and extraneous variables are what you can’t forget (to account for). When you measure a user experience using metrics—for example,
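To make the distinction concrete, here's a minimal sketch (the study and variable names are hypothetical) of how the three kinds of variables might be organized for a simple comparison of two designs:

```python
# A minimal sketch (hypothetical study): sorting the variables of a simple
# two-design UX benchmark into what you set, what you get, and what you
# need to account for.

experiment_variables = {
    # Independent variable: what you set (which design each participant sees)
    "independent": {"design_version": ["A", "B"]},
    # Dependent variables: what you get (the outcome metrics)
    "dependent": ["task_completion", "task_time", "ease_rating"],
    # Extraneous variables: what you can't forget to account for
    "extraneous": ["device_type", "prior_experience", "task_order"],
}

for role, variables in experiment_variables.items():
    print(f"{role}: {variables}")
```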

Read More

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations
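To see the difference, here's a minimal simulation (all numbers are hypothetical): random error just widens the spread of responses around the true score, while systematic error, such as a confusingly worded item that nudges everyone in the same direction, shifts the mean and won't wash out with more respondents.

```python
# Minimal simulation (hypothetical numbers): random error adds variability
# around the true score, while systematic error (bias) shifts every
# response in the same direction and goes unnoticed if unmodeled.
import random
import statistics

random.seed(1)
true_score = 70          # hypothetical "true" attitude on a 0-100 scale
n = 500                  # hypothetical number of respondents

# Random error: noisy but centered on the truth.
random_only = [true_score + random.gauss(0, 10) for _ in range(n)]

# Systematic error: the same noise plus a constant bias from, say, a
# confusingly worded item that pushes everyone downward.
biased = [true_score - 8 + random.gauss(0, 10) for _ in range(n)]

print(f"random error only: mean={statistics.mean(random_only):.1f}, "
      f"sd={statistics.stdev(random_only):.1f}")
print(f"systematic error:  mean={statistics.mean(biased):.1f}, "
      f"sd={statistics.stdev(biased):.1f}")
# The random-only mean stays near 70; the biased mean lands near 62,
# even though both samples look equally "precise."
```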

Read More

In an earlier article, we reviewed five competing models of delight. The models differed in their details, but most shared the general idea that delight is composed of an unexpected positive experience. Or, for the most part, delight is a pleasant surprise. However, there is disagreement on whether you actually need surprise to be delighted. And if you don’t need surprise, then delight is really

Read More

Small changes can have big impacts on rating scales. But to really know what the effects are, you need to test. Labels, the number of points, colors, and differences in item wording often have unexpected effects on survey responses. In an earlier analysis, we compared the effects of a three-point recommend item with those of an eleven-point recommend item. In that study, we showed how a
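As a rough illustration of what such a test can look like (a sketch with hypothetical counts, not the analysis from that study), here's one way to compare the proportion of top-box responses between two versions of a recommend item:

```python
# Minimal sketch (hypothetical counts): testing whether two versions of a
# recommend item produce different top-box proportions, using a
# two-proportion z-test computed from scratch.
import math

# Hypothetical results: respondents giving the most favorable response
top_3pt, n_3pt = 180, 400    # three-point version
top_11pt, n_11pt = 150, 400  # eleven-point version

p1, p2 = top_3pt / n_3pt, top_11pt / n_11pt
p_pooled = (top_3pt + top_11pt) / (n_3pt + n_11pt)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_3pt + 1 / n_11pt))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"3-point top box: {p1:.1%}, 11-point top box: {p2:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```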

Read More

It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions gives the researcher more options for assessing respondents’ opinions. But the wrong type of question can fail to capture what’s intended, confuse respondents, or even lead to incorrect decisions.

Read More

We all are bombarded with surveys asking us to provide feedback on everything from our experience on a website to our time at the grocery store. Many of us also create surveys. They’re an indispensable method for collecting data quickly. Done well, they can be one of the most cost-effective ways to:
- Understand the demographics of your customers
- Assess brand attitudes
- Benchmark perceptions of

Read More

Surveys are a relatively quick and effective way to measure customers’ attitudes and experiences along their journey. Not all customer experience surveys are created equal. Depending on the goals, the type of questions and length will vary. While customer experience surveys can take on any form, it can be helpful to think of them as falling into these seven categories.
1. Relationship/Branding: Customers’ attitudes toward a

Read More

The best way to measure the user experience is by observing real users attempting realistic tasks. This is best done through a usability test or direct observation of users. You may not always be able to do so, though, because of limitations in time, costs, availability of products, or difficulty in finding qualified users. Many large companies have dozens or even thousands of products for both internal

Read More

Which design do you prefer? Which product would you pick? Which website do you find more usable? A cornerstone of customer research is asking about preferences. A common scenario is to show participants a set of designs, or two or more websites, and then ask which design or site they prefer. While customer preferences often don’t match customer performance, it’s good practice to collect both
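For illustration, here's a minimal sketch (hypothetical counts) of summarizing a two-option preference question as a proportion with an adjusted-Wald confidence interval, one common way to express the uncertainty around a preference split:

```python
# Minimal sketch (hypothetical data): summarizing a two-option preference
# question as a proportion with an adjusted-Wald confidence interval.
import math

prefer_a, n = 34, 50   # hypothetical: 34 of 50 participants preferred Design A
z = 1.96               # ~95% confidence

# Adjusted-Wald: add z^2/2 successes and z^2 trials before computing the interval
adj_n = n + z**2
adj_p = (prefer_a + z**2 / 2) / adj_n
margin = z * math.sqrt(adj_p * (1 - adj_p) / adj_n)

low, high = max(0.0, adj_p - margin), min(1.0, adj_p + margin)
print(f"Preference for Design A: {prefer_a / n:.0%} "
      f"(95% CI roughly {low:.0%} to {high:.0%})")
```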

Read More