Surveys

To have a reliable and valid response scale, people need to understand what they’re responding to. A poorly worded question, an unclear response item, or an inadequate response scale can create additional error in measurement. Worse yet, the error may be systematic rather than random, so it would result in unmodeled bias rather than just increased measurement variability. Rating scales have many forms, with variations
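The difference between random and systematic error can be made concrete with a small simulation. This is an illustrative sketch (the numbers and the 0.8-point shift are hypothetical, not from the article): random error averages out as responses accumulate, while a systematic shift from a confusing item biases the mean no matter how many people respond.

```python
import random

random.seed(1)

TRUE_SCORE = 5.0  # the attitude we are trying to measure, on a 1-7 scale
N = 10_000        # number of simulated respondents

# Random error: responses scatter around the true score but average out.
random_error = [TRUE_SCORE + random.gauss(0, 1) for _ in range(N)]

# Systematic error: a confusing item shifts every response the same way,
# so collecting more respondents does not remove the bias.
systematic_error = [TRUE_SCORE - 0.8 + random.gauss(0, 1) for _ in range(N)]

mean_random = sum(random_error) / N
mean_systematic = sum(systematic_error) / N

print(round(mean_random, 2))      # lands close to the true score
print(round(mean_systematic, 2))  # stays biased low
```

Both samples have the same spread; only the systematic case misses the true score, which is why systematic error is the more dangerous of the two.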


In an earlier article, we reviewed five competing models of delight. The models differed in their details, but most shared the general idea that delight is composed of an unexpected positive experience. Or, for the most part, delight is a pleasant surprise. However, there is disagreement on whether you actually need surprise to be delighted. And if you don’t need surprise, then delight is really


Small changes can have big impacts on rating scales. But to really know what the effects are, you need to test. Labels, number of points, colors, and item wording differences can often have unexpected effects on survey responses. In an earlier analysis, we compared the effects of using a three-point recommend item compared to an eleven-point recommend item. In that study, we showed how a


It seems like there are endless ways to ask questions of participants in surveys. Variety in question types can be both a blessing and a curse. Having many ways to ask questions provides better options to the researcher to assess the opinion of the respondent. But the wrong type of question can fail to capture what’s intended, confuse respondents, or even lead to incorrect decisions.


We are all bombarded with surveys asking us to provide feedback on everything from our experience on a website to our time at the grocery store. Many of us also create surveys. They're an indispensable method for collecting data quickly. Done well, they can be one of the most cost-effective ways to:

- Understand the demographics of your customers
- Assess brand attitudes
- Benchmark perceptions of


Surveys are a relatively quick and effective way to measure customers' attitudes and experiences along their journey. Not all customer experience surveys are created equal. Depending on the goals, the type of questions and length will vary. While customer experience surveys can take on any form, it can be helpful to think of them as falling into these seven categories. 1. Relationship/Branding: Customers' attitudes toward a


The best way to measure the user experience is by observing real users attempting realistic tasks. This is best done through a usability test or direct observation of users. You may not always be able to do so, though, because of limitations in time, costs, availability of products, or difficulty in finding qualified users. Many large companies have dozens or even thousands of products for both internal


Which design do you prefer? Which product would you pick? Which website do you find more usable? A cornerstone of customer research is asking preferences. A common scenario is to show participants a set of designs or two or more websites, and then ask customers which design or site they prefer. While customer preferences often don't match customer performance, it's good practice to collect both
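Preference data like these are often compared against a 50/50 chance baseline. As an illustrative sketch (the counts are hypothetical and this particular test is not prescribed by the article), an exact binomial test tells us whether a stated preference is larger than we'd expect by chance:

```python
from math import comb

# Hypothetical data: 30 of 40 participants say they prefer Design A.
n, successes = 40, 30

# Two-sided exact binomial test against p = 0.5: sum the probability of
# every outcome at least as extreme (no more likely) than the observed one.
p_observed = comb(n, successes) * 0.5 ** n
p_two_sided = sum(
    comb(n, k) * 0.5 ** n
    for k in range(n + 1)
    if comb(n, k) * 0.5 ** n <= p_observed
)

print(round(p_two_sided, 4))  # well below .05, so the preference is unlikely to be chance
```

Pairing a test like this with performance data (task completion, time) guards against the mismatch between stated preference and actual performance that the excerpt mentions.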


You've worked hard designing your survey; you need data to make better decisions for your product, and you need people to answer your survey! Unfortunately, in our quest to squeeze the most out of our precious participants, it gets difficult not to commit some survey sins. Inevitably, one or a few of these response-rate killers will creep into your next survey project. Knowing more about


An effective questionnaire is one that has been psychometrically validated. This primarily means the items are reliable (consistent) and valid (measuring what we intend to measure). So if we say a questionnaire measures perceptions of website usability, it should be able to differentiate between usable and unusable websites and do so consistently over time. With websites and software products that have an international reach, a
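One common way to check the "reliable (consistent)" half of that claim is an internal-consistency estimate such as Cronbach's alpha. The sketch below is illustrative only (the response data are made up, and the article doesn't prescribe this specific statistic):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of per-item response lists
    (one inner list per questionnaire item, same respondents in order)."""
    k = len(items)              # number of items
    n = len(items[0])           # number of respondents

    def var(xs):                # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical data: three 5-point items rated by five respondents.
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # high alpha: the items move together consistently
```

High alpha alone isn't enough, of course; as the excerpt notes, a valid questionnaire must also discriminate (here, between usable and unusable websites).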
