Survey

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. And among those debates, perhaps the most popular question is the “right” number of response options to use for rating scales. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows

Question wording in a survey can impact responses. That shouldn’t be much of a surprise. Ask a different question and you’ll get a different answer. But just how different the response ends up being depends on how a question has changed. Subtle differences can have big impacts; alternatively, large differences can have little impact. It’s hard to predict the type and size of impact on

Survey response options come in all sorts of shapes, sizes, and now, colors. The number of points, the addition of labels, the use of numbers, and the use of positive or negative tone are all factors that can be manipulated. These changes can also affect responses, sometimes modestly, sometimes a lot. There is some concern that long response scales (more than three points) are hard

You’ve probably taken a survey or two in your life, maybe even this week. Which means you’ve probably answered a few types of survey questions, including rating scale questions. Earlier I outlined 15 common rating scale questions with the linear numeric scale being one of the most used. Examples of linear numeric scales include the Single Ease Question (SEQ) and the Likelihood to Recommend item (LTR)

Should you label all points on a scale? Should you include a neutral point? What about labeling neutral points? How does that affect how people respond? These are common questions when using rating scales and they’ve also been asked about the Net Promoter Score: What are the effects of having a neutral label on the 11-point Likelihood to Recommend (LTR) item used to compute the

Have you taken a survey for a company without an incentive? I mean surveys where you have no clear chance of winning a prize, getting a discount, or receiving any clear compensation for your time? If you did, what motivated you to take it? Were you just curious, maybe killing time? Did you have a more positive or negative attitude toward the product or company

When done well, surveys are an excellent method for collecting data quickly from a geographically diverse population of users, customers, or prospects. In an earlier article, I described 15 types of the most common rating scale items and when you might use them. While rating scales are an important part of a survey, they aren’t the only part. Another key ingredient to a successful survey

We conduct a lot of quantitative online research, both surveys and unmoderated UX studies. Much of the data we collect in these studies is from closed-ended questions or task-based questions with behavioral data (time, completion, and clicks). But just about any study we conduct also includes some open-ended response questions. Our research team then needs to read and interpret the free-form responses. In some cases,
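One common first pass at free-form responses like these is keyword-based thematic coding. The sketch below is a hypothetical illustration (the responses, theme names, and keyword lists are invented for the example, not drawn from any actual study):

```python
from collections import Counter

# Hypothetical open-ended survey responses (illustrative only).
responses = [
    "The checkout process was confusing and slow.",
    "Loved the design, very clean layout.",
    "Search results were slow to load.",
    "Couldn't find the pricing page.",
]

# Assumed coding scheme: map each theme to indicative keywords.
codes = {
    "speed": ["slow", "load"],
    "navigation": ["find", "search", "confusing"],
    "visual design": ["design", "layout"],
}

def code_response(text):
    """Return the set of theme codes whose keywords appear in the text."""
    lower = text.lower()
    return {theme for theme, words in codes.items()
            if any(word in lower for word in words)}

# Tally how often each theme appears across all responses.
tally = Counter(theme for r in responses for theme in code_response(r))
```

In practice a researcher would still read the responses; a keyword tally like this only helps prioritize which themes to examine first.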

Many market and UX research studies are taken by paid participants, usually obtained from online panels. Our research has shown that using online panels for UX research provides, for the most part, reliable and valid results. While these huge sources of participants help fill large-sample studies quickly, there's a major drawback: poor-quality respondents. Reliable and valid responses only come when your data
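Two widely used screens for poor-quality panel responses are "speeding" (finishing implausibly fast) and "straight-lining" (giving the same answer to every rating item). The sketch below is a minimal illustration of those two checks; the field names and thresholds are assumptions for the example, not any panel's actual criteria:

```python
# Hypothetical quality screen for panel respondents (illustrative only).

def is_suspect(record, median_seconds):
    """Flag a respondent who 'speeds' (finishes in under a third of the
    median completion time) or 'straight-lines' (identical answers on
    every rating item)."""
    speeding = record["seconds"] < median_seconds / 3
    ratings = record["ratings"]
    straight_lining = len(ratings) > 1 and len(set(ratings)) == 1
    return speeding or straight_lining

respondents = [
    {"seconds": 300, "ratings": [4, 5, 3, 4]},  # plausible
    {"seconds": 60,  "ratings": [4, 2, 5, 3]},  # speeder (median is 300)
    {"seconds": 280, "ratings": [5, 5, 5, 5]},  # straight-liner
]
flags = [is_suspect(r, median_seconds=300) for r in respondents]
```

Flagged respondents are usually reviewed rather than dropped automatically, since some legitimate respondents answer quickly or genuinely hold uniform attitudes.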

The results of the 2016 UXPA salary survey are in. This is the fifth UXPA survey we've crunched the numbers for, and it showed patterns similar to 2014. The Results: The data were collected from September to December 2016 using a non-probability sample. Initial respondents were recruited through postings on professional networks and websites, such as UXPA and LinkedIn. Additional respondents were recruited using snowball sampling. Please
