Survey

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. And among those debates, perhaps the most popular question concerns the “right” number of response options to use. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows

Read More

Question wording in a survey can impact responses. That shouldn’t be much of a surprise. Ask a different question and you’ll get a different answer. But just how different the response ends up being depends on how the question has changed. Subtle differences can have big impacts; conversely, large differences can have little impact. It’s hard to predict the type and size of impact on

Read More

Survey response options come in all sorts of shapes, sizes, and now, colors. The number of points, the addition of labels, the use of numbers, and the use of positive or negative tone are all factors that can be manipulated. These changes can also affect responses, sometimes modestly, sometimes a lot. There is some concern that long response scales (more than three points) are hard

Read More

You’ve probably taken a survey or two in your life, maybe even this week. That means you’ve probably answered a few types of survey questions, including rating scale questions. Earlier, I outlined 15 common rating scale questions, with the linear numeric scale being one of the most used. Examples of linear numeric scales include the Single Ease Question (SEQ) and the Likelihood to Recommend item (LTR)

Read More

Should you label all points on a scale? Should you include a neutral point? What about labeling neutral points? How does that affect how people respond? These are common questions when using rating scales and they’ve also been asked about the Net Promoter Score: What are the effects of having a neutral label on the 11-point Likelihood to Recommend (LTR) item used to compute the
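For context, the arithmetic the LTR item feeds into is standard: respondents rating 9–10 are promoters, 7–8 are passives, and 0–6 are detractors, and the Net Promoter Score is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the ratings below are made up for illustration):

    # Standard NPS computation from 0-10 Likelihood-to-Recommend (LTR) ratings.
    # The sample ratings are hypothetical, for illustration only.
    def nps(ratings):
        promoters = sum(1 for r in ratings if r >= 9)   # 9-10
        detractors = sum(1 for r in ratings if r <= 6)  # 0-6 (7-8 are passives)
        return 100 * (promoters - detractors) / len(ratings)

    ltr = [10, 9, 8, 7, 6, 10, 9, 3, 8, 9]  # hypothetical responses
    print(round(nps(ltr)))  # 5 promoters, 2 detractors out of 10 -> 30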

Read More

Have you taken a survey for a company without an incentive? I mean a survey with no clear chance of winning a prize, getting a discount, or receiving any compensation for your time. If you did, what motivated you to take it? Were you just curious, maybe killing time? Did you have a more positive or negative attitude toward the product or company

Read More

When done well, surveys are an excellent method for collecting data quickly from a geographically diverse population of users, customers, or prospects. In an earlier article, I described 15 of the most common types of rating scale items and when you might use them. While rating scales are an important part of a survey, they aren’t the only part. Another key ingredient to a successful survey

Read More

We conduct a lot of quantitative online research, both surveys and unmoderated UX studies. Much of the data we collect in these studies is from closed-ended questions or task-based questions with behavioral data (time, completion, and clicks). But just about any study we conduct also includes some open-ended response questions. Our research team then needs to read and interpret the free-form responses. In some cases,
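As one illustration of what a first pass at that interpretation step can look like (a hypothetical sketch, not the team’s actual process), simple keyword rules can give free-form responses a rough initial coding before human review; the categories and keywords here are invented for the example:

    # Hypothetical first-pass coding of open-ended survey responses using
    # keyword rules; real analysis requires human judgment and usually
    # more sophisticated text analytics.
    CATEGORIES = {
        "navigation": ["find", "menu", "navigate", "search"],
        "speed": ["slow", "fast", "load", "lag"],
    }

    def first_pass_codes(response):
        text = response.lower()
        return [cat for cat, words in CATEGORIES.items()
                if any(w in text for w in words)] or ["uncoded"]

    print(first_pass_codes("The menu made it hard to find pricing"))  # ['navigation']
    print(first_pass_codes("Great product overall"))                  # ['uncoded']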

Read More

Many market and UX research studies are taken by paid participants, usually obtained from online panels. Our research has shown that, for the most part, using online panels for UX research provides reliable and valid results. While these huge sources of participants help fill large-sample studies quickly, there’s a major drawback: poor-quality respondents. Reliable and valid responses only come when your data
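One common screening approach (not necessarily the procedure this article describes) is to flag “speeders,” who finish implausibly fast, and “straight-liners,” who give the same answer to every rating item. A minimal Python sketch, with hypothetical field names and thresholds:

    # Hypothetical quality checks for panel survey data. The threshold,
    # field names, and sample record are illustrative, not from the article.
    def flag_low_quality(respondent, min_seconds=120):
        flags = []
        if respondent["duration_sec"] < min_seconds:   # speeding check
            flags.append("speeder")
        if len(set(respondent["ratings"])) == 1:       # straight-lining check
            flags.append("straight-liner")
        return flags

    sample = {"id": "r042", "duration_sec": 75, "ratings": [4, 4, 4, 4, 4]}
    print(sample["id"], flag_low_quality(sample))  # r042 ['speeder', 'straight-liner']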

Read More

The results of the 2016 UXPA salary survey are in. This is the fifth UXPA survey we've crunched the numbers for, and it showed patterns similar to the 2014 survey.

The Results

The data was collected from September through December 2016 using a non-probability sample. Initial respondents were recruited through postings on professional networks and websites, such as UXPA and LinkedIn. Additional respondents were recruited using snowball sampling. Please

Read More