Survey

Happy New Year from all of us at MeasuringU! 2020 was a crazy year, but we still managed to post 48 new articles and continued improving MUIQ, our UX testing platform. We hosted our seventh UX Measurement Bootcamp, this time virtually. The change of format was a challenge, but it was fantastic to work with attendees from all over the world. The topics we wrote …

Read More

During the fall in the northern hemisphere, leaves change colors, birds fly south, and the temperature gets colder. Do the birds change the color of the leaves, and does their departure make the temperature colder? What if you gave participants two versions of a rating scale, with the first having responses ordered from strongly disagree to strongly agree and the second reversing the order, and …

Read More
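If you run a reversed-order experiment like the one described, responses from the reversed version have to be recoded onto a common scale before the two presentations can be compared. Here's a minimal sketch in Python, assuming 5-point agreement items; the function and variable names are illustrative, not from the article.

```python
def reverse_code(response: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Map a response collected on a reversed scale back to the standard order.

    On a 1-5 agreement scale presented in reverse, a recorded 5
    (strongly disagree shown first) becomes a 1, a 4 becomes a 2, etc.
    """
    return scale_max + scale_min - response

# Recode a batch of responses collected on the reversed version of the scale.
reversed_responses = [5, 4, 2, 1, 3]
print([reverse_code(r) for r in reversed_responses])  # [1, 2, 4, 5, 3]
```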

There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. And among those debates, perhaps the most popular question concerns the “right” number of response options to use. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes? Most research on this topic shows …

Read More

Question wording in a survey can impact responses. That shouldn’t be much of a surprise. Ask a different question and you’ll get a different answer. But just how different the response ends up being depends on how a question has changed. Subtle differences can have big impacts; alternatively, large differences can have little impact. It’s hard to predict the type and size of impact on …

Read More

Survey response options come in all sorts of shapes, sizes, and now, colors. The number of points, the addition of labels, the use of numbers, and the use of positive or negative tone are all factors that can be manipulated. These changes can also affect responses, sometimes modestly, sometimes a lot. There is some concern that long response scales (more than three points) are hard …

Read More

You’ve probably taken a survey or two in your life, maybe even this week. Which means you’ve probably answered a few types of survey questions, including rating scale questions. Earlier I outlined 15 common rating scale questions, with the linear numeric scale being one of the most used. Examples of linear numeric scales include the Single Ease Question (SEQ) and the Likelihood to Recommend item (LTR) …

Read More

Should you label all points on a scale? Should you include a neutral point? What about labeling neutral points? How does that affect how people respond? These are common questions when using rating scales, and they’ve also been asked about the Net Promoter Score: What are the effects of having a neutral label on the 11-point Likelihood to Recommend (LTR) item used to compute the …

Read More
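For context, the LTR item asks likelihood to recommend on an 11-point (0 to 10) scale, and the Net Promoter Score is the percentage of promoters (responses of 9-10) minus the percentage of detractors (0-6). A quick sketch of that computation in Python (the function and variable names are illustrative):

```python
def net_promoter_score(ltr_responses):
    """Compute NPS from 0-10 Likelihood-to-Recommend responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    only toward the total.
    """
    promoters = sum(1 for r in ltr_responses if r >= 9)
    detractors = sum(1 for r in ltr_responses if r <= 6)
    return 100 * (promoters - detractors) / len(ltr_responses)

# Three promoters and two detractors out of seven responses.
print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # ≈ 14.3
```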

Have you taken a survey for a company without an incentive? I mean a survey where you had no clear chance of winning a prize, getting a discount, or receiving any compensation for your time. If you did, what motivated you to take it? Were you just curious, maybe killing time? Did you have a more positive or negative attitude toward the product or company …

Read More

When done well, surveys are an excellent method for collecting data quickly from a geographically diverse population of users, customers, or prospects. In an earlier article, I described 15 types of the most common rating scale items and when you might use them. While rating scales are an important part of a survey, they aren’t the only part. Another key ingredient to a successful survey …

Read More

We conduct a lot of quantitative online research, both surveys and unmoderated UX studies. Much of the data we collect in these studies is from closed-ended questions or task-based questions with behavioral data (time, completion, and clicks). But just about any study we conduct also includes some open-ended response questions. Our research team then needs to read and interpret the free-form responses. In some cases, …

Read More
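A common first pass at interpreting free-form responses is simple keyword-based categorization before a human review. Below is a rough sketch with hypothetical categories and keywords; the article doesn't describe MeasuringU's actual coding process, so treat this as one possible starting point, not their method.

```python
# Hypothetical categories and trigger keywords for a first-pass tagging
# of open-ended responses; a researcher would still review each response.
CATEGORIES = {
    "navigation": ["menu", "find", "search", "navigate"],
    "performance": ["slow", "load", "lag", "crash"],
    "pricing": ["price", "cost", "expensive", "cheap"],
}

def tag_response(text: str) -> list[str]:
    """Return every category whose keywords appear in the response text."""
    lowered = text.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(w in lowered for w in words)]

print(tag_response("The site was slow and I couldn't find the menu."))
# ['navigation', 'performance']
```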