The effectiveness of surveys starts with the quality of the questions.
Question writing is an art and a science. You need to balance your needs and the needs of the organization commissioning the survey with the burden on the respondents.
1. Keep questions short, but not too short.
Succinct writing and good copy editing make questions short and clear. Get to the point quickly and use simple rather than compound sentences.
Watch for filler phrases (in order to, for the purposes of). Don’t sacrifice clarity for the sake of brevity, though (avoid arbitrarily stopping at, say, 25 words). Say what you need to and no more. See Strunk and White for ideas on cutting your words while keeping your content.
2. Ask only one question at a time.
Overall, how satisfied are you with the cost and quality of the product?
These so-called double-barreled questions are problematic because you don’t know which aspect the respondent is reacting to: the cost, the quality, or both? Split these into two items.
3. Minimize demanding recall.
When did you purchase your first mobile app?
Knowing when and why things happen can be very helpful in uncovering problems or generating design ideas. But people don’t have photographic memories and have trouble recalling even recent events.
4. Avoid leading questions.
Don’t you agree that employers should be required to give paid time off?
Attorneys aren’t allowed to lead witnesses in the courtroom, and neither should you when crafting questions. Use neutral wording, such as: Employers should be required to give paid time off. Then have the respondent select a level of agreement using a Likert scale.
5. Avoid jargon and acronyms.
Use the same language your respondents use and spell out acronyms, even if you think they know what a TLA is (Three Letter Acronym). Have someone unfamiliar with your field review your questions to identify unfamiliar jargon.
6. Make lists exhaustive.
Which one of the following operating systems does your mobile phone use?
• iOS (Apple)
Be sure the options you provide actually contain all viable options. While the bulk of the smartphone market is dominated by Apple and Android-based smartphones, there are also Windows- and Amazon-based phones. A catch-all “Other” option ensures you aren’t forcing respondents to pick an answer that doesn’t apply. It also helps get around the assumption that the respondent owns a mobile phone in the first place (see #10).
7. Randomize the order of response options (and questions).
There is a bias for choices placed first and to the left. If options have a natural order (like age or income brackets), then it doesn’t make sense to randomize. Otherwise, wherever you can, randomize the order of the options to minimize this bias.
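For surveys programmed in code, per-respondent randomization can be sketched roughly as follows (a minimal illustration; the function name, parameters, and option lists are hypothetical, not from any particular survey platform):

```python
import random

def randomized_options(options, has_natural_order=False, seed=None):
    """Return a per-respondent ordering of response options.

    Options with a natural order (e.g., age or income brackets) are
    left as-is; otherwise the order is shuffled to reduce the bias
    toward options listed first.
    """
    if has_natural_order:
        return list(options)
    rng = random.Random(seed)  # seed per respondent for reproducibility
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Unordered brand list: shuffled per respondent
print(randomized_options(["iOS (Apple)", "Android", "Other"], seed=42))

# Ordered brackets: natural order preserved
print(randomized_options(["18-24", "25-34", "35-44"], has_natural_order=True))
```

Seeding with a per-respondent identifier keeps the order stable if the same person reloads the page, while still varying it across respondents.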
8. Watch for non-mutual exclusivity.
Please select your age:
• 20-30
• 30-40
• 40-50
• 50+
When creating response choices in which respondents are asked to pick only one, be sure each answer is represented by only one response (mutually exclusive). For example, with age brackets like those shown above, it’s easy to overlook overlapping categories.
A respondent who is 30 could pick either of the first two groups. Mutually exclusive age groups are 21-30, 31-40, 41-50, and 51+; the 30-year-old then has only one choice.
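Checking brackets by hand is error-prone, so the overlap-and-gap check can be sketched in a few lines (a hypothetical helper, assuming brackets are inclusive integer ranges):

```python
def check_brackets(brackets):
    """Flag overlapping or gapped numeric brackets given as (low, high) tuples.

    Brackets are inclusive on both ends, so consecutive brackets must
    differ by exactly 1 (e.g., 21-30 followed by 31-40).
    """
    problems = []
    ordered = sorted(brackets)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 <= hi1:
            problems.append(f"overlap: {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            problems.append(f"gap: {lo1}-{hi1} and {lo2}-{hi2}")
    return problems

print(check_brackets([(20, 30), (30, 40), (40, 50)]))  # flags two overlaps
print(check_brackets([(21, 30), (31, 40), (41, 50)]))  # prints [] (clean)
```

The same check catches the opposite mistake, gaps (e.g., 21-30 followed by 32-40), which leave some respondents with no valid choice at all.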
9. Use all positive wording for rating scales.
This website is easy to use.
It’s hard to navigate within this website.
Mixing item wording is intended to reduce acquiescent and extreme response biases. However, there is good evidence that mixing the item wording increases response errors, distorts the factor structure when running a factor analysis, and even causes coding errors from researchers who forget to reverse the responses.
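If you do inherit data from a survey with mixed wording, the negatively worded items must be reverse-scored before analysis. A minimal sketch of that step (the item names are hypothetical; assumes a 1-5 Likert scale):

```python
def reverse_score(response, scale_min=1, scale_max=5):
    """Reverse a Likert response: on a 1-5 scale, 1 becomes 5, 2 becomes 4, etc."""
    return scale_max + scale_min - response

# "Hard to navigate" is negatively worded, so its responses get reversed
responses = {"easy_to_use": 4, "hard_to_navigate": 2}
responses["hard_to_navigate"] = reverse_score(responses["hard_to_navigate"])
print(responses)  # hard_to_navigate now scored in the same direction
```

Forgetting this step (or applying it twice) is exactly the kind of coding error mixed wording invites, which is one more reason to word all items positively in the first place.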
10. Be sure to give respondents an applicable answer.
If you ask people their opinion, they’ll give you one, even if it’s uninformed, they don’t have a strong feeling, or, worse, they have no idea what you’re asking about. If you’re asking about people’s commute to work, be sure they actually commute first. Proper logic and screening can often avoid this issue; when they can’t, provide a Not Applicable option.
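Screening logic in a programmed survey can be sketched as follows (the field names and question text are hypothetical; this only illustrates routing respondents past questions that don’t apply):

```python
def next_question(answers):
    """Route respondents: only those who commute see the commute item.

    `answers` maps field names to prior responses; an absent screener
    answer means the screener itself must be asked first.
    """
    if answers.get("commutes_to_work") is None:
        return "Do you commute to work?"  # ask the screener first
    if answers["commutes_to_work"]:
        return "Overall, how satisfied are you with your commute?"
    return None  # non-commuters skip the item entirely

print(next_question({}))                           # screener shown
print(next_question({"commutes_to_work": True}))   # commute item shown
print(next_question({"commutes_to_work": False}))  # item skipped
```

When such routing isn’t available, the fallback is the Not Applicable option described above.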
11. Differentiate between Don’t Know vs. Not Applicable.
Not knowing and not being applicable differ. As much as possible, target your respondents so they are asked only questions they can answer. For example, a respondent unfamiliar with a brand should have a Not Applicable option when asked to rate their favorability toward it. But when a respondent is asked whether they will purchase your product in the next month, Don’t Know is likely more accurate than Not Applicable.
12. Use a mix of open-ended and closed-ended questions.
Open-ended response comments allow you to get what’s at the top of mind for respondents without priming them with suggestions. The downside is that to get a more systematic analysis of open-ended comments, you will have to deal with coding many responses (which is often a worthwhile investment to gain a deeper understanding of your respondents).