On top of worrying about what to ask and whom to ask, you have to worry about how to ask the questions so you don’t distort the real views of the respondents.
Here are eight things to help make the process a little smoother:
- Use all positive wording: People respond differently to positively worded and negatively worded items, which explains the tradition of including a mix of both in questionnaires. Mixing item wording is intended to reduce acquiescent and extreme response biases. However, there is good evidence that mixing item wording increases response errors, and even coding errors from researchers who forget to reverse the responses.
- Don’t word items in the extreme: People tend to agree with items that are close to their opinion and disagree with all others. Items worded in either extremely positive or extremely negative terms tend to generate high disagreement, so try to strike a balanced tone.
- Include a neutral response option: There’s a concern that a neutral option will attract too many respondents who don’t feel strongly and thus dilute the results. But for most customer experience issues (usability, loyalty, and satisfaction) it is legitimate to have a neutral attitude. Removing the neutral option forces users to agree or disagree and likely just increases the error.
- Respondents prefer the left side of the rating scale: There is a slight bias toward response options that appear on the left side of a scale. Placing the more favorable option on the left (e.g., strongly agree) will inflate scores slightly due to the proximity of the response to the item.
- Include between 5 and 7 points for response options: For multiple-item questionnaires, between 5 and 7 response options provide enough points of discrimination. For single items (like the likelihood-to-recommend question) there is a benefit to having more response options, but the returns diminish after 11, so between 7 and 11 options will suffice.
- Be consistent: Regardless of the number of response options, item wording, neutral response, or scale orientation, if you consistently use the same scales and items over time you can build a database of relevant comparables, which will help in interpreting the responses.
- Compare: Average responses, top-box scores, and the percent of respondents who agree with an item can be difficult to interpret from a single survey. Comparison to earlier surveys on similar products and users, or to an industry benchmark, provides the context that makes interpretation much easier.
- Effects are minor: Fortunately, the effects of unusable, unreliable, and undesirable products usually outweigh the fluctuations in scores you get from different wording and response formats, so keep it all in perspective. Worry more about what you can do to improve the user experience than about how you measure it.
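Two of the mechanics above (reverse-scoring negatively worded items and computing a top-box score for comparison) can be sketched in a few lines. This is a minimal illustration, assuming a 5-point agree/disagree scale; the function names and the sample responses are hypothetical, not from any real survey.

```python
def reverse_score(response, scale_min=1, scale_max=5):
    """Flip a negatively worded item so a high score is always favorable."""
    return scale_max + scale_min - response

def top_box_percent(responses, scale_max=5):
    """Percent of respondents choosing the most favorable option."""
    return 100.0 * sum(r == scale_max for r in responses) / len(responses)

# Hypothetical raw responses to a negatively worded item
# ("The site was confusing"; 5 = strongly agree, so low is favorable).
raw = [2, 1, 5, 2, 1, 1, 2, 4]

# Forgetting this reversal step is exactly the coding error mixed
# wording invites.
scored = [reverse_score(r) for r in raw]

print(sum(scored) / len(scored))  # mean of reversed responses → 3.75
print(top_box_percent(scored))    # percent giving the top rating → 37.5
```

On their own, the 3.75 mean and 37.5% top-box score are hard to judge; they only become meaningful next to the same metrics from an earlier survey or an industry benchmark, which is the point of the consistency and comparison tips above.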