Despite the ease with which you can create surveys using software like our MUIQ platform, selecting specific questions and response options can be a bit more involved.
Most surveys contain a mix of closed-ended (often rating scales) and open-ended questions.
We’ve previously discussed 15 types of common rating scales and have published numerous articles in which we investigated their measurement properties. Now we turn our attention to different ways to use open-ended questions.
Closed-ended rating scales are for measuring; open-ended questions are for discovery. But the concept of discovery can be a bit vague.
In this article, we drill into the high-level concept of discovery and discuss five reasons to use open-ended questions.
1. Conduct Exploratory Research
A common reason to use open-ended questions is to learn more about a phenomenon or topic. Sometimes the responses to the open-ended questions tell you all you need to know. Other times, they can be a first step toward developing closed-ended questions, using information from qualitative research to guide a round of more quantitative research.
For example, when we conduct unmoderated usability studies, we often present a list of potential problems to participants after they complete each task. If possible, we base the list on errors we collected and identified in prior studies. However, when we’re compiling an error list for the first time, we conduct pilot studies with open-ended questions to identify likely tasks and usability problems.
For an example of an open-ended question, see Figure 1; for an example of a problem list, see Figure 2. As shown in Figure 2, we routinely include an open-ended option at the end of the list so participants can report any problem we missed in previous studies.
2. Reduce Bias or Other Unintended Influence
A major goal for UX researchers is to conduct research that collects unbiased information from participants. Much of our research on rating scales has investigated the extent to which differences in item formats affect the measurement of opinions and attitudes. Generally, we find that these differences have little effect on measurement; in other words, they don’t appear to bias the resulting data. (See Rating Scale Best Practices: 8 Topics Examined and Rating Scales: Myth vs. Evidence.)
On the other hand, we have not yet empirically investigated many variations of rating scales in the context of UX research. In some research contexts, the wording of rating scale instructions, items, or response options can unintentionally influence responses (as in anchoring). For example, does it matter whether the instructions for a grid of agreement items say “Please indicate the extent to which you agree with the following items” or “Please indicate the extent to which you agree or disagree with the following items”? Judging from our research, we suspect it doesn’t matter in practice, but we can’t say for sure.
As a check on potential bias, you can include one or more open-ended questions before presenting rating scales. Make sure the wording of the open-ended question is as neutral as possible. For example, the open-ended question in Figure 3 is clearly problematic; the question in Figure 4 is better.
3. Assess Unaided Recall/Awareness
A key goal in many branding studies is to assess awareness of brands that come to mind for specific categories. A common strategy, known as aided awareness, is to present a list of brands for each category. Even if you provide an open-ended option at the end of the list (similar to Figure 2), the list can influence which brands come to mind because all respondents need to do is recognize rather than recall the brands.
An alternative strategy is unaided awareness, which uses open-ended questions to require respondents to recall rather than just recognize brands. This method reveals which brands are foremost in respondents’ minds, reducing the importance of brands they have heard of but not experienced. Figures 5 and 6 show two examples.
4. Ask Follow-up Questions
An important use of open-ended questions is to get follow-up clarification of responses to closed-ended questions. We use this strategy frequently, even in benchmark studies that have dozens of closed-ended questions. Two common follow-up questions ask respondents to give reasons for their Net Promoter Score (NPS) and Single Ease Question (SEQ) ratings. (See Figures 7 and 8 for examples.)
5. Reveal Unanticipated Events or Unexpected Attitudes
Sometimes you need to provide an opportunity for respondents to tell you whatever is on their minds. In those times, open-ended questions are the only option, and those questions need to be completely nondirective.
In diary studies, for example, you might ask participants to summarize what happened that day and then narrow the focus with directive questions (Figure 9).
At the end of a survey, you should include a final question that broadens the focus so respondents can tell you anything they think the survey failed to ask, or report anything notable, negative or positive, that happened during the survey (Figure 10).
At a high level, the purpose of asking open-ended questions is discovery, but there are finer-grained reasons to ask them. The five discussed in this article are
- Conducting exploratory research
- Reducing bias or other unintended influence
- Assessing unaided recall/awareness
- Asking follow-up questions after closed-ended ratings
- Revealing unanticipated events or unexpected attitudes
Each of these reasons has an element of discovery, but they differ in their specific research contexts and in whether the questions are somewhat directive or completely nondirective.