We all are bombarded with surveys asking us to provide feedback on everything from our experience on a website to our time at the grocery store.
Many of us also create surveys.
They’re an indispensable method for collecting data quickly. Done well, they can be one of the most cost-effective ways to:
- Understand the demographics of your customers
- Assess brand attitudes
- Benchmark perceptions of the user experience
- Measure customer satisfaction
In addition to being a method for collecting data, surveys can be viewed as a type of interface (the content and its delivery). Because an interface has a person on the receiving end of the questions, it makes sense to apply some HCI/usability principles to survey design.
One key usability principle is to minimize the burden on the respondent (user). Some things that make surveys a burden are:
- Excessive length (too many questions)
- Too many open-ended questions
- Questions that are difficult to understand
- Required responses
Generally speaking, when respondents find a survey too much of a burden, you’ll pay for it in low response rates.
Cons of Required Responses
The obstacles you’ll likely encounter with required responses are:
- Frustration and abandonment: The conventional wisdom is that respondents become frustrated with required questions and abandon surveys (e.g., this article and this one). The best-crafted survey isn’t much good if no one takes it. It might even be worse if people commit to taking a survey and abandon it out of frustration; you may have incomplete data AND an irritated sample of customers!
- Non-response bias: If people abandon because of required responses, this may contribute to non-response bias, as the people who abandon may be different from those who don’t.
- Response bias: Participants may respond to required questions, but they might be lying or randomly picking an answer to get through the survey. There are ways to detect unusual responses, but it’s not foolproof.
- Privacy policies: For certain populations, policies may prevent you from requiring responses. Most institutional review boards (IRBs), for example, have a policy that allows people to opt out of answering any question, even ones marked as required.
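One common heuristic for detecting the kind of “get through the survey” responding described above is flagging straightlining, where a respondent picks the identical option for every item in a matrix question. This is a minimal sketch with made-up respondent IDs and ratings; it is one illustrative check, not a complete quality-screening procedure.

```python
def is_straightliner(ratings):
    """Return True if every rating in a matrix block is identical."""
    return len(set(ratings)) == 1

# Hypothetical 1-5 ratings across a five-item matrix question.
responses = {
    "r1": [4, 2, 5, 3, 4],   # varied answers
    "r2": [3, 3, 3, 3, 3],   # identical answers to every item -> suspect
    "r3": [5, 5, 4, 5, 5],
}

flagged = [rid for rid, ratings in responses.items() if is_straightliner(ratings)]
# flagged == ["r2"]
```

Flagged respondents aren’t necessarily lying (someone may genuinely rate everything a 3), which is why this kind of check is suggestive rather than foolproof.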
Pros of Required Responses
Despite their drawbacks, there are good reasons to require responses.
- Branching and logic: One of the best ways to keep a survey short is to show participants only the questions relevant to them, rather than forcing respondents to answer N/A or skip the questions. If a question has logic (skipping several inapplicable questions) or branching, responses to it need to be required.
- Avoiding listwise deletion: Many of the statistical methods used to analyze results (such as multiple regression, factor analysis, and cluster analysis) can’t handle missing values well. If any response is missing from a respondent, all responses from that respondent are removed from the analysis (called listwise deletion). There are methods for filling in missing values (called imputation), but they’re not a substitute for real answers, especially if there is systematic bias in who skips questions. Having someone volunteer their time to answer most of your survey questions, only to have their data thrown out, may be worse than requiring responses.
- Reducing overlooked questions: Not all missing responses are intentional. People make mistakes and overlook questions (especially in matrix questions). Required responses allow respondents to notice the mistake and provide an answer. Some survey systems allow you to request a response without requiring one, but on most platforms the only way to remind respondents is to require a response.
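The contrast between listwise deletion and imputation described above can be sketched in a few lines. This is a minimal illustration with made-up data, using None to mark a skipped question and simple mean imputation as the fill-in strategy (real analyses often use more sophisticated imputation methods).

```python
# Hypothetical responses to two survey questions; "b" skipped q2.
rows = [
    {"id": "a", "q1": 5, "q2": 4},
    {"id": "b", "q1": 3, "q2": None},
    {"id": "c", "q1": 4, "q2": 2},
]

# Listwise deletion: any respondent with a missing value is dropped
# entirely, so "b" contributes nothing to the analysis.
complete = [r for r in rows if None not in r.values()]

# Mean imputation: fill the gap with the average of the observed q2
# values instead of discarding the whole respondent.
observed = [r["q2"] for r in rows if r["q2"] is not None]
mean_q2 = sum(observed) / len(observed)
imputed = [
    {**r, "q2": r["q2"] if r["q2"] is not None else mean_q2}
    for r in rows
]
# complete keeps 2 of 3 respondents; imputed keeps all 3.
```

Note how listwise deletion throws away respondent "b"’s valid q1 answer along with the missing q2, which is exactly the waste of volunteered effort the bullet above warns about.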
Compensation & Compelled Respondents
The above pros and cons of requiring responses apply to surveys where people are asked to volunteer their time, which is often the case with customer surveys. But in many cases, participants are paid for their time through national panel providers or Mechanical Turk. Under these circumstances, completing the survey is essentially a job: the more questions you ask, the more you pay for responses, and for paid respondents, not answering questions is like not doing the job. Even when you pay respondents, though, you still have to consider the effects of people lying or providing erroneous answers.
When participants are not part of a panel but are still compensated, or entered into a sweepstakes, you likely have more license to compel responses, but probably not as much as with paid panel participants.
In some situations, people are compelled to respond (by law or company policy). A company may require employees to complete an employee-satisfaction survey. And in the US, people are required by law to answer all the questions on the census.
How Much Do Required Responses Affect Response Rates?
Much of the research on surveys comes from decades of paper-based surveys. One of the major differences between paper-based and electronic surveys is that with electronic surveys you can require your respondents to provide an answer before they can proceed. As such, there’s not as much in the literature on the pros and cons of mandatory responses.
One argument I’ve read for avoiding required responses is that they lead to dropout: if respondents hit too many required questions, they abandon out of frustration. What’s unclear, though, is whether these same participants would have abandoned anyway simply from being presented with too many questions.
One study I found on the topic showed that mandatory items actually increased the response rate.
In another analysis I conducted of several factors affecting survey completion, the best predictor of abandonment was the length of the survey; the number of required fields was not a significant predictor. This was a small study, though, so more data are needed.
Recommendation & Summary
Requiring responses increases the burden on the respondent, which in turn may lead to increased abandonment. The actual effect on survey completion rates is unclear and in some cases required responses may actually increase the response rate.
When compensating participants, especially with paid panel services, you likely have more license to require responses to most, if not all, questions. Required questions also have the added benefit of reducing survey length by allowing for branching and logic, as well as alerting respondents to questions they may have overlooked.
To improve response rates, reducing the length of the survey (the number of questions) will likely have a bigger effect than reducing the number of required responses. More research is needed to disentangle the effects of survey length and required responses across a variety of survey types.
If there is one clear conclusion about required responses, it’s that the advice to never use mandatory responses is overstated. Much like the notorious three-clicks-to-content rule, there are so many exceptions that it shouldn’t be added to the survey playbook.