How Hard Are Your Surveys to Take?

Jeff Sauro, PhD

Surveys are ubiquitous.

We’re all inundated with requests to answer questions about what we think, what we do and what we might do.

You’ve almost surely been asked to fill out surveys about:

  • a hotel stay
  • a car repair
  • a restaurant meal
  • a flight
  • a tech-support call

Surveys are everywhere because they are such a cost-effective and quick way to collect immediate feedback from customers. We’ve written about them extensively and, in Customer Analytics for Dummies, I discuss their use during all phases of the customer journey.

No doubt you’ve suffered through a painful survey. When participation is voluntary, that kind of friction leads to lower response rates.

You can improve your survey, and consequently the response rate, by running a usability test on it: have a handful of participants take it and think aloud as they answer (or struggle to answer) your questions. While this provides valuable feedback, we wanted to know if we could quantify the difficulty of surveys before the results are in and even predict the response rate using a short questionnaire.

To find out, I partnered with my friends at QuestionPro and TryMyUI to develop just such an instrument: the Survey Respondent Score (SRS). We wanted a short survey to gauge the difficulty of, well, a survey. In a March 17th webinar, I presented the results.

To develop the SRS, we followed the general process of psychometric validation for creating any standardized instrument.

  1. Review the literature and create a list of items.
  2. Test the items with a sample of participants.
  3. Winnow the items to the best fitting and most reliable.
  4. Assess validity.

Creating the items

We started with a list of 13 candidate items that we felt addressed the usability, fatigue, answerability and clarity of the questions. Those items are:

  1. This survey was easy to take.
  2. In terms of flow, this survey flowed well.
  3. Compared to other surveys I have taken, this survey was average.
  4. This survey was quick-paced.
  5. This survey took average time to complete.
  6. If I were asked, I would be likely to take this survey again.
  7. Throughout the survey, I felt engaged.
  8. All of the questions could be answered accurately.
  9. With the answers provided, I was able to answer all of the questions.
  10. Did you have enough information to answer the questions?
  11. The questions were clear.
  12. I could understand the questions easily.
  13. To understand the questions, I had to work hard.

Testing the Items

Next, we randomly assigned roughly 500 participants to one of five surveys. For each survey, we manipulated a number of elements that we believed affected survey difficulty:

  • number of open-ended questions
  • number of rows in tables
  • number of page breaks
  • presence or absence of a progress bar
  • presence or absence of an introduction and/or instructions
  • number of required fields
  • number of questions
  • number of response options

For example, one survey, intended to be easy, had 5 questions and a progress bar, while another, intended to be harder, had 45 questions and no progress bar.

Winnowing

To determine how well the items were working, I used a mix of Classical Test Theory (CTT) and Item Response Theory (IRT) techniques.

I removed three items with low item-total correlations, leaving 10 items. Next, I examined the factor structure and found a unidimensional structure (using a parallel analysis and an examination of the scree plot). That is, the remaining 10 items all tended to measure one underlying construct of survey difficulty.
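If you want to try this winnowing step on your own data, here is a minimal sketch of the corrected item-total correlation: each item is correlated with the sum of the remaining items, and items with low values become removal candidates. The column names and simulated ratings below are hypothetical, not the study's data.

```python
# A minimal sketch of the item-winnowing step, assuming responses live in a
# pandas DataFrame with one column per candidate item (hypothetical names/data).
import numpy as np
import pandas as pd

def corrected_item_total_correlations(responses: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items.

    Items with low corrected item-total correlations are candidates for removal.
    """
    corrs = {}
    for col in responses.columns:
        rest_total = responses.drop(columns=col).sum(axis=1)  # total score without this item
        corrs[col] = responses[col].corr(rest_total)          # Pearson correlation
    return pd.Series(corrs).sort_values()

# Simulated 5-point ratings for 13 candidate items from 500 participants
rng = np.random.default_rng(0)
ratings = pd.DataFrame(rng.integers(1, 6, size=(500, 13)),
                       columns=[f"item_{i}" for i in range(1, 14)])
print(corrected_item_total_correlations(ratings))
```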

I then examined the items using a logit transformation, the transformation used in Item Response Theory (IRT) to convert ordinal data to interval-scaled data. This revealed items with similar discrimination profiles, suggesting redundancy, and I removed three of them, leaving 7 items.
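For readers unfamiliar with discrimination, here is a small, self-contained illustration (not the study's actual model) using the two-parameter logistic (2PL) item response function; the parameter values are made up. Items with nearly identical discrimination parameters produce overlapping response curves, which is what "similar discrimination profiles" means in practice.

```python
# An illustration of the two-parameter logistic (2PL) item response function;
# this is a generic IRT example, not the analysis from this study.
import numpy as np

def item_response_2pl(theta: np.ndarray, a: float, b: float) -> np.ndarray:
    """Probability of endorsing an item given the latent trait theta.

    a: discrimination (slope of the curve); b: difficulty (location on the logit scale).
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                   # a range of latent trait values
print(item_response_2pl(theta, a=1.8, b=0.0))   # two items with nearly identical
print(item_response_2pl(theta, a=1.7, b=0.1))   # discrimination give overlapping curves
```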

The average of these 7 items formed the basis for the Survey Respondent Score (SRS). The internal consistency reliability (a measure of how consistently participants respond across questions) is high (Cronbach's alpha = 0.93), providing evidence that the instrument is reliable.
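If you want to check internal consistency on your own survey data, here is a short sketch of the Cronbach's alpha computation; the simulated DataFrame of item ratings is stand-in data, not the study's responses.

```python
# A sketch of the Cronbach's alpha computation, using simulated stand-in data.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulate 500 respondents answering 7 correlated items (shared signal + noise)
rng = np.random.default_rng(1)
signal = rng.normal(size=(500, 1))
srs_items = pd.DataFrame(signal + rng.normal(scale=0.5, size=(500, 7)),
                         columns=[f"srs_item_{i}" for i in range(1, 8)])
print(round(cronbach_alpha(srs_items), 2))  # the article reports 0.93 for the real data
```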

Validity

Validity refers to the ability of an instrument to measure what it's intended to measure. (Reliability is necessary for a valid instrument, but a reliable instrument is no guarantee of validity.) Our first indication that the SRS is measuring what it's intended to measure is its ability to discriminate between the five surveys.

The SRS for the five surveys used in the pretest is shown in Figure 1. You can see the clear delineation between surveys. Survey 1 had the most questions, no progress bar, and the most open-ended questions and received the lowest score. Survey 5 had the fewest questions, a progress bar, and the fewest open-ended questions and received the highest score.


Figure 1: Average Survey Respondent Score (SRS) for the five candidate surveys. The SRS discriminates well between difficult surveys (survey 1) and easy surveys (survey 5).

To further establish validity, I looked at the correlation between each of the elements we manipulated in the surveys and the SRS. Several elements, alone and in combination, correlated strongly with SRS scores. For example, the number of questions alone explained 85% of the variation in SRS scores.

Finally, I looked at the response rate for each of the surveys (the number of participants who completed the survey divided by the number who started it). The response rate ranged from a low of 60% to a high of 87% and correlated highly with the SRS.
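As a back-of-the-envelope illustration of these validity checks, the squared correlation and the response rate can be computed as below. The numbers for the five surveys are made up for the example, not the study's data.

```python
# A sketch of the validity checks with hypothetical numbers for the five surveys.
import numpy as np

num_questions = np.array([45, 35, 25, 15, 5])         # one manipulated element
srs_scores    = np.array([2.9, 3.3, 3.6, 4.1, 4.4])   # hypothetical mean SRS per survey
started       = np.array([100, 100, 100, 100, 100])   # hypothetical counts
completed     = np.array([60, 68, 75, 82, 87])

response_rate = completed / started                    # completers divided by starters
r_questions = np.corrcoef(num_questions, srs_scores)[0, 1]
r_response  = np.corrcoef(srs_scores, response_rate)[0, 1]
print(f"Variation in SRS explained by number of questions (R^2): {r_questions**2:.2f}")
print(f"Correlation between SRS and response rate: {r_response:.2f}")
```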

This is a great first step in establishing validity with a small set of surveys. Next, we’ll see how well the SRS can predict a more diverse set of survey response rates and the quality of responses.

The SRS is now being offered as a service from TryMyUI. You can have a handful of participants walk through the survey and provide feedback on the questions, and you can get an SRS to understand how well your survey scores relative to the ones we’ve tested.

The final 7 items are:

  1. This survey was easy to take.
  2. In terms of flow, this survey flowed well.
  3. Compared to other surveys I have taken, this survey was average.
  4. If I were asked, I would be likely to take this survey again.
  5. Throughout the survey, I felt engaged.
  6. The questions were clear.
  7. I could understand the questions easily.
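Because the SRS is simply the mean of the ratings on these seven items, scoring a respondent is straightforward. The sketch below assumes each item is answered on a common agreement scale (for example, 1 to 5); the scale details are an assumption, not specified above.

```python
# A minimal scoring sketch: the SRS for one respondent is the mean of their
# ratings on the seven items above (1-5 agreement scale assumed).
from statistics import mean

def survey_respondent_score(ratings: list[float]) -> float:
    """Average of the seven SRS item ratings for a single respondent."""
    if len(ratings) != 7:
        raise ValueError("Expected ratings for all seven SRS items")
    return mean(ratings)

print(survey_respondent_score([5, 4, 4, 3, 4, 5, 5]))  # ~4.29
```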

Conclusion

While there isn’t a universal formula for making better surveys, here are some things to keep in mind on your next project:

  • To understand which elements need to be addressed in a survey, have a handful of participants walk through the survey and think aloud as they answer questions.
  • To test the difficulty of a survey, have a set of participants answer the SRS.
  • The 7-item SRS is a quick, reliable, and valid instrument for assessing survey difficulty and correlates with the response rate.
  • The SRS also correlates highly with elements that affect the difficulty of taking a survey:
    • number of questions
    • presence or absence of a progress bar
    • number of open-ended questions

Surveys can tell us how our customers respond to our products, services, websites, or anything else about our brand. But if our surveys are a pain to take, participants are less likely to complete them and more likely to provide poor-quality responses. Before you put your survey in front of customers, use the SRS to determine how easy or difficult your customers will find it to complete, and use the feedback from the think-aloud sessions for ideas on what to fix.
