{"id":256,"date":"2015-03-17T22:00:00","date_gmt":"2015-03-17T22:00:00","guid":{"rendered":"http:\/\/measuringu.com\/hard-surveys\/"},"modified":"2021-01-28T06:30:04","modified_gmt":"2021-01-28T06:30:04","slug":"hard-surveys","status":"publish","type":"post","link":"https:\/\/measuringu.com\/hard-surveys\/","title":{"rendered":"How Hard Are Your Surveys to Take?"},"content":{"rendered":"
Surveys are ubiquitous.
We’re all inundated with requests to answer questions about what we think, what we do, and what we might do.
You’ve almost surely been asked to fill out your share of them.
Surveys are everywhere because they are such a cost-effective and quick way to collect immediate feedback from customers. We’ve written about them extensively and, in Customer Analytics for Dummies, I discuss their use during all phases of the customer journey.

No doubt you’ve suffered through a painful survey. Whenever participation is voluntary, friction leads to lower response rates.

You can improve your survey, and consequently its response rate, by running a usability test on it: have a handful of participants take it and think aloud as they answer (or struggle to answer) your questions. While this provides valuable feedback, we wanted to know whether we could quantify the difficulty of a survey before the results are in, and even predict the response rate, using a short questionnaire.

To find out, I partnered with my friends at Questionpro and TryMyUI to develop just such an instrument: the Survey Respondent Score (SRS). We wanted a short survey to gauge the difficulty of, well, a survey. In a March 17th webinar, I presented the results.
Creating the Items

To develop the SRS, we followed the general process of psychometric validation used for creating any standardized instrument.

We started with a list of 13 candidate items that we felt addressed the usability, fatigue, answerability, and clarity of a survey’s questions.
Testing the Items

Next, we randomly assigned roughly 500 participants to one of five surveys. For each survey, we manipulated elements that we believed affected survey difficulty, such as the number of questions and the presence of a progress bar. For example, one survey, intended to be easy, had 5 questions and a progress bar, while another, intended to be harder, had 45 questions and no progress bar.
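As a minimal sketch of this kind of balanced random assignment (the participant IDs, seed, and condition labels here are hypothetical, not taken from the study):

```python
import random

# Five survey conditions of varying intended difficulty (labels are hypothetical)
SURVEY_CONDITIONS = ["survey_1", "survey_2", "survey_3", "survey_4", "survey_5"]

def assign_participants(participant_ids, conditions, seed=42):
    """Randomly assign each participant to one survey condition,
    keeping the group sizes as balanced as possible."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal the shuffled participants out round-robin, so ~500 participants
    # yield roughly 100 per survey.
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

assignments = assign_participants(range(500), SURVEY_CONDITIONS)
```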
Winnowing

To determine how well the items were working, I used a mix of Classical Test Theory (CTT) and Item Response Theory (IRT) techniques.

I removed three items with low item-total correlations, leaving 10 items. Next, I examined the factor structure and found it to be unidimensional (using a parallel analysis and an examination of the scree plot). That is, the remaining 10 items all tended to measure one underlying construct of survey difficulty.
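As an illustration of both checks (this is not the study’s code), here is one way to compute corrected item-total correlations and run a simple parallel analysis with NumPy, assuming `responses` is a hypothetical participants × items matrix of ratings:

```python
import numpy as np

def corrected_item_total_correlations(responses):
    """Correlate each item with the total of the *other* items;
    items with low values are candidates for removal."""
    total = responses.sum(axis=1)
    return np.array([
        np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
        for i in range(responses.shape[1])
    ])

def parallel_analysis(responses, n_sims=200, seed=1):
    """Compare eigenvalues of the observed item correlation matrix
    to eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_items = responses.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(responses.T)))[::-1]
    random_mean = np.mean([
        np.sort(np.linalg.eigvalsh(
            np.corrcoef(rng.standard_normal((n_obs, n_items)).T)))[::-1]
        for _ in range(n_sims)
    ], axis=0)
    return observed, random_mean
```

With a unidimensional item set, only the first observed eigenvalue should exceed its random-data counterpart.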
I then examined the items using a logit transformation, the transformation used in Item Response Theory to convert ordinal data to interval-scaled data. This revealed three items with similar discrimination profiles, suggesting redundancy, so I removed them. This left 7 items.
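The core of that transformation is the logit function, which maps a proportion p on (0, 1) to ln(p / (1 - p)), an interval-scaled metric. A simplified sketch (full IRT models estimate item parameters rather than transforming raw proportions directly, and the endorsement proportions below are made up):

```python
import numpy as np

def logit(p, eps=1e-6):
    """Map proportions to the logit (log-odds) scale, clipping to
    avoid division by zero at 0 and 1."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

# e.g., the proportion of respondents endorsing each item
endorsement = np.array([0.10, 0.50, 0.90])
print(logit(endorsement))  # approx. [-2.197, 0.0, 2.197]
```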
Validity

The average of these 7 items formed the basis for the Survey Respondent Score (SRS). The internal consistency reliability (a measure of how consistently participants respond across questions) is high (Cronbach’s alpha = 0.93). This provides evidence that the instrument is reliable.
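Cronbach’s alpha can be computed directly from the item variances and the variance of the total score: alpha = k/(k - 1) × (1 - Σ s²ᵢ / s²_total), where k is the number of items. A minimal sketch, again assuming a hypothetical `responses` matrix:

```python
import numpy as np

def cronbach_alpha(responses):
    """Internal consistency reliability for a participants x items matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Values above roughly 0.90 are conventionally read as high internal consistency.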