{"id":33436,"date":"2022-06-28T19:31:37","date_gmt":"2022-06-29T01:31:37","guid":{"rendered":"https:\/\/measuringu.com\/?p=33436"},"modified":"2022-06-28T07:17:08","modified_gmt":"2022-06-28T13:17:08","slug":"comparison-of-two-seq-item-wordings","status":"publish","type":"post","link":"https:\/\/measuringu.com\/comparison-of-two-seq-item-wordings\/","title":{"rendered":"Comparing Two SEQ Item Wordings"},"content":{"rendered":"

\"\"<\/a>We use the seven-point Single Ease Question<\/a> (SEQ\u00ae<\/sup>) frequently in our practice, as do many other UX researchers.<\/p>\n

One reason for its popularity is the body of research that started in the mid-2000s with the comparison of the SEQ to other similar short measures of perceived ease-of-use, the generation of a normative SEQ database, and evidence demonstrating its strong correlation to task completion and task time.

Over time, different versions of the SEQ have appeared in the literature. Even in our own practice, it has evolved, so we’ve recently started studying whether some of these variants have different measurement properties, especially concerning means and top-box scores.

For example, we recently investigated mean and top-box differences for data collected using SEQ formats that differed only in the polarity of their response option endpoint labels (Very Difficult on the left for the standard format; Very Easy on the left for the alternate). Using a within-subjects Greco-Latin square design that manipulated task difficulty in addition to item format, we found that the two formats produced very similar means, but there were some differences in top-box scores, especially when the task was difficult.
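To make the two summary statistics concrete: the mean treats the SEQ as a 1–7 interval scale, while the top-box score is the proportion of respondents who selected the most favorable point (7, Very Easy). Here is a minimal Python sketch using hypothetical ratings:

```python
# A minimal sketch (hypothetical ratings) of the two summary statistics
# discussed here: the mean of 7-point SEQ responses and the top-box score,
# i.e., the proportion of respondents selecting 7 ("Very Easy").
from statistics import mean

seq_ratings = [7, 6, 7, 5, 7, 4, 6, 7, 7, 3]  # hypothetical 7-point responses

seq_mean = mean(seq_ratings)
top_box = sum(1 for r in seq_ratings if r == 7) / len(seq_ratings)

print(f"mean = {seq_mean:.2f}, top-box = {top_box:.0%}")  # mean = 5.90, top-box = 50%
```

Because the top-box score discards everything below 7, the two statistics can move differently when a manipulation shifts only the most extreme responses, which is consistent with finding similar means but different top-box scores.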

Another way in which the SEQ has changed in our practice is in the wording of the item stem (the part of the item that precedes the response options). As shown in Figure 1, the original wording was just “Overall, this task was:”. That works fine when presented in an online survey or printed form but is awkward to read aloud to participants in moderated usability tests. For consistency between our moderated usability testing and our unmoderated usability testing/surveys, the wording we currently use is “How easy or difficult was it to complete this task?”.

\"\"<\/a>
Figure 1:<\/strong> Two SEQ item formats differing in the wording of their stems.<\/figcaption><\/figure>\n

We often find that minor changes in wording have no consistent, detectable effect on measurement with means or top-box scores, but we wanted to see whether that was the case with these SEQ variants because our current normative database includes data collected with both versions. It seemed plausible to us that including the phrase “easy or difficult” in our current stem, with “easy” first, could slightly bias respondents toward selecting “Very Easy.”
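In a within-subjects study, one common way to check for that kind of bias is to compare paired top-box outcomes across the two wordings, for example with an exact McNemar-style test on the discordant pairs. The sketch below uses hypothetical data and is offered as an illustration of the general approach, not as the analysis used in this study:

```python
# A minimal sketch with hypothetical data: an exact McNemar-style test of
# whether two stem wordings produce different top-box (rating = 7) rates
# for the same participants. Assumes scipy is available.
from scipy.stats import binomtest

# One (original_stem, current_stem) top-box pair per participant (1 = rated 7).
pairs = [(1, 1), (0, 1), (1, 1), (0, 0), (0, 1), (1, 1), (0, 1), (1, 0)]

n_01 = sum(1 for orig, curr in pairs if orig == 0 and curr == 1)  # current only
n_10 = sum(1 for orig, curr in pairs if orig == 1 and curr == 0)  # original only

# Under H0 (no wording effect), the discordant pairs should split 50/50.
result = binomtest(n_01, n=n_01 + n_10, p=0.5)
print(f"discordant: {n_01} vs. {n_10}; exact p = {result.pvalue:.3f}")
```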

So, we ran an experiment to find out.

## Experimental Design: SEQ Original vs. Current Formats

Using our MUIQ® platform for conducting unmoderated remote UX studies, we set up a Greco-Latin experimental design to support a within-subjects comparison of the original and current versions of the SEQ in the context of attempting easy and hard tasks.
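The defining property of a Greco-Latin square is that it superimposes two orthogonal Latin squares, so two factors (for example, SEQ version order and task order) can be counterbalanced at once. The post does not reproduce the actual square used in the study, so the sketch below builds generic 4×4 squares with illustrative symbols:

```python
# A minimal sketch of how a Greco-Latin square counterbalances two factors
# at once: two orthogonal 4x4 Latin squares are superimposed so that every
# (latin, greek) pairing appears exactly once. The symbols are illustrative
# stand-ins, not the actual conditions from the study.
latin = [
    [0, 1, 2, 3],
    [1, 0, 3, 2],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
]
greek = [
    [0, 1, 2, 3],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
    [1, 0, 3, 2],
]

# Superimpose the squares: each cell pairs one level of each factor.
square = [
    [(latin[r][c], greek[r][c]) for c in range(4)]
    for r in range(4)
]

# Orthogonality check: all 16 pairings across the square are distinct.
pairings = {cell for row in square for cell in row}
assert len(pairings) == 16

for row in square:
    print(row)
```

Each row can then serve as one counterbalancing group: every participant in that group experiences the conditions in that row’s order, and across groups every pairing of the two factors occurs exactly once.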