{"id":612,"date":"2020-10-21T01:28:43","date_gmt":"2020-10-21T01:28:43","guid":{"rendered":"http:\/\/measuringu.com\/revisiting-the-left-side-bias\/"},"modified":"2023-04-24T16:44:07","modified_gmt":"2023-04-24T22:44:07","slug":"revisiting-the-left-side-bias","status":"publish","type":"post","link":"https:\/\/measuringu.com\/revisiting-the-left-side-bias\/","title":{"rendered":"Revisiting the Evidence for the Left-Side Bias in Rating Scales"},"content":{"rendered":"

\"\"<\/a>Are people more likely to select response options that are on the left side of a rating scale?<\/p>\n

About ten years ago, we provided a brief literature review of the published evidence, which suggested that this so-called left-side bias not only existed but also was detected almost 100 years ago in some of the earliest rating scales.

Across the publications we reviewed, the effect size seemed to be about 7.5% of the total possible range of the items used to measure the bias, leading to the claim that "an unethical researcher interested in manipulating results could place the desired response on the left side of the scale" (Friedman & Amoo, 1999, p. 117).

This didn't seem unreasonable given theories on participant motivation, reading habits, and education level, which, operating in conjunction with a primacy effect, were consistent with the observed bias. For ten years this was our understanding of the left-side bias.

But after a UX metrics study conducted by Jim in 2019 found little evidence for a left-side bias (just a nonsignificant 1% difference), we wondered how robust the left-side bias really was.

Could other variables account for the lack of effect in the new study? Was it the study design or something about the UX measures? Or was there more (or less) to the earlier papers than their titles and abstracts suggested?

The only way to find out was to revisit the published papers from the original literature review and to check the quality of their data and citations. This took us on a journey through six papers published over the course of 90 years.

Wait a Second … Where's the Left-Side Bias?

In the 2019 study, 546 IBM employees rated IBM Notes using a version of the Technology Acceptance Model (TAM). Participants were randomly assigned to complete a TAM questionnaire with item scales oriented from either left to right (Figure 1, top) or right to left (Figure 1, bottom).

\"\"<\/a>
\n
\"\"<\/a><\/p>\n

Figure 1: Left to right (top) and right to left (bottom) scale formats used in Lewis (2019).<\/p>\n

Both formats of the TAM had 12 items that were averaged and converted to 0–100-point scales. The results were consistent with a left-side bias (Figure 2: a slightly higher mean for the right-to-left format). However, the difference was smaller than expected (just 1% of the scale's range) and not statistically significant (F(1, 542) = .16, p = .69). A 95% confidence interval around the mean difference of 1 ranged from -3.9 to 5.8, so if the nominal difference of 1 is a real difference, it would take a sample much larger than 546 participants to detect it reliably.

\"\"<\/a><\/p>\n

Figure 2: Comparison of means as a function of format from Lewis (2019).<\/p>\n
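To make the scale conversion and the interpretation of that confidence interval concrete, here is a minimal Python sketch. The per-group sample sizes, the common standard deviation, and the seven-point item format are illustrative assumptions (the article reports only the mean difference, the F-test, and the interval), not values taken from Lewis (2019).

```python
import math
from scipy import stats

# Convert the mean of the 12 TAM items to a 0-100-point scale.
# Assumes 1-7 response options; the exact item format is an assumption.
def to_0_100(item_mean, min_pt=1, max_pt=7):
    return (item_mean - min_pt) / (max_pt - min_pt) * 100

# Illustrative summary statistics (placeholders, not the study's raw data):
n1 = n2 = 273      # roughly half of the 546 participants in each format
mean_diff = 1.0    # left-to-right minus right-to-left, in 0-100-scale points
sd = 29.0          # assumed common standard deviation on the 0-100 scale

# 95% confidence interval around the mean difference (pooled-variance t).
se = sd * math.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)
print(f"95% CI for the difference: {ci[0]:.1f} to {ci[1]:.1f}")  # spans 0, so not significant

# Approximate per-group n needed to detect a true 1-point difference
# with 80% power at alpha = .05 (normal-approximation formula).
z_alpha, z_beta = stats.norm.ppf(0.975), stats.norm.ppf(0.80)
n_per_group = 2 * ((z_alpha + z_beta) * sd / mean_diff) ** 2
print(f"Participants per group for 80% power: {math.ceil(n_per_group)}")
```

With these assumed values the computed interval is close to the reported -3.9 to 5.8, and the power calculation illustrates why detecting a true 1-point difference on a 0–100-point scale would require thousands of participants per group rather than a total of 546.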

With this result in mind, we revisited the literature and tracked down earlier papers on this topic (including, possibly, the earliest). The papers we reviewed follow two research threads: