{"id":324,"date":"2016-07-26T21:30:14","date_gmt":"2016-07-26T21:30:14","guid":{"rendered":"http:\/\/measuringu.com\/control-brand\/"},"modified":"2021-08-10T09:34:12","modified_gmt":"2021-08-10T15:34:12","slug":"control-brand","status":"publish","type":"post","link":"https:\/\/measuringu.com\/control-brand\/","title":{"rendered":"Controlling for Brand Attitudes in UX Studies"},"content":{"rendered":"
<\/a>There are a number of variables that affect UX metrics<\/a>.<\/p>\n In most cases, though, you’ll simply want to measure the user experience and not these other “nuisance variables” that may mask the experience users have with an interface.<\/p>\n This is especially the case when making comparisons. In a comparative analysis you use multiple measures to determine which website or product is superior.<\/p>\n Three of the most common variables<\/a> that can affect UX metrics and mask differences in designs are:<\/p>\n In an earlier post I discussed the effects of prior experience<\/a> and how it can be measured and controlled for<\/a>. Here I’ll describe the more nebulous nature of brand attitudes<\/a> and how best to measure and control for them in UX studies.<\/p>\n A user’s attitude toward a company or product can heavily influence (for better or worse) their attitude toward an experience. People treat companies and brands like they treat people. Think of someone in your life who you generally don’t like (a colleague, manager, or friend of a friend). This person may annoy you, or you may disagree with them on many issues.<\/p>\n If I asked you to rate your favorability toward them on a seven-point scale, chances are, it’d be on the low side. This unfavorable attitude would then likely carry over to how you judge the quality of their work and ideas. In contrast, think of someone you like, admire, and respect. When you judge this person’s work and ideas you’re likely to cast a more positive “halo”<\/a> on their efforts.<\/p>\n The same concept applies to companies and their brands. 
If you don’t like a company (for example Coca-Cola or Walmart), or one of their brands (Odwalla or Sam’s Club), you’re probably not going to like their website and will, in turn, rate the experience less favorably.<\/p>\n While it’s realistic to have a mix of brand lovers and haters in a UX study, you’ll often want to understand how much the experience differs despite these variations in participants’ brand attitudes.<\/p>\n For example, I have data from a comparative website benchmark<\/a> study we conducted with well-known consumer retail brands. The study was between-subjects<\/a> with 592 participants and four websites. At the beginning of the study we asked participants to rate their favorability toward the brands used in the study. At the end of the study, participants answered the 8-item SUPR-Q<\/a>\u00ae\u2014a reliable measure of the quality of the website user experience. You can see the scores by website in Figure 1 below.<\/p>\n In Figure 1, website A had the highest SUPR-Q score and separated itself from the other websites. Running an Analysis of Variance (ANOVA<\/a>), we find at least one website differs statistically from the others: F(3,589) = 6.31; p < .001. You can also see this difference visually in the lack of overlap between website A’s confidence interval and those of the other sites.<\/p>\n But Figure 2 below shows the same participants’ brand favorability ratings on a 7-point scale (1 = very unfavorable and 7 = very favorable).<\/p>\n We see the same pattern as the SUPR-Q scores in Figure 1. In particular, participants had a strong affinity toward Brand A. In fact, the correlation between brand favorability and SUPR-Q scores in this study is reasonably large (r = .52). 
But when measuring the website user experience, we usually want to measure how the website affected attitudes about usability and not just a preexisting measure of brand attitude.<\/p>\n In other words, what we want to know is, how much of the difference in SUPR-Q scores is due to the website and not brand favorability?<\/b> To find out we can use two approaches: run an ANCOVA or create subgroups by splitting the data.<\/p>\n The Analysis of Covariance (ANCOVA) will “partial” out the correlated effects of brand attitudes to tell us whether the website experiences still differ. Both SPSS<\/a> and R<\/a> have ANCOVA procedures, and both allow you to save the corrected scores, which you can then use as a new variable and visualize. Figure 3 below shows the new “corrected” means from the ANCOVA along with the original SUPR-Q mean scores by website.<\/p>\n After taking into account brand favorability, the differences between websites are no longer statistically significant: F(3,593) = 1.790; p = .148. You can also see that website A’s error bars now overlap with the other websites’, another sign of the loss of statistical significance. In other words, after accounting for brand favorability, attitudes toward the quality of the website user experience no longer differ substantially. Accounting for brand favorability in this study changes our conclusion\u2014the sites aren’t statistically different given the tasks and types of users.<\/p>\n If you aren’t able to run an ANCOVA on your data, you can take a simpler approach: segment your data into high versus low brand favorability groups and then run the comparisons again. For example, I created a high brand favorability segment (responses of 7 on the brand favorability scale) and a lower brand favorability segment (1 to 6 on the same scale) and then graphed the SUPR-Q scores.<\/p>\n\n
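To make the logic of the ANCOVA concrete, here is a minimal plain-Python sketch of the underlying model comparison: fit a regression with the covariate (brand favorability) alone, then add the website factor, and test whether the website effect explains additional variance. The data and the simplified two-website design are hypothetical, not the study's actual data, and real analyses would use the ANCOVA procedures in SPSS or R mentioned above.

```python
# Illustrative ANCOVA-style model comparison (hypothetical data):
# does a website effect on SUPR-Q survive after partialling out
# brand favorability?

def solve(A, b):
    # Gaussian elimination with partial pivoting for small systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_sse(X, y):
    # Fit y = X @ beta by least squares (normal equations); return the
    # sum of squared residuals.
    m, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(m)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(m)) for a in range(k)]
    beta = solve(XtX, Xty)
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(k))) ** 2
               for i in range(m))

# Hypothetical ratings: y = SUPR-Q-like score, fav = brand favorability
# (1-7), site = 0 for website A, 1 for website B.
y    = [4.8, 4.5, 4.9, 4.2, 3.9, 3.6, 4.0, 3.5]
fav  = [7,   6,   7,   5,   4,   3,   5,   3]
site = [0,   0,   0,   0,   1,   1,   1,   1]
n = len(y)

X_reduced = [[1.0, fav[i]] for i in range(n)]           # covariate only
X_full    = [[1.0, fav[i], site[i]] for i in range(n)]  # covariate + website

sse_r, sse_f = ols_sse(X_reduced, y), ols_sse(X_full, y)
# F for the website effect after adjusting for brand favorability
# (1 numerator df; n - 3 denominator df).
F = ((sse_r - sse_f) / 1) / (sse_f / (n - 3))
print(round(F, 2))
```

If this F is small (as in the study's corrected results), the apparent website difference is largely carried by brand favorability rather than the experience itself.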
Attitudes toward People and Brands<\/h2>\n
Controlling for Brand<\/h2>\n
\nFigure 1: SUPR-Q scores for four websites (uncorrected). Error bars are 95% confidence intervals.<\/span><\/p>\n
\nFigure 2: Brand favorability score for the four websites.<\/span><\/p>\nThe ANCOVA<\/h3>\n
\nFigure 3: Raw and corrected SUPR-Q scores for brand favorability for four websites.<\/span><\/p>\nCompare Brand Subgroups<\/h3>\n
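The subgroup approach described above can be sketched in a few lines of Python: split respondents into a high-favorability segment (ratings of 7) and a lower-favorability segment (1 to 6), then compare mean SUPR-Q scores by website within each segment. The records below are illustrative values, not the study's data.

```python
# Segment hypothetical respondents by brand favorability, then compare
# mean SUPR-Q-like scores per website within each segment.
from collections import defaultdict

# Each record: (website, brand favorability 1-7, SUPR-Q-like score).
responses = [
    ("A", 7, 4.9), ("A", 7, 4.7), ("A", 4, 4.0), ("A", 3, 3.8),
    ("B", 7, 4.8), ("B", 6, 4.1), ("B", 2, 3.7), ("B", 5, 3.9),
]

scores = defaultdict(list)
for site, fav, suprq in responses:
    segment = "high" if fav == 7 else "lower"
    scores[(segment, site)].append(suprq)

for key in sorted(scores):
    vals = scores[key]
    print(key, round(sum(vals) / len(vals), 2))
```

If the websites' means still differ within each segment, the experience itself differs; if the gap shrinks inside the segments, brand favorability was doing much of the work.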