If we think something is unusable today, will we think it’s unusable tomorrow, next week or next year?
How much does prior experience affect how usable we think a website or piece of software is?
Enough to pay attention to. In a recent assessment, prior experience boosted usability ratings 11% for websites and consumer software. But the effect of experience paled in comparison with the effect of “usable” and “unusable” websites and products.
To reach that conclusion, I looked to a large database of System Usability Scale (SUS) data that I maintain, which contains information on prior experience. It includes data from 62 websites and 16 consumer software products across almost 2000 users.
Over 1100 users attempted tasks on one of 62 websites (e.g. airlines, rental cars, retailers and government websites), answered the 10-item SUS and reported how often they'd visited the website prior to the usability test.
I split the users into two groups: those who’d never been to the website (first-time users) and those who had been to the website before (repeat users).
On average, repeat users rated the websites as 11% more usable than first-time users. A 95% confidence interval around the difference tells us repeat users find a website between 6% and 15% more usable.
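A confidence interval like this can be computed from group summary statistics. A minimal sketch, using made-up means, standard deviations, and group sizes (the article's raw data aren't shown), with the difference expressed relative to the first-time users' mean as the article does:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical summary statistics; the actual SUS data are not published here.
repeat = {"mean": 72.0, "sd": 16.0, "n": 700}   # repeat users
first = {"mean": 65.0, "sd": 17.0, "n": 430}    # first-time users

diff = repeat["mean"] - first["mean"]

# Standard error of the difference between two independent means
se = sqrt(repeat["sd"] ** 2 / repeat["n"] + first["sd"] ** 2 / first["n"])
z = NormalDist().inv_cdf(0.975)  # ~1.96 for a 95% interval

lo, hi = diff - z * se, diff + z * se
print(f"Difference: {diff:.1f} points; 95% CI: {lo:.1f} to {hi:.1f}")
print(f"Relative to first-time mean: {lo / first['mean']:.0%} to {hi / first['mean']:.0%}")
```

With large samples the normal approximation is reasonable; for small groups a t-distribution critical value would be the safer choice.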
Figure 1: Website SUS scores by experience from 62 websites.
Consumer Software Experience
I next examined SUS scores from 800 users of 16 widely used consumer software products (e.g. Word, Photoshop and Quicken). Respondents reported how many years they'd owned and used the software (from less than a year to 10+ years).
I split the users into three categories: less than 3 years, 3-5 years and 5+ years of usage.
On average, each level of experience generated 5% higher SUS scores. The most experienced group (those with 5+ years of experience) had average SUS scores 11% higher than the least experienced group (those with less than 3 years).
Figure 2: Consumer Software SUS scores by years of experience from 16 products.
We can be 95% confident the most experienced users think the software is between 6% and 15% more usable.
Interestingly enough, both the average difference (11%) and the confidence interval around it (6% to 15%) are identical to those for the website data.
Familiarity Breeds Content
What this data tells us is that prior experience does impact our perceptions of usability as measured by SUS scores. In general, a user with a lot of prior experience will rate an application as more usable. This will especially be the case between the users with the most experience and those with the least (or none at all).
When gathering SUS data (or any usability data) it’s important to track the user’s prior experience with a product or website. It also likely makes sense to report the SUS scores against levels of experience.
Prior experience is often a concern when comparing a new unfamiliar system to an older, familiar (and likely less usable) system. Researchers are concerned that the new system gets penalized because it isn't as familiar to users, even though it has bug fixes and simpler designs. They worry that improvements in usability might be masked by the effects of experience.
This data suggests that there is reason to be concerned. Even differences as large as 10% may be caused by differences in exposure.
If a researcher can’t mitigate the differences in prior experience, then an adjustment of between 6% and 15% in scores may be warranted. This would especially be the case if there were large gaps in the years of experience, like the ones shown here, or when comparing new users with existing website users.
For example, imagine users with no prior experience with a new version of a software application generate an average SUS score of 70, while users who have been using the existing system for 7 or more years generate an average SUS score of 80. The new system's score can be bumped by 11% to an adjusted SUS score of 77.7, suggesting more similar perceived usability between the systems.
Adjusting the scores in this way estimates what the mean SUS score would be like if the system had been in use for 5+ years.
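The adjustment above is a simple multiplication; a short sketch applying the article's 6%, 11%, and 15% figures (the CI bounds and midpoint) to the hypothetical score of 70:

```python
# Hypothetical new-system score from the example above
new_system_sus = 70.0

# Adjustment factors: the lower bound, midpoint, and upper bound of the
# 6%-15% experience effect reported in the article
adjusted = {boost: round(new_system_sus * (1 + boost), 1)
            for boost in (0.06, 0.11, 0.15)}

for boost, score in adjusted.items():
    print(f"{boost:.0%} adjustment -> adjusted SUS of {score}")
```

The 11% midpoint reproduces the 77.7 in the example; reporting the full adjusted range (74.2 to 80.5) communicates the uncertainty in the experience effect rather than a single point estimate.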
Keep it in Perspective
In general, experience does matter, but it matters far less than the differences between usable and unusable websites and software. On average, experience with a website or software product accounts for only 3% of the variation in SUS scores. To put that into perspective, the differences between websites account for around 30% of the variation in SUS scores (10 times greater).
The difference between consumer software products accounted for 11% of the variation (almost 4 times greater than experience). This lower amount of variation in the software sample is likely due to the more homogeneous nature of these mass-market products.
Put another way, where we see differences of 6 to 7 points in SUS scores between the extreme levels of experience, we see differences in SUS scores of more than 60 points between the best and worst websites.
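The "percent of variation accounted for" in these comparisons corresponds to eta-squared from a one-way breakdown: the between-group sum of squares divided by the total sum of squares. A sketch with toy SUS scores (not the article's data) for three hypothetical products:

```python
from statistics import mean

# Toy SUS scores grouped by product; names and values are illustrative only.
groups = {
    "product_a": [35, 40, 38, 42],
    "product_b": [68, 72, 70, 74],
    "product_c": [85, 88, 90, 87],
}

all_scores = [s for scores in groups.values() for s in scores]
grand_mean = mean(all_scores)

# Total variation: squared deviations of every score from the grand mean
ss_total = sum((s - grand_mean) ** 2 for s in all_scores)

# Between-group variation: squared deviations of group means, weighted by size
ss_between = sum(len(v) * (mean(v) - grand_mean) ** 2 for v in groups.values())

eta_sq = ss_between / ss_total
print(f"Variation in SUS scores accounted for by product: {eta_sq:.0%}")
```

In these toy data nearly all variation is between products; in the article's samples the comparable figures were 30% for websites, 11% for software products, and 3% for experience.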
When there is little difference between the products being tested, however, experience can matter. You'll want to track and report experience, and likely bump the new version's SUS score for a more equitable comparison.
Editorial services courtesy of Marcia Riefer Johnston. See her “Word Power” blog.