{"id":303,"date":"2016-03-08T07:31:59","date_gmt":"2016-03-08T07:31:59","guid":{"rendered":"http:\/\/measuringu.com\/measuring-essentials\/"},"modified":"2021-01-28T06:30:12","modified_gmt":"2021-01-28T06:30:12","slug":"measuring-essentials","status":"publish","type":"post","link":"https:\/\/measuringu.com\/measuring-essentials\/","title":{"rendered":"10 Essentials of Measuring Usability"},"content":{"rendered":"
<p>Observing just a few users interact with a product or website can tell you a wealth of information about what’s working and what isn’t.<\/p>\n
<p>But to loosely quote Lord Kelvin<\/a>, when we can measure something and express it in numbers, we understand and manage it better.<\/p>\n
<p>Measuring usability allows us to better understand how changes in usability affect customer satisfaction and loyalty<\/a>.<\/p>\n
<p>Usability can and should be measured on mobile apps, enterprise accounting software, early-stage prototypes, and mature websites.<\/p>\n
<p>While devices and users will differ, here are ten core concepts to understand when measuring usability that are likely to remain constant. We’ll also cover these at the Rome<\/a> and Denver UX Boot Camps<\/a>.<\/p>\n
<h2>1. There’s no magic usability thermometer<\/h2>\n
<p>While Lord Kelvin provides inspiration for measuring to better understand usability, unfortunately there is no usability thermometer to rely on. You can’t measure usability directly; instead, you measure the outcomes of good and bad experiences.<\/p>\n
<h2>2. There is an international standard<\/h2>\n
<p>Usability is formally defined in a few places, most prominently in the ISO 9241 pt. 11<\/a> definition: a combination of effectiveness, efficiency, and satisfaction in a context of use.<\/p>\n
<h2>3. Have real users attempt real tasks<\/h2>\n
<p>There is some value in pulling a random person off the street to tell you what they think of a design. It’s likely better than nothing if your interface is for general, walk-up use. However, usability is best measured by having people who will actually use the product, app, or website attempt tasks they would actually do.<\/p>\n
<h2>4. Use multiple measures<\/h2>\n
<p>The best way to measure usability is to use multiple metrics that correspond to the ISO definition of usability. These are typically collected at the task and study level. At the task level, collect completion rates<\/a> (effectiveness), time on task<\/a> (efficiency), and perceived ease using a questionnaire like the SEQ<\/a> (satisfaction). These can be combined into a Single Usability Measure (SUM)<\/a>, which is easier to report. At the study level, use a questionnaire like the SUS<\/a> or SUPR-Q<\/a>, which have desirable psychometric properties.<\/p>\n
<h2>5. Remember attitudes and actions<\/h2>\n
<p>You can think of usability as a combination of attitudes (what people think about an interface) and actions (how people interact with an interface). Both are needed to effectively quantify usability.<\/p>\n
<h2>6. Attitude does not always equal action<\/h2>\n
<p>People are notoriously fickle, difficult to measure, and often contradict themselves. While we’ve found that measures of perceived ease and performance metrics correlate [pdf]<\/a>, we do see cases where preference does not equal performance. It’s not uncommon for participants to perform worse on a design and yet prefer it when asked. When this happens, we tend to rely on performance, because users are rarely tasked with picking among alternatives in real life.<\/p>\n
<h2>7. Initial judgments of usability are tenuous<\/h2>\n
<p>While there is evidence that initial judgments about facial expressions<\/a>, dating partners [pdf]<\/a>, and even students’ evaluations of teachers<\/a> [pdf] are accurate, we’ve found that the first five seconds spent judging the usability of a website aren’t as reliable. The first click, however, is reasonably predictive of success<\/a>.<\/p>\n
<h2>8. Surveys as retrospective measures<\/h2>\n
<p>It’s not always feasible<\/a> to conduct a task-based usability study. Using well-calibrated surveys to measure attitudes about usability<\/a> can provide at least half the equation. While surveys aren’t terribly helpful for diagnosing problems, they are an inexpensive and efficient way to benchmark attitudes across many products and understand the key drivers of those attitudes.<\/p>\n
<h2>9. Context matters<\/h2>\n
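The study-level questionnaire recommended above, the SUS, has a standard scoring rule (from John Brooke's original formulation): each of the ten items is answered on a 1–5 scale, odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch in Python — the function name is mine, not part of any library:

```python
def sus_score(responses):
    """Convert one participant's ten SUS item responses (each 1-5)
    into a 0-100 score using the standard scoring rule:
    odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response),
    and the summed contributions are multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i is 0-based, so even i = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5


# A participant who strongly agrees (5) with every positive (odd) item and
# strongly disagrees (1) with every negative (even) item scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
print(sus_score([3] * 10))                         # 50.0 (all-neutral responses)
```

Note that individual SUS scores are usually averaged across participants to produce the study-level benchmark.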