{"id":423,"date":"2017-09-20T03:53:30","date_gmt":"2017-09-20T03:53:30","guid":{"rendered":"http:\/\/measuringu.com\/why-numbers\/"},"modified":"2021-08-12T08:30:35","modified_gmt":"2021-08-12T14:30:35","slug":"why-numbers","status":"publish","type":"post","link":"https:\/\/measuringu.com\/why-numbers\/","title":{"rendered":"5 Ways to Get at the Why Behind the Numbers"},"content":{"rendered":"
Measurement is at the heart of scientific knowledge.

It's also key to understanding and systematically improving the user experience.

But numbers alone won't solve all your problems. You need to understand what's driving your metrics.

While taking a patient's pulse or blood pressure provides metrics that can indicate good or poor health, physicians need more to understand what's affecting those numbers. The same concept applies to UX research.

We advocate and write extensively about UX metrics such as the SUS, NPS, task times, completion rates, and the SUPR-Q. Using these measures is the first step, but understanding what's affecting them, "the why" behind the numbers, is the essential next step to diagnose and fix problems in the experience. Here are five approaches we use to understand the "why."

## 1. Verbatim Analysis

In surveys, website intercepts, and unmoderated usability tests, there are (and should be) open-ended comment fields for participants. They're a good place to start understanding the why. If a survey respondent gives a low likelihood-to-recommend score (a detractor), a question immediately following this closed-ended question can probe his or her reasons.

You can evaluate responses systematically by sorting and coding them. However, even a quick reading of a subset of what participants are saying will give you some idea of what's driving high or low scores. For example, participants in our business software benchmarking study reflected on their likelihood to recommend the products they used. We asked participants to briefly explain their ratings, which is particularly helpful for low-scoring responses. One respondent gave the Learning Management System (LMS) software Canvas a score of 5 on an 11-point scale (a detractor) and said:

> "Although Canvas allows you to connect with students on a more personal level than email… it still has a large amount of issues present. The layout of Canvas is horrid, and users should have an option to collapse the menu found on the left-hand side of the screen. Although it's an adequate LMS program, it isn't better than Blackboard."

There's a lot packed into that one comment. It doesn't mean you should immediately start redoing the software, but it does mean the issues are likely worthy of further investigation.
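When you have more than a handful of comments, even a simple keyword pass can surface candidate themes before a formal manual coding exercise. The sketch below is a minimal illustration of that idea, not our coding process: the theme keywords, the `responses.csv` file, and its column names are all hypothetical.

```python
import csv
from collections import Counter

# Hypothetical keyword -> theme mapping. In practice, themes emerge
# from reading a subset of comments first, then coding the rest.
THEMES = {
    "layout": "layout/navigation",
    "menu": "layout/navigation",
    "slow": "performance",
    "crash": "reliability",
    "price": "cost",
}

def code_comment(text):
    """Return the set of themes whose keywords appear in a comment."""
    lowered = text.lower()
    return {theme for keyword, theme in THEMES.items() if keyword in lowered}

theme_counts = Counter()
with open("responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumed columns: score, comment
        if int(row["score"]) <= 6:  # detractors on the 11-point (0-10) scale
            theme_counts.update(code_comment(row["comment"]))

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

A keyword pass like this only flags candidate themes; reading and manually coding a sample of the flagged comments is still the reliable way to confirm what's actually driving the scores.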
## 2. Log Files & Click Streams

For measuring the website user experience, looking at where users click, how long they spend on a page, and how many pages they visit can give you some idea of why tasks might be taking too long or why task completion rates are low.

Log files also let you quickly see whether participants are getting diverted to the wrong page, either unintentionally (by following a link) or intentionally (browsing Facebook while attempting a task). The latter case would be a good reason to exclude the participant's responses.
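As a concrete illustration, suppose each click-stream record carries a session ID, a timestamp, and a URL. A few lines of aggregation then yield the per-session page counts, overall duration, and off-site detours described above. The `clicks.csv` file and its column names are assumptions for this sketch, not a standard log format.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Assumed log format: one row per page view with columns
# session_id, timestamp (ISO 8601), url.
sessions = defaultdict(list)
with open("clicks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        sessions[row["session_id"]].append(
            (datetime.fromisoformat(row["timestamp"]), row["url"])
        )

for session_id, views in sessions.items():
    views.sort()  # order page views chronologically
    pages = len(views)
    duration = (views[-1][0] - views[0][0]).total_seconds()
    # Flag sessions that wander off-site mid-task (e.g., to Facebook),
    # a candidate for excluding the participant's responses.
    off_site = any("facebook.com" in url for _, url in views)
    print(f"{session_id}: {pages} pages, {duration:.0f}s"
          + (" [off-site visit]" if off_site else ""))
```

Time on an individual page is just the gap between consecutive timestamps within a session; note that the last page of a session has no reliable dwell time, a well-known limitation of click-stream data.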