These perceptions come from an informal practice that dates back to the beginnings of the usability profession and is perpetuated through training programs and some UX experts.
Unfortunately, these perceptions are misguided and can prevent perfectly good data from being used to gain an accurate view of the user experience.
Qualitative data can in fact be converted into quantitative measures even if it doesn’t come from an experiment or from a large sample size.
The distinction between a qualitative study and a quantitative study is a false dichotomy. It doesn’t cost more money to quantify or use statistics. It just takes some training and confidence, like any method or skill.
Here are five examples of how you can take common qualitative approaches to assessing the user experience and convert them into numbers, which can then be analyzed with a range of statistical procedures.
- Converting a usability problem into a frequency: The quintessential usability activity is watching users attempt realistic tasks and identifying what in the interface is causing problems. Simply categorize the problems, count their frequency, then use confidence intervals to estimate how common the problems likely are in the entire user population. For example, if 3 out of 11 users had a problem downloading the correct software product from a website, then we can be 95% confident at least 9% of all users would also have the problem (use the free web calculator or download the problem frequency calculator). It doesn’t cost more money to generate those confidence intervals, and the process also allows you to generate more accurate sample size estimates.
- What problems are customers having?: It is sometimes difficult for customers to identify what they need in a product and where the shortfalls are. One effective approach is ethnographic research (a qualitative method): observing customers in their own setting as they encounter and solve problems. Observe the problems customers encounter, then categorize and count them. Estimate the percentage of all customers who likely share each behavior or problem to help prioritize product features. You can then use the frequency of these issues to estimate how many customers you need to visit.
- Why is the product not being recommended?: When using the Net Promoter Score, it’s valuable to ask open-ended follow-up questions, especially of Detractors, such as, “Briefly describe why you gave the rating.” Take the list of open-ended comments and group them into categories (content analysis). Count the occurrences, compute each category’s percentage of all comments, graph them, and throw in some confidence intervals for good measure.
- Why was that task so difficult?: I recommend asking just a single question after users attempt a task in an informal Steve Krug-style usability test. If a user provides a low rating (below a 5), ask them to briefly explain why. Take these open-ended comments, categorize them, and add up the frequency in each group. This process can help you and your stakeholders make more informed decisions about the likely causes of the trouble. Figure 1 below shows an example of the comments from a recent usability test.
Figure 1: Categorized comments about why the task was difficult. Error bars are 90% confidence intervals, N = 106.
- Combining Net Promoter Scores and comments: A powerful way of making qualitative, open-ended comments more actionable is to combine them with a closed-ended question, like the Net Promoter Score. For example, quantify what users say they would improve on a website, then show those customers’ Net Promoter Scores. An example is shown in Figure 2 below. There were 110 comments in total, but to quickly identify what to focus on, we can see that comments related to website navigation and product filters are both high in frequency and come from users who are likely generating negative word of mouth (notice the negative NPS). In contrast, design/layout comments and advertisements, while high in frequency, appear to be minor issues for the users.
Figure 2: Combining open-ended comments about what to fix on a website with those users’ Net Promoter Scores. Comments related to Navigation and Product filters are both high in frequency and come from the more dissatisfied users. Error bars are 90% confidence intervals.
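To make the confidence-interval step concrete, here is a short Python sketch. The article doesn’t specify which interval the calculators use, so this assumes the adjusted Wald (Agresti-Coull) method, a common choice for the small samples typical of usability tests; it reproduces the 3-out-of-11 example, giving a lower bound of about 9%.

```python
import math

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Adjusted Wald (Agresti-Coull) confidence interval for a proportion.

    Adds z^2/2 pseudo-successes and z^2 pseudo-trials before computing a
    standard Wald interval, which behaves well even with small samples.
    """
    # z-scores for common two-sided confidence levels
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 3 of 11 users had a problem downloading the correct product
low, high = adjusted_wald_ci(3, 11)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 9% to 57%
```

Note how wide the interval is at n = 11: small samples can still rule plenty out (here, that the problem is rare), but narrowing the estimate requires more users.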
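The Figure 2 approach, grouping comments by category and scoring each category by its users’ Net Promoter Score, can be sketched in a few lines of Python. The data below is made up for illustration (the categories echo Figure 2, the ratings do not come from the study):

```python
from collections import defaultdict

# Hypothetical data: (comment category, that user's 0-10
# likelihood-to-recommend rating)
responses = [
    ("Navigation", 4), ("Navigation", 6), ("Navigation", 3),
    ("Product filters", 5), ("Product filters", 2),
    ("Design/layout", 9), ("Design/layout", 10), ("Design/layout", 8),
    ("Advertisements", 9), ("Advertisements", 7),
]

# Group ratings by comment category (the content-analysis step)
by_category = defaultdict(list)
for category, rating in responses:
    by_category[category].append(rating)

# NPS per category: % promoters (9-10) minus % detractors (0-6)
nps_by_category = {}
for category, ratings in by_category.items():
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    nps_by_category[category] = 100 * (promoters - detractors) / len(ratings)

# Report most frequent categories first: frequency plus NPS together
# show where the unhappy users are concentrated
for category in sorted(by_category, key=lambda c: -len(by_category[c])):
    print(f"{category}: {len(by_category[category])} comments, "
          f"NPS {nps_by_category[category]:+.0f}")
```

A category that is both frequent and carries a deeply negative NPS (like Navigation in this toy data) is the natural first candidate for fixing.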
I’m not advocating quantifying data as an exercise in counting. There are, of course, many software applications and websites that have never been exposed to any input from users. In such situations there will likely be many obvious problems that just need to be fixed, regardless of how many users encounter them.
But once you’ve picked the low-hanging fruit of a neglected interface, structuring your activities and results to support quantification lets you derive more meaning from your methods.
The advantage of converting qualitative data into quantitative data is that the source of the qualitative data, a direct encounter with the user’s experience, can reveal nuances in usability that might otherwise be missed in more formal quantitative experiments and surveys.
Not only can qualitative data be categorized into quantities, but it can also prompt further questions and discovery for usability improvement.