While there are books written on measuring usability, it can be easy to get overwhelmed by the details and intimidated by the thought of having to deal with numbers.
If I had to pick five words to capture the best practices and core principles of measuring usability, here they are.
There are a number of methods to measure and improve the user experience. There’s no reason to think only one will always be the best approach. In most studies we use a mix of unmoderated and moderated approaches as well as a mix of quantitative metrics and qualitative insights.
When we conduct a usability test we often incorporate a top-task analysis, card sort, tree test and a task-based portion—often randomly assigned to different users or run in different phases. Take an "AND" instead of an "OR" approach when deciding which methods to use to measure usability.
Usability is a fuzzy construct like delight, loyalty, satisfaction and frustration. Rarely is there a single question, single task or method that will fully capture what you're intending to measure. Use multiple questions, complementary tasks and converging metrics to help triangulate around the construct. The Single Usability Metric (SUM) is a more formalized approach to triangulating the multiple metrics of usability.
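The idea behind a SUM-style combination can be sketched in a few lines: standardize each metric onto a common scale, then average the standardized scores into one number. This is a simplified illustration of the triangulation idea, not the published SUM procedure; the scaling choices (a target "spec" time, a 1–7 SEQ scale) are assumptions for the example.

```python
# Simplified sketch of combining usability metrics into a single score
# (illustrative only; not the published SUM procedure).

def single_score(completion_rate, mean_time, spec_time, mean_seq, seq_max=7):
    """Average three metrics after rescaling each to a 0-1 range.

    completion_rate: proportion of users completing the task (0-1)
    mean_time / spec_time: observed mean task time vs. a target ("spec") time;
        scores 1.0 at or under spec and decays as time exceeds it
    mean_seq: mean Single Ease Question rating on a 1-to-seq_max scale
    """
    time_score = min(1.0, spec_time / mean_time)   # 1.0 if at/under spec
    seq_score = (mean_seq - 1) / (seq_max - 1)     # rescale 1-7 onto 0-1
    return (completion_rate + time_score + seq_score) / 3

# Example: 80% completion, 95 s mean time vs. a 60 s spec, mean SEQ of 5.5
score = single_score(completion_rate=0.80, mean_time=95, spec_time=60, mean_seq=5.5)
```

The equal weighting here is the simplest choice; the formal SUM approach standardizes metrics more carefully, but the payoff is the same: one summary number that no single metric could provide alone.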
Where there’s human judgment there’s disagreement. Different evaluators will tend to uncover different usability problems, even from watching the same videos of users attempting tasks. Rating the severity of usability problems is also highly variable (some would say unreliable) between different people. Different facilitators tend to affect the interaction with users during a usability session. When grouping verbatim responses, different people will generate different categories.
One of the best ways to deal with the variability in judgments is to have redundant evaluators independently code usability problems, verbatims, and problem severity. The aggregated findings are often the most reliable interpretation. You get bonus points if you report a measure of reliability like a correlation coefficient or Kappa.
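Cohen's Kappa is one such reliability measure for two evaluators coding the same items: it reports agreement corrected for how often the raters would agree by chance. A minimal sketch, using made-up severity ratings:

```python
# Minimal Cohen's Kappa for two evaluators rating the same usability
# problems (hypothetical severity labels; any categorical codes work).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items where the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["high", "high", "low", "medium", "low", "high"]
b = ["high", "medium", "low", "medium", "low", "high"]
kappa = cohens_kappa(a, b)  # 0.75: substantial but imperfect agreement
```

A Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which makes disagreement between evaluators visible instead of hidden in the aggregated findings.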
Measure and observe what users do. Don't just ask users what they think or whether they like an interface or design. Having users attempt realistic tasks is the signature principle of usability testing. It provides some of the best insight into which elements of an interface lead to frustration and task failure, and therefore need to be fixed.
Usability is attitude plus action. While you don’t want to rely exclusively on what users say, you should still ask users to reflect on their experience with an interface. Usability is the combination of effectiveness, efficiency and satisfaction. The first two are obtained from observing user performance whereas satisfaction is best measured systematically using post-task questions like the Single Ease Question (SEQ) and post-test questionnaires like the SUPR-Q.
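The three components map directly onto the data a task-based study produces. A minimal sketch, with hypothetical per-user results for one task: effectiveness from completion, efficiency from task time, and satisfaction from the post-task SEQ rating.

```python
# Hypothetical per-user results for a single task: whether the user
# completed it, how long it took, and their post-task SEQ rating
# (1 = very difficult, 7 = very easy).
results = [
    {"completed": True,  "seconds": 72,  "seq": 6},
    {"completed": True,  "seconds": 90,  "seq": 5},
    {"completed": False, "seconds": 140, "seq": 2},
    {"completed": True,  "seconds": 65,  "seq": 7},
]

n = len(results)
effectiveness = sum(r["completed"] for r in results) / n   # completion rate
efficiency = sum(r["seconds"] for r in results) / n        # mean task time (s)
satisfaction = sum(r["seq"] for r in results) / n          # mean SEQ rating
```

Effectiveness and efficiency come from observing performance; satisfaction comes from asking. Reporting all three keeps the attitude and the action in the same picture, which is the point of this principle.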