{"id":187,"date":"2013-10-22T20:00:00","date_gmt":"2013-10-22T20:00:00","guid":{"rendered":"http:\/\/measuringu.com\/key-usability-principles\/"},"modified":"2022-03-21T18:02:48","modified_gmt":"2022-03-22T00:02:48","slug":"key-usability-principles","status":"publish","type":"post","link":"https:\/\/measuringu.com\/key-usability-principles\/","title":{"rendered":"Designing for Usability: 3 Key Principles"},"content":{"rendered":"
<\/a>It’s not terribly complicated, yet it’s not universally applied.<\/p>\n When designing an application, website, or product, three things help generate a more usable experience: an early focus on the users and tasks, empirical measurement, and iterative design.<\/p>\n These three key principles were articulated by John Gould and Clayton Lewis almost 30 years ago in the seminal 1985 paper<\/a>, Designing for Usability: Key Principles and What Designers Think.<\/p>\n The same obstacles to designing for usability exist now, in the mobile age, as they did at the dawn of the Graphical User Interface age!<\/p>\n Here are many of the points they made, along with some examples of how we’ve applied these key principles.<\/p>\n
Early Focus on Users and Tasks<\/h2>\n Bring the design team into direct contact with potential users, not through intermediaries or an examination of profiles<\/a>. Especially in the first phases, collect information about what users are trying to accomplish, what’s not making sense, and how well the architecture and system navigation map to user expectations. Using a Top Task Analysis<\/a> is an efficient method for understanding the vital few tasks users want to accomplish in software or websites.<\/p>\n
Iterative Design<\/h2>\n Careful design pays off, and having multiple phases is not a license to be sloppy or careless. Getting it right the first time is a good goal, but experience shows that it’s not as easy as it sounds. If you have the budget for testing 15 users, it’s best to split that sample into three groups of five users. Test the first round of five users, fix the problems that aren’t controversial and won’t introduce new issues, then test again. In fact, it’s often overlooked, but Nielsen recommends testing with five users per round, not five users total, in his famous article on sample sizes<\/a> in usability tests.<\/p>\n
Empirical Measurement<\/h2>\n Many development teams may get the first two key principles but are hesitant to measure. Picking a few key usability metrics and tracking them through each round of testing provides an easy, objective check on your design decisions. A low-fidelity prototype or changing tasks are not excuses for not measuring. In addition to completion rates<\/a>, here are some other examples of showing measured improvements from iterative testing.<\/p>\n
Task Difficulty<\/h3>\n We tested an iPad application in three rounds of testing over a three-month period. In each round, we had five or six participants attempt a number of tasks. After each task, we asked participants to rate how difficult they found the task using the 7-point Single Ease Question<\/a> (we administered the scale verbally). In the first round, the prototype was only marginally functional, but we were still able to uncover problems with the navigation and labels. The graph below shows the average scores along with 85% confidence intervals for each round.<\/p>\n <\/p>\n Despite changes to some of the tasks in each round, three tasks were common to all three rounds and five tasks were common to two rounds.<\/p>\n You can see in the graph above that the perception of task difficulty improved statistically from Round 1 to Round 3 for three tasks (notice that the error bars on the green bars mostly don’t overlap with the error bars on the blue bars). It was generally consistent for most other common tasks and generally at a high level of ease (around the 90th percentile). This empirical measurement provided clear evidence of a quantifiably better user experience.<\/p>\n
Overall Perceptions of Usability<\/h3>\n In addition to task-level metrics, you can also measure perceptions of the entire app experience across sessions. Bangor et al.<\/a> described the iterative design of an installation kit where the System Usability Scale (SUS)<\/a> was tracked after each session. A visual representation of their data is shown in the graph below across five rounds, along with the historical “average” benchmark of 68 for SUS scores.<\/p>\n <\/p>\n
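If you want to track SUS across rounds yourself, the standard scoring (from Brooke's original formulation) is straightforward to compute. The sketch below shows it in Python; the participant responses in the example are illustrative only, not data from the study discussed above.

```python
# Standard SUS scoring: 10 items rated 1-5.
# Odd-numbered items contribute (rating - 1), even-numbered items
# contribute (5 - rating); the sum is multiplied by 2.5 for a 0-100 score.

def sus_score(ratings):
    """Score one participant's SUS responses (item 1 first)."""
    if len(ratings) != 10 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected 10 ratings between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))
    return total * 2.5

# Hypothetical responses from one participant:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```

Averaging these per-participant scores for each round and plotting them against the 68 benchmark gives a chart like the one described above.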
Percent of Critical\/Severe Problems Uncovered<\/h3>\n
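A common way to estimate problem discovery across rounds is the cumulative binomial model, 1 − (1 − p)^n: the expected proportion of problems seen at least once after n users, when each problem has probability p of being observed with any single user. The sketch below assumes this model; the p value of 0.31 is the frequently cited average discovery rate, used here only for illustration.

```python
# Cumulative problem-discovery model: expected share of problems
# observed at least once after n users, given per-user discovery
# probability p. The p = 0.31 below is illustrative.

def proportion_discovered(p, n):
    """Expected proportion of problems found after n test sessions."""
    return 1 - (1 - p) ** n

for n in (5, 10, 15):
    print(n, round(proportion_discovered(0.31, n), 2))
```

Under these assumptions, three rounds of five users each would be expected to surface the large majority of problems occurring at that rate, which is consistent with the round-by-round improvements described above.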