Blogs

Adding confidence intervals to completion rates in usability tests will temper both excessive skepticism and overstated usability findings. Confidence intervals make testing more efficient by quickly revealing unusable tasks with very small samples. Examples are detailed and downloadable calculators are available.

Are you Lacking Confidence?

You just finished a usability test. You had 5 participants attempt a task in a new version of
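As a sketch of the idea, a completion-rate confidence interval for a very small sample can be computed with the adjusted-Wald (Agresti-Coull) method, a common choice for binary completion data. The function name and the 4-of-5 figures below are hypothetical, not from the post:

```python
from statistics import NormalDist

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Adjusted-Wald confidence interval for a completion rate.

    Adds z^2/2 to the successes and z^2 to the trials before
    computing a Wald interval, which behaves far better than the
    plain Wald interval at the tiny samples typical of usability tests.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 4 of 5 participants completed the task
low, high = adjusted_wald_ci(4, 5)
```

Even with only 5 users, the interval's lower bound tells you how bad the true completion rate could plausibly be, which is what quickly flags unusable tasks.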

Read More

One of the biggest and usually first concerns levied against any statistical measure of usability is that the number of users required to obtain "statistically significant" data is prohibitive. People reason that one cannot, with any reasonable level of confidence, employ quantitative methods to determine product usability. The reasoning continues something like this: "I have a population of 2500 software users. I plug my population

Read More

We already saw how a manageable sample of users can provide meaningful data for discrete-binary data like task completion. With continuous data like task times, the sample size can be even smaller. The continuous calculation is a bit more complicated and involves somewhat of a Catch-22. Most want to determine the sample size ahead of time, then perform the testing based on the results of
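The Catch-22 the excerpt mentions (you need a standard deviation to size the sample, but a sample to estimate the standard deviation) is usually broken with a small pilot. A rough sketch, with invented pilot times; a z-based formula is used here for simplicity, though a t-based critical value would nudge n a bit higher at small samples:

```python
from math import ceil
from statistics import NormalDist, stdev

def required_sample_size(pilot_times, margin_seconds, confidence=0.95):
    """Estimate how many users are needed so the confidence interval
    around the mean task time is no wider than +/- margin_seconds."""
    # Catch-22: the standard deviation is needed before testing,
    # so estimate it from a small pilot sample.
    sd = stdev(pilot_times)
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # z-based approximation of n = (z * sd / margin)^2, rounded up
    return ceil((z * sd / margin_seconds) ** 2)

# Hypothetical pilot: task times in seconds from 5 users
n = required_sample_size([60, 75, 90, 82, 68], margin_seconds=10)
```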

Read More

One of the most commonly reported measures of usability is task success. Success rates can be converted into a sigma value by using the discrete-binary defect calculation: Proportion Unsuccessful = Defects / Opportunities, where opportunities are the total number of tasks and defects are the total number of unsuccessful tasks. This calculation provides a proportion that is equivalent to a success or failure rate. For example, if 143 total tasks
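The defect calculation is a one-liner; the 12-of-100 figures below are made up for illustration (the post's own 143-task example is truncated above):

```python
def proportion_unsuccessful(defects, opportunities):
    """Discrete-binary defect calculation:
    Proportion Unsuccessful = Defects / Opportunities
    """
    return defects / opportunities

# Hypothetical example: 12 failed attempts out of 100 task attempts
p = proportion_unsuccessful(12, 100)  # a 12% failure (88% success) rate
```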

Read More

It’s a big step when User Centered Design methods are employed in a company to improve the usability of a product. It shouldn’t be the last step, yet oftentimes it is. Many popular usability testing techniques are the right method for gathering user data; however, their results alone will only scratch the surface of the true state of usability. Often their results can be

Read More

Continuous Data

Efficiency is one of the cardinal aspects of a product's usability. The amount of time it takes for a user to complete a task is one of the best predictors of efficiency because it:

- Illustrates the slightest variability (in seconds) that is more difficult to detect in other measures like errors
- Is a continuous measurement, meaning it can be subdivided infinitely (unlike task

Read More

Usability measurements involve human performance, and because human behavior is inherently error prone, reaching the goal of 6σ isn't necessary to proclaim success. Manufacturing companies considered to produce "high-quality" products are usually somewhere between 4σ and 5σ. The benchmarks and targets that we set in our tests will necessarily need to be more forgiving than in manufacturing. I prefer focusing on the movement of
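One way to put a success rate on the sigma scale is via its z-score; this sketch assumes the conventional 1.5σ long-term shift used in Six Sigma reporting, and the 93% success rate is a made-up example:

```python
from statistics import NormalDist

def sigma_from_success_rate(success_rate, shift=1.5):
    """Convert a task success rate into a process sigma value:
    the z-score of the success proportion plus the conventional
    1.5-sigma long-term shift used in Six Sigma reporting."""
    return NormalDist().inv_cdf(success_rate) + shift

# A 93% success rate lands near 3 sigma -- respectable for a
# human-performance measure, well short of the 6-sigma ideal.
sigma = sigma_from_success_rate(0.93)
```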

Read More

Minimize Lurking Variables

Getting Warmed Up

Without task randomization, so-called lurking variables can taint your data: usually not enough to be devastating, but often enough to be noticeable. One lurking variable when analyzing task times is the user's tendency to perform better on later tasks and worse on earlier tasks. It's human nature: someone hands you a piece of paper and says: "Ok, complete the task.
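A per-participant random task order is a simple guard against this warm-up effect; a minimal sketch (the task names are invented for illustration):

```python
import random

def randomized_task_order(tasks, participant_seed=None):
    """Return an independent random task order for one participant,
    so practice/warm-up effects average out across tasks instead of
    systematically favoring whichever tasks come last."""
    rng = random.Random(participant_seed)  # seed makes the order reproducible
    order = list(tasks)                    # copy; leave the master list intact
    rng.shuffle(order)
    return order

tasks = ["create account", "find price", "checkout"]
order_p1 = randomized_task_order(tasks, participant_seed=1)
```

Seeding by participant keeps each session's order reproducible for later analysis while still varying the order across participants.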

Read More