Blogs

There isn't a usability thermometer to tell us how usable an interface is. We observe the effects and indicators of bad interactions, then improve the design. There isn't a single silver-bullet technique or tool that will uncover all problems. Instead, practitioners are encouraged to use multiple techniques and triangulate to arrive at a more complete set of problems and solutions. Triangles of course have

Read More

Time is a metric we all understand, so it's no wonder it's one of the core usability metrics. Perhaps it's something about the precision of minutes and seconds that demands greater scrutiny. There's a lot to consider when measuring and analyzing task time. Here are 10 considerations. Task times are collected in about half of formative usability tests and 75% of summative tests. Task

Read More

Heuristic Evaluations and Cognitive Walkthroughs belong to a family of techniques called Inspection Methods. Inspection methods, like Keystroke Level Modeling, are analytic techniques. They don't involve users and tend to generate results at a fraction of the time and cost of empirical techniques like usability testing. They are also referred to as Expert Reviews because they are usually performed by an expert in usability or

Read More

If you collect nothing else in a usability test, it should be a list of problems encountered by users. It seems so simple, yet there is a rich history of how many users you need to test, what constitutes a problem, and which method to use. A usability problem should have a name, a description, and often a severity rating. Problem severities can be based on

Read More
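The problem record described in the post above can be sketched as a minimal data structure. This is an illustrative sketch only; the field names and the 1–4 severity scale are assumptions, not taken from the original post:

```python
from dataclasses import dataclass


@dataclass
class UsabilityProblem:
    """A usability problem: a name, a description, and often a severity.

    The 1-4 severity scale here is hypothetical; teams use many scales.
    """
    name: str
    description: str
    severity: int = 1  # 1 = cosmetic ... 4 = blocks task completion


# Example problem list collected during a test (sample data).
problems = [
    UsabilityProblem(
        name="Hidden search box",
        description="Users scrolled past the search box without noticing it",
        severity=3,
    ),
    UsabilityProblem(
        name="Ambiguous button label",
        description="'Submit' vs 'Save' confused two of five participants",
    ),
]
```

Keeping severity as an explicit field makes it easy to sort or filter the problem list when deciding what to fix first.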

If you ask independent usability evaluators to run a usability test and report the problems found, you'll get largely different lists of problems. While there are many causes for the differences, one major reason is that evaluators disagree on what constitutes a problem. Usability is often at odds with security and business interests—what's best for the user may not be the best for the organization.

Read More

Ask your customers if they'd recommend your product using a scale from 0 to 10, where 10 means extremely likely. Now find the average of the responses. Is an average score of, say, 7.212 good? It's hard to say. Interpreting averages from rating scales is notoriously difficult unless you have something to compare the average to. One of the reasons the Net Promoter Score is

Read More
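The averaging step described in the post above is simple to sketch. The responses below are made-up sample data, not figures from the original post:

```python
# Likelihood-to-recommend responses on a 0-10 scale (sample data).
responses = [9, 7, 10, 5, 8, 7, 6, 10, 7, 3]

# The raw average is easy to compute...
average = sum(responses) / len(responses)
print(round(average, 3))  # 7.2

# ...but hard to interpret on its own, which is the post's point:
# without a benchmark or comparison, 7.2 is neither good nor bad.
```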

Computers are supposed to make life easier. There are many reasons why users are forced to take extra steps, remember things or be inconvenienced just to accomplish tasks. Not all of them are good reasons. I've listed 14 of the more frequent/painful burdens I experience in the hope we can shift more of the burden from the human back to the computer. Asking for your

Read More

If a user can't find the information, does it exist? The inability of users to find products, services, and information is one of the biggest problems and opportunities for website designers. Knowing users' goals and what top tasks they attempt on your website is an essential first step in any (re)design. Testing and improving these task experiences is the next step. On most websites a

Read More

It's time for that major redesign. A long list of bugs, feature requests, and usability problems has accumulated, and it's time to fix that website, intranet, or software application. Where do you start? Do you look for a new technology, feature requests from the VP, or the oldest, most neglected problems in the bug-tracking database? All of these will play a role in the redesign, but you should

Read More

How many people will respond to your survey? It would be nice if you knew ahead of time. Here's a simple technique I use to get an idea about the total number of responses I can expect from a survey invite. Perform a Soft-Launch (aka Pre-Test): It's always a good idea to pre-test your survey on actual recipients to work out the kinks in your

Read More
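The soft-launch estimate described in the post above amounts to simple arithmetic: measure a response rate on a small pilot, then scale it to the full invite list. All the numbers here are illustrative assumptions, not figures from the original post:

```python
# Soft-launch (pre-test): send to a small slice of the list first.
pilot_invites = 100
pilot_responses = 22
response_rate = pilot_responses / pilot_invites  # 0.22

# Project that rate onto the full invitation list.
full_invites = 2000
expected_responses = response_rate * full_invites
print(int(expected_responses))  # 440
```

The pilot also surfaces broken links and confusing questions before the full send, which is the main reason to pre-test in the first place.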