Website navigation is at the heart of good findability.
In both types of studies, we collect many metrics to help uncover problems with terms and taxonomy.
While the fundamental metric of findability is whether users find an item or not, often other metrics provide clues to problems in the hierarchy even if users manage to find an item.
Here are 10 metrics to collect and dig into when you’re looking to improve website navigation.
- Findability Rate/Completion Rate: Every test of navigation should include a simple binary metric (1 = found, 0 = not found) recording whether users found an item or piece of information. Convert these 1s and 0s into a findability/completion rate with confidence intervals. For example, if 20 out of 25 people find an item, the success rate is 80%, with a 90% confidence interval (adjusted-Wald) of roughly 64% to 90%.
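The interval in the example can be reproduced with the adjusted-Wald method (adding z²/2 to the successes and z² to the trials before applying the normal approximation), a common small-sample correction for completion rates. A minimal sketch using only the Python standard library:

```python
import math

def adjusted_wald_ci(successes, n, z=1.645):
    """Adjusted-Wald interval for a completion/findability rate.

    Adds z^2/2 to the successes and z^2 to the trials before applying
    the normal approximation; z = 1.645 gives a 90% interval.
    """
    p_adj = (successes + z**2 / 2) / (n + z**2)
    se = math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return p_adj - z * se, p_adj + z * se

low, high = adjusted_wald_ci(20, 25)  # 20 of 25 users found the item
print(f"{low:.0%} to {high:.0%}")     # roughly 64% to 90%
```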
- Time to Find: Like task time in a usability test, the longer it takes to find an item, the more evidence you have that there's a mismatch between where items live and where users look. How long it should take to find an item also depends a lot on the taxonomy and context (just like with a usability test). Across several tree-test studies, we see that average times to find items are in the 20-30 second range. Anything over one minute is usually too long when simply testing a navigation structure. Average times are longer on live websites, where page load time and browsing factors come into play, so again context matters.
- Variability in Finding Time: Once you collect the time, you can see how consistent the experience is by looking at the standard deviation. Standard deviations can be difficult to interpret by themselves. One technique is to divide the standard deviation by the mean time. This is called the coefficient of variation (CV). Smaller values indicate a less variable experience and larger CVs indicate a more variable one. Across many usability tests, we see an average CV of 44%, meaning the standard deviation is 44% of the mean time (e.g., an average time of 50 seconds and a standard deviation of 22 seconds). Anything over 100% is especially variable and worth investigating further (for example, a standard deviation of 110 seconds divided by a mean of 100 seconds is a CV of 110%).
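The CV calculation is a one-liner once you have the raw times. A minimal sketch, with hypothetical find times in seconds:

```python
import statistics

def coefficient_of_variation(times):
    # CV = sample standard deviation / mean; larger values mean
    # a more variable (less consistent) finding experience
    return statistics.stdev(times) / statistics.mean(times)

# Hypothetical find times (seconds) for one tree-test task
times = [32, 48, 55, 61, 44, 70, 40]
cv = coefficient_of_variation(times)
print(f"CV = {cv:.0%}")
```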
- Initial Click: Find out where users click first in both a tree test and a live test of the website. Some research suggests that getting that first click right is indicative of ultimate task success. The initial click reveals users' first instinct about which main category an item will be found under, and it can be especially helpful when findability rates are low.
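Tallying first clicks per top-level category is straightforward once you log them. A sketch with hypothetical first-click data:

```python
from collections import Counter

# Hypothetical first clicks from 8 participants on one task
first_clicks = ["Products", "Support", "Products", "Pricing",
                "Products", "Support", "Products", "Products"]

tally = Counter(first_clicks)
top_category, count = tally.most_common(1)[0]
print(f"Most common first click: {top_category} ({count}/{len(first_clicks)})")
```

Comparing the most common first click against where the item actually lives is a quick way to spot category mismatches when findability is low.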
- Success Path: While a bit more labor-intensive, you'll want to know not just whether users are finding items in a navigation structure but also how they are finding them (or failing to). Use this data to see whether users take the most efficient path or a secondary one for successful items, and to see where they go when they fail to locate an item. Tree-testing software from MeasuringU and OptimalWorkshop will allow you to examine user paths more closely.
- Confidence: After each task, we ask participants how confident they are on a 7-point scale. Confidence ratings by themselves are helpful, but they can be especially revealing when paired with findability rates. Look for users who were extremely confident they would find an item but actually didn't, and then find out why.
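Flagging "confident misses" is a simple filter over paired confidence and success data. A sketch with hypothetical records (the 6-or-higher threshold is an assumption, not a standard cutoff):

```python
# Hypothetical per-task records: (participant, confidence 1-7, found 0/1)
records = [
    ("p1", 7, 0),  # very confident but failed -- worth a follow-up
    ("p2", 6, 1),
    ("p3", 3, 0),
    ("p4", 6, 0),
]

# Flag participants who rated 6 or higher yet did not find the item
confident_misses = [p for p, conf, found in records if conf >= 6 and not found]
print(confident_misses)
```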
- Difficulty: After each task, we ask the Single Ease Question (SEQ) which is also a 7-point rating scale from Very Difficult to Very Easy. We report the raw mean along with a converted percentile score derived from the several hundred tasks we have in our database. A raw mean of 5 is around the 50th percentile.
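The raw-mean-to-percentile conversion can be sketched with a normal approximation. The benchmark parameters below are hypothetical (the real conversion draws on a proprietary database of several hundred task-level SEQ means); only the "raw mean of 5 is around the 50th percentile" anchor comes from the text:

```python
from statistics import NormalDist

# Hypothetical benchmark parameters; the assumed SD of 1.0 is illustrative
BENCHMARK_MEAN, BENCHMARK_SD = 5.0, 1.0

def seq_percentile(raw_mean):
    # Percentile of a task's raw SEQ mean against the benchmark,
    # assuming task means are roughly normally distributed
    return NormalDist(BENCHMARK_MEAN, BENCHMARK_SD).cdf(raw_mean)

print(f"{seq_percentile(5.0):.0%}")  # a raw mean of 5 lands near the 50th percentile
```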
- The Most Difficult Items to Locate: At the end of the study, ask participants to select which item(s) were the most difficult to locate.
- First Path Success vs. Second Path Success: If users go down one path and then back out (a.k.a. pogo-sticking), you have an indication that something might be wrong. This usually correlates with time on task (multiple attempts take longer), but it tells you more about what's taking longer.
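A crude way to detect back-outs in logged click paths is to count revisits to already-seen nodes. This is a hypothetical simplification (real tree-testing tools track attempts more precisely):

```python
def count_backouts(path):
    # A revisit to an already-seen node suggests the user backed out
    # of a branch (pogo-sticking) -- a deliberately simple heuristic
    seen, backouts = set(), 0
    for node in path:
        if node in seen:
            backouts += 1
        seen.add(node)
    return backouts

# Hypothetical click path: the user tries Products, backs out to Home,
# then succeeds under Support
path = ["Home", "Products", "Home", "Support", "Returns"]
print(count_backouts(path))
```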
- Reasons for Difficulty: When users indicate that an item was hard to find (from the SEQ) or list an item as one of the more difficult to find in a tree test, we want to know why. Sometimes the results aren't very helpful, with comments such as "I didn't know where to look," whereas other times you'll see comments like "I didn't even know what the item was I was looking for, so I wasn't sure where to look," or "I just wasn't sure if the item was in Home or Auto." These open-ended comments take time to sift through and interpret, but putting some metrics around the percentage of users who didn't know what the item was (perhaps a false positive) versus those who had trouble with a category adds another layer of interpretation.
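Once comments are manually coded, turning the codes into percentages is trivial. A sketch with hypothetical code labels (the category names are assumptions for illustration):

```python
from collections import Counter

# Hypothetical manual codes assigned to open-ended difficulty comments
codes = [
    "unfamiliar_item",      # "didn't know what the item was"
    "category_confusion",   # "wasn't sure if it was in Home or Auto"
    "category_confusion",
    "other",
]

shares = {code: n / len(codes) for code, n in Counter(codes).items()}
print(shares)
```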