A bad experience will impact how likely users are to recommend a website or product to a friend.
Fixing those bad experiences is critical to increasing positive word of mouth.
Unfortunately, there are usually too many things to fix and just as many opinions on what should be fixed. Development teams need to prioritize.
An obvious way to prioritize is to fix the things that will have the biggest impact on revenue. One reason A/B tests on websites are so effective and popular is that you can see directly how a single interface change increases or decreases sales through conversions.
It’s not always possible to A/B test elements of the user experience, especially in software applications, and it’s not always easy to tie interface changes to revenue. The Net Promoter Score is intended as a proxy for revenue and future growth, and it is easier to associate with user-experience metrics. Through those metrics, you can indirectly tie changes made in early development research to later measurements of customers’ likelihood to recommend.
Here is an approach I’ve used with clients, including Autodesk, to help associate interface-level user-experience changes to the Net Promoter Scores collected and reported on corporate dashboards.
- Obtain a baseline set of Net Promoter Scores: If you aren’t doing so already, survey your customers to get a current baseline of how likely they are to recommend the product to a friend. Ask the 11-point likelihood-to-recommend (LTR) question about both the brand and the product. If you have a lot of functionality, you can even extend the LTR question down to feature and functional areas. Include an open-ended question asking what’s driving users to give the rating they gave. Ideally, run these surveys monthly or quarterly, or set up some other systematic way to collect them. You can email customers, use a pop-up on the website, or use a third party to obtain a perspective. Ideally, use all three approaches, as each provides a different lens on the experience.
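Computing the score itself is mechanical: respondents who rate 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python, with hypothetical ratings:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters rate 9-10, detractors 0-6 (7-8 are passives);
    NPS = % promoters - % detractors, ranging from -100 to +100.
    """
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical sample: 5 promoters, 3 passives, 2 detractors
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # → 30.0
```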
- Ask a set of standardized usability questions: The SUPR-Q is a questionnaire with 12 additional items for users to rate, four of which provide a standardized measure of usability. The other items provide measures of trust, credibility, and appearance. Trust often plays a more central role than usability in driving negative word of mouth about your product or website. For software, you can use the System Usability Scale (SUS), although we’ve found that asking a single question about overall ease of use often suffices.
- Ask a handful of key questions about features and functions: Many products and websites have a vast number of features and functions. While you can’t expect to obtain detailed metrics for every one of them, you should be able to collect data at a level that lets you narrow your focus. For example, are poor-quality reviews, advertisements, checkout forms, or shipping costs driving detractors? Having respondents rate each of these aspects provides the input for the next critical step of narrowing your focus.
- Use a Key Driver Analysis to identify what’s driving NPS: With the NPS scores and the items about usability, trust, and specific feature areas, you likely have the pool of candidates driving word of mouth. To determine which have the biggest impact, you can use a multivariate technique called multiple regression analysis, also called Key Driver Analysis. It determines statistically which aspects have the biggest impact on NPS and allows you to prioritize. The graph below shows the output of a key driver analysis: impressions of usability have the biggest impact on users’ likelihood to recommend this software product.
Figure 1: Output of a Key Driver Analysis from a web-based software application. The vertical axis (y-axis) shows how much each item contributes to users’ likelihood to recommend. Usability was measured using the four items from the SUPR-Q and is the biggest driver of LTR: more than five times as important as Feature A.
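The regression behind a key driver analysis can be sketched in plain Python: z-score the predictor items and the LTR outcome so the fitted weights are directly comparable, then solve the normal equations. The item names below are illustrative, and a production analysis would use a statistics package rather than this hand-rolled solver:

```python
from statistics import mean, stdev

def solve(a, b):
    """Solve the linear system a @ x = b by Gaussian elimination
    with partial pivoting (a is a list of rows)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def key_drivers(items, ltr):
    """Standardized regression weights of each survey item on LTR.

    items: dict mapping item name -> list of ratings (one per respondent)
    ltr: list of likelihood-to-recommend ratings, same order
    Larger weights indicate stronger drivers of LTR.
    """
    names = list(items)
    def z(xs):  # z-score so weights are comparable across items
        mu, sd = mean(xs), stdev(xs)
        return [(x - mu) / sd for x in xs]
    X = [z(items[name]) for name in names]
    y = z(ltr)
    # Ordinary least squares via the normal equations: (X'X) beta = X'y
    xtx = [[sum(a * b for a, b in zip(X[i], X[j])) for j in range(len(names))]
           for i in range(len(names))]
    xty = [sum(a * b for a, b in zip(X[i], y)) for i in range(len(names))]
    return dict(zip(names, solve(xtx, xty)))
```

Calling `key_drivers({"usability": [...], "trust": [...], "feature_a": [...]}, ltr_ratings)` returns a weight per item; the tallest bars in a chart like Figure 1 correspond to the largest weights.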
- Analyze Verbatims: Sort the open-ended comments obtained from the survey into meaningful groupings to get clear examples of where experiences are falling short. We’ll often find a handful of no-brainer fixes from quickly reading through comments from participants. Don’t let these go to waste.
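A first pass at sorting verbatims can be automated with simple keyword buckets before a closer manual read. The themes, keywords, and comments below are purely illustrative:

```python
from collections import Counter

# Illustrative keyword buckets; in practice, build these from a first
# read-through of the actual comments.
THEMES = {
    "checkout": ["checkout", "payment", "card"],
    "shipping": ["shipping", "delivery"],
    "usability": ["confusing", "hard to", "couldn't find"],
}

def tally_themes(comments):
    """Count how many open-ended comments mention each theme."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

comments = [
    "The checkout form kept rejecting my card.",
    "Shipping costs appeared too late in the process.",
    "Navigation was confusing and I couldn't find the return policy.",
]
print(tally_themes(comments))
```

This only ranks the obvious themes; the no-brainer fixes still come from reading the comments themselves.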
- Design a usability test to uncover interface-level changes: Now that you have some idea of the features or areas of the website that need improvement, conduct a baseline usability test on the identified areas. If the checkout form is driving users not to recommend or return to a website, use the testing to understand which elements of the form are problematic. Are there too many fields, confusing error messages, too many steps, or unanticipated shipping charges? Along with the problems users encounter, collect baseline metrics on efficiency (time to complete), effectiveness (whether they completed), and satisfaction (questionnaires). Ask the LTR question along with the same items, such as the SUPR-Q, used in the baseline survey. This lets you see how well scores from simulated studies match your higher-level baseline data.
- Make design changes: Based on the findings of the usability tests, make changes to the interface to improve the experience. It’s better to do this iteratively than to put all your testing and design effort into a one-shot, fix-and-test approach.
- Conduct a follow-up usability study: Using the same set of tasks and metrics, look for evidence that the usability problems identified have been corrected and the metrics have improved. You can do this even with small sample sizes (10 or so users), but you will be limited to detecting only large differences (for example, 20-30 percentage-point differences in completion rates).
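One way to check whether a completion-rate change at these sample sizes is beyond chance is the N-1 two-proportion test, which is better suited to small samples than the standard z-test. A sketch with hypothetical counts (9 of 10 users completing after the redesign versus 5 of 10 before):

```python
import math

def nminus1_two_prop_test(x1, n1, x2, n2):
    """N-1 two-proportion test for small-sample comparisons.

    x1/n1 and x2/n2 are successes/trials (e.g., task completions).
    Returns (z, two_sided_p).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    n = n1 + n2
    z = (p1 - p2) / se * math.sqrt((n - 1) / n)  # the N-1 adjustment
    # Two-sided p-value from the normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 9/10 completions after the redesign vs. 5/10 before
z, p = nminus1_two_prop_test(9, 10, 5, 10)
print(round(z, 2), round(p, 3))  # z ≈ 1.9, p ≈ .057: suggestive, not conclusive
```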
- Conduct the follow-up survey: Using the same system and set of questions you used to obtain the baseline NPS data, collect more data from users who have had time to experience the new interface. Compare the old scores to the new scores statistically to see whether the improvements are beyond what you’d expect from chance variation. The Net Promoter Score is based on two paired proportions and has margins of error that tend to be about twice as wide as those of simple proportions. Don’t make the mistake of basing decisions on random variation. You may need sample sizes in the high hundreds to low thousands to determine whether 5-10 point shifts in NPS are statistically significant.
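The wide margin of error follows from NPS being a difference of two proportions from the same sample: scoring each response +1 (promoter), 0 (passive), or -1 (detractor), the variance of a single response is (p + d) - (p - d)^2, where p and d are the promoter and detractor proportions. A sketch with hypothetical counts shows why even a 7-point shift at n = 500 per survey can be chance variation:

```python
import math

def nps_se(promoters, detractors, n):
    """Standard error of an NPS (on the -1..+1 scale), treating each
    response as scored +1 (promoter), 0 (passive), or -1 (detractor)."""
    p, d = promoters / n, detractors / n
    variance = (p + d) - (p - d) ** 2
    return math.sqrt(variance / n)

def nps_diff_z(prom1, det1, n1, prom2, det2, n2):
    """z statistic for the difference between two independent NPS samples."""
    nps1 = (prom1 - det1) / n1
    nps2 = (prom2 - det2) / n2
    se = math.sqrt(nps_se(prom1, det1, n1) ** 2 + nps_se(prom2, det2, n2) ** 2)
    return (nps1 - nps2) / se

# Hypothetical: baseline 250 promoters, 150 detractors (NPS 20, n=500);
# follow-up 285 promoters, 150 detractors (NPS 27, n=500)
z = nps_diff_z(285, 150, 500, 250, 150, 500)
print(round(z, 2))  # → 1.25, below the 1.96 needed for significance at p < .05
```

So a 7-point improvement with 500 responses per survey is still within chance variation, which is why sample sizes often need to reach the high hundreds or thousands.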
Linking low-level interface changes to high-level customer attitudes isn’t an exact science. Many variables affect why users do and don’t recommend a website or product. What’s more, you can make massive improvements to the user interface and still not see them reflected in new Net Promoter Scores. This usually means other factors are having a much larger impact on recommendations: different tasks, other parts of the interface, different types of users, or things outside the control of most UX departments, like pricing, compatibility, or features.
The approach outlined above, however, does provide a framework for capturing the effects of improving the interface and associating those improvements to more macro attitudes reflected in the Net Promoter Score.