Measuring the Customer Experience: Questions and Answers

Jeff Sauro, PhD

This week I was invited to share my thoughts on the challenges and insights involved in measuring the customer and user experience during a live Twitter chat hosted by the folks at Vivisimo.

We had 904 tweets, which generated 2,304,670 impressions and reached an audience of 311,413 followers.

Here’s a bit of what we discussed.

Isn’t revenue the ultimate metric to track?

The ultimate business metric is revenue, but it’s a lagging indicator. You can’t do anything about last quarter’s profits. You want to find some leading indicator of growth, which is why the Net Promoter Score is popular. For many companies, growth is driven by positive word of mouth, and NPS tracks that reasonably well.
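For reference, NPS is computed from the standard 0-10 likelihood-to-recommend question: the percentage of promoters (those answering 9-10) minus the percentage of detractors (0-6). A minimal Python sketch with made-up responses:

    def nps(ratings):
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(r >= 9 for r in ratings)
        detractors = sum(r <= 6 for r in ratings)
        return 100 * (promoters - detractors) / len(ratings)

    # Hypothetical responses to "How likely are you to recommend us?" (0-10)
    print(nps([10, 9, 9, 8, 7, 7, 6, 5, 10, 3]))  # -> 10.0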

For more established industries, repeat purchases from existing customers are better drivers of growth. In those cases, measuring the likelihood to voluntarily repurchase might be a better metric. Oh, and by the way, those stealthy auto-billing features often aren’t voluntary.

For example, in the Loyalty and Usability database I maintain for websites and software companies, we found that Walmart had the lowest percentage of users who were referred by friends (as many respondents said: “Why would I tell a friend about Walmart? Everyone knows Walmart.”).

Revenue can also be misleading if it comes at the cost of pissing off your customers.

  • Your mobile carrier sets minute plans where you pay for more than you need or pay the price when you don’t have enough.
  • The hotel charges you more for three days of slow internet access than you pay for a whole month at home.
  • And who doesn’t love playing the gas tank game with rental cars: forget paper or plastic; the choice is overpay now or overpay later, for gas you didn’t use or gas that’s 3x the pump price.

Bottom Line: Measure attitudes and experiences so you can tell what revenue ENRAGES and what revenue ENGAGES the customer.

What indicators tell you that you’re using the right customer experience measures?

  1. The customer “feels” it: If you improve a metric and the customer doesn’t notice, did you really improve the customer experience?
  2. It’s meaningful: Measure on-time arrivals, not on-time departures. Perhaps you don’t have control over the entire experience, but the user might not know that and blame you anyway. For example, if you run a website that hosts input from other contributors, you might get blamed for their poor quality. Properly tracking that as a source of detractors might generate new ideas about how you can improve the experience even when you don’t control all of it.
  3. It’s easily measurable: If it’s expensive or a pain to measure, then it probably won’t get measured or, worse, won’t get improved.


What are the biggest obstacles to company-wide customer experience measurement?

  1. Getting a measure everyone feels they can improve: Nothing quite like being held accountable for a corporate metric that isn’t part of your day job or, worse, one you have little control over.
  2. Obsessing over the “right” measure and the perfect time to measure: Do we survey after the purchase, during customer support calls, or through a 3rd party? Most customer satisfaction and loyalty measures are strongly correlated. Pick a few good candidates and work with them. I’ve seen double-digit differences in Net Promoter Scores for the same product, taken at the same time: some come from customer support surveys (typically lower) and others from post-purchase surveys (typically higher). All metrics are flawed; most are useful, especially when you compare the same ones over time.
  3. Obsessing over large sample sizes: Yes, it’s nice to have thousands of responses, but in many cases you come to the same conclusion with a fraction of the sample size and only slightly less precision. For a simple agree/disagree question you have a margin of error of about +/- 5% at a sample size of around 300. You need over 1,000 people to cut that margin of error in half (a quick worked example follows this list).
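That margin of error comes from the standard formula for a proportion: z * sqrt(p(1-p)/n), worst case at p = 0.5, with z = 1.96 for 95% confidence. A minimal Python sketch (the sample sizes below are just illustrative):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion (worst case at p = 0.5)."""
        return z * math.sqrt(p * (1 - p) / n)

    # Halving the margin of error requires roughly quadrupling the sample size.
    for n in (100, 300, 400, 1200, 1600):
        print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")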


What type of metrics should websites include in their customer experience reports?

For websites you need to measure these essential elements:

  • Usability: People come to a website to do things. If they can’t do them, they won’t purchase, return, or recommend.
  • Credibility: Do customers trust your brand and believe that you don’t have nefarious intentions for their personal data?
  • Loyalty: How likely are customers to recommend the website to friends, and how likely are they to revisit?
  • Appearance: Customers judge you based on how your website looks.

What are best practices for measuring customer experience quality?

  1. Use 3rd party data where possible to get a better pulse on what current and former customers really think—not just the really happy ones who respond to your survey.
  2. Distinguish chance fluctuations from real differences using confidence intervals and statistics (see the sketch after this list).
  3. Use multiple methods: A/B testing, site analytics and usability testing can triangulate on customer insights.
  4. Don’t obsess over large sample sizes. You can learn a lot from a few customers statistically. We found key insights from watching 11 users on the Autodesk website that we couldn’t see in analytics data from 11,000.
  5. Use a mix of qualitative and quantitative methods: The two aren’t mutually exclusive. You can always quantify the comments and qualitative insights you gather while understanding the “why” behind your detractors.
  6. Measure both immediate experiences and lasting experiences: immediate perceptions of the experience, while related to the lasting impression, are distinct.
  7. Simulate actual scenarios in usability tests to get at the drivers behind the metrics.
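For item 2 above, here’s a minimal sketch of separating chance fluctuation from a real difference, assuming a simple before/after comparison of two proportions (the counts are hypothetical):

    import math

    def two_prop_ci(x1, n1, x2, n2, z=1.96):
        """95% Wald confidence interval for the difference of two proportions."""
        p1, p2 = x1 / n1, x2 / n2
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        diff = p2 - p1
        return diff - z * se, diff + z * se

    # Hypothetical data: 120 of 300 satisfied before a redesign, 150 of 300 after.
    lo, hi = two_prop_ci(120, 300, 150, 300)
    print(f"difference: {lo:+.1%} to {hi:+.1%}")
    # An interval that excludes 0 suggests a real difference rather than chance.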


What would you take away to change how you measure CUX in your company?

  1. People will teach to the test. If bonuses are based on scores, people will find creative ways to inflate scores or survey only happy customers. My rep at the car dealer told me to expect an email survey after my appointment and that the only “correct” answer was a 10—extremely satisfied. An in-house instructor at a Fortune 100 company recently told the audience, as they filled out the evaluations, that scores of 9-10 on the 0-10 scale are “promoters.”
  2. Don’t obsess over finding the “ultimate” bulletproof metric that’s always perfect: All measures are flawed; most are useful. You can get different data when you ask current customers, ask prior to a purchase, ask during a customer support call, or ask while someone is using the product. It’s all about making comparisons using the same method over time. The hardest part is actually doing something about the problems you find.
  3. Identify the key drivers of loyalty, usability, and repurchasing. While there are often 20-30 variables that impact attitudes and actions, there are usually 3 to 5 that account for the majority of the variation. Find those vital few variables using multiple regression analysis, a topic for another blog, though a minimal sketch of the idea follows below.
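As a rough illustration of finding the vital few drivers, here’s a minimal sketch using ordinary least squares on made-up survey data (all names and values here are hypothetical):

    import numpy as np

    # Hypothetical survey data: 200 respondents rate 5 candidate drivers
    # (e.g., ease of use, appearance, trust); y is likelihood to recommend.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = 2.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)

    # Ordinary least squares: prepend an intercept column, solve for coefficients.
    A = np.column_stack([np.ones(len(X)), X])
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

    # The drivers with large coefficients are the "vital few."
    for i, b in enumerate(coefs[1:], start=1):
        print(f"driver {i}: {b:+.2f}")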

