32 Ways to Measure the Customer Experience

Jeff Sauro, PhD

There’s a lot to measure in the customer experience. 

There are also many ways to collect those measurements.

While the “right” methods and metrics you select depend on the industry and study goals, this list covers most aspects of the online and offline customer experience.

It includes a cross-section of the four types of analytics data to collect, with an emphasis on gathering customer attitudes via surveys.

Attitudes & Affect

Customer Satisfaction: Survey your customers at key touchpoints using a simple Likert scale. Ask about overall customer satisfaction (usually with the brand) and lower-level satisfaction (usually specific to the touchpoint), such as the purchase or service experience. Include key attributes; for example, quality, speed, cost, and functionality.

Brand attitude: Measure affinity, association, and recall in a branding survey.

Loyalty: Use a repurchase matrix to measure the likelihood to repurchase and Net Promoter Score for likelihood to recommend.
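
As a quick illustration, here’s a minimal sketch of the standard Net Promoter Score calculation from the 0–10 likelihood-to-recommend question (the responses below are hypothetical):

```python
# Minimal sketch: compute NPS from 0-10 likelihood-to-recommend responses.
# Promoters answer 9-10, detractors answer 0-6; responses are hypothetical.
responses = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10]

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)

nps = 100 * (promoters - detractors) / len(responses)
print(f"NPS: {nps:.0f}")  # percent promoters minus percent detractors
```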

Brand lift: Measure attitudes before and after participants are exposed to a stimulus.

Customer Attributes

Customer lifetime value: Not all customers are created equal (in terms of profitability at least!). Measure the revenue, frequency, and duration of purchases by customer and subtract the acquisition and maintenance cost by customer.
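
A minimal sketch of that arithmetic, using hypothetical numbers for a single customer:

```python
# Lifetime value = revenue over the relationship minus acquisition and maintenance costs.
# All figures below are hypothetical.
avg_order_value = 80.0          # revenue per purchase
purchases_per_year = 4          # purchase frequency
years_retained = 3              # duration of the relationship
acquisition_cost = 120.0        # one-time cost to acquire the customer
annual_maintenance_cost = 30.0  # cost to serve and retain per year

clv = (avg_order_value * purchases_per_year * years_retained
       - acquisition_cost
       - annual_maintenance_cost * years_retained)
print(f"Customer lifetime value: ${clv:,.2f}")  # $750.00 for these numbers
```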

Who your customers are: Conduct a True Intent or Voice of the Customer (VoC) study by recruiting directly from your website or emailing current customers, and use a segmentation analysis.

Customer expectations: Ask about expectations qualitatively in a usability study or quantitatively in a survey. Consider having one independent group rate expectations and another group rate the experience. Customers want to be consistent and will be affected by the memory of their expectation ratings; with two independent groups, you know you’re getting more accurate results.

The things customers do the most: Run a top tasks analysis by having a qualified sample pick their top five features of a website or application. It works well and is easy to conduct.

What delights customers: Consider the Kano Method by asking customers how they’d feel if a feature were included and how they’d feel if it weren’t.
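
A simplified sketch of how those paired answers might be classified (the full Kano evaluation table distinguishes more categories, such as Reverse and Questionable; the answer coding here is an assumption for illustration):

```python
# Simplified Kano classification from a functional/dysfunctional answer pair.
# "functional" = how they'd feel if the feature were included;
# "dysfunctional" = how they'd feel if it weren't.
def classify(functional: str, dysfunctional: str) -> str:
    if functional == "like" and dysfunctional == "dislike":
        return "Performance"   # more is better, less is worse
    if functional == "like":
        return "Attractive"    # delights when present, tolerated when absent
    if dysfunctional == "dislike":
        return "Must-be"       # expected; its absence dissatisfies
    return "Indifferent"       # customers don't care either way

print(classify("like", "neutral"))     # Attractive
print(classify("neutral", "dislike"))  # Must-be
```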

Product & Service Features

What features are most important: Conduct a key-drivers analysis after surveying customers on key features and even emotional aspects of a product or experience. You’ll need one outcome variable to optimize (usually loyalty or customer satisfaction).
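
One common way to run a key-drivers analysis is multiple regression of the outcome on standardized feature ratings; here’s a minimal sketch with hypothetical column names and data file:

```python
# Key-drivers sketch: regress an outcome (e.g., likelihood to recommend) on
# standardized feature ratings and compare the coefficients. The file name
# and columns below are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")
drivers = ["ease_of_use", "reliability", "price_satisfaction", "support_quality"]

X = (df[drivers] - df[drivers].mean()) / df[drivers].std()  # standardize drivers
X = sm.add_constant(X)
y = df["likelihood_to_recommend"]

model = sm.OLS(y, X).fit()
print(model.params.drop("const").sort_values(ascending=False))  # larger = stronger driver
```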

Value of a feature: Conduct a conjoint analysis to see the value of each feature and the ideal combination of features.
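
For a flavor of how part-worth utilities fall out of a conjoint study, here’s a simplified ratings-based sketch that regresses profile ratings on dummy-coded attribute levels (attributes, levels, and ratings are hypothetical; choice-based conjoint uses a logit model instead):

```python
# Simplified ratings-based conjoint: estimate part-worth utilities with OLS
# on dummy-coded attribute levels. All data below are hypothetical.
import pandas as pd
import statsmodels.api as sm

profiles = pd.DataFrame({
    "brand":  ["A", "A", "B", "B", "A", "B"],
    "price":  ["$10", "$20", "$10", "$20", "$20", "$10"],
    "rating": [8, 5, 7, 3, 6, 9],   # respondent's rating of each profile
})

X = pd.get_dummies(profiles[["brand", "price"]], drop_first=True, dtype=float)
X = sm.add_constant(X)
y = profiles["rating"]

part_worths = sm.OLS(y, X).fit().params
print(part_worths)  # each coefficient = utility of a level relative to the baseline
```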

What price to charge: Use a choice-based conjoint analysis that allows you to understand the tradeoff between price and features.

Response time: Call hold time, website loading times, and delivery times are a few factors that play a role in satisfaction and loyalty. The best thing to do is automate the data collection or systematically sample transactions. It’s often the expectation of how long something will take that matters.

Technology acceptance and usefulness: Use the 20-item Technology Acceptance Model (TAM) questionnaire to see whether customers or users find an application’s features and experience both useful and usable.

Design Elements

Where website visitors click first: Conduct a first click test or run a tree test. If customers’ first click on a website is the right one, they are around nine times more likely to find the right information!

If your users notice design elements: Use an eye-tracking study to know where participants’ eyes go, observe behavior to see if people react appropriately, and ask if they noticed elements.

Comprehension: Use a mix of recall and recognition questions after having participants view images or videos or read copy.

Measuring recall: Ask a sample of participants to list features, brands, companies, names, or whatever you want them to recall using an open text box in a survey. Recall suggests stronger memory than recognition.

Measuring recognition: For brands or products, list a set and have customers pick what they recognize (include distractors). Recognition suggests less salience than recall.

Icons: Maybe you know what the tiny image means, but do your customers? Don’t guess; test the icons with qualified participants, both in context and out of context.

What terms to use: What would you call this? Ask your participants. Use an open card sort, or just open text fields in a survey.

Experience & Usability

Navigation and findability: Use a tree test to measure findability and an open card sort to assess categorization and labels.

Ease of use: Conduct a usability test. Having just five users attempt to complete tasks on apps, websites, or products will reveal most of the obvious issues. Rinse and repeat.

Efficiency: Measure the time users need to complete tasks in a usability study. You can also use Keystroke Level Modeling (KLM) to estimate skilled error-free task times using screenshots.
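
A minimal KLM sketch sums standard operator times to estimate a skilled, error-free task time; the operator values are the commonly cited KLM estimates, and the task sequence below is hypothetical:

```python
# KLM sketch: estimate a skilled, error-free task time by summing operator times.
OPERATORS = {
    "K": 0.2,   # keystroke or button press (skilled typist)
    "P": 1.1,   # point with a mouse to a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

# Hypothetical task: think, point to a field, click, think, type a 5-character
# code, point to Submit, click.
task = ["M", "P", "K", "M"] + ["K"] * 5 + ["P", "K"]

estimated_seconds = sum(OPERATORS[op] for op in task)
print(f"Estimated skilled task time: {estimated_seconds:.2f} s")
```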

Task difficulty: Ask how difficult customers find a task immediately after they attempt it using the Single Ease Question (SEQ).

Overall system ease: To assess the overall impression of a product’s usability, administer the System Usability Scale (SUS) in a survey or immediately after a usability test.
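
The standard SUS scoring is simple enough to sketch: ten items on a 1–5 scale, where odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. The respondent data below are hypothetical:

```python
# Standard SUS scoring for one respondent (ten items, 1-5 agreement scale).
def sus_score(responses):
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical respondent -> 85.0
```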

Website quality: Measure the perceived usability, loyalty, trust, and appearance using the SUPR-Q after a usability test or by surveying recent visitors to your website.

How your website compares to the competition: Conduct a competitive usability benchmark test and include additional questions about loyalty and brand attributes.

Effectiveness

Improvement in conversion rates: Conduct an A/B test to find the proportion converting for statistical significance.
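
One way to test whether two conversion rates differ is a two-proportion z-test; here’s a minimal sketch with hypothetical counts:

```python
# Two-proportion z-test on hypothetical A/B conversion counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [130, 170]   # variant A, variant B
visitors    = [2000, 2000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"A: {conversions[0]/visitors[0]:.1%}  "
      f"B: {conversions[1]/visitors[1]:.1%}  p = {p_value:.3f}")
```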

Finding and fixing problems: Use a Fishbone and Failure Modes Effects Analysis (FMEA) to get to the root cause of any problem and understand the impact.
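
To quantify impact, FMEA commonly ranks failure modes by a risk priority number (severity × occurrence × detection, each rated 1–10); the failure modes and ratings below are hypothetical:

```python
# Risk priority number (RPN) sketch for FMEA: severity x occurrence x detection.
failure_modes = [
    # (description, severity, occurrence, detection), each rated 1-10
    ("Checkout form rejects valid card", 9, 3, 4),
    ("Confirmation email delayed",       4, 6, 2),
    ("Search returns no results",        7, 5, 6),
]

for name, sev, occ, det in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"{name}: RPN = {sev * occ * det}")  # highest RPN = fix first
```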

Search effectiveness: Search engines are both the first and last resort for customers finding things on a website. Test the accuracy of the search results and the clarity of the Search Engine Results Page (SERP) using a targeted usability test.

Reliability of your methods: Use a combination of reliability metrics: inter-rater, test-retest, parallel forms, and internal consistency reliability to see how consistent your data is.
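
As one example of an internal-consistency metric, here’s a minimal sketch of Cronbach’s alpha, computed as k/(k − 1) × (1 − sum of item variances / variance of the total score); the respondent-by-item matrix below is hypothetical:

```python
# Cronbach's alpha from a respondents x items matrix of survey responses.
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```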

Not sure where to start? I explain the five essential steps for measuring the customer experience.
