Picking the Right Dependent Variables for UX Research

Jeff Sauro, PhD

What gets measured gets managed.

It’s more than a truism for business executives. It’s also essential for the user experience professional.

In business, and in UX research in particular, you don’t want to focus on a wrong or flawed measure. It can lead to bad decisions and misaligned effort.

In an earlier article, I discussed the differences between the most common variables in UX research: dependent vs. independent, latent vs. observed, and extraneous variables.

One of the first things to do when conducting UX research is to identify the dependent variable that best fits your needs. In this article, I’ll provide some guidance for selecting the right dependent variable based on the hundreds of projects we’ve conducted at MeasuringU.

The dependent (or outcome) variable is what we hope changes when we change something, such as fixing a design element in an interface, reordering the features on a product page, or changing a company process or policy.

While the term “dependent variable” may be unfamiliar to some, especially those without a research background, dependent variables are more easily recognized as all those metrics organizations love to track. The metrics include both broad business metrics (often called KPIs) and more specific product- and task-level metrics. All levels include a mix of attitudes (beliefs, feelings, and intentions) and behaviors.

 

Business-Level Metrics

  • Revenue
  • Number of New Customers
  • Calls to Support
  • Call Support Time
  • Customer Referrals
  • Brand Attitude
  • Attrition Rate
  • Likelihood to Repurchase
  • Likelihood to Recommend

I cover business metrics more extensively in Customer Analytics For Dummies.

 

Product-Level Metrics

  • Number of Active Users
  • Time Spent Using the Software
  • Likelihood to Use
  • Attitude toward Ease (SUS)
  • Trust (SUPR-Q)
  • Attitude toward Usefulness (TAM/UMUX-Lite)

 

Task-Level Metrics

  • Completion Rate
  • Task Time
  • Errors
  • Perceived Ease (SEQ)
  • Number of Problems (and Severity)

I cover both product- and task-level metrics more extensively in Benchmarking the User Experience.

All these metrics make it difficult to know where to start. Picking the right dependent variable involves a number of considerations and shares a lot in common with assessing the quality of a measure.

Here are nine considerations you can take, roughly in prioritized order, to help pick the right dependent variables for your UX research.

  1. Address the Research Goals

Your dependent variable should first and foremost address your research goal and hypotheses.

Using customer satisfaction to see whether a button change improved signups isn’t aligned with your research question and won’t effectively answer it. A better measure would be the percent of users who signed up, or at least the percent who clicked the button, since the change.
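To make that concrete, here’s a minimal sketch (in Python, with hypothetical counts) of comparing the signup percentage before and after the change:

# A hedged sketch: comparing signup rate before vs. after a button change.
# The counts below are hypothetical; swap in your own analytics data.
from statsmodels.stats.proportion import proportions_ztest

signups = [45, 68]     # users who signed up (before, after)
visitors = [500, 500]  # users who saw the page (before, after)

rate_before, rate_after = (s / n for s, n in zip(signups, visitors))
print(f"Signup rate before: {rate_before:.1%}, after: {rate_after:.1%}")

# Two-proportion z-test on the dependent variable (signup rate)
stat, p_value = proportions_ztest(count=signups, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.3f}")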

If you want to know whether a new product design meets users’ needs, asking about usability (the SUS) is important, but it doesn’t address usefulness or utility. Instead, the UMUX-Lite or items from the TAM would be better questionnaires (as would other methods).
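For readers less familiar with these questionnaires, here’s a minimal sketch of how a standard SUS score is computed from its ten 5-point items (the responses below are made up):

# A minimal sketch of standard SUS scoring (ten 5-point items, 1-5).
# The responses below are hypothetical.
def sus_score(responses):
    """Return a 0-100 SUS score from ten item responses (1-5)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0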

  2. Align with the Business/Organization

You can have a dependent variable that perfectly addresses your research needs but fails to align with what your organization cares about. If your organization is in a mature market and customer retention is more important than new customer acquisition, then renewal rates, attrition rates, and likelihood to repurchase are probably more important than likelihood to recommend or referral rates. Effective UX research works backward from organizational needs, and those needs will often dictate many of your dependent variables. Addressing research goals and aligning with business needs (and existing metrics) are the first two essential steps in picking your metrics.

  3. Have a High-Quality Measure

You don’t want a measure that addresses the research questions and business needs but is of poor quality. There are ways to assess the quality of the measure. Primarily, you should be concerned about:

  • Its validity: You want measures that measure what they intend to measure. Validity is a complex topic that can be assessed in many different ways. If possible, start with a previously validated measure such as the UMUX-Lite (usefulness and ease), SUPR-Q (website UX quality including trust, usability, appearance, and loyalty), SEQ (task ease), completion rate (task effectiveness), task time (efficiency), or Net Promoter Score (intent to recommend). Others have gone through the process of establishing validity (some more than others) so you don’t have to.
  • Its sensitivity: You don’t want a blunt measure. Rating scales that have too few points may be too coarse to capture changes in attitudes that are linked to behavior (e.g., 3-point scales are usually insufficient). Even valid and reliable measures, such as completion rates, may be insensitive to problems users encounter while working with an interface. The number of problems (and severity), task time, or perceived ease and efficiency may be more sensitive to changes in an independent variable.
  • Its reliability: You want data that can be collected consistently. If nothing changes about an experience, you should expect to see similar values. Reliability can be measured by comparing data collected at different times (called test-retest reliability); for questionnaire data with more than one item, you can use internal consistency reliability. Reliability isn’t all or nothing. A measure isn’t 100% reliable or 0% reliable; most fall somewhere between perfect and poor. It’s OK to be critical when examining reliability, but use data, not Twitter rants, and be realistic about what’s reliable (a minimal computation sketch follows this list).
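As an illustration of the reliability checks mentioned in the last bullet, here’s a minimal sketch (with made-up data) of a test-retest correlation and Cronbach’s alpha for internal consistency:

# A hedged sketch of two reliability checks; all data below are made up.
import numpy as np

# Test-retest: the same measure collected from the same people at two times.
time1 = np.array([68, 72, 80, 55, 90, 75])
time2 = np.array([70, 69, 82, 58, 88, 77])
test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest correlation: {test_retest_r:.2f}")

# Internal consistency (Cronbach's alpha) for a multi-item questionnaire:
# rows = respondents, columns = items.
items = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")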
  4. Be Meaningful to Your Customer

If the purpose of a business is to create and retain a customer, then you want dependent variables that are ultimately meaningful to customers.

You can actually have a metric that perfectly aligns with an organization, addresses a research question, is psychometrically reliable and valid, but fails to address what’s meaningful to your customer.

A notorious example is airlines measuring on-time departures as a way of improving performance for customers. But a plane that pulls away from the jetway on time, then sits on the tarmac and arrives an hour late, still counts as an on-time departure. A more meaningful metric is on-time arrival (despite the challenges of weather). Of course, that too can be “handled” by simply padding the scheduled arrival time.

A more familiar issue closer to UX research is a better experience when calling customer support. Businesses want happy customers and hope good customer support increases satisfaction, retention, and referrals.

Customers, of course, want their problems solved, and solved quickly. Reducing time on customer support calls may be intended as a measure of call efficiency, but it backfires if it leads to calls being answered quickly but ineffectively. Call resolution is more meaningful than call time (although the latter is likely a good secondary measure when calls are resolved effectively).

  5. Be Meaningful to Your Manager

You want a measure that’s meaningful to your customer, but you also want your manager (and executives) to understand what your dependent variables mean. Consequently, a widely used and well-understood measure may be better (it will get more attention and action) than a less familiar, more complex measure, even if the latter is “better” in some way. Often you can simplify your score when you present it by using percentiles, colors, and adjectives based on historical data. Oh, and don’t just make up your thresholds for what defines “good” and “bad”; they should be based on reference data. One of the best ways to interpret a measure is to compare it to published or historical data, which is another reason to work with standardized measures.
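For example, here’s a minimal sketch of placing a new score at a percentile, assuming you have historical or published benchmark scores to compare against (the numbers below are hypothetical):

# A minimal sketch: interpreting a new score against reference data.
# The benchmark scores below are hypothetical historical SUS scores.
from scipy.stats import percentileofscore

benchmark_scores = [52, 58, 61, 64, 68, 70, 72, 74, 78, 81, 85, 90]
new_score = 74

pct = percentileofscore(benchmark_scores, new_score)
grade = "good" if pct >= 50 else "below average"  # thresholds come from reference data
print(f"Score {new_score} is at the {pct:.0f}th percentile ({grade}).")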

For example, if your organization uses the NPS but you heard on Twitter that a better measure might be an 11-item questionnaire from Gallup, there should be solid evidence that the measure really is better to overcome its unfamiliarity. A good way to lose a seat at the table is to ignore the measures a business uses (oh, and to believe everything you read on Twitter); your reward could be ignored research. A better strategy may be to use multiple measures (which we cover next).

  6. Consider Multiple Measures

In most UX research projects we conduct with clients, one metric rarely addresses all the research and business needs. We’ll often have dependent variables addressing different levels. This will include business metrics such as customer satisfaction or NPS, then product-level SUS or SUPR-Q, and task-level completion rates. At the task level, we almost always include at least two dependent variables (usually completion rate and perceived ease) because it’s hard to know ahead of time which one will be more sensitive to design changes. And pragmatically speaking, if you feel strongly that you have a “better” measure than one your organization is using, one of the best ways of gaining adoption is to include both measures to allow others to equate the two (and determine how one may or may not be better).

  7. Have More Dependent Variables

Dependent variables can themselves be composed of other dependent variables. For example, while companies that provide dating apps and websites ultimately care about revenue and profits, these business-level dependent variables themselves depend on customer retention and referrals. Customer retention and referrals in turn depend on current customers trusting the quality of the people who post on these platforms. Therefore, measuring (and improving) current customer trust will likely improve retention and referrals (see also the HEART framework for integrating multiple measures), and ultimately sales and profits. Finding ways to link lower-level product variables to higher-level business variables (especially revenue) can be challenging, but we’ve found it well worth the effort to align research to business goals.
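As a rough sketch of what that linking can look like (with entirely hypothetical data), you might correlate a product-level trust score with a business-level retention outcome across accounts:

# A hedged sketch: linking a product-level metric (trust) to a business metric
# (12-month retention) across accounts; all data below are hypothetical.
import numpy as np

trust_scores = np.array([3.2, 3.8, 4.1, 4.5, 2.9, 4.8, 3.5])  # mean trust rating per account
retained = np.array([0, 1, 1, 1, 0, 1, 1])                     # renewed after 12 months?

# Point-biserial correlation (Pearson r with a binary outcome)
r = np.corrcoef(trust_scores, retained)[0, 1]
print(f"Correlation between trust and retention: {r:.2f}")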

  8. Be Practical

You want a measure that addresses your research and the business’ needs, is of high quality, and is meaningful. But you also want a measure that’s not a burden. A measure that is too burdensome to collect is a measure that doesn’t get collected. Measures that are efficient to collect and analyze and that are “good” enough are often better than measures that require special equipment, need a lot of time to administer, or are difficult to analyze.

For example, the lostness metric requires watching videos of participants (often repeatedly) and logging paths and screens. It takes time. We’ve found that the simpler measure of perceived ease captures most of the same information.
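For reference, the lostness calculation itself (Smith’s formula) is simple once the paths are logged; it’s the logging that takes the time. A sketch with hypothetical path data:

# A hedged sketch of the lostness metric (Smith, 1996) from a logged page path.
# The path and optimal page count below are hypothetical.
from math import sqrt

def lostness(path, optimal_pages):
    """path: ordered list of pages visited; optimal_pages: minimum pages needed (R)."""
    S = len(path)        # total pages visited, including revisits
    N = len(set(path))   # distinct pages visited
    R = optimal_pages
    return sqrt((N / S - 1) ** 2 + (R / N - 1) ** 2)

path = ["home", "search", "results", "home", "search", "results", "product"]
print(f"Lostness: {lostness(path, optimal_pages=3):.2f}")  # 0 = perfectly direct path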

Eye-tracking where people look on a website or application generates very interesting heat maps and gaze-path videos, as well as several dependent variables such as time in an area of interest and number of fixations. But eye-tracking requires expensive equipment and often quite a bit of time to analyze. Often, self-reported data (did you notice this element?) or even easily collected click paths or mouse paths may be a sufficient dependent variable. That doesn’t mean eye-tracking doesn’t have its place; it just means be sure you really need the effort.

We’ve also seen this with logging errors in experiences. Defining an error and recording it is both time-consuming and subjective. Often, just noting UI issues captures the “what do we fix,” while time, perceived ease, and completion rates provide sufficient measures of the task experience.

  9. Don’t Obsess over Finding the Perfect Metric

You should spend time thinking about the right dependent measures. But don’t obsess. Is it mental effort, delight, love, loyalty, future intent, or affection? Which is better: error rate or completion rate? Should we throw out the NPS? Is lostness better than completion rate or task time?

There are a number of things to measure and just as many ways to measure them. But don’t get metric mania by looking for the perfect metric for your project. You’ll want to home in on the right construct, but many measures correlate because they tap into similar things.

This holds true at the task level as well. The common metrics of completion rates, time, and errors correlate and serve as approximations of task usability. It’s usually best to use multiple metrics and then average them in some reasonable way. This is what the Single Usability Metric (SUM) does. And, of course, don’t get too hung up on the “right” way to aggregate metrics!
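As a rough illustration of the idea (this is a simplified stand-in, not the published SUM procedure), the sketch below puts hypothetical task metrics on a common 0-1 scale and averages them:

# A simplified, hedged illustration of averaging task metrics into one score.
# This shows the idea behind SUM, not the published SUM specification; data are hypothetical.
import numpy as np

completion_rate = 0.78   # proportion of users completing the task
mean_ease = 5.6          # mean SEQ rating on a 7-point scale
mean_time = 95.0         # mean task time in seconds
target_time = 60.0       # a specification/target time for this task

# Put each metric on a 0-1 "goodness" scale.
ease_scaled = (mean_ease - 1) / 6                 # 7-point scale -> 0..1
time_scaled = min(target_time / mean_time, 1.0)   # faster than target caps at 1

single_score = np.mean([completion_rate, ease_scaled, time_scaled])
print(f"Combined task score: {single_score:.2f}")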

 

Summary and Takeaway

It takes years of practice to perfect selecting the best dependent variables for UX research. But by starting with variables that address research questions, align with business needs, are psychometrically valid and reliable, and are meaningful to the customer and the business, you’re taking an essential first step toward measuring and managing the user experience.

 
