Is the UX-Lite Predictive of Future Behavior?

Jim Lewis, PhD • Jeff Sauro, PhD

It’s hard to call a product or app successful if people don’t use it.

But how will you know if people will use a product and continue to use it?

There’s a strong need to understand technology adoption and usage. The first step in predicting and understanding why people do or don’t adopt tech is having a reliable and valid measure of tech adoption.

A solid candidate measure is the UX-Lite®.

The UX-Lite (Figure 1) is a two-item UX questionnaire that measures perceived ease of use (PEoU) and perceived usefulness (PU). It’s essentially a miniature version of a questionnaire developed in the 1990s aptly called the Technology Acceptance Model (TAM).

Figure 1: The UX-Lite (created with MUiQ®).

In a previous article, we presented research from retrospective UX surveys using structural equation modeling (SEM) to demonstrate the essential equivalence between the PEoU and PU components of the UX-Lite and the modified Technology Acceptance Model (mTAM) as drivers of experiential and intentional outcomes (overall experience, behavioral intention to use, likelihood to recommend).

We found that the SUS, UX-Lite, and mTAM had good to excellent reliability and concurrent validity. PU tended to be a stronger driver than PEoU. Overall, substituting the UX-Lite Ease and Usefulness items for the mTAM PEoU and PU components resulted in reasonably consistent models.

This research supports UX practitioners by (1) demonstrating the importance of work that improves perceptions of product ease and usefulness and (2) demonstrating that UX researchers and practitioners can use the two-item UX-Lite in their work to effectively and efficiently measure perceived ease and usefulness.

But what about the next step, from intention to use to estimates of actual use?

In this article, we describe a follow-up study, using participants from earlier studies, that estimates the connection of this set of ease and usefulness drivers with behavioral intentions and reported usage, and examines how the components of the UX-Lite (perceived ease and perceived usefulness) map to existing measures of technology adoption.

Follow-Up Study: Method

In our 2020 surveys of business and consumer software, we included the UX-Lite and also collected the SUS, mTAM, and a three-item behavioral intention (BI) measure, computed as the average of two items from TAM research (“Assuming I had access to [Product], I intend to use it” and “Given that I had access to [Product], I predict that I would use it”) and a similar third item that we routinely collect (“I plan to use [Product] in the next three months”). At the beginning of the survey, participants indicated which products they had used in the past year and were randomly assigned one of those to evaluate. We used the 2,412 responses to this initial survey for our preliminary SEMs.

To understand how well the UX-Lite could predict future behavior, about three months after the initial study, we reached out to the respondents to see who was interested in participating in a follow-up study. The follow-up study had 321 respondents who reported how often they had used their assigned product during the intervening three months on a frequency scale with six response options: Never, Once a month, Once a week, Several times a week, Daily, Multiple times a day.

Follow-Up Study: Structural Equation Models

Figure 2 shows three structural equation models created with AMOS. These models allow us to see which combination of measures best predicts reported usage. The three contenders were:

  • mTAM (Model A): Used the components of the mTAM as drivers of Overall Experience, LTR, and BI.
  • SUS (Model B): Replaced the mTAM PEoU with the ten-item SUS.
  • UX-Lite (Model C): Replaced the mTAM PEoU and PU components with the UX-Lite Ease and Usefulness components.

For this analysis, reported usage was measured using a frequency scale for the follow-up item (how often the product rated in Survey 1 had been used over the past three months), which had numeric values assigned to each response option (0: Never, 1: Once a month, 2: Once a week, 3: Several times a week, 4: Daily, 5: Multiple times a day).
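The numeric coding described above can be expressed as a simple lookup table. This is a minimal sketch of that coding scheme (the dictionary and function names are illustrative, not from the study):

```python
# Numeric coding of the follow-up usage frequency scale, as described in the text.
USAGE_CODES = {
    "Never": 0,
    "Once a month": 1,
    "Once a week": 2,
    "Several times a week": 3,
    "Daily": 4,
    "Multiple times a day": 5,
}

def code_usage(response: str) -> int:
    """Map a follow-up usage response option to its numeric value."""
    return USAGE_CODES[response]
```

Coding ordinal frequency options as successive integers like this treats adjacent response options as equally spaced, a common simplifying assumption when an ordinal item is used as a continuous outcome in SEM.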

While everyone loves a horse race, how do we determine the winner when there are no mint juleps and finish lines? We use fit statistics.

Figure 2: Structural equation models of perceived ease and perceived usefulness as drivers of overall experience, likelihood to recommend, behavioral intention to use, and measure of follow-up usage (with model fit metrics, n = 321).

Assessing Model Fit

For assessing the goodness of fit of these types of models, we followed the advice of Jackson et al. (2009), who recommended reporting fit statistics that have different measurement properties, such as the comparative fit index (CFI: a score of 0.90 or higher indicates good fit), the root-mean-square error of approximation (RMSEA: values less than 0.08 indicate acceptable fit), and the Bayesian information criterion (BIC: lower values are preferred).
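The decision rules above can be sketched as a small helper. This is an illustrative check, not code from the study; the function names are hypothetical, and the thresholds are the ones stated in the text (CFI ≥ 0.90, RMSEA < 0.08, lower BIC preferred):

```python
def acceptable_fit(cfi: float, rmsea: float) -> bool:
    """Apply the absolute-fit criteria described in the text:
    CFI of 0.90 or higher indicates good fit;
    RMSEA below 0.08 indicates acceptable fit."""
    return cfi >= 0.90 and rmsea < 0.08

def prefer_by_bic(bic_a: float, bic_b: float) -> str:
    """BIC has no absolute cutoff; it compares competing models
    of the same data, with lower values preferred."""
    return "A" if bic_a < bic_b else "B"
```

Reporting indices with different measurement properties (an incremental index like CFI, an absolute badness-of-fit index like RMSEA, and an information criterion like BIC) guards against any single index's blind spots.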

Values on double-headed arrows in Figure 2 are correlations between the primary drivers. Values on single-headed arrows (links) are standardized estimates of the strengths of relationships between variables (interpreted like beta weights in multiple regression), and values above the upper right-hand corners of outcome metrics are squared multiple correlations (interpreted like coefficients of determination in multiple regression—i.e., percentage of variance accounted for, also designated as R2).

Even with the smaller sample size in this follow-up survey (n = 321), all correlations, standardized estimates, and squared multiple correlations in the models were statistically significant (p < 0.01), and all three models had similar (and acceptable) fit statistics. As in the first survey, the correlation between the primary predictors was significantly larger for the mTAM predictors in Model A than for the predictors in Models B and C.

The models (specifically, behavioral intention) accounted for 19% of the variation in the usage follow-up ratings. The correlation between the primary predictors in Model A (0.70, 95% confidence interval from 0.64 to 0.75) was significantly higher than those in Models B and C (B: 0.56, 95% confidence interval from 0.48 to 0.63; C: 0.49, 95% confidence interval from 0.40 to 0.57).
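The article doesn't state how these confidence intervals were computed, but the standard Fisher z transformation reproduces the reported values exactly; a minimal sketch under that assumption:

```python
import math

def correlation_ci(r: float, n: int, z_crit: float = 1.96):
    """95% confidence interval for a Pearson correlation via the
    Fisher z transformation: z = atanh(r), SE = 1/sqrt(n - 3),
    then back-transform the interval endpoints with tanh."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Model A's predictor correlation of 0.70 with n = 321:
lo, hi = correlation_ci(0.70, 321)
print(round(lo, 2), round(hi, 2))  # 0.64 0.75, matching the reported interval
```

The same computation recovers the reported intervals for Models B (0.48 to 0.63) and C (0.40 to 0.57), which is why the Fisher z method is a plausible reconstruction here.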

All three structural equation models had good fit statistics, but the UX-Lite model (C) was nominally better than the mTAM model (A) and had a structure (link weights and R2) very much like the model without usage follow-up from the earlier analysis (n = 2,412, Figure 3).

Figure 3: UX-Lite model (C) from the earlier analysis with n = 2,412 (follow-up models A and B are also consistent with models A and B from the earlier analysis).

Summary and Discussion

To understand how well the UX-Lite could predict future behavior, we reached out to the respondents from an earlier study to see who was interested in participating in a follow-up study, ultimately analyzing 321 responses. Our key findings were:

Can the UX-Lite predict future behavior? Yes, it can. In the SEM we built with data from the follow-up study, we found significant linkages from Perceived Ease of Use (PEoU) and Perceived Usefulness (PU) to Usage Follow-up through Overall Experience and BI (to use).

Usefulness tended to be a stronger driver than Ease of Use. Across all the models, the link weights for PU with Overall Experience, LTR, and BI (to use) were larger than for PEoU. For example, in the UX-Lite model in Figure 2 (Model C), the standardized Ease and Usefulness link weights with Overall Experience were, respectively, .23 and .63 and, with the behavioral intention to use, were, respectively, .09 and .44.

The SUS and mTAM PEoU were essentially interchangeable in the models. Consistent with the results from the earlier study, the structural equation models were similar regarding the magnitudes of standardized estimates and squared multiple correlations when substituting the SUS for the mTAM PEoU. Both continued to appear to measure the same or almost identical underlying constructs, suggesting that there might not be a substantive difference between the constructs of perceived usability and perceived ease of use.

Overall, substituting the UX-Lite Ease and Usefulness items for the mTAM PEoU and PU components resulted in reasonably consistent models. Most standardized estimates (link weights) and squared multiple correlations were consistent in the two models (A and C).

Bottom line: This research supports UX practitioners by (1) demonstrating the importance of work that improves perceptions of product ease and usefulness and (2) demonstrating that UX researchers and practitioners can use the two-item UX-Lite in their work to effectively and efficiently measure perceived ease and usefulness. Not only is the UX-Lite predictive of ratings of overall experience and behavioral intentions, but it is also predictive of usage behavior driven by the behavioral intention to use.

For more details about this study, see the paper we published in the International Journal of Human-Computer Interaction (Lewis & Sauro, 2023).
