The best way to measure the user experience is by observing real users attempting realistic tasks.
However, you may not always be able to do so because of limitations in time, cost, product availability, or the difficulty of finding qualified users.
Many large companies have dozens or even thousands of products for both internal and external customers. It can take years to evaluate all the products.
For example, when I worked at PeopleSoft there was an initiative called the Total Ownership Experience (TOE). The laudable goal of this program was to have the user experience of every product measured on a regular basis. The initiative led to many informative discussions on how to effectively measure usability, determine optimal task times, and combine usability scores.
But unfortunately, it also had a major downside. There were so many products and product areas, each with different types of users, that it took more than two years just to measure all the major products! By the time we were done, the measures were well out of date and we had to start all over. Even worse, UX teams felt they were spending more time measuring the user experience than doing anything about it!
This measurement bottleneck is a problem for researchers, managers, and customers alike. The solution is not to give up on measuring, but to use alternative measurement strategies.
Alternative to Testing & Observation: UX Survey
One way to gauge the experience of a product is with a UX survey. Perceptions of the user experience in general, and of usability in particular, are of course not a full substitute for data collected in a usability test or from directly observing users with a product.
However, the two types of data—test data and survey data—are related. Surveys won’t provide the rich behavioral data that helps diagnose the root causes of problems, but carefully crafted surveys can provide similar data to usability tests, along with additional benefits. For example:
- Attitudes toward usability and task-based usability metrics (like task completion) have a reasonably strong correlation. Therefore, measuring SUS will provide some idea about task-completion rates, even if you don’t collect task data.
- Attitudes toward usability and likelihood to recommend are strongly correlated. You can collect both in a survey and have measures that are tied more closely to company revenue.
- Users’ self-reported usability problems often uncover up to half of the same issues found in a traditional moderated usability test.
Conducting a UX Survey
You can use a standardized measure such as the System Usability Scale (SUS) for a UX survey. Respondents can reflect on their experiences with open-ended questions and a number of more detailed questions about features. This process might be familiar to you; many of the same measures are collected in a usability test.
Here’s how to conduct a UX-focused survey.
- Find out who the users are.
In the survey, have participants identify their role (e.g. engineer, manager) and their prior experience with the product (ask both duration and frequency of use). You can collect any other demographic or user-related information that helps segment your users appropriately—just keep it short!
- Administer a standardized questionnaire.
Have participants respond to a standardized measure before you ask any other questions about the product experience that could influence their responses. Use a measure like the SUS for apps and products and the SUPR-Q for websites.
This becomes your key measure to gauge the relative usability and experience of each product. By collecting this data across many product experiences you can build your own internal database to rank products and present them on scorecards. Low-scoring products indicate a need for improvement (and follow-up usability testing); high-scoring products serve as benchmarks that you can leverage for best practices.
If you’re using measurement software, you can compare your data to our benchmarks. Warning: use caution when comparing results from a survey with those collected in a usability test. While this is an open research question, the results are usually disparate enough to make comparisons difficult.
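If you plan to build your own internal database of scores, it helps to compute the SUS consistently. The sketch below implements the standard SUS scoring formula (odd-numbered items are positively worded and contribute score − 1; even-numbered items are negatively worded and contribute 5 − score; the raw sum is multiplied by 2.5 to yield a 0–100 scale). This is a minimal illustration, not part of any particular survey tool:

```python
def sus_score(responses):
    """Score one respondent's SUS questionnaire.

    responses: list of ten ratings (1-5), in questionnaire order.
    Returns a score on the 0-100 SUS scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 (index 0, 2, ...) are positively worded;
        # items 2, 4, 6, 8, 10 are negatively worded.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # raw sum ranges 0-40; rescale to 0-100


print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # → 85.0
```

Averaging these per-respondent scores across a survey gives the product-level SUS you can track on a scorecard.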
- Ask participants the reasons behind the ratings.
Ask the respondents to briefly describe why they provided the ratings they did. This will be your first clue as to the “why” behind the numbers and what needs to be further investigated.
- Ask participants how they use the product.
Have users describe in their own words how they use the product. In particular, have them describe the tasks they perform and what they’re trying to accomplish. This can be especially helpful for products with diverse user types; often many of the problems in usability stem from a misalignment of what users do and what the product team thinks they do.
- Ask participants to rank the top tasks.
Users aren’t always the best at describing task behavior. To help, use a top-task analysis. Enumerate, in the users’ language, all the tasks they can perform with the product. Present the tasks in random order and have them pick the top five most important tasks. Most products have hundreds of features and functions. A top-task analysis helps separate the trivial from the vital tasks. Use this analysis in combination with the open-ended task questions for defining what areas to test further in your usability tests.
- Ask participants to describe problems.
While respondents are thinking about tasks and usage, have them describe the problems they encounter and encourage them to provide detailed descriptions of what they were doing (and even attach screenshots). While you can’t rely on users to tell you what to fix (comments can often be vague or confusing), they usually let you know of the most pressing issues facing them.
Usability tests and direct observation are the best methods for uncovering what to fix in an application, but they aren’t always feasible to conduct, especially when you need to measure many products. A UX survey is a quick way to get a standardized measure of the user experience and insight into what needs to be fixed. Keep surveys short, but collect responses to a standardized measure of the experience, details on who the users are, the tasks they perform, and the most common problems they encounter.
Don’t stop at the survey. Use the data as input for what products to include in a usability test and what areas of the product likely need improvement.