# The 3 R’s of Measuring Design Comprehension

Jeff Sauro, PhD

Will users get it?

Marketing and design teams often want to know if users will understand a key concept on a website or design.

For example, do users properly understand new terms and conditions, a privacy policy, different product models, prices, or service packages?

When you want to know if users will understand something in a design, you can quickly see how asking “Did you understand the difference in our service plans?” isn’t a good idea.

It’s unlikely that more than a few participants will admit that they don’t understand something. It would be like asking students in a math class whether they understand the concept of logarithms. It’s much better to ask students to find the logarithm of 1000.

To really measure comprehension, you need to use questions that assess users’ knowledge of the policy, product, or design.

We use three complementary techniques: measuring recognition, recall, and recounting.

## Recognition

Recognition measures the ability of a user to correctly identify an item among a set of alternatives. This is measured using the classic multiple choice question format. When a user can correctly select an item from a list of alternatives, it demonstrates at least a superficial level of comprehension.

For example, if you want to know whether users understand a new cancellation policy for a service, you can ask them to review a product page and then answer some questions that include something like the following:

Which of the following options best represents the service cancellation policy?

A. All sales are final.
B. You can cancel any time after 90 days.
C. You can cancel at any time.
D. You can cancel any time within the first 30 days.

If a user selects the correct choice “C”, this reflects a certain level of comprehension. But it’s unclear if the participant would have provided this answer without considering the alternatives and getting a quick mental cue from seeing the correct answer. Guessing also complicates matters. With one correct choice and three incorrect alternatives, there is a 25% chance of randomly selecting an item correctly.

Adding more multiple choice questions about the cancellation policy can certainly help, and that’s why standardized tests aren’t merely single questions. The probability of correctly guessing three questions is 0.25³, or about 2%.
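The guessing arithmetic can be sketched in a few lines of Python (the question and option counts here are illustrative, not from any particular study):

```python
def p_all_correct(n_questions: int, n_options: int = 4) -> float:
    """Probability of guessing every multiple choice question correctly
    by pure chance, assuming one correct option per question."""
    return (1 / n_options) ** n_questions

print(p_all_correct(1))  # 0.25 — one four-option question
print(p_all_correct(3))  # 0.015625 — about 2% for three questions
```

With three four-option questions, the chance of passing purely by guessing drops to roughly 1.6%, which is why batteries of questions are more trustworthy than a single item.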

For unmoderated usability testing, we often need to verify task completion and use a multiple choice verification question, usually asking participants to provide the price or description of the product we asked them to find. That means that, in many respects, the lower limit of task completion rates is closer to 25% than 0% (for multiple choice questions with four options).
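One way to reason about that 25% floor is the standard correction-for-guessing adjustment. This formula is a common psychometric convention, not something the article itself prescribes:

```python
def adjust_for_guessing(observed: float, n_options: int = 4) -> float:
    """Estimate the 'true' completion rate after removing the share of
    correct answers attributable to chance on a k-option question."""
    chance = 1 / n_options
    return max(0.0, (observed - chance) / (1 - chance))

print(adjust_for_guessing(0.25))  # 0.0 — no better than random guessing
print(adjust_for_guessing(0.70))  # 0.6 — estimated rate above chance
```

An observed rate at the chance level adjusts to zero, while rates above it shrink toward the share of participants who genuinely knew the answer.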

So while adding many multiple choice questions offsets the problems of guessing and poorly worded questions and answers, we can’t subject participants to long batteries of SAT-like questions in the world of applied user research. We use complementary approaches instead.

## Recall

Recall is a user’s ability to pull the correct answer from memory without any prompting or cues. This is usually measured by having participants answer open-ended questions.

Open-ended questions that require users to correctly recall the cancellation policy terms or the name of a feature provide evidence for a deeper level of comprehension than recognition. Asking a participant to recall the cancellation policy would entail a question such as:

What is the cancellation policy for the software service?

While we largely eliminate the problem of guessing, open-ended questions have their own issues. They take longer to analyze and introduce an additional layer of subjectivity and differing interpretation.

## Recounting

Sometimes we not only want to understand whether users understand specific aspects of a product or service, but we want to know what features or details are most important and memorable in the mind of the user.

Instead of asking a participant to summarize what they understand, we ask them how they would explain what they saw to a friend or colleague. For example, “How would you explain the service and cancellation policy to a friend who was considering this service?” Asking a participant to rephrase things for a friend who isn’t present forces them not to rely on jargon or half-baked terms.

This approach helps not only to assess a deeper level of comprehension but also to assess what features stand out, and in the users’ language. The verbatim responses provide a great opportunity to determine what branded terms are being used.

Rarely can we assess whether users “get” a concept, feature, or detail with a single question or by asking them directly. Using a mix of multiple choice questions (recognition), open-response questions (recall), and recounting questions provides a balanced view of what users understand and what they don’t comprehend. We find this approach works well for measuring abstract software concepts, terms and conditions, pricing structures, upgrades, product tiers, service plans, and branded features.
