Classifying Survey Questions into Four Content Types

Jeff Sauro, PhD • Jim Lewis, PhD

In architecture, form follows function. In survey design, question format follows content.

Earlier we described four classes of survey questions. These four classes are about the form, or format, of the question (e.g., open- vs. closed-ended). But before you can decide effectively on the format, you need to choose the content of the question and how it will address the research goal.

While survey questions are generally distinct in their wording and response options, many share common content characteristics, and that content can be classified. For example, you can think of demographic questions (age and gender) as belonging to one group and satisfaction-type attitude questions as belonging to another. Both may even use the same question format (closed-ended single response), but they collect different content. Classification isn’t just something for bored librarians (think of the scientific advances driven by biological classification or the periodic table); it can be used to better understand the strengths and weaknesses of similar questions, including possible common response errors.

With over a century of practice with and research on surveys, there’s been a lot written about them. The journal Public Opinion Quarterly is one place where researchers have attempted to classify survey questions using various classification schemes.

Schaeffer and Dykema (2011) reviewed studies published in Public Opinion Quarterly and generated a loose classification of seven groups from the types of questions they encountered:

  1. Events and behaviors (going to doctor, purchasing toothpaste)
  2. Evaluations or judgments (ratings of ease, satisfaction with products)
  3. Internal states or feelings (happiness, anxiety, surprise)
  4. Accounts (reasons for taking an action, such as leaving a job or purchasing a product)
  5. Classifications (whether you have a 401k, own an electric car)
  6. Social characteristics (marital status)
  7. Descriptions of the environment (number of employees in a company)

Schaeffer and Dykema didn’t intend their list to be an exhaustive taxonomy. While it offers a good start for other researchers, it does have some challenges. For example, how are classification questions differentiated from accounts and social characteristics?

Robinson and Leonard (2018) offered a more compact framework of four question types that contains the groups identified by Schaeffer and Dykema. Their framework seems to work well for the survey questions we typically encounter in CX and UX research. We built on their classification system to describe four content types that should encompass virtually all survey questions: attributes, behaviors, abilities, and thoughts/sentiments.

1. Attributes

Typically, demographic-type questions are asked at the beginning of surveys either to screen out respondents (e.g., include only people 18–44) or to characterize them for later analysis and cross-tabbing (e.g., high income vs. low income). In addition to standard demographics, questions in this class can also measure attributes of both people and groups such as companies, organizations, or cities.

Common attribute questions include

  • Age
  • Education
  • Location
  • Income
  • Ethnicity
  • Occupation
  • Number of employees at your company
  • Number of family members you live with

When using these questions to screen, a common question format is multiple-choice single response, sometimes with one or more optional “other” fill-in responses. When attributes aren’t defined well enough to create a set of likely response options, the appropriate format is an open-ended question. This allows exploration of unconstrained responses but increases the analytical burden.

Watch out when using attributes such as income or age, where it’s easy to create overlapping categories (e.g., 18–21; 21–25) or leave gaps between them (e.g., $25,000 to $39,000; $40,000 to $50,000).
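To make the pitfall concrete, here’s a minimal Python sketch (ours, not from any survey platform) that checks a set of ordered numeric response categories for overlaps and gaps. The bucket values are the hypothetical examples above, and the check assumes integer-valued answers (ages, whole-dollar incomes).

```python
# A minimal sketch: validate that ordered numeric response categories are
# mutually exclusive (no overlaps) and exhaustive (no gaps between them).

def check_categories(buckets):
    """Return a list of problems found in ordered (low, high) categories."""
    problems = []
    for (lo1, hi1), (lo2, hi2) in zip(buckets, buckets[1:]):
        if lo2 <= hi1:
            problems.append(f"overlap: {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:  # +1 because answers are assumed to be integers
            problems.append(f"gap: no category covers {hi1 + 1}-{lo2 - 1}")
    return problems

# The overlapping age buckets and the income gap from the examples above:
print(check_categories([(18, 21), (21, 25), (26, 34)]))
# ['overlap: 18-21 and 21-25']
print(check_categories([(25_000, 39_000), (40_000, 50_000)]))
# ['gap: no category covers 39001-39999']
```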

Avoid asking too many attribute questions, as some may be sensitive (e.g., income), and you don’t want to clutter your survey with questions you don’t plan to use (shorter surveys have higher response rates). This is especially the case if the information is available through another source (e.g., from customer records).

2. Behaviors (Reported)

These questions ask for self-reports of past or current actions. They should not be confused with actual observations of behavior in moderated or unmoderated UX research (e.g., using a platform like our MUIQ to conduct a survey that, in addition to the question classes discussed in this article, also includes task-based activities that capture actual behaviors such as completion times, success rates, screen paths, and clicks).

Examples of self-reported behaviors include

  • Prior experience using a product, app, or service
  • Products purchased online
  • Usage of a food delivery service
  • Frequency of making hotel reservations using Hyatt’s mobile app
  • Number of flights booked on Expedia
  • Impact COVID-19 had on job search behaviors

Behavioral questions typically measure frequency, duration/tenure, and intensity. In UX research, prior experience with a product, website, or app has one of the biggest impacts on other metrics. Familiarity breeds content: people who have used the product longer (duration/tenure) use it more often (frequency) and use more of its features (intensity). They’re more likely to complete tasks successfully and quickly, and they do so with a generally more positive attitude.

When writing behavioral questions, watch for vague modifiers (sometimes, frequently), which different people can interpret in different ways. Self-reports of prior behavior, when measured correctly, can be a reasonable (but far from perfect) predictor of future behavior.

With behavioral questions, you typically need a reference period (e.g., one week, one year, ten years). The longer the reference period, the more likely participants are to forget details.
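As an illustration, here’s a hypothetical sketch of how a behavioral item might be specified so the reference period is explicit and the response options use concrete counts rather than vague modifiers. The field names and wording are our own invention, not a standard survey API.

```python
# A hypothetical behavioral-item spec: explicit reference period, concrete
# count-based options instead of vague quantifiers ("sometimes", "frequently").

from dataclasses import dataclass

@dataclass
class BehavioralItem:
    text: str
    reference_period: str  # keep it short; long periods invite recall error
    options: list[str]     # concrete counts, not vague modifiers

hotel_app_use = BehavioralItem(
    text="How many times did you book a hotel using the mobile app?",
    reference_period="in the past 3 months",
    options=["0 times", "1-2 times", "3-5 times", "6 or more times"],
)

print(f"{hotel_app_use.text} ({hotel_app_use.reference_period})")
for opt in hotel_app_use.options:
    print(" -", opt)
```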

3. Abilities

When you want to measure a respondent’s knowledge or skills, you ask ability questions. These look a lot like a quiz or assessment, either fill-in-the-blank or multiple choice. Ability questions are often used in CX surveys as an indirect way to assess understanding of a design or interface. They can also be used to assess the usability of an interface through task-based questions. Examples of ability questions include

  • What does HDMI stand for?
  • Find a blender for under $45 on the Walmart website with an average of at least four stars.
  • What is a deductible on a health insurance plan?
  • How much is an annual Netflix plan?
  • Which of the following does the privacy policy cover?
  • Which shipping option will get you the product the fastest?

Asking ability questions is different from asking respondents to assess their own abilities, something addressed by thoughts/sentiments questions (covered next). We know some of our UX colleagues may take exception to the idea that a task-based question falls under the category of measuring abilities. However, it seems reasonable to us to include such questions in this category because task-based questions gauge a person’s ability to complete tasks, allowing evaluation of users’ experiences with an interface.

4. Thoughts, Sentiments, and Judgments

Thoughts, sentiments, and judgment questions are often the heart of CX and UX surveys. They encompass a broad range of questions that tap into attitudes, beliefs, feelings, opinions, preferences, awareness, and behavioral intentions. Some common examples include

  • Brand awareness
  • Brand favorability
  • Satisfaction
  • Perceived usability
  • Perceived usefulness
  • Preference
  • Intent to recommend
  • Feature ranking
  • Self-reported tech savviness

Thoughts and sentiments are often measured using multipoint rating scales, which in many cases are part of standardized questionnaires such as the System Usability Scale (SUS), SUPR-Q, or UMUX-Lite. Even more complex survey frameworks, such as Kano and MaxDiff, use questions that ask about thoughts and judgments (e.g., what things are most or least important).
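As an example of how responses to a standardized questionnaire become a metric, here’s a minimal sketch of the standard SUS scoring rule, assuming ten responses on a 1–5 agreement scale: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0–100 score.

```python
# A minimal sketch of standard SUS scoring (0-100), assuming `responses`
# is a list of ten ratings on a 1-5 agreement scale, item 1 through item 10.

def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded; even-numbered, negatively.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive set of responses.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```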

Summary

Survey questions can be categorized both by their format and by their content. Most survey questions fall into one of four content types:

  1. Attributes: Typically demographic-like questions that describe attributes of the respondents (e.g., age, income) or of other people and groups (e.g., number of employees at a company).
  2. Behaviors: Self-reports of past or current behavior, which usually include a reference period (e.g., last six months). These should not be confused with observing actual behavior.
  3. Abilities: Assessments of a person’s knowledge or skill, including the ability to complete usability tasks (such as finding information).
  4. Thoughts, sentiments, and judgments: Questions that ask for respondents’ attitudes, judgments, or preferences, including satisfaction, brand, and ease questions.

UX and CX researchers can use these categories when planning surveys to help connect questions to research goals.
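One lightweight way to apply these categories is to tag each draft question with its content type and the research goal it serves. The sketch below is illustrative only; the enum and sample questions are hypothetical, not part of the classification framework itself.

```python
# A hypothetical planning sketch: tag draft questions with one of the four
# content types so each question maps back to a research goal.

from enum import Enum

class ContentType(Enum):
    ATTRIBUTE = "attribute"
    BEHAVIOR = "behavior"
    ABILITY = "ability"
    THOUGHT_SENTIMENT = "thought/sentiment"

survey_plan = [
    ("What is your age?", ContentType.ATTRIBUTE, "screen to 18-44"),
    ("How many flights did you book in the past 6 months?",
     ContentType.BEHAVIOR, "segment by usage"),
    ("Which shipping option is fastest?", ContentType.ABILITY,
     "assess comprehension of shipping page"),
    ("How satisfied are you with checkout?", ContentType.THOUGHT_SENTIMENT,
     "track satisfaction over time"),
]

for text, ctype, goal in survey_plan:
    print(f"[{ctype.value}] {text} -> goal: {goal}")
```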
