A Blueprint for Writing Survey Questions

Jeff Sauro, PhD • Jim Lewis, PhD

As with writing an article or book, it can be a challenge to sit down and write survey items.

Few professionals have taken a formal course in survey development. Instead, most rely on their experiences or best practices.

To help with the process, we wrote Surveying the User Experience. In this article, we take you through a blueprint of what to think about as you go through the process of building survey questions (and we include links for more information).

We hope it makes tackling your next survey a bit easier.

The Blueprint (in Seven Sections)

1. Understand the anatomy of a survey item.

Despite being called questions, survey questions can be both questions and phrases, so we often refer to them with the broader term item (but like others in the industry, we’ll use question and item interchangeably … sorry about that).

It helps to break down a survey item into two main components: the stem and the response options. We can then break the stem down further into five elements:

  • 1. Introduction
  • 2. Information about the topic or definitions
  • 3. Instructions
  • 4. Opinions of others
  • 5. Requests for an answer

There are quite a few response options to choose from, but some types are better suited than others for different research goals. Picking the right response options involves additional consideration.

2. Determine the type of survey item you need.

In architecture, form follows function. In survey design, item format follows content. Although survey items are generally distinct in wording and response options, many share common content characteristics. Classifying survey items leads to a better understanding of the strengths and weaknesses of similar questions, including common response errors. Four survey item types are:

  • Attributes: Typically, these are demographic-like questions that describe attributes of the respondents (e.g., age, income) or descriptions of other people or groups (e.g., employees at a company).
  • Behaviors: Not to be confused with observing actual behavior, these are self-reports of past or current behavior that usually include a reference period (e.g., last six months).
  • Abilities: These assess knowledge or skill, including the ability to complete usability tasks (such as finding information).
  • Thoughts, sentiments, and judgments: Examples are questions that ask for respondents’ attitudes, judgments, or preferences, including satisfaction, brand, and ease questions.

3. Start writing.

It’s daunting to know that subtle word changes may lead to unanticipated responses. The good news is that you don’t have to start from scratch each time. Instead, you can follow a process to help you make decisions, ideally reducing the likelihood of errors. Follow these seven steps:

  • 1. Start with the concept or construct. This is what you intend to measure (can be concrete or abstract).
  • 2. See whether a question already exists. Don’t reinvent the wheel. Improve clarity and avoid misinterpretation by reusing or adapting an item, especially standardized ones.

If your item doesn’t exist, then continue:

  • 3. Determine the type of question. See #2 above.
  • 4. Brainstorm ideas. Come up with different words, phrases, or ideas to describe the concept.
  • 5. Interview the target respondents (if you can). Discover problematic terms and correct misinterpretations.
  • 6. Craft the question. Use a direct request (wh-question) or indirect request (e.g., rating statements).
  • 7. Select the response options. Use our decision tree.

4. Avoid common practices that can cause misinterpretation.

People aren’t computers. Here are seven potential reasons why respondents might misinterpret what you’re asking:

  • Grammatical and lexical ambiguity: Language is inherently ambiguous (e.g., words, phrases, and sentences can have multiple meanings).
  • Excessive complexity: Complex sentences are hard to process because they pack too much into a single question.
  • Faulty presuppositions: Assertions are sometimes presented as facts when they really aren’t.
  • Vague concepts: Sometimes there isn’t enough information to clarify key concepts.
  • Vague quantifiers: Some quantifiers can be interpreted differently, such as near, very, quite, much, most, few, often, several, and occasionally.
  • Unfamiliar terms: Respondents won’t understand words, phrases, or initialisms that are not part of their everyday vocabulary.
  • False inferences: Sentences may be misinterpreted by those who struggle to infer their underlying intent.

5. Review your survey questions to make them clearer.

To improve the clarity of your survey items, consider the following strategies:

  • Keep items short. Replace wordy phrases with shorter versions (e.g., “at this point in time” with “now”). If your item has > 25 words, make sure they are all necessary.
  • Use simple language. Replace multisyllabic words with shorter words (e.g., prefer “The system’s features meet my needs” to “The system’s capabilities meet my requirements”).
  • Prefer common over rare words. Frequently used words are more familiar and tend to require less mental processing than less common words (e.g., use “angry” instead of “irate”).
  • Use the respondents’ vocabulary. Avoid jargon for general population surveys, but for surveys of specific populations, match their vocabularies, including their jargon.
  • Minimize the use of acronyms. Avoid acronyms except those so common that spelling them out would require more mental processing (e.g., NATO, SCUBA).
  • Avoid complicated syntax. Be careful with adjunct wh-questions, embedded clauses, and left-embedded syntax.
  • Avoid passive voice. Passive voice is appropriate in some situations, but survey writers should use active voice unless there is a compelling reason to do otherwise.
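Some of these review checks are mechanical enough to automate as a first pass before a human review. Below is a minimal Python sketch of a survey-item "linter" that flags overly long stems, wordy phrases, and vague quantifiers. The 25-word threshold, the phrase table, and the quantifier list are taken from the examples in this article; the function name `review_item` and the specific warning wording are hypothetical, not part of any standard tool.

```python
import re

# Wordy phrases and shorter replacements (examples from the article).
WORDY_PHRASES = {
    "at this point in time": "now",
}

# Vague quantifiers listed in section 4 above.
VAGUE_QUANTIFIERS = {
    "near", "very", "quite", "much", "most", "few",
    "often", "several", "occasionally",
}

def review_item(stem: str) -> list[str]:
    """Return a list of clarity warnings for a survey item stem."""
    warnings = []
    words = re.findall(r"[A-Za-z']+", stem.lower())

    # Keep items short: flag stems over 25 words.
    if len(words) > 25:
        warnings.append(
            f"Item is {len(words)} words; check that all are necessary."
        )

    # Replace wordy phrases with shorter versions.
    for phrase, shorter in WORDY_PHRASES.items():
        if phrase in stem.lower():
            warnings.append(f'Replace "{phrase}" with "{shorter}".')

    # Flag vague quantifiers that respondents may interpret differently.
    for w in words:
        if w in VAGUE_QUANTIFIERS:
            warnings.append(
                f'"{w}" is a vague quantifier; consider a concrete amount.'
            )
    return warnings

print(review_item("How often do you, at this point in time, use the app?"))
```

A checker like this can't judge whether an item matches the respondents' vocabulary or carries a faulty presupposition; those checks still require cognitive interviews with target respondents, as described in step 5 of the writing process above.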

6. Be aware of the reasons why people forget things.

People, of course, forget details (that’s why we wrote these down). Four primary culprits are:

  • Retrieval failure: The more time has passed and the less salient the detail, the more imperfect the recollection.
  • Reconstruction: People fill in gaps with generic memories.
  • Distortion: People’s memories are influenced by contemporary details such as photos and videos of an event.
  • Mismatched terms: People might not associate memories with the terms used by interviewers or in surveys.

7. Help people remember.

Some of the following techniques, especially when used in combination, may help to improve participant recall in surveys. They are particularly useful for controlling “telescoping” errors—the tendency of people to incorrectly remember the timeframe in which something has happened.

  • Shorten the reference period. To control telescoping errors, use short rather than long reference periods.
  • Provide personal landmarks. It’s easier for people to remember events associated with strong emotions, so framing response periods in terms of meaningful events can decrease telescoping errors.
  • Ask people to think backward. This can be especially effective when studying events that have happened more than once.
  • Decompose the question into small, concrete questions. Decompose broad categories (e.g., shopping) into more specific events (e.g., shopping for clothes).
  • Use introductions to questions. Although you want to keep items as short as possible, sometimes a good intro with specific instructions helps constrain a broad scope.
  • Give more time. Encourage participants to take the time to remember the details of the events of interest.

Looking to Learn More?

This blueprint for writing survey questions is a good place to start. To learn more about the entire survey creation process, not just writing survey questions but selecting response options, launching surveys, analyzing data, and presenting results, see our book, Surveying the User Experience. We also have a companion course that follows the book on MeasuringUniversity.com.
