Seven Reasons People Misinterpret Survey Questions

Jeff Sauro, PhD • Jim Lewis, PhD

As with all research methods, many things can go wrong in surveys, from problems with sampling to mistakes in analysis. To draw valid conclusions from your survey, you need accurate responses. But participants may provide inaccurate information: they may forget the answers to questions or simply answer them incorrectly.
One common reason respondents answer survey questions incorrectly is that they misinterpret the question. Understanding the possible reasons for misinterpretation can help with crafting better questions. Here are seven common reasons people misinterpret survey questions, which we adapted from research in Tourangeau et al. (2000).

1. Grammatical and Lexical Ambiguity

Grammar was a pain in elementary school, and it can also be a pain in surveys. In addition to grammatical ambiguity, some words, such as “you,” have ambiguous meanings.
“Do you have a subscription to Amazon Prime?”
This “you” can mean just the respondent or their whole household (“you” as the second-person plural, i.e., all of you living in your house).
A simple question such as “Are you visiting professors?” can mean either 1) “Are you going to visit professors?” or 2) “Are you part of a group of visiting professors?” In discourse, the prosody of a spoken sentence often provides cues regarding the intended meaning (Kraljic & Brennan, 2004 [PDF]), but those cues are not available when reading a written sentence.
Given the nature of human languages, it can be difficult to avoid ambiguity. Words and phrases can have multiple meanings, sentences (like the example above) can have multiple interpretations, and pronouns can have multiple possible antecedents.
As we compose a sentence, we are usually aware, at least implicitly, of how we would say it, so it’s hard to predict how others might interpret—or misinterpret—it without prosodic cues. That’s why it’s important to have other people proofread our survey questions.

2. Excessive Complexity

There’s a laudable desire to reduce the number of questions in a survey, but be careful not to pack too much into a single question:
“During the last 12 months, since January 1, 2020, how many times have you seen or talked to a doctor (or physician assistant) about COVID-19? Do not count any time you might have seen a doctor while you were in the hospital or for planned surgery at an outpatient facility, but count all other times.”
Complex questions are hard to answer because they demand that participants keep track of multiple things at the same time, which can lead to misreporting. Complex questions can often be simplified by splitting them into multiple shorter ones. We’ve found that it’s the length of the survey, not the total number of questions, that is the bigger detriment to completion. But even short questions can suffer from complex syntax. Watch out for:

  • Adjunct wh-questions: Questions starting with wh-words such as who or what often include adjuncts—phrases that can be removed from the sentence without affecting its grammaticality. (“When did [your account representative say] you need[ed] to renew your software license?”)
  • Embedded clauses: Statements that include modifying clauses in the middle of the sentence. (“The task [that I just completed in this study] was easy.”)
  • Left-embedded syntax: Placing many adjectives, adverbs, and/or prepositional phrases before respondents get to the critical part of a question. (“Even if it would cost them some money and might make their code more complex because the ‘rules’ are a complicated mix of industry regulations and federal, state, and local laws, Twitter, as one of the most influential social media platforms, should comply with industry guidelines.”)

We’ve recently written about strategies for dealing with these kinds of complexities.

3. Faulty Presuppositions

A question that makes assumptions (presupposes something) can present problems for respondents:
How much do you agree or disagree with the following statement?
“Family life has suffered during COVID-19 because parents are having to concentrate too much on their work.”
The question presupposes that parents are concentrating too much on their work, but for some respondents, family life might be suffering from parents being out of work or having their hours reduced. It’s difficult for respondents to work out how to rate the statement.
The presuppositions in leading questions can even plant false memories. In a classic psychology experiment, participants viewed a slideshow of events that culminated in a traffic accident after a car turned right at a stop sign. After the slideshow, participants answered a battery of twenty questions. Embedded in that battery, half of the participants saw the question “Did another car pass the red Datsun while it was stopped at the stop sign?” and the other half saw “Did another car pass the red Datsun while it was stopped at the yield sign?” Later, after a twenty-minute filler activity, the participants had to indicate which of two slides they remembered seeing; the only difference between the slides was whether the car was stopped at a yield or a stop sign. Even though they had seen a stop sign, 41% of the participants in the yield condition claimed to remember having seen a yield sign.
So, how might we deal with our survey example to make it less leading? It helps to break the statement into multiple parts:

Figure 1: Revising a statement with a faulty presupposition by breaking it into parts and eliminating the presupposition (programmed in MUIQ so the second item appears after a Yes response to the first item and the third item appears after a response of 1, 2, or 3 to the second item).
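The same skip logic can be expressed in ordinary code. The sketch below is only an illustration of the branching described in the caption; the item wording, response options, and the ask() helper are our own assumptions, not MUIQ’s configuration or the exact items shown in Figure 1.

```python
# A minimal sketch of the branching described in Figure 1, not MUIQ's actual
# configuration. Item wording, response options, and ask() are hypothetical.

def ask(prompt, options):
    """Show a survey item in the console and return the respondent's answer."""
    print(prompt)
    for option in options:
        print(" -", option)
    return input("> ").strip()

# Item 1: establish the condition instead of presupposing it.
answer_1 = ask("Have you been working during COVID-19?", ["Yes", "No"])

if answer_1 == "Yes":
    # Item 2 appears only after a Yes response to Item 1.
    answer_2 = ask(
        "How has family life been during COVID-19?",
        ["1 (suffered a lot)", "2", "3", "4", "5 (improved a lot)"],
    )

    if answer_2 in ("1", "2", "3"):
        # Item 3 appears only after a response of 1, 2, or 3 to Item 2.
        answer_3 = ask(
            "How much has concentrating on work contributed to that?",
            ["Not at all", "A little", "Somewhat", "A lot"],
        )
```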


4. Vague Concepts

A concept is vague if it’s unclear whether a description applies. Respondents will resolve that vagueness with their own interpretations, leading to ambiguity in the responses:
“How many children are staying at the Airbnb property?”
In this question, who counts as a child? Is a child someone who is under 18 years old or someone who is under 16? Does an infant count?
“How often do you play video games that include violence?”
Does it count if you play a racing game that has no violence other than the possibility of crashing a car? Is driving a car that can crash considered violence?
While you want to be clear by spelling out the details of terms, you want to avoid the opposite problem of adding too much complexity (see #2). The following examples strike a reasonable balance by being unambiguous without adding complexity:
“How many children (aged 0–18) are staying at the Airbnb property?”
“How often do you play first-person-shooter video games?”

5. Vague Quantifiers

Vague quantifiers are words (usually adjectives and adverbs) that respondents can interpret in different ways. They can appear both in question stems and in response options:
“Do you consider yourself a frequent Netflix user?”
Peterson (2000) describes the words near, very, quite, much, most, few, often, several, and occasionally as ambiguous. We’ve seen similar ambiguity with probability words and change verbs.
Peterson gives an example in which respondents who described their movie attendance as “very frequently” and their monthly beer consumption as “very much” later assigned frequencies ranging from 52 to 365 movies per year and beers per month, respectively. Using these response options is fine if you want to gauge the perception of a behavior such as online streaming (do respondents think streaming an hour a day is frequent or moderate?), but they’re not appropriate if you’re researching the actual frequency of a behavior.
There’s a misconception that all rating scale points need to be labeled. Full labeling becomes a challenge when there are more than five or seven points and may lead researchers to use vague modifiers. For example, in Figure 2, is “quite” really more than “slightly”? Research has shown that TAM rating scales with only the endpoints labeled (Version B) produce ratings similar to fully labeled scales (Version A) without the potential complication of ambiguous quantifiers.

Figure 2: Two versions of response options for the Technology Acceptance Model (Davis, 1986).
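To make the contrast concrete, here is a small sketch (ours, not a reproduction of Figure 2) of how the two labeling schemes could be represented; the Version A adjectives follow the classic TAM wording, but treat the exact labels and the render_scale() helper as assumptions for illustration.

```python
# A sketch contrasting the two labeling approaches in Figure 2 (illustrative only).

# Version A: every point carries a verbal label, which pushes the scale toward
# vague modifiers such as "quite" and "slightly."
VERSION_A = [
    "Extremely unlikely", "Quite unlikely", "Slightly unlikely", "Neither",
    "Slightly likely", "Quite likely", "Extremely likely",
]

# Version B: only the endpoints are labeled; interior points are just numbers.
VERSION_B = {1: "Extremely unlikely", 7: "Extremely likely"}

def render_scale(labels):
    """Return display strings for a 7-point scale given either labeling scheme."""
    if isinstance(labels, dict):  # endpoint-only labeling
        return [f"{i} {labels.get(i, '')}".strip() for i in range(1, 8)]
    return [f"{i} {label}" for i, label in enumerate(labels, start=1)]

print(render_scale(VERSION_A))
print(render_scale(VERSION_B))
```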


6. Unfamiliar Terms

It’s easy to forget how much jargon and novel terminology there is in research and in industries such as finance, healthcare, and retail, much of which may be unfamiliar to survey respondents. Such jargon includes initialisms (MPG, IRA, 401K) and terms (assets, deductible, out-of-pocket maximum). The worst combination is an initialism made from unfamiliar terms (OOPM).
Unfamiliar terms are not just problematic in surveys. As we saw in our benchmark of health insurance websites, designs that include unfamiliar terms, whether in software or in surveys, lead to guessing, misinterpretation, confusion, and frustration.
However, when respondents are familiar with industry jargon and initialisms, it’s reasonable to use them for more efficient communication (Nielsen Heuristic #2—speak the users’ language).

7. False Inferences

False inferences come from respondents overinterpreting questions, often to infer intent or to get at the spirit of a question:
“Are there any situations you can imagine in which you would approve of Robinhood removing access to your trading account?”
Some respondents may interpret the question in the spirit of Robinhood promising to provide unbroken access to money and the ability to make timely trades. However, others may interpret the question more literally. In that case, the use of the word “any” would lead them to agree more strongly with the statement than they otherwise would; “any situations” include acts of fraud, after all. (This is an issue associated with the use of absolute modifiers in survey questions.)
Would a high response rate to this question justify locking traders out of their accounts? Probably not …

Summary

People may misinterpret survey questions for a variety of reasons, including

  • Grammatical and lexical ambiguity: Caused by the inherently ambiguous nature of human language (words, phrases, and sentences with multiple meanings).
  • Excessive complexity: Sentences that are hard to process, often due to trying to pack too much into a single question.
  • Faulty presuppositions: Sentences that present assertions as facts when that is not the case.
  • Vague concepts: Sentences that do not include enough information to clarify their key concepts.
  • Vague quantifiers: Sentences or response options that include quantifiers that can be interpreted differently, such as near, very, quite, much, most, few, often, several, and occasionally.
  • Unfamiliar terms: The use of words, phrases, or initialisms that are not part of the respondents’ language.
  • False inferences: Sentences that may be overinterpreted by some respondents who struggle to infer their underlying intent.

You can employ some specific strategies for improving questions with these issues, but the best overall strategy is to make sure you have other people review your surveys before you launch them. Always run a pilot that allows respondents to comment on any questions they have difficulty understanding.
