9 Recommendations for Better Online Research

Jeff Sauro, PhD

We conduct unmoderated UX studies, surveys, and various forms of online research every week at MeasuringU.

Part of our process for delivering effective research is spending enough time up front on issues that affect the quality of results.

Here are our nine recommendations for conducting better online research.

  1. Use a Study Script. A study script is to online research what a blueprint is to a building or a prototype is to a functioning product. It’s best to work through all the details while it’s still easy to make changes. After a study is programmed with logic, conditions, and tasks, changes take much longer to make and introduce opportunities for errors. Study scripts don’t need to be fancy; Word documents or Google Docs work well for online collaboration and tracking changes.
  2. Do You Really Need Every Demographic Question? There’s a tendency to want to ask anything and everything about the participants in an online study: age, income, gender, education, geography, and occupation, to name a few. This can especially be the case when using paid participants from online panels, where little is known about the respondents. Demographic question-bloat can be particularly bad when multiple stakeholders want input and each takes a turn adding demographic questions. For every demographic question, ask:
    • How will you report on it?
    • Can you get the information elsewhere?
    • Are you screening on it?

    Demographic questions tend to be more sensitive for participants than other types of questions and can lead to increased drop-out. If you don’t have good answers to the questions above, consider dropping those demographic questions.

  3. Minimize Screen-outs. Good online research starts with collecting data from the right participants, but there is a tendency to target participants too narrowly, especially with demographic questions. The result is an excessive number of screen-outs that make the study take longer to complete and usually cost more. This problem can be especially acute when companies want to recruit around their personas and screen out participants who don’t match the right combination of narrowly defined age, income, and education. While well intentioned, most personas we’ve encountered aren’t validated scientifically, so the variables used to screen out (such as income) aren’t the right ones to focus on. It’s often a better strategy to use looser criteria instead of screening out so many participants; you can then examine differences between groups during analysis rather than discarding respondents up front, as in the sketch below.
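    As a hypothetical illustration of this looser-criteria approach, here is a minimal sketch in Python with pandas; the column names, cutoffs, and data are invented for the example. It keeps the broader sample and treats the persona match as an analysis variable instead of a screening gate.

```python
import pandas as pd

# Hypothetical respondent data collected with loose screening criteria
# (column names and values are invented for this example).
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "age": [29, 41, 37, 52, 33],
    "income_k": [45, 120, 80, 95, 60],
    "sus_score": [72.5, 85.0, 67.5, 90.0, 77.5],
})

# Instead of screening out anyone who misses a narrow persona definition,
# flag the persona match as a variable and compare groups in the analysis.
df["persona_match"] = df["age"].between(35, 44) & (df["income_k"] >= 75)

# Examine whether the narrowly defined group actually responds differently.
print(df.groupby("persona_match")["sus_score"].agg(["mean", "count"]))
```

    If the groups don’t differ meaningfully, that’s evidence the narrow screening criteria weren’t worth the extra cost.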
  4. Include a Measure of Prior Experience. Whether you’re conducting a UX benchmarking study, branding survey, or customer satisfaction survey, prior experience with a brand, product, feature, or website likely plays a major role in your participants’ attitudes. Including a question about prior experience and exposure allows you to both differentiate participants and analyze their responses separately. For example, when measuring attitudes about the hotel experience, we included the following question about the brands used in the study:

    In the past 12 months, how many times have you visited the following websites? [Marriott, Best Western, Hyatt, Hilton, DoubleTree]
    • 0 Times
    • 1-3 Times
    • 4-6 Times
    • 7-9 Times
    • 10+ Times

    These granular values can then be consolidated to create new variables. For example, the five response levels can be collapsed into “low” and “high” frequency groups for easier analysis, as in the sketch below.
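    A minimal sketch of that consolidation, assuming the five response options are stored as labels; the low/high cutoff shown is an assumption for illustration, not from the original study.

```python
import pandas as pd

# Hypothetical visit-frequency responses using the five options above.
visits = pd.Series(["0 Times", "1-3 Times", "4-6 Times",
                    "7-9 Times", "10+ Times", "1-3 Times", "10+ Times"])

# Collapse the five granular levels into two frequency groups.
# The cutoff (low = 0-3 visits, high = 4+) is assumed for illustration.
frequency_map = {
    "0 Times": "low",
    "1-3 Times": "low",
    "4-6 Times": "high",
    "7-9 Times": "high",
    "10+ Times": "high",
}
frequency_group = visits.map(frequency_map)

print(frequency_group.value_counts())
```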

  5. Minimize Yes/No Questions. While it can be easier to think of your ideal participant in terms of yes/no questions (Are you 35-44? Yes/No), you want to minimize them. Even participants who aren’t trying to misrepresent themselves tend to “acquiesce” and answer in the affirmative to many of your questions.

    Instead of: Did you research computers in the last 12 months? Yes/No

    Use something like: Which of the following have you researched in the last 12 months? [Select all that apply]
    • Computers
    • Dishwashers
    • Cars
    • Homes

    You can also use this type of question to exclude participants who did not select the response you’re looking for (e.g., computers).
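    A minimal sketch of that exclusion logic, assuming multi-select answers arrive as lists of selected options (the respondent IDs and variable names are hypothetical):

```python
# Hypothetical multi-select responses; each respondent's selections are a list.
responses = {
    "r1": ["Computers", "Cars"],
    "r2": ["Dishwashers"],
    "r3": ["Computers", "Homes"],
}

TARGET = "Computers"  # the response option we're screening for

# Keep only respondents who selected the target option.
qualified = {rid: sel for rid, sel in responses.items() if TARGET in sel}
print(qualified)  # {'r1': ['Computers', 'Cars'], 'r3': ['Computers', 'Homes']}
```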

  6. Be Careful Not to Bias Participants. By necessity, screening questions come early in your study because you don’t want to exclude participants after they’ve spent a lot of time answering questions. You may also want to ask about prior experience and brand attitude before exposing participants to additional questions or tasks. However, asking participants about brands (Zappos) or prior experience (shoe shopping) may prime them for later tasks and questions. The mere exposure primes participants to think about products, websites, or attitudes in a way they possibly wouldn’t have unless you asked. While you can’t always eliminate this exposure, look to minimize it, or at least be aware of its potential effects if you’re using the study to make generalizations about the population (e.g., the number of people who think about buying shoes at Zappos).
  7. Use Standardized Questions Where Possible. If you’re asking it, there’s a chance someone else already has. Instead of re-creating questions each time, look for pre-written questions and response options. The main benefits of using standardized questions are:
    • Reliability: Consistency in responses
    • Validity: Measure what you intended to measure
    • Sensitivity: The ability to differentiate between good and bad experiences
    • Norms: Reference a database of “normed-scores” to provide additional meaning

    Examples of constructs and corresponding standardized questions include:

    • Perceived usability: the System Usability Scale (SUS)
    • Perceived ease of a task: the Single Ease Question (SEQ)
    • Website user experience: the SUPR-Q
    • Likelihood to recommend (loyalty): the Net Promoter Score (NPS)
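    One practical payoff of standardization is a fixed, well-documented scoring rule. The sketch below scores one participant’s raw SUS responses (1-5 agreement ratings on the ten standard items); the scoring formula is the standard one, while the sample ratings are invented.

```python
def sus_score(responses):
    """Score one participant's ten SUS items (each rated 1-5).

    Odd-numbered items contribute (rating - 1); even-numbered items
    contribute (5 - rating). The sum is multiplied by 2.5 for a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Invented example ratings for one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # 85.0
```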

  8. Consider Conscientious and Consistency Check Questions. When using paid participants, a certain percentage may misrepresent themselves to get the honorarium, especially when you’re looking for a specialized population such as financial advisors, physicians, or IT decision makers. While you don’t want to bog down a survey script with “trick” questions, including subtle checks can often provide insight into the authenticity of the respondents. For example, the following item addresses TV brands:

    Which of the following TV brands have you researched in the last 12 months?
    • Vizio
    • Samsung
    • LGE
    • Sony

    In the above example, LGE is a fictional brand. Participants who select it are either misrepresenting themselves, not paying attention, or simply made a mistake. You can use this information to decide whether to exclude them or, if not, how much credence to give their responses, as in the sketch below.
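    A minimal sketch of flagging those selections, assuming multi-select responses are stored per participant (the data and variable names are invented):

```python
# Hypothetical multi-select responses to the TV-brand question.
responses = {
    "r1": ["Samsung", "Sony"],
    "r2": ["Vizio", "LGE"],  # selected the fictional brand
    "r3": ["LGE"],           # selected the fictional brand
}

FICTIONAL = {"LGE"}  # decoy options planted in the question

# Flag respondents who selected any fictional brand for review or exclusion.
flagged = [rid for rid, sel in responses.items() if FICTIONAL & set(sel)]
print(flagged)  # ['r2', 'r3']
```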

  9. Keep It Short. Length is one of the biggest predictors of study drop-out and also leads to poorer-quality responses from those who stick it out. Keep studies short and less of a burden:
    • Fewer questions: Will you really report on each question (e.g. those demographics)?
    • Shorter questions: Use simple words instead of long ones.
    • Succinct instructions: Get to the point quickly.
    • Fewer open-ended responses: Verbatim responses are helpful, but not if there are so many that they become a burden.