5 Techniques to Make your UX Research More Effective

Jeff Sauro, PhD

A lot of UX methods exist along with recommendations on when to use them.

Some activities tend to cross methods: operationalizing research questions, making data collection more efficient, and making the most of both what users say and what they do.

Here are five techniques we’ve found that make our UX research more effective (and often more efficient).

1. Use a Research Matrix

To ensure a study design addresses the research questions and guides decision-making, we’ve found that a simple grid (or matrix, because it sounds cooler) aligning research questions to study components helps; a small sketch follows the steps below. To create the research matrix:

  1. List the research goals and hypotheses (e.g. What are the pain points users have with purchasing and installing our enterprise software product?).
  2. Place the research goals, questions, or hypotheses at the top of a grid.
  3. Identify how the research goals or questions will be addressed in the research study. These include specific questions, observations, tasks, and open-ended comments.
  4. Look for gaps where you have light or no coverage of the research questions.
  5. Cut the bloat. Questions or tasks that don’t match up with research questions are candidates for removal. (Do you really need 30 demographic questions?)
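
Here’s a minimal sketch of what such a matrix might look like in code; the research questions and study components are hypothetical, and a spreadsheet works just as well as a script.

```python
# Minimal research-matrix sketch: map each research question to the
# study components (tasks, questions, observations) that address it.
# The questions and components below are hypothetical examples.
research_matrix = {
    "What pain points do users have when purchasing the product?": [
        "Task 1: Locate and purchase a license",
        "Post-task question: Describe any difficulties",
    ],
    "What pain points do users have when installing the product?": [
        "Task 2: Download and install the software",
        "Observation: Note errors during installation",
    ],
    "How do users rate overall ease of use?": [],  # no coverage yet
}

# Look for gaps where a research question has light or no coverage.
for question, components in research_matrix.items():
    if len(components) < 2:
        print(f"Gap: '{question}' is covered by {len(components)} component(s).")
```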

2. Reduce No Shows

Unfortunately, people don’t always show when they say they will. They get busy, forget, or have more pressing things to do than participate in your study. Here are several steps we’ve implemented to get no-show rates down:

  • Establish both phone and email contact for the participant.
  • Communicate it’s a dedicated slot (it’s not a focus group that will go on without them).
  • Make reminder calls (people forget).
  • Include clear directions (getting lost is sometimes not worth the hassle).
  • Ensure they know what to expect (e.g. they will be using a website or software to find information online).
  • Exploit the consistency bias. (Have participants electronically or verbally commit to being reliable.)
  • Over-recruit. Despite your best intentions, you’ll still get no-shows. Reduce the pain by over-recruiting (a quick arithmetic sketch follows this list).
  • Over-schedule. Even with backups, you’ll still have to scramble to fill spots; prescheduling reduces that pain.
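
As a rough arithmetic sketch of over-recruiting (the 15% no-show rate and the session target below are illustrative assumptions, not recommendations), you can back into how many participants to schedule:

```python
import math

def participants_to_recruit(target_sessions: int, expected_no_show_rate: float) -> int:
    """Return how many participants to schedule so that, on average,
    enough show up to fill the target number of sessions."""
    return math.ceil(target_sessions / (1 - expected_no_show_rate))

# Example: to fill 10 sessions with an assumed 15% no-show rate,
# schedule 12 participants (and consider prescheduling backups).
print(participants_to_recruit(10, 0.15))  # -> 12
```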

3. Clean Your Data

In our experience, around 10% of responses (often ranging from 3% to 20%) in surveys and unmoderated UX studies need to be tossed out or “cleaned.” These responses usually come from cheaters, speeders, respondents misrepresenting themselves, or participants just not putting forth effort, all of which threatens the validity of a study’s findings.

There’s no simple rule for excluding participants. Instead, we use a combination of the following methods to flag poor quality respondents, progressing from the more to the less obvious indicators. This process ensures we obtain quality results and valid findings from our studies.

Respondents who fail multiple of the following checks are the first to be flagged for removal.

Most Obvious and Easiest Detection Methods

  • Poor verbatim responses: A participant who has multiple verbatim responses that consist of gibberish (“asdf ksjfh”) or terse, repetitive responses (“good,” “idk”) should be removed.
  • Irrelevant responses: These answers aren’t gibberish, but they don’t address the question (they often come from bots or from inattentive respondents copying and pasting text).
  • Cheater questions: A respondent answers too many “cheater” questions wrong (e.g. “Select 3 for this response”). But we all make mistakes, so look for more violations before removing the participant.
  • Speeders: If you’ve pretested your survey or study and it takes 15 minutes to complete but a participant finished it in 2 minutes, they probably rushed through it (see the sketch after this list).
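
Here’s a minimal sketch of how a couple of these first-pass checks might be automated; the respondent records, the one-third-of-the-median speeder cutoff, and the short-verbatim heuristic are illustrative assumptions, not fixed rules.

```python
import statistics

# Hypothetical respondent records: id, completion time in minutes,
# and a list of verbatim (open-ended) responses.
respondents = [
    {"id": "R01", "minutes": 14.0, "verbatims": ["The checkout flow was confusing."]},
    {"id": "R02", "minutes": 2.1,  "verbatims": ["asdf ksjfh", "good"]},
    {"id": "R03", "minutes": 16.5, "verbatims": ["idk", "idk", "good"]},
]

median_time = statistics.median(r["minutes"] for r in respondents)

def flags(resp):
    """Return a list of quality flags for one respondent."""
    found = []
    # Speeder: finished in well under the typical time (cutoff is an assumption).
    if resp["minutes"] < median_time / 3:
        found.append("speeder")
    # Poor verbatims: multiple very short, low-effort answers.
    terse = sum(1 for v in resp["verbatims"] if len(v.split()) <= 2)
    if terse >= 2:
        found.append("poor verbatims")
    return found

for resp in respondents:
    found = flags(resp)
    if found:
        print(resp["id"], "flagged for:", ", ".join(found))
```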

Less Obvious and More Difficult Detection Methods

  • Inconsistent responses: Agreeing with statements like “The website is easy to use” and “I had trouble using the website” is inconsistent. Again, people make mistakes or may see the two statements as compatible, so look for other violations before removing.
  • Missing data: Too many unanswered questions usually means you have to throw out all of that participant’s data, and some statistical procedures require complete data.
  • Pattern detection: Conspicuous response patterns, such as straight-lining (all 5s or all 3s) or alternating between 5s and 1s, also indicate a bot or a disingenuous respondent (see the sketch after this list).
  • Screen recording: If the study is task-based with screen recordings, you can observe what participants are doing. If they aren’t doing anything or are browsing Facebook instead, they’re candidates for removal.
  • Disqualifying questions: If participants state they are familiar with fictitious brands, have bought products that don’t exist, or state they are IT decision-makers but don’t answer basic IT questions correctly, they should be removed.
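
A small sketch of a straight-lining and inconsistency check, assuming 1-to-5 rating scales and a hypothetical pairing of a positive item with a reverse-worded one:

```python
# Hypothetical 1-to-5 ratings keyed by item name for a single respondent.
ratings = {
    "easy_to_use": 5,
    "visually_appealing": 5,
    "would_recommend": 5,
    "had_trouble_using": 5,   # reverse-worded item
}

# Straight-lining: every rating identical across items.
straight_lined = len(set(ratings.values())) == 1

# Inconsistency: strongly agreeing with both a positive item and its
# reverse-worded counterpart (the item pairing is an assumption for illustration).
inconsistent = ratings["easy_to_use"] >= 4 and ratings["had_trouble_using"] >= 4

if straight_lined or inconsistent:
    print("Flag respondent for review; look for other violations before removing.")
```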

Keep in mind that you’re measuring people, not robots, and people get tired, bored, and distracted and yet still want to provide genuine input. Some level of poor quality responses is inevitable, even from paid respondents, but the goal is to winnow out those who don’t seem to provide enough effort from those who perhaps got a little distracted.

4. Code Verbatim Comments

Collecting open-ended data in an online study (survey or unmoderated study) helps get at the why behind the numbers. We’ve experimented with sophisticated algorithms for parsing verbatim responses, and so far we’ve found that nothing quite beats the brute force method of manually coding and classifying comments (as we did with our brand affinity study). While tedious, the extra effort brings insight that automated algorithms can’t provide. Here’s the process we follow to code and analyze verbatim comments.

After taking a quick scan of the open-ended comments:

  1. Sort & clean to remove blanks and group similarly phrased words.
  2. Group into themes.
  3. Split themes that are too big into smaller themes.
  4. Combine themes that have too few comments into larger ones.
  5. Quantify & display. Count the number of comments in each category to provide a frequency and graph.
  6. Add confidence intervals. Estimate the prevalence of these sentiments in the larger population using confidence intervals (a sketch of this calculation follows the list).
  7. Create variables. Each theme can be turned into a variable and used in statistical analysis or cross-tabbing.
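
Here’s a brief sketch of steps 5 and 6, using hypothetical theme counts and an adjusted-Wald interval (one reasonable choice, not the only one) for the proportion of respondents mentioning each theme:

```python
import math

# Hypothetical theme counts from manually coded open-ended comments.
theme_counts = {"pricing confusion": 18, "slow installation": 9, "liked the UI": 5}
n = 60  # assumed total number of respondents who left a comment

def adjusted_wald_ci(x: int, n: int, z: float = 1.96):
    """Approximate 95% adjusted-Wald confidence interval for a proportion."""
    p_adj = (x + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Quantify each theme and estimate its prevalence in the larger population.
for theme, count in sorted(theme_counts.items(), key=lambda kv: -kv[1]):
    low, high = adjusted_wald_ci(count, n)
    print(f"{theme}: {count}/{n} ({count/n:.0%}), 95% CI {low:.0%}-{high:.0%}")
```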

5. Derive Insights from Videos

With the advent of remote testing platforms, such as MUIQ, you can collect videos of hundreds of people attempting tasks in a short amount of time. Viewing every video, however, can take quite a long time. You’ll therefore want to make the most of your time by systematically learning as much as you can and coding observations into insights. Here’s the process we follow (with a short sketch after the steps):

  1. Start with a random (or pseudo-random) sample of videos. Don’t just pull videos sequentially.
  2. Record the ID of the participant and task. You’ll often need to reference this later.
  3. Look for symptoms of problems. Errors and hesitations are the hallmarks of interface issues.
  4. Identify an issue or insight. Determine whether the symptoms point to a usability problem or another noteworthy insight.
  5. Look for the root cause. What in the interface is causing the pre-identified problems?
  6. Repeat for the next participant. Look for patterns.
  7. Provide a frequency. Estimate the prevalence of the problem or insight.
  8. Include bonus insights. Include severity (cosmetic or catastrophic), local versus global impact (affects one page of an app versus everywhere), and confidence intervals (provide plausible upper and lower bounds for the impact).
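
A short sketch of the sampling and frequency steps, with made-up participant and task IDs and issue labels standing in for what a reviewer would actually log:

```python
import random
from collections import Counter

# Hypothetical pool of (participant_id, task_id) video recordings.
videos = [(f"P{p:03d}", f"T{t}") for p in range(1, 201) for t in (1, 2)]

# Start with a random sample of videos rather than reviewing them sequentially.
random.seed(7)
sample = random.sample(videos, 30)

# Issues a reviewer logged while watching the sampled videos, keyed by
# participant and task ID (IDs and issue labels are made up for illustration).
logged_issues = {
    ("P014", "T1"): "missed the coupon-code field",
    ("P102", "T1"): "missed the coupon-code field",
    ("P167", "T2"): "hesitated on the install screen",
}

# Provide a frequency: share of sampled videos exhibiting each issue.
counts = Counter(logged_issues.values())
for issue, count in counts.most_common():
    print(f"{issue}: {count}/{len(sample)} ({count/len(sample):.0%})")
```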