20 Tips for your Next Moderated Usability Test

Jeff Sauro, PhD

Despite the rise in unmoderated usability testing, the bulk of evaluations are still done with a facilitator.

Whether you are sitting next to the user in a lab or sharing screens with someone thousands of miles away, here are 20 practical tips for your next moderated usability test.

  1. Shut up and listen: You need to talk to moderate a session, but don’t let the talking get in the way of discovery. Just like in any relationship, you’ve got to know when to talk, know when to listen and know when to move on.
  2. Measure completion rates: Usability = able to use. Even the most open and unstructured usability test should have users attempting tasks. Record whether users complete or don’t complete (1 or 0).
  3. Use confidence intervals around every measure, even if you don’t report them (don’t get fooled by chance): Completion rates, problem frequencies, task times and rating scales all lend themselves to informative confidence intervals that tell you and your stakeholders the plausible range for the entire user population. See Chapter 3 in Quantifying the User Experience for examples and calculations at any sample size, and the short completion-rate sketch after this list.
  4. Use at least one structured question after each task attempt that asks a user how usable they thought the task was.  I use the Single Ease Question (SEQ) and a question asking about confidence.
  5. Use a short questionnaire at the end of each session to gauge overall impressions: If possible, use a standardized questionnaire, which tends to be more reliable than a homegrown one. Ideally pick one that also lets you convert a raw score into a more meaningful rank, such as the SUS for software or the SUPR-Q for websites (see the SUS scoring sketch after this list).
  6. Record the screen so you can always go back and look at task times, double-check interactions or look for additional insights. Even though the recording won’t change, it’s amazing how your perspective can change after watching 10 more users.
  7. Pretest: Use any warm body (interns work well), then pretest with a qualified user. It takes the first few sessions to get used to the tasks and the system’s quirks, so be prepared to make changes, improvise and improve early in the testing.
  8. Over-recruit: Plan on no-shows. I typically see between 10% and 20% of users cancel. Sometimes they call and sometimes they don’t.
  9. Have plenty of backup plans: Murphy’s Law is alive and well in the usability lab. The test system will go down, the user’s phone will be too old, you’ll forget to record, the audio will fail, the user will be late, the note taker will be sick, etc.
  10. Don’t wait until you’ve tested all users before reporting on the problems: Most stakeholders want to know right away what the major issues are, without waiting weeks for you to test all the users and crank out the report. As long as you are clear that the results are preliminary, it’s usually a welcome update.
  11. Track which users encountered which problem: This allows you to estimate the percentage of problems you’ve uncovered and the sample size needed to uncover the majority of problems (given the same set of tasks, user types and interface); see the problem-discovery sketch after this list.
  12. Video of the user’s face is nice but not essential: Most of the action will be on the screen (or on the handset). It’s nice to catch those crazy facial expressions or to see when the user squints to read the font, but if you don’t have a face cam, don’t worry.
  13. Probe users about interaction problems between task attempts, not during them: The probe comes just seconds to minutes after the interaction, so the experience is still fresh. This retrospective probing technique lets you collect task times without interrupting the user or inadvertently seeding ideas, such as which path to follow.
  14. Paper and pencil are fine recording devices: they’re quiet and quick. I use custom software and Excel sheets to record problems, comments and notes, but sometimes a non-linear format that needs no power and no clicking keyboard works just fine.
  15. Have a note taker and a separate facilitator if possible: The facilitator is often kept busy asking follow-up questions, troubleshooting technical issues, answering user questions and keeping the study on track. It’s easy to miss valuable insights if you’re doing both.
  16. Review the observations and problems after each user (when possible): Review the issues while they’re fresh with another person, such as the note taker or a stakeholder. It helps you get the problem list out faster and form new hypotheses you can confirm or refute with your next set of users.
  17. Record positive findings, suggestions and usability problems: Don’t just collect the bad news. Collect the suggestions, positive comments and features that go smoothly. While a development team will often want to get right to the problems, most will appreciate that users and usability professionals aren’t all gloom and doom. See Joe Dumas’s great book, Moderating Usability Tests: Principles and Practices for Interacting.
  18. Illustrate issues using screenshots and categorize problems: Sorting problems into logical groups such as “buttons,” “navigation” and “labels” along with a good picture can really help with digesting long lists.
  19. Use highlight videos: Short clips of the most common usability problems or illustrative examples help stakeholders who rarely have time to pore over hours of video. Every one of our reports is full of data, but sometimes the best way to illustrate what the data says is not with a graph but with a gaffe.
  20. Don’t lead the user: Even when users ask whether they “did it right,” or are going down the wrong path and ask “is this the right way?”, try to deflect such questions by asking back, “What would your inclination be?” or “Where would you look for that?”
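
For tip 3, here is a minimal sketch of how a confidence interval around a completion rate can be computed. It assumes the adjusted-Wald interval (add roughly two successes and two failures before applying the usual formula), a common choice for the small samples typical of moderated tests; the function name and the 7-of-10 example data are illustrative, not from the article.

```python
from math import sqrt

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull style) confidence interval for a completion rate.

    Adds z^2/2 'successes' and z^2/2 'failures' (about 2 and 2 at 95%
    confidence) before the standard Wald calculation, which behaves better
    than the plain interval at small sample sizes.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 7 of 10 users completed the task.
low, high = adjusted_wald_ci(7, 10)
print(f"Observed 70%; 95% CI roughly {low:.0%} to {high:.0%}")
```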
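
For tip 5, the sketch below shows the standard SUS scoring arithmetic: ten items on a 1-5 scale, odd items contribute (response minus 1), even items contribute (5 minus response), and the sum is multiplied by 2.5 to give a 0-100 score. Converting that raw score into a percentile rank requires a normative dataset, which isn't reproduced here; the sample responses are made up for illustration.

```python
def sus_score(responses):
    """Convert ten 1-5 SUS item responses into the 0-100 SUS score.

    Odd-numbered items are positively worded (score minus 1); even-numbered
    items are negatively worded (5 minus score). The adjusted sum is scaled
    by 2.5 so the result ranges from 0 to 100.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    adjusted = [(r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1)]
    return sum(adjusted) * 2.5

# Example: one participant's (hypothetical) raw responses.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # 85.0
```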
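
For tip 11, tracking which users hit which problem lets you estimate p, the average probability that a given problem affects a given user; the familiar 1 - (1 - p)^n formula then estimates the share of such problems you've seen and the sample size needed to reach a discovery goal. A minimal sketch, assuming p has already been estimated from the problem-by-user matrix (small-sample adjustments to p are left out):

```python
from math import ceil, log

def discovery_rate(p, n):
    """Expected proportion of problems seen at least once after n users,
    when each problem affects any given user with probability p."""
    return 1 - (1 - p) ** n

def users_needed(p, goal=0.85):
    """Smallest sample size expected to uncover `goal` of the problems."""
    return ceil(log(1 - goal) / log(1 - p))

# Example: problems that affect about 1 in 3 users (p = 0.31).
print(f"{discovery_rate(0.31, 5):.0%} of such problems expected after 5 users")
print(f"{users_needed(0.31, 0.85)} users needed to see about 85% of them")
```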