One of the fundamental principles behind usability testing is to let the participants actually use the software, app, or website and see what problems might emerge.
By simulating use and not interrupting participants, you can detect and fix problems before users encounter them, get frustrated, and stop using and recommending your product.
So while there’s good reason to shut up and watch users, should a facilitator ever intervene? I’m not talking about stopping a dangerous situation, but when a user gets stuck and can’t proceed. What should a facilitator do?
For as long as I’ve been conducting usability tests, the questions of whether and when to assist, and what to do with the resulting data, have been a constant source of debate, and for good reason. The idea of assisting participants seems to contradict a major goal of usability testing: letting people use the product without the help of a facilitator, product manager, or engineer.
Here are ten things to consider when deciding when an assist is warranted in a usability test. I’ve pulled these from our experience at MeasuringU, from Joe Dumas & Beth Loring’s excellent book on moderating usability tests, and the Moderator’s Survival Guide by Tedesco and Tranquada.
1. What’s an Assist?
An assist is when a facilitator intervenes during a usability test to help participants, often enabling them to complete a task or a step of a task. For example, suppose a participant in a usability study is asked to add a hooded sweatshirt to the shopping cart but can’t proceed because he or she didn’t notice the error message prompting for a size selection (the small error message in the figure below). Eventually, the facilitator may need to point out the need to select a color and size to proceed to the next step in the task.
2. Why Assist?
If participants become stuck early in a task or study, providing assistance may allow you to uncover usability issues later in the study that you would otherwise miss. The benefits of intervening (additional problems found or more data collected) should outweigh the costs (potentially unrepresentative data for one task because of the intervention). In the hoodie example, telling the participant how to proceed lets you gather more data about the shopping cart pages and checkout experience.
3. Don’t Be Afraid to Assist
One of the first things facilitators learn is not to coach or lead participants to complete tasks, or to turn the session into a demonstration. It can be very difficult to learn not to interfere because people generally want to help others who are struggling. But sometimes intervention is warranted, so you should be prepared to assist if needed. There’s no one-size-fits-all rule for knowing when to step in, but there are some common signs (covered next) that it may be time to jump in.
4. Know When to Assist
Knowing when to wait and when to intervene takes practice. Here are some good indications of when it’s time to assist a participant:
- When a participant is really struggling: If a participant repeatedly has trouble or keeps hitting the same errors, there’s no reason to let the experience drag on. Don’t jump in at the first sign of struggle, though; make sure the participant really can’t progress.
- Time is running out: There’s a limited amount of time in usability tests. You should assist a participant when he or she takes too long on one task and won’t get to other tasks, screens, or parts of the study.
- There’s an imminent disaster: If a participant’s continued actions will lead to a problem, such as crashing a system, deleting data, or inadvertently charging a credit card, you’ll need to intervene.
- When it’s not really over: Sometimes participants will think they’ve completed the task and are ready to move on when in fact they haven’t completed the task (or all parts of the task). If this premature task completion happens, you may need to intervene and nudge the participant into considering other options.
5. Use Assists Sparingly
While there are times when you need to assist, you should still keep them to a minimum. If you find you’re frequently assisting in a study, consider redesigning your tasks or study. If you or another moderator frequently assist across different studies, then reconsider your facilitation style to see how you can reduce assistance.
6. Measure Assists
You should record whether you had to assist a participant, how many times, and for what reason. Knowing the frequency and causes of assists can help you diagnose usability problems or refine your testing protocol (for example, there may be a problem with your study protocol, task instructions, or the software being tested).
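A simple way to keep this record is a structured log with one entry per assist. The sketch below is a minimal, hypothetical example (the field names, tasks, and reasons are illustrative, not from the original article) showing how tallying assists by task and by reason can point to where problems cluster:

```python
from collections import Counter

# Hypothetical assist log: one record per assist, noting the
# participant, the task, and why the facilitator stepped in.
assist_log = [
    {"participant": "P1", "task": "add_hoodie", "reason": "missed error message"},
    {"participant": "P3", "task": "add_hoodie", "reason": "missed error message"},
    {"participant": "P5", "task": "checkout",   "reason": "time running out"},
]

# Tallying by task and by reason helps separate product problems
# (everyone misses the same error message) from protocol problems
# (tasks that simply run too long).
by_task = Counter(rec["task"] for rec in assist_log)
by_reason = Counter(rec["reason"] for rec in assist_log)

print(by_task)    # which tasks required the most help
print(by_reason)  # why facilitators had to step in
```

Even a lightweight log like this makes assist frequency reportable alongside the usual task metrics.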
7. Report Task Completion With and Without Assists
With a record of assists, you can choose how to report completion rates: with assistance or without. Some organizations have policies on which completion rate to report—assisted, unassisted, or both. The unassisted completion rate is by definition lower than the assisted completion rate (when there are assists). Depending on the consequences of the data (if a minimum task completion rate needs to be met), there can be controversy over whether to report assisted task completion rates.
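The two rates fall out directly from per-participant task records. This is a minimal sketch with made-up data (the record structure is an assumption, not from the article), illustrating why the unassisted rate can never exceed the assisted rate:

```python
# Hypothetical per-participant results for one task: each record
# notes whether the task was completed and whether an assist was given.
results = [
    {"completed": True,  "assisted": False},
    {"completed": True,  "assisted": True},   # completed only with help
    {"completed": False, "assisted": True},   # assisted but still failed
    {"completed": True,  "assisted": False},
    {"completed": False, "assisted": False},
]

n = len(results)

# Assisted completion rate: every completion counts, with or without help.
assisted_rate = sum(r["completed"] for r in results) / n

# Unassisted completion rate: only completions without facilitator help.
unassisted_rate = sum(r["completed"] and not r["assisted"] for r in results) / n

print(f"assisted: {assisted_rate:.0%}, unassisted: {unassisted_rate:.0%}")
```

Here the assisted rate is 60% and the unassisted rate is 40%; reporting both, with the number of assists, gives stakeholders the full picture.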
8. Not Every Intervention Is an Assist
Sometimes the facilitator will need to intervene while participants are completing a task without necessarily assisting them in completing it. Examples of interventions that are not assists include:
- Clarifying task instructions or repeating the task
- Prompting a participant to think aloud
- Helping a participant recover from a software bug, computer crash, or some other technical issue
9. All Assists Aren’t Created Equal
Dumas and Loring suggest the following four levels of assistance, ranging from the least intervention to the most, and recommend reporting the level given:
- Breaking a repeating sequence by asking participants to perhaps read the task again or consider another option.
- Providing a general hint such as letting participants know the information they need is on a screen they’ve been to already.
- Providing a specific hint such as telling participants that the function they need is under a menu item or on the next screen.
- Telling participants what to do, when all else fails, such as instructing them to click a certain button or navigate to an element.
10. Assist as Diversion
Sometimes providing direct assistance may make participants feel, well, not great. One way to minimize this mutually uncomfortable situation is to use what Tedesco and Tranquada refer to as a diversionary assist. In a diversionary assist, you stop participants from what they’re currently doing to ask about something unrelated (for example, something they said earlier in the task about a feature). After they answer your question, participants return to the point where they were stuck, and you can ask them to move on by clicking the unnoticed function (a Level 4 assist).