Identifying what users are trying to do is a key first step. Once you know what tasks you want to test, you’ll want to create realistic task scenarios for participants to attempt.
A task is made up of the steps a user has to perform to accomplish a goal. A task scenario describes what the test user is trying to achieve, providing context and the details necessary to accomplish the goal.
Crafting task scenarios is a balancing act: provide enough information that users aren’t left guessing what they’re supposed to do, but not so much that you fail to simulate the discovery and nonlinearity of real-world application usage.
- Be specific: Give participants a reason or purpose for performing the task. Instead of a generality like “find a new kitchen appliance,” ask them to find a blender for under $75 that has high customer ratings.
While users might start searching with only a general idea of what they want, they will quickly narrow their selection based on the usual suspects: price, indicators of quality, and recommendations. In the artificial world of usability testing, users will often run into problems if you are too vague, and they will look to the moderator (if there is one) to figure out what you want them to find. Don’t be so vague that users have to guess what you want them to do. For example: “You need to rent a mid-sized car from Boston’s Logan Airport, picking it up on July 21st at 10am and returning it on July 23rd at noon.”
- Don’t tell the user where to click and what to do: While providing specific details is important, don’t walk users through every step. Leading a user too much will bias the results and make them less useful. For example, instead of saying “Click on the small check box at the bottom of the screen to add GPS,” just say “Add GPS to your rental car.”
- Use the user’s language and not the company’s language: It’s a common mistake to mirror the internal structure of a company in a website’s navigation. It’s also bad practice to ask participants to do things using internal company jargon. If a scenario uses terms users don’t, it can lead to false-positive test results or outright confusion. Do users really use the term “asset” when referring to their kids’ college funds? Will a user know what a product “configurator” is, or an “item-page,” or even the “mega menu”?
- Have a correct solution: If you ask a user to find the rental car location nearest to a hotel address, there should be a correct choice. This makes the task more straightforward for the participant and makes it easier for you to know whether a task was successfully completed. The problem with a “Find a product that’s right for you” task is that participants aren’t in the mindset of finding information to solve their own problems. At that moment, there probably isn’t a product that’s right for them; they’re more interested in getting the test done and collecting their honorarium. This can lead to a sense that any product selection is correct and inflate basic metrics like task-completion rates.
- Don’t make tasks dependent (if possible): It is important to alternate the presentation order of tasks because there is a significant learning effect. If your tasks have dependencies (e.g., create a file in one task, then delete the same file in another), a user who fails one task will often necessarily fail the other. Do your best to avoid dependencies (e.g., have the user delete a different file). This isn’t always possible if you’re testing something like an installation process, but be cognizant of both the bias and the complications that dependencies introduce.
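One common way to alternate presentation order and spread the learning effect evenly is a cyclic Latin square, where each task appears in each position exactly once across participant groups. As a hedged illustration (the function name and task labels are invented for this sketch, not from the article):

```python
def latin_square_orders(tasks):
    """Build one task order per participant group using a simple cyclic
    Latin square: each task appears once in every position overall."""
    n = len(tasks)
    # Row k is the task list rotated k places to the left.
    return [[tasks[(start + i) % n] for i in range(n)] for start in range(n)]

# Hypothetical tasks from a car-rental study
orders = latin_square_orders(["rent car", "add GPS", "find location", "checkout"])
for row in orders:
    print(row)
```

Assign each participant (or group) one row; with four tasks, every fourth participant sees the same order, and no task is always first or always last.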
- Provide context but keep the scenario short: You want to provide enough context to get users thinking as if they actually needed to perform the task, but don’t go overboard with the details. For example, “You will be attending a conference in Boston in July and need to rent a car.”
- Task scenarios differ for moderated and unmoderated testing: The art of task-scenario writing has been honed over the years largely through moderated, lab-based testing. If you’re conducting an unmoderated usability test, however, your scenarios require an additional level of refinement: you can’t rely on a moderator to encourage users through a task and ask them what they’d expect.
While you don’t want to lead users with step-by-step instructions, you do need to be more explicit: provide product names, specific price ranges, and brands. Some might be concerned that this leads the user, but I rarely see a task-completion rate above 90% in unmoderated benchmark studies. Even with all these details spelled out, users get lost in the navigation or the checkout process, or are confused by simple things like terminology and an overall organization that isn’t obvious to developers so close to a design.
It takes some practice to balance not leading users on the one hand against not making the task too difficult on the other. There are no universally “right” tasks, so don’t be afraid to tweak details for different methods (moderated vs. unmoderated) or different goals (findability vs. checkout). It’s even fine to read task scenarios out loud instead of having them printed or on the screen (we do this a lot with mobile testing).
For more information on writing better usability task scenarios, some of the best sources are the classics: A Practical Guide to Usability Testing (1993) by Dumas and Redish, A Practical Guide to Measuring Usability, and, for unmoderated studies, Beyond the Usability Lab.