There are many great books (some classics) on conducting usability tests.

These books provide the blueprint for conducting an ideal usability test. One common theme they present is that, when conducting a test, you should act as a neutral observer as much as possible.

Don’t lead users, don’t put words in their mouths, and don’t run tests as a formality to confirm your preconceived ideas about the product.

But what do usability evaluators actually do in the lab?

A relatively recent analysis by Norgaard & Hornbæk [pdf] provides some insight into what goes on behind the one-way mirror.

This analysis is different from the famous Comparative Usability Evaluation (CUE) studies. The CUE studies largely focused on the results of usability tests and found that evaluators tend to find different sets of problems (especially when the tasks and methods differ). While disconcerting, it turns out that many professions relying on expert judgment show similarly high variability.

14 Usability Test Sessions Analyzed

The analysis reviewed audio and video from 14 different usability testing sessions at seven different companies in Denmark. Around half the sessions were run by in-house usability teams (IT or product development); the others were run by consulting firms hired to conduct usability tests.

Of the 14 usability sessions analyzed:

  • 8 contained examples of evaluators confirming suspected usability problems from preconceived opinions: “Now I am just looking for ammunition”
  • 13 of the facilitators asked leading questions: “Did you notice this column” or “Can you do this task another way?”
  • 10 asked questions about product utility but utility issues were almost always presented as less important than usability issues
  • 8 encountered technical problems including system crashes
  • 0 carried out any structured problem reviews immediately after a test session
  • 13 asked about expectations: “What would you expect to happen if you clicked on this link?”

Usability Testing as Experimental Research

It is called a usability laboratory, so it’s no surprise that many usability professionals consider themselves unbiased observers carrying out controlled experiments. This analysis makes clear that the fast-paced, results-oriented realities of product development mean practicality can rule over experimental rigor.

I recently asked Joe Dumas, one of the living legends of usability and editor of the Journal of Usability Studies, how he interprets these results. His response was insightful:

UX professionals are supposed to be able to put their own expectations aside and remain neutral. This is what we criticize developers as not being able to do. Perhaps we are not above letting our own expectations influence how we interact and what we select to report.

So we’re far from perfect and like many professions you should listen to what we say, not what we do.

Some Practical Advice for Your Next Usability Test

Just like the CUE studies, this analysis yielded some good practical lessons. So after you’re done asking leading questions in your next usability test, consider the following:

  • System Failure: Expect your prototype to break or your application to crash, and have a back-up plan for what to test instead.
  • Create a Top 10 Observation List: Immediately after a testing session ends, generate a list of the top 5 or 10 observations or notes while they are fresh in mind. Plan to add about 10 minutes after each user for this.
  • Don’t neglect utility: You’re conducting a usability test to find and fix problems, but this might be the only time developers or product managers get candid feedback on missing features or a mismatch with the user’s workflow.