Was a task difficult or easy to complete? Performance metrics are important to collect when improving usability, but perception matters just as much. Asking users to respond to a questionnaire immediately after attempting a task provides a simple and reliable way of measuring satisfaction with task performance. Questionnaires administered at the end of a test, such as the SUS, measure satisfaction with the overall experience.
There are numerous questionnaires for gathering post-task responses; some of the more popular ones are:
- ASQ: After Scenario Questionnaire (3 questions)
- NASA-TLX: NASA's Task Load Index, a measure of mental effort (5 questions)
- SMEQ: Subjective Mental Effort Questionnaire (1 question)
- UME: Usability Magnitude Estimation (1 question)
- SEQ: Single Ease Question (1 question)
The SEQ is a newer addition; Joe Dumas and I tested it two years ago and found that it performs very well. In addition to the usual psychometric properties of being reliable, sensitive, and valid, a good questionnaire should also be:
- easy to respond to
- easy to administer
- easy to score
The SEQ meets all of these criteria. You can administer it on paper, electronically through any web survey service, or even verbally.
Overall, this task was?

Very Difficult | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Very Easy

Figure 1: The Single Ease Question (SEQ), a seven-point rating scale.
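Because the SEQ is a single 7-point item, scoring it is trivial. As a minimal sketch (the response values below are invented for illustration, not real study data), the score for a task is simply the mean rating across participants:

```python
# Hypothetical sketch: scoring SEQ responses for one task.
# Scale: 1 = Very Difficult ... 7 = Very Easy. Values are made up.
from statistics import mean, stdev

responses = [6, 5, 7, 4, 6, 5]  # one rating per participant

seq_mean = mean(responses)
seq_sd = stdev(responses)
print(f"SEQ mean: {seq_mean:.2f} (SD {seq_sd:.2f}) "
      f"from {len(responses)} responses")
```

This is what "easy to score" means in practice: no reverse-coded items, no weighting, just an average on a single scale.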
For the past year I've been collecting data using the SEQ on numerous tasks on websites and applications. My goal is to assemble a large database of tasks to generate a standardized post-task SEQ score. It would be nice to have standardized task completion rates and task times for classes of tasks, but I've found that slight differences in task scenarios make aggregating data difficult.
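One way such a database could support a standardized score is by converting a task's raw SEQ mean into a percentile against the database-wide distribution. The sketch below assumes an approximately normal distribution of task means; the benchmark mean and standard deviation are placeholder values, not published norms:

```python
# Hypothetical sketch: standardizing a task's SEQ mean against a benchmark
# database. BENCHMARK_MEAN and BENCHMARK_SD are invented placeholders.
from statistics import NormalDist

BENCHMARK_MEAN = 5.5  # assumed database-wide mean SEQ score (placeholder)
BENCHMARK_SD = 1.0    # assumed database-wide standard deviation (placeholder)

def seq_percentile(task_mean: float) -> float:
    """Percentile of a task's SEQ mean relative to the benchmark distribution."""
    z = (task_mean - BENCHMARK_MEAN) / BENCHMARK_SD
    return NormalDist().cdf(z) * 100

print(f"A task mean of 6.0 is at the {seq_percentile(6.0):.0f}th percentile")
```

A task scoring at the benchmark mean would land at the 50th percentile; higher means map to higher percentiles.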
The beauty of the SEQ is that users build their expectations into their response. So while adding or removing a step from a task scenario would affect completion times, users adjust their expectations to the number of steps and rate the difficulty accordingly. On your next test, use the SEQ.