Asking questions immediately after a user attempts a task complements task-performance data such as task times and completion rates. Post-task satisfaction data differs somewhat from the questionnaires given after a usability test (such as the SUS). There is a strong correlation (r > .6) between post-task ratings and post-test ratings: knowing one lets you predict about 36% of the variance in the other [PDF]. However, even this relatively strong association shows that the two measure slightly different things.
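The 36% figure follows directly from squaring the correlation coefficient: r squared is the share of variance in one measure predictable from the other. A minimal illustration of that arithmetic (the value 0.6 is the reported lower bound, not data from any specific study):

```python
# Variance explained from a correlation coefficient.
# r^2 (the coefficient of determination) is the proportion of variance
# in one measure that can be predicted from the other.

r = 0.6  # reported correlation between post-task and post-test ratings

variance_explained = r ** 2  # 0.36, i.e. about 36%

print(f"r = {r}, r^2 = {variance_explained:.2f} "
      f"({variance_explained:.0%} of variance explained)")
```

So an r of .6 leaves roughly 64% of the variance unexplained, which is why the two kinds of ratings can still diverge in practice.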

The just-attempted task tends to be so salient an activity that the rating really captures users' attitude about the task itself. For that reason I've come to call it performance satisfaction. Conversely, when users respond to post-test questionnaires like the SUS, they tend to report overall attitudes about the application in general rather than their task performance, hence my term perception satisfaction.

The questions on post-test questionnaires are worded to elicit general attitudes rather than task-level events (e.g., "I would imagine that most people would learn to use this system very quickly."). The tasks users attempt do affect post-test scores, but the effect is attenuated. It is also unclear how much each task affects the post-test responses, especially the first and last tasks (so-called primacy and recency effects).

When you get low ratings on post-test questionnaires, it is difficult for you or a product team to know what to fix in an interface. Some questionnaires have sub-categories like "messages and errors," but even these reflect overall impressions rather than pointing to what in the application should be fixed. Of course, you could just ask users to tell you what to fix, but that will likely surface only the more obvious functional problems and bugs, and it won't provide the relative rankings of task experiences that help identify suboptimal performance.

Post-task ratings of usability allow you to quickly identify problem areas in an interface because they are taken immediately after a task. If you want to know where to start improving the experience, start with the functionality encountered in the tasks that elicited low ratings. Post-task questionnaires need not be long or complex (1-3 questions will suffice [PDF]).
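In practice, "start with the low-rated tasks" just means averaging the post-task ratings per task and sorting. A minimal sketch, using made-up task names and ratings (a hypothetical 7-point scale where higher means easier):

```python
from statistics import mean

# Hypothetical post-task ratings (1 = very difficult, 7 = very easy),
# one list per task, collected immediately after each task attempt.
ratings = {
    "find product": [6, 7, 5, 6],
    "checkout":     [3, 2, 4, 3],
    "edit profile": [5, 6, 6, 5],
}

# Rank tasks from lowest to highest mean rating: the lowest-rated
# task points to the functionality to improve first.
ranked = sorted(ratings.items(), key=lambda kv: mean(kv[1]))

for task, scores in ranked:
    print(f"{task}: mean rating = {mean(scores):.2f}")
```

Here "checkout" would surface first (mean 3.00), flagging its functionality as the place to begin redesign work.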

It's not an either/or question: collect both if possible. Post-test perception satisfaction will tell you what users think of your website or application. Task-level performance satisfaction will tell you what to fix to improve those overall impressions. See the Questionnaire section of the Quantitative Usability Report for more information on post-task and post-test questionnaires.