1. Quantifying usability: Usability is all about the user (i.e., people). Talk of using numbers to describe human-computer interaction gets some people upset. Usability is typically considered a qualitative activity, not the place for cold number-crunching. Throw in some probability and statistics and you've really got some folks going.
2. Reliability & quality of usability evaluators: Ask a group of independent usability professionals to examine an interface and identify its problems, and you'll get lists with only a small overlap between them. This has been demonstrated in several comparative usability evaluations. Should we question the veracity of usability claims? Do such discussions hurt the credibility of the field?
3. Certification: Want to call yourself a usability professional? Maybe you should be certified. Will a certification board and process lift the quality of work or just add a layer of bureaucracy that doesn’t mean more than the piece of paper it’s printed on?
4. Unmoderated testing versus lab-based moderated testing: There are dozens of tools to collect data from users quickly. Are such tools a threat to practitioners? Can we trust the data?
5. Effectiveness of heuristic evaluations: Does having a usability expert review an interface against a set of heuristics do more harm than good? Are heuristic evaluations 99% bad? Are such discount-usability methods effective, or should they be cleared off the shelves?
6. Sample size: Few things generate more controversy than the number of users you need to test in a usability evaluation. Some just say "a lot," others say five, and still others do their best to avoid talking sample sizes altogether. The real answer depends on a few factors, but there are solid mathematical ways of finding the optimal sample size for usability tests. Of course, using such methods to find the optimal sample size may get some upset (see controversy 1).
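One illustration of such a mathematical approach is the widely cited binomial problem-discovery model (often associated with Nielsen and Landauer), which assumes each user independently encounters a given problem with some probability p. A minimal Python sketch under that assumption (the function names are mine):

```python
import math

def proportion_discovered(p: float, n: int) -> float:
    """Expected proportion of problems found by n users, assuming each
    user independently encounters a given problem with probability p."""
    return 1 - (1 - p) ** n

def sample_size(p: float, goal: float) -> int:
    """Smallest n whose expected problem-discovery proportion meets `goal`."""
    return math.ceil(math.log(1 - goal) / math.log(1 - p))
```

With the oft-quoted average discovery rate of p = 0.31, five users are expected to uncover roughly 84% of the problems, which is where the famous "test with five users" guideline comes from; reaching a strict 85% target takes six. Lower discovery rates push the required sample size up quickly.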