A lot of effort goes into simplifying interactions, reducing bugs and enhancing features.

While these changes may be obvious to some, they can be taken for granted by others (especially those in charge of budgets).

It is valuable to document both the effort that goes into improving the user experience and the results of that effort. You’ll want to measure the interface before and after changes are made, either with separate benchmark tests or with a comparative test of both interfaces run simultaneously.

Here are eight ways to show that the design has improved.

Show:

  1. Improvement in perceived usability using a questionnaire like the System Usability Scale (SUS): Even if you double user productivity, if users don’t think an application is more usable, negative word-of-mouth can do great damage. A step-by-step guide and calculator are available for showing statistical improvements in SUS scores.
  2. Reduction in the time it takes to complete a task: Whether users are tracking customer information, logging sales orders or entering time-sheet information, a reduction in the time it takes users to complete tasks is a metric both executives and users appreciate.
  3. Improvement in productivity for skilled users using Keystroke Level Modeling (KLM): If you don’t have the time or budget to test actual users but have tasks that are done repeatedly, a cognitive modeling technique like KLM provides a quick way to generate relative reductions in task times and improvements in worker productivity.
  4. Reduction in the number of UI problems encountered by users during a test: No one likes usability problems, and even low-budget, discount usability efforts focus on finding and fixing them. Simply document the number, severity and description of problems users encounter before and after design changes. This gives you the opportunity to show reductions in both the number and severity of problems.
  5. Increases in completion rates for key tasks: Every application has great features that users just can’t figure out. Increases in completion rates on tasks built around features that received heavy investment can bring new life to a product without adding feature bloat. People buy the benefits, not the features, and usability is a key benefit.
  6. Reduction in the number of calls to customer support: User-interface problems and failed task attempts lead to expensive calls to customer support. Showing that design changes cut the number of support calls is an easily quantified savings for a company.
  7. Increases in conversion rates: Converting more browsers to members and more members to buyers is at the core of A/B testing. Simple changes in copy, layout and navigation translate into big numbers.
  8. Improvement in task-level satisfaction ratings using something like the Single Ease Question (SEQ): Overall satisfaction measures like the SUS tell you what people think of the product in general but aren’t terribly helpful for diagnosing what to fix. They also reflect attitudes about product features you may have no control over. Asking a simple question immediately after a task attempt provides laser-like precision. Over time, task-level satisfaction improvements lead to application-level satisfaction improvements and spread through positive word-of-mouth.
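For item 1, the SUS score itself follows a fixed formula: each of the ten items is answered on a 1–5 scale, odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to land on a 0–100 scale. A minimal sketch of that calculation (the function name is my own):

```python
def sus_score(responses):
    """Compute a single respondent's SUS score from ten 1-5 responses.

    Odd items (index 0, 2, ...) are positively worded: contribute r - 1.
    Even items (index 1, 3, ...) are negatively worded: contribute 5 - r.
    The raw 0-40 sum is scaled by 2.5 to the familiar 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)
              for i, r in enumerate(responses))
    return raw * 2.5


# A respondent who answers neutrally (all 3s) scores 50.
print(sus_score([3] * 10))  # 50.0
```

To show improvement, compute this per respondent before and after the redesign and compare the two sets of scores.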
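For items 5 and 7, completion rates and conversion rates are both proportions, so the same statistical comparison applies to each: a two-proportion test tells you whether the observed increase is larger than chance would explain. One standard approach, sketched here with the normal approximation (adequate for the sample sizes typical of A/B tests; small usability samples would call for an exact method instead):

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-tailed z-test comparing two completion or conversion rates.

    Uses the pooled-proportion standard error and the standard normal
    CDF (via erf) for the p-value.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value


# Hypothetical data: 50/100 completed the task before the redesign,
# 70/100 after.
z, p = two_proportion_z(50, 100, 70, 100)
print(round(z, 2), round(p, 4))
```

A small p-value here lets you report the improvement as statistically reliable rather than just a raw difference in percentages.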

If you don’t measure the impact of design changes and quantify the benefits, someone else will. Don’t force people to guess. By providing simple quantitative measures of improvements in the user experience, you’ll have the data to justify design efforts and a better idea of which methods worked.