6 Best Practices for Using Numbers to Inform Design

Jeff Sauro, PhD

Your job title doesn’t have to be “researcher” or “statistician” to use data to drive design decisions.

You can apply some best practices even when numbers aren’t your best friend.

It’s actually easier for a designer to enhance their skills with quantitative data than for a statistician to enhance their analytical skills with design principles.

Here are six best practices for using numbers to inform your design efforts that don’t require a career change or advanced degree in math.

I’ll cover these best practices in London, and in more depth at the Rome UX Boot Camp and Denver UX Boot Camp.

1. Don’t only ask whether people like a design

There’s nothing wrong with asking participants their opinion about a design, but you can and should collect much more than what people say they like. Turn what matters to your design into more specific questions. Define what makes a design successful for your organization. Is it:

  • Ease of use
  • Likelihood to use
  • Likelihood to recommend
  • Overall satisfaction
  • Reduced time on task
  • Findability

Decide what metrics matter and measure them.

2. Measure actions and attitudes

The user experience is a combination of what people do (actions) and what people think (attitudes). Measure both by having users attempt the tasks that are most important to them (top tasks) with a design. Record whether they can complete each task (effectiveness) and how difficult they found the experience (satisfaction) using the Single Ease Question (SEQ). You can then compute completion rates and average ease ratings.
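As a minimal sketch, here’s how those observations might be turned into a completion rate and an average SEQ rating. The task results and ratings below are made-up numbers for illustration, not data from any real study:

    # Compute a completion rate and a mean SEQ rating from hypothetical
    # task observations (1 = completed, 0 = failed; SEQ is rated 1-7,
    # where 7 = very easy).
    task_results = [1, 1, 0, 1, 1, 0, 1, 1]   # did each participant complete the task?
    seq_ratings = [6, 7, 3, 5, 6, 2, 7, 6]    # how easy did they say it was?

    completion_rate = sum(task_results) / len(task_results)
    mean_seq = sum(seq_ratings) / len(seq_ratings)

    print(f"Completion rate: {completion_rate:.0%}")  # 75%
    print(f"Mean SEQ rating: {mean_seq:.1f} / 7")     # 5.2 / 7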

3. Preference doesn’t always equal performance (attitudes don’t always equal actions)

When you need to decide between alternative designs, it’s good to ask participants which design they prefer to gain perspective. Instead of making it just a beauty pageant, though, have participants attempt tasks with each design (in randomized order), rate the experience, and then pick the one they prefer.

Sometimes the design participants prefer isn’t the one they performed best on. When that happens, you need to understand what’s causing the disconnect between attitude and action. Often (but not always) the ratings and performance data collected while participants use the designs are more indicative of the better design because they’re closer to actual use.
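To illustrate, here’s a small sketch that tallies stated preference against task performance for two hypothetical designs, A and B; all of the data and names below are invented:

    # Compare stated preference with task completion across two designs.
    participants = [
        # (preferred design, completed task on A?, completed task on B?)
        ("A", True, True),
        ("B", False, True),
        ("A", True, False),
        ("B", True, True),
        ("B", False, True),
        ("A", True, True),
    ]

    n = len(participants)
    prefer_a = sum(1 for pref, _, _ in participants if pref == "A")
    complete_a = sum(1 for _, a, _ in participants if a)
    complete_b = sum(1 for _, _, b in participants if b)

    print(f"Preferred A: {prefer_a}/{n}    Preferred B: {n - prefer_a}/{n}")
    print(f"Completed on A: {complete_a}/{n}    Completed on B: {complete_b}/{n}")
    # If B has the higher completion rate but A wins the preference vote,
    # investigate the disconnect before declaring a winner.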

4. Have a testable hypothesis

You can make your research efforts a lot more effective when you have a stated hypothesis to test and disprove (or fail to disprove). Your hypotheses don’t have to attempt to cure cancer; they can and should be simple. Here are some example hypotheses and corresponding metrics:

  • More people will click the red button than blue (percent of button clicks).
  • Customers will know to scroll down the homepage to view the services (percent of users who scroll down).
  • The new product names will reduce confusion (time to selection, perceived difficulty).

Then use data and scenarios to generate evidence for or against the hypotheses. Having these hypotheses articulated before your research helps ensure you’re using the right methods for testing and collecting the right metrics.
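For example, here’s a rough sketch of how you might weigh the evidence for the first hypothesis with a simple two-proportion z-test (normal approximation); the click counts are invented for illustration:

    from math import sqrt

    # How many people clicked each button out of how many who saw it.
    clicks_red, shown_red = 48, 200    # 24% clicked the red button
    clicks_blue, shown_blue = 30, 200  # 15% clicked the blue button

    p_red = clicks_red / shown_red
    p_blue = clicks_blue / shown_blue
    p_pooled = (clicks_red + clicks_blue) / (shown_red + shown_blue)

    se = sqrt(p_pooled * (1 - p_pooled) * (1 / shown_red + 1 / shown_blue))
    z = (p_red - p_blue) / se

    print(f"Red: {p_red:.0%}  Blue: {p_blue:.0%}  z = {z:.2f}")
    # |z| above roughly 1.96 suggests the difference is unlikely to be
    # sampling noise alone at the conventional 95% confidence level.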

5. Use a framework to measure before and after

A good place to start establishing a measurement framework is knowing who your users are and what they’re trying to do with an interface. But don’t stop there. After you define users and tasks, measure how the experience performs now (the benchmark), make changes iteratively, and measure how the interface performs after your efforts. This helps isolate how your changes make a difference in the experience.
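As a toy illustration of the before-and-after comparison (all task names and values below are invented), the framework ultimately boils down to measuring the same top tasks at both points and looking at the changes:

    # Compare benchmark completion rates against post-redesign completion
    # rates for the same hypothetical top tasks.
    benchmark = {"find pricing": 0.55, "create account": 0.70, "contact support": 0.62}
    after_redesign = {"find pricing": 0.78, "create account": 0.72, "contact support": 0.60}

    for task, before in benchmark.items():
        after = after_redesign[task]
        print(f"{task:>16}: {before:.0%} -> {after:.0%} ({after - before:+.0%})")
    # Measuring the same tasks before and after isolates whether the changes
    # actually moved the experience, and in which direction.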

6. Consider sampling error

You might want to test all your users, but it’s not necessary. When measuring the user experience, you’ll almost always use a sample of users, and often a small one. With any sample comes sampling error: the difference between your sample metrics and the population metrics. You want to differentiate real differences from random noise.

You don’t need to be a statistician to account for sampling error. Putting confidence intervals on your metrics is a simple and effective technique that helps you and your stakeholders understand how much uncertainty there is in your estimates and lets you compare your initial designs to your later designs.
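For instance, here’s a minimal sketch of a 95% adjusted-Wald (Agresti-Coull) confidence interval around a task completion rate, one common way to interval-estimate a proportion from a small sample; the counts below are made up:

    from math import sqrt

    # 8 of 10 hypothetical participants completed the task.
    successes, n = 8, 10
    z = 1.96  # z-value for a 95% interval

    # Adjust the counts: add z^2/2 successes and z^2 trials, then use the
    # standard Wald formula on the adjusted proportion.
    adj_n = n + z ** 2
    adj_p = (successes + z ** 2 / 2) / adj_n

    margin = z * sqrt(adj_p * (1 - adj_p) / adj_n)
    low, high = max(0.0, adj_p - margin), min(1.0, adj_p + margin)

    print(f"Observed completion rate: {successes / n:.0%}")  # 80%
    print(f"95% CI: {low:.0%} to {high:.0%}")                # roughly 48% to 95%
    # A wide interval like this (typical of small samples) shows stakeholders
    # how much sampling error could be hiding behind the observed number.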
