How do you create a better user experience?
The answer starts with asking the right questions.
While there are many questions you should ask to measure the user experience, there are a number of questions that come up repeatedly.
Here are the seven I get asked the most, along with some guidance on how to answer them. We cover all of them in our Denver UX Boot Camp.
1. What sample size do I need?
There’s no magic sample size (like 5) that always works. The right sample size depends on whether you’re finding problems in an interface, estimating something about the population, or making a comparison. The more precise you need to be, the larger the sample you need. Even with a small sample, you can still establish statistical significance or detect usability problems; you’re just limited to detecting large effects or problems that affect a lot of users. A small sample size limits the claims you can make, but there are ways of making the most of a small sample.
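The problem-discovery side of this can be made concrete with the standard binomial model: if a problem affects a proportion p of all users, the chance of seeing it at least once in a sample of n participants is 1 − (1 − p)^n. A minimal sketch (the 31% figure below is the commonly cited basis for the "5 users" heuristic, not a claim from this article):

```python
def prob_detected(p: float, n: int) -> float:
    """Chance that a usability problem affecting a proportion p of users
    shows up at least once in a sample of n participants (binomial model)."""
    return 1 - (1 - p) ** n

# A problem affecting 31% of users is seen by at least one of 5
# participants about 84% of the time; rarer problems need larger samples.
print(round(prob_detected(0.31, 5), 2))  # ~0.84
print(round(prob_detected(0.05, 5), 2))  # a 5%-of-users problem: ~0.23
```

Inverting the same formula tells you how many participants you need to have, say, a 90% chance of detecting a problem of a given frequency.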
2. What metric do I use?
The more metrics the merrier. No UX metric measures everything; when possible, triangulate by using multiple metrics. Use a continuous measure (like time or rating scales) to detect differences, as they provide more fidelity than binary data (like completion rates or agree/disagree statements). You can always degrade a continuous measure into a categorical or binary measure.
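As a quick illustration of degrading a continuous measure into a binary one: task times can be converted into a "met the goal" completion-style metric against a time goal. The 60-second goal below is a hypothetical threshold chosen for the example:

```python
# Task times in seconds for six participants (illustrative data).
times = [42, 55, 61, 48, 90, 58]

# Degrade the continuous measure into a binary one:
# did each participant meet a hypothetical 60-second time goal?
met_goal = [t <= 60 for t in times]
goal_rate = sum(met_goal) / len(met_goal)

print(goal_rate)  # 4 of 6 participants met the goal
```

The reverse is not possible: once you've only recorded pass/fail, the fidelity of the underlying times is gone, which is why collecting the continuous measure first is the safer default.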
3. What method do I use?
Among the many methods, some combination usually does the job. Most methods are varieties of surveys, usability tests, experiments, and direct observation. Where you put the emphasis depends on your location in the product development cycle: requirements, design and development, or deployment and release.
4. What statistical test do I use?
You have the method. You have the metric. Now you need the statistical test. Determine whether your metric is binary or continuous, and then determine whether you’re making a comparison or using a sample to estimate something about the population.
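That two-question decision (binary vs. continuous, compare vs. estimate) can be sketched as a simple lookup. This is a simplification I'm adding for illustration; the named tests are common choices, but the right test also depends on details like paired vs. independent samples and sample size:

```python
def suggest_test(metric: str, goal: str) -> str:
    """Rough mapping from metric type and research goal to a common
    statistical procedure. metric: 'binary' or 'continuous';
    goal: 'compare' or 'estimate'. A sketch, not a full decision tree."""
    table = {
        ("binary", "compare"): "two-proportion test (McNemar for paired data)",
        ("binary", "estimate"): "binomial (adjusted-Wald) confidence interval",
        ("continuous", "compare"): "t-test (paired or two-sample)",
        ("continuous", "estimate"): "t-based confidence interval on the mean",
    }
    return table[(metric, goal)]

print(suggest_test("continuous", "compare"))
print(suggest_test("binary", "estimate"))
```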
5. Are the results statistically significant?
Using a sample (as opposed to measuring all customers) comes with sampling error and the possibility of being fooled by randomness. To differentiate between random noise and meaningful differences, we use confidence intervals and statistical tests, which in turn generate p-values.
- The lower the p-value, the less likely it is that sampling error alone explains the results.
- You can get statistically significant results even with small sample sizes.
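To make the p-value idea concrete, here is a minimal sketch of a pooled two-proportion z-test for comparing completion rates between two designs, using only the standard library. The 18/20 vs. 11/20 data are invented for illustration, and the normal approximation is rough at small sample sizes:

```python
import math

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for comparing two completion rates with a
    pooled two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical data: 18 of 20 completed the task on design A,
# 11 of 20 on design B. Even at n = 20 per group, this large a
# difference reaches significance at the conventional 0.05 level.
print(round(two_proportion_p_value(18, 20, 11, 20), 3))
```

A smaller observed difference at the same sample size would not reach significance, which is the "limited to detecting large effects" point from question 1.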
6. How do I find participants for a study?
We recruit thousands of participants a month at MeasuringU, and it takes creative thinking and many sources to find representative participants. In general, sources are either internal or external, and every source has its biases. It’s best to mix your sources to reduce those biases. While you want a representative sample, remember that some of the most important findings in social science research come from a non-representative, biased sample: college freshmen earning extra credit in their psychology classes!
7. Which approach do I use: qualitative or quantitative?
Use both. Mixed-methods research embraces the integration of qualitative and quantitative methods into one study. Mixed-method designs fall into three categories:
- explanatory (qual explains quant)
- exploratory (qual frames quant)
- concurrent (qual and quant collected simultaneously)
These questions, and many others, arise as we set out to measure the user experience of our products, websites and services. This post provides high-level answers; for more detail and for an opportunity to put theory into practice, come to our annual Denver UX Boot Camp. More details are also in Quantifying the User Experience and Customer Analytics for Dummies.