In 2015, our articles were served up 2.3 million times to 900,000 visitors. Thank you!
We covered topics including the essentials of usability testing, finding the right sample size, and better ways of measuring the customer experience.
Here, in descending order, are the ten articles that received the most page views.
While there are many questions you should ask to measure the user experience, a number of them come up repeatedly (from sample size to metrics and methods). Here are the seven I get asked the most, along with some guidance on how to answer them. We cover all of them in our Denver UX Boot Camp.
Reliability is a measure of the consistency of a metric or a method. Before you can establish validity, you need to establish reliability. There are four common ways of measuring reliability: inter-rater, test-retest, parallel forms, and internal consistency reliability, which I describe with examples in this article.
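Internal consistency, for instance, is commonly summarized with Cronbach's alpha. Here is a minimal sketch of the computation, using a hypothetical respondents-by-items matrix of survey ratings (the data and function name are illustrative, not from the article):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents answering 3 related survey items
scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
])
alpha = cronbach_alpha(scores)
```

Values closer to 1 indicate the items move together and are measuring the same underlying construct.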
Determining the sample size involves identifying the type of metrics you’ll collect and how you’ll collect them. The right sample size typically depends on which of three research goals you have: uncovering problems or insights, estimating a parameter, or making a comparison. This article looks at eight research studies and discusses how to determine the sample size for each.
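For the problem-discovery goal, a common approach uses the binomial relationship 1 − (1 − p)^n: the chance of seeing, at least once, a problem that affects a proportion p of users in a sample of n. A minimal sketch of solving for n (the function name and example values are illustrative):

```python
import math

def discovery_sample_size(p: float, goal: float = 0.85) -> int:
    """Smallest n such that a problem affecting proportion p of users
    is observed at least once with probability >= goal.

    Solves 1 - (1 - p)**n >= goal for n.
    """
    return math.ceil(math.log(1 - goal) / math.log(1 - p))

# e.g. sample size for an 85% chance of seeing a problem
# that affects 31% of users at least once
n = discovery_sample_size(0.31, goal=0.85)
```

Rarer problems (smaller p) drive the required sample size up quickly, which is why discovery studies targeting common problems can stay small while parameter-estimation studies usually need more participants.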
Customer research lends itself well to integrating qualitative and quantitative methods, called a mixed-methods approach. While you can combine qualitative and quantitative methods at various points, the three most common research designs are an explanatory sequential design, an exploratory sequential design, and a convergent parallel design.
While we often talk about usability tests as if there is one type of usability test, the truth is there are several varieties, each addressing different research goals. The five types described in this article are: problem discovery, benchmark, competitive, eye-tracking, and learnability.
When measuring the customer experience, one of the first things you need to understand is how to identify and categorize the data you come across. It enables you to better summarize your data, select the right statistical test and determine the right method for computing the study sample size. While you can classify data in a number of ways, two of the most common and helpful ways are detailed, along with a diagram to help.
Every product, website, or design has a user interface. If it’s used, it has a user experience. What differentiates a good user experience from a bad one is not based on how many awards and accolades it gets; instead, it’s based on superior measurable outcomes. Using the framework of defining metrics, users, and tasks, and measuring before and after changes helps ensure the user experience is quantifiably better.
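One way to make “measurably better” concrete is to compare a task metric before and after a design change. A minimal sketch using Welch’s t statistic on hypothetical task-completion times (the data and function name are illustrative, not from the article):

```python
import math
import statistics

def welch_t(before: list, after: list) -> float:
    """Welch's t statistic comparing two independent samples
    (e.g. task times before and after a redesign)."""
    m1, m2 = statistics.mean(before), statistics.mean(after)
    v1, v2 = statistics.variance(before), statistics.variance(after)
    se = math.sqrt(v1 / len(before) + v2 / len(after))
    return (m1 - m2) / se

# Hypothetical task-completion times (seconds) from two benchmark studies
before = [62, 75, 58, 80, 71, 66]
after = [48, 55, 51, 60, 45, 58]
t = welch_t(before, after)
```

A large positive t here would suggest task times genuinely dropped after the change, rather than varying by chance; comparing the statistic against a t distribution gives the p-value and confidence interval.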
Across dozens of tutorials, five books, articles, boot camps, and discussions with both seasoned and new UX professionals, I’ve noticed a number of common problems and themes related to measuring the customer experience. Here’s a list of 25 rules and recommendations that people tend to find most helpful.
There’s a lot to measure in the customer experience. There are also many ways to collect the measurements. While the “right” methods and metrics depend on your industry and study goals, this list covers most of the online and offline customer experience. This article includes a cross section of the four types of analytics data to collect, with an emphasis on collecting customer attitudes via surveys.
The world continues to go mobile. Here’s an updated version of mobile facts and insights from 2013, along with some new additions and a look back on how accurate some of the 2013 projections were. I’ve again included as many sources as possible so you can double-check our conclusions. It was as popular in 2015 as it was in 2013, suggesting there’s a lot of demand out there for mobile data!