Eight Laws of Statistics

Statistics doesn’t have a Magna Carta, constitution, or bill of rights to enumerate laws, guiding principles, or limits of power. There have been attempts to articulate credos for statistical practice. Two of the most enduring ones are based on the work of Robert P. Abelson, a former statistics professor at Yale. If Abelson wasn’t the

Read More »

Statistical Hypothesis Testing: What Can Go Wrong?

Making decisions with data inevitably means working with statistics and one of its most common frameworks: Null Hypothesis Significance Testing (NHST). Hypothesis testing can be confusing (and controversial), so in an earlier article we introduced the core framework of statistical hypothesis testing in four steps: Define the null hypothesis (H0). This is the hypothesis that

Read More »
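
To make the four-step framework from the excerpt above more concrete, here is a minimal sketch of an NHST workflow in Python using a two-sample t-test on made-up task-time data. The data values, the alpha level, and the choice of test are illustrative assumptions, and the comments after step 1 paraphrase the standard NHST outline rather than the article’s exact wording.

```python
# Minimal NHST sketch: comparing task times from two designs with a t-test.
# All data values here are hypothetical, used only to illustrate the steps.
from scipy import stats

# Step 1: Define the null hypothesis (H0): the two designs have equal mean task times.
# Step 2: Choose a significance level (alpha) before collecting the data.
alpha = 0.05

# Step 3: Collect data and compute the test statistic.
design_a = [42, 51, 38, 47, 55, 44, 39, 50]  # task times in seconds (hypothetical)
design_b = [35, 40, 33, 38, 42, 36, 31, 39]
t_stat, p_value = stats.ttest_ind(design_a, design_b)

# Step 4: Compare the p-value to alpha and decide whether to reject H0.
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f} < {alpha}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f} >= {alpha}: fail to reject H0")
```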

How to Statistically Compare Two Net Promoter Scores

When we wrote Quantifying the User Experience, we put confidence intervals before tests of statistical significance. We generally find fluency with confidence intervals easier to achieve and more valuable than fluency with formal hypothesis testing. We also teach confidence intervals in our workshops on statistical methods. Most people, even non-researchers, have been exposed

Read More »
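
In the spirit of the excerpt above, which favors confidence intervals over formal tests, the sketch below compares two Net Promoter Scores by building an interval around their difference. The response counts are hypothetical, and the Wald-style standard error is one common approximation rather than necessarily the method the article describes.

```python
# Comparing two Net Promoter Scores with a confidence interval around their difference.
# Counts are hypothetical; the Wald-style standard error is one common approximation,
# not necessarily the method the article recommends.
import math
from scipy import stats

def nps_and_se(promoters, passives, detractors):
    n = promoters + passives + detractors
    p_pro, p_det = promoters / n, detractors / n
    nps = p_pro - p_det
    # Treat each response as scored +1 (promoter), 0 (passive), -1 (detractor).
    se = math.sqrt((p_pro + p_det - nps ** 2) / n)
    return nps, se

def compare_nps(sample_a, sample_b, confidence=0.95):
    nps_a, se_a = nps_and_se(*sample_a)
    nps_b, se_b = nps_and_se(*sample_b)
    diff = nps_a - nps_b
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return diff, (diff - z * se_diff, diff + z * se_diff)

# Hypothetical samples: (promoters, passives, detractors)
diff, (low, high) = compare_nps((120, 60, 20), (90, 70, 40))
print(f"Difference = {diff:.0%}, 95% CI {low:.0%} to {high:.0%}")
# If the interval excludes zero, the difference is statistically significant at that level.
```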

Confidence Intervals for Net Promoter Scores

The Net Promoter Score (NPS) is a widely used metric, but it can be tricky to work with statistically. One of the first statistical steps we recommend that researchers take is to add confidence intervals around their metrics. Confidence intervals provide a good visualization of how precise estimates from samples are. They are particularly helpful

Read More »
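
As a rough illustration of the excerpt above, this sketch puts a confidence interval around a single Net Promoter Score. The response counts are hypothetical, and the Wald-style interval is one common approximation that may differ from the method the article recommends.

```python
# Confidence interval for a single Net Promoter Score.
# Counts are hypothetical; the Wald-style interval is one common approximation.
import math
from scipy import stats

def nps_confidence_interval(promoters, passives, detractors, confidence=0.95):
    n = promoters + passives + detractors
    p_pro, p_det = promoters / n, detractors / n
    nps = p_pro - p_det
    # Variance of responses scored +1 (promoter), 0 (passive), -1 (detractor).
    variance = p_pro + p_det - nps ** 2
    se = math.sqrt(variance / n)
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return nps, nps - z * se, nps + z * se

nps, low, high = nps_confidence_interval(promoters=60, passives=25, detractors=15)
print(f"NPS = {nps:.0%}, 95% CI {low:.0%} to {high:.0%}")
```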

How Confident Do You Need to be in Your Research?

Every estimate we make from a sample of customer data contains error. Confidence intervals tell us how much faith we can have in our estimates. Confidence intervals quantify the most likely range for the unknown value we’re estimating. For example, if we observe 27 out of 30 users (90%) completing a task, we can be

Read More »
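
To illustrate the kind of interval the excerpt above refers to, the sketch below computes an adjusted-Wald confidence interval for 27 of 30 users completing a task. The adjusted-Wald method is a common recommendation for small-sample completion rates, but it is shown here as an assumption rather than as the article’s stated procedure.

```python
# Adjusted-Wald confidence interval for a completion rate (e.g., 27 of 30 users).
# A common small-sample method, shown as an illustration rather than necessarily
# the exact procedure the article uses.
import math
from scipy import stats

def adjusted_wald_ci(successes, trials, confidence=0.95):
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    # Add z^2/2 successes and z^2 trials before computing a standard Wald interval.
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald_ci(27, 30)
print(f"Observed 27/30 = 90%; 95% CI roughly {low:.0%} to {high:.0%}")
```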

What Does Statistically Significant Mean?

Statistically significant. It’s a phrase that’s packed with both meaning and syllables. It’s hard to say and harder to understand. Yet it’s one of the most common phrases heard when dealing with quantitative methods. While the phrase statistically significant represents the result of a rational exercise with numbers, it has a way of evoking as

Read More »

Measuring User Confidence in Usability Tests

Are you sure you did that right? When we put the effort into making a purchase online, finding information or attempting tasks in software, we want to know we’re doing things right. Having confidence in our actions and the outcomes is an important part of the user experience. That’s why we ask users how confident

Read More »

Are Men Overconfident Users?

There are some interesting known differences between men and women in the psychological literature. For example, women tend to be better judges of emotion when looking at faces for just 0.2 seconds [pdf]! And across many measures of ability, while both men and women tend to exhibit overconfidence, men are generally more overconfident than

Read More »