Eight Laws of Statistics

Statistics doesn't have a Magna Carta, constitution, or bill of rights to enumerate laws, guiding principles, or limits of power. There have been attempts to articulate credos for statistical practice. Two of the most enduring ones are based on the work of Robert P. Abelson, a former statistics professor at Yale. If Abelson wasn't the

Statistical Hypothesis Testing: What Can Go Wrong?

Making decisions with data inevitably means working with statistics and one of its most common frameworks: Null Hypothesis Significance Testing (NHST). Hypothesis testing can be confusing (and controversial), so in an earlier article we introduced the core framework of statistical hypothesis testing in four steps: Define the null hypothesis (H0). This is the hypothesis that
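As a minimal sketch of the NHST logic described above, the following computes an exact one-sided binomial p-value against a hypothetical null. The 27-of-30 figures and the 50% null rate are illustrative assumptions, not values from the article:

```python
from math import comb

def binom_p_value(successes, n, p0=0.5):
    """Exact one-sided binomial test: P(X >= successes | H0: p = p0)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical example: 27 of 30 users complete a task.
# H0: the true completion rate is 50%.
p = binom_p_value(27, 30)
print(p < 0.05)  # a small p-value leads us to reject H0
```

If the p-value falls below the chosen significance level (commonly 0.05), the null hypothesis is rejected; otherwise, we fail to reject it.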

How Confident Do You Need to be in Your Research?

Every estimate we make from a sample of customer data contains error. Confidence intervals tell us how much faith we can have in our estimates. Confidence intervals quantify the most likely range for the unknown value we’re estimating. For example, if we observe 27 out of 30 users (90%) completing a task, we can be
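One common way to compute such an interval for a completion rate is the adjusted-Wald (Agresti-Coull) method; a minimal sketch, assuming a 95% confidence level (z = 1.96):

```python
from math import sqrt

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion."""
    n_adj = n + z**2                      # inflate the sample size
    p_adj = (successes + z**2 / 2) / n_adj  # shift the estimate toward 0.5
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

lo, hi = adjusted_wald_ci(27, 30)
print(f"{lo:.0%} to {hi:.0%}")  # roughly 74% to 97%
```

The adjustment keeps the interval well behaved for small samples and proportions near 0 or 1, where the plain Wald formula is known to perform poorly.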

What Does Statistically Significant Mean?

Statistically significant. It’s a phrase that’s packed with both meaning and syllables. It’s hard to say and harder to understand. Yet it’s one of the most common phrases heard when dealing with quantitative methods. While the phrase statistically significant represents the result of a rational exercise with numbers, it has a way of evoking as

Measuring User Confidence in Usability Tests

Are you sure you did that right? When we put the effort into making a purchase online, finding information or attempting tasks in software, we want to know we’re doing things right. Having confidence in our actions and the outcomes is an important part of the user experience. That’s why we ask users how confident