Statistics

Understanding who your users are and what they think about an experience is an essential step for measuring and improving the user experience. Part of understanding your users is understanding how they are similar and different with respect to demographics, psychographics, and behaviors. These groupings are often called clusters or segments, reflecting the shared characteristics within each group. Clusters play an important role

Read More
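As a rough illustration of the clustering idea, here is a minimal one-dimensional k-means sketch in Python. The visit counts and the two-segment split are hypothetical, invented for the example; real segmentation would use more variables and a proper clustering library.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=1):
    """Minimal 1-D k-means: assign each value to its nearest center,
    then move each center to the mean of its assigned values."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical weekly visit counts for 10 users: a "light" and a
# "heavy" usage segment emerge from the data.
visits = [1, 2, 2, 3, 2, 18, 20, 22, 19, 21]
centers, clusters = kmeans_1d(visits, k=2)
```

With data this well separated, the algorithm converges to centers of 2.0 (light users) and 20.0 (heavy users) regardless of the random start.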

Researchers rely heavily on sampling. It's rarely possible, and often doesn't make sense, to measure every single person in a population (all customers, all prospects, all homeowners, etc.). But when you use a sample, the average value you observe (e.g., a completion rate or average satisfaction) differs from the actual population average. Consequently, the differences between designs or attitudes measured in a questionnaire may be the result

Read More
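A quick simulation makes the point about sampling error concrete. This sketch uses made-up satisfaction scores; the population and sample sizes are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 satisfaction ratings on a 1-5 scale.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Draw 200 samples of 50 and watch the sample means scatter
# around the true population mean.
sample_means = [statistics.mean(random.sample(population, 50))
                for _ in range(200)]

# The spread of sample means (the standard error) is roughly
# the population standard deviation divided by sqrt(50).
spread = statistics.stdev(sample_means)
```

No single sample mean equals the population mean exactly; the spread of sample means is what confidence intervals quantify.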

You can't see customer satisfaction. You can't see usability. There isn't a thermometer that directly measures someone's intelligence. While we can talk about satisfied customers, usable products, or smart people, there isn't a direct way to measure these abstract concepts. And clearly these concepts vary. We've all had experiences that left us feeling unsatisfied or conversely very delighted. We've also had our share of products

Read More

There are a number of variables that affect UX metrics. In most cases, though, you'll simply want to measure the user experience and not these other "nuisance variables" that may mask the experience users have with an interface. This is especially the case when making comparisons. In a comparative analysis you use multiple measures to determine which website or product is superior. Three of the

Read More

Statistics can be daunting, especially for UX professionals who aren't particularly excited about the idea of using numbers to improve designs. But like any skill that can be learned, it takes some time to understand statistical concepts and put them into practice. Most participants at our UX Boot Camp go from little knowledge of statistics to running statistical comparisons in just three days. Here's the

Read More

Excel is an invaluable tool for analyzing and displaying data. In Part 1 I covered some essential Excel skills, such as conditionals, absolute references, and the fill handle. In this second part I'll cover a few more advanced functionalities that mimic database manipulations. You can also find these examples in the downloadable spreadsheet. 1. VLOOKUP: VLOOKUPs separate the novice from the intermediate Excel analyst. A VLOOKUP joins

Read More
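The join a VLOOKUP performs can be mimicked outside Excel as well. Here is a small pure-Python sketch of the same idea, using invented participant data: build a lookup keyed on the shared column, then attach the looked-up value to each row.

```python
# Two "sheets": task-level results and a lookup table of participant info.
results = [
    {"participant": "P1", "time_sec": 42},
    {"participant": "P2", "time_sec": 55},
    {"participant": "P3", "time_sec": 38},
]
participants = [
    {"participant": "P1", "group": "novice"},
    {"participant": "P2", "group": "expert"},
    {"participant": "P3", "group": "novice"},
]

# Build the lookup (the "table_array") keyed on the shared column,
# then join it onto each result row, as a VLOOKUP would.
lookup = {p["participant"]: p["group"] for p in participants}
joined = [{**r, "group": lookup[r["participant"]]} for r in results]
```

Each row in `joined` now carries the participant's group alongside the task time, ready for a segmented analysis.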

Excel is a powerful program. It's like an onion: peel back layers and you reveal increasingly specialized functions. The minute you think you've mastered it, you discover a new set of functions. It can take years to learn, and unfortunately universities don't usually offer a class on Excel; students have to pick things up on their own. But after you get beyond the

Read More

Yes, of course you can. But it depends on who you ask! It's a common question and point of contention when measuring human behavior using multi-point rating scales. Can you take the average of a Likert item (or Likert-type item) similar to the following? "The website is easy to use." Here's how 62 participants responded after using the Budget rental car website (with corresponding coded values): Coded Value | Response

Read More
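Computing the mean of a coded Likert item is straightforward once each response label is mapped to a number. The 1-5 coding below is the conventional scheme, but the response tally itself is hypothetical (the actual table of 62 responses isn't reproduced in the excerpt).

```python
from statistics import mean

# Conventional 1-5 coding for an agreement item such as
# "The website is easy to use."
coding = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly Agree": 5}

# Hypothetical tally of 62 responses (invented for illustration).
tally = {"Strongly Disagree": 2, "Disagree": 5, "Neutral": 10,
         "Agree": 30, "Strongly Agree": 15}

# Expand the tally into individual coded responses and average them.
responses = [coding[label] for label, n in tally.items() for _ in range(n)]
avg = mean(responses)
```

For this made-up tally the mean works out to about 3.82, i.e., respondents on average lean toward "Agree."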

One of the best ways to make metrics more meaningful is to compare them to something. The comparison can be the same data from an earlier time point, a competitor, a benchmark, or a normalized database. Comparisons help in interpreting data, both in customer research specifically and in data analysis generally. For example, we're often interested in customers' brand attitudes both before and after

Read More