Statistics

Understanding who your users are and what they think about an experience is an essential step for measuring and improving the user experience. Part of understanding your users is understanding how they are similar and different with respect to demographics, psychographics, and behaviors. These groupings are often called clusters or segments to refer to the shared characteristics within each group. Clusters play an important role

Read More
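The idea of grouping users by shared characteristics can be sketched with a tiny k-means clustering example. This is a minimal illustration only, not the article's method, and the respondent data below is hypothetical:

```python
# A minimal sketch of grouping respondents into two segments with k-means;
# the respondent data here is hypothetical, not from the article.

# Hypothetical respondents: (age, sessions per week)
respondents = [(22, 14), (25, 12), (24, 13),   # younger, heavier users
               (58, 2), (61, 3), (55, 1)]      # older, lighter users

def kmeans_2(points, iters=10):
    """Two-cluster k-means with a simple deterministic start."""
    centers = [points[0], points[-1]]  # crude init: first and last point
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # Move each center to the mean of its group.
        centers = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return groups

segments = kmeans_2(respondents)
```

With these made-up respondents, the younger heavy users and the older light users fall into separate segments, which is the shared-characteristics grouping the excerpt describes.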

Researchers rely heavily on sampling. It's rarely possible, and rarely makes sense, to measure every member of a population (all customers, all prospects, all homeowners, etc.). But when you use a sample, the average value you observe (e.g., a completion rate or average satisfaction score) differs from the actual population average. Consequently, the differences between designs or attitudes measured in a questionnaire may be the result

Read More
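The gap between a sample average and the population average can be shown with a short simulation. The data here is made up for illustration, assuming a population with a known 70% completion rate:

```python
# A small simulation, with hypothetical data, showing why a sample
# average differs from the population average it estimates.
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: 10,000 users with a true completion rate of 70%.
population = [1] * 7000 + [0] * 3000
true_rate = statistics.mean(population)  # exactly 0.7

# Five samples of 50 users each; each observed rate wanders around 0.7,
# so an observed difference between designs may be sampling error alone.
sample_rates = [statistics.mean(random.sample(population, 50))
                for _ in range(5)]
```

Each of the five observed rates will typically land near, but not exactly on, 0.7, which is why observed differences need statistical scrutiny.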

You can't see customer satisfaction. You can't see usability. There isn't a thermometer that directly measures someone's intelligence. While we can talk about satisfied customers, usable products, or smart people, there isn't a direct way to measure these abstract concepts. And clearly these concepts vary. We've all had experiences that left us feeling unsatisfied or conversely very delighted. We've also had our share of products

Read More

There are a number of variables that affect UX metrics. In most cases, though, you'll simply want to measure the user experience and not these other "nuisance variables" that may mask the experience users have with an interface. This is especially the case when making comparisons. In a comparative analysis you use multiple measures to determine which website or product is superior. Three of the

Read More

Statistics can be daunting, especially for UX professionals who aren't particularly excited about the idea of using numbers to improve designs. But like any skill that can be learned, it takes some time to understand statistical concepts and put them into practice. Most participants at our UX Boot Camp go from little knowledge of statistics to running statistical comparisons in just three days. Here's the

Read More

Excel is an invaluable tool for analyzing and displaying data. In Part 1 I covered some essential Excel skills, such as conditionals, absolute references, and the fill handle. In this second part I'll cover a few more advanced functions that mimic database manipulations. You can also find these examples in the downloadable spreadsheet. 1. VLOOKUP: VLOOKUPs separate the novice from the intermediate Excel analyst. A VLOOKUP joins

Read More
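The join that VLOOKUP performs can be mimicked in a few lines of Python. This is a rough analogue of Excel's exact-match `=VLOOKUP(id, table, 2, FALSE)`, not the article's spreadsheet, and the lookup table and rows below are hypothetical:

```python
# A rough Python analogue of Excel's exact-match VLOOKUP; the lookup
# table and data rows are hypothetical, not the article's spreadsheet.
lookup = {"p1": "novice", "p2": "expert", "p3": "novice"}  # id -> group

rows = [("p1", 34.2), ("p2", 21.0), ("p4", 40.5)]  # (id, task time in s)

# Like =VLOOKUP(id, table, 2, FALSE): join the group onto each row,
# returning "#N/A" when the key is missing, as Excel would.
joined = [(pid, t, lookup.get(pid, "#N/A")) for pid, t in rows]
```

Note the third row: an id with no match yields `#N/A`, mirroring Excel's behavior when an exact-match lookup fails.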

Excel is a powerful program. It's like an onion: peeling back layers reveals increasingly specialized functions. The minute you think you've mastered it, you discover a new set of functions. It can take years to learn, and unfortunately there's not usually a class on Excel in university. Students have to pick things up on their own. But after you get beyond the

Read More

Yes, of course you can. But it depends on who you ask! It's a common question and point of contention when measuring human behavior using multi-point rating scales. Can you take the average of a Likert item (or Likert-type item) similar to the following? "The website is easy to use." Here's how 62 participants responded (with corresponding coded values) after using the Budget rental car website:

Read More
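Taking the average of a coded Likert item can be sketched in a few lines. The 1-5 coding is the conventional one, but the response counts below are hypothetical, not the article's data:

```python
# A minimal sketch of averaging a Likert item under the usual 1-5 coding;
# the response counts below are hypothetical, not the article's data.
coding = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly Agree": 5}

# 62 hypothetical responses to "The website is easy to use."
responses = (["Strongly Agree"] * 20 + ["Agree"] * 25 + ["Neutral"] * 10 +
             ["Disagree"] * 5 + ["Strongly Disagree"] * 2)

values = [coding[r] for r in responses]
mean_score = sum(values) / len(values)  # treats the item as interval data
```

The last line is exactly the point of contention the excerpt raises: computing a mean treats ordinal response options as if they were equally spaced interval data.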

One of the best ways to make metrics more meaningful is to compare them to something. The comparison can be the same data from an earlier time point, a competitor, a benchmark, or a normalized database. Comparisons help in interpreting data in both customer research specifically and in data analysis in general. For example, we're often interested in customers' brand attitudes both before and after

Read More