
Metrics

Businesses are full of metrics. Increasingly, those metrics quantify the user experience (which is a good thing). Collecting consistent, standardized metrics allows organizations to better understand the current user experience of their websites, software, and apps. It also allows teams to track changes over time and compare against competitors and industry benchmarks. The idea of quantifying experiences is still new for many people, which is one

Read More

A benchmark study tells you where a website, app, or product falls relative to some meaningful comparison. This comparison can be to an earlier version, the competition, or an industry standard. Benchmark studies are often called summative evaluations, as the emphasis is less on finding problems and more on quantitatively assessing the current experience. To quantify, you need metrics, and UX benchmark studies can have quite

Read More

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup?

Comparing Metrics

There are several ways to compare the agreement or

Read More
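One straightforward way to quantify the agreement the excerpt alludes to is a correlation between paired task-level metrics from the two setups, plus the mean difference between them. A minimal sketch in Python; the tasks and completion rates below are invented for illustration:

```python
# Hypothetical paired metrics: the same five tasks measured in a lab
# study and on an unmoderated platform (task completion rates, 0-1).
lab = [0.80, 0.65, 0.92, 0.54, 0.73]
unmoderated = [0.76, 0.70, 0.88, 0.49, 0.78]

def pearson_r(xs, ys):
    """Pearson correlation: one simple measure of agreement."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(lab, unmoderated)
# Mean difference shows a systematic offset even when correlation is high.
mean_diff = sum(u - l for l, u in zip(lab, unmoderated)) / len(lab)
print(f"correlation r = {r:.2f}, mean difference = {mean_diff:+.3f}")
```

A high correlation with a near-zero mean difference suggests the two methods rank and scale tasks similarly; a high correlation with a large offset suggests they rank tasks the same way but one method produces systematically higher scores.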

UX metrics are a mix of attitude (what people think) and actions (what people do). To fully measure the user experience, you need to measure both. UX metrics are influenced by more than an interface. Users have preconceived notions about companies and this affects both how they think and what they do when they interact with a brand—either in a store or online. Brand attitudes

Read More

It was another busy year on MeasuringU.com with 50 new articles, a new website, a new unmoderated research platform (MUIQ), and our 5th UX Bootcamp. In 2017, over 1.2 million people viewed our articles. Thank You! The most common topics we covered include usability testing, benchmarking, the 3Ms of methods, metrics, and measurement, and working with online panels. Here's a summary of the articles I

Read More

Task completion is one of the fundamental usability metrics. It's the most common way to quantify the effectiveness of an interface. If users can't do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users have completed a task often isn't so easy. The ways to determine task completion will vary based on the

Read More
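Once completion is recorded as a binary outcome, the rate itself is easy to compute; the harder part with small usability samples is the uncertainty around it. A sketch using an adjusted-Wald confidence interval, a common choice for small-sample completion rates; the counts are hypothetical:

```python
import math

def completion_rate_ci(successes, n, z=1.96):
    """Adjusted-Wald 95% CI for a binary completion rate:
    add z^2/2 successes and z^2 trials, then apply the usual
    Wald formula to the adjusted proportion."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# Hypothetical study: 9 of 12 participants completed the task.
low, high = completion_rate_ci(9, 12)
print(f"observed rate = {9/12:.0%}, 95% CI = {low:.0%} to {high:.0%}")
```

With a dozen participants the interval is wide, which is exactly why reporting only the point estimate (here 75%) can overstate the precision of a small study.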

There are a number of ways to quantify the value of your customers throughout the customer journey. While the "best" metrics depend on your goals and specific context, here is a list of 10 that most organizations should collect. They include a mix of the four types of customer analytics: descriptive, behavioral, interaction, and attitudinal.

Customer Revenue: Understanding where, when, and how much

Read More

Website navigation is at the heart of good findability. To measure findability, we perform a tree test or a click test on a live website. In both types of studies, we collect many metrics to help uncover problems with terms and taxonomy. While the fundamental metric of findability is whether users find an item or not, often other metrics provide clues to problems in the

Read More
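To make the findability metric concrete, here is a small tally over tree-test results: the fundamental found/not-found rate, plus path directness as one of the clue metrics the excerpt mentions. The task names and trial data are made up for illustration:

```python
# Hypothetical tree-test log: one tuple per participant per task,
# recording (found the target item?, took a direct path with no backtracking?).
results = {
    "Find return policy": [
        (True, True), (True, False), (False, False),
        (True, True), (True, True), (False, False),
    ],
}

for task, trials in results.items():
    n = len(trials)
    found = sum(f for f, _ in trials)
    direct = sum(d for _, d in trials)
    # A high found rate with a low direct rate hints that labels or
    # taxonomy force users to backtrack before succeeding.
    print(f"{task}: {found / n:.0%} found, {direct / n:.0%} direct ({n} participants)")
```

The gap between the two percentages is often where the taxonomy problems hide: users eventually find the item, but only after exploring the wrong branches first.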

It's often called web surfing or web browsing, but it should probably be called web doing. While there is still plenty of time to kill using the web, in large part we're all trying to get things done. Purchasing, reserving, comparing, and communicating—Internet behavior is largely a goal-directed activity. If a website doesn't help users accomplish their goals, then it's unlikely users will return

Read More

Ask a user to complete a task and they can tell you how difficult it was to complete. But can a user tell you how difficult the task will be without even attempting it? It turns out the task description reveals much of the task's complexity, so users can predict actual task ease and difficulty reasonably well. The gap in expectations can be a powerful

Read More
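The expected-versus-actual comparison the excerpt describes can be sketched as a per-task gap score: subtract the ease rating users predicted from the task description from the ease rating they gave after attempting it. These task names and 7-point ratings (7 = very easy) are hypothetical:

```python
# Hypothetical mean ease ratings on a 7-point scale (7 = very easy):
# "expected" collected from the task description alone, "actual" after the attempt.
tasks = {
    "Check order status":      {"expected": 6.1, "actual": 4.2},
    "Update billing address":  {"expected": 5.8, "actual": 5.6},
    "Cancel subscription":     {"expected": 3.9, "actual": 5.4},
}

for name, r in tasks.items():
    gap = r["actual"] - r["expected"]
    # Negative gap: the task turned out harder than its description suggested.
    if gap < -0.5:
        verdict = "harder than expected"
    elif gap > 0.5:
        verdict = "easier than expected"
    else:
        verdict = "about as expected"
    print(f"{name}: gap {gap:+.1f} ({verdict})")
```

Tasks with large negative gaps are often the most actionable finding: users walked in expecting success, so the disappointment is attributable to the interface rather than to the inherent difficulty of the goal.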