Metrics

A benchmark study tells you where a website, app, or product falls relative to some meaningful comparison. This comparison can be to an earlier version, the competition, or an industry standard. Benchmark studies are often called summative evaluations because the emphasis is less on finding problems and more on quantitatively assessing the current experience. To quantify, you need metrics, and UX benchmark studies can have quite
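One common way to quantify such a comparison is a one-sample t statistic, which measures how far an observed mean falls from a benchmark in standard-error units. A minimal sketch, using invented SUS scores and the frequently cited SUS average of 68 as the benchmark:

```python
import math
import statistics

def benchmark_t(scores, benchmark):
    """One-sample t statistic: distance of the observed mean from a
    benchmark, in standard-error units."""
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    return (mean - benchmark) / (sd / math.sqrt(n))

# Invented SUS scores from 10 participants (not real study data).
sus = [72.5, 80.0, 65.0, 90.0, 77.5, 85.0, 70.0, 62.5, 82.5, 75.0]

# Positive t means the product scores above the benchmark of 68.
t = benchmark_t(sus, 68)
```

A large positive t (here roughly 2.9) suggests the product scores reliably above the benchmark; comparing it against a t distribution with n − 1 degrees of freedom gives a p-value.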

Read More

Unmoderated testing platforms allow for quick data collection from large sample sizes. This has enabled researchers to answer questions that were previously difficult or cost-prohibitive to answer with traditional lab-based testing. But is the data collected in unmoderated studies, both behavioral and attitudinal, comparable to what you get from a more traditional lab setup?

Comparing Metrics

There are several ways to compare the agreement or
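One simple way to quantify agreement is a Pearson correlation between paired metrics. A minimal sketch, using hypothetical mean task times for the same five tasks measured both in the lab and on an unmoderated platform (all numbers invented):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two paired lists of metrics."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean task times (seconds) for five tasks, measured both ways.
lab = [45, 60, 120, 90, 75]
unmoderated = [50, 70, 130, 95, 80]

r = pearson_r(lab, unmoderated)  # r near 1 means the methods rank tasks similarly
```

A high correlation indicates the two methods order the tasks similarly, even if unmoderated times run systematically longer or shorter than lab times.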

Read More

UX metrics are a mix of attitudes (what people think) and actions (what people do). To fully measure the user experience, you need to measure both. UX metrics are influenced by more than an interface. Users have preconceived notions about companies, and those notions affect both how they think and what they do when they interact with a brand, whether in a store or online. Brand attitudes

Read More

It was another busy year on MeasuringU.com, with 50 new articles, a new website, a new unmoderated research platform (MUIQ), and our 5th UX Bootcamp. In 2017, over 1.2 million people viewed our articles. Thank You! The most common topics we covered include usability testing, benchmarking, the 3Ms of methods, metrics, and measurement, and working with online panels. Here's a summary of the articles I

Read More

Task completion is one of the fundamental usability metrics. It's the most common way to quantify the effectiveness of an interface. If users can't do what they intend to accomplish, not much else matters. While that may seem like a straightforward concept, actually determining whether users have completed a task often isn't so easy. The ways to determine task completion will vary based on the
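Once completion has been scored, a common way to report it for small usability-test samples is a completion rate with an adjusted-Wald (Agresti-Coull) confidence interval. A minimal sketch, with a hypothetical result of 8 of 10 participants completing:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """95% adjusted-Wald interval for a completion rate: add z^2/2
    successes and z^2 trials, then compute a normal interval."""
    adj_n = n + z ** 2
    p = (successes + z ** 2 / 2) / adj_n
    margin = z * math.sqrt(p * (1 - p) / adj_n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical result: 8 of 10 participants completed the task.
low, high = adjusted_wald_ci(8, 10)
```

The adjustment keeps the interval sensible at small sample sizes and extreme rates (such as 10 of 10 completing), where the plain Wald interval breaks down.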

Read More

There are a number of ways to quantify the value of your customers throughout the customer journey. While the "best" metrics depend on your goals and specific context, here is a list of 10 that most organizations should collect. They include a mix of the four types of customer analytics to collect: descriptive, behavioral, interaction and attitudinal. Customer Revenue: Understanding where, when and how much

Read More

Website navigation is at the heart of good findability. To measure findability, we perform a tree test or a click test on a live website. In both types of studies, we collect many metrics to help uncover problems with terms and taxonomy. While the fundamental metric of findability is whether users find an item or not, often other metrics provide clues to problems in the

Read More

It's often called web surfing or web browsing, but it probably should be called web doing. While there is still plenty of time to kill using the web, in large part we're all trying to get things done. Purchasing, reserving, comparing, and communicating: Internet behavior is largely a goal-directed activity. If a website doesn't help users accomplish their goals, it's unlikely users will return

Read More

Ask users to complete a task and they can tell you how difficult it was. But can they tell you how difficult the task will be without even attempting it? It turns out the task description reveals much of a task's complexity, so users can predict actual task ease and difficulty reasonably well. The gap in expectations can be a powerful
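A sketch of how such an expectation gap might be tabulated. The task names and 7-point ease ratings (7 = very easy) below are invented for illustration:

```python
# Hypothetical mean ratings per task on a 7-point ease scale (7 = very easy):
# "expected" = rated from the task description alone, before attempting;
# "actual"   = rated after attempting the task.
tasks = {
    "reset password": {"expected": 6.2, "actual": 4.1},
    "update billing": {"expected": 5.0, "actual": 5.3},
    "export report":  {"expected": 3.8, "actual": 3.5},
}

# A positive gap (expected minus actual) flags tasks users thought would be
# easier than they turned out to be: prime candidates for redesign.
gaps = {name: round(r["expected"] - r["actual"], 1) for name, r in tasks.items()}
```

In this invented example, "reset password" shows a large positive gap (users expected it to be easy but found it hard), which is exactly the kind of disappointment the gap analysis is meant to surface.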

Read More

In a usability test you typically collect some type of performance data: task times, completion rates, and perhaps errors or conversion rates. It's also a good idea to use a questionnaire that measures the perceived ease of use of an interface. This can be done immediately after a task using a few questions (post-task questionnaires). It can also be done after the usability testing
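As an illustration, post-task ratings such as the Single Ease Question (SEQ, a single 7-point ease rating administered right after each task attempt) can be summarized per task. The tasks and ratings below are invented:

```python
import statistics

# Hypothetical post-task SEQ ratings (1 = very difficult, 7 = very easy),
# one rating per participant, collected immediately after each task attempt.
seq_by_task = {
    "find store hours": [6, 7, 5, 6, 7, 6],
    "apply a coupon":   [3, 4, 2, 5, 3, 4],
}

# Mean perceived ease per task; low means flag tasks that felt difficult.
means = {task: statistics.mean(scores) for task, scores in seq_by_task.items()}
```

Pairing these perception scores with the performance data (times, completion, errors) gives a fuller picture: a task that is completed successfully but rated difficult still signals a problem.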

Read More