Methods

Did they see it? We're often asked whether participants in a study notice certain design elements—icons, labels, ads, a component of a company logo, or a product function—in a user interface. For a participant to notice these elements involves both seeing and perceiving, so this simple question can be easier to ask than to answer. Seeing and perceiving are of course different things. Participants can

Read More

We measure more than just usability. We work with clients to measure everything from delight, loyalty, brand affinity, luxury, and quality to even love. While all of these concepts are related, each measures a slightly different aspect of the customer experience. Before measuring anything, especially a construct that's not well defined or commonly used in practice, we answer these five questions. 1. How is this being measured

Read More

UX researchers have developed many techniques over the years for testing and validating their ideas. Here are ten essential methods to learn and employ on your next project. We cover many of these in detail at our UX Bootcamp in Denver. Moderated In-Person Usability Testing: This fundamental technique is used by usability professionals for obtaining feedback from live users interacting with everything from paper prototypes

Read More

Facilitators in usability tests are a major source of variability. The results of many studies, including the well-known Comparative Usability Evaluations (CUEs), have consistently shown that different usability facilitators interact with participants inconsistently in a usability lab, producing dissimilar results. What's more, testing more than a few participants a day leads to facilitator fatigue, introducing further variation even when a single facilitator runs

Read More

Which design will improve the user experience? One of the primary goals of conducting user research is to establish some causal relationship between a design and a behavior. Typically, we want to see if a design element or changes to an interface lead to a more usable experience (experiment) or if more desirable outcomes are associated with some aspect in our designs (correlation). Even though

Read More

Who are the users and what are they trying to do? Answering those two questions is an essential first step to measuring and improving the right things on an interface. It's also one of the first things we'll cover at the Denver UX Boot Camp. While there are hundreds to thousands of things users can accomplish on websites and software interfaces, there are a critical few

Read More

Naming a product is like naming a baby. Everyone has opinions, and you are stuck with the name for a long time! Product naming, whether for software, hardware, or another physical product, is a multistage process, often involving creative teams, product managers, CEOs, and lawyers. It's also one of the few times I think focus groups can actually be the right tool for the job,

Read More

Qualitative research is often used as a catch-all phrase meaning not to expect any "hard numbers" from research findings. While qualitative research is the collection and analysis of primarily non-numerical information (words, pictures, and actions), that doesn't mean you can't apply a structured approach to your research efforts. Usability testing is often characterized as a qualitative activity. Summarizing findings from watching participants in a

Read More

Who are your users and what are they doing? Measuring the user experience starts with understanding who your users are and what tasks they are trying to accomplish on your website. It's one of the first things we'll cover at the UX Boot Camp. Visitor profiles and intent are not something you can easily derive from Google Analytics or log files. These sources provide an

Read More

It doesn't matter if it's your first usability test or your hundredth; there are always things you can improve to make the most of the time with your users. Avoid asking "why" in a direct, reflexive manner: We of course want to know why users do things on websites and in applications. But when we ask why directly, we risk putting the participant on the

Read More