User Research

The range of methods available to the researcher is one of the things that makes UX research such an interesting and effective field. The recently completed UXPA salary survey provides one of the more comprehensive pictures of the methods practitioners use. It contains data from over 1200 respondents from 37 countries collected in 2016. Similar data was collected in 2014 and 2011 with similarly sized

Online panels are the go-to method for collecting data quickly for market and UX research studies. Despite their wide usage, surprisingly little is known about these panels, such as the characteristics of the panel members or the reliability and accuracy of the data collected from them. While there isn’t much published data on the inner workings of panels, we’ve conducted our own research and compiled

User and customer research fundamentally rely on collecting data from users and customers. But it can be a constant challenge to find the right number and type of qualified participants. Even when you find the right participants, there’s a limit to how much time they’re willing to spend filling out a survey. A common question we get from clients is how long is too

Online panels are a major source of participants for both market research and UX research studies. In an earlier article I summarized some of the research on the accuracy and variability of estimates from online panels. The types of estimates from those studies tended to center around general demographic or psychographic questions (e.g., smoking, newspaper readership). To understand whether similar findings would hold for UX-related
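
One way to reason about the accuracy and variability of a panel estimate is to put a confidence interval around it. Below is a minimal sketch using the adjusted-Wald (Agresti-Coull) interval for a proportion; the sample values are hypothetical, purely for illustration.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) 95% interval for a proportion.

    Adds z^2 to the sample size and z^2/2 to the successes before
    computing a standard Wald interval, which behaves better for
    small samples and extreme proportions.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical example: 65 of 100 panel respondents answer "yes".
low, high = adjusted_wald_ci(65, 100)
print(f"estimate: 65%; 95% CI: {low:.1%} to {high:.1%}")
```

With 100 respondents the interval spans roughly ±9 percentage points, a useful reminder of how much panel estimates can vary from sample to sample.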

Online panel research has been a boon for market researchers and UX researchers alike. In fact, since the late 1990s, online panels have provided a cost-effective pool of sample participants ready to take online studies, covering topics from soup to nuts (literally)…from apple juice to zippers. But can you trust the data you get from these online participants? In this article, I'll examine the

"I'd like you to think aloud as you use the software." Having participants think aloud as they use an interface is a cornerstone technique of usability testing. It's been around for much of the history of user research to help uncover problems in an interface. Despite its popularity, there is surprisingly little consistency on how to properly apply the think aloud technique. Because of that,

The affinity diagram is a visual technique to organize ideas and information. The "affinity" between pieces of information reveals patterns and often a hierarchy that can help with product design (similar to the affinity or basket analysis). It's also known as the K-J method for its creator, Jiro Kawakita, who developed it as one of his seven quality tools in the 1960s. Jiro was a
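
The core move in affinity diagramming is clustering observations under shared themes. The sketch below mimics that step in code, using hypothetical observation notes that a team has already tagged with a theme:

```python
from collections import defaultdict

# Hypothetical observation notes from a usability test, each tagged
# with a theme during an affinity exercise.
notes = [
    ("Couldn't find the search box", "navigation"),
    ("Checkout took too many steps", "checkout"),
    ("Menu labels were confusing", "navigation"),
    ("Wanted a guest checkout option", "checkout"),
]

# Group notes under their shared theme, as you would cluster
# sticky notes on a wall.
groups = defaultdict(list)
for text, theme in notes:
    groups[theme].append(text)

for theme, items in sorted(groups.items()):
    print(f"{theme}: {len(items)} notes")
```

In practice the themes emerge from the team's discussion rather than being predefined; the code only illustrates the grouping structure, not the discovery process.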

Google Analytics is an amazing tool for understanding website traffic. There's a reason most of the top websites use it. Among other things it can tell you:

- How many people visit daily, monthly, and across years and seasons
- How much time people spend on pages
- What pages get the most visitors
- Which pages people arrive on and depart from

Despite the

You often hear that research results are not "valid" or "reliable." Like many scientific terms that have made it into our vernacular, these terms are often used interchangeably. In fact, validity and reliability have different meanings with different implications for researchers. Validity refers to how well the results of a study measure what they are intended to measure. Contrast that with reliability, which means consistent
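
One common way to quantify reliability is test-retest correlation: give the same measure to the same people on two occasions and correlate the scores. Below is a minimal sketch, with hypothetical SUS scores used only for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical SUS scores from the same five users on two occasions.
first  = [72, 85, 60, 90, 68]
second = [70, 88, 63, 87, 71]

print(f"test-retest reliability r = {pearson_r(first, second):.2f}")
```

A high correlation here tells you the measure is consistent; it says nothing about validity, because scores can be consistently wrong.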

In 1963 Yale psychologist Stanley Milgram paid volunteers $4 to "teach" another volunteer, called the "learner," new vocabulary words. If the learner got the words wrong, he or she received an electric shock! Or so the teacher/volunteer was led to believe. In fact, no shock was given; instead, a person working with Milgram pretended, with great gusto, to be shocked. So while no
