User Research

Browse Content by Topic

UX ( 73 )
Methods ( 62 )
Usability Testing ( 53 )
Statistics ( 51 )
Survey ( 37 )
NPS ( 34 )
Benchmarking ( 32 )
Usability ( 32 )
Customer Experience ( 31 )
User Research ( 29 )
SUS ( 28 )
Rating Scale ( 25 )
Net Promoter Score ( 21 )
Sample Size ( 21 )
Metrics ( 17 )
Usability Problems ( 17 )
Measurement ( 15 )
User Experience ( 14 )
Questionnaires ( 14 )
Satisfaction ( 14 )
Validity ( 14 )
Surveys ( 13 )
Usability Metrics ( 13 )
Rating Scales ( 12 )
Market Research ( 12 )
SUPRQ ( 12 )
Qualitative ( 11 )
Navigation ( 10 )
Reliability ( 10 )
SUPR-Q ( 10 )
UMUX-lite ( 10 )
UX Metrics ( 8 )
Heuristic Evaluation ( 8 )
Task Time ( 8 )
Questionnaire ( 7 )
Task Completion ( 7 )
SEQ ( 7 )
Analytics ( 6 )
Mobile Usability Testing ( 6 )
Research ( 6 )
Mobile ( 6 )
Visualizing Data ( 5 )
Usability Problem ( 5 )
Unmoderated Research ( 5 )
Six Sigma ( 5 )
Task Times ( 4 )
Moderation ( 4 )
UX Methods ( 4 )
Confidence ( 4 )
Credibility ( 4 )
UX Maturity ( 4 )
Loyalty ( 4 )
Quantitative ( 4 )
Confidence Intervals ( 4 )
Expert Review ( 4 )
Summative ( 3 )
Data ( 3 )
Usability Lab ( 3 )
Desirability ( 3 )
Lean UX ( 3 )
Card Sorting ( 3 )
Customer Segmentation ( 3 )
Key Driver ( 3 )
Task Metrics ( 3 )
Voice Interaction ( 3 )
PURE ( 3 )
ROI ( 3 )
Findability ( 2 )
Focus Groups ( 2 )
A/B Testing ( 2 )
Excel ( 2 )
Remote Usability Testing ( 2 )
Tasks ( 2 )
Correlation ( 2 )
Salary Survey ( 2 )
Branding ( 2 )
PhD ( 2 )
Personas ( 2 )
UX Salary Survey ( 2 )
Tree Testing ( 2 )
IA ( 2 )
Prototype ( 2 )
Eye-Tracking ( 2 )
SUM ( 2 )
Variables ( 2 )
Sample Sizes ( 2 )
Marketing ( 2 )
TAM ( 2 )
Cognitive Walkthrough ( 2 )
KLM ( 2 )
Formative ( 2 )
Errors ( 2 )
Meeting software ( 1 )
Cumulative Graphs ( 1 )
History of usability ( 1 )
Desktop ( 1 )
b2b software ( 1 )
consumer software ( 1 )
Design Thinking ( 1 )
User-Centred Design ( 1 )
Likert ( 1 )
ISO ( 1 )
MUSiC ( 1 )
AttrakDiff2 ( 1 )
UEQ ( 1 )
meCUE2.0 ( 1 )
Hedonic usability ( 1 )
negative scale ( 1 )
RITE ( 1 )
Formative testing ( 1 )
CUE ( 1 )
Trust ( 1 )
graphic scale ( 1 )
Carryover ( 1 )
Information Architecture ( 1 )
Within-subjects ( 1 )
Polarization ( 1 )
Anchoring ( 1 )
MOS ( 1 )
Site Analytics ( 1 )
MOS-R ( 1 )
Contextual Inquiry ( 1 )
Linear Numeric Scale ( 1 )
sliders ( 1 )
Microsoft Desirability Toolkit ( 1 )
Problem Severity ( 1 )
Visual Analog Scale ( 1 )
slider ( 1 )
Emoji scale ( 1 )
Star Scale ( 1 )
Mobile Usability ( 1 )
Mean Opinion Scale ( 1 )
Task Randomization ( 1 )
Segmentation ( 1 )
Perceptions ( 1 )
Persona ( 1 )
User Testing ( 1 )
Performance ( 1 )
Software ( 1 )
Task Completion ( 1 )
Affinity ( 1 )
Z-Score ( 1 )
Effect Size ( 1 )
Five ( 1 )
Top Task Analysis ( 1 )
True Intent ( 1 )
Visual Appeal ( 1 )
Random ( 1 )
Crowdsourcing ( 1 )
Unmoderated ( 1 )
Sample ( 1 )
Design ( 1 )
coding ( 1 )
Ordinal ( 1 )
CSUQ ( 1 )
Delight ( 1 )
Customer effort ( 1 )
Competitive ( 1 )
PSSUQ ( 1 )
Quality ( 1 )
LTR ( 1 )
Think Aloud ( 1 )
Test Metrics ( 1 )
NSAT ( 1 )
Expectations ( 1 )
prototype ( 1 )
Facilitation ( 1 )
Certification ( 1 )
Metric ( 1 )
Moderating ( 1 )
moderated ( 1 )
Conjoint Analysis ( 1 )
Regression Analysis ( 1 )
Margin of Error ( 1 )
We’ve all spent a lot of time at home this year. The pandemic has made already-popular video streaming services seem essential. The popularity makes sense given the relatively inexpensive subscription fees, the lack of long-term contracts, and the many channels of access (websites, mobile apps, smart TVs). And there is a LOT of content (albeit distributed across different services). But no matter how good

Read More

We write extensively about standardized UX metrics such as the SUS, PSSUQ, and SUPR-Q. The main benefits of standardization include improved reliability, validity, sensitivity, objectivity, quantification, economy, communication, and norms. Even when standardized UX questionnaires are developed independently, they are influenced by earlier work, just as UX itself is a newer field built on earlier ones. The deep roots of questionnaire development date back over

Read More
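
To make the value of a standardized questionnaire concrete, here is a minimal Python sketch of the widely published SUS scoring rule (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 for a 0-100 score). The responses below are made up for illustration.

```python
def sus_score(responses):
    """Score ten SUS responses (each 1-5) on the standard 0-100 scale.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response); the summed contributions are scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Hypothetical respondent (not real data):
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```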

A lot of UX methods exist, along with recommendations on when to use them. Some activities cut across methods: operationalizing research questions, making data collection more efficient, and making the most of both what users say and what they do. Here are five techniques we’ve found that make our UX research more effective (and often more efficient). 1. Use a Research Matrix. To ensure a

Read More
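
One lightweight way to picture the research matrix mentioned above is as a table that ties each research question to the method, measure, and task or survey item planned to answer it. The sketch below is a hypothetical structure, not MeasuringU's template; the questions, methods, and measures are invented for illustration.

```python
# A hypothetical research matrix: each row maps a research question to the
# method, measure, and study item planned to answer it (illustrative only).
research_matrix = [
    {
        "research_question": "Can users find the pricing page?",
        "method": "Unmoderated usability test",
        "measure": "Task completion, task time",
        "item": "Task 1: Find the monthly price of the basic plan",
    },
    {
        "research_question": "How does perceived usability compare to last year?",
        "method": "Retrospective survey",
        "measure": "SUS",
        "item": "Standard 10-item SUS questionnaire",
    },
]

# Quick sanity check: every research question should have a planned measure.
for row in research_matrix:
    assert row["measure"], f"No measure planned for: {row['research_question']}"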

Finding and fixing usability problems in an interface leads to a better user experience. Beyond fixing problems with current functionality, participant behavior can also reveal important insights into needed new features. These problems and insights are often best gleaned from observing participants interacting with a website, app, or hardware device during actual or simulated use (in a usability test). With the advent of remote testing platforms like

Read More

While UX research may be a priority for you, it probably isn’t for your participants. And participants are a pretty important ingredient in usability testing. If people were predictable, reliable, and always did what they said, few of us would make a living improving the user experience! Unfortunately, people don’t always show up when they say they will for your usability test, in-depth interview,

Read More
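
Because some no-shows are almost inevitable, one simple planning step is to over-recruit based on an assumed show rate. The 70% rate in the sketch below is an assumption for illustration only, not a MeasuringU benchmark; your own historical show rates are the better input.

```python
import math

def participants_to_recruit(sessions_needed, assumed_show_rate=0.70):
    """Estimate how many participants to schedule so that, at an assumed
    show rate, enough actually attend (rounds up to be safe)."""
    return math.ceil(sessions_needed / assumed_show_rate)

# Hypothetical plan: 8 completed sessions, assuming roughly 70% of recruits show up.
print(participants_to_recruit(8))  # 12
```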

We conduct unmoderated UX studies, surveys, and various forms of online research every week at MeasuringU. Part of our process for delivering effective research is spending enough time up front on issues that affect the quality of results. Here are our nine recommendations for conducting better online research. Use a Study Script. A study script is similar to a blueprint for online research or prototype

Read More

We conduct a lot of quantitative online research, both surveys and unmoderated UX studies. Much of the data we collect in these studies is from closed-ended questions or task-based questions with behavioral data (time, completion, and clicks). But just about any study we conduct also includes some open-ended response questions. Our research team then needs to read and interpret the free-form responses. In some cases,

Read More
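
As a rough illustration of a first pass a team might take before reading free-form responses in detail, here is a minimal keyword-tagging sketch. The categories and keywords are hypothetical, and this kind of tagging only surfaces obvious themes; it is not a substitute for human coding.

```python
# Hypothetical theme buckets for a first pass over open-ended responses.
CATEGORIES = {
    "navigation": ["menu", "find", "navigate", "search"],
    "performance": ["slow", "lag", "load", "crash"],
    "pricing": ["price", "cost", "expensive", "subscription"],
}

def tag_response(text):
    """Return the hypothetical categories whose keywords appear in a response."""
    lowered = text.lower()
    return {cat for cat, words in CATEGORIES.items()
            if any(word in lowered for word in words)}

responses = [
    "I couldn't find the settings menu.",
    "Pages were slow to load and the price felt too high.",
]
for response in responses:
    print(tag_response(response) or {"uncategorized"}, "-", response)
```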

Many market and UX research studies are completed by paid participants, usually recruited from online panels. Our research has shown that using online panels for UX research for the most part provides reliable and valid results. While these huge pools of participants help fill large-sample studies quickly, there’s a major drawback: poor-quality respondents. Reliable and valid responses only come when your data

Read More
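
Two screens often applied to low-quality panel respondents are flagging "speeders" (completion times far below the median) and "straight-liners" (identical answers to every item in a grid). The sketch below uses illustrative thresholds and made-up data; it is not MeasuringU's screening procedure.

```python
from statistics import median

def flag_low_quality(durations_sec, grid_answers, speed_fraction=0.33):
    """Flag respondents who finished in under a fraction of the median time
    ('speeder') or gave identical answers to every grid item ('straight-liner').
    The one-third cutoff is an illustrative assumption, not a validated rule."""
    cutoff = median(durations_sec) * speed_fraction
    flags = []
    for duration, answers in zip(durations_sec, grid_answers):
        reasons = []
        if duration < cutoff:
            reasons.append("speeder")
        if len(set(answers)) == 1:
            reasons.append("straight-liner")
        flags.append(reasons)
    return flags

# Made-up data: survey durations (seconds) and answers to a 5-item rating grid.
durations = [410, 395, 120, 460]
grids = [[4, 5, 3, 4, 2], [5, 5, 4, 3, 4], [3, 3, 3, 3, 3], [2, 4, 5, 3, 4]]
print(flag_low_quality(durations, grids))
# [[], [], ['speeder', 'straight-liner'], []]
```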

What makes a UX practice “mature”? How do we measure UX maturity, and does maturity really matter? In an earlier article, we discussed the history and challenges of assessing the UX maturity of companies and departments within large organizations. Existing models of maturity generally consist of different stages, with maturity progressing from unrecognized or ad hoc to institutionalized (for example, Nielsen's steps). The models also

Read More

User Experience improvements don’t just happen. You need the right people in the right positions to help make a better experience. It would be easy, of course, if you could just hire as many people as you wanted. Unless you’re Facebook or Google, that’s probably not an option. Instead, UX teams need to justify headcount requests. One way to justify and gauge

Read More