Reliability

Sliders are a type of visual analog scale that can be used with many online survey tools such as our MUIQ platform. The literature on their overall effectiveness is mixed (Roster et al., 2015). On the positive side, evidence indicates that sliders might be more engaging to respondents. On the negative side, evidence also indicates that sliders can be more cognitively and physically challenging than

Read More

Errors can provide a lot of diagnostic information about the root causes of UI problems and the impact such problems have on the user experience. The frequency of errors—even trivial ones—also provides a quantitative description of the performance of a task. The process of observing and coding errors is more time-consuming and dependent on researcher judgment than recording task completions or task times. Consequently, errors

Read More

There’s a lot of conventional wisdom floating around the Internet about rating scales. What you should and shouldn’t do. Best practices for points, labels, and formats. It can be hard to differentiate between science-backed conclusions and just practitioner preferences. In this article, we’ll answer some of the more common questions that come up about rating scales, examine claims by reviewing the literature, and summarize the

Read More

Questioning the effectiveness of usability testing may sound like a relic from the past. In the early years of industrial usability engineering, there was a constant need to justify the activity of (and your job in) usability testing. The book Cost-Justifying Usability (Bias & Mayhew, 2005) speaks to this. Usability testing has since gained significant adoption in organizations. But that doesn’t mean some of the

Read More

Smoking causes cancer. Warnings on cigarette labels and from health organizations all make the clear statement that smoking causes cancer. But how do we know? Smoking precedes cancer (mostly lung cancer). People who smoke cigarettes tend to get lung and other cancers more than those who don’t smoke. We say that smoking is correlated with cancer. Carefully rule out other causes and you have the ingredients

Read More
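The "smoking is correlated with cancer" claim above is usually quantified with a correlation coefficient such as Pearson's r. A minimal from-scratch sketch, using made-up numbers purely for illustration:

```python
# Pearson correlation coefficient, computed from scratch.
# The data pairs below are illustrative only, not real figures.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear data, r ≈ 1.0
```

A high r shows only that two variables move together; as the excerpt notes, ruling out other causes is a separate step.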

The Net Promoter Score (NPS) is widely used. But it’s not necessarily widely loved. Some people are quite critical of the Net Promoter Score. Jared Spool wrote a strong critique of the NPS, and Gerry McGovern largely agreed. When I followed up with Jared about his thoughts, he felt that “UX researchers should not use the NPS or, if that’s not an option, ignore it.”

Read More

Your car doesn’t start on some mornings. Your computer crashes at the worst times. Your friend doesn’t show up to your dinner party. If something or someone isn’t reliable, it’s not only a pain but it makes your life less effective and less efficient. And what is true for people and products is true for measurement. The wording of items and the response options we

Read More

Uncovering usability problems is at the heart of usability testing. Problems and insights can be uncovered from observing participants live in a usability lab, using heuristic evaluations, or watching videos of participants. But if you change the person looking for the problems (the evaluator), do you get a different set of problems? The Evaluator Effect: It’s been 20 years since two influential papers (Jacobsen, Hertzum,

Read More

How satisfied are you with your life? How happy are you with your job or your marriage? Are you extroverted or introverted? It’s hard to capture the fickle nature of attitudes and constructs in any measure. It can be particularly hard to do that with just one question or item. Consequently, psychology, education, marketing, and user experience have a long history of recommending multiple items

Read More
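The multi-item reliability idea above is most often summarized with Cronbach's alpha, which compares the variance of individual items to the variance of respondents' total scores. A minimal sketch in Python, with purely illustrative rating data:

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
# The ratings below are made-up example data, not from a real study.
def cronbach_alpha(items):
    """items: list of equal-length lists, one list of responses per item."""
    k = len(items)

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(it) for it in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Five respondents answering three related items on a 1-5 scale:
ratings = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(ratings), 2))  # 0.86
```

Values near 1 indicate the items move together and can plausibly be averaged into a single score; values near 0 suggest the items measure different things.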

You don’t need to be a data scientist, database admin, or statistical maven to conduct quantitative research. You must, however, have a good grounding in some fundamental concepts to make the most of your efforts. While there are a number of skills, techniques, and concepts you’ll want to be familiar with, I think it’s essential to master these five: reliability, validity, statistical significance, experimental validity,

Read More