Randomization

The two-sample t-test is one of the most widely used statistical tests, assessing whether the difference in means between two samples is statistically significant. It can be used to compare two samples of many UX metrics, such as SUS scores, SEQ scores, and task times. The t-test, like most statistical tests, has certain requirements (assumptions) for its use. While it’s easy to conduct a two-sample t-test using …

Read More
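
For example, here is a minimal sketch of a two-sample (Welch's) t-test in Python with SciPy; the choice of tool and the SUS scores are illustrative assumptions, not taken from the article:

```python
import numpy as np
from scipy import stats

# Hypothetical SUS scores from two independent groups of participants
# (e.g., two design variants); values are illustrative only.
design_a = np.array([72.5, 80.0, 65.0, 77.5, 85.0, 70.0, 90.0, 62.5])
design_b = np.array([60.0, 67.5, 55.0, 72.5, 65.0, 57.5, 70.0, 62.5])

# Welch's two-sample t-test (does not assume equal variances),
# a common default when comparing means of UX metrics.
t_stat, p_value = stats.ttest_ind(design_a, design_b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value (e.g., below .05) would suggest the difference in mean SUS scores between the two groups is unlikely to be due to chance alone.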