Jeff Sauro, Ph.D., author of “Quantifying the User Experience,” will be a featured speaker at Rosenfeld Media’s virtual conference on “The Business Case for Design.”
He will be speaking from 10:00-10:50 AM (MST).
Dr. Sauro will discuss topics including: how to apply scientific thinking to several design scenarios; how to form a testable hypothesis; ways to operationalize a study with the right tasks, metrics, and methods; the importance of randomization in minimizing nuisance variables; and how to analyze data to differentiate real change from random noise.
Participants receive a User Experience Research Certificate and 2.5 Continuing Education Credits from the University of Denver. This is our 6th Annual Boot Camp. Each year we’ve had a great group of participants from dozens of companies including:
Google, CitiGroup, John Deere, American Airlines, Amica Insurance, Rockwell Automation and Charter Communications.
You gave me the tools and skills to give more credibility to my work…I am very grateful.
[My bosses] are thrilled with the possibilities that this opened up for us in terms of quantifying user feedback and behavior.
The discounted early-registration price is $3,000 US per participant for all three days (through July 1st). The regular price is $3,300. This includes:
Daily Lunch and Two Daily Breaks
Course Books, Binders and Materials (valued at over $700)
Up to 100 US-based participants for your study (provided by the panel OP4G)
Certificate and Continuing Education Credit from the University of Denver
Thursday Night Cocktail Reception
Classes will run from approximately 9 am to 5 pm daily, with lunch and a morning and afternoon break. Jeff Sauro will be the instructor. The courses will mix lecture material with hands-on practice learning and applying the methods.
All classes will be held on the University of Denver Campus.
The Denver UX Boot Camp will involve mornings with interactive lectures, visuals, witty jokes, and practice exercises. Afternoons will involve hands-on workshops focusing on more detailed topics. Schedules and topics may change based on participants’ backgrounds and current needs.
Introduction to Usability & User Measurement
A 100-year history of usability in pictures, people, and events, followed by a discussion of how usability is defined and measured across dozens of organizations. We will review common methods for measuring usability. This will follow Chapters 1-2 in Quantifying the User Experience.
Top Tasks Analysis
It’s difficult to do everything well all the time. One of the biggest problems we see in software and websites is that developers and designers think users are doing one thing when in fact users are doing something else. Top-task analysis is a simple yet powerful way of identifying the critical few tasks that affect the bulk of the user experience.
Benchmark Usability Testing
We will review a case study of two online rental-car websites, using standard usability metrics to understand how to obtain a valid usability benchmark and set the foundation for statistical decision making with UX metrics.
Findability & Tree Testing
Despite improvements in search technology, browsing is still the dominant strategy for most users. At the heart of findability problems are a confusing navigation structure and poorly chosen category labels. We will discuss how to conduct a tree test to isolate and generate a valid quantitative baseline measure of findability. We will then explore strategies for improving findability through card sorting and cross-linking, and compare a new navigation structure to the baseline.
Confidence Intervals
Sampling users involves sampling error, regardless of sample size. Confidence intervals provide the level of precision around estimates of findability, completion rates, likelihood to recommend, and any other metric you collect from a sample of users. We will cover both the concepts of confidence intervals and normal theory and how to apply them to any sample from 2 to 200,000 users. This will follow Chapter 3 in Quantifying the User Experience.
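To make the idea concrete, here is a minimal Python sketch of one common interval for small-sample completion rates, the adjusted-Wald (Agresti-Coull) method. The function name and example numbers are illustrative, not from the course materials.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald confidence interval for a binomial proportion,
    e.g. a task completion rate from a small-sample usability test."""
    # Adjust by adding z^2/2 successes and z^2 trials before computing
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 9 of 11 users completed the task (81.8% observed)
low, high = adjusted_wald_ci(9, 11)
print(f"95% CI: {low:.3f} to {high:.3f}")  # roughly 0.51 to 0.96
```

Even with 9 of 11 successes, the interval is wide (roughly 51% to 96%), which is exactly the kind of precision question the session addresses.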
Observing & Coding Usability Problems
At the heart of usability testing, whether the do-it-yourself kind or the more industrial-strength kind, is documenting, diagnosing, and reducing usability problems. Following up on the earlier quantitative benchmark data, we will watch videos of users attempting to rent a car and document and discuss the usability problems.
PM Workshop: Conducting Usability Tests
After covering the different methods, we will break into groups for hands-on practice with the core usability testing methods: moderated sessions, unmoderated tests, and tree testing.
Hypothesis Testing & Statistical Significance
With everyone wide awake on caffeine, we will get into both the theoretical and practical applications of testing hypotheses in the user experience. We will cover the basics of statistical testing and get practice determining if two websites or designs are statistically different. Participants will receive a comprehensive Excel calculator and guide to walk them through how to conduct statistical tests for the most common situations in applied user research. This will follow Chapters 5 and 10 in Quantifying the User Experience.
A/B Testing and Metric Comparisons
We will continue getting practice forming hypotheses and testing the differences in designs by going more in depth into conducting A/B tests and determining if two designs are statistically different at small and very large sample sizes. We will focus on balancing practical significance with statistical significance through a series of examples and exercises.
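As a sketch of the kind of comparison this session covers, here is a standard pooled two-proportion z-test in Python (one common approach to comparing conversion or completion rates between two designs; the function name and data are illustrative assumptions, not the course's own calculator).

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Standard pooled two-proportion z-test for an A/B comparison,
    e.g. conversion rates on two design variants."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: design A converts 90 of 1,000 visitors, design B 60 of 1,000
z, p = two_proportion_z(90, 1000, 60, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Here a 9% vs. 6% conversion difference at n = 1,000 per group yields p ≈ 0.01, statistically significant; whether a three-percentage-point lift matters is the practical-significance question the exercises take up.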
Project Design & Deployment
In this half-day workshop, we’ll give you hands-on instruction in setting up an unmoderated study (of your choosing), preparing to analyze it, and preparing to launch it to collect real data from targeted participants.
Cocktail Reception Hosted by
Power and Sample Size Calculations
How many users do you need to test? It’s one of the most common and confusing questions in applied user research. Instead of relying on vague rules of thumb, you’ll learn how to compute sample sizes for making comparisons, for discovering problems in usability tests, and for survey research.
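For the problem-discovery case, one widely cited model is the binomial formula 1 − (1 − p)^n for the chance that at least one of n users encounters a problem affecting proportion p of users. A minimal Python sketch (function names are illustrative):

```python
import math

def discovery_sample_size(problem_rate, target_chance):
    """Smallest n such that a problem affecting `problem_rate` of users
    is observed at least once with probability >= target_chance:
    solve 1 - (1 - p)^n >= target for n."""
    return math.ceil(math.log(1 - target_chance) / math.log(1 - problem_rate))

def chance_of_seeing(problem_rate, n):
    """Probability that at least one of n users hits the problem."""
    return 1 - (1 - problem_rate) ** n

# Classic result: with problems that affect 31% of users,
# 5 participants uncover each such problem about 84% of the time.
print(discovery_sample_size(0.31, 0.80))    # -> 5
print(round(chance_of_seeing(0.31, 5), 2))  # -> 0.84
```

This is where the familiar "five users" heuristic comes from; the formula makes explicit that the answer depends on the problem's prevalence and the discovery chance you target, rather than being a fixed rule of thumb.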
Findability & Usability Case Studies
We will walk through a series of scenarios to bring together the methods learned earlier and apply research design, sample size calculations, metric selection and statistical comparisons. Working in teams, participants will design and conduct an unmoderated or findability study for analysis and discussion.
Final Analysis & Project Presentations
You’ll get help preparing your analysis and statistical calculations, then present your findings.
I love, love, loved the class! It was by far the best training that I’ve attended in my career (and I have been working for a very long time).
Jeff is a phenomenal speaker and teacher. He makes very difficult (and let’s face it, pretty dry) stuff fun to learn. Thanks for putting this on!
Denver UX Boot Camp Course Faculty
Jeff Sauro will be the primary instructor for the boot camp. He has taught UX research methods classes for companies including Autodesk, Cisco, Dropbox, and Salesforce. He is a regular speaker at UX conferences including UXPA and CHI and is an adjunct instructor at the University of Denver.
Due to the customized nature of this training, we cannot offer refunds. We accept substitutions. Please email us as soon as you know you will have a substitute. We may also carry over your registration for the following year. Email email@example.com with substitutions, carry-over, or extenuating circumstances.
MeasuringU official presentation date TBD
Jeff and the MeasuringU Team will present:
From Snake-Oil to Science: Measuring UX Maturity
Colorado Convention Center, Denver, CO | May 10, 2017 | 4:30pm