It was another busy year with 50 new articles, a new website, a new unmoderated research platform (MUIQ), and our 5th UX Bootcamp.

In 2017 over 1.2 million people viewed our articles. Thank You!

The most common topics we covered included usability testing, benchmarking, the 3Ms (methods, metrics, and measurement), and working with online panels. Here’s a summary of the articles I wrote in 2017.

Usability Testing

Facilitation remains an important skill for conducting usability tests. We provided a resource for facilitators, covered the ten golden rules of facilitating, and showed how thinking aloud—a hallmark of usability testing—affects where people look. There’s evidence that observers may affect the data in usability tests, and I suggested some ways to mitigate the impact. I also provided guidance on when to assist in a usability test, how to best prepare for a moderated benchmark, how to determine task completion, and five less conventional interfaces that could use usability testing.

Prototypes are a staple of early design validation in usability testing. I provided six practical tips for testing with them and five metrics for detecting problems. Interestingly, there’s evidence that even low-fidelity prototypes serve as a good proxy (although not a substitute) for testing with actual websites and products.


Benchmarking

Benchmarking is an essential step in making quantifiable improvements to the user experience of websites and apps. I think benchmarking is so important that I wrote an introduction to the topic, taught a four-part course dedicated to it, and am writing a book (due Q1 2018). We benchmarked the user experience of hotel, retail, and entertainment websites as well as provided an update to our Consumer Software and Business Software benchmarks.


UX Methods

UX methods (qualitative and quantitative) are always an interesting topic. In addition to updating the salary calculator, we also examined other interesting data from the UXPA salary survey to see how method usage has changed since 2014 (some changes, but a lot of similarities). We also wrote about a new method, PURE, which is a way to approximate UX metrics using analytic methods. It’s particularly helpful in B2B situations when it’s too difficult to test with actual users.

Questionnaires & Metrics

Questionnaires are an essential tool for capturing the attitudes of users, and I wrote about how to scientifically remove items in questionnaires. Standardized questionnaires offer the benefits of reliability and validity, and some—for example, the SUPR-Q—offer a normative database to give the metrics additional meaning. It’s also one of the reasons the SUPR-Q is better than the SUS for websites.
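
One common statistical way to decide which questionnaire items can be dropped is Cronbach’s alpha together with “alpha if item deleted.” Here’s a minimal Python sketch of that idea (the data and function names are illustrative, not the article’s actual procedure):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list per questionnaire item, each holding all respondents' scores."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

def alpha_if_deleted(items):
    """Recompute alpha with each item dropped in turn; an item whose removal
    raises alpha above the full-scale value is a candidate for removal."""
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]

# Illustrative data: two consistent items and one noisy item.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [4, 1, 3, 2]]
full = cronbach_alpha(items)       # low: the noisy third item hurts reliability
dropped = alpha_if_deleted(items)  # dropping the third item raises alpha
```

Dropping the noisy item raises alpha well above the full-scale value, flagging it as a removal candidate; the article’s approach adds validity considerations on top of reliability alone.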

Despite its age, the SUS remains a mainstay of product usability measurement, but there have been some recent advances practitioners should be aware of, including a potential replacement for it: the shorter UMUX-Lite. Finally, the Net Promoter Score continues to be the metric people love to hate (but use anyway), and I provided some original research on how changing the number of scale points affects the score (hint: not too much) as well as continued advice on why the mean might be better than its traditional top-two-box (promoters) minus bottom-seven-box (detractors) scoring approach.
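
To make the two scoring approaches concrete, here’s a small sketch (the sample ratings are invented):

```python
def nps(ratings):
    """Traditional Net Promoter Score: percent promoters (9-10)
    minus percent detractors (0-6) on the 0-10 likelihood scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def mean_score(ratings):
    """Alternative scoring: the mean of the raw 0-10 responses,
    which retains more information than collapsing to promoter/detractor boxes."""
    return sum(ratings) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 5, 10, 2, 9]
print(nps(ratings))         # 20.0
print(mean_score(ratings))  # 7.5
```

Note how the top/bottom-box split throws away the distinction between a 0 and a 6 (both count as detractors), which is part of the statistical argument for the mean.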

UX & Measurement

There’s been a lot of talk lately about design thinking, but scientific thinking for design is often just as effective. Measuring the user experience can seem an opaque and fuzzy process, but I helped demystify it with ten measurement resources for UX designers, ten concepts management needs to know about UX measurement, and updates to common mobile experience metrics. UX continues to be a popular field, and I provided some advice for getting into the profession as well as thoughts on whether a UX certification is worth the cost.

We conducted an international industry survey in 2017 to better understand UX maturity, how it differs across organizations, and what the typical ratio is between designers and developers (a perennial question for UX managers).

Personas & Customer Segmentation

Understanding how your customers differ (and how they’re similar) is important for better tailoring design and marketing. I wrote about a better way to segment (based on jobs to be done rather than attributes) and some of the technical details behind identifying clusters in your data. Using these concepts, I described our scientific approach to better persona development and presented it at UXPA along with Jan Moorman from Walmart. It’s since become the most popular article on our site.
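
As a toy illustration of what “identifying clusters” means, here’s a minimal one-dimensional k-means in plain Python (real segmentation would use several variables and a statistical package; the data here are invented):

```python
def kmeans_1d(values, k=2, iters=50):
    """Toy k-means on a single numeric attribute (assumes k >= 2)."""
    # Initialize centers spread evenly across the data range.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# E.g., a usage-frequency variable with two natural groups of customers.
values = [1, 2, 2, 3, 10, 11, 12, 13]
centers, groups = kmeans_1d(values)
```

The two recovered centers (2.0 and 11.5) separate the light and heavy users, which is the basic move behind data-driven (rather than demographic-guess) personas.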

Online Research & Panels

Online research methods, such as surveys and unmoderated UX studies, are increasingly common among researchers. I put together nine recommendations to make these studies more effective and provided some guidance on how long these studies typically take participants. Large-sample online research usually provides a lot of numbers, but you need ways to understand the “why” behind the numbers, including examining open-ended comments systematically.

One common approach for obtaining a large enough qualified sample for online surveys and UX benchmarking is to work with online panels. Here are eight things to consider when working with online panels and some additional research on the accuracy of estimates, including UX metrics, from these online sources of participants. Regardless of the source of participants, you need to “clean” your data to remove poor-quality responses.
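
A minimal sketch of what that cleaning step might look like (the record format, threshold, and checks are illustrative assumptions, not a standard procedure):

```python
def flag_low_quality(responses, min_seconds=60):
    """Return indexes of likely poor-quality responses.

    Each response is a dict with "seconds" (completion time) and
    "ratings" (the respondent's scale answers). Two common checks:
    speeders (finished implausibly fast) and straightliners
    (gave the identical rating to every question).
    """
    flagged = []
    for i, r in enumerate(responses):
        speeder = r["seconds"] < min_seconds
        straightliner = len(set(r["ratings"])) == 1
        if speeder or straightliner:
            flagged.append(i)
    return flagged

responses = [
    {"seconds": 35,  "ratings": [4, 2, 5, 3]},   # speeder
    {"seconds": 240, "ratings": [5, 5, 5, 5]},   # straightliner
    {"seconds": 300, "ratings": [4, 2, 5, 1]},   # looks fine
]
print(flag_low_quality(responses))  # [0, 1]
```

In practice, you’d review flagged records rather than drop them automatically, and tune the thresholds to your study’s length.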

Data Visualization

Visualizing data can help with interpretation and communication. I provided ten best practices for graphing and displaying data from surveys and online research and four ways to visualize the behavior of participants interacting with websites.

We’ll see you in 2018, when we have plenty of new articles planned for better measuring and managing the user experience.