The seven-point Single Ease Question (SEQ®) has become a standard in assessing post-task perceptions of ease. We developed the SEQ over a decade ago after our research showed it performed comparably to or better than other single-item measures. It is an extension of an earlier five-point version that Tedesco and Tullis (2006) found performed best
In earlier articles, we investigated the effects of manipulating item formats on rating behaviors—specifically, we compared sliders to traditional five-point and eleven-point radio button numeric scales. For those analyses, we collected data from 212 respondents (U.S. panel agency, late January 2021) who used radio buttons and sliders to rate online shopping websites (e.g., Amazon) and
We conduct a lot of surveys and unmoderated studies at MeasuringU® using our MUIQ® platform. One of the first steps in these studies involves both screening participants (ensuring you have the right people) and characterizing them (collecting enough information, such as prior experience, for later analysis). While there are over a dozen
Adding more points to a scale can increase its reliability and sensitivity. But more points also take up additional screen real estate. Imagine twenty or a hundred points displayed on desktop or, even worse, on mobile. One recent digital alternative, allowing for nuanced ratings using the same screen real estate as a traditional scale, is the
When it comes to collecting numeric ratings in online surveys, there is a definite allure to using sliders rather than the more common numeric scales with radio buttons. It just seems like you should get higher-quality measurements with sliders. Sliders give respondents many more response options, and they appear more engaging than multipoint scales. The
If you build it, they will come. That may work for a field of dreams. But when it comes to software and products, if you want people to stay and use the product, it had better be useful and usable. Or, at least, the users should think that it will be useful and usable. That’s
What makes a product successful? How does a new technology get adopted? Whether business software, a mobile app, or a physical product, there are plenty of examples of products that had a lot of promise but failed, and others that many consider a success. Plenty of books expound theories on developing a successful product (e.g.,
Researchers love to argue about the “right” number of points to use in a rating scale response option. Is the right number five, seven, three, ten, or eleven? Opinions often outnumber the data available to drive those decisions. When there are data, they are often hard to generalize, or they don’t really support the
The Net Promoter Score is ubiquitous, with many large organizations using it as a key metric. But despite its widespread adoption, there are vocal critics. It’s been called snake oil, deceptive, fake science, and harmful. In our webinar series and on our website, we’ve addressed several aspects of the NPS, including the enmity toward it.
In a famous Harvard Business Review article published in 2003, Fred Reichheld introduced the Net Promoter Score (NPS). The NPS uses a single likelihood-to-recommend (LTR) question (“How likely is it that you would recommend our company to a friend or colleague?”) with 11 scale steps from 0 (Not at all likely) to 10 (Extremely likely).
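The excerpt above describes only the LTR question itself; the score derived from it follows Reichheld’s standard grouping (detractors 0–6, passives 7–8, promoters 9–10). A minimal sketch of that computation, using made-up ratings for illustration:

```python
# Sketch of the standard NPS computation from 0-10 likelihood-to-recommend
# (LTR) ratings. Groupings follow Reichheld's definition: promoters rate
# 9-10, detractors rate 0-6, and passives (7-8) are counted in the total
# but otherwise ignored. The ratings list below is hypothetical.

def net_promoter_score(ratings):
    """Return the NPS: percentage of promoters minus percentage of detractors."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(net_promoter_score(ratings))  # 5 promoters, 2 detractors -> 30.0
```

Note that because passives dilute both percentages, two samples with the same promoter count can yield different scores depending on how many passives they contain.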