Online panels are the go-to method for collecting data quickly for market and UX research studies.
Despite their wide usage, surprisingly little is known about these panels, such as the characteristics of the panel members or the reliability and accuracy of the data collected from them.
While there isn’t much published data on the inner workings of panels, we’ve conducted our own research and compiled findings from the literature to provide insights for researchers.
Here are 8 things to consider when using a panel in your next online research study.
1. Participants belong to multiple panels.
There are a lot of choices among online panel providers: Op4G, Toluna, Cint, SSI, Research Now, and Harris, to name a few. And there's nothing stopping participants from signing up for more than one panel. In fact, one meta-analysis found that as many as half of panel participants belong to five or more panels! The implication is that a minority of participants account for a disproportionate amount of the participation. Signing up for multiple panels increases a member's chances of getting access to more studies and therefore more rewards.
2. Multiple panel membership affects attitudes.
Belonging to multiple panels isn't necessarily a bad thing, but repeated exposure can have an effect on results. Participants who belong to multiple panels tend to have above-average brand awareness and higher stated intent to purchase products (soup or paint in the studies). Participants who belong to multiple panels and took many surveys over the course of a year also tended to be inattentive respondents, completing studies much too quickly and providing only terse responses to open-ended questions.
3. Length of panel membership also has an effect.
One study found that participants who had been with a panel longer (more than 3 years) were less likely to recommend a product than newer members (37% vs. 50%). While there are likely a number of reasons for the difference, I suspect it has something to do with study exposure and even desensitization. This desensitization may not be a bad thing; participants generally tend to overestimate their likelihood to do things (like purchase or recommend). What is somewhat problematic, though, is not knowing whether your respondents are new to the panel (likely prone to recommend) or seasoned (somewhat jaded and less likely to recommend).
4. The average online study lasts around 17 minutes.
Study length is a major contributor to dropout. A meta-analysis of 11 panels found the average study time is 17 minutes, with around half of studies lasting 20 minutes or more, including 13% that were 30 minutes or more. That average is a bit higher than what we see in our own studies, but it can help answer questions about how long is too long relative to the other studies panel members are taking.
5. Estimates vary.
It's common to need to estimate the attitudes or stated behaviors of people in the general population. Values computed from samples pulled from online panels are used to estimate those population values (these are called point estimates). Research has shown that point estimates vary, in some cases quite substantially, depending on the panel used, and can differ from known external benchmarks. It isn't uncommon for point estimates of metrics such as intent to purchase and brand awareness to vary by 15-18 percentage points. This discrepancy was seen in a variety of measures, including demographics, stated behaviors (smoking and newspaper readership), brand awareness, and likelihood to purchase.
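To see why panel-to-panel differences of 15-18 percentage points matter, it helps to compare them to ordinary sampling error. The sketch below (with invented numbers; the panel names and rates are hypothetical, not from the studies above) computes a point estimate and a simple 95% Wald confidence interval for a proportion:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and 95% Wald confidence interval for a proportion."""
    p = successes / n
    moe = z * math.sqrt(p * (1 - p) / n)  # margin of error
    return p, max(0.0, p - moe), min(1.0, p + moe)

# Hypothetical data: stated intent to purchase from two panels of 400 respondents each.
p1, lo1, hi1 = proportion_ci(160, 400)  # Panel A: 40%
p2, lo2, hi2 = proportion_ci(224, 400)  # Panel B: 56%

print(f"Panel A: {p1:.0%} (95% CI {lo1:.0%} to {hi1:.0%})")
print(f"Panel B: {p2:.0%} (95% CI {lo2:.0%} to {hi2:.0%})")
```

With these numbers, each panel's margin of error is only about ±5 percentage points, so a 16-point gap between panels can't be explained by sampling error alone; the panels themselves are producing different estimates.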
6. UX metrics vary too, but not as much as expected.
The point estimates for UX metrics (like the SUPR-Q and NPS) varied between panels, as did more general demographic and psychographic variables. On average, the differences were between 3% and 10%, but in some cases they exceeded 20%. This variance was less than we expected given the more ethereal nature of UX metrics.
7. Changing panels changes estimates.
Differences in estimates between panels can, in many cases, exceed real differences in the population. Our recommendation is not to change panels, especially when making comparisons over time, such as likelihood to recommend a product or brand attitudes. Sometimes, though, changing panels is unavoidable. If you're making comparisons to historical data and had to change panels, tell your readers to use caution when interpreting the results and provide any information on differences in panel characteristics (if known).
8. Probability panels are probably better.
Most online panels are non-probability panels: they obtain their members using online ads, snowball sampling, river sampling, and direct enrollments, and they don't sample proportionally from the general population. Probability panels, in contrast, as the name suggests, ensure that every member of a population (often an entire country) has at least some chance of being selected to respond to a study. Probability panel companies have measures in place to ensure some level of representativeness, often for hard-to-reach populations. As expected, probability samples, while rare, tend to (but don't always) perform better than non-probability samples by more closely matching external benchmarks.
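One common way non-probability panels are adjusted toward representativeness is post-stratification weighting: respondents in over-represented groups are weighted down and under-represented groups weighted up to match known population proportions. The sketch below uses invented age-group targets and intent-to-purchase rates purely for illustration:

```python
# Hypothetical census-style population targets vs. the panel's actual composition.
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample     = {"18-34": 0.50, "35-54": 0.30, "55+": 0.20}

# Post-stratification weight for each group: target share / sample share.
weights = {g: population[g] / sample[g] for g in population}

# Hypothetical intent-to-purchase rate within each age group.
intent = {"18-34": 0.60, "35-54": 0.45, "55+": 0.30}

unweighted = sum(sample[g] * intent[g] for g in sample)
weighted = sum(sample[g] * weights[g] * intent[g] for g in sample)

print(f"Unweighted estimate: {unweighted:.1%}")
print(f"Weighted estimate:   {weighted:.1%}")
```

Because the hypothetical panel skews young and younger respondents state higher intent, the unweighted estimate overstates the population value; weighting pulls it back toward the target mix. Weighting helps, but it can only correct for characteristics you can measure and have benchmarks for, which is part of why probability samples still tend to track external benchmarks more closely.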