There’s no shortage of methods available to the UX researcher.
The methods can generate both qualitative and quantitative data, and many of them complement each other.
A mixed-methods approach that combines qualitative and quantitative methods gives you a better picture of both the frequency ("how much") and the reasoning ("why") behind the numbers than an approach that's only qualitative or only quantitative.
While there are dozens of methods and techniques we describe and use at MeasuringU, many of the methods are just variations and combinations of broader methods that cross the behavioral sciences. The most common of these broader methods are surveys, experiments, observations, interviews, and focus groups. Understanding the strengths and weaknesses of the methods helps you make better decisions on the right method or methods for the research effort.
A survey includes questions about any phenomenon, such as an event, organization, or experience, that a sample of participants answers. Surveys can be administered electronically, telephonically, or in person. They're popular because they're so adaptable to many research needs. They can contain a mix of open- and closed-ended items that have participants reflect on specific statements or provide their insights on specific questions.
Items can include descriptions about age, gender, or affiliation as well as attitudinal items such as satisfaction, likelihood to attend an event, or intent to use a mobile app to pay a bill.
Surveys work well when you know the right questions to ask, how to ask them, and whom to ask them. As such, they are best used when you have a well-defined topic. Surveys can usually collect data efficiently from a large sample and are good for answering the question of "how many." Assuming participants are randomly and representatively selected, the quantitative results, including descriptive statistics and confidence intervals, can put realistic upper and lower boundaries around responses to items. Surveys can further refine or test a hypothesis formed from existing studies and other methods.
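To make the confidence-interval idea concrete, here's a minimal sketch of putting upper and lower boundaries around a survey proportion using the Wilson score interval (the specific counts are hypothetical, not from any real survey):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical result: 120 of 400 respondents say they'd pay bills in the app
low, high = wilson_interval(120, 400)
print(f"Observed 30%; plausible population range: {low:.1%} to {high:.1%}")
```

With 400 respondents, the plausible range around an observed 30% is a few percentage points wide; with a smaller sample, the same calculation produces a much wider and less useful range.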
For example, a company may want to better understand why its customers don't use its mobile app to pay their credit-card bills. Using a survey, researchers can ask an existing set of customers specific questions about how they use mobile phones and how they pay credit-card bills; customers can also provide reasons for why they may or may not pay bills through an app.
You can then examine the responses statistically, using frequency distributions, ANOVA, or factor analysis to identify patterns. Researchers can then code open-ended responses, create new variables, and cross-tab them with demographic variables such as age or experience with technology. This may lead researchers to discover that older customers are statistically less likely to use an app.
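A cross-tab analysis like the one described can be sketched in a few lines with pandas and SciPy (assuming both are available); the response counts below are fabricated for illustration:

```python
# Hypothetical survey data: age group vs. whether the customer pays bills in the app
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.DataFrame({
    "age_group": ["18-34"] * 40 + ["55+"] * 40,
    "uses_app":  ["yes"] * 30 + ["no"] * 10 + ["yes"] * 12 + ["no"] * 28,
})

# Cross-tabulate age group against app usage
table = pd.crosstab(responses["age_group"], responses["uses_app"])

# Chi-square test of independence: is app usage related to age group?
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```

A small p-value here would support the (hypothetical) finding that older customers are statistically less likely to use the app, though a survey alone can't tell you why.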
An experiment tests a hypothesis. You manipulate independent variables and measure the effects on a dependent variable. An experiment works best with well-defined measures and a narrow and verifiable hypothesis. An experiment helps establish causation through proper control of variables, establishing associations and temporal precedence.
A hallmark of the experimental method is using quantitative outcome variables, which are then statistically tested and often include some measure of experimental effect. Using different levels of control and randomization can help you better understand causation. Examples of experiments include the randomized controlled trial, quasi-experimental design, and correlational study.
For example, the company may come up with some new messaging and designs to improve the mobile app experience. The experiment might involve randomly assigning a representative sample of older participants to the existing and new designs to see whether attitudes and behaviors are changed. Pre- and post-test measures are collected and nuisance variables (like prior experience) are controlled.
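The random-assignment step in an experiment like this can be sketched as follows (the attitude scores are simulated under the assumption that the new design rates somewhat higher; nothing here reflects real data):

```python
# Sketch of a between-subjects experiment: randomly assign participants to the
# existing or new design, then compare a post-test attitude rating.
import random
from scipy.stats import ttest_ind

random.seed(1)
participants = list(range(100))
random.shuffle(participants)                       # random assignment
existing, new = participants[:50], participants[50:]

# Simulated 7-point attitude ratings; we assume the new design scores a bit higher
scores_existing = [random.gauss(4.2, 1.0) for _ in existing]
scores_new      = [random.gauss(4.9, 1.0) for _ in new]

# Independent-samples t-test on the two randomly assigned groups
t, p = ttest_ind(scores_new, scores_existing)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Because assignment is random, a statistically significant difference between groups can be attributed to the design change rather than to preexisting differences between participants.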
In an observational study, the result is a rich description of the events, people, and interactions around the topic of interest as you observe them. Unlike an interview, which is more contrived and often not in the natural setting of the participant, direct observation establishes the authenticity of the findings. Ethnographic research relies primarily on observation as a means for data collection.
The observation can be passive or active and the observer can be either covert or overt. The role and engagement of the observer has pros and cons.
In a passive role, the participants aren’t aware of the research question or in some cases aren’t even aware they are being observed—which can lead to an ethical dilemma.
With an active role, you engage with the participants or events under study. The participants are aware they're being observed, who the observers are, and often the research questions. Because of your direct participation, you need to consider and mitigate the effects you have on those you observe.
A subset of observation is the collection and examination of relevant artifacts that describe the phenomenon (often called content analysis). These can be videos, Google Analytics statistics, news stories, receipts, letters, or any material that provides information on what happened and why it happened. Such collection is a hallmark of the case study approach where significant effort can be placed on organizing the documented history of an organization or event.
For example, a researcher at the company may passively observe its customers in a coffee shop that has pay-by-mobile technology. The researcher can listen for comments describing why people are or are not using their phones and observe directly how each customer reacts when confronted with the new technology.
Interviews involve direct dialogue with participants. Interviews can be open-ended as the participant reflects on specific events or ideas, or you can direct the interview with specific questions. They can be conducted in one sitting or over time (weeks, months, or years). They are typically conducted one-on-one, which allows the interviewee to reflect deeply on a topic without being influenced by other participants.
Interviews can be used early in the research process to better understand an event (such as in a phenomenological study or case study) or later in a research effort to better understand the motivations or reasons behind actions or attitudes in experiments and surveys. In this sense, interviews can complement surveys and focus groups to better understand a phenomenon.
For example, researchers at the company may want to better understand what specific habits, beliefs, or influences lead older participants to not use a mobile app. They can then look for patterns through multiple interviews to form hypotheses. They may learn that one participant was told by his techy grandson that hackers can steal information right through the air. In another interview, they might learn that a customer saw a news story about how credit card information is stolen online all the time.
A focus group involves direct dialogue with a group of participants at the same time. While some may consider it a type of interview, it generates different data than interviews because participants' ideas influence each other in the group setting. This influence can be beneficial or detrimental to the authenticity and trustworthiness of the data. On the detrimental side, some participants may not speak up for fear of being criticized by others in the group.
However, on the flip side, there may be “safety” in numbers, as participants feel more open to discuss ideas and can build upon each other’s comments. Focus groups in this way can be used to generate ideas for new products or concepts or to get some early feedback on the types of questions to ask in surveys or in directed interviews.
For example, a researcher may get ideas about how to minimize the fear of online identity theft by hearing a participant wonder whether it really is safer to pay bills through a mobile app like his nephew said. Another participant in the focus group may then add that she also heard Wi-Fi can be hacked and wondered which was safer: cellphones or Wi-Fi?
Many people who are unfamiliar with usability testing call it a 1:1 focus group. In fact, we've heard the label so often we jokingly call the focus group the "F-word." The major difference is that focus groups generally don't involve much observation (what users do) and rely instead on group interviewing (what users say).
Despite the plethora of methods within UX research and across the behavioral sciences, most methods can be grouped into broader methods of data collection: the survey, experiment, observation, interview, and focus group. Each has its strengths and weaknesses, and the "best" method depends on the research goals of the study.
As you’re deciding which method works for the type of data you want to collect, remember that you’ll find most success when you combine methods. Here are a few types of studies that do so:
- A benchmarking usability study (summative evaluation) is a hybrid of a survey and experiment. There are tighter controls on the study and participants are asked to attempt tasks (which are observed) and answer questions (for example, a survey).
- An A/B test is an instance of the broader experiment method.
- When a phenomenon is difficult to measure, not well defined, or when there isn’t a clear motivation or hypothesis, the observation, interview, and focus group methods are more appropriate than a survey and experimentation approach.
- A formative usability test is a combination of direct observation and interview. By watching what users do, listening to what they say, and then probing on these observations, you can uncover problems in an interface. A study like this bridges the gap between what users do (observation) and what they say (interview).
- A tree test can be considered a special type of usability test. A comparative tree test falls more under the survey and experiment methods; a stand-alone tree test in its early stages is more of a survey, and possibly an interview (if you're conducting the tree test in person).
This table compares the common data collection methods you'll often use in your research.
| Method | Typical Sample Size | Hypothesis | Primary Analysis | Analysis Method | Focus |
|---|---|---|---|---|---|
| Survey | Small to large (30–1,000+) | Formed | Quantitative | Descriptive & statistical | How many |
| Experiment | Medium to large, based on effect size (30+) | Well-formed and testable | Quantitative | Experimental & statistical | Establish causation |
| Observation | Depends on unit of analysis (but 3–30) | None | Qualitative | Transcription & coding plus document and artifact review | What and where it happens |
| Interview | 1 to 2 | None | Qualitative | Transcription & coding | In-depth "why" |
| Focus Group | 5–10 per group | Open | Qualitative | Transcription & coding | Uncover and explore ideas |