What Are UX Research Deliverables?

Jeff Sauro, PhD • Jim Lewis, PhD

As professionals, we’re judged on what we produce. So-called deliverables are the artifacts produced by researchers. But what are UX research deliverables?

Deliverables are almost always a digital record of inputs, outcomes, and recommendations in a document or presentation. But delivering documents and presentations fits the description of just about all knowledge worker output, so what is special about UX research deliverables?

UX designers have the benefit (and burden) of delivering visually distinct outputs, which are certainly different from market analyses. UX researchers, on the other hand, aren’t typically involved in delivering designs, wireframes, or specs (the sort of artifacts that tend to dominate online examples and lists).

What differentiates UX research deliverables? It depends on the type of UX method. But the deliverable itself isn’t the insight. Instead, it’s the vehicle that communicates it.

One of the benefits of working at MeasuringU is that we get to work on a variety of UX research projects that create varied deliverables.

Here’s a list of common UX research deliverables classified as interim, final, and artifacts.

Interim Deliverables

For most empirical UX research methods (e.g., interviews, usability tests), you need to agree on who the participants will be and what information you hope to get. Like blueprints for a house, these interim deliverables can stand on their own and be adapted or adopted for future work.

Test Plans and Research Briefs

The test plan is a broad deliverable that defines the study objectives, hypotheses, participants, tasks, and logistics before data collection. Test plans are often called research briefs and typically precede a study being commissioned internally or from an external vendor. Test plans should include clear links to business questions, a realistic scope, and stakeholder-aligned goals (e.g., success metrics and timing).

Screeners

Screeners are the way to ensure you’re recruiting the right participants. Screener deliverables usually look like survey documents with indications of where to screen out (terminate) potential participants and the quotas (min or max) needed to get the desired sample.

Screeners are often programmed into a survey platform like MUiQ® when delivered electronically to a panel or list of prospective participants. For most unmoderated studies (e.g., UX benchmarks or surveys), screeners appear at the beginning to avoid wasting the time of unqualified respondents.
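The core mechanics of a screener (terminate unqualified respondents, then enforce quotas) can be sketched in a few lines. This is only an illustration with hypothetical questions and quota cells, not how any particular survey platform implements it:

```python
# Minimal sketch of screener logic (hypothetical question and quota cells):
# terminate unqualified respondents, then enforce a max quota per cell.

quotas = {
    "ages_18_34": {"max": 50, "count": 0},
    "ages_35_plus": {"max": 50, "count": 0},
}

def screen(respondent):
    """Return 'terminate', 'quota_full', or 'qualify' for one respondent."""
    # Screen-out: in this hypothetical study, only current product users qualify.
    if not respondent.get("uses_product"):
        return "terminate"
    # Quota check: each age cell accepts respondents only until its max is hit.
    cell = "ages_18_34" if respondent["age"] < 35 else "ages_35_plus"
    if quotas[cell]["count"] >= quotas[cell]["max"]:
        return "quota_full"
    quotas[cell]["count"] += 1
    return "qualify"

print(screen({"uses_product": True, "age": 28}))  # prints "qualify"
```

In practice, a survey platform also handles randomization and branching, but the terminate/quota decision above is the essence of what a screener document specifies.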

Discussion Guides and Study Scripts

For methods that include a moderator (e.g., interviews, usability tests), a discussion guide is another important interim deliverable. The discussion guide documents the questions to ask, expected responses, and notes to the moderator on what topics to probe, explore, or avoid.

For unmoderated studies like UX benchmarks or surveys, we also include a document we call a study script. Like the discussion guide for moderators, the study script includes the questions and tasks presented to participants and notes about the study that participants never see. These notes include quotas, screenouts, and study programming details such as randomization, logic, and branching.

Final Deliverables

UX researchers document the results and implications of their analyses in various types of final deliverables.

Slide Decks/Presentations

Someone should tell Edward Tufte that PowerPoint is still alive and kicking. And it has a twin, Google Slides (and more recently, cousins in Miro, Notion, and Figma). The slide presentation (or slide deck, or just deck) remains the default deliverable. It’s meant to tell a coherent story from research questions and findings to insights, with minimal text and strong data visualization. It’s often (but not always) presented live to support discussion and Q&A.

The challenge with slide presentations is that they often have to act as a visual overview, a comprehensive report, and an executive summary all in one.

The presentation contents depend heavily on the type of study. Formative usability tests will have a prioritized list of problems and examples. In-depth interviews and contextual inquiries will focus on themes and insights. Benchmark studies will have study- and task-level metrics. Eye-tracking studies will have heatmaps and gaze plots. Information architecture studies will have suggested category names.

Executive Decks

An executive deck is a short, high-level presentation that focuses on key takeaways, recommendations, and next steps derived from reports or other presentation deliverables. It often contains scorecards and can be circulated or presented to executive stakeholders.

Reports and Publications

When the details matter and you need to provide comprehensive findings, a written report in document format (Word or Google Docs) is needed. The written report allows you to include a narrative along with visuals, plus more nuance in quantitative findings, references, and explanations. Amazon is somewhat famous for mandating six-page written reports, a practice reportedly inspired in part by Tufte’s critique of PowerPoint. A report is often the precursor to a white paper, peer-reviewed journal article, or other public-facing publication that needs to withstand scrutiny.

Scorecards and Dashboards

For UX benchmark studies or any quantitative analysis that has multiple scores across comparable products, a scorecard provides a compact visual overview of leaders and laggards. Scorecards are also a key deliverable in PURE evaluations.

Highlight Reel

A highlight reel is a (typically video) compilation of participant quotes or interactions that illustrate points made in a report or slide presentation. It’s common to organize clips by problem areas or themes.

Problem List

A problem list documents the usability problems uncovered in a usability test or inspection method such as a heuristic or PURE evaluation. Elements in the list typically include a full problem description (what the user tried to do, what happened, why that’s a problem) and a description of the evidence that led to identifying the problem (e.g., problem frequency and severity, violated heuristic). The list might also feature design recommendations to eliminate or limit the problem, but this depends on whether the researchers are sufficiently familiar with the design space to provide such recommendations.

Banner Tables

For large-scale surveys with multiple segments, it’s helpful to display results in a format known as a banner table. Banner tables provide cross-tabulated results by key segments (e.g., demographics, personas, behaviors) to reveal group differences.
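The underlying structure of a banner table is just a cross-tabulation. Here is a minimal sketch using pandas with hypothetical survey data (the column names and segments are invented for illustration):

```python
# Minimal sketch of a banner-style cross-tab with pandas (hypothetical data).
import pandas as pd

# Hypothetical survey responses: each row is one respondent.
responses = pd.DataFrame({
    "segment": ["New", "New", "Returning", "Returning", "New", "Returning"],
    "would_recommend": ["Yes", "No", "Yes", "Yes", "Yes", "No"],
})

# Cross-tabulate recommendation by segment, with column percentages
# so each segment column sums to 1.0, as in a typical banner.
banner = pd.crosstab(
    responses["would_recommend"],
    responses["segment"],
    normalize="columns",
)
print(banner.round(2))
```

A full banner table from a survey platform would repeat this structure for every question, with each segment as a column group, often with significance tests between columns.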

Journey Maps/Experience Maps

A journey map visualizes the beginning-to-end experience of a flow such as onboarding or purchasing. Data supporting the narrative of a journey map can be qualitative (e.g., quotes), quantitative (e.g., ease or satisfaction scores), or both.

Persona

The often-maligned persona (or user archetype) is a deliverable that synthesizes qualitative (and ideally quantitative) data about the typical characteristics of a segment. Personas usually include a fictitious picture and differentiating behavioral and demographic details. While personas can be included in slide presentations, they are usually also standalone deliverables with summaries and details for each persona for use by design teams and product stakeholders.

Affinity Diagram

Just about any qualitative method that produces raw quotes and notes needing organization can be summarized in a visual overview. The resulting affinity diagram is associated with the famous sticky notes on a whiteboard. Affinity diagrams capture verbatim comments, cluster them visually, and show the frequency and intensity of themes along with the rationale for the synthesis.

Site Map/Information Architecture

A card-sort and/or tree test can produce a site map or information architecture diagram as a research deliverable. It represents how users expect information to be organized and labeled. These can be included in a presentation, but they are often used and circulated separately.

Typing Tool

Typing tools are lightweight classification models, typically built in a spreadsheet, that classify participants into segments. They are a common deliverable from segmentation analyses. Typing tools use survey inputs to quickly “type” a respondent for screening or for characterizing groups in results.
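The mechanics of a typing tool can be sketched as a set of linear scoring functions, one per segment: score a respondent’s answers against each segment’s weights and assign the highest-scoring segment. The segment names and weights below are entirely hypothetical; in a real typing tool they would come from the segmentation analysis (e.g., discriminant function coefficients):

```python
# Minimal sketch of a spreadsheet-style typing tool (hypothetical segments
# and weights; real weights would come from the segmentation analysis).

# Each segment gets a linear scoring function over survey inputs.
SEGMENT_WEIGHTS = {
    "Power User":  {"intercept": -2.0, "usage_per_week": 0.8, "tools_owned": 0.5},
    "Casual User": {"intercept":  1.0, "usage_per_week": 0.1, "tools_owned": 0.1},
}

def type_respondent(answers):
    """Score a respondent against each segment and return the best match."""
    scores = {}
    for segment, weights in SEGMENT_WEIGHTS.items():
        score = weights["intercept"]
        for question, weight in weights.items():
            if question != "intercept":
                # Missing answers default to 0 in this sketch.
                score += weight * answers.get(question, 0)
        scores[segment] = score
    return max(scores, key=scores.get)

# A heavy user scores highest on the hypothetical "Power User" segment.
print(type_respondent({"usage_per_week": 10, "tools_owned": 3}))
```

Delivered as a spreadsheet, the same logic is a few SUMPRODUCT formulas, which is what makes typing tools easy for clients to reuse during recruiting.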

Artifacts

When a project is finished, key artifacts about what went into the findings and recommendations can also act as deliverables. These artifacts can be used to derive additional findings, to double-check insights, or to contribute to future work. These are usually not formally presented.

Raw Data

Raw data can take many forms, including raw survey responses (before cleaning), scoring sheets for lab-based studies, codebooks, exports from eye-tracking, dendrograms for card sorts, or click paths from unmoderated studies. Raw data can be used to check, re-analyze, and extend previous analyses.

Moderator Notes

The codes and notes moderators take as they use the discussion guide can be used to better understand (or question) reported insights and recommendations.

Participant Videos

Having video recordings of moderated sessions, interviews, or contextual inquiries allows direct access to what was said or done without interpretation. Being able to review videos can be valuable because, of course, even experienced researchers may disagree on what they see.

Summary

Deliverables are the recorded memory of UX research. Unlike UX design deliverables, which are primarily visual and tangible (what the website will look like, what using a prototype feels like), UX research deliverables must communicate the insights that drive the designs (e.g., what problems people had, how different groups in a sample had different experiences). Below is a summary table of the deliverables listed above.

Deliverable | Used In | Function
Test Plans and Research Briefs | All research studies | Sets goals, participants, methods, metrics
Discussion Guides and Study Scripts | All empirical research | Outlines questions, tasks, and moderator or programming notes to ensure consistency and coverage across sessions
Screener Questionnaire | Recruitment for most empirical methods | Ensures correct mix of participants in sample
Affinity Diagram | Any qualitative method | Organizes raw data into themes
Personas | Derived from interviews, surveys, diary studies | Synthesizes user types and motivations
Journey Maps / Experience Maps | From interviews, contextual inquiry, diary studies | Visualizes end-to-end experience
Slide Deck / Presentation | From any study | Visually communicates findings and insights
Executive Summary Deck | From any study, often derived from reports | Highlights key takeaways and recommendations
Report / Publication | Studies requiring detailed documentation | Provides comprehensive findings and analysis
UX Scorecard / Dashboard | Benchmark, PURE, or other quant studies | Summarizes KPIs and trends
Highlight Reel | Usability tests, interviews, contextual inquiry | Illustrates key moments with participant video clips
Problem List | Usability tests, usability inspection methods | Documents the problems discovered during a study
Banner Table | Surveys, concepts, or benchmark studies | Cross-tabulates results by key segments
Typing Tool | Segmentation or clustering studies | Classifies participants into data-driven segments
Site Map / Information Architecture | Card sorts, tree tests | Defines or validates information structure based on user input
Raw Data | All empirical and quantitative methods | Enables verification, re-analysis, and transparency
Moderator Notes | Moderated interviews and usability sessions | Captures contextual observations during sessions
Participant Videos | Moderated sessions, interviews, field studies | Preserves original user behavior for review

Table 1: Summary of common UX deliverables.

In future articles, we’ll take a closer look at what goes into some of these deliverables, from the anatomy of a usability report to how to design an effective scorecard.
