What Is the Product-Market Fit (PMF) Item?

Jim Lewis, PhD and Jeff Sauro, PhD

A social media platform, a GPS heads-up display, a video streaming platform: all are examples of viable products … that failed. These failures came despite often substantial funding and putatively useful features, and they were blamed on poor product-market fit.

Predicting a prospective product’s success in the market isn’t easy, but investors and product makers have a strong incentive to know as early as possible whether a product will succeed or fail.

Systematic attempts at measuring product fit usually include assessing attitudes toward features both before and after people have had a chance to use the product.

Other measures used for assessing potential product success include the Net Promoter Score (NPS), which is a measure of recommendation intention, and the Customer Effort Score (CES), which measures the ease of working with a company. Both have been used as predictors of growth, but they are problematic when used for new products.

Despite their use in customer experience practice and some evidence that the NPS is a reasonable proxy for future growth, neither the NPS nor the CES seems directly on point for assessing how well a product fits its market.

In recent years, however, an item has been proposed to assess product-market fit (PMF). In this article, we describe this item and its use.

The PMF Item

Sean Ellis is credited with creating the PMF item based on his experience working with startup companies.

Format of the PMF

Figure 1 shows the PMF item format. The item stem asks respondents to imagine how they would feel if they couldn’t use the product, and the response options are various levels of disappointment. The standard scoring for the item is the “top-box” score—the percentage of “Very disappointed” responses.

Figure 1: The PMF item (created in MUIQ®).
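To make the scoring concrete, here is a minimal Python sketch of the top-box calculation described above. The response strings mirror the options shown in Figure 1; excluding “N/A” responses from the denominator is our assumption for this sketch, not a documented rule of the PMF survey.

    # Minimal sketch of top-box scoring for the PMF item (illustration only).
    # Assumption: "N/A" responses are dropped before computing the percentage.
    VERY_DISAPPOINTED = "Very disappointed"
    NOT_APPLICABLE = "N/A - I no longer use this product"

    def pmf_top_box(responses):
        """Return the proportion of eligible respondents choosing 'Very disappointed'."""
        eligible = [r for r in responses if r != NOT_APPLICABLE]
        if not eligible:
            return 0.0
        return sum(r == VERY_DISAPPOINTED for r in eligible) / len(eligible)

    # Example: 52 of 120 eligible respondents chose "Very disappointed".
    sample = ([VERY_DISAPPOINTED] * 52
              + ["Somewhat disappointed"] * 48
              + ["Not disappointed (it isn't really that useful)"] * 20)
    print(round(pmf_top_box(sample), 2))  # 0.43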

Usage

The PMF item is the key item in a larger PMF survey that is primarily made up of open-ended questions (much of the qualitative content of the PMF survey is proprietary). The recommended use of the survey is to send it to a random sample of people who have appropriate experience with the product (core usage at least twice in the last two weeks).

The result of the PMF item is taken as a key indicator of whether a company has a good enough product-market fit to support growth. As described at pmfsurvey.com, “After benchmarking nearly a hundred startups, Sean Ellis found that the magic number for product/market fit was 40%. Companies that struggled to find growth almost always had less than 40% of users respond ‘very disappointed’ in the survey, whereas companies with strong traction almost always exceeded that threshold.” Using this 40% threshold is known as the “Sean Ellis test.”
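Read literally, the Sean Ellis test is just a threshold check on that top-box proportion. A minimal sketch follows; keep in mind that the 40% cutoff is a heuristic benchmark, not a statistically derived value.

    ELLIS_THRESHOLD = 0.40  # heuristic benchmark attributed to Sean Ellis

    def passes_sean_ellis_test(top_box_proportion, threshold=ELLIS_THRESHOLD):
        """True if the observed 'Very disappointed' proportion meets the 40% benchmark."""
        return top_box_proportion >= threshold

    print(passes_sean_ellis_test(0.43))  # True
    print(passes_sean_ellis_test(0.35))  # False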

Analysis of Available PMF Data

Using Google Scholar, we could not find any peer-reviewed papers describing research with the PMF item or the “Sean Ellis test.” Articles found using a regular Google search, for the most part, described the item and uncritically accepted the benchmark.

Luenendonk (2019) generally supported using the PMF item and the Sean Ellis test, citing the benefits of the item’s simplicity and customer orientation, but he also described some limitations. In particular, he pointed out that it would be risky to base a pass/fail decision solely on whether a survey result exceeds the 40% criterion.

These are reasonable criticisms, but Luenendonk’s advice regarding appropriate sample size is problematic: “The very first question you should ask yourself is how many participants are necessary for you to get conclusive results. The answer to this is not definite. Of course, the more participants you get, the better, but this does not imply that you should wait until you [have] 1000 participants to commence analysis; a sample from about 50 participants could give you just as accurate a result as the 1000 participants would give you.”

While we agree that you don’t need to wait until you have 1,000 participants, there is a price to pay in precision at a sample size of 50. Using our binomial confidence interval calculator, if your estimate is 40% from a sample size of 50, then the margin of error for 95% confidence is ±13%, making the plausible range from about 27 to 53%. If the sample size is 1,000, then the margin of error is ±3% (plausible range from 37 to 43%).
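For readers who want to reproduce these numbers, here is a minimal Python sketch using the normal-approximation (Wald) margin of error. Our calculator uses an adjusted-Wald interval, so its limits differ slightly from this simple formula at small sample sizes.

    from math import sqrt

    def wald_margin_of_error(p, n, z=1.96):
        """Half-width of an approximate 95% confidence interval for a proportion."""
        return z * sqrt(p * (1 - p) / n)

    for n in (50, 1000):
        moe = wald_margin_of_error(0.40, n)
        print(f"n={n}: 40% +/- {moe:.1%} ({0.40 - moe:.0%} to {0.40 + moe:.0%})")
    # n=50: 40% +/- 13.6% (26% to 54%)
    # n=1000: 40% +/- 3.0% (37% to 43%)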

Jackson (2019), despite having used and promoted the PMF survey, has voiced several criticisms. Using Slack as an example, he suggested that a PMF assessment using the Sean Ellis test might be more informative for early-stage products than for more mature products. On the other hand, a PMF survey of 750 Slack users in 2015 (two years after its announcement) found a PMF score of about 50% (with a margin of error of about 3.5% at 95% confidence).
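The same Wald approximation roughly reproduces the margin of error reported for the Slack survey, using 50% and n = 750 as given above:

    from math import sqrt
    print(f"{1.96 * sqrt(0.50 * 0.50 / 750):.1%}")  # about 3.6%, close to the ~3.5% reported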

Jackson was also concerned about asking customers to guess about their future feelings, based on the general belief that people are “bad at forecasting their future behavior.” As we have discussed in previous articles, this belief is overly negative. People’s future behaviors do not always match their stated behavioral intentions, but there is a positive correlation between intention and behavior (and, presumably, between current expectation and future attitude). Also, asking about past behaviors or attitudes often does not provide a more accurate measurement than asking for a future projection.

Luenendonk and Jackson encourage CX researchers to combine PMF with other business metrics and analyses of customer communications when making major business decisions.

Our Take

Predicting product success is notoriously difficult. A predictor that worked well would be worth millions, so there is a strong incentive to develop one. From what we’ve read so far, the PMF item is interesting, but there is little compelling evidence to support its promotion for use in practice. The 40% threshold sounds authoritative and precise, but it’s based on the intuition of its originator, though to be fair, that intuition is grounded in considerable experience.

Given the effort it takes to get a reasonably precise estimate of a binary metric such as a top-box score, and the squishiness of the current benchmark threshold, we don’t necessarily advise against using the PMF item. However, we advise against giving it undue weight in business decisions, whether for startups or mature products. We have started collecting some PMF ratings in our periodic SUPR-Q® surveys for established products to get some idea about the range of the PMF item for those types of products (and we hope this will inform its use for startups as well). We will publish those findings in future articles.

Summary and Takeaways

Part of a larger, mostly qualitative PMF survey, the PMF item is, on its face, a measure of anticipated disappointment if one were to no longer be able to use a product, with response options of “Very disappointed,” “Somewhat disappointed,” “Not disappointed (it isn’t really that useful),” and “N/A – I no longer use this product.”

The PMF metric is the percentage of survey respondents who choose “Very disappointed.” The “Sean Ellis test,” a widely used benchmark for interpreting the PMF, states that scores exceeding 40% indicate good product-market fit.

Most web articles describing the PMF uncritically report this benchmark. However, a few writers have made criticisms of varying quality. For example, while still supporting its collection and interpretation, some CX researchers have appropriately noted the lack of rigor in establishing the 40% benchmark and warned against giving it too much weight in business decisions.

In addition to these criticisms, CX researchers should also be aware of the inherent difficulty of getting precise estimates of binary measures like the PMF, especially when targets are close to 50%, the point of maximum binomial variability.
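A quick numeric illustration of that last point, using the same Wald approximation as above and an arbitrary sample size of 200, shows the margin of error peaking when the proportion is near 50%:

    from math import sqrt

    n = 200  # arbitrary sample size chosen for illustration
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"p={p:.0%}: +/- {1.96 * sqrt(p * (1 - p) / n):.1%}")
    # p=10%: +/- 4.2%
    # p=30%: +/- 6.4%
    # p=50%: +/- 6.9%   <- widest interval
    # p=70%: +/- 6.4%
    # p=90%: +/- 4.2%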
