Author: Bryan Belanger
What is willingness to pay (WTP) and why does it matter?
Willingness to pay is a simple economic concept that refers to the maximum price a customer is willing to pay for a given product or service offering.
Why does WTP matter? Willingness to pay is often cited as the predominant framework used to set pricing for your SaaS — and more broadly, XaaS — products. If you’re a generalist SaaS product marketer, pricing specialist within a product marketing team, or dedicated pricing strategy leader, you’ve likely explored, encountered or used WTP research methods to set a pricing strategy.
WTP research is often executed prior to the launch of a new or updated product or service offering. The intent of research at this stage is to set or refine list pricing that is planned for the new product or service. While this is the most common impetus, companies with more mature pricing programs also conduct willingness-to-pay research on a quarterly, semiannual or annual basis to understand changes in the price sensitivity for existing offerings.
ProfitWell, through its Price Intelligently service, has emerged as a SaaS industry authority on pricing. Price Intelligently relies on a proprietary approach based on willingness-to-pay research (more on this below).
WTP is an invaluable concept and can be efficiently deployed in pricing research. Willingness-to-pay research can quickly yield tactical, tangible data on pricing and customer profiles that can be put into practice.
While these factors make WTP a key element in your pricing strategy arsenal, there are important drawbacks to consider when employing it in your price setting program.
First, it’s important to have a shared understanding of how willingness-to-pay information is collected. Typically, collecting WTP data involves customer research.
Customer research can take many forms. Some of the key elements that might guide your analysis include:
The decisions you’re seeking to make based on the customer research
The specific research questions that you need answered to support those decisions
The nature of the product or service you’re seeking customer insights on
The timeline you have to collect the research
The budget and other resources you have available to support the research
The skills you and/or your team members have to execute different types of research
The quantity and quality of customer insights you require
The customer segments you’re seeking to research
Whether you’re targeting existing customers, prospective customers, or a mix
The breadth and depth of the customer base you’re targeting according to factors such as customer size, industry, geography, product offering, buyer role, persona and demographics
The feasibility of accessing those customers
All of these factors will inform your selected approaches for customer research as well as the methodologies you choose to design that research. The goals and constraints of your customer research effort will help you determine whether you should use internal research, external research (with a third-party provider), or a hybrid approach, and whether you should employ quantitative methods, qualitative methods, or a mix of the two.
In our Ultimate Guide to Conducting SaaS Pricing Research, we touched on these trade-offs, constraints, approaches and methods in the general context of conducting SaaS pricing research. The following are the key methodologies that can be used to collect customer research, including willingness-to-pay insights:
Win/loss research: Pricing is typically only one element of win/loss research, and win/loss outputs are directed by client feedback. However, customer signals on pricing can surface during win/loss conversations, and those signals can be extrapolated into insight on willingness to pay. These insights are commonly anecdotal and qualitative in nature.
Sales-sourced qualitative feedback: Interviews, Slack conversations, email feedback and/or CRM notes from sellers are another source of anecdotal and qualitative feedback on pricing that can be extrapolated into willingness-to-pay insights. These insights are inherently biased, as they are generated from the seller and/or another internal colleague, and not directly from the customer.
Conversational intelligence tools: Tools like Gong, Chorus.ai and Outreach capture customer conversations that can be mined by pricing teams for insights on customer willingness to pay. Like win/loss research, these insights are customer-specific and anecdotal. Willingness-to-pay insights from these sources must be inferred from customer feedback.
Customer interviews or focus groups: In-person or virtual interviews, typically 30 minutes to one hour in length, can be utilized to directly capture insights on willingness to pay. Interviews can be structured to target existing customers and/or prospective customers. Interviews typically are conversational and cover a range of topics that are quantitative and qualitative in nature. Interviews provide the benefit of capturing deep contextual customer insights but are typically less well suited to capturing a statistically representative sample of customer feedback. Interviews are a one-on-one research methodology, but the same core tenets of willingness-to-pay research for interviews can be adapted to in-person or virtual customer focus groups.
Asynchronous customer feedback: The types of willingness-to-pay insights captured from interviews and focus groups can also be collected through asynchronous methods, such as Slack or other virtual messaging conversations, email, social media interactions, or video recordings. These methods are similarly well suited to capturing qualitative contextual feedback, and they can support a greater volume of inputs than live interviews or focus groups, since the interviewer does not need to be present to gather the insights. However, the feedback captured is often shallower, since the interviewer cannot probe and adjust the conversation in real time as in a live interview.
Social listening: Social listening on Twitter, Reddit and similar sites, as well as software buyer review sites, can provide customer feedback on willingness to pay. Social listening is a passive method of willingness-to-pay data collection, and thus is best suited as a complementary strategy to the other methods outlined in this section.
Micro-surveys: Product analytics and survey software tools can be used to design on-website and in-app micro-surveys to collect pricing feedback. This can include willingness-to-pay insights. These tools are helpful at gathering in-the-moment insights at scale. However, you are unable to control the profile and distribution of customers that provide feedback via these tools. As such, the insights collected may or may not be representative of your ideal customers’ willingness to pay. Additionally, these tools generally collect only high-level quantitative feedback, not contextual qualitative buyer insights.
Online surveys: Online surveys are purpose-built surveys in Qualtrics or other survey platforms, typically 15 to 20 minutes in length, that are distributed to your existing and/or prospective customers. Online surveys are custom-built to the goals of a given willingness-to-pay research effort and distributed to an exact demographic of customers that you specify, using your own customer panel or a third-party panel. Online surveys capture primarily quantitative feedback to close-ended questions as well as a small number of open-ended qualitative responses per survey. There are multiple standard survey techniques that are used for fielding willingness-to-pay research via online surveys. The most widely discussed online survey technique for researching SaaS willingness to pay is the Van Westendorp Price Sensitivity Meter.
What, or who, is Van Westendorp?
Peter Van Westendorp was a Dutch economist who introduced the Van Westendorp Price Sensitivity Meter (PSM) in 1976.
The crux of the methodology is asking the following four questions in customer research. Any of the methodologies outlined previously can be used to ask them, but most practitioners field Van Westendorp research via an online survey, segmented across defined customer groups.
At what price would you consider the product to be priced so low that you would question the quality? (Too cheap)
At what price would you consider the product to be a bargain, a great buy for the money? (Not expensive)
At what price would you consider the product starting to get expensive, but not out of the question of purchasing? (Not a bargain)
At what price would you consider the price of the product to be too expensive and would not consider buying at that price? (Too expensive)
You then plot the cumulative distribution of responses to each of those questions on a graph and calculate the following:
Calculate the intersection of “too cheap” and “not a bargain” to determine the minimum acceptable price.
Calculate the intersection of “not expensive” and “too expensive” to determine the maximum acceptable price.
Calculate the intersection of “too cheap” and “too expensive” to determine the optimal price point.
The result looks something like the graph below, which is an actual example output from a recent study we completed using this methodology. The methodology provides a minimum acceptable price (also called the “point of marginal cheapness”), a maximum acceptable price (“point of marginal expensiveness”), and the optimal price point (OPP). Typically, those using this methodology view the range of prices between the minimum and maximum price as the acceptable price range for the product or service.
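To make the mechanics concrete, here is a minimal sketch in plain Python of how the three intersection points can be computed from raw survey responses. The function names, the grid-scan approach, and the ten sample responses are our own illustration, not part of any standard library or of ProfitWell's tooling:

```python
def share_at_least(answers, price):
    """Share of respondents whose stated price is >= the candidate price."""
    return sum(a >= price for a in answers) / len(answers)

def share_at_most(answers, price):
    """Share of respondents whose stated price is <= the candidate price."""
    return sum(a <= price for a in answers) / len(answers)

def van_westendorp(too_cheap, cheap, expensive, too_expensive, steps=500):
    """Scan a price grid and locate the PMC, OPP and PME curve crossings."""
    lo, hi = min(too_cheap), max(too_expensive)
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]

    def crossing(falling, rising):
        # The falling curve starts above the rising one; return the first
        # grid price where it no longer is.
        for p in grid:
            if falling(p) <= rising(p):
                return p
        return hi

    return {
        # minimum acceptable price (PMC): "too cheap" x "not a bargain"
        "pmc": crossing(lambda p: share_at_least(too_cheap, p),
                        lambda p: share_at_most(expensive, p)),
        # optimal price point (OPP): "too cheap" x "too expensive"
        "opp": crossing(lambda p: share_at_least(too_cheap, p),
                        lambda p: share_at_most(too_expensive, p)),
        # maximum acceptable price (PME): "not expensive" x "too expensive"
        "pme": crossing(lambda p: share_at_least(cheap, p),
                        lambda p: share_at_most(too_expensive, p)),
    }

# Hypothetical answers from ten respondents to the four questions
points = van_westendorp(
    too_cheap=[15, 20, 22, 25, 28, 30, 32, 35, 38, 40],
    cheap=[25, 30, 32, 35, 38, 40, 42, 45, 48, 50],
    expensive=[35, 40, 42, 45, 48, 50, 55, 58, 60, 65],
    too_expensive=[50, 55, 58, 60, 65, 70, 75, 80, 85, 90],
)
```

With real data you would plot all four cumulative curves and read the crossings visually; this sketch simply reports the acceptable price range (PMC to PME) and the optimal point between them.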
This approach is the foundation of ProfitWell’s Price Intelligently, which many consider the authority on SaaS pricing. ProfitWell is not alone, either; most in the SaaS industry espouse Van Westendorp as the go-to methodology for determining pricing. This article, for example, suggests that Van Westendorp has become popular among Silicon Valley-based companies and startups.
Van Westendorp has notable benefits. It’s easy to understand and simple to deploy. It provides a clear “answer.” It’s repeatable. And it can be combined with other survey approaches to gather insights on pricing models and packaging, by asking questions about the relative preference of features associated with the products being tested. We use Van Westendorp in the same way that ProfitWell and others do.
While Van Westendorp is a critical part of a product marketer or pricing strategist’s arsenal, there are risks in solely relying on Van Westendorp to inform your product pricing.
The drawbacks of relying solely on WTP to set SaaS pricing
There are overarching drawbacks to relying solely on Van Westendorp, as well as tactical issues that arise in the implementation of the methodology.
In a recent post, Dan Balcauski walks us through the Value Cascade, which he adapted from Thomas Nagle’s book “The Strategy and Tactics of Pricing.” Borrowing this image from Dan to illustrate the value cascade:
Here’s where the big-picture issue with Van Westendorp comes in: There’s a tendency to jump straight into Van Westendorp to set pricing for a SaaS offering. You can run a study in a few weeks, crunch the results, and have answers on what your pricing should be. That approach, however, bypasses the quantification of value, a process that Balcauski defines in detail in his posts and that includes assessing competitive alternatives and the value differentiation your product provides. If value is incorrectly defined, or worse, not defined at all, that skews the implementation of your willingness-to-pay research, which in turn skews the results of your study.
In practice, there are also several tactical issues that crop up with Van Westendorp studies. In our experience, these have included:
Inability to establish a shared understanding of the offering described: This is probably the single biggest challenge. Technology products and services are complex offerings with many features, use cases, and different vectors of value for different customers. A survey tool provides a limited window in which to describe these products to customers and ensure a shared understanding of what is being assessed. ProfitWell recommends providing a summary with “about the level of detail as is outlined on your pricing page,” which seems about right based on our experience. Even so, a survey rarely gives the respondent enough time or detail to fully understand what is being presented, and it typically does not generate enough information from which to build pricing estimates, particularly for new and/or truly innovative offerings.
Lack of qualitative insights: This is one of the biggest challenges with survey methods overall. Yes, you can capture some limited open-response insights, but overall surveys are a quantitative instrument, designed for capturing information at volume. You can capture information on pricing models and packaging models through close-ended survey questions, but it can be challenging to get true insights on the “why” behind the information you gather in a survey.
Lack of respondent context: You can and should capture basic demographics on both the individuals who take your survey as well as the company they work for, assuming it’s a B2B survey. You should also target and segment the survey to specific customer personas. In most surveys, however, the context you gather on respondents stops here. You typically don’t get to understand the intrinsic factors such as bad past vendor experiences, individual biases, organizational idiosyncrasies, and other factors that might be impacting the responses.
A need to “fence” the survey questions: This is a big one with Van Westendorp research. ProfitWell recommends asking the Van Westendorp questions in a completely open-ended manner: provide a sliding scale or open-text box for each pricing question and see what comes back from respondents. We can see the merits of this in terms of minimizing bias, but in practical terms, every time we have executed a Van Westendorp study we have had to help respondents understand the context by providing a minimum and maximum range, particularly when testing new or innovative concepts. What do we mean? Instead of letting respondents provide a pricing estimate anywhere between $1 and $1,000,000, we create a reference range, such as $500 to $5,000. Fencing the data can introduce bias into the results, but if you don’t fence the data, you may get results that aren’t usable due to the diffusion of responses.
Gathering enough data to manage outliers: This goes hand in hand with data fencing. Ideally, you’d gather enough data that you don’t need to fence the responses: a greater volume of responses naturally produces a distribution that more closely reflects the target population you’re analyzing. In most practical scenarios, however, fielding enough surveys to achieve that kind of statistical validity is either impossible, given limited access to customers, or cost-prohibitive. This is particularly true when surveying B2B audiences.
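One practical middle ground between fencing the instrument up front and fielding an impractically large sample is to clean stray responses after collection. Below is a minimal sketch, assuming Python, that flags outliers with Tukey's IQR fences; the function name and the sample data are hypothetical:

```python
import statistics

def iqr_fence(responses, k=1.5):
    """Split responses into kept vs. flagged using Tukey's fences:
    anything outside (Q1 - k*IQR, Q3 + k*IQR) is flagged as an outlier."""
    q1, _, q3 = statistics.quantiles(responses, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    kept = [r for r in responses if lo <= r <= hi]
    flagged = [r for r in responses if r < lo or r > hi]
    return kept, flagged

# Hypothetical "too expensive" answers; one respondent typed $500
kept, flagged = iqr_fence([40, 45, 48, 50, 52, 55, 60, 500])
```

This doesn't replace a sensible reference range, but it keeps a single stray answer from dragging the cumulative curves, which matters most at the small sample sizes typical of B2B fielding.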
Managing respondent quality: Surveys naturally yield a different level of respondent quality, attention and interaction than interview-based methods. Surveying your existing customers is one thing, and panel companies have detailed safeguards in place to manage respondent quality. Even so, there’s an inherent difference in the quality and depth of output between an interview, where a respondent spends an hour in a one-on-one, face-to-face conversation, and a survey, which is typically completed in 10 to 20 minutes, often on a mobile device.
Managing bias: Bias will be part of any method of customer research and must be controlled and managed. In interviews, biases can be identified and inferred through conversation, and interviewers are often trained to detect and probe on points of bias. In surveys, where qualitative data isn’t typically captured, biases cannot easily be detected or managed. Bias can also be introduced when Van Westendorp approaches rely exclusively or primarily on inputs from existing customers, who are inherently biased toward price perceptions that prospective customers would not share.
False precision: Proper execution of the Van Westendorp model yields an optimal price point as well as a range of acceptable prices. The optimal price point suggests a level of precision that can be misleading, and this is compounded by other issues such as small or biased respondent samples. SaaS companies should be measured in translating the outputs of Van Westendorp into actual list prices, and should know that prices will still need to be validated in practice with customers. Van Westendorp should be seen as providing guardrails, not final answers.
A better framework for implementing SaaS willingness-to-pay research
To reiterate our earlier message, the first step in designing a better pricing research framework is to ensure you aren’t putting the cart before the horse. Meaning what? Make sure you aren’t diving directly into willingness to pay without defining and measuring value, using a process like the one Balcauski outlines in his blog.
This process involves careful consideration of competitive alternatives during the “calculating economic value” stage, which is where a platform like XaaS Pricing can help. This process also involves using the types of customer research outlined in this post to define and quantify value. A deep dive on that topic is beyond the scope of this post, but research isn’t just for price setting — don’t discount the importance of customer research in defining value. For that, we love customer interviews.
Let’s assume you’ve moved through those steps of defining value, including looking at competitors. Let’s also assume you’ve done the blocking and tackling of defining goals and there is a specific outcome for the pricing exercise you’re working on. Now you’ve arrived at the stage of establishing willingness to pay, and you have to select the appropriate research framework for gathering customer data.
We aren’t going to offer one-size-fits-all frameworks or tools on the exact methodology you should use once you get to this stage. It’s going to depend on the specifics of your situation, which can include both externalities as well as real internal factors that may limit your approach (for example, maybe you just don’t know how to program and run a survey, and don’t have the budget to hire someone to do it). But we can offer a few rules of thumb that have served us well when doing this type of research for the past 10-plus years:
Establish a hybrid approach wherever possible: If budgets and other considerations allow, combining data gathering methods produces the most powerful results. Our favorite construct is a model where interviews are used to refine, refute and finalize hypotheses, as well as understand context and customer mindset. Surveys are designed to leverage those learnings to capture broad and defensible quantitative data from a large sample of the target customer population.
Start qualitative, finish quantitative: As alluded to above, we always like to start with more qualitative and contextual methods such as interviews, and then use those methods to inform survey fielding. Qualitative approaches can help define a question or topic, and quantitative methods can then provide more exactitude on key elements of that topic. This approach allows us to get smart before getting specific. Also, sometimes we find we’ve gathered all the intelligence we need after the qualitative phase and can save the time and effort of undertaking a survey.
Use the tools you have for their complementary strengths: Whereas many use surveys to gather Van Westendorp data on actual price points, we prefer surveys for gathering representative customer insights on things like preferred packaging content and pricing models. These are areas that can be challenging to cover holistically in an interview and are prone to individual response bias. Similarly, we like to use interviews to understand contextually how customers think about value and price versus what they think are fair actual prices. Understanding how customers evaluate alternative solutions and compare your offering to those alternatives can provide greater insight to inform price setting than a particular customer’s opinion on whether a price should be $5 or $7.
Always look to simplify: Like we touched on before, these are complex topics involving complex product and service offerings. When designing customer research, it’s best to find ways to simplify. For example, if you’re using a survey, consider providing a price page screenshot or a product demo video rather than a large text description to illustrate the product you’re seeking feedback on.
Approach everything open-ended: This is why we like qualitative interviews, and why we agree with ProfitWell about question design for Van Westendorp surveys. Open-ended approaches limit opportunities for your biases to enter the research process, which can shade the results of your research efforts. The challenge is ensuring that the open-ended approaches are truly open-ended, and that you have a means through which to collect high-quality data, as well as enough data to run the analyses you’re looking to run. This is where well-trained, experienced interviewers come in handy, and where well-designed questionnaires and survey guides can make or break your research process. More on that in a future blog post.
Create a process, not a project: Many approach customer pricing research as a one-time effort. They do the research, figure out a market-validated set of prices, and then move forward. Those that outperform establish systems for regular customer pricing research. There are endless ways to architect a recurring program; perhaps you conduct a large survey exercise once per year plus quarterly interviews, or maybe you conduct monthly interviews and no survey, or only quarterly surveys. The options should be reverse engineered from your needs and goals. But having a regular stream of customer pricing research insights allows you to iterate in a more agile manner, as well as identify topics that warrant deeper research investigation.