‘Happy vs. Crappy’ Data to Winning Pricing Strategies | The Data Ninjas (Pricing I/O)

Street Pricing with Marcos Rivera
Episode 40 | March 11, 2025 | 00:46:47

Show Notes

Is your pricing metric actually tracking what delivers the most value to customers—and could a misalignment be causing hidden churn? 
 
Join me for a data-driven conversation with Pricing I/O data ninjas Sam Blumenfeld, Matthew Duncan, Kyle Eustaquio, Nishita Katere, Jack LoGrasso and Brenden Murden. We demystify what "good data" actually means for SaaS pricing decisions. The team unpacks how to transform "crappy" data into actionable insights by unifying fragmented datasets, tracking feature usage patterns, and analyzing lost deals.  
 
You’ll hear practical survey guidance with specific recommendations on sample sizes, survey length (keep it under 15 minutes!), and when to use methodologies like conjoint analysis versus Van Westendorp. For SaaS leaders struggling with packaging decisions, the experts explain how to identify value patterns in usage data and leverage AI for real-time competitive analysis while maintaining human oversight. This episode delivers concrete frameworks for moving beyond pricing guesswork to data-driven strategy. 

In this episode: 

 
(00:00) Marcos introduces six data leads from Pricing I/O who collectively analyze massive amounts of pricing data. Each team member shares their background, from strategic finance and investment analysis to market research and biotech. 

(04:05) Navigating the data landscape - “Happy vs. Crappy” - Critical aspects of data quality for pricing analysis, emphasizing the importance of unified data with consistent account IDs, tracking both absolute and relative feature usage, and collecting lost deal information to understand what products aren't selling well. 

(19:09) Tips on creating effective pricing surveys, including keeping surveys under 10-15 minutes to avoid respondent fatigue, using neutral language in questions, and adjusting survey methodology based on expected sample size. 

(29:11) Examining different pricing survey methodologies, focusing on conjoint analysis for determining feature value.  

(36:00) Benefits of customer surveys versus expert surveys: how customer surveys force companies to think from the customer's perspective, while expert surveys draw deeper, more invested responses.
 

(40:30) “Get out of my way, AI” - How AI is transforming pricing analytics by accelerating insights and putting data at the forefront of decision-making. 

(44:44) AI applications for competitive data analysis, real-time price adjustments, and processing large datasets, while addressing data privacy concerns when utilizing AI systems for client information. 

(45:55) Favorite current jams from the team!  

Welcome to Street Pricing, the only show where proven SaaS (Software as a Service) leaders share their mindset and mistakes in pricing so we can all stop guessing and start growing. Street Pricing is hosted by Pricing I/O CEO and Pricing Coach Marcos Rivera, sought-after slayer of bad pricing. With 20 years of pricing expertise, he has helped price over 200 SaaS products and coached over 100 SaaS CEOs and counting! From the streets of the Bronx to CEO, Marcos wants to take the guessing out of pricing.

Resources: 

Sam Blumenfeld: https://www.linkedin.com/in/sam-blumenfeld/ 

Matthew Duncan: https://www.linkedin.com/in/matthew-david-duncan/ 

Kyle Eustaquio: https://www.linkedin.com/in/keust/ 

Nishita Katere: https://www.linkedin.com/in/nishitakatere/ 

Jack LoGrasso: https://www.linkedin.com/in/jack-lograsso-686487173/ 

Brenden Murden: https://www.linkedin.com/in/brenden-murden/ 

Company: https://www.pricingio.com/about/ 

Marcos Rivera LinkedIn  

Marcos Rivera X  

Pricing I/O 
 

Book: Street Pricing 
 

Email Street Pricing for a consultation


Episode Transcript

[00:00:00] Speaker A: With customer surveys, I've tended to realize that it forces the client to really think on exactly how they're describing their features or how complicated their product is. And oftentimes, although internally they might know their product in really thorough ways, they have to reframe that and put themselves in the customer's brain to understand exactly what they want from their customers.

[00:00:19] Speaker B: Yo, mic check. What's up everybody? You're listening to the Street Pricing Podcast, the only show where proven SaaS leaders share their mindset and mistakes in pricing so we can all stop guessing and start growing. Enjoy, subscribe and tell a friend. Now let's break it down with your host and sought-after slayer of bad pricing, Marcos Rivera.

[00:00:42] Speaker C: What's up and welcome to the Street Pricing Podcast. I'm Marcos Rivera, author, founder and CEO of Pricing I/O, and today I have an extra special episode. I am bringing a massive amount of brain power. This is going to be epic, because I don't think anyone has ever done this before with this many folks that have seen so much pricing data. Today I have the data leads from Pricing I/O. I've got six of them joining me today. It's going to be hot. We're going to share a lot of great stuff, insights, and go really, really deep. So, team, are you guys ready to join the show and have a great time?

[00:01:18] Speaker D: Sounds good.

[00:01:19] Speaker C: All right, well, let's do this real quick. Let's make sure that everybody knows who you are. We're going to do a quick round of intros, and when I say quick, we'll keep it short: name, city and your job prior to joining Pricing I/O, and then we'll just roll through it that way. Okay, so let's start with the first one here. Nishita, why don't you go first?

[00:01:37] Speaker A: Yeah. Hi everyone. My name is Nishita Katere and I'm one of the associate consultants here at Pricing I/O, and I'm based in San Jose, California. Prior to Pricing I/O, I worked in the consulting industry as a strategic finance analyst, working with Fortune 500 companies to help them manage their cash flow, track expenses, and forecast.

[00:01:57] Speaker C: Super, super. I actually started in finance. That's why I love that background. Sam, why don't you go next?

[00:02:01] Speaker E: Hello, everyone. I'm Sam. I'm based in Palm Beach Gardens, Florida. And before joining Pricing I/O, I was an investment analyst covering equities in the technology sector.

[00:02:11] Speaker C: Again, the numbers have to be there, right? Thank you, Sam, for doing that. Kyle, you're next.

[00:02:15] Speaker F: Hey everyone, my name is Kyle Eustaquio. I am located in Southern California, but grew up around LA. I have over five years in data, market research and consulting, spent at the largest pharmaceutical data company in the nation. Excited to be here.

[00:02:32] Speaker C: Excellent, data chops, my friend. Thank you, Kyle. Matt, why don't you go next?

[00:02:36] Speaker D: I'm Matt. I am located in San Diego, California. I've been with Pricing I/O almost five years now. But before that, I had a little bit less traditional of a background: I was in biotech.

[00:02:46] Speaker C: Excellent. And lots of data in biotech, absolutely. Jack, why don't you go?
[00:02:50] Speaker G: Hey everyone. Jack LoGrasso, based in San Diego also. I've been working as a pricing consultant here for a little over two years. Before that I worked in the insurance industry in an actuarial role, and spent about a year before that working in actuarial consulting. So I've pretty much spent my entire career working with data, numbers and math, and I don't see that changing anytime soon. Excited to talk through these things with the team. Thanks for having me.

[00:03:12] Speaker C: Not anytime soon. No, absolutely, Jack. Thank you for that one. And Brenden, why don't you take us home?

[00:03:17] Speaker H: Hi, pleasure to see everyone. My name is Brenden Murden. I'm an associate consultant here at Pricing I/O, based out of Royal Oak, Michigan, so unfortunately not as sunny as Florida or San Diego, but the weather's relatively nice right now. Previously I was a consultant at Decker Carlisle, focused mainly within the automotive and heavy equipment industry, working solely on customer sentiment and designing surveys to understand market perceptions around Fortune 500 companies' products.

[00:03:42] Speaker C: Super, super. So lots of customer sentiment, a huge part of pricing. We're going to be getting to that a little bit later. So team, thank you so much for the quick intros. Joining me today, at this stage you should probably be analyzing something, but you're not. You're joining us. You're giving us a lot of brain power and knowledge. So let's get into the episode.

I want to give a quick roadmap for everybody before we get started. This show is based on the book Street Pricing. But not today. Today I'm actually going to take a left turn. I'm going to change things up a bit. We've got six really high-powered folks here, and I am going to move us into a different category of just straight-up riffing and digging into three big topics. All right. And these are topics that, just listening out there in the landscape, talking to SaaS leaders and operators, I think have come up quite a bit. Everyone always talks about, yeah, you need some good data to do pricing, or you should have data. But no one ever really gets behind that. What does that mean? What does good data even look like? What am I missing? So today I want to get really practical around that and help people navigate the landscape of data: how to get better at it, how to avoid some of the biggest mistakes that you probably see every day, in an attempt to get them better and get them moving away from that guesswork.

So we're going to have three big topics today. Topic number one, I'm going to call happy versus crappy. And this is, yes, about your data. And if you have crappy data, don't feel embarrassed. It's okay. Most companies have crappy data somewhere. So we're going to talk a little bit about how to get your data, you know, from crappy to happy, and we'll unpack that together. Then I want to get into some of that sentiment: surveys. I call this part "Survey Says." Yes, everyone talks about these tools, these mythical surveys that are supposed to answer every question under the sun. But what are they? What questions really matter? I want to unpack that a little bit. And then lastly, "Get out of my way, AI." What I mean by that is AI is coming. It's changing the way we do a lot of things: buying, selling, and yes, analyzing data. So how do we see that impacting the data world when it comes to pricing, packaging, monetizing? All right, so those are the big three. We're going to start with the first one, from crappy to happy.
So, team, let me just set up one question here, because I think a lot of folks may assume that they have decent data but oftentimes could be missing something super important. What have you seen in your data sets as a missing, critical piece of data that just makes pricing so damn hard? What is it around data that folks could be missing that's holding them back?

[00:06:13] Speaker E: I think in the beginning, when doing the data collection, if the data isn't mapped well, specifically with an account ID, I always find that especially challenging. When you try to map demographics over from, let's just say, usage data to account data, some of them don't map well. For instance, if it's done by name and there are slight name discrepancies, it's always more difficult when you're seeing only 60 to 70% of the demographics tie over. So I would start with mappable data.

[00:06:44] Speaker G: Yeah, I think that's a good point. With data in general, it's not necessarily just collecting it, but making sure that it's unfragmented. A lot of SaaS companies collect a ton of data, they have all these metrics, but don't really look at it in the context of pricing or in a unified way. You have your opportunity data in Salesforce, account data in some database, usage data in Pendo. And it's hard to link: usually we'll see client data sets with, you know, six different account IDs that don't match, like Sam was saying. If those things don't match and we can't tie them together, it's really hard to draw insights. That's one of the biggest things with pricing insights we draw from these different data sets: it's not just one individual field that drives everything, it's grouping everything together. A good example of this is correlating product usage with revenue expansion and retention. Are customers who are using the product getting more out of it? Are they paying for features they're not using? Is this leading to churn if they're not using the product enough? These are all things that are super important, beyond just collecting: actually unifying the data sets.

[00:07:46] Speaker C: I love that. I love that. So you can have a lot of dots, but not a way to connect them. And that makes it really tough. That could be one of the reasons why a company would say, we've got tons of data. Sure. But if you don't have a way to unify them to tell a story or see a thread, that can be super limiting. I agree, man. I agree. What else?

[00:08:04] Speaker D: One thing I would say, and I absolutely agree with all of that, is that it's unfortunately very common that a lot of companies do have a lot of data, but then it's impossible to compare the usage data to how much they're paying, because we just don't have the same ID to tie them together. Maybe we have a name, but if they're different names, like Company Name versus Company Name Inc, we can try to do a little of that, but it won't be perfect. So if you have a consistent ID, that's huge. One thing I also wanted to say, and this is a little bit more common with smaller companies, is that a lot of companies don't track usage data very well at all. If there is a company out there that's not doing that, I would say it's especially important to track usage data for whatever your main pricing metric is, whatever you're charging for. If there's going to be something in the contract that says you can use up to 10,000 users, you should probably be tracking how many user accounts they actually have set up. It's more common than you would expect that they don't have that information. And of course, after that, it's also important to track things that aren't necessarily directly a pricing metric but might be potential signs of future expansion or potential churn, or even just something you might want to think about charging for in the future. But if there's one piece of usage data that really needs to be tracked, it's the main pricing metric or any secondary pricing metrics.
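To make the team's unification point concrete, here is a minimal sketch of joining usage data to account data on a shared account ID and checking the match rate. It is illustrative only; the tables, column names and figures are hypothetical, not from any client dataset.

```python
import pandas as pd

# Hypothetical extracts; all values are made up.
accounts = pd.DataFrame({
    "account_id": [101, 102, 103, 104],
    "account_name": ["Acme Inc", "Beta LLC", "Gamma Co", "Delta Corp"],
    "arr": [12000, 48000, 30000, 9000],
})
usage = pd.DataFrame({
    "account_id": [101, 102, 105],  # 105 has no matching account record
    "monthly_active_users": [42, 310, 17],
})

# Join on the shared account_id rather than on free-text names, which
# drift ("Acme" vs. "Acme Inc") and silently drop rows.
merged = accounts.merge(usage, on="account_id", how="left")

# Quantify how much of the book of business actually mapped, i.e. the
# "only 60 to 70% of the demographics tie over" problem Sam describes.
match_rate = merged["monthly_active_users"].notna().mean()
print(f"Accounts with usage attached: {match_rate:.0%}")
```

The same check works in the other direction, flagging usage rows that have no owning account.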
[00:09:23] Speaker C: So are you saying people are charging for something and they're not even tracking it?

[00:09:27] Speaker D: You would be surprised how common that is.

[00:09:29] Speaker C: Crazy talk, crazy talk. So you're charging for something that you can't see. Super, super dangerous. Let's talk about some examples of usage. You mentioned usage. Jack, Sam, I'm hearing it kind of creep out in these little ways, that you need to understand what they're doing in the software. Like what type of usage? Can you give me an example? Because I think people would say, well, it depends on the software, but what are some basic things they should be looking at in general?

[00:09:52] Speaker E: Specifically, when we look at feature usage, we can take a look at the usage of certain products or usage of features. If we can see feature-level usage, it makes us understand a lot better what people are using versus not using. And with that, it's a lot easier to tailor packages around what would commonly be used together. For instance, those usage correlations: is company A using feature XYZ, and what features are actually used together? Once you can see what's used together, you can see strong bundling candidates.

[00:10:29] Speaker C: Makes a lot of sense. What do you guys think? What else besides feature usage? I think that's a good one.

[00:10:35] Speaker G: Yeah. I think what Matt said around the pricing metric and, more importantly, any secondary pricing metrics. Often we'll get data that just has one pricing metric, and it's good to see different options for how we want to charge in the future. This can really help with value-based pricing. A lot of what we look at, not only with usage data, is whether customers are able to be successful with the current packaging. Are they able to get exactly what they're looking for from the package? Are they actually using the product as intended? And on the other hand, with pricing metrics, does the pricing metric scale with the value that you're providing? It's that win-win nature of value-based pricing: are your customers getting enough value, and are you getting enough value from your customers? Those are all super important things to track.

[00:11:16] Speaker C: Ooh, ooh, value. That's tricky to measure any way you slice it. So how do you see that? Say they're using feature A, the one Sam was talking about, right? They're using feature A and you're seeing a pattern: these companies use it more than those companies, or whatever. How do you then correlate that to value with the data? I think this is where a lot of companies trip up. How do they correlate that?

[00:11:38] Speaker G: Correlation is super important with any success metric you can find. What we often look for in data is any metric that tells us: are your customers actually being successful with the product? We can use this to see if the pricing metric makes sense, if it lends itself well, if it correlates to the actual success metric that customers are getting. A good example: for a lot of B2C customers, we'll see that revenue is a good success metric. Are your clients getting enough value from the product? Are they getting enough money? Is their revenue growing? Those are things we can look at: what is value most correlated to, and using that as the pricing metric usually makes the most sense for customers and usually scales well. As they grow and mature, you end up getting more money and expansion opportunities, upsell opportunities, from that.
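One simple way to test Jack's "does the metric scale with value" question is to correlate usage of a candidate pricing metric with a success metric such as net revenue retention. A minimal sketch with made-up numbers:

```python
import pandas as pd

# One row per account; both columns are illustrative, not real client data.
df = pd.DataFrame({
    "monthly_api_calls":     [120, 900, 40, 2500, 700, 60, 1800, 300],
    "net_revenue_retention": [0.92, 1.15, 0.80, 1.30, 1.08, 0.85, 1.22, 0.98],
})

# A strong positive correlation suggests the candidate metric scales with
# the value customers get; a weak or negative one suggests a mismatch
# worth investigating before you charge on it.
corr = df["monthly_api_calls"].corr(df["net_revenue_retention"])
print(f"Usage vs. NRR correlation: {corr:.2f}")
```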
[00:12:30] Speaker C: But is it realistic? Can you actually measure how your customers are growing and succeeding, specifically with usage?

[00:12:38] Speaker E: You can. We've talked about absolute usage, whether or not a client is using each feature. But if you have detailed usage data, you can see just how much usage each feature gets. So it's not necessarily enough to see, oh, a feature is being used. You can see value in relative usage in addition to absolute: what customers are using a lot, as opposed to just what they're using at all.

[00:13:06] Speaker C: That's actually really interesting. So it's not just whether they're using it or not, the binary piece, but also how much they're using it. That's the other piece too. And that could lead to decisions around packaging, where maybe we put some here in this plan and more in that plan, based on the profile of usage and customers. If you're able to connect the dots with the reference ID and you can see usage, say, for example, they call this API all the time, it could be really damaging if you don't include the API access, or the right amount of access, in the plan. It could actually hurt you in winning deals because you're missing something they really want. Or on the flip side, which is probably the more common thing, you're shoving all these features and things into a plan, and 80% of the stuff that particular customer is not going to use and doesn't even want, and therefore it looks expensive, not competitive, and they end up going somewhere else. Or if you do convince them to buy, they end up churning, right? Because it's like, I'm paying for this thing and I'm only using 20% of it, so I'm just going to go somewhere else. So: the reference ID, unifying your data, looking at usage patterns in terms of different groups, what they're using and how much they're using it, and being able to see where those pockets are different or the same. That way you can use the data to help make packaging decisions. I think that's very practical and a good way to use data.
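Sam's absolute-versus-relative distinction and the bundling idea can both be sketched in a few lines. Again, the feature names and numbers are hypothetical:

```python
import pandas as pd

# Illustrative account-by-feature usage matrix (events per month, made up).
usage = pd.DataFrame({
    "reporting":  [50, 0, 80, 10, 0, 60],
    "api_access": [200, 5, 150, 0, 0, 180],
    "sso":        [1, 0, 1, 0, 1, 1],
    "export":     [30, 2, 45, 5, 0, 40],
})

# Pairwise correlation across accounts: features that are consistently
# used together are candidate bundles, per Sam's usage-correlation point.
print(usage.corr().round(2))

# Relative usage: normalize each feature by its own maximum so a chatty,
# high-volume feature does not drown out a low-volume but universal one.
relative = usage / usage.max()
print(relative.mean().sort_values(ascending=False))
```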
But if I had to think about all of the data sets that you guys look at, you probably see millions and millions of rows of data combined. In all your experience, my guess is that besides the reference ID, there's probably some common information that companies aren't grabbing today. If you were sitting down with some data or IT lead, some architect, and they asked, if there's one thing I should make sure I'm collecting, and it could be more than one, that's fine, what would it be? What's the most important thing you've got to have in that data set, if you were going to tell that IT leader?

[00:15:13] Speaker D: I don't know about most important, but one thing that also isn't always tracked, and does require buy-in from a lot of the sales team, is lost deals. We see the won deals and what people are buying. But I think a lot of the time it can be just as important to see what people aren't buying: what are we trying to sell and potentially selling unsuccessfully? Even simple things like, what are our win rates for any specific product that we sell? A lot of the time, if companies are tracking lost deals at all, it's very inconsistent. But if there's a consistent way of tracking the lost deals, you can get a much better idea of what products are performing well, learn more about how that changes over time, and get really good feedback on any potential product, packaging and pricing changes.

[00:15:55] Speaker C: I love that.

[00:15:56] Speaker E: Additionally, if we look at consistency of tracking opportunities in general, we see discrepancies, usually over time, where in 2022 an opportunity was anything in the TAM, potentially, if the industry is small enough, but as you move into 2024 and onward, they're actually tracking opportunities as those that are more qualified, that have gone through a sales call. So sometimes we see win rates that are completely incompatible over time, and we can't really map the performance over time just because of how opportunities are being classified.

[00:16:36] Speaker C: That makes total sense. Kyle, is this making any sense to you, man? Do you believe in tracking loss data, like, you lost the deal? What's it to you? Why is it so important?

[00:16:46] Speaker F: I think that tells an equal amount of the story as the won deals. I agree with Matt on that. It's important to understand the perceptions and to understand behavior, because understanding behavior better allows you to adapt and make whatever changes you want to make in order to increase that value. So yeah, lost deals are absolutely valuable.

[00:17:08] Speaker C: I love it. I love it. I think that makes tons of sense. Guys, I'm going to wrap it up here. I listened to every single thing you said. I think all of that is good information, good advice. I'm going to summarize it into three key things, just what I heard: you've got to have a way to link all the data together, usually a common reference ID, something like that. You have to look deep into the usage and find different patterns. And lastly, look at the losses. Losses can get you a lot of information, a sort of window into what people like. And when I say loss, I don't mean just win-loss on the opportunity side, like Matt was saying, but maybe even churns too: why are they leaving? What's going on? So if you link, you look, and you look at those losses, I think there's a really rich amount of information you can use to improve your pricing and packaging. Right? That's what I heard. I think it makes tons of sense.
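For the "look at the losses" point, even a small but consistently coded opportunity log supports the win-rate cuts Matt describes. A minimal sketch; the field names and rows are hypothetical:

```python
import pandas as pd

# Illustrative opportunity log with a consistently coded stage field.
opps = pd.DataFrame({
    "product": ["Core", "Core", "Add-on", "Add-on", "Core", "Add-on"],
    "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
    "stage":   ["won", "lost", "lost", "won", "won", "lost"],
})

# Win rate by product and quarter. Only meaningful if "opportunity" is
# defined the same way over time (Sam's caveat about 2022 vs. 2024).
win_rates = (
    opps.assign(won=opps["stage"].eq("won"))
        .groupby(["product", "quarter"])["won"]
        .mean()
)
print(win_rates)
```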
I'm going to move us to the next theme and topic here, guys, and this gets a little bit more behind what your customers really want and what they don't want. I think Kyle was hinting at this understanding-behavior theme, which is the whole point of looking at data anyway. You're not looking at data just for, you know, shits and giggles. You actually want to understand behavior. And surveys are a really great way to do that, because you can ask specific types of questions, to a customer or a non-customer, and use that data to get behind different sentiments, perspectives, or things that could drive behavior or change the way they perceive something. For me, surveys are an absolutely great tool, but I don't think they're the magic pill everybody thinks they are, at least if you don't do them the right way. So I want to start with best practices in running a great survey, say on your customers. Who wants to give me some do's and don'ts?

[00:18:55] Speaker D: I think one big "do" is: don't make the survey too long. I know we want to know every single thing about every customer, but the reality is that surveys over 10 to 15 minutes are going to have a lot of respondents drop off. And even if they don't drop off, a lot of times by the second half of the survey they'll just be clicking buttons and we'll get much worse information. So one big thing is keep the survey short. And related to that: keep each question to a single topic. Don't try to learn too many things from any one question. If you need to know two things, it's better to ask two questions than to say, do you think this and this? Just ask one question each. I know that also means the survey has to be shorter. So go into any survey with a plan and understand what you want to know, so you can design a good survey for that.

[00:19:43] Speaker C: Short and sweet. Yeah, Kyle, what were you going to say?

[00:19:45] Speaker F: Yeah, I was actually going to say those two exact points, but Matt was just faster. On top of that: the verbiage, the wordiness. I understand wanting to be very specific, but at the same time, respondents don't want to read a paragraph every five seconds. So one of the things is, I guess, speed: how much time you're expecting the respondent to invest in answering these questions. You want to make their lives easier. They just want to answer questions and leave.

[00:20:19] Speaker C: Do you have a cutoff? After 5 minutes, 10 minutes, 15 minutes, people just tune out? Is there any rule of thumb around how long this thing should be, in terms of the time it takes to invest?

[00:20:31] Speaker F: I mean, it depends on the topic, it depends on the demographic of who you are surveying. If there's a strong incentive and you're going for working professionals, experts, you could go up to an hour of survey duration. But if it's, let's say, customers, probably a lot less. So probably looking at 20 to 30 minutes as the max for a customer's attention span.

[00:20:56] Speaker C: What do you guys think? 20, 30 minutes?

[00:20:59] Speaker D: I think that's probably twice as long as my absolute maximum.

[00:21:04] Speaker C: I would not go there. Hey, Kyle likes to fill out surveys, that's what it is. He just likes to express what he's thinking. Right. What do you guys think?
[00:21:11] Speaker H: Yeah, I'd say my rule of thumb has typically been a maximum of 15 minutes. I've seen historically, once you surpass the 15-minute range, there seems to be a lot of drop-off, a lot of survey fatigue, especially if we aren't offering an incentive, which, speaking of do's, is another thing to keep in mind. If you're offering an incentive and customers feel like they're being rewarded for taking the survey, there's a lot more opportunity to not only collect better data but have much higher response rates. If you aren't offering an incentive and there's a very small customer base you're sending the survey out to, some potential fears arise: hey, will we actually hit this response rate? Or even more so, will there be high drop-off? Will respondents not reach the part of the survey where we're collecting the key information, the key pricing data and key demographics we can split up and segment between?

[00:22:01] Speaker C: That makes a lot of sense. So I'm hearing 20, 30 minutes. I'm hearing 15. Do I hear 10? Anybody go down to 10 minutes?

[00:22:07] Speaker E: I'm typically in the 10-to-12-minute range, maximum. I've just seen, from my experience with clients, that right around that 12-minute mark is usually where I would cut it off.

[00:22:20] Speaker D: Yeah, I have another point on incentives too. They can definitely be great for increasing the response rate to a survey, but they don't necessarily make your respondents care about the survey any more. Even if they make it more likely that someone who starts a survey finishes it, that doesn't mean they're devoting their full attention to the questions. So length is still a concern even if we are offering an incentive. I would say the biggest thing that would allow people to take a longer survey is if they're more invested in the outcome. For example, if a company is surveying its own employees, who actually care a bit more about how the product performs, they would be a little more willing to take a longer survey than, say, prospects who are hearing about the product for the first time via the survey.

[00:23:03] Speaker G: That's a really good point with survey fatigue, because it's not just about completions. We do see drop-offs usually around that 15-minute mark. But what we can't track in the data is if after 10 minutes they're thinking, I'm tired of filling out this survey, I'm just going to click buttons. That's something you've got to keep in mind. It's hard to track with data, so it's easy to say, well, they completed the survey, it all worked out. But you've got to remember people are not going to be super engaged with this for 30 minutes on end. After about 10 minutes, I agree with Sam, you're going to start losing focus. You're just going to be, I want to be done with this.

[00:23:40] Speaker C: Got it. So that would kill the quality, right? What do you think, Brenden?

[00:23:43] Speaker H: Oh yeah. Also, one final "do": do know who you're actually sending the survey to. Oftentimes we'll work with customers that have been around for quite a long time and have a very expansive customer base, where we can reach a sufficient sample. Other times we don't. Maybe it's a startup that was founded back in 2022 with only 150 customers, and we may have to go out into the market to get additional, similar customers through other companies via prospects. Ultimately we need to understand: can we get a sufficient sample that's representative of your customer base? We have to take that into account through screening questions. So if we are going out through market panels and collecting customers who represent similar companies, are they actually the right customer that should be filling out the survey? Are they professional survey takers who are just going through the survey to get that incentive, but don't actually know the product? Having those little intricacies in the screening questions, to ensure we're reaching the right person, is critical to getting the most accurate data possible.
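Jack notes that button-clicking fatigue is invisible in completion counts alone. Two cheap quality checks, speeders and straight-liners, can flag suspect responses. This sketch uses hypothetical respondent data and an arbitrary time threshold:

```python
import pandas as pd

# Illustrative respondent-level data: five 1-to-7 scale questions plus
# completion time in minutes. All values are made up.
resp = pd.DataFrame({
    "q1": [4, 7, 5, 4, 4], "q2": [5, 7, 3, 4, 6], "q3": [4, 7, 6, 4, 5],
    "q4": [3, 7, 5, 4, 4], "q5": [5, 7, 4, 4, 6],
    "minutes": [11.2, 2.1, 9.5, 3.0, 12.4],
})
scale_cols = ["q1", "q2", "q3", "q4", "q5"]

# Speeders: finished implausibly fast (the threshold is a judgment call).
too_fast = resp["minutes"] < 4

# Straight-liners: gave the identical answer to every scale question.
straight_line = resp[scale_cols].nunique(axis=1) <= 1

flagged = resp[too_fast | straight_line]
print(f"Flagged {len(flagged)} of {len(resp)} respondents for review")
```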
[00:24:50] Speaker C: Team, I want to take a quick pause here to ask you for a huge favor that will mean a lot to me. Please review and share the show. Share it with your team, your friends, your peers. Not only will it help them stop the guesswork in pricing, but it'll also help you, and increase the chances that you'll take action and change for yourself. All right, much love. Now back to the show. You know, it's interesting, because you said something about sample, which I thought was a really key one. Let's debunk the myth here. I don't think you need 45,000 responses to get some value from a survey, right? What are you guys seeing as a sample size that could yield meaningful insight, without having to spend a million dollars on 45,000 responses? What do you say?

[00:25:33] Speaker E: I think that depends on whether you want to do additional data cuts. For example, if you want to segment the data, let's just say into SMB, mid-market or enterprise, then you would need an adequate sample for each of those segments. And I'll let you take over, Kyle. I know you probably have more on this.

[00:25:53] Speaker F: No, you're doing great. Yeah, essentially, if we bring in the statistics: for every single segment group you want, at least n = 30 is the rule of thumb, sometimes n = 25 if you're really pushing it. So across segments, that regularly totals over a hundred. And if you want the flexibility to have another layer of data cutting, that's when we start getting to hundreds, maybe even over a thousand. Every additional layer you want to consider is an exponential increase in how many responses you need.

[00:26:30] Speaker D: Yeah, I think about it in a similar way. One thing you should know going into designing a survey is how many people you think are going to respond to it. What we notice is that for customer surveys, response rates tend to be in the low single-digit percents, even with an incentive. Sometimes they're more engaged, sometimes less, but 1 to 3% is a good rule of thumb. For prospects, it's even lower. So design your survey around how many responses you expect to get. For example, if we are only expecting about 30 responses for a pricing survey, maybe we just ask how they feel about the current price. Do they think it's too high? Are they okay with it? Do they think it's a bargain? They're never going to tell you it's a bargain if they're customers, of course, because they know you might raise their price. But if you're expecting about 30 responses, I would probably go in that direction. Once you get to about 100 responses, you can start to think about a more numeric question, maybe a Van Westendorp, maybe a Gabor-Granger, or another type of numeric pricing question. And once you can get to 300 to 500 or more, that's when we can start to think about using a conjoint, which is really one of the best types of pricing surveys you can do. But getting that many responses is a real challenge.
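Matt's rules of thumb translate into simple invite math. Using the team's numbers (about 30 completes per segment, a 1 to 3% customer response rate), a back-of-envelope sketch:

```python
# Illustrative planning math using the rules of thumb from the episode.
segments = ["SMB", "Mid-market", "Enterprise"]
completes_per_segment = 30   # Kyle's n >= 30 per segment cut
response_rate = 0.02         # midpoint of Matt's 1-3% rule of thumb

needed = completes_per_segment * len(segments)
invites = needed / response_rate
print(f"Target completes: {needed}")
print(f"Invites required at {response_rate:.0%} response: {invites:,.0f}")
# 90 completes at 2% means roughly 4,500 invites, which is why the
# 300-500 responses a conjoint wants is such a challenge.
```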
[00:27:44] Speaker C: Yeah. And in that one answer you just dropped Van Westendorp, Gabor-Granger, conjoint. I love it when you talk nerdy to me. Okay, I love it. But give me the quick overview for everybody listening: what is Van Westendorp? Why is that such a popular technique?

[00:27:59] Speaker D: Well, I can't tell you why it's popular. I'm not a huge fan of it myself, but I do think it has its uses. This is where you ask customers specifically to write what they would think is a bargain price; what price would be so low that they might not trust the product, like, this is too good a deal, I don't trust it; what would be expensive but worth it; and what would be too expensive. I think this is often not a good fit for a customer survey specifically, because there's really no such thing as "this deal's too good, I don't trust it" if you're already a customer. You know what the product is at that point. So if you're going to use something like that, it's probably a better fit for things that are very easy to describe, and oftentimes in software that's not the case. It's also usually a better fit for people who aren't current customers.

[00:28:46] Speaker C: Ah. So no Van Westendorp on your current base, for those reasons. That makes a ton of sense.

[00:28:51] Speaker D: That's my personal opinion, anyway.
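For readers who want the mechanics behind Van Westendorp, the method crosses cumulative curves built from the four price questions. Below is a stripped-down sketch that finds the classic "optimal price point," where the share calling a price too cheap equals the share calling it too expensive. The responses are invented and only two of the four questions are used:

```python
import numpy as np

# Invented Van Westendorp answers in dollars per month.
too_cheap     = np.array([10, 15, 12, 8, 14, 11, 9, 13])
too_expensive = np.array([60, 80, 75, 55, 90, 70, 65, 85])

grid = np.arange(5, 100)
# Share who would call price p too cheap (falls as p rises) and the
# share who would call it too expensive (rises as p rises).
pct_cheap     = np.array([(too_cheap >= p).mean() for p in grid])
pct_expensive = np.array([(too_expensive <= p).mean() for p in grid])

# Optimal price point: where the two curves cross.
opp = grid[np.argmin(np.abs(pct_cheap - pct_expensive))]
print(f"Approximate optimal price point: ${opp}")
```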
[00:28:53] Speaker C: Well, I mean, again, it's pretty popular. But does anybody else feel the same way? Is Van Westendorp the Grand Puba king survey that we should always use, or are there instances where we should not use it, like Matt was saying? No? Everybody loves it. I think you just isolated yourself, Matt, from everybody. I can see them scooting away even though they're not in the same room. Right? Now, the other ones: Gabor-Granger, which is that famous ladder that people love to throw darts at as well. And then lastly conjoint, which, when it comes to survey 101, is usually one of the early things you talk about. Now, conjoint: I want somebody here to explain to me what the hell it is, and why does it matter for pricing and packaging?

[00:29:32] Speaker D: Yeah, so conjoint is a much more detailed form of pricing question that really includes both pricing and packaging. You can use conjoint for surveys that aren't related to pricing, but in the context of pricing, what you get out of a conjoint is: you have a list of features, and then a list of different levels of those features. If you think about good, better, best packaging, what do we have, say, in support? At the lowest level you have email. At the middle level you have email and phone. And at the top level you have a dedicated support rep. Those might be the different levels of a feature. You define all of your features, you define a couple of price points for the product as a whole, and then it asks a few randomized questions of the respondents. With a large enough sample size, you get a dollar value for each specific feature and how much it's worth. How much more is it worth to customers or potential customers to have a dedicated support rep compared to 24/7 phone support? What is the difference there? That's what conjoint helps with. And for features people don't care much about, the value can even be negative.

[00:30:42] Speaker C: Oh yeah, that's interesting. That's super interesting. But here's the thing. I have 720 features in my product, right? Conjoint, I think, starts to break down when you have a ton of features, or frankly, if you can't whittle it down to the ones that matter. And there's a big gotcha, going back to your earlier point about length and keeping it short. You could be in for a super long conjoint survey if you, as the company, are not choiceful about what features you put in there, what you want to measure. I don't think conjoints work when you have a hundred features in there. It just gets crazy.

[00:31:17] Speaker D: For every feature you add, and for every level of a feature you add, the sample size you're going to need to get good data increases exponentially. So I would definitely keep it to a relatively small number of features with a relatively small number of levels each.

[00:31:31] Speaker C: Give it to me. What's a relatively small number? Two? Twelve? Six?

[00:31:35] Speaker D: For a sample size of, let's say, 500 to 1,000, I would say maybe seven features. And that would include price, because price is kind of considered a feature in conjoint. Then each of those features gets maybe three to four levels at most, and eight potential price levels. That would be an example, and even that would be a fairly large conjoint. If you can get it smaller than that, even better.

[00:31:58] Speaker C: Yeah, and that can increase the response rate, the quality, all that fun stuff. Nishita, is this all crazy talk to you? I mean, do you actually think this stuff works?

[00:32:06] Speaker A: No, I definitely think that surveys are a great way to figure out customer perspective in a way that maybe the sales team isn't aware of. I also think one thing that wasn't mentioned is the importance of using neutral language in your questions. A lot of people tend to write questions that might lead a respondent to answer a certain way. So it's very important to be fair and neutral in all the questions you're posing, especially when you're asking current customers, and just as much with prospects that might be interested in buying a certain product.

[00:32:38] Speaker C: I dig it. I think that makes total sense. Exactly right. So I already picked up a few big ones here: n = 30, or 25, from Kyle, per segment; when you're thinking about sample size, think about how many people are going to respond, and 1 to 3%, low single-digit response rates, is probably more realistic than 40% of your people responding. If you're doing conjoint at around a 500-to-1,000 sample size, think about seven-ish features, and one of those could be pricing, so pricing plus maybe six features, three to four levels each, eight prices. Just to make it manageable and consumable for the end survey respondent. I think that's super good advice for anybody listening. You're probably not going to Google your way to this. This is from here only, right?
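To show the "dollar value per feature" idea Matt describes, here is a toy ratings-based regression. Real conjoint studies use choice-based designs and specialized tooling; this sketch, with invented profiles and scores, only illustrates how part-worths divide out into willingness to pay:

```python
import numpy as np

# Invented profiles: (dedicated_rep, phone_support, price in $).
X = np.array([
    [1, 1, 80],
    [1, 0, 60],
    [0, 1, 50],
    [0, 0, 30],
    [1, 1, 100],
    [0, 1, 70],
])
ratings = np.array([8.5, 7.8, 6.9, 5.5, 7.6, 5.9])  # made-up scores

# Fit utility = b0 + b_rep*rep + b_phone*phone + b_price*price.
A = np.column_stack([np.ones(len(X)), X])
b0, b_rep, b_phone, b_price = np.linalg.lstsq(A, ratings, rcond=None)[0]

# Willingness to pay: feature utility divided by utility lost per dollar.
print(f"Dedicated rep worth roughly ${b_rep / -b_price:.0f}")
print(f"Phone support worth roughly ${b_phone / -b_price:.0f}")
```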
So let's take it home here. Surveys: in your opinion, surveying customers, surveying prospects, or maybe something else, what is your favorite survey, and why?

[00:33:40] Speaker E: I like the customer survey, because the insights we take from it are really helpful for understanding churn, willingness to pay, feature importance. The breadth and depth of it makes it my favorite: we're able to get so much useful information out of it while still keeping it to that 10-to-12-minute length.

[00:34:03] Speaker C: Got it. So talk to the customers. You like that one. Who else has a vote?

[00:34:07] Speaker F: Yeah. While customer data is the most useful, I've got to say I might like surveys with experts better, because customers are a lot less attached; you have to consider false positives and false negatives more with them. But customer data is more useful, so it's a good point.

[00:34:31] Speaker C: Yeah. So you think they're biased. You like expert surveys: surveying a targeted group of folks that have deep experience somewhere.

[00:34:38] Speaker F: I do, I do. Not only do you get all the information that you want, but they're more invested, because they're more passionate about the topic. So surveying experts, interviewing experts, I think that's my favorite one.

[00:34:53] Speaker C: Yeah. And those folks will probably hang around longer and answer that 30-minute or hour-long survey, because they have so much in their heads they want to say. Right. Who else? Who has another favorite? One more.

[00:35:02] Speaker G: Yeah. I think what's so valuable about customer surveys is that you already have all this data on customers, right? You have everything you're tracking, from opportunity data to account data, just on your customers. Now you can add a layer of qualitative data from a survey, so you can understand customer sentiment. Do they actually like the product? Are they getting value from it? Do they like the features they have? Are they looking for more features? There's a lot you can get really creative with in a customer survey, layering on top of the already super deep account data you have, to get more insights and see, from a qualitative perspective: is your pricing actually working, and are there opportunities to improve it?

[00:35:41] Speaker C: Dude. Yeah. Connecting those dots. Yeah.

[00:35:43] Speaker H: And building on that, it's just so fascinating to me when we look at the current price levels and compare them to the price levels indicated within a survey, especially when we utilize a Van Westendorp, a Gabor-Granger or a conjoint analysis, and compare the current price paid to how customers actually perceive that price. A lot of times it points to potential price increases, and opportunities where we can reach a better-optimized price point than the current offerings, or where we can expand the current model. Maybe they're using an a la carte model and we have the opportunity to bundle that into a good, better, best or "core and more" model.
[00:36:22] Speaker C: I like that. I like that. My opinion here is that all the surveys can be a really great instrument. If you're really quick and scrappy, I think the customer route is really good; you can connect some dots and move on. But I actually do like what Kyle was saying, that outside expert view, even compared to the customer view. So I'm cheating a little bit. I'm saying both, and actually seeing where they're aligned and where they're very different. I think that could be super interesting for pricing and packaging.

[00:36:49] Speaker F: I think that's important in the survey design phase as well. If you are surveying separate demographics, like customers and experts, you want some questions that ask around the same idea, the same topic, so you have an apples-to-apples comparison.

[00:37:02] Speaker A: Yeah. I have a quick point about customer surveys. I personally think every survey serves its own use case, but with customer surveys, I've tended to realize that it forces the client to really think about exactly how they're describing their features, or how complicated their product is. Oftentimes, although internally they might know their product in really thorough ways, they have to reframe that and put themselves in the customer's brain to understand exactly what they want from their customers.

[00:37:32] Speaker G: Wow.

[00:37:32] Speaker C: So just doing the survey is actually beneficial, right? Because having to think it through in their shoes gets you closer to the customer. I think that's a fantastic point. Just do it. And when it comes to designing your survey, I picked up: be really thoughtful about keeping it short; the screener, to make sure you get the right population and your segments; the sweetener, or the incentive to do it; and the sample size. Right? So I picked up on all those things. I think folks listening here now have a better leg up on doing an impactful, successful survey. So guys, thank you for that one.

We have one last round, and I want to punch right through it. This is the "get out of my way, AI." So AI is coming. Actually, it's not coming, it's here. It's here, and we're using it. It's changing the way people buy, analyze, do things. I want to know, from your perspective: how is AI going to impact pricing, data, data analysis, maybe even surveys? What's your point of view on how AI is going to change things?

[00:38:35] Speaker G: Yeah, I see the emergence of AI doing two major things. One, it's going to accelerate insights tremendously; it already has. And two, it's going to put data more at the forefront to drive strategy and decision-making. A good example is what Tableau has done with AI. They recently launched Agent and Pulse. Agent is pretty much a ChatGPT kind of thing, where you ask in layman's terms, hey, can you spit out this dashboard? Can you generate some charts around discounting? And it can just pop them up, which shortens the time to insight. Pulse is also super interesting: it's like a smartphone app that gives you daily notifications on how your data has changed in your database, pretty much putting insights in the palm of your hand, as they put it. So it's really interesting to see where AI is going. It's not going away anytime soon. It's already pretty ubiquitous, and I'm excited to see where it takes us.
[00:39:34] Speaker C: Yeah, that's crazy talk. I mean, with the accelerating and the prioritizing of data, do you think it could do your job? I'm just going to get controversial here, right? Are you going to pack up and go work at a Wetzel's Pretzels now that AI is here? What are you going to do?

[00:39:47] Speaker G: Yeah, I think it is often used to supplement analytics. It can do a lot of things super quickly and get to those insights quickly. But I think one of the most valuable things is knowing how to use AI, and being able to layer in that level of strategy and decision-making with it. It's not just that you have this chart; using your expertise, especially all the things you've seen from a pricing perspective, is what helps. Use AI as a supplement to get to better decisions.

[00:40:19] Speaker C: Anybody else want to throw their point of view in the ring?

[00:40:22] Speaker E: I think data privacy is also a concern. When you have specific clients, there's always a level of trust that you will safeguard their data and make sure there's no outside access. And you have a case like, let's just say, ChatGPT, where you can't just throw client data into it, especially when they're training their models on it. So data privacy is definitely a concern: you have to make sure that if you're utilizing AI, it's in a closed, in-house system, not giving outside organizations access to the proprietary data.

[00:41:03] Speaker C: You can't let that stuff leak out there. No, especially when you think about privacy laws across the US and other countries, like in Europe. You've got to be extra mindful of that. One more point of view: who wants to throw in, how's AI going to impact this?

[00:41:15] Speaker D: Well, I'll just add to some of the things other people said. AI can be powerful and it can do a lot of things, but it's no substitute for actually understanding your data yourself. You could ask the AI to generate a bunch of charts, but if you don't understand what those mean and what underlying data is going into them, sure, you have a bunch of charts, but are you really learning anything from that? I think understanding which metrics are important, and why, is not something AI is currently able to decide. So using AI to speed up workflows, potentially, I can see being more of a use case. Whereas using AI to understand what matters, and what we need to look at at all, is something you're still going to have to decide on your own.

[00:41:58] Speaker C: Yeah. Mic drop on that one. Nishita, you have one thing; I want to make sure I get you in here.

[00:42:03] Speaker A: Oh yeah, I was just going to add on to Jack and Sam's points about the uses of AI. I think AI is great when handling large amounts of data, like we all do. One thing that I think is really important: when you're analyzing or scraping competitive data, it's a great use case. And allowing businesses to adjust their prices in real time is also something AI can help with. Real-time tracking, dynamic tracking, as well as aggregating larger data sets, is something I will personally be using AI for.
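Nishita's competitive-tracking use case ultimately reduces to a rollup like the one below. The scraping itself is out of scope here; assume observations land as rows. The competitors, plans and prices are invented:

```python
import pandas as pd

# Invented scraped observations of competitor list prices.
obs = pd.DataFrame({
    "competitor": ["A", "A", "B", "B", "C", "C"],
    "plan":       ["pro"] * 6,
    "price":      [49, 52, 45, 45, 59, 61],
    "seen_on":    pd.to_datetime(["2025-03-01", "2025-03-08"] * 3),
})

# Keep the latest observed price per competitor and plan, the kind of
# rollup an AI pipeline can keep refreshed for real-time positioning.
latest = (
    obs.sort_values("seen_on")
       .groupby(["competitor", "plan"])
       .tail(1)
       .sort_values("price")
)
print(latest)
```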
[00:42:31] Speaker C: Hear, hear. I love that, especially because now the information is more fluid. You can actually see things and make decisions more in real time, with relevant information. Guys, it does not sound like you're going to be at Wetzel's Pretzels anytime soon, serving those delicious little hot nuggets. So here's the thing. You went through and gave us some really good, strong points of view, all around data. I want to thank you for that. And for the companies out there looking at how to get better at analysis, at understanding what's happening with their products and where the value is and how it's changing, I think all the tips you laid out were stupendous. I have one more question for you, though. It's my favorite question of all: what is your favorite music jam right now, present day? Think about it. I'm going to hit you with this. I'm going to go round robin really fast, actually reverse round robin from the order I went last time, which means, Brenden, you are first. Spit it out. What's your favorite song? Let me hear it.

[00:43:31] Speaker H: Oh, this is a really hard-hitting question, Marcos, but I would like to say "Weird Fishes" by Radiohead. I know it's more of an oldie, or I guess an oldie for me, like it was produced in 2005, but I'm a really big Radiohead fan. "Weird Fishes." If no one here has heard it, I definitely recommend it. Amazing song, and definitely at the top of my list.

[00:43:51] Speaker C: Thank you so much. Can you please sing the chorus for everybody? I'm joking. No singing on this one, guys. No singing. Jack, what is your favorite song?

[00:43:58] Speaker G: Yeah, anytime it comes on, it lights me up, every time. I've got to go with "Dream On," Aerosmith.

[00:44:03] Speaker C: Oh, very nice. Very nice. I love the build on that song. Aerosmith, "Dream On." All right, Matt, you're up.

[00:44:09] Speaker D: I'm an electronic music guy. I've been listening to Justice's newest album, which I guess isn't all that new anymore; it's almost a year old. But if I had to pick a favorite song on it, maybe "Neverender."

[00:44:21] Speaker C: "Neverender" from Justice. That's an energy builder. Kyle, you thinking about it?

[00:44:26] Speaker F: Yeah, no. Of all time, now and before, it's always going to be Nujabes for me. "Feather" by Nujabes.

[00:44:36] Speaker C: Classic for you, Kyle. Now I know a little more about you, actually. Give it to me, Sam.

[00:44:43] Speaker E: I like country music, so I'll go with "Hurricane" by Luke Combs.

[00:44:47] Speaker C: Oh, very nice. "Hurricane" from Luke Combs. Nishita, you're the last one. Take us home. What's your favorite jam?

[00:44:53] Speaker A: Yeah, I'm a really big fan of Frank Ocean. One of my favorite songs by him is "Biking," the one with the featured artists. There are a lot of featured artists on that. Very calming song. I really like it.

[00:45:03] Speaker C: I love it, I love it. And all actually a little different when I think across the mix here, all six, which is fantastic: that you all can converge on a data set coming from all these different perspectives. I think that's one of the big things as well. When I think about what we can do to get better at data, I'm thinking a little bit about what you said and how AI is coming in. When you're thinking AI, you've got the pace piece: it can pick up the pace as an accelerator, and it makes data a priority. Garbage in, garbage out, so make sure you prioritize good data.
I also picked up that you want to have good perspective, which means AI is not going to understand it for you. You have to understand your own data overall. So I think those are really, really tangible takeaways for the audience. And I want to thank you guys for coming on the show. I'd love to have you back to give us more deep perspectives.

[00:45:54] Speaker D: Cool.

[00:45:55] Speaker C: Yes. Yes. All right, so, team, you just heard the pricing data leaders at Pricing I/O. These guys, again, look at more data than anyone. They crunch it, they model it, they understand it, and they draw a lot of insights from it. And hopefully you were able to draw a lot of insights from today's episode. But don't just sit on this. Take what you learned, apply it, get 1% better. Move away from that guesswork. All right: you want to stop guessing and start growing. Until next time, thank you and much love.

[00:46:26] Speaker B: Thanks for listening to the Street Pricing Podcast with Marcos Rivera. We hope you enjoyed this episode. And don't forget to like and subscribe. If you want to learn more about capturing value, pick up a copy of Street Pricing on Amazon. Until next time.
