The Importance of Validation in Product Development: A Strategic Approach


When you’re developing a product, it’s natural to assume that the features you’re adding will deliver value to your customers and your business. But how do you truly know that these features are providing the expected value? Recently, I’ve been working with a client facing a common issue—sales-driven features are fragmenting their product, making it harder to use. This issue stems from prioritizing short-term gains over long-term value. In this blog post, we’ll explore the pitfalls of this approach and how shifting focus towards value-driven development can lead to more sustainable success.

The Danger of Sales-Driven Features

Short-Term Gains, Long-Term Losses

One of the most significant challenges in product development is the pressure to close deals. Sales teams are often driven by the need to hit targets, which can lead to the inclusion of features that have little to do with the product’s overall vision or the needs of the end-users. While closing a deal is important, doing so at the expense of the product’s integrity can have long-term negative consequences.

  • Fragmentation of the Product: Adding features solely to close a deal can make the product more complex and harder to use.

  • Hidden Costs: A feature might help close a $30,000 deal, but if it costs $100,000 in support, maintenance, and lost future sales, was it worth it?

The Sales Incentive Problem

Sales teams are usually incentivized based on the deals they close. They might receive a bonus for securing a $30,000 deal, regardless of whether the feature they sold adds long-term value. This creates a misalignment between what’s good for the product and what’s good for the salesperson.

Example: Microsoft Azure’s Approach

Microsoft faced a similar challenge with Azure. Originally, sales bonuses were tied to the number of Azure hours sold, leading to situations where customers bought large amounts of capacity but didn’t use it, resulting in dissatisfaction. Microsoft shifted its approach by tying bonuses to the actual usage of Azure hours by customers, encouraging salespeople to focus on customer value rather than just closing deals.

Shifting Focus: From Revenue Extraction to Value Creation

The Hypothesis-Driven Approach

To ensure that your product is delivering real value, it’s crucial to adopt a hypothesis-driven approach. This method involves creating a hypothesis about the value a feature will add before it’s implemented, and then testing that hypothesis once the feature is live.

What is Hypothesis-Driven Engineering?

Hypothesis-driven engineering involves making educated guesses about the impact of a feature and then validating those guesses with data. For example:

  • Hypothesis: Adding a passwordless login option will increase user sign-ups.

  • Validation: After implementing the feature, track how many users choose the passwordless option and whether overall sign-ups increase.

By continually testing and validating these hypotheses, you ensure that every feature added to the product contributes to its overall value.
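
To make that concrete, here is a minimal sketch in Python of what checking the passwordless-login hypothesis might look like once sign-up events are being collected. The event format and the baseline figures are invented for illustration:

```python
from collections import Counter

# Hypothetical sign-up events recorded after the passwordless option shipped;
# each event notes which login method the new user chose.
signups = [
    {"user_id": "u-01", "method": "password"},
    {"user_id": "u-02", "method": "passwordless"},
    {"user_id": "u-03", "method": "passwordless"},
]

# How many users chose each option?
by_method = Counter(s["method"] for s in signups)
print(f"Passwordless share of sign-ups: {by_method['passwordless'] / len(signups):.0%}")

# Did overall sign-ups increase, or did existing demand just switch methods?
baseline_per_week = 100  # assumed weekly sign-ups before the change
current_per_week = 112   # assumed weekly sign-ups after the change
print(f"Lift vs baseline: {current_per_week / baseline_per_week - 1:+.0%}")
```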

Measuring Success

To measure the success of a hypothesis, it’s essential to have the right telemetry in place. This means setting up your product to collect data on how features are used, allowing you to make informed decisions.

  • Data Collection: Track how users interact with new features.

  • Usage Analysis: Determine if the feature is being used as expected and if it’s contributing to the overall goals of the product.
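
Data collection can start very small. In the sketch below (Python; the function, log file, and field names are all hypothetical) a single feature-usage event is recorded with just enough context to answer usage questions later:

```python
import json
import time

def track(feature: str, user_id: str, action: str) -> None:
    """Record one feature-usage event; a real system would send this
    to an analytics backend rather than a local log file."""
    event = {
        "feature": feature,
        "user_id": user_id,
        "action": action,
        "timestamp": time.time(),
    }
    with open("telemetry.log", "a") as log:
        log.write(json.dumps(event) + "\n")

# Called wherever the feature is exercised:
track(feature="passwordless_login", user_id="u-42", action="clicked")
```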

The Importance of Telemetry

Telemetry is the lifeblood of hypothesis-driven development. Without it, you’re flying blind. You need to monitor how features are used, gather data on their impact, and adjust your approach based on that data. This allows you to make evidence-based decisions and ensures that you’re investing in features that truly add value.

Avoiding the Pitfalls: A Case Study

The Chaos Report

According to data from the Standish Group's Chaos Report, only 35% of the features in software products are actually used by customers. This means that for every million dollars invested in product development, $650,000 is wasted on features that don't add value.

  • Why is this happening? Features are often added without sufficient validation of their value.

  • What can be done? By adopting a hypothesis-driven approach and validating features before full-scale implementation, you can reduce waste and increase the value delivered to customers.

Practical Steps to Implementing Hypothesis-Driven Development

1. Define the Hypothesis

Before implementing any feature, clearly define what you expect it to achieve. For example, “If we add a Facebook login option, we expect a 10% increase in new user sign-ups.”
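
One lightweight way to do this is to write the hypothesis down as structured data that the team can compare against results later. This is only a sketch, and the field names are invented:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    feature: str          # what we are adding
    metric: str           # the number we expect to move
    expected_lift: float  # e.g. 0.10 for a 10% increase
    window_days: int      # how long we measure before judging

facebook_login = Hypothesis(
    feature="Facebook login option",
    metric="new user sign-ups",
    expected_lift=0.10,
    window_days=30,
)
```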

2. Set Up Telemetry

Ensure that you have the right tools in place to track the feature’s usage. This might involve setting up analytics to monitor user behavior, tracking how many users engage with the new feature, and analyzing whether it’s driving the desired outcome.
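
Building on the hypothetical telemetry.log format sketched earlier, usage analysis can begin as a simple aggregation that counts interactions and distinct users per feature:

```python
import json
from collections import Counter, defaultdict

# Read back the one-JSON-object-per-line events written by track().
with open("telemetry.log") as log:
    events = [json.loads(line) for line in log]

interactions = Counter(e["feature"] for e in events)
users_per_feature = defaultdict(set)
for e in events:
    users_per_feature[e["feature"]].add(e["user_id"])

for feature, count in interactions.most_common():
    print(f"{feature}: {count} interactions from "
          f"{len(users_per_feature[feature])} distinct users")
```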

3. Validate the Hypothesis

Once the feature is live, compare the actual data against your hypothesis. Did you achieve the expected increase in sign-ups? Are users engaging with the feature as anticipated?
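
In code, that comparison can be as simple as the sketch below, which checks an observed lift against the expected one; the sign-up figures are invented:

```python
def validate(metric: str, expected_lift: float,
             baseline: float, observed: float) -> bool:
    """Did the metric move at least as much as we predicted?"""
    actual_lift = (observed - baseline) / baseline
    print(f"{metric}: expected +{expected_lift:.0%}, got {actual_lift:+.0%}")
    return actual_lift >= expected_lift

# Hypothetical figures: 1,000 sign-ups in the month before launch, 1,080 after.
confirmed = validate("new user sign-ups", 0.10, baseline=1000, observed=1080)
# confirmed is False here: an 8% lift falls short of the 10% hypothesis.
```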

4. Make Data-Driven Decisions

Based on the data, decide whether to:

  • Double Down: Continue investing in the feature if it’s driving value.

  • Pivot: Adjust the feature to better meet user needs.

  • Stop Investing: If the feature isn’t adding value, consider removing it or stopping further development.
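
These three options can even be encoded as a simple decision rule, as in this sketch; the thresholds are arbitrary and would need tuning to your product:

```python
def decide(actual_lift: float, expected_lift: float) -> str:
    """Map a measured impact onto one of the three investment decisions."""
    if actual_lift >= expected_lift:
        return "double down"    # the feature is driving the value we predicted
    if actual_lift > 0:
        return "pivot"          # some signal, but not what we expected
    return "stop investing"     # no measurable value; consider removing it

print(decide(actual_lift=0.08, expected_lift=0.10))  # -> pivot
```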

The Role of Product Managers and Owners

Accountability and Decision-Making

Product managers and owners play a crucial role in this process. They are responsible for ensuring that features align with the product’s vision and deliver value to users. This involves making tough decisions about which features to invest in and which to discard.

  • Ask the Right Questions: How many customers are using the feature? Is it worth the investment? What’s the total cost of ownership?

  • Use Data to Back Decisions: With telemetry in place, product managers can make informed decisions about the future of features, ensuring that the product evolves in a way that maximizes value.

Conclusion

In today’s fast-paced product development environment, it’s easy to fall into the trap of adding features to close deals without considering their long-term impact. However, by adopting a hypothesis-driven approach, collecting the right data, and making informed decisions based on that data, you can ensure that your product remains focused on delivering real value to your customers.

Key Takeaways

  • Avoid Sales-Driven Features: Focus on long-term value rather than short-term gains.

  • Implement Hypothesis-Driven Development: Define, track, and validate the impact of new features.

  • Use Telemetry to Make Informed Decisions: Collect data to ensure features are adding value.

  • Empower Product Managers: Give them the tools and data they need to make evidence-based decisions.

🚀 By following these principles, you can create products that not only close deals but also deliver sustained value to your customers and your business.

Transcript

When you think a feature is going to be valuable to your customers or to your business, how do you know that the work has actually provided value? I’ve been working with a customer recently where a lot of sales-driven features end up in the product, which is having the effect of fragmenting the product and making it more difficult to use. The driving force for adding those features to the product is closing a deal; it’s got nothing to do with what the users of the product want. It’s got nothing to do with what the business that is creating the product wants. All it has to do with is closing the deal with the customer.

So why is that bad? We obviously want to close deals with customers, but what’s bad is that we don’t know whether the things we’re creating actually produce value or not. There’s definitely an assumed value, right? We think if we add this feature, we’re going to get value from it and close that deal. But what’s the long-term impact of that feature? We might close that $30,000 deal, but if over the next five years that feature costs us $100,000 in support and maintenance and all those kinds of things, and we don’t close any more deals because of it, then it wasn’t worth adding that feature to close that $30,000 contract.

But your sales guys don’t care because they’ve made their 5% bonus on the $30,000 deal they closed. There’s no incentive for them to really focus on the right features that will support many of their customers. They’re just worried about closing the deal, because that’s how they get the bonus. That’s usually the metric for salespeople.

One of the ways to turn that around is to start changing the way you measure and the way you deliver bonuses. People behave how they’re measured, right? A great example again is Microsoft, which has lots of these examples because they’ve been through some of these traumas. They switched the Azure sales folks from getting their bonus based on the number of Azure hours they sell to getting it based on the number of Azure hours the customer uses.

So then instead of selling a million dollars’ worth of Azure to the customer and the customer being unhappy at the end of the year because they’ve used $40,000 of it and the rest is waste, the salesperson is focused on usage. How can I help you as the customer use this product, not how can I be draconian, close the deal, and get you to sign on the dotted line? Signing on the dotted line is not the value that the customer wants; using the features and capabilities to the maximum extent is what the customer wants.

It’s that shift in focus from revenue extraction towards value creation, because quite often that short-term view on revenue extraction has a long-term cost that’s not obvious to the people making the decision to add the feature. It might be obvious to the people who are actually doing the work, but they don’t have any say or control over that.

This validation is really important because it loops back around: once you’ve shipped a feature, you monitor that feature’s usage. You collect telemetry from that feature. Now, in order to collect the right telemetry, you need to know up front what you intended. We can always say in hindsight, “Well, this feature did this,” right? But was that what was intended? Is that why you added that feature in the first place? Is that the change that you wanted to make?

This is why I’m a big proponent, as part of product management, of hypothesis-driven engineering practices. It doesn’t have to be engineering; it can be anything we’re building, though if we’re building a product, I guess I would just call that engineering anyway. If we’re building a product, every feature we want to add should come with a hypothesis, unless it’s just table stakes; there are features that we simply have to have.

For example, if you’re going to have a web-based product with dynamic content specific to the user, they’re going to have to be able to log in. I don’t need a hypothesis to say, “Is adding login a good idea?” We kind of have to have it; it’s table stakes. But the level that login goes to might involve some kind of hypothesis, right?

What if we make it easier to log in? A basic username and password is how most login systems work, but most systems are moving to passwordless. If we implement a passwordless system, do we get more or fewer users using our product? If we add a LinkedIn, Windows, Apple, or Google auto-login, does that increase the number of users we get in the system?

So the hypothesis would be, “If we add the capability for people to log in with Facebook, we’re going to get a higher number of people logging into the system because it’s easier for them to log in.” That would be a hypothesis. Then you might ask, “Well, how much is that going to cost?” Hopefully very little; that sort of thing should take very minimal effort to add to your system, right?

If you add that to your system, what data are you going to collect to know whether your hypothesis held? Well, I want to know how many people click the Facebook link versus use the username and password. I also want to know the total number of net new users coming onto the system.

What I would expect to see is this: we’ve got our line for new users in the system; we add that feature, and that line shows something noticeable, telling us we’re increasing at a higher rate. Then we can look at the data and say, “Well, 10% of people clicked the Facebook button and we’ve got a 10% increase in net new users. Therefore, we’ve opened up access to new users and new markets.”

Fewer people go to the page and then bounce; more people go to the page and then sign in, just because it’s easy: they can just click the button. So that’s hypothesis-driven practice. We have to look at the data and figure out whether the thing we added has the result we expect.

But this is the important part: whose job is it to provide the hypothesis? It could be the person who wants the feature, who’s asking for the feature. This is something I encouraged a customer to do recently: push back to sales.

So if sales say, “We want feature X because we think it’s going to close this deal,” and this deal is worth X amount of money, engineering should say, “Well, how many other customers are going to use the feature? How much do we think it’s going to increase usage of the product?”

I’d like you to come up with a hypothesis for why we’re adding this feature and what we expect to be different, other than just closing that deal, because we want to look at the total cost of ownership of adding a feature over, let’s say, five years: support and maintenance and testing and automated testing and all of those things.

What’s the total cost of that? The amount of time it adds to the build, right? All those kinds of things. A lot of that is guesses, but we come up with, “Here’s what we think it will cost.” Are we actually going to make the money back that we’re putting in? Is it enough of a difference to make it worth doing? If we do it, what else is it going to support? How many other customers is it going to help?

That’s the clincher. Do you understand how many customers are using the features that you have in your product? There’s some data from the Standish Group in Boston, who created the Chaos Report. I think they still produce it every year; it might have a new name now.

In that report, they collect data; they’re a data-analysis group. They collect data across, I think, about 70,000 to 75,000 projects worldwide, mostly in the US and Europe but some in the rest of the world. I think it’s something like 60% US, 30% Europe, and 10% the rest of the world. I can’t remember exactly, but I seem to remember those numbers; I could be making them up.

The data that they analysed showed that only 35% of the features that we build into our products are actually used by our customers. A feature might be used a little, but not enough to make it worth having in the product. They’ve analysed that across all these products.

So why is that number so low? It sounds like for every million dollars you invest in your product, you’re only getting $350,000 worth of value, right? That’s a lot of waste: $650,000. That’s a lot of money. Where’s it going? Why are we building features that our customers aren’t using?

Even worse, why are we continuing to invest in features that we could know our customers aren’t using? That’s the even more interesting question. How many of the features in your product do you track the usage of? And how many features in your product are you actively adding new functionality to that your customers don’t use?

Because if they’re not using it, you probably want to think twice about adding new functionality to it. You’ve got that old adage: we can either double down, so we keep investing in that feature because we think it’s going to be valuable in the future and customers will use it.

Or we pivot: we change the way that feature works in order to maximise user engagement with it, because if they’re engaging with it, it’s valuable. Or we stop investing in it and perhaps take it out of the product.

How often do you make those decisions? And how often do you make those decisions based on data? Do you have the data to be able to make those decisions? This is something that product management wants. The only way they can get the data is if the team adds that capability into the product.

It needs to be integrated with the product because you want to be looking at the actual features and how they work, and create telemetry specific to those features. Make sure that, based on what you intended the feature to do, you’re able to track that data, see the needles moving, and decide whether to continue investing or stop.

That’s something that every product manager and every product owner should have for almost every feature they add, or intend to add, to the product. That can be great feedback and information you can use when you’re talking to stakeholders who’ve asked for those features.

Before you’ve built the whole feature, build a little bit of it, validate that it’s the right feature, and if you see there’s very few people using it, go back to the stakeholder and say, “We don’t want to keep investing in this feature because nobody’s using it. Do you know why nobody’s using it?”

You can go ask the customers as well, but this is your pushback on that financial investment that’s maybe been imposed upon you if you don’t control everything. Validation is a super important part of product development. It’s often lacking.

I feel that it’s more often lacking than almost anything else in product development. So go out there, figure out what telemetry you need in your product, get your engineers to build it, and start making evidence-based decisions. Validate that the features that you create are actually adding the value that you intended for them.

