When it comes to managing product investments, I’ve shifted my perspective significantly over the years. The phrase “stay within budget” doesn’t resonate with me anymore. Instead, I view it as having a pool of money that I can allocate strategically to maximise value. This approach requires a solid understanding of the data at hand and a clear vision of what we aim to achieve.
Embracing Hypothesis-Driven Engineering
One of the key concepts I advocate for is hypothesis-driven engineering practices. Whether we’re developing new products or enhancing existing ones, starting with a hypothesis is crucial. Here’s how I approach it:
Identify the Idea: What do we want to add to our product? This could be a feature on your backlog or a broader initiative across your product portfolio.
Define the Outcome: What are we trying to achieve? This clarity helps in aligning our efforts with the desired results.
Run Small Experiments: What’s the smallest experiment we can conduct to test our hypothesis? This allows us to validate our ideas without committing extensive resources upfront.
Measure Progress: How will we assess whether we’re moving towards our goal? Regular evaluation is essential to determine if we should continue investing in a particular initiative.
This methodology can be applied at any scale, from individual features to entire product lines.
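To make that loop concrete, here is a minimal sketch in Python of how a team might capture a hypothesis and the evidence behind it. The names and structure (Hypothesis, Measurement, decide, and the example metric and target) are illustrative assumptions rather than a prescribed tool; the point is simply that every idea carries an expected outcome, a measure, and a regular continue-or-stop decision.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Measurement:
    """One observation of the metric chosen for this hypothesis."""
    period: str   # e.g. "month 1"
    value: float  # e.g. number of teams using the new capability weekly

@dataclass
class Hypothesis:
    """A single idea expressed as a testable bet."""
    idea: str                 # what we want to add
    expected_outcome: str     # the result we believe it will produce
    metric: str               # how we will measure progress
    target: float             # the level that would justify further investment
    measurements: list[Measurement] = field(default_factory=list)

    def decide(self) -> str:
        """Regular checkpoint: keep investing, or move the money elsewhere?"""
        if not self.measurements:
            return "run the smallest experiment first"
        observed = mean(m.value for m in self.measurements)
        return "continue investing" if observed >= self.target else "stop and reallocate"

# Illustrative example only: a hypothetical feature and made-up numbers.
h = Hypothesis(
    idea="surface technical-debt hotspots in the dashboard",
    expected_outcome="teams act on debt they could not previously see",
    metric="teams using the report weekly",
    target=50.0,
)
h.measurements.append(Measurement("month 1", 12.0))
h.measurements.append(Measurement("month 2", 18.0))
print(h.decide())  # -> "stop and reallocate"
```

The decision rule here is deliberately crude; what matters is that the checkpoint exists at all, and that stopping and moving the money elsewhere is treated as a legitimate outcome rather than a failure.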
A Real-World Example: Azure DevOps
Let me share an instructive example from the Azure DevOps team at Microsoft. They faced a challenge: how to help customers manage their technical debt effectively. The product unit manager, who holds overall budgetary control, recognised that people are the most significant expense in product development. With a team of roughly 600 people, the focus was on allocating those people to the ideas that would yield the highest return on investment.
The team proposed a grand idea: to create tools that would help customers identify and manage their technical debt. This initiative required collaboration across various teams, each contributing their insights and expertise. They dedicated a significant amount of time—around four to six months—to explore this concept.
However, after extensive experimentation and customer feedback, they discovered that their solutions didn’t resonate with users. Despite the investment—potentially around £10 million—they learned a valuable lesson: not every idea will succeed, and sometimes, it’s better to pivot and redirect resources elsewhere.
Learning from Failure
In traditional project management, this could easily be seen as wasted money. Yet, I argue that the learning gained from this experience is invaluable. They avoided the pitfalls of long-term investments in a failing idea, which could have resulted in a situation akin to the Windows 8 debacle. Imagine the cost of nearly 20,000 people working for six years on a product that ultimately disappointed customers and damaged brand reputation.
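A rough back-of-envelope calculation shows why that matters. The per-person cost below is purely an assumed figure for illustration; only the headcount and the six-year timescale come from the example above.

```python
# Back-of-envelope only: cost_per_person_year is an assumed, illustrative figure.
people = 20_000                  # roughly 4,500 engineers plus 15,000 others
years = 6
cost_per_person_year = 150_000   # assumed fully loaded annual cost per person

total = people * years * cost_per_person_year
print(f"~${total / 1e9:.0f} billion")  # ~$18 billion
```

Against a figure of that order of magnitude, a four-to-six-month experiment costing around £10 million that tells you to stop is cheap insight, not waste.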
This experience catalysed a shift in Microsoft’s approach. They recognised the need for rapid testing and validation of ideas, ensuring that they could quickly adapt based on customer feedback. This shift is not just necessary for product teams; it should permeate every level of the organisation, from the ground up to the boardroom.
Conclusion: The Path Forward
In today’s fast-paced environment, adopting hypothesis-driven engineering practices is essential for success. It allows us to make informed decisions, minimise wasted resources, and ultimately deliver products that meet customer needs.
As we move forward, let’s commit to this approach at every level of our businesses. By doing so, we can ensure that our investments yield the maximum value and that we remain agile in the face of change. Remember, it’s not just about the money spent; it’s about the insights gained and the ability to pivot when necessary.
Transcript
If you want to ensure that your products… I actually don’t like the phrase “stay within budget.” I know everybody uses it, but I don’t think that way anymore. When I’m investing in things, I think about it as having a pool of money, and I’m going to move that pool of money around to where it’s going to provide the most value. Right? So I need to be looking at… I need data, right? I need to understand what’s going on.
For example, I talk a lot about hypothesis-driven engineering practices. If we’re building brand new products or even adding capabilities to existing products, I want to have a hypothesis-driven story. That means that we’re going to have a… you know, it starts with an idea. What do we think we would like to add to our product? This could be applied to a thing on your backlog, this could be applied to the whole product, or it could be applied to a portfolio of products. Right? But what do you think you’re trying to do?
Then you would do some kind of analysis of that and figure out, well, what’s the outcome I’m trying to achieve? Right? This is what I want as the outcome. This is the thing that I’m going to do that I think will help us move towards that outcome. What’s the smallest experiment that I can run? Then I would look at how I’m going to measure whether I’ve made progress towards that goal, and I’m going to regularly assess, based on that progress, whether I continue to invest in this thing or not. This can be at the small scale or the grandiose scale, any scale you like.
So a good example is the Azure DevOps team at Microsoft. They were interested at one point in trying to answer a particular question. Right? The head of the product, Microsoft calls it the product unit manager, I think that’s their technical term, has overall budgetary control. Right? So they have all the money, and they know that money really means people. Right? So they have, let’s say, 600 people (it’s not exactly 600, but something like that), and the thinking is: I need to allocate those people to the ideas that make sense in order to maximise my return on investment, keep my stakeholders happy, keep my customers happy, keep building products, and keep an eye to the future on ideation and what it is that’s coming next.
So they’re allocating people’s time as the biggest expense. Right? And they had this idea brought to them: we’d like to help customers discover and deal with their technical debt. Right? After all, they’re building software that helps build software. We’re all building software, and we all have technical debt. So it would be great if we could understand what technical debt we have and then have things that help us minimise it. That’s a grandiose idea; it’s a big idea that goes across the entire product suite.
You know, I’m going to allocate a bunch of time to it. So the idea is that I, as the über product owner (there are lots of people in charge of different parts of the product, but the über product owner here is the product unit manager), decide we’re going to try this thing: I’d like to see some experiments within the context of this story, and you folks figure out what it is you’re going to do.
So his lieutenants, who run the different parts of the product, come up with ideas and figure out how to collaborate. What are we going to do? What ideas do we think we have? They come up with some ideas and bring them back, saying, “This is what we’re thinking. This is what we want to try.” Okay, cool, let’s try this one; this looks pretty good. And they try it. That could involve 100 people spending some amount of time, within a quarter, within a year, within some fixed timescale, trying to create some features in this context. Right?
So when we’re spending time on this experiment, that’s money not being spent on other things that we know are valuable. We’re speculating on something that we think might be valuable in the future, right? Something we might be able to do something with. So they spent a bunch of time on it, about four to six months if I’m remembering rightly, and they came back and said, “We don’t understand how to solve this problem.” They had managed to create some things and got them in front of real customers (like myself, I’m their customer), and they just didn’t resonate. They just didn’t provide value.
They tried a whole bunch of different ideas. There were five or six different teams, different groups involved in it. They all had different ideas; they all collaborated together to create some of these ideas, and nothing resonated with the customers. So at some point, the product unit manager is like, “Right, let’s not invest any more money in this because we’re not… we’ve not got an idea that’s good enough to move the needle or solve this problem.”
You could say that that amount of money, maybe they invested 10 million in this capability, was wasted money. You could say it was wasted, but you could also say that there was a lot of learning involved. They learned not to go and tackle that problem. They learned to save their money and spend it on something else. In the traditional world, they would have spent years on that capability only to find out it was a bad idea. Right? Think of the Windows 8 debacle, where Microsoft spent six years. They had 4,500 software engineers working on Windows and 15,000 other people involved in that story, so you’re talking nearly 20,000 people working for six years. Imagine the cost to your budget of 20,000 people working for six years on a product where the end outcome did not meet the expectations of the customer and did not live up to the brand. In fact, it detracted from the brand value, detracted from the customer experience, and you had a lot of unhappy customers who jumped ship to other products like Apple’s. Right? That was a big jump-ship moment when people moved to other platforms.
That, fundamentally, was the catalyst for Microsoft looking at themselves and going, “Yeah, we’re doing this wrong.” That long cycle, that long five-year budgetary plan, not getting things in front of customers quickly, not quickly testing and validating whether we’re doing the right thing. Fast testing and validation is what we need to be doing, and we need to be doing it on every product, everywhere in the business, at every scale possible: from the teams working on the products themselves to the board making decisions about which products and capabilities to fund and which not to. Hypothesis-driven engineering practices: you need to be doing them at every level of your business.