Evidence-Based Management: The Four Key Value Areas in Scrum


When we talk about evidence-based management in Scrum, we’re focusing on making decisions grounded in data rather than gut feeling. A core element of this approach involves evaluating our work through four key value areas. These areas ensure a holistic view, covering different aspects of the system instead of focusing narrowly on specific metrics. This allows for a more strategic understanding and avoids optimizing one level of the system at the expense of the whole.

What Are the Four Key Value Areas?

The four key value areas outlined in Scrum.org’s Evidence-Based Management Guide are categorized into two groups:

  1. Market Value - Focused on the external perception and value of our product in the market.

  2. Organizational Capability - Concentrated on internal capabilities and efficiency.

Each of these categories contains two key value areas, offering a comprehensive view of both product and organizational performance.


Market Value: Understanding Customer Needs and Market Potential

Market value is about measuring how our product fares in the market. It includes two key value areas:

1. Current Value

Current value is all about understanding the present performance of our product. It measures the value we are providing right now, focusing on customer satisfaction and usage patterns.

Key Metrics to Measure Current Value:

  • Telemetry Data: Real-time or near-real-time data that shows which features customers use, how often they use them, and which segments of customers interact with these features.

  • Customer Satisfaction Scores: These can be gathered through surveys or feedback forms, though they tend to be lagging indicators.

  • Revenue Metrics: Such as Revenue Per Employee or overall Revenue Growth, which indicate the financial health derived from the current product.

💡 Pro Tip: If you’re a product manager, make sure to leverage telemetry data. It’s more immediate and can offer insights into how users are engaging with your product.
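To make the telemetry point concrete, here is a minimal sketch (in Python) of rolling raw usage events up into a per-feature view of usage counts and distinct customers. The event shape and field names are illustrative, not from any particular telemetry platform:

```python
from collections import Counter

def feature_usage(events):
    """Roll up raw telemetry events into per-feature usage counts and
    the number of distinct customers using each feature."""
    uses = Counter()
    customers = {}
    for event in events:
        uses[event["feature"]] += 1
        customers.setdefault(event["feature"], set()).add(event["customer"])
    return {f: {"uses": uses[f], "customers": len(customers[f])} for f in uses}

# Illustrative events; a real pipeline would read these from your
# telemetry store and slice by segment or time window as well.
events = [
    {"feature": "export", "customer": "acme"},
    {"feature": "export", "customer": "globex"},
    {"feature": "search", "customer": "acme"},
]
print(feature_usage(events))
# {'export': {'uses': 2, 'customers': 2}, 'search': {'uses': 1, 'customers': 1}}
```

The same aggregation, grouped by customer segment instead of feature, answers the "which segments interact with which features" question above.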

2. Unrealized Value

Unrealized value represents the potential opportunities that haven’t been captured yet. This can include new features, market segments, or entire product lines. It’s about envisioning what could be rather than what is.

How to Identify Unrealized Value:

  • Product Backlog Analysis: Review items that have not yet been developed and analyze their potential impact.

  • Market and Competitor Analysis: Look for gaps in your competitors’ offerings and industry trends.

  • New Market Exploration: Think about expanding your product into untapped markets, offering new capabilities, or attracting new customer segments.

🧠 Example: Think of a TV series. Typically, a brand-new show can capture a larger audience than an additional season of an existing one. The same applies to products: new features can attract more users than merely adding to existing ones.


Organizational Capability: Building a Strong Foundation for Success

The second category is organizational capability, focusing on how efficiently and effectively we operate internally. This is particularly crucial for engineering teams and product development.

3. Ability to Innovate

This value area is about how well the organization can create new features and improve existing ones. It measures whether the team is bogged down by technical debt or able to spend time on innovation.

Metrics for Ability to Innovate:

  • Technical Debt Ratio: High technical debt can severely limit innovation.

  • Innovation Rate: Percentage of time spent on new functionality versus maintenance.

  • Time Spent on Code Merges: Especially relevant for organizations with multiple branches.

📌 Real-World Insight: A few years back, I worked with a company managing 90 teams across 13 locations. With such a large setup, merging branches and getting a unified product was a significant challenge. The process consumed substantial time and effort, which hindered innovation. Reducing these complexities can boost your ability to innovate.
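Of these metrics, innovation rate is straightforward to compute once you can classify engineering time (or work items) as net-new functionality versus everything else. A minimal sketch, using hypothetical hour figures:

```python
def innovation_rate(hours_by_type: dict) -> float:
    """Share of total engineering time spent on net-new functionality
    (versus maintenance, support, and everything else)."""
    total = sum(hours_by_type.values())
    return hours_by_type.get("new", 0) / total if total else 0.0

# Hypothetical figures, e.g. pulled from work-item or time-tracking data.
hours = {"new": 120, "maintenance": 60, "support": 20}
print(f"innovation rate: {innovation_rate(hours):.0%}")  # innovation rate: 60%
```

The hard part is not the arithmetic but consistently tagging work as "new" versus "maintenance" in your tracking system.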

4. Time to Market

Time to Market is about how quickly you can deliver changes or new features into production. A fast time to market means you can adapt swiftly to feedback, fix issues quickly, and capitalize on new opportunities.

Key Metrics for Time to Market:

  • Cycle Time: The time it takes from code commit to deployment.

  • Release Frequency: How often you release new versions or updates.

  • Lead Time for Changes: How quickly a requested change reaches users.

🚀 Example: Facebook is known for its impressive 12.5-minute cycle from developer code commit to production, which includes all testing. This speed allows them to adapt quickly to user needs.
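Cycle time and release frequency fall directly out of commit and deployment timestamps. A minimal sketch with hypothetical deployment records:

```python
from datetime import datetime, timedelta

def cycle_time(commit_at: datetime, deployed_at: datetime) -> timedelta:
    """Cycle time: elapsed time from code commit to production deploy."""
    return deployed_at - commit_at

# Hypothetical records: (commit time, production deploy time).
deploys = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 11, 0)),
    (datetime(2024, 6, 4, 14, 0), datetime(2024, 6, 4, 15, 0)),
]
times = [cycle_time(commit, deploy) for commit, deploy in deploys]
average = sum(times, timedelta()) / len(times)
print(f"average cycle time: {average}")  # average cycle time: 1:30:00
print(f"release frequency: {len(deploys)} deploys over 2 days")
```

In practice these timestamps come from your version control and deployment tooling; the calculation itself is the easy part.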


Balancing Innovation and Stability: Why Both Matter

Achieving the right balance between innovation and stability is crucial for long-term success. While it’s essential to keep introducing new features, maintaining a stable product is just as important.

Strategies for Effective Balance:

  • Reduce Technical Debt: Use tools like SonarQube to identify and manage code flaws, which can help free up time for innovation.

  • Focus on Quick Wins: Prioritize smaller, impactful features that can be quickly released to maintain user engagement.

  • Adopt Hypothesis-Driven Development: Test new ideas and features with real users to validate their potential impact.
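On the technical-debt point: SonarQube reports a technical debt ratio, defined as estimated remediation time divided by estimated development cost (lines of code times a cost-per-line factor; SonarQube’s default is 30 minutes per line). A simplified sketch of that calculation, with hypothetical project figures:

```python
def technical_debt_ratio(remediation_minutes: int, lines_of_code: int,
                         minutes_per_line: int = 30) -> float:
    """Technical debt ratio = remediation cost / development cost,
    where development cost = lines of code * cost per line.
    The 30-minutes-per-line default mirrors SonarQube's factor."""
    return remediation_minutes / (lines_of_code * minutes_per_line)

# Hypothetical project: 9,000 minutes of remediation work flagged
# across a 60,000-line code base.
ratio = technical_debt_ratio(9_000, 60_000)
print(f"technical debt ratio: {ratio:.2%}")  # technical debt ratio: 0.50%
```

Tracking this ratio over time, rather than the raw issue count, makes the "leave the code a little better than you found it" policy visible as a trend.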

🛠 Personal Experience: In maintaining the Azure DevOps Migration Tools, I implemented a fully automated CI/CD pipeline. The build initially took 12 minutes, which slowed my feedback loop; shortening it allowed me to iterate faster and deliver features more efficiently.


Why Evidence-Based Management Matters

Implementing evidence-based management helps organizations:

  • Make Informed Decisions: Use data rather than assumptions to drive strategic choices.

  • Enhance Customer Satisfaction: Understand and address user needs based on real-time data.

  • Improve Team Efficiency: Identify bottlenecks in processes and address them effectively.

In an environment where both internal capability and market adaptability are vital, the four key value areas of evidence-based management provide a framework for sustainable success.


Wrapping Up: The Four Key Value Areas in Action

To summarize, the four key value areas of evidence-based management include:

  1. Current Value: Focus on existing product performance and user satisfaction.

  2. Unrealized Value: Explore untapped opportunities for growth and market expansion.

  3. Ability to Innovate: Manage technical debt and prioritize time for new development.

  4. Time to Market: Speed up delivery cycles and adapt quickly to changes.

By measuring and optimizing these areas, teams can strike a balance between innovation and stability, ultimately ensuring a robust and sustainable approach to product development.

👉 Take Action Today: Start tracking metrics in each of these areas to ensure that your team is aligned with evidence-based management principles. Embrace data-driven decision-making, and you’ll be well on your way to creating products that truly resonate with your customers.


By adopting this evidence-based approach, we can transform how we deliver value, both internally and to our customers. It’s not just about what we build—it’s about how we measure, adapt, and grow.

When I’m talking about evidence-based management, I generally talk about four key value areas, with specific metrics in each. The reason I talk about four key value areas is that they cover different aspects of our holistic system view: rather than sub-optimising at a single level, we want to ensure we have metrics in all of those key value areas.

The four key value areas in Scrum, as outlined in Scrum.org’s Evidence-Based Management Guide, are broken into two categories, with two key value areas in each. The more business-focused category is market value. If we’re trying to build a software product and get it into the market, there are two places we have to look for where our value is. One is the current value of our product—the product that already exists. If we don’t have a product yet, that area may not have any metrics yet. But if we do have a product and it’s in front of real users, then we want to think about what our current value is and how we measure it—the value we have in the system right now.

I might have telemetry. In fact, don’t just think about it—get telemetry on your product. If you’re a product manager managing a software product, you should have access to telemetry that shows which features are used, how often, by which users, and—if you’ve got bigger customers—which of your customers use those features. That’s part of your evidence-based decision-making process. But also: are your customers actually satisfied with your product? You could measure that with surveys, though they’re very lagging indicators; telemetry is almost immediate—you can have real-time or near-real-time data on what’s going on in your product. All of that is understanding our current value—the product we have in the market right now. It could also include how much money we’re making right now, or revenue per employee. There are a bunch of metrics you can use in that current value space.

Then we’ve got this idea of unrealised value: value that could be in our product but isn’t yet. We store a list of the things we’re going to build in our product backlog, so that’s part of the story. But you’re probably also looking at market analysis, competitor analysis, and industry trends to figure out how to open up new markets for your product. There’s a scenario I like to use to illustrate unrealised value.

TV makers generally prefer to invest in a brand-new show rather than add another season to an existing series. The reason is that a second season of an existing series is almost never going to have a higher audience. Your first season is your maximum audience, and over time it’s going to dip: however good the show continues to be, the audience either collapses or declines slowly. That’s why you have shows like Halo, which was just cancelled—the Halo show saw a steep decline in its second season, so they’re not doing a third. Or think of Supernatural, which ran for 15 seasons over 15 years before they finally brought it to an end. That line was declining too, but at a slower pace, so there was still enough money to be made to keep investing in the show.

If we do something brand new that the audience has never heard of, and doesn’t yet know whether it’s good or bad, then we can open up that new market—that new group of people—and bring them in to see that show. You’re more likely to win a bigger audience with a new thing than by adding to an existing thing that existing people use; the existing audience are the only ones who care, so you’re already narrowing your reach. The same is true for features in your product. Whenever you add a net-new feature, you’re opening up new markets, new opportunities, and new capabilities for your customers—capabilities that will hopefully bring in new customers, or even whole new segments of customers, that you didn’t have before.

But that takes a lot of effort, focus, data, and analysis, right? Trying stuff, experimentation, and hypothesis-driven engineering practices sit in that space: we keep trying new things to engage or re-engage users in that unrealised value area. That’s our market value at the top, and it’s everybody’s focus. Let’s be clear: we want the developers on our team to be product developers, not just jobbers doing a job—people looking at how we all work together to make this product a success. We still need experts who mostly care about their own piece, but we want at least some of this product focus everywhere. That’s where product ownership sits in the Scrum world: focusing on current value and unrealised value—the market value—and then up into the rest of the business.

Product management sits in that space too, and hopefully we have some product management skill among the developers on the team. Ideally we have product developers, a good product owner who is a product manager and understands Scrum, and other people in the organisation also looking at this—people with marketing skill sets, people with sales skill sets—all working together to funnel features, capabilities, and ideas into what we’re going to do with this product. That’s market value.

The other piece is organisational capability. This is the piece that, if you lead an engineering team or are part of an engineering organisation, you have 100% control over. There are no excuses in this space; it’s all you. The first of its two key value areas is the ability to innovate: how much focus, time, and effort do we spend innovating net-new functionality versus all the other stuff we have to do? We have to do a lot of other stuff, but are we maximising the time we can spend on new functionality and minimising the time we spend struggling with complexity? Any technical debt is going to reduce your ability to innovate. Any undone work is going to reduce your ability to innovate. Anything that is slow is going to reduce your ability to innovate, because we end up spending our time on the slow stuff rather than the valuable stuff.

The other side of that is time to market: how quickly can we go from a change we’ve made all the way to production? Those two make up organisational capability, and we need to measure them as well. The ability to innovate can be measured in lots of different ways—my background is engineering, so my list is going to be a little engineering-focused. If we’re building a product where customers take versions of it, either because we run a private cloud for them or because we build something people install, I probably want to look at the percentage of customers on the current version of our product. I probably also want to look at the time we spend merging code between branches. If you have a branching policy where you promote by environment—Dev, QA, staging, production—then you’re merging code between the versions that get deployed to those physical environments. That’s really old-school DevOps, by the way; don’t do that anymore.

If you’re in that world, you might want to measure the time you spend merging code between branches—it can be a lot. I worked with an organisation a few years ago that had 90 teams across 13 locations in nine different countries; I think they’ll know who they are from that description. They had a single product with 90-odd teams working on it—90-odd active branches—and merging all of that down into a unified product was a lot of work and effort. Getting a new version of the product out, and ensuring everything worked together, was difficult—and measurable. So: installed version index, time spent merging code, production incident count—how many incidents are you having in production? If we’re innovating but shipping poor-quality code, we’ll see more incidents in production, which is why we need both positive and negative measures. If we’re doing the innovation really well but delivering crap to production, we need to fix that; it won’t keep our customers happy.

The two biggest ones, which I think are also the easiest to collect from a data perspective: the first is innovation rate—what percentage of your time do you spend on net-new functionality versus maintenance versus support? This is the age-old CapEx versus OpEx conversation, or it can be, depending on your product and your organisation. Obviously you want to spend more time on capital expenditure because it’s taxed differently; from a financial perspective that’s a good idea. But more importantly, capital expenditure—investing in our product’s future, in new features and new capabilities—is what brings in net-new users, which hopefully translates into revenue. So we want to be looking at innovation rate; that’s a big part of the story.

The other one I like to look at is technical debt. Technical debt is really important. I generally use SonarQube to measure it—you can run it on-prem or use their cloud version; it’s free for open source, paid otherwise. SonarQube analyses code bases using recognised industry-standard metrics for technical debt. It looks for known code flaws—constructs people create in specific languages. Let’s say I write in C: if I run SonarQube against my code and it tells me I have a whole bunch of security problems, I should really go fix them, because they’re known attack vectors. There will be other vectors you don’t know about, but at least you can eliminate the ones you do. It also finds code smells—things constructed in a way that makes them harder to maintain and support. These are things we can monitor and fix. The first time you run it, it’s going to look nasty—it might find 6,000 problems. But if you have a policy of leaving any code a little better than you found it—before editing some code, look up the metrics for that part of the code base, fix any problems, make your changes, then revalidate—you’ll find it gets a little better over time. That’s just policies and procedures for the team: make things a little bit better.

Technical debt is super important, and it’s much easier to collect than you think—you just apply SonarQube. The hard part is doing something with it. There’s definitely a human tendency, when 5,000 issues get dumped on us, to get dejected and let it go. Don’t—this really matters. The ability to innovate is one of our key stories here, and the other is time to market: how quickly we get our product into production. A good industry example of fast time to market is Facebook—twelve and a half minutes from developer code commit to production, including all testing, load testing, stress testing, and the rest. That’s super quick.

Starbucks—at least when I worked for a company that engaged with them, back 13 years ago—had decided that their effective planning horizon, i.e., their time to market for changes, should be 48 hours. From implementing something they wanted through to it actually being in production: 48 hours. That was a business decision. Another example—and this one may be more tale than reality—is that I heard that after the Windows Vista quality debacle (fixed in Windows 7) and the Windows 8 usability debacle (fixed in Windows 8.1 a year later), Satya Nadella, then reasonably new as CEO of Microsoft, went down to the Windows team—around four and a half thousand software engineers cutting code every day—and told its leadership that the business had taken a decision: we want working copies of Windows in the hands of real users at least every 30 days, and rolled out to everybody in the world at least every 90 days. They said that’s impossible; you can’t do that. He said: that’s not my problem—it’s an engineering problem. This is a business decision; your job is to make it happen. Go figure it out.

That business decision is what birthed Windows 10 and its new release cadence. Now they’re less than 24 hours from code to production inside Microsoft, a week to production for the (I think) 17 million people in the Insiders program, and every three months they do the big ship to everybody in the world—the 950 million other people using Windows. That cadence demands higher quality on a daily basis. Time to market brings in things like cycle time and release frequency—how often you release. Back in 2012, you could probably count on both hands the releases of major Microsoft software products; as of about 2018, they were doing 86,000 deployments a day. That’s a huge difference, and it takes time and effort to get there. You have to pay back your technical debt, build architectures in your product that support it, and build teams, knowledge, and skills that support it. But once you’re on that continuous cycle, you can fix things much more quickly, and any change you make has a lower impact.

Instead of promoting through environments, where it’s easy to miss things, they promote through what’s usually called a ring-based deployment structure, or controlled exposure to production: you’re in production very quickly, but with a small subset of users, and then you continually increase the potential blast radius. If each ring has enough people in it, and you’re monitoring the telemetry and understanding what’s going on, you minimise the chance of a problem reaching everyone in production. Perhaps that’s something CrowdStrike should have done—I just saw a post from them saying, “Yeah, we’re going to start doing that now.” Yeah, you should have been doing that already.
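The ring-based promotion described here can be sketched as deterministic bucketing: hash each user into a stable bucket, then expose the new build to the innermost ring whose rollout percentage covers that bucket. The ring names and percentages below are illustrative, not from any particular platform:

```python
import hashlib

# Illustrative rings, innermost (smallest blast radius) first.
RINGS = ["ring0-team", "ring1-insiders", "ring2-world"]

def ring_for_user(user_id: str, cumulative_pct=(1, 10, 100)) -> str:
    """Map a user to a stable bucket in 0-99, then return the innermost
    ring whose cumulative rollout percentage covers that bucket."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    for ring, pct in zip(RINGS, cumulative_pct):
        if bucket < pct:
            return ring
    return RINGS[-1]

# The same user always lands in the same ring, so exposure only grows
# as the percentages are raised while telemetry from inner rings is watched.
print(ring_for_user("user-42"))
```

Widening a ring is then just raising its percentage; no user ever moves backwards to a less-tested build.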

So that’s time to market: lead time, cycle time, how quickly you get things in front of customers, and time to change direction—if you find out you’re doing the wrong thing, how quickly can you change course? There’s also time to build and time to self-test—how quickly can your developers test things locally, and how quickly do they know they’ve caused a failure? I think the Azure DevOps team took about six weeks to get a build out the door back in 2010; now they’re doing continuous delivery, shipping every day.

I think the Azure DevOps team’s automated builds took 48 hours, because they had mostly end-to-end system tests. Don’t do that anymore—that’s so 15 years ago. End-to-end system tests are not a way to validate that your product works; they say, “I can’t be bothered to build quality into my product, so I’ll try to test quality in afterwards.” Use unit tests—behaviour-driven if you like, whatever you need, but fast-running, super-quick unit tests. It took the Azure DevOps team about four years to convert roughly 30,000 system tests into around 90,000 unit tests. The 90,000 unit tests run in three and a half minutes; the 30,000 system tests took two to three days. That’s a huge impact on your engineering team’s ability. Engineers are either waiting for those tests to finish, or they’ve moved on to other things and are now suffering context switching and cognitive load.

From an organisational capability perspective, all of this is 100% within the control of engineering—of whoever builds the product. Get that right: if you can deliver changes quickly and effectively, the business gains confidence that it can ask for something and get the change really quickly. So what should we be asking for? Perhaps we should start forming hypotheses, trying things, testing them in production—and then perhaps ditching a feature because it isn’t providing the value we thought it would.

This idea of evidence-based management has the four key value areas: current value—the product that exists right now, where you collect telemetry and understand how users are using it; unrealised value—the things your product doesn’t do yet, the new customers and new capabilities, which together with current value make up your market value; and underneath, your organisational capability, with the ability to innovate—how much time you spend adding new functionality versus supporting and maintaining existing functionality—and time to market—how quickly a change you make gets into production. Hopefully you close that learning loop all the way back to your product and have a full time-to-learn cycle. Those are the four key value areas of evidence-based management.
