
Professional Scrum teams build software that works

I am always surprised at the number of teams that release undone work to production. I understand that you may need a few Sprints, or many if you inherited something nasty, to pay back that debt, but if it takes more than that, you are not a Professional Scrum Team. The sheer amount of software that I have that is buggy, slow, or just not finished makes me think that there are few Professional Scrum Teams out there!

TL;DR

Every organisation has the right to hold its Development Teams accountable for the quality, but never the quantity, of the software that they build. Every Development Team should pursue engineering excellence through DevOps practices, automation, and rigorous attention to detail for every release. Working software builds trust with your users and promotes your brand; faulty software encourages distrust and hurts your reputation. Defective software and technical debt are the causes of the death grip that traditional Taylorism-based management techniques still have on organisations.

Ultimately, if your organisation will not let you build software any way you please, it's because of the shit that you have been trying to get away with delivering in the past. You have work to do to rebuild that trust.

Treat your team to a Professional Scrum Developer class to get them up to speed.

Scrum teams build quality software that works

Working software is software that is free from fault or defect. The Development Team is primarily accountable for quality and for delivering working software. While they are also responsible for value delivery, the accountability for that lies with the Product Owner. That means that if there is a choice between delivering more value that lacks quality or less value that is of higher quality, a Development Team should always choose quality.

Since “rules are for the guidance of wise people, and the obedience of fools”, I am going to caveat that statement for those that like to latch onto absolutes. Any software that you build is an organisational asset, and since all assets contribute to the value of your company, that software must exist on a balance sheet somewhere. If you as the Development Team decide to cut quality to make a delivery, do you immediately speak to the CFO so that they can accurately reflect that loss of value on the company's balance sheet? If you don't, then, knowingly or not, there is a danger that your organisation is committing fraud by inaccurately reflecting the value of its software! Ultimately, the decision to cut quality should only be taken with the full consent and understanding of your executive management team.

The decision to cut quality is not one that the Development Team, the Product Owner, or IT management can take; it is reserved for executive management.

Quality software is not about expectations!

Working software is software that is free from fault or defect, but it does not necessarily meet the Product Owner's or the stakeholders' expectations.

It is just not possible for everyone's expectations to be understood, let alone met, and thus it is unrealistic to expect the Development Team to deliver on them. At the end of every Sprint we have a Sprint Review, where we invite the stakeholders and the Scrum Team to pause and reflect on the Product Backlog in light of what was delivered. There you can explore the difference between expectation and delivery, and then update the Product Backlog to reflect that difference. The Scrum Team should continuously investigate the gap between what they delivered and what stakeholders expected so that they can close it as much as possible; so while they are responsible for meeting expectations, they can't be held accountable for it.

The Development Team consists of professionals who do the work of delivering a potentially releasable Increment of “Done” product at the end of each Sprint. A “Done” increment is required at the Sprint Review. Only members of the Development Team create the Increment.
-Scrum Guide http://www.scrumguides.org/scrum-guide.html#team-dev

The Scrum Guide very deliberately does not tell you how to build working software. It only states that its delivery is the accountability, and responsibility, of the Development Team. If you don't have working software, then you are not yet doing Scrum, although you might be working towards it.

So, to define working software we have to look at what working software is not:

  1. Known errors or exceptions – if you find a bug, then fix it. If it's too big, then raise it with the Product Owner and get it on the backlog. Too much time is spent managing bugs rather than fixing them. Just fix them.
  2. Manual tests – if you have manual tests, then you are already working towards software that does not work, or that you struggle to deliver. It is unsustainable to have any manual testing, so get automating.
  3. Manual pipelines – in 2017 no-one should be building production code on their local computer, never mind shipping it to production from there. Even if all your build does is package up some files and push them to an FTP location, automate your build process. If you have a person that has to do anything more than approving the move between code and production, then you should look to automate that step away. Humans make mistakes, and humans miss stuff. With an automated process, even if it is not continuous delivery, you get consistency, and you can increase its sophistication over time. Make sure that you automate both your build and release pipelines.
  4. No source control – yes, I still meet organisations with no source control, or no control over it. I wrote Getting started with modern source control and DevOps for just that reason. If you don't have source control for whatever you are developing, then you need to get help, and quickly. The business risks that you are exposed to by not having it are just too big.
  5. Lack of feature flags – the fallacy of the rejected backlog item means that your engineering team is going to have to figure out how to release at the end of every Sprint (or every commit) regardless of the state of the PBIs being worked on. Hide features that are not complete behind feature flags so that they are not visible to end users, but your code can still be shipped.
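A feature flag can start out as nothing more than a guarded lookup. The sketch below is a minimal illustration in Python; the flag names and the in-memory flag store are hypothetical, and a real product would typically read flags from configuration or a flag service rather than a hard-coded dictionary.

```python
# Minimal feature-flag sketch (hypothetical flags; a real system would load
# these from configuration or a flag service, not a hard-coded dict).

FLAGS = {
    "new_checkout": False,  # unfinished work ships "dark" behind this flag
    "search_v2": True,      # completed work, switched on
}


def is_enabled(flag: str) -> bool:
    """Return whether a feature is switched on; unknown flags default to off."""
    return FLAGS.get(flag, False)


def render_navigation() -> list:
    """Build the navigation, showing only features whose flags are on."""
    items = ["Home", "Products"]
    if is_enabled("new_checkout"):
        items.append("Checkout (beta)")
    return items
```

With this in place the half-finished checkout code can be merged and deployed at the end of the Sprint; end users simply never see it until the flag is flipped.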

The other name for the things that make it difficult to get to working software is Technical Debt. All of the things listed above are forms of Technical Debt, but the biggest form is just poor-quality code: code that is not tested, or that would not survive even a cursory code review by another software engineer.

What happens if the Development Team is not accountable for Quality?

If the Development Team is not held accountable for quality, why do you believe that you have it? Quality is one of those hidden measures in software that may or may not be there, and you would not know unless you were using the product in anger. If you put delivery pressure on a Development Team, they will consistently and increasingly cut quality to meet whatever ridiculous deadline you give them.

Use Scrum to Inspect and Adapt empirically

Every organisation needs to focus on delivering quality working software that is of use to their customers. The first part is owned by the Development Team, the second by the Product Owner. This Professional Scrum Team then works together over many iterations, experimenting and continuously improving, to deliver the best possible outcome under the circumstances. So instead of being an amateur team, be a team of Professionals that deliver working software, because that is what your organisation and your customers deserve. If you are having a hard time delivering, then discuss your options anytime, but especially at your Sprint Retrospective, and figure out what actionable improvement you can make that will help you pay back some of your technical debt and move forward. One such step could be making sure that your Development Team at least understands this with a Professional Scrum Developer course.

Use empiricism to Inspect and Adapt with Scrum.

Who is naked Agility Limited – Martin Hinshelwood

Martin Hinshelwood is the Founder/CEO of naked Agility Limited and has been their Principal Consultant and Trainer on DevOps & Agility for four years. Martin is a Professional Scrum Trainer, Microsoft MVP: Visual Studio and Development Technologies, and has been Consulting, Coaching, and Training in DevOps & Agility with Visual Studio, Azure, Team Services, and Scrum since 2010 and has been delivering software since 2000.
Martin is available for private consulting and training worldwide and has many public classes across the globe.

  • David V. Corbin

    Key to this topic is really determining what “free from fault or defect” means for a given situation. Let's look at performance – something that rarely has a set of specifications defined.

    What about a “slow web page”? What is meant by “slow”? (I am sure someone would not be happy if a shopping cart displayed “please wait” for 24 hours!)

    What about a low-level function in a real-time application? Is an implementation that takes 1µs “defective” when there is an implementation that can take 800ns? [This is from a real case, where that difference caused a failure in the field!]

    Establishing these boundaries and having them communicated is a key component. Perfection can never be achieved – appropriate quality must be the practical goal.

    • Ohhh, how about a post on the Definition of Done… Yes, deciding what is in your Definition of Done is hard, and it should always be increasing… That's where speed exists…

      • David V. Corbin

        “Definition of Done” is the top level. Typically I see things like “test pass”, etc. All of these are good things.

        From a pragmatic (i.e. empirical, experience-based) perspective I find that having a distinct “Definition of Quality”, which can be highly technical, as a distinct artifact (with different ones appropriate under different conditions) has value.

        When this implementation is used, the DoD contains “Software has met the requirements of the (appropriate) Definition of Quality.”

        In effect, just a refactoring from a single artifact.

        • I disagree only in that the DoD is your Definition of Quality and should mirror Releasable.

          • David V. Corbin

            I have some further information at: http://www.dynconcepts.com/quality-software-know-want/

            I agree about the “mirror” part, but have repeatedly found that if the top level DoD artifact gets too deep then it is less consumable by the necessary people. Additionally, by encapsulating and referencing it is possible (and I believe a good thing) to have a consistent (top level) DoD with the details as appropriate.

            I can just imagine something along the lines of “For methods with a cyclomatic complexity exceeding X, the number of statements within a partition should not exceed Y” [probably with a chart of X, Y values and a third column with impact/risk analysis] being *directly* in the DoD.

          • What's wrong with:
            – Must meet acceptance criteria
            – Must meet organisational Coding Guidelines [link]
            – Must meet organisational UI guidelines [link]
            – All acceptance Tests pass

          • David V. Corbin

            Absolutely nothing. In fact the “[link]” approach is exactly what I was trying to communicate – as opposed to having all of the details directly in one document.
