Sizing defects: a measure of quality, failure, or distraction

In our role of delivering business value to our organisations and customers, we invest the time of highly skilled designers, developers, analysts, architects, and testers to produce quality software. Occasionally, however, something falls through the net and we end up delivering software with flaws that produce unexpected results or make it behave in unintended ways.

We refer to such flaws, in a general way, as bugs or defects. If we don’t fix them, they undermine the value of our software, annoy our customers, potentially lose us money, and in the worst case endanger lives. A small number of minor defects is often considered tolerable, but too many of them amounts to a significant problem. Software with significant defects (whether in severity or in quantity) is said to carry ‘technical debt’, a term coined to highlight how defects detract from the value the software should be providing.

Consequently, teams striving for high quality and few defects will focus on tracking the number of defects and looking for ways to minimise, mitigate, or avoid them.

Triaging defects

Part of the normal process of handling defects is called triaging, much as hospital staff do in the Accident and Emergency department: who has a life-threatening condition; who has an acute problem but should be OK until there is a free slot to see them; who has a chronic, ongoing problem and just needs some TLC or painkillers; and who is a time-waster. When it comes to defects, we tend to measure this with a level of severity. Sev-1 defects effectively block the system from working; until they are resolved we cannot ship the software. Sev-2 defects could be blocking some people some of the time. Sev-3 defects may be annoying but have work-arounds; and so on. This helps us focus on the defects that are the most urgent and important.
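The severity scheme above can be sketched in a few lines of code. This is a minimal illustration, assuming a hypothetical `Defect` record and the three severity levels named in the text; real trackers have their own fields and scales.

```python
# Minimal sketch of severity-based triage. The Severity levels follow the
# Sev-1/Sev-2/Sev-3 scheme described above; the Defect shape is hypothetical.
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    SEV1 = 1  # blocks the system working; cannot ship until resolved
    SEV2 = 2  # blocking some people some of the time
    SEV3 = 3  # annoying, but a work-around exists

@dataclass
class Defect:
    key: str
    severity: Severity

def triage(defects):
    """Order defects so the most urgent (lowest severity number) come first."""
    return sorted(defects, key=lambda d: d.severity)

backlog = [Defect("BUG-7", Severity.SEV3),
           Defect("BUG-2", Severity.SEV1),
           Defect("BUG-5", Severity.SEV2)]
print([d.key for d in triage(backlog)])  # most urgent defect first
```

Sorting by severity is only the starting point; real triage also weighs how many users are affected and whether a work-around exists.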

However, whenever team members choose to work on defects, they are not working on new features and new value. While this can be frustrating for product owners, it is still an investment, because we are making a decision to pay down technical debt. When we measure our team’s productivity by their velocity (the sum of story points delivered in one sprint), this can hide how much work they have done on improving the system.

Should we size defects?

On the one hand, we could size defects in story points too; that way we can better assess whether we gain more value from removing a defect than from developing a new feature. When we report velocity for the sprint, we report the points spent on removing defects separately from the points spent on developing new features; however, it all counts.

On the other hand, we could report only the points from new features; a team spending a lot of time resolving defects would then show a lower velocity than normal. That lower velocity should act as a flashing beacon, causing stakeholders to ask why and thereby prompting a better approach to quality.
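The two reporting approaches can be contrasted in a short sketch. The work-item shape here (a points value plus a "feature" or "defect" label) is illustrative, not taken from any real tool.

```python
# Hedged sketch of the two velocity-reporting approaches: count all sized
# work (with the split visible), or count feature points only so that a
# defect-heavy sprint shows a dip in velocity.
def sprint_velocity(items, count_defects=True):
    """Sum story points for a sprint.

    items: iterable of (points, kind) tuples, kind is "feature" or "defect".
    count_defects=True  -> approach one: all work counts toward velocity.
    count_defects=False -> approach two: feature points only.
    """
    feature_pts = sum(p for p, kind in items if kind == "feature")
    defect_pts = sum(p for p, kind in items if kind == "defect")
    total = feature_pts + defect_pts if count_defects else feature_pts
    return {"features": feature_pts, "defects": defect_pts, "velocity": total}

sprint = [(5, "feature"), (3, "feature"), (2, "defect"), (1, "defect")]
print(sprint_velocity(sprint))                       # velocity 11, split shown
print(sprint_velocity(sprint, count_defects=False))  # velocity 8
```

Either way, keeping the feature/defect split in the report gives stakeholders the transparency both approaches are aiming for.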

I can see the purpose and sense in both approaches, and have happily worked in both environments; however, my preference is to count the work, with one proviso: any bugs raised against a feature currently being developed should be considered as tasks under that feature, since this is a normal part of agile teamwork. We should only track, size, and prioritise defects as work items once they have escaped a sprint.

Sizing defects like this is analogous to the way I would count work spent on team or process improvements agreed during a sprint retrospective. The product owner needs the transparency of knowing what the team is working on, and the ability to prioritise the different types of work item the team could take on.

Of course, sizing defects is notoriously difficult; it is not until a developer has fully investigated a defect that they know how complex it will be, at which point it might take only ten minutes to resolve. However, this is much like our very early understanding of feature development too. We size our epics and first-cut stories based on our experience of similar features, and that is often enough to prioritise and forecast work into a sprint.

I will write soon about processes for managing defects on agile projects; for now, though, I am really interested in how you handle defects in your agile environment. Do you just work on them as you can; do you have a formal Kanban pull system; or do you size them alongside other work?
