Did you ever play Jenga? It’s a challenging game of dexterity, patience, and brinkmanship — you start with a tower of blocks, 18 stories high, three blocks per story, then each player takes turns at removing a block from a lower level and placing it on top, until the tower becomes so unstable it topples over.
To ensure that a product is fit for market, many teams switch from feature development to stabilisation (sometimes called hardening) as they draw close to a planned release date.
In a sense, we can liken stabilisation to playing Jenga in reverse.
We start out with a product that seems to stand up, but may be a little brittle or have some areas we’ve not yet touched; then we work to uncover and resolve defects progressively, and to refactor the product — like taking blocks from the top of the tower and placing them in holes lower down. In theory, we end our stabilisation with a product that is operationally resilient and also a firmer platform for future growth. Then we’re ready to release it.
Change the language a little, and this can sound pretty close to the testing phase of a traditional waterfall project. So what is going on here? Why do we need this? If we have a robust definition of done, do we really need a whole separate phase of testing?
Is stabilisation acceptable on agile projects?
Some commentators say that any team going through a distinct stabilisation phase is not being agile, just using the label to sound good. I wouldn’t put it that strongly, although I think it’s fair to say that such a team is probably not operating in a wholly agile environment. Some examples of non-agile factors include:
- Organisations that develop financial software often have regulations with which they need to comply, and this often mandates a process with a distinct phase of verification.
- Organisations with a fragile infrastructure often implement an operational acceptance testing phase to be doubly or triply sure that anything released into that environment will not break it.
- Organisations that do not have access to a realistic environment during development will need a distinct phase where the product is operated in an environment as similar as possible to production.
There are plenty more examples besides.
This doesn’t mean we simply have to put up with these constraints, though. Applying agile practices is often progressive: as it smooths out and unblocks parts of the new product development process, it starts to highlight inefficiencies and dysfunctions further down the value stream. This is positive and to be welcomed, as it is only by uncovering these issues and causing some discomfort that we can hope to influence change.
Can stabilisation ever be a good thing?
Being pragmatic, though, we need to recognise that by allowing the architecture to emerge through development, we sometimes make early decisions that must be revised later.
We also find that features might be developed expediently to meet a deadline, resulting in a less flexible foundation for later development.
These are examples of what we mean when we describe our products as carrying ‘technical debt’, and the sooner we pay down that debt the better. Setting aside time to reconsider and refactor those features can result in a more robust product for future development.
Edit: I would suggest it is still better to allow capacity for refactoring as part of normal development, rather than shoehorning it in at the end.
Do you have stabilisation?
Do you have a distinct phase after development and before release, whether you call it stabilisation, hardening, pre-production, operational acceptance testing, or something else? How does this work for you? Can you see ways to evolve your development process towards something closer to a continuous integration and release environment? I’d be interested to hear your thoughts.