Did you ever play Jenga? It’s a challenging game of dexterity, patience, and brinkmanship — you start with a tower of blocks, 18 stories high, three blocks per story, then each player takes turns at removing a block from a lower level and placing it on top, until the tower becomes so unstable it topples over.
As a planned release date draws close, many teams switch from feature development to stabilisation (sometimes called hardening) to ensure the product is fit for market.
In a sense, we can liken stabilisation to playing Jenga in reverse.
We start out with a product that seems to stand up, but may be a little brittle or have some areas we’ve not yet touched; then we work to uncover and resolve defects progressively, and to refactor the product — like taking blocks from the top of the tower and placing them in holes lower down. In theory, we end our stabilisation with a product that is operationally resilient and also a firmer platform for future growth. Then we’re ready to release it.
Change the language a little, and the way this works can sound pretty close to the testing phase on a traditional waterfall project; so what is going on here? Why do we need this? If we have a robust definition of done, do we really need a whole separate phase of testing?
Some commentators say that any team going through a distinct stabilisation phase is not being agile, just using the label to sound good. I wouldn’t put it that strongly, although I think it’s fair to say that such teams are probably not operating in a wholly agile environment, and that non-agile factors elsewhere in their organisation are driving the need for a separate phase.
That doesn’t mean we simply have to put up with it, though. The act of applying agile practices is often progressive: as it smooths out and unblocks parts of the new product development process, it starts to highlight inefficiencies and dysfunctions further down the value stream. This is positive and to be welcomed, as it is only by uncovering these problems, and causing some discomfort, that we can hope to influence change.
Being pragmatic, though, we need to recognise that by allowing the architecture to emerge, we sometimes make decisions early in development that must be revised later.
We also find that features might be developed in an expedient manner to meet a deadline, resulting in a less flexible foundation for later development.
These are examples of what we mean when we describe our products as having ‘technical debt’, and the sooner we pay down that debt the better. Setting aside time to reconsider and refactor those features can result in a more robust product for future development.
Do you have a distinct phase after development and before release, whether you call it stabilisation, hardening, pre-production, operational acceptance testing, or something else? How does this work for you? Can you see ways to evolve your development process towards something closer to a continuous integration and release environment? I’d be interested to hear your thoughts.