Monthly Archives: November 2012

The technical debt involves several creditors

The Technical Debt is a powerful metaphor. It helps explain the stakes of code structural quality to anyone in two minutes. Although it has been around for quite a while, it has only recently become very popular, and pundits are still debating its definition.


For the sake of clarity, the definition I’ll be using is the one from Dr Bill Curtis:
«Technical Debt represents the cost of fixing structural quality problems in production code that the organisation knows must be eliminated to control development costs or avoid operational problems.»


Before this measure was introduced, developers had no means of explaining the consequences of their decisions to their management. The sighs, the rolled eyes, the shrugged shoulders and warnings such as «the whole damned thing needs to be rewritten from scratch» may often have been exaggerated, and they were not taken seriously.


Until they were true and it was too late.


So the technical debt is a huge step forward: it adds a new dimension to IT governance. But for this measure to be effective, we should be able to act upon it and, for instance, try to fix the big legacy codebase that is crippling our organisation's agility.


But it’s not that easy.


Because at the level of maturity where there are many structural issues in the code, tests are usually long gone. And truth be told, requirements and test data are also often missing.


It’s the result of the spiral of debt:  the worse the quality of an application, the more difficult it is to improve it. So the path of least resistance is to let quality degrade with each new release. The process stops when the application has reached the following steady state: a big black box with no test assets, no requirements and a generous helping of structural code issues.
Technical Debt
Thus the technical debt has several creditors, and there is an order in which they need to be paid off: requirements first, then test assets, then code quality.


Sounds like rework?


That’s because it is.


Is the critical path becoming an endangered species?

Before scheduling software packages were available, engineers would painstakingly perform the forward pass and backward pass algorithms manually with the occasional help of a trusty slide rule to identify the critical path of their projects.

At the time, schedules were true to their original purpose: simple graph models that helped understand how changes would impact deadlines and which tasks should be looked at more closely in order to reduce the overall duration of a project.
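To make the forward and backward passes concrete, here is a minimal sketch of both on a toy task graph. The task names and durations are invented for illustration; a real implementation would also need a topological sort (here the tasks are simply listed with predecessors first).

```python
# Hypothetical five-task project: name -> (duration, predecessors).
# Listed in an order where every predecessor appears before its successors.
tasks = {
    "spec":   (3, []),
    "design": (5, ["spec"]),
    "build":  (7, ["design"]),
    "docs":   (2, ["spec"]),
    "test":   (4, ["build", "docs"]),
}

# Forward pass: earliest start / earliest finish for each task.
early = {}
for name, (dur, preds) in tasks.items():
    es = max((early[p][1] for p in preds), default=0)
    early[name] = (es, es + dur)

project_end = max(ef for _, ef in early.values())

# Backward pass: latest start / latest finish, walking the graph in reverse.
late = {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    successors = [s for s, (_, ps) in tasks.items() if name in ps]
    lf = min((late[s][0] for s in successors), default=project_end)
    late[name] = (lf - dur, lf)

# Tasks with zero slack (latest start == earliest start) form the critical path.
critical = [n for n in tasks if late[n][0] == early[n][0]]
print(project_end)  # 19
print(critical)     # ['spec', 'design', 'build', 'test']
```

Delaying «docs» by up to its 10 days of slack leaves the end date untouched; delaying any critical task pushes the whole project out. That simulation ability is exactly what is lost when the dependencies are stripped out of the schedule.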

Fast forward to the present: many scheduling software packages have grown into ALM tools, and students preparing for the PMP certification discover during their training that there is actually a graph underneath the Gantt charts they have used daily for years.

The last nail in the coffin of the PERT graph is a consequence of the feature race between ALM tools. Since the selection of these tools is a box-checking exercise, features are added on top of one another, and time tracking is one of the first.

It’s an easy sell; it’s one of those ideas that look great on the surface but whose flaws are safely buried in technical layers. Since all of the organisation’s projects are in the same database, why not take this opportunity to track time, and therefore costs?

So the package is bought, users are trained and trouble begins.

With their original purpose lost from sight and a new mission to track time, schedules quickly morph into monster timetables: tasks unrelated to the production of a project deliverable are added, and dependencies are removed to ensure stability and easy timesheet entry. Until the PERT graph is no more, the critical path is lost, and tasks have become budget lines.

Managing costs and schedules with the same tool is a complex endeavour, and for many organisations it is too complex. The cost side often wins because it has more visibility and is easier to deal with, and thus the ability to simulate the impact of scenarios on a project’s end date is sacrificed.