
Technical debt: paying back your creditors

 

 

Would you still go for a swim if you saw that sign on the beach?

Probably not.

Most people would agree that a swim in the sea is not worth dying for.

 

Now then, what drives people to cut corners in software engineering?

I think the factors weighed when taking these decisions are diametrically opposed to those of swimming in shark-infested waters:

  • The immediate gain looks substantial
  • The consequences are far off and uncertain.

The Technical Debt metaphor mitigates the second factor but it might not be enough to deter compulsive corner cutters.

As I explained in a previous post, technical debt involves several creditors, because what drives people to cut corners on software quality has usually already made at least two more victims along the way.

Let’s study the first steps of paying back the technical debt of a legacy application that has become a big black box. We’ll assume that the strategy to fix the application has been agreed upon, that a static analysis tool has been procured, installed and configured, and that all the violations have been identified, classified and prioritised.

Of course this project is mission critical otherwise the management would not have bothered. So in order to start fixing the code of this big black box with the level of confidence required, you need to build a test harness. And to build the test harness, you need to discover what it is supposed to do, i.e. the requirements.

In other words you have to pay back the first creditors first.

1) Kaizen time

That does not mean you must restore dusty requirements documentation and manual test scripts like precious works of art. 

 

Rather, you want to take advantage of a lesson and an opportunity.

The lesson is staring right back at you: the cost of maintaining those assets was too high for the corner cutters in your organisation.

The opportunity comes from the time that has passed since those assets were created. Each year brings new ideas, methods and tools, a growing number of which are free.

It’s time to take a Lean look at those assets. For your work to be durable you don’t want them to stick out like big fat corners screaming «Cut me, please!». Some of the forms these assets took in the past, such as large documents full of UML diagrams and manually written executable test scripts, were simply too high maintenance and should quietly go the way of the dodo.

Replace documents with structured tools that store your requirements, and use something simple like a keyword-driven test automation framework to generate your tests. Shop around for the latest tools and methods and find the combination that works for your organisation.

2) Discover the requirements 

For this you need the skills of an experienced tester and of a business analyst. Whether he likes it or not, the tester has developed a knack for dealing with black boxes and re-engineering requirements, so he should be able to figure out what the black box does. With the business analyst’s input as to why the black box does what it does, together they should be able to come up with a set of requirements. The fact that they are two different individuals helps ensure that the requirements are well formulated and can be understood by third parties. Furthermore, the business analyst helps prioritise the requirements. Store the requirements in the tool you previously selected.
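By way of illustration, here is a minimal sketch, in Python and assuming no particular requirements tool, of the kind of structured record the pair might capture. The identifiers and the example requirement are invented:

from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    MUST = 1
    SHOULD = 2
    COULD = 3


@dataclass
class Requirement:
    """One reverse-engineered requirement for the black box."""
    req_id: str         # stable identifier, e.g. "REQ-001"
    behaviour: str      # what the black box does (the tester's finding)
    rationale: str      # why it does it (the business analyst's input)
    priority: Priority  # drives which tests become the Most Important Tests


# Example entry captured during a discovery session (hypothetical)
requirements = [
    Requirement(
        req_id="REQ-001",
        behaviour="Reject orders whose total exceeds the customer's credit limit",
        rationale="Credit exposure must stay within the limit agreed with finance",
        priority=Priority.MUST,
    ),
]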

3) Build the test harness

Next, the Most Important Tests (MITs) can be derived from the prioritised requirements. The tester can write those tests using the keyword driven framework. Even if those tests will never be automated, it is still a good idea to use one such framework for consistency purposes. But really, if at all possible, by all means use the framework to generate the executable test scripts. Remember they should only be the most important ones. 
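To make the keyword-driven idea concrete, here is a minimal sketch in Python. It is not any particular framework; the keywords, the example test and the calls to the system under test are all invented for illustration:

# Minimal sketch of the keyword-driven idea: a test case is data
# (a sequence of keywords and arguments) and a small keyword library
# maps each keyword to an action against the system under test.

def create_order(customer, amount):
    # In a real harness this would drive the black box (API, UI, batch job...)
    print(f"Creating an order of {amount} for {customer}")

def order_is_rejected(customer):
    # In a real harness this would assert on the black box's actual response
    print(f"Checking that the order for {customer} was rejected")

KEYWORDS = {
    "Create Order": create_order,
    "Order Is Rejected": order_is_rejected,
}

# One of the Most Important Tests, derived from the credit-limit requirement
mit_credit_limit = [
    ("Create Order", ("ACME", "1000000")),
    ("Order Is Rejected", ("ACME",)),
]

def run(test_case):
    for keyword, args in test_case:
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    run(mit_credit_limit)

Even the tests you keep manual can be expressed as keyword tables like this one, so the day you do automate them only the keyword library remains to be written.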

The last remaining obstacle between you and acceptance-test-driven refactoring bliss is test data. And this is often where test automation endeavours grind to a halt. Fortunately, the recent frenzy around data has spawned a vibrant market for data handling tools, and some of them have comprehensive data generation features. Talend is one of them. Personally, I have recently used Databene Benerator and found it both reliable and easy to learn.

Mastering your test data makes a huge difference: it is the key that unlocks test automation. So it’s worth investing a little time in the tool you chose in order to achieve this objective.
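As a simple illustration of what mastering your test data can look like, here is a sketch that generates a reproducible customer data set using only Python’s standard library. It is not Talend or Benerator, just the same idea in miniature, and the field names and values are invented:

import csv
import random

random.seed(42)  # fixed seed: the harness sees the same customers on every run

NAMES = ["Alice", "Bob", "Chloe", "David"]
CREDIT_LIMITS = [1_000, 5_000, 10_000]

def generate_customers(count):
    """Yield synthetic customer records for the test harness."""
    for i in range(count):
        yield {
            "customer_id": f"CUST-{i:04d}",
            "name": random.choice(NAMES),
            "credit_limit": random.choice(CREDIT_LIMITS),
        }

with open("customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["customer_id", "name", "credit_limit"])
    writer.writeheader()
    writer.writerows(generate_customers(100))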

Et voilà, now that you have paid the first two creditors you should be in a good position to pay the last one. Not only that, you have built a solid foundation to keep them from coming back.

This article has also been posted on the www.ontechnicaldebt.com site: http://www.ontechnicaldebt.com/blog/technical-debt-paying-back-your-creditors/.

 

 

A specific fishbone diagram for software problems

In 1968 Kaoru Ishikawa created a causal diagram that categorised the different causes of a given problem. The fishbone diagram, as it is also referred to, was particularly well suited for the manufacturing industry where it was successfully used to prevent potential quality defects.

Ishikawa Diagram

One way to look at the diagram is to see it as a recipe: use all the best ingredients and you will be rewarded with a delicious dish; degrade the quality of one or several ingredients and you get a less savoury result.

For instance here is the recipe for bad coffee.

Bad Coffee

It is very tempting to try and use this diagram in the context of software engineering.

Don’t we all have our own secret recipe for bad software?

Unfortunately, this diagram is not as well suited for software engineering as it was for manufacturing. I have therefore come up with a new version of this diagram for software problems.

Using the recipe metaphor, what are the ingredients required to build great software?

For me it’s clarity, time, motivation, skills and tools.

Clarity (I don’t understand it)

To write good software, having clear requirements is clearly required!

Clarity should be achieved through good requirements management practices such as the ones described in my previous article.

Time (I don’t have the time to do it)

Estimations are the output of planning processes. If not enough time has been allocated for the task then one might be tempted to cut some corners and put the quality of the result at risk.

Motivation (I don’t want to do it)

If all the other ingredients are gathered, then the developer can technically do what is expected of him. But will he do it? Will the resulting code be stellar or botched? That depends on his motivation. It’s the project manager’s job to make things happen.

Skills (I don’t know how to do it)

This one covers both expertise and method because one can compensate for the other. An experienced programmer can debug a program in a language he doesn’t know because what he lacks in expertise he can make up for with method. Granted, he will take more time to complete the task than an expert.

Tools (I don’t have the tools to do it)

Tools also have an impact on software quality. Building software on top of a solid framework reduces the amount of code to write and therefore the likelihood of introducing mistakes. Static analysis tools can catch potential issues as code is written and further improve the quality of the code. Lastly, the availability of test assets increases the confidence of the team to improve its code while keeping the alignment with the requirements.

Software Problem Diagram

This diagram can be used for risk identification and can thus guide the project manager’s actions towards better software projects.

The technical debt involves several creditors

Technical Debt is a powerful metaphor. It helps explain the stakes of code structural quality to anyone in two minutes. Although it has been around for quite a while, it has only recently become very popular, and pundits are still debating its definition.


For the sake of clarity, the definition I’ll be using is the one from Dr Bill Curtis:
«Technical Debt represents the cost of fixing structural quality problems in production code that the organisation knows must be eliminated to control development costs or avoid operational problems.»


Before this measure was introduced, developers had no means of explaining to their management the consequences of their decisions; the sighs, the rolling of eyes, the shrugging of shoulders and warnings such as «the whole damned thing needs to be rewritten from scratch» may often have been exaggerated and were not taken seriously.


Until they were true and it was too late.


So the technical debt is a huge step forward, adding a new dimension to IT governance. But for this measure to be effective, we should be able to act upon it and, for instance, try to fix that big legacy codebase that is crippling our organisation’s agility.


But it’s not that easy.


Because at the level of maturity where there are many structural issues in the code, tests are usually long gone. And truth be told, requirements and test data are also often missing.


It’s the result of the spiral of debt:  the worse the quality of an application, the more difficult it is to improve it. So the path of least resistance is to let quality degrade with each new release. The process stops when the application has reached the following steady state: a big black box with no test assets, no requirements and a generous helping of structural code issues.
Technical Debt
Thus there are several creditors for the technical debt, and there is an order in which they need to be paid off: requirements first, then test assets, then code quality.


Sounds like rework?


That’s because it is.


Why the world needs better software projects now

In 2006, Gartner introduced the concept of ‘Dead Money’ to characterise the subset of IT budgets devoted to keeping the lights on as opposed to contributing to business growth or enhancing competitive advantage. At the time, the average ‘Dead Money’ was estimated at 80% of IT budgets: eight dollars out of ten.

I haven’t come across a set expression for the remaining two dollars so I’ll stick my neck out and offer this one: ‘Pocket Money’.

Thus we can write the first law of IT budgets:

IT budget = Dead Money + Pocket Money

This equation would normally spur two reactions:

  • let’s reduce this ‘Dead Money’ so we can have more ‘Pocket Money’
  • and let’s make the most of our existing ‘Pocket Money’.

Unfortunately several studies have shown that ‘Pocket Money’ is shamefully wasted.


When you are overweight, it’s common knowledge that you have to do two things to lose weight: change your eating habits and exercise. One without the other will not give you the expected results.

Similarly, in order to efficiently fight back ‘Dead Money’, organisations need not only to take action against the causes of their over-inflated ‘keeping the lights on’ budget but also to change the way they run new software projects, because inefficient projects generally lead to high-maintenance software. In other words, just as greasy hamburgers eventually turn into fat, wasted ‘Pocket Money’ is ‘Dead Money’ in the making.

This looping mechanism is what drives ‘Dead Money’ ever higher. The phenomenon is nothing new, but what is unprecedented is the level it has reached in a context where IT budgets are under heavy pressure due to the global crisis and the ensuing new regulations.

That is why the world needs better software projects now.