
Technical Debt and Software Entropy
By: Kiko Basilio

We seem to be getting a lot of traction on these topics lately. The web has plenty of diverse info on technical debt, but a quick Google search seems to credit Ward Cunningham with coining the metaphor in his 1992 experience report:
Shipping first time code is like going into debt. A little debt speeds development
so long as it is paid back promptly with a rewrite... The danger occurs when the
debt is not repaid. Every minute spent on not-quite-right code counts as interest
on that debt. Entire engineering organizations can be brought to a stand-still
under the debt load of an unconsolidated implementation, object-oriented or
otherwise.

In practical terms, it's the stuff that gets left out when we are in a rush or when we don't know any better. If left unchecked, software entropy sets in, and before you know it, you're already considering scrapping the whole thing. It's easy to imagine how we can sometimes compromise on quality (perhaps willingly, or out of sheer neglect) in favor of getting an earlier release. Heck, we don't even need to imagine it; it happens in all our projects. The pressure of a committed timeline can be tremendous. We all know it too well.
So how does one approach this problem? When faced with tens of thousands of lines of pre-existing (inherited) code, how do you stop your jaw from dropping to the floor and giving up without a fight? Do we just carry on with the practice of giving the fake thumbs-up and saying "yep, we are optimizing and refactoring" every time we get asked? How do we get it the attention it deserves? It's actually quite simple. As with any huge endeavor, we take the first step.
And the first step is to expose it and make it visible. Get it out there with little regard for being 100% accurate, and people will react and make noise. After all, the end goal is not to have the be-all and end-all definition of this; there are smarter guys for that. It has always been about getting just enough juice to make this entropy-reduction effort sustainable. So in a world where all the reports talk about the on-time or otherwise late delivery of features/functions, we had to come up with our own. There are a lot of papers out there that propose their own means of assessing technical debt, and it was just as easy to pick one that felt right as a humble start. After all, we had already started using Sonar back in the early days of 2010, before we got interrupted by SNAP.

The good thing about having metrics is that you have empirical data. You can set benchmarks, measure progress over a given time, make projections, and so on, and at the end of the day, when a score goes up, you can actually see that your small effort did matter. The bad thing about having metrics is that people eventually learn to game the system. And that's why it's important to emphasize that the end goal is not to get a higher score, but to hope that in doing the practice, each person takes the principle to heart.
We chose to address code coverage, complexity, and rules violations (PMD, CheckStyle, FindBugs) for starters. Code coverage was of particularly high importance since we were really low there, and having good coverage from well-written unit tests allowed devs to venture more freely into refactoring code because it reduces the risk of breaking something. We don't want this to be just a separate activity, though; we want them to learn to use it opportunistically. That is, exploit chances to refactor X as it affects the code we are writing for feature Y, each time the opportunity presents itself.
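To make that concrete, here is a minimal sketch of the kind of unit test that buys this freedom. Everything in it (the class name, the fee rules, the thresholds) is invented for illustration rather than taken from our codebase; the point is the pattern of pinning down what the inherited code does today before reshaping it.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ShippingFeeTest {

    // Stand-in for a piece of inherited logic we would like to refactor.
    // The rules and numbers are hypothetical.
    static double feeFor(double orderTotal) {
        if (orderTotal <= 0) {
            return 0.0;
        }
        if (orderTotal < 50) {
            return 5.0;
        }
        return orderTotal * 0.05;
    }

    @Test
    public void pinsDownCurrentBehaviourBeforeRefactoring() {
        // Characterization tests: record what the code returns today so that
        // a later refactoring that silently changes these results fails fast.
        assertEquals(0.0, feeFor(-1.0), 0.001);
        assertEquals(5.0, feeFor(10.0), 0.001);
        assertEquals(5.0, feeFor(49.99), 0.001);
        assertEquals(10.0, feeFor(200.0), 0.001);
    }
}

With a net like this counted in the coverage metric, a refactoring that changes behaviour shows up as a failing build instead of a surprise in production.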
Kent Beck, in his book Extreme Programming Explained, writes:
If all you could make was a long-term
argument for testing, you could forget about
it. Some people would do it out of a sense of
duty or because someone was watching over
their shoulder. As soon as the attention
wavered or the pressure increased, no new
tests would get written, the tests that were
written wouldn't be run, and the whole thing
would fall apart.

Reducing complexity for the entire project, as it turns out, is a much tougher job than we originally thought. Cyclomatic complexity is a measure of the linearly independent paths through a piece of code; each decision structure, iteration, boolean branch, and the like adds to it. A high score doesn't automatically mean lower quality; it just means a lot of logic is being applied in the code. At the end of the day, it's about maintainability. Following complex nested ifs and nested iterations can sometimes be really tricky.
And so with all the other effort already underway, we thought it best to aim for refactoring just the really big ones (those that have a CC score of 20+), and at the very least to maintain the overall complexity ratio as we add new lines of code.
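As a hedged illustration of what that refactoring looks like in practice (the method names, parameters, and thresholds below are made up, not drawn from our code), here is the shape of the problem and one common way to tame it:

// Nesting like this is what pushes a single method's CC score up and makes
// the control flow hard to trace.
static String discountFor(boolean active, int yearsAsCustomer, int orders) {
    String discount = "NONE";
    if (active) {
        if (yearsAsCustomer > 5) {
            if (orders > 100) {
                discount = "GOLD";
            } else {
                discount = "SILVER";
            }
        } else {
            discount = "BASIC";
        }
    }
    return discount;
}

// The same logic flattened with guard clauses. The number of decisions (and
// so the raw CC score) barely changes, but each path now reads on its own
// line, which is the maintainability we are actually after.
static String discountForFlattened(boolean active, int yearsAsCustomer, int orders) {
    if (!active) {
        return "NONE";
    }
    if (yearsAsCustomer <= 5) {
        return "BASIC";
    }
    return orders > 100 ? "GOLD" : "SILVER";
}

For the genuinely huge methods in the 20+ club, the same idea extends to pulling each nesting level out into its own well-named method, which splits the score across smaller, more digestible units.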
To summarize, I like how my good colleague Mykol Mallete puts it in his article in Agile Record, Issue #7: "what we want to have visible is where the software development team is headed. Guided by what they have done, the metrics would reflect the former, using the latter as evidence."
And so the saga continues. From a time when we were struggling to defend even doing this and to find a home for this effort in Primavera, we are slowly getting better support and visibility. We've already started seeing some behavioral improvements within the teams.
I do hope this becomes infectious and sweeps through all of them. We still have a long way to go, but it's the journey that matters. After all, the road to technical excellence does not end when all the Sonar flags have been fixed.
