Manage Technical Debt with Static Source Diagnostics

Aug 8, 2019 | Best Practices, Test Automation Insights


Lines are too wide! Functions are too long! Everything is complex!

Working in a 21st-century software shop, you probably receive multiple reports that measure code by its stylistic quality, security, density and so on. What do you do with all that information?

One possibility is to ignore these reports. Just as they do with “inbox zero” or recommendations about how much water to drink daily, plenty of our colleagues simply tune out such prescriptions, reasoning that their finite attention is better spent on other matters.

You don’t have to make that choice, though; it is possible to bring your technical debt down to a low and sustainable level and keep it there. Here are a few guidelines to help get your automated metrics under control:

  • Establish “green zones” of appropriate levels of technical debt
  • Decide local coding standards with your team
  • Establish daily and weekly routines for paying off debt
  • Get help

Safe Levels of Debt

Clearing all diagnostics, like dealing with every piece of email you receive within a few minutes of first reading it, sounds like a good goal. Is it the best one, though? If five people suddenly write you in the last half-hour of the day, is it wise to stay late just to wipe your inbox clean before leaving?

Maybe not. At least consider the possibility that it’s OK to sustain technical debt at a small level that is still more than zero.

While I know of no rigorous evidence on what constitutes an acceptable level, practitioners and managers with whom I’ve spoken generally agree that for one developer or tester to have a backlog of more than a hundred items is a crisis. In contrast, a dozen tickets, issues, or tasks is probably appropriate for an experienced contributor. A newcomer to a domain does well to have no more than two or three open assignments.

Think of this another way: If your source code regularly reports thousands of warnings, or you have thousands of customer reports backlogged, it’s probably time to take extraordinary measures and reduce that accumulation quickly to a more manageable level.

Look for low-hanging fruit: Do different programmers on your team format function definitions in incompatible styles? Do you have someone on the team who regularly creates memory leaks? A team that adopts a new diagnostic system often finds it can knock out 80% or more of the initial volume of complaints by systematically addressing just two or three leading causes of debt.

When you first begin to work with a new diagnostic system — whether it has to do with test coverage, coding style, security hazards or something else — your first target should be to handle a few of the high-volume, “noisy” items. When you’re dealing with thousands of issues at a time, put a premium on consistency rather than careful prioritization: Pick any problem you understand, and solve it for good. Once you’ve slashed the backlog from the thousands to the hundreds or, ideally, a couple of dozen issues, you can return to more normal operations where you consider each item on its own merits.
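
One way to find those leading causes is to tally diagnostics by type and attack the biggest buckets first. Here is a minimal sketch that assumes Pylint and its JSON output format; “src/” is a placeholder for your own package path, and other analyzers offer similar machine-readable reports:

```python
# count_diagnostics.py -- a sketch for finding the two or three message
# types that dominate a large backlog. "src/" is a placeholder for your
# own package or module path.
import json
import subprocess
from collections import Counter

# Pylint exits with a nonzero status when it finds problems, so don't
# treat that as a failure here.
result = subprocess.run(
    ["pylint", "--output-format=json", "src/"],
    capture_output=True,
    text=True,
    check=False,
)
messages = json.loads(result.stdout)

# Each JSON record carries a "symbol" naming the check that fired.
counts = Counter(message["symbol"] for message in messages)
for symbol, count in counts.most_common(10):
    print(f"{count:6d}  {symbol}")
```

A tally that shows two or three symbols accounting for most of the volume tells you exactly where that first 80% of cleanup lives.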

That initial frenzy should only happen once. Reduce your count of issues to a sustainable level, and keep it there. “A sustainable level” varies a bit depending on individual psychology and team dynamics, but typical ranges are zero to five issues on the low side and twelve to twenty at peak. Anyone operating outside these ranges is probably under strain.

Local Standards

The first wave of results from running an industry-standard analyzer or application measurement tool is likely to be overwhelming; that’s the common reaction from the teams I’ve observed. Once you’ve followed the previous tip and shaved the diagnostics down to a manageable level, you can begin to consider individual items in a more nuanced way.

Maybe your analyzer expects a specific “header block” or “docstring” and complains when it doesn’t find one, while your team has a reason to omit these particular source-code comments in specific circumstances. If so, this is a good time to reconsider the decision: Are you truly better off without a header?

If you are, and there are plenty of reasons your team might choose to diverge slightly from common industry practice, make sure you configure your tools to reflect your standard. Don’t just expect the team to ignore a diagnostic that you’ve decided doesn’t apply; learn enough about the tool’s configuration to disable that particular diagnostic, preferably only in the situations where it shouldn’t appear. Leaving the diagnostic turned on desensitizes programmers and makes it too easy to miss messages they should see. Turning the tool off entirely cuts down on the noise, but it also eliminates the signal of diagnostics that deserve more attention.

Consider this example: Your team writes (at least partially) in Python. Pylint expects all method definitions to have explicit docstrings, but your team typically doesn’t write docstrings for internal convenience methods.

You have a few possible choices:

  • Don’t run Pylint, and don’t see the missing-docstring complaints. The problem with this approach is that team members miss out on everything else Pylint might find.
  • Run Pylint, but expect everyone to remember to ignore the missing-docstring messages. This makes readers work too hard.
  • Decorate individual instances of the pattern with “# pylint: disable=missing-docstring”. If there are only two or three instances of the pattern in all your source, this is probably a wise expedient.
  • Maintain a common pylintrc that spells out the occasions where docstrings may be absent. This is a good choice when you are dealing with scores of local methods.
  • Fill in minimal docstrings, on the theory that, even though your team doesn’t like them in this particular circumstance, writing docstrings is a healthy enough habit in general that its cost is less than the cost of making a special rule for the exception.
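
Concretely, the third and fourth options might look like the following sketches. The module, class, and method names are hypothetical; only the Pylint directives themselves are standard:

```python
# reporting.py -- hypothetical module showing the third option: silencing
# the docstring check on a single internal convenience method.

class ReportBuilder:
    """Assemble rows of data into a printable report."""

    def _join_row(self, row):  # pylint: disable=missing-docstring
        return ", ".join(str(cell) for cell in row)
```

And the shared-configuration route:

```ini
# pylintrc -- the fourth option. no-docstring-rgx exempts any name that
# matches the regular expression (here, names starting with an underscore)
# from the docstring requirement.
[BASIC]
no-docstring-rgx=^_
```

The inline comment is precise but scattered across the source; the pylintrc rule is centralized but coarser. Which trade-off is right depends on how many exceptions your team expects to carry.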

In any case, make a conscious decision, and use the tools at hand to help enforce that decision.


Routines to Pay Off Debt

Make a habit. Pick an hour out of the day, or a half-day out of the week, to catch up on diagnostics from analyzers. Different people operate on different rhythms, so no one prescription fits all.

I’ve seen quite a few programmers and testers have more success once they settle on a schedule that’s easy to follow. Soon they know more about their own capabilities: learning that they can consistently clear one routine diagnostic with half an hour of concentrated effort, for example, is a useful measurement.

Software Is a Team Sport

Frustration is a signal. If you find something blocking your progress — if you had good flow, for instance, in reducing your application’s technical debt from 6% to 2%, but you can’t seem to get past that — ask for help. Talk over what you see with someone you trust, and you’ll probably find that you understand the problem differently, and that the other person has a background, experience, and perspective that pays off in this situation.

Finally, a strong recommendation on what not to do: Don’t ignore these diagnostic tools. When you first run one and see thousands of complaints about an application that you know has flocks of satisfied customers, it’s only natural to wonder what the point is. These tools are more sophisticated than they might initially appear, though. Give them a chance.

Work through a sequence of diagnostics. Restyle the source as suggested in one or two examples, even when you don’t agree with the tool. At the very least, you’ll be in a better position to judge where you agree or disagree with it. You might find that you’re writing better source than you thought was possible. In many cases, “pretty printers” or “source formatters” can even improve your code’s composition automatically.
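
For Python, a formatter such as Black illustrates the idea. This before-and-after is only a sketch; the exact output depends on the formatter and its settings:

```python
# messy.py -- before formatting: legal Python, but untidily laid out.
def total( items ):
    return sum( [ item.price
            for item in items ] )

# After running a formatter (for example, "black messy.py"), a typical result:
def total(items):
    return sum([item.price for item in items])
```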

These opportunities can help you decide how to address your static source diagnostics and pay down some technical debt, so get with your team and make the most of them.
