Nothing New - continued

14 February 2018

In the first part of this 2-part post, I looked at what I felt were the deficiencies of static analysis tools, and shared my cynicism about their effectiveness. I promised I’d update you on what we found when we looked at a more modern version of the tools I’d dismissed in the past. This is that update.

There are a number of modern static analysis tools out there (we're not advertising), and what they offer now that was missing in my experience many years ago is:

  • a far greater understanding of the language and how it is used
  • a database which builds a knowledge base of the source code structure
  • the ability to integrate with build systems
  • an interface which helps development by a team of engineers

This allows static analysis to do things like trap subtle bugs which would be very hard to spot in testing, but could cause serious flaws when particular conditions occur.

Let’s get into a bit of detail. Here is an actual code example:

      assert (d != NULL);

      /* allocate array of POINTERS - will be NULL'd by calloc */
      if ((d->entries = calloc (d->size, sizeof (dictEntry_t*))) != NULL)
      {
          d->size = hashsize;
          s = swOK;
      }

That piece of code seems absolutely fine at a casual glance, and it has been in OpenWare for years. However, the static analysis tool spotted something wrong: the "calloc" call assumes that the variable "d->size" has already been assigned a reasonable value, and on some of the code paths that reach it, there is a possibility it has not. By following all the code paths, the tool spotted this, and so we then changed it to:

      assert (d != NULL);

      /* allocate array of POINTERS - will be NULL'd by calloc */
      if ((d->entries = calloc (hashsize, sizeof (dictEntry_t*))) != NULL)
      {
          d->size = hashsize;
          s = swOK;
      }

The important point here is that this bug would be very hard to spot with code inspection, and would only show up in particular conditions - which would be unlikely to be covered in testing.
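
To make that failure mode a little more concrete, here is a minimal, hypothetical sketch (the struct, the field names and the dict_init_buggy function are invented for illustration; they are not the real OpenWare definitions) showing how a stale "d->size" turns into trouble the moment a freshly zeroed dictionary is initialised:

      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Hypothetical reconstruction - invented for illustration only. */
      typedef struct dictEntry { int key; } dictEntry_t;

      typedef struct dict {
          size_t        size;     /* number of hash buckets */
          dictEntry_t **entries;  /* array of bucket pointers */
      } dict_t;

      /* Buggy version: sizes the allocation from the stale d->size
       * instead of the requested hashsize. */
      static int dict_init_buggy (dict_t *d, size_t hashsize)
      {
          assert (d != NULL);

          /* allocate array of POINTERS - will be NULL'd by calloc */
          if ((d->entries = calloc (d->size, sizeof (dictEntry_t*))) != NULL)
          {
              d->size = hashsize;  /* now claims more buckets than were allocated */
              return 0;
          }
          return -1;
      }

      int main (void)
      {
          dict_t d = { 0 };  /* freshly zeroed dictionary: d.size == 0 */

          if (dict_init_buggy (&d, 16) == 0)
          {
              /* d.size claims 16 buckets, but calloc was asked for 0, so
               * any access through d.entries would be out of bounds. */
              printf ("claimed buckets: %zu, actually allocated: 0\n", d.size);
          }
          else
          {
              /* On platforms where calloc(0, ...) returns NULL, the init
               * fails silently despite a perfectly good hashsize. */
              printf ("initialisation failed despite a valid hashsize\n");
          }
          free (d.entries);
          return 0;
      }

With d.size starting at 0, calloc is asked for zero entries, yet d.size is then set to 16, so any later bucket access through d->entries would run past the end of the allocation - exactly the kind of condition a test suite is unlikely to exercise.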

Difficult to find

OK: that’s only one example of the many bits of code the static analysis tool flagged up as needing a closer look. Another couple of errors which the tool picked up in multiple places were memory leaks and array bounds errors. Once again, these are possible (but very difficult) to find any other way.
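
As a flavour of the leak pattern involved, here is a small, hypothetical sketch (the duplicate_and_pad function is invented for illustration, not taken from OpenWare) of the kind of error-path leak that testing rarely triggers but a path-following analyser reports directly:

      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical illustration: if the second malloc fails, the early
       * return skips free(copy), so 'copy' leaks on that path. */
      static char *duplicate_and_pad (const char *src, size_t pad)
      {
          size_t len = strlen (src);

          char *copy = malloc (len + 1);
          if (copy == NULL)
              return NULL;
          memcpy (copy, src, len + 1);

          char *padded = malloc (len + pad + 1);
          if (padded == NULL)
              return NULL;          /* BUG: 'copy' is never freed here */

          memcpy (padded, copy, len);
          memset (padded + len, ' ', pad);
          padded[len + pad] = '\0';

          free (copy);
          return padded;
      }

Tests rarely force an allocation to fail, so that early return is almost never taken in practice; the analyser simply walks the path and reports the lost allocation.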

The wonderful thing is that the tool no longer produces so many false positives, so the team can take the time to look at all the areas it does flag up. And if something is a false positive, we can mark it as such, and the knowledge base remembers it for next time - without having to add odd "pragmas" to the code itself.

Of course, it would be great to be able to quantify the benefits of this tool by quoting the number and severity of bugs found and the impact that has on our code quality - and thus on our customers. We don't yet have that level of detail (and may never have it), but the one thing we can say for sure is that our team are now convinced that it is helping in their day-to-day code development. Being able to say that a tool is widely appreciated by our team of hardened programmers is quite something!

But it’s not perfect: one of Charlie’s colleagues, Chris Carr, who is the main architect for our Linux environment, managed to break the static analysis tool on his first use of it! Even so, he’s keen to work around and overcome the problems, rather than just give up.

So, I'm willing to admit that I was wrong. Static analysis gives much more benefit than I expected; maybe it has now come of age. After all, it was the same wise man who wrote: "To everything, there is a season"…