The testing anti-pattern

I've now finished my previous gig, and the guys are busy chasing some recurring issues with the application. Because of its complex architecture, those issues are equally complex to understand and debug.

Most of the code in this application happens around things we cannot put under unit tests, because they all rely on heavy integration: synchronization of data between tiers, integration with external RFID hardware and 3G modems, SQL CE and its multiple-connections issue, and other things that are equally difficult to debug without running the system against said external entities. As a result, our code coverage is very low, and while we do have fakes for most of those external entities, we have to think long and hard about the value provided by tests relying on fakes, or we'd end up testing the fake implementation itself. Not very useful, to say the least.
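To make that concrete, here's a minimal sketch of the kind of test we kept questioning. It's in Python for brevity (our project is .NET, so treat this as an illustration), and the names `FakeRfidReader` and `InventoryScanner` are invented for the example:

```python
import unittest

# Hypothetical fake standing in for the external RFID hardware.
class FakeRfidReader:
    def __init__(self, canned_tags):
        self._canned_tags = list(canned_tags)

    def scan(self):
        # No real I/O: the fake simply replays what we handed it.
        return list(self._canned_tags)

# Thin production wrapper -- most of the real risk (radio timeouts,
# duplicate reads, dropped connections) lives in the hardware, not here.
class InventoryScanner:
    def __init__(self, reader):
        self._reader = reader

    def visible_tags(self):
        return set(self._reader.scan())

class InventoryScannerTest(unittest.TestCase):
    def test_visible_tags_deduplicates_reads(self):
        scanner = InventoryScanner(FakeRfidReader(["tag-1", "tag-1", "tag-2"]))
        # Green, and it bumps coverage -- but it mostly proves the fake
        # echoes its input; none of the failure modes that actually bite
        # in production are exercised here.
        self.assertEqual({"tag-1", "tag-2"}, scanner.visible_tags())

if __name__ == "__main__":
    unittest.main()
```

The one line of real logic (deduplication) gets tested; everything that actually fails in the field does not.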

That made me think of some testing anti-patterns I see recurring a lot on various projects, and I thought I'd highlight them. That will let me reevaluate my position in a year's time, and since I know some of the people still involved in some of my projects are reading this blog, it can also serve as constructive criticism for their next projects.

  1. I'll paraphrase Chad Myers: code coverage is the new LOC/day. Code coverage doesn't ensure quality, and the percentage is just that, a number. It needs to be put into the context of the project: some areas do not need unit tests because testing them provides no value. If you chase 100% code coverage, you're on a wild goose chase and you should reevaluate your understanding of the value of code coverage (beyond the marketing bullsh*t). The first sketch after this list shows how full coverage and a shipped bug coexist happily.
  2. It is more important to have a code base with low cyclomatic complexity and loose coupling than it is to have thousands of tests. Evaluating code quality is not only about testing scenarios, it is also about maintainability, understandability, extensibility, etc. Use NDepend to understand where you need to refactor or rework parts of the system (the second sketch after this list shows the kind of refactoring that pays off).
  3. Finally, and most importantly, I see no value in spending time after your code is released writing unit tests for the sake of it. Because the code has already shipped, you should not do any refactoring to it (or you'd end up with changes that you'd need to re-deploy and re-test, in which case you'll have to ship again; it all becomes a vicious circle). So you end up putting code under test without bringing much value at all (again, except for increasing code coverage, see 1). Unit testing is a tool *during development* that helps you shape your objects and functionality and refactor your code until you get a satisfying result. Writing tests afterwards doesn't serve much purpose.
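Touching on point 1 above, here's a minimal sketch (Python again, with an invented rule and function name) of how 100% line coverage proves very little:

```python
def bucket_for(age):
    # Invented example. The bug: the comparison should be age < 18,
    # so an 18-year-old is wrongly classified as a minor.
    if age <= 18:
        return "minor"
    return "adult"

if __name__ == "__main__":
    # These two checks execute every line of bucket_for, so a line
    # coverage tool happily reports 100% -- yet the boundary bug ships.
    assert bucket_for(10) == "minor"
    assert bucket_for(40) == "adult"
    print("all green at 100% coverage; bucket_for(18) is still wrong")
```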
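And for point 2, a sketch of what driving cyclomatic complexity down can look like in practice. The fee rules are invented for illustration; NDepend reports this kind of metric for .NET code, but the idea is language-agnostic:

```python
# Before: nested conditionals -- cyclomatic complexity piles up, and
# every new rule multiplies the paths a maintainer must hold in mind.
def shipping_fee_nested(weight_kg, express, international):
    if international:
        if express:
            if weight_kg > 10:
                return 80
            return 50
        if weight_kg > 10:
            return 55
        return 25
    if express:
        return 15
    return 5

# After: the same rules as a flat lookup plus one surcharge -- fewer
# branches, and each rule is visible on its own line.
FEES = {
    (False, False): 5,   # domestic, standard
    (False, True): 15,   # domestic, express
    (True, False): 25,   # international, standard
    (True, True): 50,    # international, express
}

def shipping_fee(weight_kg, express, international):
    fee = FEES[(international, express)]
    if international and weight_kg > 10:
        fee += 30  # heavy-parcel surcharge only applies internationally
    return fee
```

Both versions compute the same fees; the second is the one you can still read six months later, and the one whose tests stay simple.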

There you are. With advances like BDD, the management focus on unit tests and code coverage should hopefully die a death much to be celebrated, as attention shifts to the real issue: shared knowledge and process, arriving at a common definition of the what and the why, and letting developers handle the how.
