Practical .NET

The Case Against Zero-Defect Software

I recently had to update my password on a site that I joined many, many years ago -- far enough in the past that a "good enough" password was "any five characters." The site now wanted me to have a longer password and to include a "special character" of some kind. I fumbled it the first time (apparently a number isn't a "special character"?) and found myself stuck in an endless loop. Evidently, if your current password was unacceptable AND you got the password change process wrong, some error flag got set and nothing ever unset it.
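I can only guess at the implementation, but that kind of "sticky" flag is an easy bug to write. Here's a minimal C# sketch of how it could happen -- all the names and rules are hypothetical:

    using System.Linq;

    public class PasswordChangeForm
    {
        private bool _hasError;  // set on any failure -- but never cleared

        public bool TryChangePassword(string currentPassword, string newPassword)
        {
            // The old "good enough" password fails the new length policy
            if (currentPassword.Length < 8) _hasError = true;

            // The user fumbles the "special character" rule
            if (!newPassword.Any(char.IsPunctuation)) _hasError = true;

            // The bug: once _hasError is true it stays true, so every later
            // attempt is rejected even when the new password is acceptable.
            return !_hasError;
        }
    }

Exiting and starting over would create a new form -- and a fresh flag -- which is consistent with what happened next.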

After two or three times through the loop, I exited, came back in, and was able to successfully change my password the first time through. If that hadn't worked, I would have contacted customer service and had them fix the problem.

Obviously, this is a bug or (in tester-speak) a "defect." If I were in charge, would I be upset that it hadn't been caught in testing? No. In fact, I probably wouldn't even be surprised. The number of people who have a password like mine AND would get the password change process wrong is probably vanishingly small. If someone had asked me to bet on this defect being caught in testing, I would have bet against it.

More importantly, if I were in charge and the problem were reported to me, while I might consider adding the defect to the backlog of changes, I doubt very much that the corresponding fix would ever make it into production: There would always be something more important/valuable for the development team to do than fix this defect.

The Real World of Testing
I believe that programming is the most complicated cognitive task that human beings engage in. So, when it comes to defects, there are three axioms I accept as true:

  1. Programming is a craft and inherently error-prone
  2. We must, therefore, depend on inspection (testing) to track down errors
  3. Time spent on testing is time taken away from delivering new or improved functionality

As a result, I'm comfortable with code that has bugs in it (I don't put it quite that way to my clients, though).

While people like Margaret Hamilton have advocated for software design processes that promise to generate defect-free code -- her own Higher Order Software methodology, for example -- those techniques haven't gained traction because people have felt that the additional cost wouldn't deliver sufficient compensating benefits. We're all comfortable with buggy code precisely because, unlike Hamilton, we're not trying to land anyone on the moon. Our software should work, but it doesn't have to work perfectly.

In fact, your software almost certainly doesn't work perfectly.

There are good reasons for that. While it's possible to predict the output for some small component of the application (which is the justification for automated unit testing), predicting the result of the interaction of all the components of any interesting application is beyond human ability. Throw in AI and fuzzy logic and the problem only gets worse. We count on testing to report on those scenarios whose outputs we can't predict.
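That predictability for small components is the whole premise of automated unit testing: we can state the expected output before we run the code. Here's a minimal xUnit-style sketch; the PasswordRules class is hypothetical, chosen to echo my opening story:

    using Xunit;

    public static class PasswordRules
    {
        // A small, isolated component whose output we can state up front
        public static bool HasSpecialCharacter(string password) =>
            password.IndexOfAny("!@#$%^&*()".ToCharArray()) >= 0;
    }

    public class PasswordRulesTests
    {
        [Fact]
        public void Digits_do_not_count_as_special_characters()
        {
            // For a component this small, we know what the answer should be --
            // which is what makes an automated check worth writing.
            Assert.False(PasswordRules.HasSpecialCharacter("abcde1"));
        }
    }

No comparable assertion exists for what the whole application will do once all of its components start interacting.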

Unfortunately, the world is so various and the "ingenuity" of our users so great that we can't imagine all the scenarios we should test. While the percentage of those "unimaginable" situations is very small, the law of large numbers kicks in: A small percentage of a large enough set of real users turns into an actual number of cases (me, for example). And, I would suggest, even if we did test for and discover those scenarios, many would fall into that category of "not worth fixing."

Realistic Testing Strategies
I'm not suggesting we shouldn't do testing: Plainly, that's stupid. There's no point in delivering new functionality if it doesn't, ya know, work. But we need to take the reality of testing into account.

For example, when we stop finding defects, it doesn't mean there are no more bugs in our code -- there almost certainly are.

When we start the testing process, finding defects is easy: Very little effort is required to turn up the next one. As we work through those initial defects, though, the effort required to find the next defect keeps growing. And if the effort grows but we don't put more people on the job of finding defects, the time it takes to find the next defect grows with it.

Eventually, the "mean effort to find the next bug" becomes so high that the number of people required to find the next bug in a reasonable period of time exceeds the number of people we're willing to dedicate to the task. At that point, we release the software to production/our customers.

Effectively, by releasing our software, we enlist our user base into the defect-hunting process: The number of people involved increases by a couple of orders of magnitude. While the mean effort to find the next bug remains constant, the absolute time between bug reports drops to something close to zero, and bug reports flood in (to be fair, many of the reports are duplicates). We don't release our software because it's "defect-free"; we release it because it's "too expensive to find the next defect."
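The arithmetic behind that shift is easy to sketch. The numbers below are invented, but they show why a trickle of defects in testing becomes a flood of reports at release:

    using System;

    // Invented numbers -- the point is the shape of the comparison, not the values.
    double hoursOfUseToHitNextBug = 400;  // the "mean effort to find the next bug"
    double dedicatedTesters = 2;          // people we're willing to keep on the task
    double activeUsers = 20_000;          // people exercising the app after release

    // In testing: 400 person-hours spread across two testers is weeks of calendar time.
    Console.WriteLine($"Testing: {hoursOfUseToHitNextBug / dedicatedTesters} hours to the next bug");

    // In production: the same effort spread across the user base is about a minute.
    Console.WriteLine($"Production: {hoursOfUseToHitNextBug / activeUsers * 60} minutes to the next bug");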

The Real Cost of Testing
This view gives us the real cost of testing: It's the cost of testing/fixes before release and the cost of bug tracking/resolution after release. If we assume that all defects are equal, transferring costs from testing to bug tracking is perfectly OK as long as the total cost decreases.

Except not all defects are equal: Some defects are catastrophic, and we can't afford to have them found in production. We need some scheme for categorizing defects by importance so we can allocate our costs appropriately.
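What that scheme looks like will vary from shop to shop. This sketch is just one set of assumed categories and an assumed release rule, not a standard:

    // Hypothetical severity categories and release rule -- assumptions, not a standard.
    public enum DefectSeverity
    {
        Cosmetic,      // users can find these; many will never be worth fixing
        Inconvenient,  // my password loop: annoying, but a workaround exists
        Costly,        // loses user data, money or significant time
        Catastrophic   // cannot be allowed to reach production
    }

    public static class ReleasePolicy
    {
        // Only the scenarios that could hide the worst defects justify paying
        // the rising cost of finding the next bug before release; everything
        // else is a candidate for the "let the users find it" trade-off.
        public static bool MustFindBeforeRelease(DefectSeverity severity) =>
            severity >= DefectSeverity.Costly;
    }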

Even if you're classifying your bugs, though, I bet the justification for your classification system is wrong. That's a topic worth another column.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
