Practical .NET

Making Testing Worthwhile

In an earlier column, I argued against the idea of “zero-defect” software. Part of my claim was that your software will always have defects (bugs) ... you just won't have found them yet because the “mean effort to find the next bug” is high. You can call that “zero-defect software” using an "ignorance is bliss" argument, but that doesn't mean your software is free of bugs.

The (Un)Importance of Bugs
But I also argued against the idea of “zero-defect” software because having a defect doesn't mean I have to fix it. I may even know that I have a defect and feel that my software is ready for release. Since software is never “zero defect,” using that as a criterion for releasing software is foolish. The criterion for when software is ready to release is when the user is satisfied with it -- defects and all. Even if I know there is a bug and even if I can fix it, both my client and I can feel that the most valuable use of my time as a developer is to add functionality rather than fix some particular bug.

I can take that cavalier attitude toward a known bug because I know that some bugs are more important than others. There are bugs that the user is perfectly willing to live with in order to have the software installed, and bugs that the user is willing to leave in the software in order to get some additional functionality. To make good decisions in this area, then, we need a bug classification system that distinguishes between bugs-that-matter and bugs-that-don't.

The problem here is that, as developers, we tend to focus on logic bugs and data corruption. For example, developers tend to treat layout issues in the UI as less important than logic bugs. I'm not suggesting that's necessarily wrong ... but it does miss the point. The goal of testing isn't to achieve zero defects (which isn't achievable, anyway). The goal of testing is to achieve some specified level of user satisfaction (or, if you prefer, to keep users' dissatisfaction below some level). A developer-based classification system isn't going to meet that goal.

Keep the Customer Satisfied
In that previous article, I had three axioms that I said drove testing for defects:

  1. Programming is a craft and inherently error prone
  2. We must, therefore, depend on inspection (testing) to track down errors
  3. Time spent on testing is time taken away from delivering new or improved functionality

I'm going to add a fourth:

  4. Anything the user doesn't care about isn't a defect (Alternatively: Only the things that a user cares about count as a defect)

There's a corollary to axiom 4: Even if a user cares about a bug, it doesn't mean we have to fix it.

Think of it in terms of customer service: When a customer gets a defective product, they become dissatisfied. While dissatisfied customers are a real cost to the organization, they're not an infinitely high cost, which means we can choose how to deal with that dissatisfaction. We can, for example, choose to eliminate the defect, live with dissatisfied customers, or take action to make the customer satisfied -- perhaps by replacing the product, giving the customer a refund or taking some other action.

Which is a way of saying that defect classification and defect “rectification” should both be driven by user satisfaction. A problem with page layout that makes the page unreadable to the user is as catastrophic to the user as the application crashing, no matter how the developer feels about it. On the other hand, a "logic error" that results in a product's price being overstated may not even be an important error ... provided the customer is charged the correct, lower price at delivery time and the slightly higher price doesn't discourage customers from buying the product.

With this approach, testing isn't about “removing defects,” it's about “ensuring customer satisfaction.” Customer satisfaction includes, therefore, both the possibility of fixing the bug and the process of dealing with dissatisfied customers who have found a defect in production. This approach has a significant impact on the strategy we use to validate our tests.

Validating Tests
One strategy is to associate a test with a set of “results that will make our users satisfied” and use a classification scheme that rates defects, once discovered, according to their impact on user satisfaction. Ideally, the process is so straightforward that a random stranger off the street would be able to identify defects and assign any defect to the appropriate category.
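
To make that concrete, here's a minimal sketch of what a user-satisfaction-driven classification might look like in C#. Every name in it (UserImpact, Defect and so on) is invented for illustration; the only point is that severity is rated by the defect's effect on the user, not by the kind of coding mistake that caused it:

// All names here are hypothetical -- the point is that severity reflects
// the defect's effect on the user, not where the bug lives in the code.
public enum UserImpact
{
    BlocksTheUser,      // the user can't complete the task (crash, unreadable page)
    AnnoysTheUser,      // the task completes, but the user is dissatisfied
    InvisibleToTheUser  // by axiom 4, arguably not a defect at all
}

public class Defect
{
    public string Description { get; set; }
    public UserImpact Impact { get; set; }

    // "Fix before release" only when user satisfaction demands it; everything
    // else competes with new functionality for the developer's time.
    public bool FixBeforeRelease => Impact == UserImpact.BlocksTheUser;
}

Whether the defects that merely annoy users get fixed, or get handled through customer service instead, is exactly the judgment call described above.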

The alternative strategy is to have a reliable authority verify the results of testing. And, by "reliable authority," I mean: The user or a suitable proxy. Under this strategy, the results of a test run of the payroll system are checked by the payroll department and a select number of employees. The employees, who are more keenly interested in the results than anyone else, will tell you if they're being underpaid; the payroll department will tell you if the employees are being overpaid (and whether all the withholding accounts are being updated, of course).

This, of course, means that you can't isolate the QA team from the user community since, ultimately, the QA team is a proxy for the users (and not, for example, the most annoying members of the development team). In fact, if testing is about ensuring customer satisfaction, then a place has to be carved out for the user community to participate. The practice of rolling out new software to a small group of users without telling them and then monitoring those users is a great example of incorporating users into the testing process, however unwittingly. More “witting” integration would be even better.
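
As a purely hypothetical illustration of that kind of quiet rollout, a percentage-based feature flag is one way to expose a change to a small slice of users while you monitor the results. The class below isn't any particular feature-flag product, just a sketch of the idea:

// A minimal sketch of a percentage-based rollout. In practice you'd use a
// feature-flag product and feed what you learn back into your testing process.
public class CanaryRollout
{
    private readonly int _percentOfUsers;

    public CanaryRollout(int percentOfUsers)
    {
        _percentOfUsers = percentOfUsers;
    }

    // The same user always lands in the same bucket, so they keep seeing the
    // same version for the life of the canary.
    public bool UseNewVersion(string userId)
    {
        return Bucket(userId) < _percentOfUsers;
    }

    // A simple, stable hash (string.GetHashCode isn't guaranteed to be stable
    // across processes, which would reshuffle users on every restart).
    private static int Bucket(string userId)
    {
        int bucket = 0;
        foreach (char c in userId)
            bucket = (bucket * 31 + c) % 100;
        return bucket;
    }
}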

Making Testing Valuable
But thinking about testing purely as a means of finding defects that make customers dissatisfied is still missing the point. I'm sufficiently committed to testing that I think the best way to approach it is not to focus solely on customer satisfaction.

Instead, you should be using the metrics that you gather about your testing process to make testing more valuable. There are two opportunities here:

  • Reducing the cost and improving the effectiveness of your testing process
  • Enhancing your software development process so that it creates better software

First: How efficient is your testing process? Every time you run a test, you have an opportunity to assess how efficient you are in finding defects (cost vs. defects found). Every time you review your bug reports from production, you have an opportunity to determine how effective your testing is (cost vs. defects released into production). Every testing run gives you a chance to gather both of those metrics to see how you're doing, and every testing run is an opportunity to try something new and see if you can improve on those metrics.
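
As a sketch of what gathering those two numbers might look like (the class and property names here are invented -- pull the raw figures from whatever time-tracking and bug-tracking systems you actually use):

// A rough sketch of the two metrics described above: efficiency (cost per
// defect found in testing) and effectiveness (how many defects escape into
// production anyway). Plug in your own cost and defect counts.
public class TestRunMetrics
{
    public decimal TestingCost { get; set; }            // e.g., hours spent * hourly rate
    public int DefectsFoundInTesting { get; set; }
    public int DefectsReportedFromProduction { get; set; }

    // Efficiency: what it costs to find one defect in testing
    // (by convention, the whole cost if nothing was found).
    public decimal CostPerDefectFound =>
        DefectsFoundInTesting == 0
            ? TestingCost
            : TestingCost / DefectsFoundInTesting;

    // Effectiveness: the fraction of known defects that escaped into
    // production instead of being caught by testing.
    public double EscapeRate
    {
        get
        {
            int totalDefects = DefectsFoundInTesting + DefectsReportedFromProduction;
            return totalDefects == 0
                ? 0.0
                : (double)DefectsReportedFromProduction / totalDefects;
        }
    }
}

Tracked run over run, those two numbers tell you whether a change to your testing process actually paid off.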

Second: How effective is your software development process at producing defects? Here, you'd prefer your development process to not be very effective at all. Again, every testing run gives you an opportunity to see which defects occur frequently enough that they must be baked into your software development process.

These two opportunities work together. Discovering that your development process automatically generates some kinds of defects does not mean that you should change your development process. If your testing process is both efficient and effective, it may, for example, be more cost-effective to catch those defects in testing. The wrong answer is to treat every defect as if it were the most important thing in the world. It is, at best, the most important thing to your customer.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.

