Inside VSTS

Get Your Testing Process Right

When it comes to developing a successful product, perspective and timing in testing are key. Jeff shares the steps for getting it right.

In previous articles, I focused mainly on using the tools available in Team System. This article is a slight departure from that -- I want to focus a little more on process. Maybe, just maybe, development organizations can use this as a wake-up call that things need to change. Some organizations have a great development process; others have a not-so-great process. By the end of this article, you'll have an idea of how best to leverage testers in your current process.

When developers demonstrate code to a customer and say, "We're done, what do you think?" and the customer says, "You got it wrong," whose fault is it? If you said it's the developers' fault, think again. So, is it the customer's fault? Actually, it probably isn't their fault, either. How about the testers, then, since this is an article about testing? No, that still isn't it.

In this case, the problem doesn't lie with any particular group, but in the process.

Here's the basic problem: If you allocate 10 percent of your development time to testing (a very low percentage, but it's pretty much the norm), nine out of 10 times, that 10 percent is the last 10 percent of the software development process. The problem with this is that all of the bugs are found when there's virtually no time left to fix them. And when this happens, either the schedule slips (and slipping the schedule at this late date will upset a lot of people) or the software is released with a lot of bugs (this upsets a lot fewer people than the first option, believe it or not).

So how do you fix this problem? And better yet, what really is the problem?

The problem is subjectivity. Users want a feature, but they don't know how they want it implemented and they can't describe it accurately. This is understandable; that's why we developed prototyping -- so that once users played with some screens, they'd have a better idea of what they're asking for. Still, requirements are ambiguous despite our best efforts, and prototypes leave a lot of wiggle room because they focus on the user interface rather than the underlying logic.

But you can fix that by looking beyond the requirements to test cases. Here's the key to solving the problem: Test cases are objective. If users approve test cases, then when those tests pass the requirement is complete!
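To make the idea concrete, here's a minimal sketch of a user-approved test case expressed as executable code. The requirement, the function names and the dollar values are all hypothetical (the article doesn't specify any), and Python is used purely for illustration -- the point is that the expected values are explicit, so passing the tests is an objective measure of completeness:

```python
# Hypothetical requirement the user signed off on:
# "Orders of $100 or more receive a 10 percent discount."

def order_total(subtotal: float) -> float:
    """Stand-in for the system under test."""
    discount = 0.10 if subtotal >= 100.00 else 0.0
    return round(subtotal * (1 - discount), 2)

def test_discount_applied_at_threshold():
    # Boundary case spelled out in the approved test case.
    assert order_total(100.00) == 90.00

def test_no_discount_below_threshold():
    assert order_total(99.99) == 99.99

if __name__ == "__main__":
    test_discount_applied_at_threshold()
    test_no_discount_below_threshold()
    print("All approved test cases pass -- the requirement is complete.")
```

Because the expected outputs were agreed on up front, there's no room to argue later about whether the feature "works" -- either the approved cases pass or they don't.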

Sounds simple, right? It turns out to be a lot harder to implement than it sounds -- otherwise, everyone would already be doing it.

The first step is getting a test manager involved by the conclusion of the requirements phase -- that is, as requirements are finished, which means this approach can also be applied to agile development. The test manager should be working on the test plan and estimating how long it will take to test the requirements. Test cases should be written by professional testers or functional analysts, under the oversight of the test manager, in parallel with the creation of the functional specifications.

These tests should be functional, system and user acceptance tests (I'm ignoring other types of tests, which serve other purposes). Created at this point, these tests provide two key benefits: Users now have objective scripts by which they'll accept the delivered software, and developers have objective tests that tell them when they're done with the code.

Much has been made of unit testing and Test Driven Development (TDD), so why haven't I mentioned those yet? I believe strongly in unit testing and I believe that TDD done right provides benefits -- but they aren't the be-all and end-all of code quality. They provide information on a specific aspect of code quality. But a unit test isn't a functional test or a system test. And because unit tests aren't supposed to be strung together, they never will be. Unit tests can be created in the context of functional tests to validate at the method level that a certain path through code will produce a certain result. So the basic rule of thumb here is to use unit tests (automated) in support of functional testing.
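The distinction above can be sketched in code. Here's a minimal, hypothetical example (Python for illustration; the function and workflow names are my own assumptions) of a unit test that validates one path through one method, alongside a functional test that exercises the larger workflow the method belongs to:

```python
def parse_quantity(raw: str) -> int:
    """Unit under test: one small piece of a larger 'place order' workflow."""
    value = int(raw.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

def test_parse_quantity_strips_whitespace():
    # Unit test: verifies a single path through a single method.
    assert parse_quantity("  3 ") == 3

def place_order(raw_quantity: str, unit_price: float) -> float:
    """Sketch of the workflow; in practice this would drive the real app."""
    return parse_quantity(raw_quantity) * unit_price

def test_place_order_functional():
    # Functional test: exercises the workflow end to end.
    assert place_order("2", 5.00) == 10.00
```

The unit test supports the functional test by pinning down the method-level behavior the workflow depends on, but neither test substitutes for the other.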

The next step is to break down the requirements into testable units of functionality -- that is, small parts that can each be tested with functional tests (whereas system tests would be used to test the entire requirement). In this way, coding and testing can be done iteratively, even in formal processes. Over the course of development, each part will have been tested numerous times, and bugs will have been found early.
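As a sketch of what that decomposition might look like, here's a hypothetical requirement ("a user can reset a password") split into parts that can each be functionally tested on their own, with the full path left for a system test. All names and rules here are illustrative assumptions, not from the article:

```python
def validate_reset_token(token: str) -> bool:
    # Part 1: token format check -- testable on its own.
    return len(token) == 8 and token.isalnum()

def password_strength_ok(pw: str) -> bool:
    # Part 2: strength rule -- testable on its own.
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

def reset_password(token: str, new_pw: str) -> str:
    # The whole requirement: a system test would exercise this end to end.
    if not validate_reset_token(token):
        return "invalid token"
    if not password_strength_ok(new_pw):
        return "weak password"
    return "password updated"
```

Each part can be coded and tested in its own iteration; by the time the system test runs, the pieces have already been exercised many times.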

The key here is that developers know they're done when the objective test cases pass -- which means they've already verified the functional tests before the testers ever get the code. As a result, the formal testing process at the end of the development cycle can be used to find complex system and integration bugs instead of finding things that should've been caught during development.

The process I've outlined isn't perfect, and it requires organizations to accept some basic realities about how they do development. Inevitably, there are details missing from this short article, but the concepts above are implementable with a little help from an experienced test manager. Use this to take the burden off the developers and spread around the responsibility for quality code -- because it really is everyone's job.

About the Author

Jeff Levinson is the Application Lifecycle Management practice lead for Northwest Cadence specializing in process and methodology. He is the co-author of "Pro Visual Studio Team System with Database Professionals" (Apress 2007), the author of "Building Client/Server Applications with VB.NET" (Apress 2003) and has written numerous articles. He is an MCAD, MCSD, MCDBA, MCT and is a Team System MVP. He has a Masters in Software Engineering from Carnegie Mellon University and is a former Solutions Design and Integration Architect for The Boeing Company. You can reach him at [email protected].
