Inside VSTS

Get Your Testing Process Right

When it comes to developing a successful product, perspective and timing in testing are key. Jeff shares the steps for getting it right.

In previous articles, I focused mainly on using the tools available in Team System. This article is a slight departure from that -- I want to focus a little more on process. Maybe, just maybe, development organizations can use this as a wake-up call to the fact that things need to change. Some organizations have a great development process; others have a not-so-great process. Either way, you'll come away from this article with an idea of how best to leverage testers in your current process.

When developers demonstrate code to a customer and say, "We're done, what do you think?" and the customer says, "You got it wrong," whose fault is it? If you said it's the developers' fault, think again. So, is it the customer's fault? Actually, it probably isn't their fault, either. How about the testers, then, since this is an article about testing? No, that still isn't it.

In this case, the problem doesn't lie with any particular group, but in the process.

Here's the basic problem: If you allocate 10 percent of your development time to testing (a very low percentage, but it's pretty much the norm), nine out of 10 times, that 10 percent is the last 10 percent of the software development process. The problem with this is that all of the bugs are found when there's virtually no time left to fix them. And when this happens, either the schedule slips (and slipping the schedule at this late date will upset a lot of people) or the software is released with a lot of bugs (this upsets a lot fewer people than the first option, believe it or not).

So how do you fix this problem? And better yet, what really is the problem?

The problem is subjectivity. Users want a feature, but they don't know how they want it implemented and they can't describe it accurately. This is understandable; it's why we developed prototyping -- so that once users played with some screens, they'd have a better idea of what they're asking for. Still, requirements remain ambiguous despite our best efforts, and prototypes leave a lot of wiggle room because they focus on the user interface rather than the underlying logic.

But you can fix that by looking beyond the requirements to test cases. Here's the key to solving the problem: Test cases are objective. If users approve the test cases, then when those tests pass, the requirement is complete!
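
To make that concrete, here's a minimal sketch of what an approved test case might look like once it's captured as an executable test (I'm using Team System's unit testing framework here; the discount requirement, the OrderCalculator class and the dollar amounts are hypothetical examples, not taken from any real project):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Approved test case (hypothetical): "An order totaling $150.00 receives a
// 10 percent discount, for a final total of $135.00." When this test passes,
// the requirement is -- objectively -- complete.
[TestClass]
public class OrderDiscountAcceptanceTests
{
    [TestMethod]
    public void OrderOver100Dollars_Gets10PercentDiscount()
    {
        var calculator = new OrderCalculator();   // hypothetical class under test
        decimal total = calculator.CalculateTotal(orderAmount: 150.00m);
        Assert.AreEqual(135.00m, total);
    }
}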

Sounds simple, right? It turns out to be a lot harder to implement than it sounds -- otherwise, everyone would already be doing it.

The first step is to get a test manager involved by the conclusion of the requirements phase -- that is, as requirements are being finished, which means this approach also applies to agile development. The test manager should be working on the test plan and estimating how long it will take to test the requirements. Test cases should be written by professional testers or functional analysts, under the test manager's oversight, in parallel with the functional specifications.

These tests should be functional, system and user acceptance tests (I'm ignoring other types of tests which are used for other purposes). These tests, created at this point, provide two key benefits: Users now have objective scripts by which they'll accept the delivered software, and developers have objective tests that tell them when they're done with the code.

Much has been made of unit testing and Test-Driven Development (TDD), so why haven't I mentioned them yet? I believe strongly in unit testing, and I believe that TDD done right provides benefits -- but they aren't the be-all and end-all of code quality. They provide information on one specific aspect of it. A unit test isn't a functional test or a system test, and because unit tests aren't supposed to be strung together, they never will be. Unit tests can be created in the context of functional tests to validate, at the method level, that a certain path through the code produces a certain result. So the basic rule of thumb here is to use (automated) unit tests in support of functional testing.
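
As a sketch of that rule of thumb, the test below pins down one specific path through a single method -- the kind of method-level check that supports a broader functional test without pretending to be one. The ShippingCalculator class, its GetRate method and the rates are hypothetical:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShippingCalculatorTests
{
    // Verifies one path through one method: the overweight-surcharge branch.
    // A functional test would exercise the whole shipping flow end to end;
    // this unit test merely backs it up at the method level.
    [TestMethod]
    public void GetRate_PackageOverWeightLimit_AddsSurcharge()
    {
        var calculator = new ShippingCalculator();      // hypothetical class
        decimal rate = calculator.GetRate(weightInPounds: 75);
        Assert.AreEqual(24.99m, rate);                  // hypothetical base rate plus surcharge
    }
}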

The next step is to break the requirements down into testable units of functionality -- that is, small parts that can each be tested with functional tests (whereas system tests exercise the entire requirement). In this way, coding and testing can be done in an iterative fashion, even with formal processes. Over the course of development, each part will have been tested numerous times and its bugs found early.
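
For example, a single "customer checkout" requirement might break down into small units like the two below, each with its own functional test that can run as soon as that piece is coded. The ShoppingCart class, the promotion code and the tax figures are all hypothetical:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CheckoutFunctionalTests
{
    // Testable unit 1: apply a promotion code to the cart.
    [TestMethod]
    public void ApplyPromotionCode_ValidCode_ReducesTotal()
    {
        var cart = new ShoppingCart();
        cart.AddItem("WIDGET-1", price: 20.00m, quantity: 2);
        cart.ApplyPromotionCode("SAVE10");               // hypothetical 10 percent promotion
        Assert.AreEqual(36.00m, cart.Total);
    }

    // Testable unit 2: calculate sales tax from the shipping state.
    [TestMethod]
    public void CalculateTax_WashingtonAddress_UsesStateRate()
    {
        var cart = new ShoppingCart();
        cart.AddItem("WIDGET-1", price: 20.00m, quantity: 2);
        cart.SetShippingState("WA");
        Assert.AreEqual(3.80m, cart.Tax);                // 9.5 percent of $40.00, a hypothetical rate
    }
}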

The key here is that developers know they're done when the objective test cases pass -- which means they've already verified the functional tests before the testers ever get the code. As a result, the formal testing process at the end of the development cycle can be used to find complex system and integration bugs, instead of finding things that should have been caught during development.

The process I've outlined isn't perfect and requires organizations to accept some basic realities about how they do development. Inevitably, there will be details missing in this short article, but the concepts above are implementable with a little help from an experienced test manager. Use this to take the burden off of the developers and spread around responsibility for quality code -- because it really is everyone's job.

About the Author

Jeff Levinson is the Application Lifecycle Management practice lead for Northwest Cadence, specializing in process and methodology. He is the co-author of "Pro Visual Studio Team System with Database Professionals" (Apress, 2007), the author of "Building Client/Server Applications with VB.NET" (Apress, 2003) and has written numerous articles. He is an MCAD, MCSD, MCDBA and MCT, and is a Team System MVP. He has a Master's in Software Engineering from Carnegie Mellon University and is a former Solutions Design and Integration Architect for The Boeing Company. You can reach him at [email protected].
