Special Reports

Create a Quality Testing Program

Testing is a crucial component of the software development lifecycle. Combine testing tools with methodologies such as XP and TDD to boost quality assurance.

Testing practices vary widely among organizations and their application development processes. Some organizations, for example, employ highly sophisticated and structured approaches with trained quality assurance (QA) specialists and software tools that measure and track quality with the precision of a moon launch.

But these groups are outnumbered by organizations where testing remains an afterthought, staffed by inexperienced, poorly trained, and overworked staffers. These testers are accustomed to development cycles that run so long that any and all testing must be done in a single frenzied weekend, manually and without hard quantitative data on how much of the application was tested and how well it fared. And management dismisses as too expensive and unnecessary the software tools that the best QA shops take for granted.

It doesn't have to be that way. There's no substitute for experience and training, but some free tools—although they might not fully rival expensive commercial testing products—make it possible to build best practices in the most harried testing shop. In fact, you might be surprised to learn that many highly sophisticated QA operations are built around free and often open source tools.

Consider the activities that typically go into the testing process. At the most basic level, they include functional testing: making sure the application meets its requirements and identifying any errors in operation. In addition, most basic testing requirements include both performance and scalability testing. Performance testing ensures that an application meets its users' response-time needs, while scalability testing determines its ability to serve a given number of concurrent users.

Advanced testing is usually more technical and quantitative in nature, and its results can often give application developers actionable information on how to improve quality and performance. It is typically conducted by more technically inclined professionals who can write test code in a programming or scripting language. The goal is to find more subtle problems than functional errors, and to work side by side with developers in analyzing those errors.

In both types of testing, it's critical for the testers and the corresponding development team to have clear and unambiguous lines of communication. In particular, a development team should see the same data as the testers, be able to confirm the findings of QA easily, and have the ability to drill down into the data to do a deeper level of diagnosis. So the same tools should be useful to both groups.

Bringing Open Source to Bear
For both basic and advanced testing, a good starting point is JUnit, the Java-based regression testing framework hosted on SourceForge. JUnit implements a test harness that makes it easy to add, manage, and execute tests and to analyze their results as part of a larger framework. JUnit executes unit tests: small pieces of code designed to exercise code in the application.

You write a test in JUnit as a test method that exercises a small number of application features. To execute multiple tests in succession, JUnit provides a TestSuite object that runs any number of test cases together. TestSuite objects can also contain other TestSuites, so each developer can work on his or her own TestSuite and then easily combine them into a single one. To execute tests automatically, JUnit provides tools to define the suite to be run and display its results. You make your suite available to a TestRunner tool through a static suite method that returns your test suite.
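To make this concrete, here is a minimal sketch in the JUnit 3 style the article describes; the ShoppingCart class and its methods are hypothetical stand-ins for your own application code:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    // Tests for a hypothetical ShoppingCart class.
    public class ShoppingCartTest extends TestCase {

        // Each test method exercises a small number of features.
        public void testAddItemIncreasesCount() {
            ShoppingCart cart = new ShoppingCart();
            cart.addItem("book", 1);
            assertEquals(1, cart.getItemCount());
        }

        public void testEmptyCartTotalIsZero() {
            ShoppingCart cart = new ShoppingCart();
            assertEquals(0.0, cart.getTotal(), 0.001);
        }

        // The static suite method exposes the tests to a TestRunner;
        // suites from other developers can be combined here as well.
        public static Test suite() {
            TestSuite suite = new TestSuite();
            suite.addTestSuite(ShoppingCartTest.class);
            return suite;
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(suite());
        }
    }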

As its name implies, JUnit is intended for unit testing, which is normally a development activity. However, it can execute almost any type of test, and several extensions make it useful in functional testing too. One such SourceForge tool is Jameleon, a functional testing engine written in Java. Jameleon separates applications into features that get scripted independently. When a feature changes, only the script associated with it requires modification.

Several other open source tools can make a difference to the testing process. Another SourceForge project is TestMaker, a tool that software developers, QA professionals, and others in the application development process can use to check Web-based applications for functionality, performance, and scalability. It is maintained and enhanced by PushToTest, a Java testing consultancy. As you use your Web application with a browser, the TestMaker recorder writes a test agent script for you, letting you replay the script for functional testing.

Once you have recorded a script, TestMaker runs the test agent and displays results data in a live chart. The test agent reports your application's performance in transactions per second. This lets testers perform functional testing while generating data that developers can use to help evaluate and improve performance.

Move Beyond Functional Testing
Moving beyond functional testing is important if the applications are critical to the success of the business, because scalability and reliability over time are keys to success. More sophisticated approaches are required to accomplish these goals. One tool to help implement such approaches is JMeter (see Figure 1). JMeter is an Apache project that enables you to test and evaluate the performance of Java applications. You can use JMeter to test the performance of both static and dynamic resources, including files, servlets, scripts, Java objects, databases using JDBC, FTP services, and other application components.
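Although JMeter is configured chiefly through its GUI, a saved test plan can also be run unattended from the command line, which makes it easy to fold performance runs into a build or an overnight job. A minimal invocation might look like the following, where the plan and log file names are placeholders:

    jmeter -n -t webapp_plan.jmx -l results.jtl

The -n flag selects non-GUI mode, -t names the test plan to run, and -l names the file where results are logged.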

You should test performance during all phases of the application development process. In many cases, you can also do performance testing at the unit level, before you've assembled the complete application. For JavaServer Pages (JSP) or servlets, you can test your code with a sample database, or even with the database calls removed. Such testing might not be a completely accurate representation of the full application at that time, but it can expose some obvious problems.
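As a rough illustration, a unit-level performance check can be as simple as timing a call inside a JUnit test. The SearchService class and the 200-millisecond budget below are hypothetical, and wall-clock timing like this is noisy, so treat it as a smoke test rather than a benchmark:

    import junit.framework.TestCase;

    // A crude unit-level performance check for a hypothetical service,
    // run here with its database calls stubbed out.
    public class SearchPerformanceTest extends TestCase {

        public void testSearchCompletesWithinBudget() {
            SearchService service = new SearchService();
            long start = System.currentTimeMillis();
            service.search("testing tools");
            long elapsed = System.currentTimeMillis() - start;
            assertTrue("search took " + elapsed + " ms", elapsed < 200);
        }
    }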

Of course, there's a tradeoff in using open source tools for unit, functional, and performance testing: They are unsupported, and the update schedule might be erratic. You can add your own extensions because you have the source code, but that means you have in effect built your own tools that you must then maintain and enhance as a part of your test environment.

If those disadvantages seem insurmountable, corresponding commercial products can perform similar activities. These products sometimes cost thousands of dollars a seat, but you have the comfort of knowing that you can call someone to help with a problem or request new features. Among the more popular commercial testing products are Jtest from Parasoft, PurifyPlus from IBM Rational, JProbe from Quest Software, and QACenter and DevPartner Java Edition from Compuware.

One approach might be to use the open source tools until they become unwieldy or your testers outgrow their features, and then move to a corresponding commercial product. This way, you can assess the value to the application lifecycle before spending a significant sum on software licenses.

Tools Alone Are Not Enough
It should go without saying that any testing regime should apply a formal methodology that provides direction and focus to the effort. Testing-oriented methodologies such as Extreme Programming (XP) and test-driven development (TDD) give testing a critical role in building applications, putting it on par with development as an equal partner in the venture.

XP is a model of how you might achieve such cooperation. It places a premium on the production of working code rather than documentation of the process and results. Testing plays a big role in XP; tests are written prior to writing code, so when all the tests have been passed, the coding is complete. If the application doesn't pass a test and defects are found, more tests are written. Often these tests are unit tests, written to determine if any logic errors exist.

But XP also defines acceptance tests, which are black-box tests in that they simply compare expected outputs with actual ones. Acceptance tests focus more on function than the underlying logic, and help determine if an application does what it should.

Test-driven development (TDD) is an offshoot of XP that adopts XP's testing strategies without adhering to all aspects of the methodology. TDD requires that tests be written before the application code and run often during development. A failed test indicates a flaw in the code. More tests can be written during development, but none can be taken away. Once again, when an application passes all of its tests, it is done.
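In JUnit terms, a hypothetical red-green cycle might look like this; PriceCalculator is an invented example, with the test written first and then just enough production code to make it pass:

    import junit.framework.TestCase;

    // Step 1 (red): written first, this test fails (it won't even
    // compile) until PriceCalculator exists.
    public class PriceCalculatorTest extends TestCase {
        public void testDiscountIsAppliedToTotal() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.totalWithDiscount(100.0, 0.10), 0.001);
        }
    }

    // Step 2 (green): the simplest code that makes the test pass.
    class PriceCalculator {
        double totalWithDiscount(double total, double discountRate) {
            return total * (1.0 - discountRate);
        }
    }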

Testing isn't usually the glamorous part of the software development lifecycle. After all, it's the practice of finding imperfections in someone's creation. But that doesn't mean it isn't important, or that it isn't both a science and an art. Testing enables applications to deliver what they promise, and it's through a thorough testing program that end users can feel confident relying on software for their jobs.

But successful testing is done in partnership with development, a partnership that treats building and testing applications with equal importance. By using the same tools across these application lifecycle stages, sharing data, and collaborating according to a defined methodology, both groups can work toward the common goal of building reliable, high-performance, and scalable applications.

About the Author

Peter Varhol is the executive editor, reviews, of Redmond magazine and has more than 20 years of experience as a software developer, software product manager, and technology writer. He has graduate degrees in computer science and mathematics, and has taught both subjects at the university level.
