Testing in Application Lifecycle Management

Understand the five phases of the application lifecycle, and learn which tests to perform in each of them.

Testing should occur throughout the application lifecycle. If you test your application as a project progresses, then you'll encounter fewer bugs when you deploy. It is important to start testing in the planning phase of your project, and continue testing through the analysis and requirements, architecture and design, construction, and testing and deployment phases. These five loosely grouped phases of development represent a typical spiral or iterative lifecycle pattern (see Figure 1). You should perform various types of tests—such as business and process validation, architecture analysis, and unit, system, and integration testing—in each phase (see Table 1). In this article you'll learn what type of tests to use and how to implement them throughout your application lifecycle.

Let's examine the type of testing done in each phase:

  • The planning phase requires business validation. Your business units need to determine whether the business model allows for an application development or purchase process that will lead to a more efficient environment. This validation can weigh a combination of resource savings, transactional speed, better customer service, and more. You do not look at technology in this phase.
  • The analysis or requirements phase happens when analysts work with business users to create use cases. After the use cases are created, you complete a testing exercise to weed out unnecessary, undocumented, or incorrect steps in the process.
  • The architecture and design phase varies: you can create your architecture at a high level or at a deeper level. From either view, the architecture must be tested, and you can use a variety of methods to test it.
  • Testing in the construction phase happens after the detailed design is finished and the code has been written. You can associate the type of testing in this phase with application development: unit testing and functional testing. In this phase you test the smallest unit of an application (a method); this is called white-box testing. The methods that make up a system function are also tested together in this phase; this is called black-box testing. All of the unit tests that make up a functional area must pass before your functional tests are complete.
  • The testing or deployment phase, typically handled by dedicated, expert testers, is designed to test the system, or the sum of the functions that the system performs based on functional and non-functional requirements. This is also called black-box testing. Integration testing examines whether the system works well in the overall business environment. Does the system send and receive messages from other systems correctly?

Once you finish these testing cycles, the application is deployed and evaluated. Then the process begins again. Evaluating change requests is part of the planning phase, which continues throughout the entire lifecycle. You should go through each of these testing phases with your change requests.

Security Testing
You might think it's strange that security testing sits in the middle of this process. Most lifecycle-pattern diagrams don't include security, but it's important to explain how it relates to testing. Security is not an afterthought. It is embedded in everything you do as a member of a software development team.

Not long ago, Microsoft was painted as the villain in the security arena. Since then, Microsoft has established a world-class security lifecycle, similar to the one shown in Figure 1, in which security sits at the center and everything else revolves around it. Other companies should follow Microsoft's lead.

Why is security a big deal? With all of the hype around security holes and zero-day vulnerabilities, security is even more critical than it has been in the past. Your company can lose its reputation and trust because of security violations. Have you heard of the Sarbanes-Oxley Act (SOX)? SOX was passed by Congress in the wake of the Enron scandal, and deals with business security issues such as access, auditing, and accountability.

What type of security testing do you complete in each phase of the application lifecycle?

  • In the planning phase, your business is responsible for determining its security requirements. This process involves data classification, data access, and auditing. Your business tests these requirements as part of its evaluation process.
  • The analysis or requirements phase ensures that the security requirements stated in the planning phase are enforced. You can evaluate the process through which the user interacts with the system to validate these requirements.
  • You test in the architecture and design phase to ensure that your architecture supports the type of security your business users require. For example, if you require a security mechanism that must authenticate users before every operation, you must ensure that the architecture supports it. If your system takes feeds from other systems, then you must also ensure the architecture can support the necessary security to accept those feeds.
  • During the construction phase, you must create security tests to validate that your code can withstand hacking attempts (a brief sketch follows this list).
  • Finally, during the testing or deployment phase, you need to make sure that the overall functionality of the system is secure. You must ensure that using two modules of the system in combination doesn't open up a new security hole.
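As a minimal illustration of a construction-phase security test, the Python sketch below probes a hypothetical lookup_user data-access helper with injection-style input. The function, its validation rule, and the test are assumptions made for this example, not part of any particular framework.

import unittest

# Hypothetical data-access helper under test; assumed to validate its input
# before building a parameterized database query.
def lookup_user(user_id):
    if not user_id.isalnum():
        raise ValueError("user id must be alphanumeric")
    # ... a parameterized database query would go here ...
    return {"id": user_id}

class SecurityTests(unittest.TestCase):
    def test_rejects_injection_style_input(self):
        # An attacker-style payload should be refused, not passed to the database.
        with self.assertRaises(ValueError):
            lookup_user("1; DROP TABLE users; --")

    def test_accepts_well_formed_id(self):
        self.assertEqual(lookup_user("abc123")["id"], "abc123")

if __name__ == "__main__":
    unittest.main()

The same pattern applies at any boundary where untrusted input enters the system.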

A Test Approach for Each Phase
How do you perform testing and validate the results at each phase in the software development lifecycle? Some phases are obviously easier than others from a testing standpoint. As a developer, you have a given set of parameters with which to work. You test code; you test performance; you test functionality. But, in some phases, you use more "art" than "science" to complete testing. Let's look at testing approaches for each of the phases.

Planning
The business is the only group that can conduct testing in the planning phase. The business group understands the value proposition of an idea. They are able to formulate the return on investment, and weigh the dollar cost against improvements to process and to the company's public image. In many cases, this process isn't clear cut because you are dealing with both tangible and intangible benefits, and the line between the two is sometimes blurred. Here is a quick set of guidelines for evaluating the value of a planned improvement:

  • Will your business save time in terms of people? For example, can fewer people do the same amount of work, or can the same amount of work be done in less time?
  • Will your business improve its reputation? This is a sometimes nebulous concept that's often driven by your customers.
  • Will the improvement help your business save on maintenance costs? This is typically a service-level–agreement issue. For example, will the improvement reduce the amount of necessary maintenance?
  • Will the new system enable another system that is critical? Will the new system require another system to make it work? Sometimes this applies to upgrading applications or making improvements.

This list of guidelines can go on and on, and it can almost always be tailored to a particular project or situation. Your company needs to define its own set of guidelines.
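To make the "weigh the dollar cost" part concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is hypothetical, and it deliberately ignores the intangible benefits (reputation, customer goodwill) that often tip the decision.

# Hypothetical back-of-the-envelope ROI check for a planned improvement.
build_cost = 250_000           # estimated one-time development cost ($)
annual_maintenance = 20_000    # estimated yearly upkeep ($)
hours_saved_per_year = 4_000   # staff hours the new system frees up
loaded_hourly_rate = 60        # fully loaded cost per staff hour ($)
years = 3                      # evaluation horizon

savings = hours_saved_per_year * loaded_hourly_rate * years
cost = build_cost + annual_maintenance * years
roi = (savings - cost) / cost

print(f"savings=${savings:,}  cost=${cost:,}  ROI={roi:.0%}")
# savings=$720,000  cost=$310,000  ROI=132%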

Analysis or Requirements
This phase is tough. The leader(s) of this activity should be your architect or lead analyst. Your analyst has a better understanding of the business. Your architect has a better understanding of how the logic is implemented, and an overall view of whether two requirements that are seemingly different might actually be related or conflict with one another. Testing requirements is an art; it is probably the most critical phase of your development effort. If the requirements are wrong, then what follows will be wrong, and you can't recover without spending a lot of additional money.

So how do you test requirements? Testing your requirements is a political and social art, rather than a science. Different stakeholders have different views of what a system should do. Everybody wants a system that will help them, and not necessarily their colleagues. This means that you end up being a mediator. Take this to heart: you must have good people skills and look at the requirements from all sides. Before testing your requirements, make sure that the requirements are not ambiguous. Ambiguity is the biggest cause of problems in systems. Ensure that all requirements are clearly outlined and documented.

Here are my favorite phrases from requirements documents: "The system should be robust," and "the system should respond in a reasonable period of time." What do robust and reasonable mean? It's unclear, and that ambiguity illustrates an important point about the analysis or requirements phase: even if everyone agrees on the requirements, you can still get them wrong, because you can't measure whether or not the requirement was met!
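One way to see why measurability matters: once the stakeholders replace "a reasonable period of time" with an actual number, the requirement can be checked mechanically. The Python sketch below is a minimal illustration; the fetch_order_status operation and the two-second threshold are invented for the example.

import time
import unittest

# Hypothetical operation under test; it stands in for whatever call the
# requirement covers. The two-second threshold is an assumed, agreed-upon
# number, which is exactly what makes the requirement testable.
def fetch_order_status(order_id):
    time.sleep(0.1)  # placeholder for real work
    return "SHIPPED"

class ResponseTimeRequirement(unittest.TestCase):
    def test_responds_within_two_seconds(self):
        start = time.perf_counter()
        fetch_order_status("A-1001")
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 2.0, "requirement: respond within 2 seconds")

if __name__ == "__main__":
    unittest.main()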

Several opinions can influence requirements development. You need three or more stakeholders in a room to "test" the requirements. At least one of these stakeholders must be a daily user of the system you intend to augment or replace, and another must be in charge of the system and have final say. Take them through the use cases. The stakeholders will probably disagree as you explain the system processes, but allowing them to discuss issues will help you come to an acceptable solution. You must also make sure that the requirements documents are officially signed off. This creates a measure of responsibility and traceability among those people who sign the documents.

Architecture Testing
This is a long and involved discussion; I'm providing only a brief overview. An excellent method for testing architectures, the Architecture Tradeoff Analysis Method (ATAM), was developed at Carnegie Mellon University's Software Engineering Institute. The process lets you examine the architecture and compare the patterns you find in it against the known characteristics of those patterns. It provides a means to gauge whether you are on the right track. ATAM isn't completely exact, but it provides, with reasonable reliability, a measure of the probability that your architecture is correct.

Construction
Testing in the construction phase is usually straightforward. Developers write unit tests to evaluate method-level calls. These tests are called white-box tests. You use white-box tests when you understand everything "going on under the hood." You don't use magic in this process, and you can follow the function calls step by step.
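For illustration, here is a minimal white-box unit test in Python using the standard unittest module. The apply_discount method is hypothetical; the point is that the tester knows exactly what the code does and can target its edge cases directly.

import unittest

# Hypothetical method under test: applies a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()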

There are various methodologies for incorporating white-box tests into your development process. You can use test-driven development in agile methodologies such as XP. You can use continuous integration in conjunction with unit testing to catch build-time breaks quickly. Using structured testing phases doesn't preclude also using agile testing methodologies; in longer projects, you generally keep the structured phases and run agile-style iterations within the construction phase. You can refer to various books to explore other testing methodologies for the construction phase.

After the unit testing has been completed, analysts run functional tests against specific pieces of functionality. These functional tests are called black-box tests. Analysts don't need to know what occurs at the code level; they need only to test whether the application responds appropriately.
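By contrast, a black-box functional test exercises the application only through its externally visible interface. The sketch below assumes the system under test exposes an HTTP endpoint; the URL, resource, and field names are made up for illustration, and a running test environment is assumed.

import json
import unittest
from urllib import request

# Hypothetical endpoint of the system under test; no knowledge of the code
# behind it is needed, only its externally visible behavior.
BASE_URL = "http://test-server.example.com"

class OrderLookupFunctionalTest(unittest.TestCase):
    def test_known_order_is_returned(self):
        with request.urlopen(f"{BASE_URL}/orders/1001") as resp:
            self.assertEqual(resp.status, 200)
            body = json.loads(resp.read())
        self.assertEqual(body["orderId"], 1001)
        self.assertIn(body["status"], {"OPEN", "SHIPPED", "CLOSED"})

if __name__ == "__main__":
    unittest.main()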

Testing or Deployment
For most of you, tests in the deployment phase are large-scale system and integration tests. If possible, you should automate them so you can run through many tests quickly. But you often can't automate these tests, because test scripts have to be created and run manually by testers. (You usually see users performing these tests under the direction of the development team or a test lead.) These tests are called black-box tests because you do not know what is going on inside the code; these tests interact only with the application interface(s).
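Where automation is possible, even a thin scripted check of cross-system message flow quickly answers the question of whether the system sends and receives messages from other systems correctly. The sketch below is illustrative only: submit_order and find_reservation are hypothetical client helpers, faked here with an in-memory store so the example runs on its own; in a real integration test they would call the two systems' actual interfaces.

import time
import unittest

# Stand-ins for the two systems' client helpers. In a real integration test
# these would call the systems' actual interfaces (HTTP, message queues, file
# drops); here an in-memory dict fakes the downstream inventory store so the
# sketch runs on its own.
_inventory_reservations = {}

def submit_order(order_id, sku, quantity):
    # Real version: send the order message and let the middleware deliver it.
    _inventory_reservations[order_id] = {"sku": sku, "quantity": quantity}

def find_reservation(order_id):
    # Real version: query the inventory system's interface.
    return _inventory_reservations.get(order_id)

class OrderToInventoryIntegrationTest(unittest.TestCase):
    def test_order_produces_inventory_reservation(self):
        submit_order("IT-5001", sku="WIDGET-42", quantity=3)

        # Poll briefly; in a live environment the message travels through
        # asynchronous middleware and may take a moment to arrive.
        reservation = None
        for _ in range(10):
            reservation = find_reservation("IT-5001")
            if reservation:
                break
            time.sleep(0.5)

        self.assertIsNotNone(reservation, "inventory system never saw the order")
        self.assertEqual(reservation["quantity"], 3)

if __name__ == "__main__":
    unittest.main()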

Many different types of tests are performed in all phases of the application lifecycle. Maybe you haven't previously considered some of these activities to be testing, but they are, in fact, all tests that you must run on your applications. If you don't start your testing until the construction or testing phase, you will have more difficulty building a successful application. If you take the time to test the business analysis, requirements, and architecture as the project progresses, then you'll encounter fewer bugs when you reach the testing or deployment phase.

About the Author

Jeff Levinson is the Application Lifecycle Management practice lead for Northwest Cadence specializing in process and methodology. He is the co-author of "Pro Visual Studio Team System with Database Professionals" (Apress 2007), the author of "Building Client/Server Applications with VB.NET" (Apress 2003) and has written numerous articles. He is an MCAD, MCSD, MCDBA, MCT and is a Team System MVP. He has a Masters in Software Engineering from Carnegie Mellon University and is a former Solutions Design and Integration Architect for The Boeing Company. You can reach him at [email protected].
