Building Quality Applications
Quality doesn't happen by accident. Development teams have to work at it intelligently.
by Peter Varhol

November 13, 2006

As software becomes more and more pervasive in everyday life, the need for higher quality grows almost exponentially. Mission critical has a far broader meaning than it did even a few years ago, with commerce dependent on Web sites, kiosks, up-to-the-second transactions, event processing, and a whole slew of other applications that keep the business moving forward on a daily basis.

As we depend more on software, that software is also becoming more complex. It can include hundreds of thousands of lines of code (or more), several different programming languages, multiple application platforms, loosely coupled interacting components, legacy code, and much more. Certainly our development tools are better than they were a decade ago, but the complexity of the software still greatly outstrips the advances in tools.

Which brings us to the question of how we measure and improve the overall quality of that software. Depending on the resources available and the culture of the development team, the answer can run the gamut from nothing at all to sophisticated tools, dedicated professionals, and statistical methods.

Today, quality is worth fighting for, and it is worth doing right. The downside—loss of business, customer or user dissatisfaction, or errors in business information—will have a much greater impact today and in the future. In some cases, poor quality software can have life-threatening consequences, raising the stakes still higher.

For those who give thought to application quality, the focus is usually on the functional behavior of an application after most or all of the code has been written. While this focus is surely important, it represents one of the most expensive times in the process to find errors. In fact, the farther along an application progresses in the development life cycle, the more expensive it gets to address bugs and other application errors. That expense is one reason why the first step, writing accurate and unambiguous requirements, is critical.

Quality Begins with Requirements
There are two reasons why good requirements make a difference. First, the development team has to know precisely what to build. Without clear requirements, there is a lot of uncertainty downstream when it comes to implementing features to fulfill those requirements. Second, requirements provide a basis against which to test an application in the later stages of development, because bugs encompass more than just application crashes or wrong data. Bugs also include incorrect functionality, which occurs when ambiguous requirements are misinterpreted by developers, or when a feature is missed or implemented incorrectly.

Only by testing to requirements can an application have a demonstrated level of quality and adherence to original intent. Of course, the emphasis on requirements means that such requirements must be clear, complete, and traceable. Often words and even diagrams can be ambiguous in communicating needs, and analysts responsible for requirements are increasingly turning to formal approaches such as the Unified Modeling Language (UML).

Developers clearly play a key role in building high-quality applications. The process of turning requirements into specifications, and specifications into a working application, offers ample opportunity for error. Errors can stem from misunderstandings, poor communication, bad coding practices, or a lack of understanding of the underlying application platforms.

Good processes, including code reviews and check-in gates, can take up much of the slack. Developer-oriented testing tools can also play a big role. Today, being careful and meticulous in writing code is necessary but not sufficient. You have to not only trust, but also verify.

Here is where tools fit in. By using carefully defined processes along with tools to verify code at frequent intervals, development teams can keep a good handle on quality throughout the development cycle. With regular use of error detection, performance, and code coverage tools, it is possible to estimate the number of errors and poor coding practices, and to track whether the bug count is trending up or down.

In practice, however, developers tend not to be big users of testing and quality tools, even free ones. They tend to use them when a problem is apparent but intractable, rather than on a recurring basis. Because of this sporadic use of tools, and because the development phase is so critical to application quality, there remain outstanding opportunities to improve quality during development.

To alleviate some of the problems inherent in the traditional development process, teams are increasingly turning to so-called agile development methods. Agile methods tend to break up the requirements and development steps into smaller increments that can be more easily managed and turned around more quickly. This approach provides for more frequent user feedback, which can find common problems and get them fixed quickly.

Agile processes also insist on extensive unit testing. In general, if a bug is discovered, the team writes additional unit tests for that particular piece of code. This approach can reduce bugs, but doesn't necessarily mean that the feature is what the user needs.
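
To make that practice concrete, here is a minimal sketch of what such a regression-style unit test might look like, written here in Java with JUnit 4. The Account class and its boundary bug are hypothetical stand-ins for whatever code a bug report pointed to, not drawn from any particular project.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class AccountRegressionTest {

        // Minimal stand-in for the code under test (hypothetical example).
        static class Account {
            private double balance;
            Account(double opening) { balance = opening; }
            double getBalance() { return balance; }
            void withdraw(double amount) {
                if (amount > balance) {
                    throw new IllegalArgumentException("insufficient funds");
                }
                balance -= amount;
            }
        }

        // Regression test added after a bug report: withdrawing the exact
        // balance was once rejected as an overdraft (a classic boundary bug).
        @Test
        public void withdrawingExactBalanceShouldSucceed() {
            Account account = new Account(100.00);
            account.withdraw(100.00); // the boundary case that originally failed
            assertEquals(0.00, account.getBalance(), 0.001);
        }

        // The neighboring case, to pin down the boundary from both sides.
        @Test(expected = IllegalArgumentException.class)
        public void overdrawingShouldBeRejected() {
            Account account = new Account(100.00);
            account.withdraw(100.01);
        }
    }

Tests like these accumulate into a suite that runs on every build, so the same bug cannot quietly return.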

Some development efforts don't lend themselves well to agile processes. Agile tends to be difficult for commercial product development, for example, as well as for exceptionally complex software that cannot easily be broken down into small development steps. Many projects that are more amenable have already adopted at least some of the concepts of agile development, which have the potential to improve overall quality.

Functional Testing and Deployment
During the latter phases of development, functional testing takes over. Functional testing is all about ensuring that an application meets requirements with as few bugs as is feasible. QA teams work with the application primarily through the user interface as a user might, methodically working through individual features as well as representative workflows. Tools are essential here too. QA teams typically test every functionally complete build until the software is frozen and ready to go into production. This process can mean dozens or even hundreds of complete iterations through the software. Most teams that are serious about quality automate this process by recording keystrokes and mouse clicks, which can then be played back against the application on an ongoing basis.
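
To give a flavor of what such automated UI scripts look like, here is a minimal sketch using the Selenium WebDriver Java API, one common open-source option. The URL, element IDs, and expected banner text are hypothetical placeholders.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // A recorded-style functional test: drive the UI as a user would,
    // then verify the visible result. All IDs and URLs are hypothetical.
    public class LoginWorkflowTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login");
                driver.findElement(By.id("username")).sendKeys("qa-user");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();

                // Assert on what the user actually sees, not internal state.
                String banner = driver.findElement(By.id("welcome")).getText();
                if (!banner.contains("Welcome")) {
                    throw new AssertionError("Login workflow failed: " + banner);
                }
                System.out.println("Login workflow passed");
            } finally {
                driver.quit();
            }
        }
    }

Once a workflow is captured this way, it can be replayed against every build at essentially no marginal cost, which is what makes dozens or hundreds of iterations feasible.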

Of course, tracking bugs, identifying quality trends, and estimating overall quality also fall into the realm of QA testing. These varied activities make it essential for QA teams to invest in bug tracking (shared with the development teams, of course), test harnesses, and analytical software.

Virtually all software ships with known bugs and limitations. One of the goals of QA testers is to categorize and prioritize the bugs they find. P1 bugs are the showstoppers; it is rare for software to go into production with this category of bugs. Beyond that, everything is negotiable. A high-priority bug could be left in if it were an edge case in usage, while a lower-priority bug might be among the first fixed if users were expected to encounter it regularly. Making these determinations requires lots of data and the ability to analyze and extrapolate from that data.
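
As a rough illustration of the kind of triage data involved, here is a small Java sketch. The Bug fields and the ordering rule are invented for the example, not an industry standard: showstoppers come first, and after that, bugs users hit most often move to the front of the line.

    import java.util.Comparator;
    import java.util.List;

    public class BugTriage {

        enum Priority { P1, P2, P3, P4 } // P1 = showstopper

        // Hypothetical triage record: priority plus observed user impact.
        record Bug(String id, Priority priority, double hitsPerDay) {}

        public static void main(String[] args) {
            List<Bug> backlog = List.of(
                new Bug("BUG-101", Priority.P2, 0.1),   // rare edge case
                new Bug("BUG-102", Priority.P3, 250.0), // users hit it daily
                new Bug("BUG-103", Priority.P1, 5.0));  // showstopper

            // Fix order: P1s first, then by how often users hit the bug.
            backlog.stream()
                .sorted(Comparator
                    .comparing((Bug b) -> b.priority() != Priority.P1)
                    .thenComparing(
                        Comparator.comparingDouble(Bug::hitsPerDay).reversed()))
                .forEach(b -> System.out.println(b.id() + " " + b.priority()));
        }
    }

Note that under this rule the frequently encountered P3 outranks the rarely seen P2, which is exactly the kind of negotiated outcome described above.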

Approaching application deployment, good QA teams will also load test a server-based application to ensure it can support the intended number of users. This technique differs from developers collecting and acting on raw performance data in that it deliberately seeks out bottlenecks in processing power, network bandwidth, or other critical computing resources.
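
For illustration, here is a bare-bones load-generation sketch in Java. Real QA teams would reach for dedicated load-testing tools; the target URL, user count, and request volume here are placeholders.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Simulate N concurrent "users" hammering one URL, then report
    // throughput and worst-case latency. All parameters are hypothetical.
    public class SimpleLoadTest {
        public static void main(String[] args) throws InterruptedException {
            final String target = "http://localhost:8080/app"; // placeholder
            final int users = 50;
            final int requestsPerUser = 20;

            AtomicLong completed = new AtomicLong();
            AtomicLong worstLatencyMs = new AtomicLong();

            ExecutorService pool = Executors.newFixedThreadPool(users);
            for (int u = 0; u < users; u++) {
                pool.submit(() -> {
                    for (int r = 0; r < requestsPerUser; r++) {
                        long start = System.nanoTime();
                        try {
                            HttpURLConnection conn =
                                (HttpURLConnection) new URL(target).openConnection();
                            conn.getResponseCode(); // issue the request
                            conn.disconnect();
                            long ms = (System.nanoTime() - start) / 1_000_000;
                            worstLatencyMs.accumulateAndGet(ms, Math::max);
                            completed.incrementAndGet();
                        } catch (IOException e) {
                            // A failed request under load is itself a finding.
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.println("Completed: " + completed.get()
                + ", worst latency: " + worstLatencyMs.get() + " ms");
        }
    }

Even a crude harness like this exposes whether response times degrade gracefully or collapse as concurrency rises, which is the question load testing exists to answer.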

The Holy Grail of Quality
Despite meticulous planning and testing up front, few people believe that applications in production can be made bulletproof using the current state of technology. Deployed applications are likely to have issues, but those issues increasingly stem from problems that are very difficult or impossible to find during design, development, and testing. For example, the problem may be an application's interaction with new operating system patches or other software installed on the server after the fact.

In the past, the isolation between discrete steps of the application life cycle prevented us from learning of and quickly fixing quality issues in deployed applications. That may be changing. To borrow a phrase from the world of systems engineering, the goal is now to reduce the Mean Time to Repair (MTTR) a flawed application, test it once again, and get it back into production.
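
As a point of reference, MTTR is conventionally computed as the total time spent on repairs divided by the number of repairs. A team that spends 20 hours resolving five production defects, for example, has an MTTR of four hours per defect; the goal is to drive that average down across the whole cycle of defect report, fix, retest, and redeployment.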

Past production monitoring tools generated lightweight error and environment data that was sufficient for getting the application running again, but not for diagnosing its flaws. Even when the data was useful, it wasn't possible to move it into developer tools for in-depth analysis of those flaws.

Today, developers can increasingly make use of production data, which makes it possible to identify and analyze an application defect more quickly. Once the defect is addressed, the application can be retested for regressions using existing functional scripts and placed back into production.

Ideally, the availability of testing assets across the application life cycle will improve quality, but there is no silver bullet. Quality still requires careful work, attention to detail, and testing at all stages of the development life cycle, and, as in any profession, the better the tools, the better the job teams can do.

About the Author

Peter Varhol is FTPOnline's editor in chief. He has more than 20 years of experience as a software developer, software product manager, and technology writer, holds graduate degrees in computer science and mathematics, and has taught both subjects at the university level.
