
Best Practices for Building Quality and Performance in Apps
A life-cycle approach is the best way to assure application performance is a part of the quality process.
by Steve Dykstra

November 17, 2006

Today's highly complex infrastructures and applications, coupled with limited resources, time constraints, and the need for greater collaboration among IT teams, make implementing an effective application performance assurance strategy a challenge. Despite this challenge and the amount of work involved, businesses want to take their service from good to great, and they recognize the fundamental importance of assuring application service. Those businesses that meet the challenge stand to gain substantial benefits, including optimally running applications, satisfied end users, efficient use of IT staff, and most importantly, success and growth.

For mission-critical business applications, application performance assurance—an important part of application quality—is a continuous process spanning both pre-production and production. These applications are critical to your business' ability to improve profitability and extend its market reach. Poorly performing applications can hinder customer service, render employees unproductive, and directly affect revenue. To avoid this chain of events, your business must demand consistently high levels of performance.

To ensure these high performance levels, your application performance assurance practices must measure and provide feedback on how a given system meets the needs of end users on an ongoing basis. You must integrate performance profiling with load testing to ensure a well-rounded application performance assurance strategy. Unlike other methods, this combination offers you an end-to-end approach that builds performance into the application early in the development life cycle.

Driven by requirements, application and system testing is an iterative process strengthened by monitoring practices that provide you with views of application and system performance from both internal and external end-user perspectives. To be truly effective, you must tie this iterative process together with an application performance assurance strategy. This strategy should provide a streamlined way to correlate all performance data so the team can make more informed decisions about changes and improvements.

Weave Best Practices into Quality Assurance Process
While no one process or methodology fits every IT environment, you should consider several general best practices for managing application performance. A disciplined approach is at the heart of an ideal service assurance solution. This solution should bring together the key components of performance requirements planning, predictive analysis, infrastructure testing, and monitoring into a single, integrated process.

IT must also be able to identify and eliminate production performance problems as early as possible. This solution should provide answers to tough performance questions about IT application delivery. Lastly, the ideal solution should allow teams to build performance into the application from the earliest phases of the development life cycle, rather than testing it in the final stages before deployment when it's often too late for you to resolve problems effectively.

Best Practice #1: A Disciplined Approach to Gathering Requirements
The root of many performance issues is a poor understanding of required and expected service levels. You must develop performance requirements during project planning, beginning with disciplined performance analysis that clearly outlines a business-oriented definition of production service-level requirements.

Unfortunately, infrastructure and performance requirements are often inadequately addressed in the requirements phase. As difficult as it can be to predict the future, you need to design a business-critical application and plan its supporting infrastructure with future demand in mind.

To start, create a logical flow of how the application will work and, based on the business type and size, estimate how much transaction volume the application must handle. Walk through the logic of the application to ensure it is correct, as straightforward as possible, and meets the stated business objectives. Make sure that the appropriate business and technology stakeholders are represented in the process of application and infrastructure planning.
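
As a simple illustration, the following back-of-envelope sizing sketch (in Python) turns business inputs into a planning target; all figures are hypothetical placeholders that would come from your stakeholders.

# Rough sizing: estimate peak transaction volume from business inputs.
# Every figure below is a hypothetical placeholder.
daily_active_users = 50_000          # assumption: from business stakeholders
transactions_per_user_per_day = 12   # assumption: from the application's logical flow
peak_to_average_ratio = 4            # assumption: traffic clusters in busy hours

avg_tps = (daily_active_users * transactions_per_user_per_day) / 86_400
peak_tps = avg_tps * peak_to_average_ratio

print(f"Average load: {avg_tps:.1f} transactions/sec")
print(f"Planning target (peak): {peak_tps:.1f} transactions/sec")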

Testers need to take into account how Web, application, and database servers perform individually and as a connected infrastructure. Generally, network specialists are the first group called in to troubleshoot production issues. They also work with other IT teams to manage infrastructure growth as a business expands. Because their knowledge of the network is critical, the operations team should naturally be involved in requirements gathering from the start.

Best Practice #2: Proactive Performance Analysis During Development
Identify and address the majority of performance issues before an application goes live, because testers can catch problems at a stage when they are less expensive and easier to fix. The goal of testing is to manage the risk of deploying an application into production. Although configuring and tuning applications in production is more costly, testers can't necessarily avoid it altogether, particularly in light of constantly changing business conditions. However, the earlier defects are uncovered and resolved, the greater the cost savings you will achieve.

Once the infrastructure design is agreed upon and the initial components are in place, you can build a prototype of the application that touches all tiers of the infrastructure. This is a good way to test and monitor how the individual infrastructure components will work together. As soon as an application creates traffic between itself and its servers, testers can measure how many network resources the application consumes and how much latency it incurs, using a single transaction in an isolated environment.
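
As an illustration, the following Python sketch times one transaction and counts the bytes it returns; the URL is a hypothetical placeholder, and a commercial profiling tool would capture far more detail, such as request-side bytes and per-hop latency.

# Time a single transaction in an isolated environment and use the
# response size as a rough proxy for network resource consumption.
import time
import urllib.request

TRANSACTION_URL = "http://test-env.example.com/catalog/search?item=widget"  # hypothetical

start = time.perf_counter()
with urllib.request.urlopen(TRANSACTION_URL, timeout=30) as response:
    body = response.read()
elapsed = time.perf_counter() - start

header_bytes = len(str(response.headers).encode())
print(f"Latency: {elapsed * 1000:.1f} ms")
print(f"Response size: {len(body)} body bytes + {header_bytes} header bytes")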

Best Practice #3: Coupling Performance Profiling with Performance Analysis and Testing
Complete application performance profiling before an application goes live. Performance profiling provides response time metrics by transaction type to help IT teams understand how well individual components of the infrastructure are working together.

To begin, IT teams should state their expectations for performance based on infrastructure choices, such as what response times are expected for each transaction type and class of connectivity. Then IT teams can run performance tests and response time predictions to see if applications meet expectations.
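
A minimal sketch of that comparison follows; the transaction types and response-time figures are illustrative only.

# Compare measured response times against stated expectations,
# by transaction type. All numbers are hypothetical.
expectations = {"login": 1.0, "search": 2.0, "checkout": 3.0}   # expected seconds
measured = {"login": 0.8, "search": 2.6, "checkout": 2.9}       # from a test run

for txn, limit in expectations.items():
    actual = measured[txn]
    status = "PASS" if actual <= limit else "FAIL"
    print(f"{txn:10s} expected <= {limit:.1f}s, measured {actual:.1f}s: {status}")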

You can use results from the first round of performance tests and predictions as reference points or performance benchmarks to highlight where improvements can be made as the application evolves. The final set of acceptable results establishes the baseline and thresholds for performance monitoring in production.

An important result of early profiling is that both business and IT management gain a realistic understanding of achievable performance. With performance measurements, you have factual data to manage user expectations and service-level agreements, eliminating performance surprises and increasing the overall awareness of an application's capabilities. Performance profiling is also useful in the production environment. For example, if network traffic is slow and transaction response times fail to meet expectations, a service-level alert can be triggered. IT teams can automatically collect transaction traces in real time for in-depth performance analysis after the fact.
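
As a rough illustration, a threshold check of that kind might look like the following sketch; the baseline figures, the alert multiplier, and the collect_trace() hook are hypothetical stand-ins for your monitoring tool.

# Raise a service-level alert when a live response time exceeds the
# baseline established during pre-production profiling.
BASELINE_P95 = {"checkout": 3.0}   # seconds, from pre-production profiling
ALERT_MULTIPLIER = 1.5             # alert when 50% over baseline

def collect_trace(txn_type: str) -> None:
    # Placeholder: a real implementation would trigger transaction-trace
    # collection in your monitoring tool for later analysis.
    print(f"collecting trace for {txn_type}")

def check_service_level(txn_type: str, response_time: float) -> None:
    threshold = BASELINE_P95[txn_type] * ALERT_MULTIPLIER
    if response_time > threshold:
        print(f"ALERT: {txn_type} took {response_time:.1f}s "
              f"(threshold {threshold:.1f}s)")
        collect_trace(txn_type)

check_service_level("checkout", 5.2)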

Best Practice #4: Integrating Root Cause Analysis with Load Testing
An effective solution should notify developers that a performance or scalability problem exists and provide the data they need to analyze it based on how the application responds under load. With insight down to the line of code or SQL statement, developers can quickly identify the root cause of performance problems, fix them, and improve performance on the spot.

By running performance analysis in conjunction with load testing, the performance team can do more than simply report poor user response or scalability. They can provide developers with the information they need to analyze, diagnose and fix the problem, thereby reducing the overall mean time to repair (MTTR) for the application. Critical applications reach production more quickly and perform more reliably, lowering maintenance costs and losses due to downtime.
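
For illustration, here is a sketch that pairs simulated load with Python's built-in profiler to surface the functions that dominate response time; handle_checkout() and its queries are hypothetical stand-ins for the transaction under test.

# Profile application code while it handles load-generated traffic so
# hot spots show up at the function (and, in practice, SQL) level.
import cProfile
import io
import pstats

def run_query(sql: str) -> None:
    pass   # placeholder for a real database call

def handle_checkout() -> None:
    run_query("SELECT ... FROM orders")   # hypothetical hot spot
    run_query("UPDATE inventory ...")

profiler = cProfile.Profile()
profiler.enable()
for _ in range(1_000):   # crude stand-in for load-generated traffic
    handle_checkout()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())    # top 10 functions by cumulative time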

Best Practice #5: Iterative Performance Testing Throughout the Life Cycle
When conducting scalability tests, you can simplify the problem identification process by individually screening small pieces of the application. Once these pieces have passed their tests, the operations group can start tuning server configurations for the application components. These early performance tests will eliminate bigger problems later on when the application and its infrastructure are more complex. They also serve as benchmarks for performance improvements as the application and its supporting infrastructure evolve.

When the entire application is assembled, you must perform more rigorous performance tests. If performance testing and monitoring have been used iteratively throughout development, an application should be ready for tests that push its scalability limits.

Start with a small number of virtual users and increase the number gradually to hundreds or thousands of simultaneous users to understand how an application will scale. Once testing has found the application's thresholds, testers can gradually decrease the number of virtual users. Make sure sessions are closed in a timely manner and that memory is freed up for new users.
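
A bare-bones ramp might look like the following sketch; a real load testing tool would replace simulate_user(), and the user counts are illustrative.

# Ramp concurrent virtual users up to a peak and back down, reporting
# average session time at each step.
import time
from concurrent.futures import ThreadPoolExecutor

def simulate_user(user_id: int) -> float:
    start = time.perf_counter()
    time.sleep(0.05)   # placeholder for one scripted user session
    return time.perf_counter() - start

for vusers in [10, 50, 100, 500, 100, 10]:   # ramp up, then back down
    with ThreadPoolExecutor(max_workers=vusers) as pool:
        times = list(pool.map(simulate_user, range(vusers)))
    avg_ms = sum(times) / len(times) * 1000
    print(f"{vusers:4d} virtual users: avg session {avg_ms:.0f} ms")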

Best Practice #6: Robust Test Data Environment to Simulate Production
When running these large-scale performance tests, pay particular attention to the data used. The goal is to mimic real-world traffic as closely as possible and use a broad range of variable data types. Achieving quality testing requires valid data, complete with working relationships between tables and files.

An effective performance assurance solution should include variable parameterization so the full range of variable test data can be adequately load tested. Consider automatically generating millions of unique names, Social Security numbers, and other values. This allows you to test all aspects of the application, down to the line of SQL code, against any number of users and data records.
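
As a simple illustration, the following sketch generates unique names and synthetic Social Security-formatted values with the Python standard library; every value it produces is fake.

# Generate parameterized test data lazily so millions of records can be
# streamed into a load test without exhausting memory.
import random

FIRST = ["Ann", "Ben", "Cara", "Dev", "Elena", "Femi", "Grace", "Hiro"]
LAST = ["Adams", "Baker", "Chen", "Diaz", "Evans", "Frank", "Gupta"]

def fake_record(seq: int) -> dict:
    return {
        "name": f"{random.choice(FIRST)} {random.choice(LAST)} {seq}",  # seq keeps names unique
        "ssn": f"{random.randint(100, 899):03d}-{random.randint(10, 99):02d}-"
               f"{random.randint(1000, 9999):04d}",
    }

records = (fake_record(i) for i in range(1_000_000))   # lazy generator
print(next(records))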

Because valid data and relationships are already defined in production data, test data is often created from the production environment. Protecting that data is essential to avoiding privacy abuses and data mishandling, as well as to maintaining customer trust.
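
One common way to protect it, sketched below under the assumption that a keyed hash is acceptable for your tests, is to mask sensitive fields so values stay unique and table relationships survive; the masking key is a hypothetical placeholder.

# Mask production identifiers with a keyed hash: the same input always
# yields the same token, so joins between tables still work, but the
# real value never reaches the test environment.
import hashlib
import hmac

MASKING_KEY = b"replace-with-a-secret-key"   # hypothetical; keep out of source control

def mask(value: str) -> str:
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()
    return digest[:12]   # stable, irreversible token

print(mask("customer-48213"))   # masks identically in every table
print(mask("customer-48213"))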

Best Practice #7: Combine Pre-Production Testing and Production Monitoring for the Defense
To protect against the risks associated with poor application performance, IT managers have steadily elevated the role that system and application testing plays within their organizations. However, despite continued adoption of load testing tools at a double-digit rate, IT organizations continue to deploy applications that do not perform acceptably.

Problem resolution times are also not improving. System and application quality remain significant issues as applications and their supporting infrastructures continue to grow more complex and dynamic. Where testing alone may be inadequate, using both production monitoring and automated testing tools significantly increases the odds of maintaining acceptable levels of performance and user satisfaction. You can use the information gained from monitoring production to enhance the realism of ongoing performance tests that occur as the application is updated and modified.

For instance, understanding how real users interact with an application, including their access methods, browsing time, and the types and frequency of completed transactions, can improve the development of performance tests and deliver more accurate results. Combined, testing and monitoring create a detailed picture of system and application performance. This approach measures the effect of change and helps teams understand where performance issues originate and how to resolve them correctly. You're left with a multi-dimensional view of performance that works to eradicate errors continuously.
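
For illustration, the following sketch mines an access log in a common web server format for the transaction mix; the file name and field position are assumptions you would adapt to your own logs.

# Derive the real-world transaction mix from a production access log so
# load tests weight transactions the way actual users do.
from collections import Counter

counts = Counter()
with open("access.log") as log:   # hypothetical log file
    for line in log:
        fields = line.split()
        if len(fields) > 6:       # request path is field 7 in common log format
            counts[fields[6].split("?")[0]] += 1

total = sum(counts.values())
for path, n in counts.most_common(10):
    print(f"{path:30s} {n / total:6.1%} of traffic")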

By monitoring the right parts of the application and the system, IT teams can access the necessary data more quickly and solve problems more efficiently when issues arise. An added benefit is enhanced communication. The disparate parts of an organization's problem-solving team can develop stronger and more effective lines of communication.

Best Practice #8: End-to-End Performance Visibility for Effective Decision Making
Making efficient and effective decisions based on real-time information leads to better results and performance. You can provide this real-time information with a combination of testing and monitoring that gathers and presents a detailed picture of system and application performance. This detailed picture provides a summary of key metrics necessary for IT management to make informed decisions regarding resource assignments, deployment, and investment, and most importantly, recommendations to the business.

End-to-end performance should be measured in terms that provide value for the business. End-user, service-oriented metrics are the best-of-breed approach in today's IT organizations. These metrics focus on the end user, not the IT infrastructure. Managing from the end-user perspective lets IT identify problems long before help desks receive the first call. By measuring end-user response time and application availability consistently over time, IT can resolve problems more quickly and identify patterns and trends to avoid problems in the future.
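
As a final illustration, the following sketch computes availability and a nearest-rank 95th-percentile response time from periodic probe samples; the sample data is hypothetical.

# End-user, service-oriented metrics from synthetic-transaction probes:
# availability and p95 response time over a measurement window.
import statistics

# (response_time_seconds, succeeded) samples from periodic probes
samples = [(0.8, True), (1.2, True), (4.9, True), (0.0, False), (1.1, True)]

ok_times = sorted(t for t, ok in samples if ok)
availability = sum(1 for _, ok in samples if ok) / len(samples)
p95 = ok_times[int(len(ok_times) * 0.95)]   # nearest-rank approximation

print(f"Availability: {availability:.1%}")
print(f"p95 response time: {p95:.1f} s")
print(f"Median response time: {statistics.median(ok_times):.1f} s")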

About the Author
Steve Dykstra is director of product management at Compuware Corporation.
