Special Reports

Achieve Optimal Performance

Response time and scalability are key to optimal performance of your enterprise's applications. Set expectations and provide analysis tools to help you achieve performance goals.

It can be hard to believe that in an era of three-gigahertz servers with four gigabytes of memory, hard disks with ten-millisecond access times, and gigabit Ethernet and fiber-optic networks, business applications still face significant performance and scalability limitations. However, a combination of more complex and demanding applications, additional users, an increasingly complex infrastructure, and a greater need for real-time information means that faster hardware may never be the answer to maintaining satisfactory application performance.

There are two aspects to performance, distinct yet interrelated. The first is response time: the ability of an application to return a result or be ready for the next action by an individual user. Response time is a critical measure of the usability of an application, because the application must keep the user active and engaged to have value in a business process.

The second aspect of performance is scalability, the ability of the application to service the necessary number of users while maintaining an adequate response time. Further, as users are added, response time should degrade gradually; the application should not crash or block further activity all at once.

These performance aspects seem like simple rules that can be built in, confirmed, and managed easily in an application. What makes application performance management so difficult is that these two aspects of performance combine to touch every single part of the application, database, operating system, and infrastructure. Often each of these components by itself is optimized, yet some combinations of interactions among them can bring an otherwise normal distributed application to a crawl. Anticipating these interactions, and monitoring the application when they are unanticipated, is key to ensuring that an application meets the needs of its users and the organization.

Even with the best planning and execution, poor performance can sometimes lie within the application itself, the result of inadequate application design, development, or testing. Or the application may have been built well but developed problems as it was maintained and enhanced over time.

Get It Together
What makes performance management so important to the enterprise is that poor user response time or inadequate scalability costs money, directly or indirectly. If the application is an e-commerce Web site, it will either turn users away during peak times, or appear unresponsive as it performs its tasks too slowly to keep the users engaged. In an internal application, it hinders workers at their jobs, making them less productive and efficient, and perhaps unable to use the application at times.

Poor performance can also be a symptom of more serious application problems. An application could perform poorly because an object leak causes it to consume increasing amounts of memory over time, or because a database call brings far more data into the application than is necessary to satisfy a query. In these cases, poor performance stems from application defects that must be corrected.
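To make the object-leak scenario concrete, here is a deliberately simplified, hypothetical sketch of the pattern: a static collection quietly retains every object handed to it, so heap use grows for as long as the application runs and the garbage collector works progressively harder.

import java.util.ArrayList;
import java.util.List;

public class RequestAuditor {
    // A static collection that is never cleared holds a reference to every
    // request ever audited, so none of these objects can be garbage collected.
    private static final List<Object> processed = new ArrayList<Object>();

    public void audit(Object request) {
        processed.add(request);   // reference retained for the life of the JVM
    }
}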

Getting control of application performance requires that user response and scalability be on the minds of everyone involved in the application life cycle: architects, designers, developers, testers, and system managers. Perhaps the most important participants are the application users themselves, who need to define performance as a critical application characteristic.

Integral to the application life cycle are tools that assess, forecast, measure, and improve performance at each step of the way. While few enterprises employ such tools strategically across that life cycle and share the resulting information, those that do have a better chance of meeting the service-level agreements (SLAs) their applications are expected to deliver.

There is a good reason few look at performance during the application development life cycle: there is little inherent motivation to do so. Those responsible for designing and building an application are rarely given instructions regarding the performance of that application. Performance, after all, is neither a feature nor a platform. It is difficult to define precisely how much performance is enough, so it is a challenge to test for it prior to going into production.

In many cases, there are not even formal requirements for applications, but rather lists, sometimes not even written down, of what the application has to do. Even with formal requirements, performance can be forgotten. When requirements are less formal, performance may not even be tested until the application is ready to be used.

All of these factors argue for a defined process for starting the application life cycle, including a problem statement, business justification, written requirements, and approvals. Using requirements-management software, such as CaliberRM from Borland, makes it easier for all participants to understand the requirements, note changes as they occur, and track them through the development life cycle.

Test It Yourself
It is still possible to forget about performance because it is what might be termed a nonfunctional requirement; that is, it doesn't represent a specific function or feature built into the application. The best way to handle this circumstance is to include performance characteristics as part of the application-approval process, ensuring that performance has been defined before a line of code is written.

But how do those responsible for requirements know what the performance requirements should be? Underestimating response time and scalability is almost as bad as not specifying them at all because the application still won't meet the needs of the organization. Conversely, overestimating them entails excessive design, development, and testing efforts that can be expensive and inefficient.

Lacking any experience in specifying application response time and scalability, the best thing those responsible for defining the application can do is to perform their own tests. These tests don't have to be formal and controlled studies; rather, they can simply be observations of how users work with similar applications and what kind of response times they find satisfactory with those applications.

Those same analysts must also research not only the expected number of users, but also how many users there could be in the future. Perhaps just as important, they have to understand the strengths and limitations of the infrastructure on which the application will sit, and factor those limitations into the application requirements.

Overall, the requirements should reflect not only the function of the application, but also the expected number of users, along with use cases and system and server requirements. By looking beyond application function and building complete requirements, the question of performance is at least framed in terms that can be measured and evaluated.

Application designers and developers play a unique role in application performance management in that their initial efforts determine the potential of the application. In practice, however, they aren't often called upon to maximize that potential. Because application requirements are typically filled with feature descriptions, application builders focus on implementing those features rather than on how fast they execute.

Real-world use is another weak point in the development process. When performing individual tests, developers tend to use trivial tests and data cases that simply aren't representative of the intended use of the application. When the application is stressed with real data sets, often much later in the application life cycle, it frequently fails to deliver on its performance goals.

Maximizing that performance potential should begin when the application is designed and built. In practice, however, designers and developers make decisions that lower it, partly because of poor architectural choices that get passed around the industry as universal truths. "Tomcat doesn't scale" is a popular one; "use the MVC architecture under all circumstances" is another.

In reality, these rules of thumb usually reflect failures, or at least limitations, in the design and architecture of the applications themselves, rather than in the products or components they reference. In fact, you might find Tomcat's performance perfectly adequate for large numbers of users if many of your Web pages were static, or if those users tended to spend long periods of time on individual pages. The "truth" that Tomcat doesn't scale nevertheless becomes universal after a while, so application designers may not even consider it as an option.

Go the Extra Mile
Application designers and developers have to look beyond the commonly stated rules of thumb to each technology and component used in the application, as well as the interactions among them. Rather than pursuing a single design, at least two designs should be prototyped and tested for performance and scalability. While this technique seems like extra effort, it could well save extensive performance tuning at the end of the development phase, and it could also prevent further performance problems from arising later in the application life cycle.

Performance testing during development typically measures neither response time nor scalability, but rather the execution speed of the code. This limitation exists because the user interface may not be complete, and in any case development teams lack the data needed for more realistic testing. The assumption behind this kind of performance testing is that the speed of the code has a direct bearing on both of the more observable measures of performance. Memory profiling is similar, in that it assumes that correct and efficient use of memory improves response time and scalability.

This type of performance and memory profiling is common in development, and a number of products support this type of analysis. IBM Rational PurifyPlus, Compuware DevPartner Java Edition (see Figure 1), and Quest JProbe all provide counts of CPU cycles, memory footprint, and other characteristics that contribute to slow code execution. Using these types of tools provides development teams with objective information that can help set expectations for good performance characteristics early in the application life cycle.
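At its simplest, what these profilers measure is how long a stretch of code takes to run. The following self-contained sketch, whose workload is an arbitrary stand-in rather than anything taken from the products above, shows the kind of raw timing a profiler automates and enriches with call-level and memory detail.

public class TimingSketch {
    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            // Stand-in workload; in practice this would be the method under test.
            sum += Integer.parseInt(Integer.toString(i));
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Computed " + sum + " in " + elapsedMs + " ms");
    }
}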

In reality, the answer to whether CPU performance and memory use equate well to response time and scalability is "maybe." Certainly reducing the time it takes to execute lines of code helps the code run faster, but faster code doesn't necessarily improve either of those observable measures in practice. Memory use is somewhat more straightforward: excessive use of temporary objects in code makes the garbage collector work harder, for example. While there is likely some significant benefit to speeding up code and optimizing memory, the work of achieving optimal performance should not stop here.
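As a concrete illustration of the temporary-object point, consider two hypothetical ways of assembling a string. The first creates short-lived objects on every pass through the loop, giving the garbage collector extra work; the second reuses a single buffer.

public class TempObjectDemo {
    static String concatNaively(String[] parts) {
        String result = "";
        for (String p : parts) {
            result += p;    // each += creates new temporary String objects
        }
        return result;
    }

    static String concatReusing(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);   // appends into one buffer; far fewer allocations
        }
        return sb.toString();
    }
}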

The testing phase of the application life cycle is typically when performance is first tested in use, even if that use doesn't reflect how users will interact with the application. Formal testing is the first time most applications are subject to any sort of performance testing, whether of response time, scalability, or both. At this stage the testers are exercising features rather than using the application as it might be used in production, and testers rarely have the tools needed to assess response correctly.

But that doesn't mean that adequate testing isn't possible once the application is feature complete. For both basic and advanced testing, a good starting point is JUnit, the test harness developed and distributed as part of the SourceForge effort (see Resources). It is a Java-based regression-testing framework that makes it easy to add, manage, and execute tests and to analyze their results. JUnit executes unit tests: small pieces of code designed to exercise specific parts of the application.
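A minimal JUnit test might look like the following sketch, written in the JUnit 3.x style of the period; the discount calculation is a made-up stand-in for real application code.

import junit.framework.TestCase;

public class DiscountTest extends TestCase {

    // Stand-in for a piece of application logic under test.
    static double priceWithDiscount(double price, double discount) {
        return price * (1.0 - discount);
    }

    public void testDiscountIsApplied() {
        // 10% off a 100.00 order should come to 90.00.
        assertEquals(90.0, priceWithDiscount(100.0, 0.10), 0.001);
    }
}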

Withstand the Load
As its name implies, JUnit is intended for unit testing, which is normally a development activity. Several other open source tools can make a difference to the testing process. Another SourceForge project used in testing is TestMaker, a tool that software developers, QA professionals, and others in the application development process can use to check Web-based applications for functionality and performance, and to run load tests for scalability. It is maintained and enhanced by PushToTest, a Java testing consultancy (see Resources). As you use your Web application with a browser, the TestMaker recorder writes a test agent script for you, letting you replay the script for functional testing.

Moving beyond basic testing is important if an application is critical to the success of the business, because both response time and scalability figure in that success. More sophisticated approaches are required to accomplish these goals. One tool that can help implement such approaches is JMeter (see Figure 2), an Apache project that enables you to test and evaluate the performance of Java applications. JMeter can be used to test the performance of both static and dynamic resources of an application.
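JMeter itself is driven through its GUI and saved test plans rather than hand-written code, but the idea behind any load test can be sketched in plain Java: launch a number of concurrent simulated users against a URL (a placeholder test address below, never a production system) and record how long each response takes.

import java.net.HttpURLConnection;
import java.net.URL;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        final String target = "http://localhost:8080/app/login";  // placeholder test URL
        int users = 25;                                           // simulated concurrent users
        Thread[] threads = new Thread[users];
        for (int i = 0; i < users; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        long start = System.currentTimeMillis();
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(target).openConnection();
                        conn.getResponseCode();                   // wait for the response
                        long elapsed = System.currentTimeMillis() - start;
                        System.out.println("Response in " + elapsed + " ms");
                        conn.disconnect();
                    } catch (Exception e) {
                        System.out.println("Request failed: " + e.getMessage());
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();   // wait for all simulated users to finish
        }
    }
}

Real load-testing tools add what this sketch lacks: ramp-up schedules, realistic think times, scripted user journeys, and aggregated reporting.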

Commercially, load-testing products available from Mercury Interactive and Compuware enable testing professionals to simulate users and run those simulated users in organized or random ways against applications, including those that are distributed and Web-based. Running high numbers of simulated users can cost a significant amount of money, but the value returned in more responsive and reliable applications can make it worthwhile. It is also usually possible to get load testing as a service rather than buying a product, so you pay only for what you use.

When the application is just about ready to deploy, other parts of the infrastructure are typically brought into the mix. Prior to configuring the application for real use, an IT group typically engages in a sizing exercise, determining the CPU, memory, and network needs. If application performance hasn't been a factor in the life cycle yet, it will be at this time. This stage is more concerned with system and network behavior than with application behavior, and for the first time the application is set up and run in an environment similar to the one in which it will exist in production.

And in many cases, response time and scalability will not match either what is needed or what was estimated earlier in the life cycle. This is why it is important to perform final testing on the same systems and configurations that will be used in production. Such testing provides the opportunity to make any final adjustments to either the application or the infrastructure before the application takes the final step.

Once an application is deployed and being used for real work, any response time or scalability problems will quickly become apparent. The trouble is that at this point such problems can have a substantial business impact, and they must be addressed quickly and with certainty.

The operations staff often assumes the issue is with the infrastructure rather than the application. In many cases this may be true: network glitches, server capacity limitations, and changes in topology can all affect the performance and reliability of a heavily used distributed application. These problems are within the control of the operations staff, which can design and implement infrastructure changes to adapt to application performance needs.

Keeping It Real
However, what if the performance problem is a limitation of the application architecture or implementation, or an application defect that isn't found until the application is in use or that surfaces only under certain circumstances? Collecting data on the application itself, rather than on the underlying server and network infrastructure, is a challenge not yet well mastered. One product that claims success in this realm is Wily Technology's Introscope (see Figure 3). Introscope works with production applications, rather than the IT infrastructure, to track the execution of application components. It traces transactions and shows class and method invocation, and claims to impose only a minimal performance hit in doing so. Other similar products from vendors such as Mercury Interactive and Altaworks also deliver good diagnostic data with only a small performance impact.
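Products of this kind typically instrument the application at the bytecode level; the following simplified, hypothetical sketch only illustrates the underlying idea, using a JDK dynamic proxy to record how long each call through an interface takes.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class TimingProxy implements InvocationHandler {
    private final Object target;

    private TimingProxy(Object target) {
        this.target = target;
    }

    // Wraps any implementation of an interface so that every call is timed.
    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, new TimingProxy(target));
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return method.invoke(target, args);   // delegate to the real object
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println(method.getName() + " took " + micros + " microseconds");
        }
    }
}

Wrapping a service interface this way (a hypothetical OrderService, say, via TimingProxy.wrap(service, OrderService.class)) routes every call through the timer; commercial monitoring products do the equivalent transparently and with far richer reporting.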

All of these limitations and caveats across the application life cycle argue for more realistic performance testing from the moment the application is capable of such testing. Participants in the life cycle may argue that the early stages of application development and test rarely reflect the final form of the application, but perhaps they should.

One way of bringing performance to the forefront of the application development life cycle is to start with the conclusion, so to speak. Few development teams consider the production systems an application will run on as a prerequisite for building that application. This oversight often results in incompatible security configurations, poor decisions on partitioning application data, logic, and presentation, and unrealistic demands on network or system resources.

As mentioned earlier, requirements are one way to bring performance to the forefront. Another way is to apply infrastructure constraints during application design, so that the designer is not permitted to choose options that violate those constraints. In either case, the end result is an application that is likely to be more compatible with its intended platform and thus perform better.

Perhaps because of the foundations of Java as a managed language running on a virtual platform across multiple systems, the subject of performance has always been front and center in development, testing, and production. Even as the language and platform have become faster and more efficient, the technologies complementing them are more complex, as are the needs of the business.

As a result, technical information on all aspects of performance is available from any number of sources. Most trade book publishers have at least one current title on the topic, and there are tips and recommendations scattered across Sun's Java Web site as well as a variety of others. Performance information is even gathered from many sources and organized on at least one Web site (see Resources). Expert assistance is also available from a variety of consultants and system integrators.

Expensive consultants are not the answer to Java performance issues, however. What IT organizations need is a performance life cycle that runs parallel to the application life cycle. Both user response time and scalability have to be considered key application characteristics from the moment an application is proposed until it is retired from active use.

Because of the volume of information available, educating designers and developers to be cognizant of decisions that affect performance isn't difficult. Likewise, there is ample data available on configuring and monitoring applications and application servers in production. All participants in the application performance life cycle must be fully trained and prepared to apply a wide variety of techniques.

The performance life cycle requires the active involvement of IT management. A good example is in application design. If IT management promotes a desire to see and evaluate design alternatives, then designers will seek to build prototypes that maximize the performance parameters needed by the application. And at each step of the way, there must be goals for application performance, along with objective and quantitative measures of that performance. By setting expectations and providing the measurement and analysis tools necessary to determine whether those expectations are met and how to improve upon them, enterprises can achieve their performance goals while at the same time maintaining or even lowering life-cycle costs.
