
Identify Slow Code and Bottlenecks

Processor speed continues to increase according to Moore's Law, but application developers still need to worry about performance. Use performance analysis data to seek out code that executes slowly and examine the reasons why.

Even after more than 30 years of Moore's Law doubling processor performance roughly every 18 months, application developers must still be concerned with performance. Part of the problem is that software and data complexity have advanced at a similar pace, meaning that more powerful processors have more code and data to move about.

But there's more to it than that. Developers build applications differently than they did in the past. Fewer applications are written completely from scratch, and many use high-level operating system services for all manner of actions, from communications to graphics to memory allocation.

This reliance on operating system services and application frameworks is a good thing in that it makes it possible to deliver sophisticated applications quickly. The downside is that developers cede direct control to those services. If the result is poor performance, developers often conclude that the problem is outside their control.

But developers can have a significant impact on application performance, even when other code is doing most of the work. While their influence might not be direct, developers can work within the constraints of the system to significantly improve performance. They do so through the use of performance analysis tools, which measure the execution time of lines of code, methods, and even system and framework calls.
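For example, here is a minimal sketch of the kind of measurement a profiler automates for every method and call site in an application. The parseRecords method and its workload are hypothetical, invented only for illustration; a real profiler gathers this data without any changes to the code.

// A minimal sketch of what a profiler automates: timing one method by hand.
// The parseRecords method and its workload are hypothetical.
public class ManualTiming {

    static int parseRecords(int count) {
        int checksum = 0;
        for (int i = 0; i < count; i++) {
            // Deliberately wasteful round trip through a String
            checksum += Integer.parseInt(Integer.toString(i));
        }
        return checksum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int result = parseRecords(1_000_000);
        long elapsed = System.nanoTime() - start;
        System.out.printf("parseRecords: %d ms (result %d)%n",
                elapsed / 1_000_000, result);
    }
}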

The importance of this information cannot be overstated. Armed with detailed timing data, developers can seek out sections of their own code that execute slowly, and examine the reasons why. In some cases, the code simply takes too long to execute. In other cases, the code executes quickly enough, but is called far too many times.

Yet in most cases developers are not aware of their code's behavior. Even if they are aware, they often underestimate its effect on overall application performance.

Performance analysis provides information that developers can use to identify slow code and bottlenecks, and to get an overall understanding of application execution. Further, such tools provide the only way to do "what if" analysis on the performance implications of different implementation strategies. Developers can prototype several implementations and compare them in detail. This approach brings a degree of predictability to application performance early in the development cycle.
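As a rough sketch of such a "what if" comparison, the following hypothetical example times two implementations of the same task, building a large string, side by side. The task is a stand-in chosen for illustration; a profiler would supply far more detail, but the idea is the same.

// A sketch of "what if" analysis: prototype two implementations of the same
// task and compare their measured cost. The workload is illustrative only.
public class WhatIfComparison {

    static String concatLoop(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i;                      // allocates a new String on every pass
        }
        return s;
    }

    static String builderLoop(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);                // reuses one growing buffer
        }
        return sb.toString();
    }

    static long timeMs(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("concat : " + timeMs(() -> concatLoop(50_000)) + " ms");
        System.out.println("builder: " + timeMs(() -> builderLoop(50_000)) + " ms");
    }
}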

System calls are an especially prominent source of performance issues, yet developers almost always overlook them as a source of slow code. A common failing is making an excessive number of such calls. Many developers view system calls as "free," and tend to use them often and inefficiently. Such calls seem free because developers have no visibility into what they actually cost.

Performance analysis helps in these situations by tracking not only the time it takes to execute these calls, but also the number of times the calls are made. System calls might be fast individually, but thousands of them for simple and repetitive tasks can slow execution significantly. Armed with this information, developers can rework their code to reduce the number of calls, or they can use an alternative call that is less expensive computationally.
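As an illustration of the principle (not an example drawn from any particular application), the following sketch compares writing a file one byte at a time through an unbuffered stream, which issues roughly one operating system write per byte, with a buffered stream that batches those bytes into a few large writes. The file names are placeholders.

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// A sketch of reducing the number of expensive calls: the first loop makes
// roughly one OS write call per byte, the second batches them in memory.
public class BufferedWrites {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1_000_000];

        long start = System.nanoTime();
        try (OutputStream out = new FileOutputStream("unbuffered.bin")) {
            for (byte b : data) {
                out.write(b);            // one OS-level write per byte
            }
        }
        System.out.println("unbuffered: "
                + (System.nanoTime() - start) / 1_000_000 + " ms");

        start = System.nanoTime();
        try (OutputStream out =
                new BufferedOutputStream(new FileOutputStream("buffered.bin"))) {
            for (byte b : data) {
                out.write(b);            // accumulated, flushed in large chunks
            }
        }
        System.out.println("buffered  : "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}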

Managed Platform Control is Indirect But Real
Managed platforms pose additional challenges to achieving high performance. Platforms such as Java and Microsoft .NET provide virtually all the high-level services needed to build complex distributed applications. And because they manage the error-prone details of allocating and deallocating memory, the use of managed platforms often results in fewer bugs and higher-quality applications overall.

But there are aspects of managed platforms that can be highly inefficient. For example, garbage collection, the process of reclaiming memory from objects that are no longer in use, takes a significant amount of time. Garbage collection must scan the heap to determine which objects no longer have active references. There are ways of lessening the impact of garbage collection, such as scanning only part of memory at a time, or scanning newly created objects first because they have a better chance of being reclaimed. But garbage collection is by nature a resource-intensive activity, so managing it properly is one of the most important things developers can do to improve performance.
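As a rough sketch of how to see what garbage collection costs a piece of code, the following example reads collection counts and times through Java's standard GarbageCollectorMXBean interface before and after a made-up, allocation-heavy workload. A memory profiler reports the same kind of data in far more detail and per object type.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// A sketch of measuring garbage collection activity around a workload.
// The allocation loop is an invented example of creating short-lived objects.
public class GcCost {
    public static void main(String[] args) {
        List<GarbageCollectorMXBean> gcBeans =
                ManagementFactory.getGarbageCollectorMXBeans();

        long countBefore = 0, timeBefore = 0;
        for (GarbageCollectorMXBean gc : gcBeans) {
            countBefore += gc.getCollectionCount();
            timeBefore += gc.getCollectionTime();
        }

        // Workload that creates many short-lived objects.
        long checksum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            checksum += new StringBuilder("item-").append(i).length();
        }

        long countAfter = 0, timeAfter = 0;
        for (GarbageCollectorMXBean gc : gcBeans) {
            countAfter += gc.getCollectionCount();
            timeAfter += gc.getCollectionTime();
        }

        System.out.println("collections : " + (countAfter - countBefore));
        System.out.println("GC time (ms): " + (timeAfter - timeBefore));
        System.out.println("(checksum " + checksum + ")");
    }
}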

Managed platforms also include extensive class frameworks and libraries that provide many application services. These frameworks are so extensive that today developer-written code can constitute less than 10 percent of the total application code. Developers have a significant challenge in squeezing added performance out of the small amount of code they write in such applications.

That's not to say that the only way to make a managed application fast is to avoid high-level operating system services and frameworks. Avoiding them is simply not feasible today, given both the complexity of applications and the need to deliver them to production quickly.

You can't avoid garbage collection or the other causes of slow code on managed platforms, but you can minimize their impact. Doing so requires an understanding of the algorithms by which the managed platform identifies and reclaims memory, along with information on how your code creates and manages objects. The keys are to minimize the creation of objects, and to keep objects from lingering in the application's footprint longer than they need to. Gathering information on object creation and lifespan through memory analysis can lead to better utilization of managed memory and faster code.
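The following sketch illustrates both habits with hypothetical code: one method boxes every value into a temporary list before summing, the other does the same work with no temporary objects at all, and the caller drops its reference to the data once it's no longer needed so the collector can reclaim it.

import java.util.ArrayList;
import java.util.List;

// A sketch of two habits: creating fewer objects, and not letting objects
// linger. Class and method names are illustrative only.
public class ObjectLifetimes {

    // Wasteful: allocates a temporary list and boxes every int into an Integer.
    static int sumPerCallAllocation(int[] values) {
        List<Integer> boxed = new ArrayList<>();
        for (int v : values) {
            boxed.add(v);
        }
        int total = 0;
        for (int v : boxed) {
            total += v;
        }
        return total;
    }

    // Leaner: the same result with no temporary objects at all.
    static int sumNoAllocation(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = new int[100_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }

        List<int[]> retained = new ArrayList<>();
        retained.add(data);               // keeps the array reachable

        System.out.println(sumPerCallAllocation(data));
        System.out.println(sumNoAllocation(data));

        retained.clear();                 // drop the reference once the data is
                                          // no longer needed so the collector
                                          // can reclaim it instead of letting
                                          // it linger in the footprint
    }
}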

Other aspects of performance are also important during the application development lifecycle. User responsiveness is critical to user acceptance of a completed application, while acceptable performance under load determines how useful it is when many users are doing real work. But both of these measures ultimately depend on good practices by developers during coding. You cannot achieve these end-user goals for the completed application without writing fast code. Fast and efficient code provides the basis for meeting performance goals in production.

Application performance continues to be a significant challenge for developers, and fast code is a requirement for satisfactory performance. Laying this foundation is the only way to assure user satisfaction with application performance. User satisfaction and the ability to scale up easily as use increases are important goals in and of themselves. But they're also key to reducing application lifecycle costs and meeting development schedules.

About the Author

Peter Varhol is the executive editor, reviews of Redmond magazine and has more than 20 years of experience as a software developer, software product manager and technology writer. He has graduate degrees in computer science and mathematics, and has taught both subjects at the university level.
