In-Depth

The Year of the Build

Is wrestling with your software builds a fact of life? Not necessarily.

Building software used to be easy. You would compile the code you wrote, with few or no libraries and no dependencies. Moreover, if it built, it usually ran. Today, you might spend 20 hours or more building a complex application with hundreds of dependencies, and even if the build is successful, it probably has plenty of bugs, some of them serious.

Building software remains a black art. Enterprises developing custom applications build the software only at major milestones. While most see this as a convenience, it merely puts off the day of reckoning. Once builds are attempted, they invariably fail. Getting the software to build takes developers away from writing code, and often requires costly rewrites of the software itself. When the build finally works, it's jury-rigged, almost certainly hard-wired, and likely to fail the next time anything changes in the build process.

Even development teams that set up nightly builds face issues. In most cases, the build system is cobbled together with an off-the-shelf compiler and difficult-to-read scripts that execute a motley combination of commercial, custom, and open source tools for compiling, linking, moving, and executing the application.

And branching? You plan for a month to create a branch for a specific point delivery, and then freeze all check-ins; you return the next week to start checking in again, and you continue checking in bug fixes to both branches for the next three months. And sometimes confusion reigns, so you check into both branches when you were only supposed to check into the mainline. The productivity hit the development team takes for this conceptually simple exercise is incredible.

Take this scenario a step further. You can't assume that your application will run because it builds. So you use smoke tests, which run automatically at the conclusion of a successful build. If they fail, the build process must halt until the cause of the failure is found. Once again, delays ensue as efforts are turned toward tracing and fixing yet another problem.
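The gating described above can be sketched in a few lines of shell. This is a minimal illustration, not a real build system; the build and smoke-test commands here are stand-in functions, and you would substitute your own compiler, linker, and test harness.

```shell
#!/bin/sh
# Minimal sketch of gating a build on smoke tests. The build and
# smoke-test steps are simulated stand-in functions; substitute your
# real tools (make, msbuild, ant, test scripts, and so on).

run_build() {
    # compile and link the application (simulated here)
    echo "building..."
    return 0
}

run_smoke_tests() {
    # quick sanity checks that the built application starts and responds
    echo "running smoke tests..."
    return 0
}

if ! run_build; then
    echo "build failed; halting pipeline" >&2
    exit 1
fi

# Smoke tests run automatically after a successful build; a failure
# halts everything downstream until the cause is found.
if ! run_smoke_tests; then
    echo "smoke tests failed; halting pipeline" >&2
    exit 1
fi

echo "build passed smoke tests"
```

The essential design point is that a smoke-test failure exits nonzero, so nothing downstream (packaging, deployment to QA) can consume a build that doesn't pass the most basic checks.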

Taking these issues into account, the "Groundhog Day" scenario for most software developers can be summarized in a few sentences. You build at night, using an automated build process held together as if by magic. When everyone arrives in the morning, the build is broken. You blame the guilty developer, make him or her buy donuts for the team, fix the problem, check in the fix, and manually complete the build by late morning. Then the smoke tests fail. You find out what code failed, check in another fix, and manually complete the build again. It's now after lunch, and both developers and testers have been waiting for a clean build to continue work. The smoke tests finally succeed, and QA takes over. After an hour of productive activity, QA announces that there are two P1 blocking bugs that must be fixed before testing can continue. The development leads stop working, analyze the bugs, fix them or hand them off, and decide that it's too late in the day to kick off another build. The QA staff goes back to playing Solitaire, and the rest of the development team, including you, frantically completes its coding and checks in modified files before the 6PM cutoff.

And this cycle begins again the next day. Every day.

Why You Build
What makes building so critical to the software development process? You can find the answer in the tenets of the Agile development movement. According to this movement, you prove that your software works by having working software. You create working software by building continuously, so that you can test the software to ensure it works. If you're using an Agile development process (XP, Scrum, or any of the other alternatives available), then you're intensely concerned about building and testing your software on an ongoing basis.

Reliable builds offer other advantages, too. They break the cycle of the Groundhog Day scenario—a sequence of errors, delays, and temporary fixes that become permanent until the next time they break. You can focus on writing code rather than building software.

You also waste time and effort whenever a build fails. While fixing build breaks and other build issues is almost never an "all hands on deck" situation, the delays they cause to others in the development process can be massive. Build problems by themselves can hold up further development, testing, pilot projects, and, ultimately, the delivery of completed software.

For development organizations seeking certification under the auspices of the International Organization for Standardization (ISO) or the Software Engineering Institute's Capability Maturity Model Integration (CMMI), establishing and maintaining repeatable processes is a strict requirement. Certain industries, such as health care and aerospace, also face the need for definitive processes.

Solutions on the Horizon
The software build process is on the radar of vendors seeking to further improve and automate application lifecycle management (ALM). Several established vendors have just announced new build solutions as a key part of their ALM processes—Borland with Gauntlet, and IBM with BuildForge. Both companies announced major initiatives in ALM, with build management as a keystone of these initiatives. Only recently have companies positioned build management as a critical aspect of their ALM solutions.

However, some organizations have specialized in build management for several years, and they have established products that help you rationalize the build process. Catalyst Systems, for example, has marketed its Openmake build management system for over 10 years. Openmake delivers a single build process for enterprise applications regardless of IDE, language, or operating system.

Electric Cloud also provides a state-of-the-art offering for build management. The company has three products: a build accelerator that parallelizes build tasks across clusters; a build analyzer that mines the information produced by its make facility to provide a graphical representation of the build structure for error and performance analysis; and most recently, a product to manage individual builds and the build process in general.

This recent product, Electric Commander, directs builds regardless of platform or language. It can manage multiplatform builds, ensuring that the same build sequences are followed across, for example, Windows, Linux, and Unix. You can drill down into the build sequence to quickly diagnose and address build problems, including problems with smoke tests. And you can get reports on individual builds, or aggregate statistics on builds over time (see Figure 1).

Build Early and Often
Any development team should strive for continuous builds. When one build is complete, the next one should be ready to go. This might seem at odds with the ability to test and validate a build, but building should be as normal and predictable a process as checking in a file.
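One way to picture "when one build is complete, the next one should be ready to go" is a simple polling loop that starts a fresh build whenever new check-ins appear. The sketch below is purely illustrative: the revision query and build command are simulated stand-ins (here the clock plays the role of a revision counter), and a real continuous-build server would run the loop indefinitely rather than three times.

```shell
#!/bin/sh
# Sketch of a continuous-build loop: as soon as one build finishes,
# check for new check-ins and kick off the next build. The
# version-control query and the build step are simulated stand-ins.

last_built_rev=""

get_latest_rev() {
    # a real server would query the version-control system for the
    # newest check-in; here the clock simulates a revision counter
    date +%s
}

do_build() {
    echo "building revision $1"
}

for i in 1 2 3; do            # a real build server would loop forever
    rev=$(get_latest_rev)
    if [ "$rev" != "$last_built_rev" ]; then
        do_build "$rev"
        last_built_rev=$rev
    fi
    sleep 1
done
```

The point of the design is that building becomes a background constant, as routine as checking in a file, rather than an event that someone has to plan and babysit.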

Ultimately, Borland and IBM are on the right track with Gauntlet and BuildForge, respectively. Application building and build management have been a neglected part of the development and testing process. A build solution, while essential, functions best as part of a larger software development and software quality process. Companies that can offer complementary solutions in addition to build management have an advantage in the market.

You can also find opportunities to use point products, such as those from Catalyst Systems or Electric Cloud. If you're just starting to rein in your build processes, or want to optimize existing processes independent of existing tools, then you will find these specialist solutions easy to adopt and use.

In defense of both the vendors still rolling out products in this area and the development teams still deficient in their approach to builds, the complexity of this situation has mushroomed over the past several years, and has caught many off guard. Frameworks, libraries, virtual machines, and other innovations have eased coding even as application complexity has increased, but they place a corresponding burden on the build process.

Until now, development teams have responded in a haphazard fashion, deferring builds or piecing together a poorly-conceived build solution from scripts and spare parts. Today, and in the future, you will find multiple choices for build management, so your development team can choose the solution that best meets the needs of your development environment and existing tool chain.

About the Author

Peter Varhol is the executive editor, reviews of Redmond magazine and has more than 20 years of experience as a software developer, software product manager and technology writer. He has graduate degrees in computer science and mathematics, and has taught both subjects at the university level.
