News

Dev Teams Hobbled by Poor Metrics, Study Suggests

Borland commissioned a study on the effectiveness of performance measurements in application development projects. The study surveyed 20 application development and project management organizations with revenues ranging from $1 billion to $5 billion.

The study examined how organizations use "metrics" in software development, and it found significant problems with how well the software development process is tracked and measured.

For instance, most application development metrics were collected manually by project managers or lead developers, according to the survey. In some cases, respondents indicated that manual collection of this information took up as much as a third of the manager's time.

The study, "Changing the Cost/Benefit Equation for Application Development Metrics," stated that collecting metrics is "expensive," regardless of whether the data are collected manually or automatically.

"Superficial metrics" often lead organizations astray, according to the report. An example of such poor measures is the use of "on-time, under-budget, and on-scope" metrics, which are typically collected at the end stage of a project. The report described these metrics as "unsuitable for application development," even though they are commonly used by application development professionals.

Survey respondents also reported difficulty putting the collected data to use. Eight of 20 respondents were "unable to trend or aggregate the metrics," according to the study. Forrester Consulting cited inconsistent collection methods and the use of multiple tools and repositories as potential stumbling blocks in this area.

The report suggested two approaches as a way out of the metrics mess.

First, application development teams should use "iterative, incremental development processes," in which metrics are collected in intervals throughout the project, rather than at the end. An example given in the report is the use of six iterations in a project with 100 requirements. For instance, if just 20 requirements are completed by the third iteration, the project might be in trouble.
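The report's iteration example can be sketched in a few lines of code. The linear-expectation check below is an assumption of ours used for illustration; the report does not prescribe a specific formula, and all function names and numbers here are hypothetical.

```python
def expected_completed(total_requirements: int, total_iterations: int,
                       current_iteration: int) -> float:
    """Requirements we'd expect done by now if progress were linear
    across iterations (an illustrative assumption, not the report's method)."""
    return total_requirements * current_iteration / total_iterations

def on_track(total_requirements: int, total_iterations: int,
             current_iteration: int, actually_completed: int) -> bool:
    """True if actual completion meets or beats the linear expectation."""
    expected = expected_completed(total_requirements, total_iterations,
                                  current_iteration)
    return actually_completed >= expected

# The report's scenario: 100 requirements planned over 6 iterations,
# but only 20 completed by the end of iteration 3.
print(expected_completed(100, 6, 3))  # 50.0 expected at this point
print(on_track(100, 6, 3, 20))        # False -- an early warning sign
```

Collecting the completion count at each iteration boundary, rather than waiting for project close, is what turns this into an in-flight metric.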

Second, application development teams need to practice a "disciplined estimation of business value" in their project estimates, the report recommended.

Metrics are typically used at three levels in projects, according to the study. Those levels include "portfolio metrics," which provide executives with an overall project view. There's also "in-flight metrics," which describe measurements taken during the project. Finally, there's "post-mortem project metrics," or information collected at the end of the project.

Organizations should maintain a "comprehensive metrics program" that includes all three levels, according to the report. Still, in-flight metrics, the most useful of the three, are also the ones application developers most often neglect.

"The lack of in-flight project metrics that really describe the work being performed on a project is a major fault of most application development metrics programs, and it's one that most shops aren't even aware of," the report stated.

The Forrester Consulting report on metrics is expected to be available today for free to registered users at Borland's Web site.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
