
Dev Teams Hobbled by Poor Metrics, Study Suggests

Borland commissioned a study by Forrester Consulting on the effectiveness of performance measurements in application development projects. The study surveyed 20 application development and project management organizations at companies with revenues ranging from $1 billion to $5 billion.

The study set out to examine how organizations use "metrics" in software development, and it found significant shortcomings in how the development process is tracked and measured.

For instance, most application development metrics were collected manually by project managers or lead developers, according to the survey. In some cases, respondents indicated that manual collection of this information took up as much as a third of the manager's time.

The study, "Changing the Cost/Benefit Equation for Application Development Metrics," stated that collecting metrics is "expensive," regardless of whether the data are collected manually or automatically.

"Superficial metrics" often lead organizations astray, according to the report. An example of such poor measures is the use of "on-time, under-budget, and on-scope" metrics, which are typically collected at the end stage of a project. The report described these metrics as "unsuitable for application development," even though they are commonly used by application development professionals.

Putting the collected data to use was also a problem, according to the survey. Eight of the 20 respondents were "unable to trend or aggregate the metrics," the study found. Forrester Consulting cited inconsistent collection methods and the use of multiple tools and repositories as likely stumbling blocks in this area.

The report suggested two approaches as a way out of the metrics mess.

First, application development teams should use "iterative, incremental development processes," collecting metrics at intervals throughout the project rather than only at its end. The report offers the example of a project with 100 requirements spread across six iterations: if only 20 requirements are complete by the third iteration, the project is likely in trouble.
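The checkpoint arithmetic behind that example can be sketched as a small function. This is an illustration of the even-pace comparison described above, not code from the report; the function names and the assumption of an even pace across iterations are mine.

```python
def expected_done(total_requirements: int, iterations: int, current_iteration: int) -> float:
    """Requirements expected to be complete by the end of current_iteration,
    assuming work is spread evenly across all iterations."""
    return total_requirements * current_iteration / iterations

def on_track(total_requirements: int, iterations: int,
             current_iteration: int, actual_done: int) -> bool:
    """True if actual progress meets or beats the even-pace expectation."""
    return actual_done >= expected_done(total_requirements, iterations, current_iteration)

# By iteration 3 of 6, an even pace would call for 50 of 100 requirements.
# With only 20 done, the project is flagged as behind.
print(on_track(100, 6, 3, 20))  # prints False
```

The point of the in-flight check is that the shortfall surfaces at iteration 3, while there is still time to react, rather than at the end-of-project post-mortem.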

Second, application development teams need to practice a "disciplined estimation of business value" in their project estimates, the report recommended.

Metrics are typically used at three levels in projects, according to the study. Those levels include "portfolio metrics," which provide executives with an overall project view. There's also "in-flight metrics," which describe measurements taken during the project. Finally, there's "post-mortem project metrics," or information collected at the end of the project.

Organizations should maintain a "comprehensive metrics program" spanning all three levels, according to the report. Still, the most useful of the three -- in-flight metrics -- is also the one most neglected by application developers.

"The lack of in-flight project metrics that really describe the work being performed on a project is a major fault of most application development metrics programs, and it's one that most shops aren't even aware of," the report stated.

The Forrester Consulting report on metrics is expected to be available today for free to registered users at Borland's Web site.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.

