Dev Teams Hobbled by Poor Metrics, Study Suggests

Borland commissioned a study on the effectiveness of performance measurements in application development projects. The study surveyed 20 application development and project management organizations with revenues ranging from $1 billion to $5 billion.

The study aimed to investigate the use of metrics in software development, and it found significant problems with how well the software development process is tracked and measured.

For instance, most application development metrics were collected manually by project managers or lead developers, according to the survey. In some cases, respondents indicated that manual collection of this information took up as much as a third of the manager's time.

The study, "Changing the Cost/Benefit Equation for Application Development Metrics," stated that collecting metrics is "expensive," regardless of whether the data are collected manually or automatically.

"Superficial metrics" often lead organizations astray, according to the report. An example of such poor measures is the use of "on-time, under-budget, and on-scope" metrics, which are typically collected at the end stage of a project. The report described these metrics as "unsuitable for application development," even though they are commonly used by application development professionals.

The survey also flagged problems with organizations' ability to actually use the collected data. Eight of 20 respondents were "unable to trend or aggregate the metrics," according to the study. Forrester Consulting cited inconsistent collection methods and the use of multiple tools and repositories as stumbling blocks in this area.

The report suggested two approaches as a way out of the metrics mess.

First, application development teams should use "iterative, incremental development processes," in which metrics are collected at intervals throughout the project rather than at the end. An example given in the report is a project with 100 requirements spread over six iterations: if just 20 requirements are completed by the third iteration, the project might be in trouble.
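The report's example can be sketched in a few lines of code. This is an illustration only, not something from the report: the function names and the assumption of an even split of requirements across iterations are hypothetical.

```python
# Illustrative sketch of the report's iterative-tracking example:
# a project with 100 requirements split evenly across 6 iterations.
TOTAL_REQUIREMENTS = 100
ITERATIONS = 6

def expected_completed(iteration: int) -> float:
    """Requirements expected done by the end of a given iteration,
    assuming an even split across iterations."""
    return TOTAL_REQUIREMENTS * iteration / ITERATIONS

def on_track(iteration: int, actual_completed: int) -> bool:
    """Flag whether actual progress meets the linear expectation."""
    return actual_completed >= expected_completed(iteration)

# By the end of the third iteration, about 50 requirements should be
# done; only 20 completed signals the project may be in trouble.
print(expected_completed(3))   # 50.0
print(on_track(3, 20))         # False
```

Collecting the "completed requirements" count at each iteration boundary, rather than waiting for a post-mortem, is what gives the team a chance to react while the project can still be corrected.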

Second, application development teams need to practice a "disciplined estimation of business value" in their project estimates, the report recommended.

Metrics are typically used at three levels in projects, according to the study. Those levels include "portfolio metrics," which provide executives with an overall project view. There's also "in-flight metrics," which describe measurements taken during the project. Finally, there's "post-mortem project metrics," or information collected at the end of the project.

Organizations should maintain a "comprehensive metrics program" that includes all three levels, according to the report. Still, the most useful metrics are also the ones most neglected by application developers -- namely, in-flight metrics.

"The lack of in-flight project metrics that really describe the work being performed on a project is a major fault of most application development metrics programs, and it's one that most shops aren't even aware of," the report stated.

The Forrester Consulting report on metrics is expected to be available today for free to registered users at Borland's Web site.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
