Developer's Toolkit

Got Your Back

That's what my regrettably soon-to-be former public relations manager, Kayla White, says to me after covering for yet another of my miscues, lost documents, and senior moments in communicating with the outside world about products, progress, and plans. What she means, of course, is that she watches out for just such inadvertent mistakes on my part and makes sure they don't cause harm or wreak havoc on the image of my products.

It occurs to me that this is the same reason we have built up a long and elaborate application lifecycle to deliver a working application that actually has value to those using it. Because application frameworks do so much of the heavy lifting today, many applications could be successfully built and deployed by a small team, or even a single individual. We rarely, if ever, do it that way, because there are simply too many things that can go wrong.

And so we divide roles and responsibilities across the application lifecycle. While it can take many forms, the lifecycle ensures that numerous people in many different roles review the work done in designing and building the application. In other words, the process is so unreliable that we have accepted multiple layers of redundancy to get even a small measure of consistent results.

What do I mean by this? Development practices such as code reviews, unit tests, and extensive exception-handling code all exist to watch our backs; in other words, to correct for potential mistakes or unanticipated conditions in the application code. These practices, together with other efforts to find and account for application defects, all amount to watching the back of the developer or development team as it implements application features.
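
To make that concrete, here is a minimal sketch of the kind of back-watching a unit test and a defensive check provide. It is written in Java against the JUnit 3-style TestCase API; the Pricing class and its discount rule are invented purely for illustration.

    import junit.framework.TestCase;

    // Hypothetical business logic that the test watches over.
    class Pricing {
        // Applies a percentage discount; rejects out-of-range input
        // rather than silently producing a nonsense price.
        static double applyDiscount(double price, double percent) {
            if (percent < 0.0 || percent > 100.0) {
                throw new IllegalArgumentException("percent must be 0-100");
            }
            return price * (1.0 - percent / 100.0);
        }
    }

    // A JUnit 3-style test case: a second pair of eyes, run by a machine.
    public class PricingTest extends TestCase {
        public void testNormalDiscount() {
            assertEquals(90.0, Pricing.applyDiscount(100.0, 10.0), 0.0001);
        }

        public void testRejectsImpossibleDiscount() {
            try {
                Pricing.applyDiscount(100.0, 150.0);
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // The defensive check caught the bad input, as intended.
            }
        }
    }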

This isn't necessarily a bad thing; getting quality to the point where the application is useful under a wide range of circumstances is important. Thanks to the combination of complex applications and imperfect developers, manually written code will always contain everything from simple mistakes to unanticipated consequences. But the journey to that level of quality is still far too time-consuming for the pace of the industry. It still takes months of design, development, and testing before a useful business application reaches the hands of users, months after someone identified a real need or a real opportunity for it.

Instead of building in human redundancy, it would be more efficient to make greater use of automated redundancy. We do some of this today through debuggers and analysis tools for memory and performance optimization. Prepackaged exception code, better testing of that code, and simply better-quality code in general can reduce some of the human redundancy we have built into the system.
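
As one small illustration of what prepackaged exception code might look like, consider a single, well-tested retry helper reused everywhere, in place of hand-written try/catch blocks at every call site. This is only a sketch under my own assumptions; the Retry class and its policy are invented, not taken from any particular library.

    import java.util.concurrent.Callable;

    // A sketch of "prepackaged" exception code: one tested retry helper,
    // reused everywhere, instead of ad hoc try/catch at each call site.
    public class Retry {
        public static <T> T withRetries(Callable<T> task, int maxAttempts)
                throws Exception {
            if (maxAttempts < 1) {
                throw new IllegalArgumentException("maxAttempts must be >= 1");
            }
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return task.call();
                } catch (Exception e) {
                    last = e;  // remember the failure and try again
                }
            }
            throw last;  // every attempt failed; surface the final error
        }
    }

A caller would then write something like Retry.withRetries(() -> fetchQuote(symbol), 3), where fetchQuote stands in for whatever fragile operation needs the safety net.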

What makes these things happen is automation: the ability to use tools to ensure that we write high-quality code and deliver it quickly, without requiring other developers, project managers, testers, beta cycles, and documentation writers to make sure the software does what it's supposed to do, and does so consistently.

We're not there yet, and it might take quite some time. But the day will come when automated source-code analysis for good software engineering practices, automatically generated error-handling code, and performance and load testing against simulated application workloads take the place of their manual counterparts. These development and testing capabilities will watch our backs, freeing more of us for the exacting yet satisfying work of creating new applications and application features.
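
For instance, a load test against a simulated application workload might look something like the following sketch, in which the operation under test, the number of simulated users, and the reporting are all placeholder assumptions rather than any product's actual harness.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Simulate N concurrent users hammering one operation and report
    // total elapsed time and the worst-case single-request latency.
    public class LoadSim {
        public static void main(String[] args) throws Exception {
            final int users = 50;
            final int requestsPerUser = 200;
            final AtomicLong maxNanos = new AtomicLong(0);

            ExecutorService pool = Executors.newFixedThreadPool(users);
            long start = System.nanoTime();
            for (int u = 0; u < users; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerUser; i++) {
                        long t0 = System.nanoTime();
                        doRequest();  // stand-in for the operation under test
                        long elapsed = System.nanoTime() - t0;
                        maxNanos.accumulateAndGet(elapsed, Math::max);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            long totalMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("total: %d ms, worst request: %.2f ms%n",
                    totalMs, maxNanos.get() / 1e6);
        }

        // Placeholder workload; a real harness would call the application here.
        static void doRequest() {
            Math.sqrt(Math.random());  // trivially cheap stand-in work
        }
    }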

Posted by Peter Varhol on 11/17/2004

