Practical ASP.NET

No More Drive-By Debugging

Peter Vogel abandons ASP.NET (mostly) to discuss how to better debug programs.

You've had this experience: You've got a bug in your program and you can't figure it out. You call someone over to your desk and start to explain the problem. Partway through the description, you figure out what the problem is. The person you invited over leaves without ever saying a thing.

We've all been on both sides of this scenario. I call it "drive-by debugging." Drive-by debugging works, but it isn't always possible -- it's late at night, no one is available, you work alone.... The good news is that you can achieve the same results without involving another person.

Debugging isn't specifically an ASP.NET topic, but I was inspired by Kathleen Dollard's great three-part article on debugging support in Visual Studio 2010. Kathleen introduces the series with a discussion of the Scientific Method approach to debugging, and I can't resist throwing in my two cents' worth.

Years and years ago -- 24, to be exact -- I bought "Debugging C" by Robert Ward. I am not now, nor have I ever been, a C programmer. But I knew that I couldn't keep following my existing "thrashing around" approach to debugging. I merged what I learned from Robert Ward's book with the Kepner Tregoe Problem Solving and Decision Making (PSDM) methodology I happened to be learning at the same time.

If that all sounds very esoteric to you, here's something more practical: I know why drive-by debugging works. When you call that person over to your desk and start describing the problem, it's probably the first time that you give anyone -- including yourself -- an accurate description of the problem. Once you have a complete and accurate description of the problem, then the cause of the problem is usually obvious. The solution may not be, of course, but you'll know what's going wrong.

Most developers skip that first step in debugging: getting a complete and accurate description. Instead, at the first sign of a symptom, a developer leaps straight to the solution. The solution is, at best, the third step. The first step is getting that good description; the second step is spotting the cause. So, how can you test whether you've got a good description?

You can test that you've got a description of the problem by stabilizing the bug. A stabilized bug is one that you can make reveal itself any time you want to: you know the scenarios that expose the bug in your application. A bug is fully stabilized when you also know the scenarios where the bug won't happen: scenarios similar to the ones that expose the bug, but that don't trigger it. You have a good description when you know what "is" and what "is not" part of the scenario -- when you can make the bug happen whenever you want.
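
Here's what a stabilized bug can look like in practice: a pair of automated tests, one for the "is" scenario and one for the nearest "is not" scenario. This is only a sketch (using NUnit); OrderCalculator and its Total method are hypothetical stand-ins for whatever code in your application is misbehaving.

using NUnit.Framework;

[TestFixture]
public class DiscountBugTests
{
    // The "is" scenario: this input reliably exposes the bug.
    [Test]
    public void Total_WithZeroQuantityLine_ShowsTheBug()
    {
        var calc = new OrderCalculator();  // hypothetical class under test
        decimal total = calc.Total(quantity: 0, unitPrice: 9.99m);
        Assert.AreEqual(0m, total);  // fails today -- this test documents the bug
    }

    // The nearest "is not" scenario: almost the same input, but no bug.
    [Test]
    public void Total_WithQuantityOfOne_IsCorrect()
    {
        var calc = new OrderCalculator();
        decimal total = calc.Total(quantity: 1, unitPrice: 9.99m);
        Assert.AreEqual(9.99m, total);  // passes -- this test bounds the bug
    }
}

When the first test fails on demand and the second passes on demand, the bug is stabilized: you can summon it, and you know its nearest neighbor that behaves correctly.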

This leads to the contribution that PSDM made to my debugging process: PSDM provided a process for determining which factors are part of the scenarios that expose the bug (and which are not). Many factors will obviously fall into the "is" or "is not" category of the scenario (the weather is usually in the "is not" category, for instance). Other factors will fall into a "could be" category. For each "could be" factor you must run tests to determine whether it is an "is" or an "is not" factor (this is where I start overlapping with Kathleen's scientific method). Where you have a lot of "could be" factors, you can organize this Is/Is Not problem into four categories of factors: What, Where, When, and Extent.
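
To make that "could be" testing concrete, here's a sketch in NUnit where each [TestCase] changes exactly one suspect factor -- the current culture, in this made-up example -- and records whether the bug appears. BugRepro.Run is a hypothetical helper that executes the stabilized scenario and reports whether the bug showed up.

using System.Globalization;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class CouldBeFactorTests
{
    // Each case varies one "could be" factor (culture) and nothing else.
    [TestCase("en-US", true)]   // bug expected: this case says culture "is" a factor
    [TestCase("en-GB", true)]
    [TestCase("de-DE", false)]  // bug absent: this case moves culture toward "is not"
    public void BugAppearsUnderCulture(string cultureName, bool bugExpected)
    {
        Thread.CurrentThread.CurrentCulture = new CultureInfo(cultureName);
        bool bugAppeared = BugRepro.Run();  // hypothetical: re-runs the stabilized scenario
        Assert.AreEqual(bugExpected, bugAppeared);
    }
}

Each test run either promotes the factor to the "is" column or demotes it to "is not" -- exactly the hypothesis-and-experiment loop the scientific method calls for.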

As an example, ASP.NET provides two pieces of input to the Is/Is Not process. In the Where category we have the URL of the page: the first piece of evidence you can gather is the page that was being requested when things went horribly wrong. Second, read the whole error message. Thanks to the ASP.NET error page, all errors tend to look alike, so we don't always read the whole thing -- but the error message contributes to the What section of the Is/Is Not analysis. The goal is to drive to an accurate description of what "is" and "is not" part of the scenario that causes the bug, in all four categories.
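
If you want that evidence captured automatically, one place to do it is the Application_Error event in Global.asax, which fires for every unhandled exception in the application. This sketch logs the URL (the Where) and the full exception text, stack trace included (the What); the log file location is my own assumption.

using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Server.GetLastError() returns the unhandled exception;
        // ToString() includes the message, inner exceptions and stack trace.
        Exception ex = Server.GetLastError();
        string entry = String.Format("{0:u}  Where: {1}  What: {2}",
            DateTime.UtcNow, Request.Url, ex);
        // App_Data is an assumed location; ASP.NET won't serve files from it.
        File.AppendAllText(Server.MapPath("~/App_Data/errors.log"),
            entry + Environment.NewLine);
    }
}

With that in place, you get the Where and the What for every failure, even the ones users never report.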

Why is having a description so important? It's because, as programmers, we know that the computer will only do what we tell it to. So, for any bug, there's code in the program making that bug happen. Once we have a description of what that bug is doing, it's easy to imagine the code that would make it happen. And, knowing that, we probably also know where that code is.

The only problem we can't solve is when we have a description of the bug and realize that we couldn't make the computer do that. Now that's a problem!

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
