Desktop Developer

Trace Code With .NET

Build a .NET Framework based class library that makes tracing a true asset to your development experience.

Technology Toolbox: VB.NET

The .NET Framework's Trace class lets you record details about your application's runtime behavior. Tracing gives you insight into your running code, but using it "as is" produces unwieldy code and nearly useless traces. I'll show you how to get around these problems with a trace class library to maximize the value of tracing in your development. The wrappers in this library enable you to articulate precisely what trace entry you're making or what assertion you're checking within the calling code. You'll get a high level of consistency in your traces, and you'll get a single location for modifying the output format as your needs evolve.

The .NET Framework contains two nearly identical classes that let you output trace information: Trace and Debug. You activate each with a special compiler directive. By default, both are active in debug builds, while only Trace is active in release builds. Both Trace and Debug include methods for WriteLine, Write, and Assert. Assert displays a message box when used with the default listener. The message box contains programmer-friendly information that is inappropriate for the end user, so don't use the Assert method of the Trace class. On the other hand, most trace output is desirable in both release and debug builds, so use Trace.WriteLine except for temporary debug statements.
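
Here's what that advice looks like in code. This is a minimal sketch; the class, method, and parameter names are mine, not part of the framework or the sample library:

Imports System.Diagnostics

Public Class TraceBasics
	Public Sub LoadCustomer(ByVal customer As Object)
		' Debug output disappears from release builds; Trace output
		' remains, because only the TRACE constant is defined there.
		Debug.WriteLine("Temporary debugging note")
		Trace.WriteLine("Loading a customer record")

		' Assert pops a message box under the default listener, so keep
		' assertions on the Debug class rather than Trace.
		Debug.Assert(Not customer Is Nothing, "customer must not be Nothing")
	End Sub
End Class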

Debug and Trace provide output through trace listeners. Both classes route the parameters of WriteLine, Write, Assert, and other methods to each of the registered listeners. In addition to controlling what listeners you attach, you can use trace switches to control the output sent to all listeners. Boolean switches let you turn features on and off, trace switches let you set the verbosity level, and custom switches let you set any type of value. You can use these switches any way you want, even for purposes unrelated to tracing.
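
For example, you might attach a file listener and define both kinds of switches like this (a sketch with illustrative names; the switch named TraceLevelSwitch matches the configuration entry shown later in this article):

Imports System.Diagnostics

Public Class TraceSetup
	' A BooleanSwitch turns a category of output on or off; a TraceSwitch
	' carries a verbosity level from Off through Verbose.
	Private Shared ReadOnly dataSwitch As New BooleanSwitch( _
		"DataTracing", "Trace data-access calls")
	Private Shared ReadOnly generalSwitch As New TraceSwitch( _
		"TraceLevelSwitch", "General trace level for the application")

	Public Shared Sub AttachFileListener()
		' Debug and Trace share this listener collection, so a single
		' listener receives the output of both classes.
		Trace.Listeners.Add(New TextWriterTraceListener("trace.log"))
		Trace.AutoFlush = True
	End Sub
End Class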

I designed my trace library to help you control the output of the WriteLine method. This sample from the .NET Framework help demonstrates the weaknesses of typical WriteLine usage:

Public Shared Sub MyErrorMethod()
	' generalSwitch is a TraceSwitch defined elsewhere in the class.
	If generalSwitch.TraceError Then
		Trace.WriteLine("Error message")
	End If
End Sub

The extra lines of trace code make it harder to determine the flow and intent of your method. Imagine how your method would look cluttered with a half dozen individual traces, each outputting a substantial volume of information. This code also leaves the trace level to the discretion of the programmer, making trace levels unpredictable. Another programmer debugging your code won't know whether execution bypassed a trace or whether the trace had an unexpected level.

Control Trace Output
You'll have cryptic and incomplete tracing if each programmer creates trace output explicitly. It's hard to enforce guidelines for trace output, and you can't maintain as much information manually as you can programmatically. You also disperse formatting throughout the application if each programmer creates trace information explicitly. Dispersed trace entries make it nearly impossible to modify the formatting when your needs change, even if your entire team maintains absolute consistency in its formatting.

Most programmers don't trace because it's too much work with this freeform approach, and the result is only minimally helpful. What you need is strong typing for every trace. You can create this through a custom trace library like the downloadable sample. A portion of the sample shows several different trace entries and demonstrates how a trace library improves readability. TT is an alias for the library namespace, and Tools is the class containing the shared trace methods:

Public Sub Bar1(ByVal iVal As Int32, ByVal s As String, _
		ByVal btn As Windows.Forms.Button)
	TT.Tools.StartingMethod(iVal, s, btn)
	TT.Tools.ParamLessThan("iVal", iVal, 3)
	TT.Tools.ParamCanNotBeNull("btn", btn)
	For i As Int32 = 0 To iVal
		TT.Tools.InFor(i)
		TT.Tools.MarkSpot("Random Spot")
	Next
End Sub

This sample from the main application traces the method start, parameter value assertions, loop iteration, and a random location. Be consistent in choosing which entries you'll make in all methods. For example, you might want only the method start and the two assertions. Note that you want a full set of methods in your library for quick, consistent access during debugging, regardless of your upfront strategy.

Not all trace entries are of equal importance. For example, broken assertions signal serious issues in your application, whereas entering a method is merely interesting. It's rare that you want to know exactly what's going on inside methods, but it's important information when you need it. TraceSwitch lets you specify which entries to output, based on this varying importance. The five predefined levels are Off, Error, Warning, Info, and Verbose. You compare the trace level for the entry with the level set at run time.

Checking the trace level in the subroutine lets you supply a default level for each type of trace. For example, the assertion methods (ParamLessThan and so on) default to error, entering the method defaults to information, and the internal logic flags default to verbose. Programmers adding traces can ignore these details and produce consistent trace levels. They can override the default trace levels on the rare occasions that's needed.
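
An Optional level parameter is one straightforward way to supply those defaults. This sketch uses my own names and is not the downloadable library's actual code:

Imports System.Diagnostics

Public Class Tools
	Private Shared ReadOnly generalSwitch As New TraceSwitch( _
		"TraceLevelSwitch", "General trace level")

	' The assertion defaults to Error; a caller passes a different level
	' on the rare occasion that's needed.
	Public Shared Sub ParamCanNotBeNull(ByVal paramName As String, _
			ByVal value As Object, _
			Optional ByVal level As TraceLevel = TraceLevel.Error)
		' Skip the work entirely unless the runtime setting asks for it.
		If level > generalSwitch.Level Then Return
		If value Is Nothing Then
			Trace.WriteLine("Assertion failed: " & paramName & " is Nothing")
		End If
	End Sub
End Class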

Multiple methods might support a single trace issue, such as a missing string (see Listing 1). Make sure your trace code can't raise exceptions of its own whenever you insert trace logic. For example, be sure to compare strings to Nothing before checking the length.
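
A null-safe string check follows the same pattern. Again, this is an illustrative sketch rather than the code in Listing 1, and it assumes the Tools class and generalSwitch field from the previous sketch:

' Add to the Tools sketch above.
Public Shared Sub ParamStringRequired(ByVal paramName As String, _
		ByVal value As String, _
		Optional ByVal level As TraceLevel = TraceLevel.Error)
	If level > generalSwitch.Level Then Return
	' Test against Nothing first; reading value.Length on Nothing would
	' make the trace code itself throw a NullReferenceException.
	If value Is Nothing OrElse value.Length = 0 Then
		Trace.WriteLine("Assertion failed: required string " & _
			paramName & " is missing or empty")
	End If
End Sub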

Retrieve Method Information
When debugging, you want to know as much as possible about what went on in the application. For example, you want to know which run of the application you're looking at, the time the entry was logged, the trace level of the entry, and what the entry represents. You also want to know the specific line of code you're tracing:

894ef2f1-2de7-486b-873c-437e648a48f
2/15/2004 7:46:32 PM
TraceTest.Foo.Bar1(Int32 iVal=3,
	String s="This is a string", 
	Button btn="Button1") 
row,col=799,7 
level=Info 
Starting Method 

The trace library builds complex trace entries like this from simple calls:

TT.Tools.StartingMethod(iVal, s, btn)

The shared constructor of the Tools class creates a GUID for each run of the application. The trace level is either the default or the passed overriding value, and you pass the specific entry. The method information comes from a stack frame. A stack frame describes a position in code. A set of stack frames makes up a call stack, such as the one used for exceptions. You can request a single stack frame or the whole current stack at any point in your code.
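
For example, a shared constructor along these lines gives every entry in a run the same identifier (a sketch; the field name is mine):

' Add to the Tools sketch: runs once, the first time the class is touched.
Private Shared ReadOnly runId As Guid

Shared Sub New()
	runId = Guid.NewGuid()
End Sub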

The first trace method you call retrieves the stack frame. The current frame, at position 0, represents the trace method itself; the calling method you're interested in comes next on the stack, at position 1. This positional requirement restricts the refactoring you can do in the library functions.

The stack frame contains a Reflection.MethodBase object, which holds the namespace, method, and parameter metadata; the frame itself also holds the current row and column in the code. However, the stack frame does not contain parameter values, which you'd probably like to include in your trace output. You must pass the parameter values in order to output them. You can then interleave the passed parameter values with the stack frame information (see Listing 2). If you're curious, the Visual Studio call stack window grabs parameter values for display from deep in the bowels of the debugger through unmanaged code.
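
Putting those pieces together, a method-start trace might be assembled as in this sketch. The formatting is simplified compared with the output shown earlier, and the names are illustrative rather than the library's:

' Also part of the Tools sketch; needs Imports System.Reflection and
' Imports System.Text in addition to System.Diagnostics.
Public Shared Sub StartingMethod(ParamArray argValues() As Object)
	If TraceLevel.Info > generalSwitch.Level Then Return

	' Frame 0 is this method; frame 1 is the method being traced.
	' True requests file, row, and column information as well.
	Dim frame As New StackFrame(1, True)
	Dim method As MethodBase = frame.GetMethod()

	Dim entry As New StringBuilder()
	entry.Append(method.DeclaringType.FullName & "." & method.Name & "(")

	' Interleave parameter metadata from the MethodBase with the values
	' the caller passed in, because the frame holds no parameter values.
	Dim parameters As ParameterInfo() = method.GetParameters()
	For i As Int32 = 0 To parameters.Length - 1
		If i > 0 Then entry.Append(", ")
		entry.Append(parameters(i).ParameterType.Name & " " & parameters(i).Name)
		If i < argValues.Length AndAlso Not argValues(i) Is Nothing Then
			entry.Append("=" & argValues(i).ToString())
		End If
	Next
	entry.Append(") row,col=" & frame.GetFileLineNumber().ToString() & _
		"," & frame.GetFileColumnNumber().ToString())

	Trace.WriteLine(entry.ToString())
End Sub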

It's important that you use a single formatting function because you can't know how someone will work with your entries in the future. The programmer-friendly output you create might not be viable for another use such as an import into an analysis database or tool. Similarly, it might not be consistent with updated organizational guidelines. You need to give a future programmer who needs to change the format the ability to make this change in a single place.
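
A single funnel can be as simple as this sketch, which the trace methods above would call instead of formatting their own text (runId is the per-run GUID from the earlier fragment):

' Also part of the Tools sketch: every trace method hands its text to
' this one routine, so the layout lives in a single place.
Private Shared Function FormatEntry(ByVal level As TraceLevel, _
		ByVal entry As String) As String
	Return runId.ToString() & Environment.NewLine & _
		DateTime.Now.ToString() & Environment.NewLine & _
		"level=" & level.ToString() & Environment.NewLine & _
		entry
End Function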

The solution described so far addresses our original goals for the project: Tracing doesn't diminish calling code readability; you get meaningful output with minimal effort; trace levels are consistent; and you can modify the output. However, implementing these features comes at a price: performance. A performance hit when collecting trace information isn't usually reason for concern. However, this hit becomes important if your tracing code slows the application appreciably when you have tracing turned off.

Optimize For No Output
Conventional wisdom says that you should put the trace code in the body of your methods to avoid a performance hit. Method calls are fast, so the trace library approach works for all but highly stressed servers. For example, you can turn off tracing (the normal runtime scenario) and run the sample code about a million times a second. That's more than 10 million tracing calls. The percent of time spent calling the method becomes negligible when you add a reasonable amount of real-world code. You'll call these methods often, so write the library to optimize performance with tracing turned off.

You optimize by comparing the trace level with the currently set level immediately on entering a trace method (such as ParamStringRequired). Exit the method immediately if the current trace level doesn't specify this output or the assertion passes (see Figure 1 and Listing 2). Once you know you'll output the entry, output clarity takes precedence over performance in building it. Tracing is valuable only if you can use the results effectively. Be careful if you refactor the code that runs when you aren't outputting the entry: adding another subroutine call adds about 30 percent to the test timings.

So far you have a solid tracing library and have traced at least some of your code. Next, you need to work out how to use the library during debugging. The configuration file is the most common way to set trace information such as switches. The configuration file doesn't support Enum names, so off is 0, error is 1, and so forth:

<configuration>
	<system.diagnostics>
		<switches>
			<add name="TraceLevelSwitch"
				value="1" />
		</switches>
	</system.diagnostics>
</configuration>

The downloadable trace library includes methods that let you adjust the trace level and attach listeners with your application running. You can use these methods in the debugger's Immediate window. This means you can leave tracing off until you've set the stage to reproduce the problem. At a breakpoint, type this line in the Immediate window (the namespace alias doesn't work here):

KADGen.TracingTools.Tools.TraceLevel = Diagnostics.TraceLevel.Error
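
Inside the Tools class, members along these lines would do the job. This is a sketch of one possible implementation, not the downloadable library's actual code; generalSwitch is the TraceSwitch from the earlier sketches:

' Sketch of runtime control inside the Tools class.
Public Shared Property TraceLevel() As System.Diagnostics.TraceLevel
	Get
		Return generalSwitch.Level
	End Get
	Set(ByVal Value As System.Diagnostics.TraceLevel)
		generalSwitch.Level = Value
	End Set
End Property

Public Shared Sub AttachListener(ByVal fileName As String)
	' Send output to a file you can inspect after the repro completes.
	Trace.Listeners.Add(New TextWriterTraceListener(fileName))
	Trace.AutoFlush = True
End Sub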

Decide how you'll use tracing in a particular application before you build out your tracing library. I've included 10 trace entry samples to give you a head start, but you'll need to determine the set of entries that matters to your team. Ideally, make these decisions at the start of your project. You can trace either as you build code or in response to debugging problems. Trace your application up front if downtime is expensive. If you trace in response to debugging, leave your trace code in place; you'll trace the problematic parts of your application gradually, and those tend to be concentrated in a few areas.

The key to an effective tracing strategy is to make decisions you can live with and trace consistently throughout your application.

About the Author

Kathleen is a consultant, author, trainer and speaker. She’s been a Microsoft MVP for 10 years and is an active member of the INETA Speaker’s Bureau where she receives high marks for her talks. She wrote "Code Generation in Microsoft .NET" (Apress) and often speaks at industry conferences and local user groups around the U.S. Kathleen is the founder and principal of GenDotNet and continues to research code generation and metadata as well as leveraging new technologies springing forth in .NET 3.5. Her passion is helping programmers be smarter in how they develop and consume the range of new technologies, but at the end of the day, she’s a coder writing applications just like you. Reach her at [email protected].
