Ask Kathleen

Lay Down the Law on Code Rules

Running your code through a static code analyzer like FxCop can be intimidating at first, but it helps you in the long run; you can also use dynamic analysis to test your forms at runtime.

Q
I'm running the free version of FxCop from GotDotNet, and it gives me more than a thousand errors when I run it against my application. I really want to use FxCop, but I have to get my application written, and I don't even understand many of these errors. Is my code really that terrible? Would the Team System version be easier to use?

A
FxCop is a code analysis tool that compares your application with a series of coding rules and tells you every place a rule is broken.

FxCop is available from GotDotNet, and a version is available in Team System as Static Analysis. I prefer the GotDotNet version because it includes name checking for English-language developers, it includes some new rules, and I prefer its external UI to the Visual Studio one.

FxCop provides several hundred rules to compare against. Some of these rules are esoteric, but only a few are controversial. It's a good idea to write code that passes most of the rules, but no one has ever run FxCop against a previously written medium-to-large application and come away with no errors. No one understands all of these issues before they start using FxCop; one of its key values is that it's a repository of arcane knowledge about the compiler and runtime.

FxCop can feel like a second-grade teacher rapping you on the knuckles with a ruler when it piles a thousand errors on you, but you can configure it to be more like my violin instructor: he's always complaining about something, and silence is often the best news. He keeps the criticism manageable, though, by focusing only on the few things that are most important right now. You can be successful with FxCop if you give up trying to address everything at once and focus instead on what's most important to your application.

The unfriendly interface comes from FxCop's history. Microsoft developed strategies to create better code, and it couldn't enforce these techniques across all Microsoft code without writing tools. When an application at Microsoft has thousands of errors, the team can halt development and spend a few weeks fixing and retesting. If you have that luxury, it's the cheapest way to implement FxCop. Once your error count is near zero, run FxCop on a regular basis, and you'll see all new problems immediately.

Few of us have that luxury because deadlines don't change when we decide to implement FxCop. The trick is implementing FxCop slowly and working your error count to near zero on only a few key rules at a time. Select the three or four rules you think are most important and disable all the rest. You disable rules on the Rules tab of the FxCop UI. Fix the issues you selected throughout your code to keep the count near zero. From then on, you can run FxCop regularly and keep your code on track with these important issues. Each week, pick another three or four new rules to include.

It might seem that implementing new FxCop rules gradually is more expensive than the full stop approach, because you're continuing to write code you'll have to fix later. What's more expensive is ignoring FxCop altogether. You'll have to deal with these issues eventually, whether through FxCop or by having to overcome problems related to maintainability, globalization, performance, or debugging.

In any case, please don't give up on FxCop: it can help you as much or as little as you need when you adapt it to fit your environment. If you add a few rules each week, you'll eventually include all the rules that matter to you.

When you're working toward compliance and you encounter code that you think is correct but breaks an FxCop rule, document in the code itself why you suppressed the rule, so other developers can see your reasoning. Right-click on the message in the FxCop UI and select Copy As > Suppress Message. You can paste this message directly into C#, but Visual Basic requires that you change the syntax a bit:

<SuppressMessage("Microsoft.Design", _
   "CA1024:UsePropertiesWhereAppropriate", _
   Justification:="This makes more sense as a " & _
   "function")> _
Public Shared Function GetRenderer() As _
   ToolStripProfessionalRenderer
   Return renderer
End Function

You need to import the System.Diagnostics.CodeAnalysis namespace to resolve the SuppressMessage attribute, whether you use C# or VB. It's important to include the justification so other programmers know why you made the decision.
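For reference, the C# form of the same suppression pastes in unchanged from the FxCop UI. This sketch assumes the same renderer field that the VB sample returns:

```csharp
using System.Diagnostics.CodeAnalysis;
using System.Windows.Forms;

[SuppressMessage("Microsoft.Design",
    "CA1024:UsePropertiesWhereAppropriate",
    Justification = "This makes more sense as a function")]
public static ToolStripProfessionalRenderer GetRenderer()
{
    return renderer;
}
```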


Q
We have some rules on how forms should behave, such as always having an Accept button. Our testing team doesn't always catch these and then development gets blamed. Is there a way to test our forms for consistency with standards before we give them to testing? I don't see a way to do this with FxCop.

A
You could write an FxCop rule to check whether the properties are set, but you can't easily determine whether other code resets the values before the form is displayed. FxCop checks your application through static analysis, which means analyzing the assembly without running it. A better approach in this case is dynamic analysis, which serves a different purpose: it runs the application and reports on runtime conditions. It's especially useful in WinForms applications. Dynamic analysis doesn't replace FxCop; it augments it.

To implement dynamic analysis, run a check method on each form after it opens (Listing 1). While you can add this method call to each form, it's far more reliable to add it to a base class. A base class also gives you flexibility if you have different rules for different types of forms, such as modal dialogs or MDI children. Of course, this works only if your forms derive from common base classes, but using such classes is a good idea for many reasons: inserting your own base class into the hierarchy provides a façade, a single point of entry for any customization you need to apply to all your forms. With this base class in place, you can call your dynamic analysis method after the form is loaded, using a flag to ensure the analysis runs only the first time the form opens:

private bool isLoaded;

protected override void OnActivated(EventArgs e)
{
   base.OnActivated(e);
   if (!isLoaded)
   {
      DynamicAnalysis.FormStartupDynamicAnalysis(this);
      isLoaded = true;
   }
}

You can start writing tests once you insert this call in your code. The test you request is simple:

public static void FormDynamicAnalysis(Form form)
{
   if (form.AcceptButton == null)
   {
      LogDynamicAnalysisError(string.Format(
         "{0}-Accept button not set",
         form.Name));
   }
}
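The AcceptButton check is the only rule implemented here, but your team's other form standards plug in the same way. As a sketch, here's the method extended with a CancelButton check; that second rule is my addition, not part of the original listing:

```csharp
// Extended sketch: the CancelButton rule below is an assumed
// team standard, added alongside the AcceptButton check.
public static void FormDynamicAnalysis(Form form)
{
   if (form.AcceptButton == null)
   {
      LogDynamicAnalysisError(string.Format(
         "{0}-Accept button not set", form.Name));
   }
   if (form.CancelButton == null)
   {
      LogDynamicAnalysisError(string.Format(
         "{0}-Cancel button not set", form.Name));
   }
}
```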

You must resolve a couple of issues before you have a good dynamic analysis solution. These warnings are important to developers, but you don't want your users to see them, and there's no sense hurting runtime performance if the checks are slow. You can limit your testing to debug builds with preprocessor directives:

protected override void OnActivated(EventArgs e)
{
   base.OnActivated(e);
   if (!isLoaded)
   {
#if DEBUG
      DynamicAnalysis.FormStartupDynamicAnalysis(this);
#endif
      isLoaded = true;
   }
}

Of course, it's still possible for code to reset the form property values you've just tested. You can let your testing team repeat these checks at any time by handling an otherwise unused event, such as the form's DoubleClick event. Note that it's slightly more efficient to override the protected OnXXX methods than to handle the corresponding events on the current class:

#if DEBUG
protected override void OnDoubleClick(EventArgs e)
{
   base.OnDoubleClick(e);
   DynamicAnalysis.FormDynamicAnalysis(this);
}
#endif

You need to report any problems to your programmers. You can do this with message boxes, but they quickly annoy the other members of your team. The TraceSource class offers more sophisticated options, and using it for dynamic analysis also gets you familiar with this important subsystem.

The TraceSource class is like a bucket. You create an instance and record logging information:

private static TraceSource analysisTraceSource;

private static void LogDynamicAnalysisError(string message)
{
   // Create the source on first use.
   if (analysisTraceSource == null)
   {
      analysisTraceSource = new TraceSource(
         "DynamicAnalysisSource");
   }
   analysisTraceSource.TraceEvent(
      TraceEventType.Warning,
      (int)AnalysisTraceSourceId.VisualError,
      message);
}

The TraceEvent method records that execution hit this line of code. The TraceEventType sets an overall level for the message, which can be Warning, Information, Error, or any of several other values. The second parameter to this overload is an integer key identifying the type of message you're logging. You can create a unique key for every trace call, or use an enum to organize messages by category. The final parameter is the message itself.
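The AnalysisTraceSourceId enum referenced in the logging code isn't shown in the listings; a minimal version might look like this, where the numbering and any additional members are my own assumptions:

```csharp
// Hypothetical event-ID categories for dynamic analysis traces.
// Only VisualError is used in the logging code above.
internal enum AnalysisTraceSourceId
{
    VisualError = 1
    // Add further categories (e.g. LayoutError = 2) as needed.
}
```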

This little fragment of code is surprisingly powerful (Listing 2). Configuring the TraceSource object at runtime through app.config lets you fine-tune the message response at different points in your application's development. You can send the information to a log file, or even create a listener that displays a dialog box only to the form's owner if you're fond of the old-style Assert message boxes. The easiest approach is to send the information to the console, which Visual Studio displays in the Output window for C# and in the Output or Immediate windows for Visual Basic.
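For example, a minimal app.config that routes this output to the console might look like the following; the switch and listener names here are my own invention, and only the source name must match the name used in the code:

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <!-- Name must match the TraceSource created in code -->
      <source name="DynamicAnalysisSource"
              switchName="analysisSwitch">
        <listeners>
          <add name="console"
               type="System.Diagnostics.ConsoleTraceListener" />
        </listeners>
      </source>
    </sources>
    <switches>
      <!-- Warning and above; set to Off to silence analysis -->
      <add name="analysisSwitch" value="Warning" />
    </switches>
  </system.diagnostics>
</configuration>
```

Changing the switch value requires only a config edit, not a recompile.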

About the Author

Kathleen is a consultant, author, trainer and speaker. She’s been a Microsoft MVP for 10 years and is an active member of the INETA Speaker’s Bureau where she receives high marks for her talks. She wrote "Code Generation in Microsoft .NET" (Apress) and often speaks at industry conferences and local user groups around the U.S. Kathleen is the founder and principal of GenDotNet and continues to research code generation and metadata as well as leveraging new technologies springing forth in .NET 3.5. Her passion is helping programmers be smarter in how they develop and consume the range of new technologies, but at the end of the day, she’s a coder writing applications just like you. Reach her at [email protected].

