C# Corner

Write Robust Exception-Handling Code

Thrown exceptions break the normal flow of execution in a program to report error conditions. A few simple techniques can help you preserve execution flow and give users and administrators the information they need to understand what went wrong.

Technology Toolbox: C#

The .NET Framework Design Guidelines has this to say about reporting errors:

"DO NOT return errors codes. ... "DO report execution failures by throwing exceptions. If a member cannot successfully do what it is designed to do, ... an exception should be thrown." [From Framework Design Guidelines; byy Krzysztof Cwalina, Brad Abrams (Addison-Wesley), ISBN: 0321246756.).

These few sentences have huge implications for your daily coding life. Regardless of your preferences when dealing with error reporting and error handling, your code must be robust in the face of exceptions. The .NET Framework designers have made a strong design statement: Exceptions are the correct way to report execution failures. If you call .NET Framework code (and you know you do), you need to write code that handles exceptions professionally and expertly.

In this article I'll discuss three distinct mindsets you need to adopt to write strong error-reporting and error-handling code. First, I'll discuss the mindset you should have when you create libraries and you're the one reporting errors. Second, I'll cover how to create classes that are robust when exceptions are thrown through their methods. Finally, I'll explain how to write top-level UI logic that gracefully handles the exceptions that propagate up to it.

The choice of words in the quote I've cited contains the key to understanding the proper times to throw exceptions in your code. An "execution failure" doesn't merely mean something went wrong. It's a more restrictive condition. It means the method cannot do what it was designed to do. You shouldn't be throwing exceptions every time something goes wrong. You shouldn't write brittle methods that break so easily. You can predict many things that might go wrong with a method, its inputs, its environment, or the system in which it's running. All these conditions are expected, and you should write code with these conditions in mind. Of course, you might not be able to recover from each and every one of them. Those errors that prevent your method from completing its task result in exceptions. Let's illustrate this distinction with a handful of examples.

You're probably familiar with this method:

public static string[] GetFiles(string path,
	string searchPattern);

It's in the System.IO.Directory class, and it returns the set of files that match the search pattern in a given directory. Without looking at the docs, consider these conditions and think about what the method should do. Suppose the directory doesn't exist. What if the directory string is null, an empty string, or a string that contains illegal path characters? What if the directory exists, but the current account doesn't have read access to it? Suppose the search pattern is empty, or the search pattern contains illegal characters.

Make the Right Choice
In every case, you have a choice: You can throw an exception, or you can pick some default answer. In some cases, the right approach is obvious: A directory that doesn't exist won't have any files, which makes returning an empty array a good choice. A null directory string doesn't have a good answer: Should that be interpreted as some default directory? If so, which one: the program directory, the user's home directory, C:\? There isn't an obvious answer. The method can't do what it's intended to do. And that's the point: Not every error condition is an execution error. Some conditions have obvious default answers, and you should return those default answers instead of throwing an exception. It will help your users build more robust systems.
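To make that distinction concrete, here's a minimal sketch of how a wrapper method might apply it. (The FileFinder class and FindFiles method are hypothetical, not part of the Framework.) A missing directory gets the obvious default answer, while a null argument becomes an execution failure:

using System;
using System.IO;

public static class FileFinder
{
	public static string[] FindFiles(string dir, string searchPattern)
	{
		// No sensible default exists for a null directory:
		// the method cannot do what it's designed to do.
		if (dir == null)
			throw new ArgumentNullException("dir");

		// A directory that doesn't exist has an obvious default
		// answer: it contains no matching files.
		if (!Directory.Exists(dir))
			return new string[0];

		// This may still throw, for example when read access is denied.
		return Directory.GetFiles(dir, searchPattern);
	}
}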

Some conditions require more thought. For example, assume a directory is inaccessible for security reasons. Should that be an error, or should the method return an empty array? My own opinion is that if the requested directory cannot be read, that's an execution failure, and you should throw an exception. In this case, trying to invent a default answer leads to inconsistent behavior. Assume the same code executes later with different credentials, and this time it works. Because you picked a default answer rather than reporting the error condition, important information that could have helped diagnose the problem has been lost. You've made it harder for developers using your library to diagnose error conditions.

This gets us to the second point: Exceptions are error conditions that can't be ignored. Colleagues who call your code incorrectly will be alerted to the first execution failure immediately, rather than being forced to reconstruct the series of small events that led up to some catastrophic error. By reporting errors in this non-ignorable fashion, you force client code to respond or let the application terminate. It's harsh, but it prevents small errors from growing into catastrophic ones.

That small example provides enough background to discuss some general rules for exceptions and library creators. I've already mentioned that using exceptions as the error reporting mechanism ensures that small errors don't go undetected and cause more serious problems as the system destabilizes over time. This suggests a strategy of throwing exceptions whenever a problem occurs. But, countering that, exceptions break the normal control flow. Throwing exceptions can be considered the modern equivalent of goto statements. Throwing an exception transfers control up the call stack until a suitable catch clause can be found. Any intervening cleanup code might be missed. As a quick aside, there are some misconceptions about the performance implications of using exceptions as an error reporting mechanism. The presence of try/catch/finally clauses doesn't have a significant effect on performance in .NET when the runtime control flow doesn't generate exceptions. Measurable performance costs occur only when exceptions are generated.

Create Exception Guidelines
This means you can create some general guidelines for how to handle exceptions. If a method cannot do what it's intended to do, that is an execution failure, and you must throw an exception. You should throw exceptions only when the method cannot do what it's intended to do; you cannot use exceptions as an excuse to avoid writing robust code. The Exception class hierarchy already provides a rich set of properties that you can use to communicate error information to client developers. System.Exception includes a Message property, a StackTrace, a Source property, and even a HelpLink. You can learn from this design: An Exception is just another object, so you can add whatever information you have that would help your clients diagnose the root cause of the problem. It might not be possible to recover from all exceptional conditions, but you can provide enough information to enable developers to fix the core problem.
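For example, a library might define an exception type that carries the extra facts it knows about a failure. This is only a sketch; the DataImportException type and its properties are illustrative, not part of any Framework class:

using System;

public class DataImportException : Exception
{
	private readonly string fileName;
	private readonly int lineNumber;

	public DataImportException(string message, string fileName,
		int lineNumber, Exception inner)
		: base(message, inner)
	{
		this.fileName = fileName;
		this.lineNumber = lineNumber;
	}

	// Extra context that helps client developers find the root cause.
	public string FileName { get { return fileName; } }
	public int LineNumber { get { return lineNumber; } }
}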

Library designers should consider the tester/doer pattern to provide client applications with a path that avoids possible errors:

// test
if (File.Exists("MyFile.txt"))
{
	// do:
	string[] lines = File.ReadAllLines("MyFile.txt");
	// maybe do more.
}

Note that you could call File.ReadAllLines without checking whether MyFile.txt exists. It would fail, and it would throw an exception. The library designer has no choice: ReadAllLines cannot read from a nonexistent file. However, the Framework designers did provide an alternative path: a way to test the condition before blindly trying an operation that might fail.

In cases where the tester/doer pattern would have too much overhead, the Try/Parse pattern is a reasonable alternative. System.Double.TryParse() is a good example of this pattern. Double.Parse() will parse a string and return the value as a double. If the input string doesn't represent a number, it throws an exception. TryParse, by comparison, returns a Boolean that indicates whether the input string is a number. An out parameter stores the answer. The reason for the difference is performance: The test to see whether a string represents a number is, more or less, the same amount of work that's needed to parse the string. Therefore, implementing the tester/doer pattern would more or less double the time it takes to parse the string and determine if it is a double. In cases where the cost of the test approaches the cost of the work, the try/parse pattern is a good alternative to the tester/doer pattern.
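Here's a small sketch of the difference in practice:

// Parse reports failure by throwing; TryParse reports it through its
// return value and stores the parsed value in an out parameter.
string input = "3.14";

double value;
if (double.TryParse(input, out value))
	Console.WriteLine("Parsed: {0}", value);
else
	Console.WriteLine("'{0}' is not a number.", input);

// By contrast, this line throws a FormatException if input isn't numeric:
// double value2 = double.Parse(input);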

Handle Middle-Layer Code
One of the biggest advantages to using exceptions as your reporting mechanism is that errors propagate from the error point to any routine up the call stack that can recover from that error. The major concern for code that neither generates nor catches exceptions is managing program state robustly while exceptions pass through it. You need to structure your code so that it satisfies two expectations for all callers: First, any allocated resources are cleaned up in both success and failure cases, as appropriate. Second, if an exception escapes a method, observable program state must not change.

You can satisfy the first expectation by putting all cleanup code in a finally clause:

TextWriter stream = OpenMyTextFile();
try
{
	// code that might throw...
}
finally
{
	stream.Close();
}

Satisfying the second expectation requires a bit more thought. Methods that don't change state are simple: If they don't change state during their normal execution, they won't change state during error conditions. Mutator methods can be made safe by making any potentially unsafe changes on temporary objects, then swapping those temporary objects with the permanent storage after ensuring that the unsafe operations have succeeded. For example, consider this pseudo code that replaces an array with the contents of a file:

// declared elsewhere:
private string[] capturedData;

// public method to read data file:
public void ReadData(string pathName)
{
	// use temp storage:
	string[] newLines = File.ReadAllLines(pathName); // might throw.
	// swap:
	capturedData = newLines;
}

This example is simple, but you might think that the copy/swap implementation is a performance-robbing operation on large structures. As with all guesses about performance, the only way to find out is to test your assumptions. I wrote a test program to look at four different ways to take a list of integers and double each value (see Listing 1 in the downloadable sample code). The first approach modifies the list in place; the second creates a new, empty List and adds values while iterating the first list; the third creates a new list with an initial capacity equal to the size of the permanent storage, then adds values while iterating the first list; and the fourth creates a copy of the permanent list and does all its work on the copy.

Here are some typical results from my desktop machine:

00:00:00.6580000		Modify in place
00:00:01.8660000		New / add
00:00:01.4620000		initial capacity
00:00:00.8510000		copy / modify

It's true that operating in place is the fastest method, but the performance cost is minimal if you take advantage of what you already know about the current set of objects (see test 4 in Listing 1). Much of the cost in the second and third tests comes from managing the collection's size, which is why they are quite a bit slower than the first and fourth. Copying 10,000 items costs a total of roughly 0.2 milliseconds each time. It's not free, but it's not that expensive either. All things considered, it's a small price to pay for writing exception-proof code.
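If you want to reproduce the comparison, a rough sketch of the first and fourth tests looks something like this (this isn't the downloadable Listing 1, and your timings will differ):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class CopySwapTiming
{
	static void Main()
	{
		List<int> storage = new List<int>();
		for (int i = 0; i < 10000; i++)
			storage.Add(i);

		// Test 1: modify in place (fastest, but not exception-safe).
		Stopwatch watch = Stopwatch.StartNew();
		for (int run = 0; run < 1000; run++)
			for (int i = 0; i < storage.Count; i++)
				storage[i] *= 2;
		Console.WriteLine("{0}\tModify in place", watch.Elapsed);

		// Test 4: copy, modify the copy, then swap it into place only
		// after every operation has succeeded.
		watch = Stopwatch.StartNew();
		for (int run = 0; run < 1000; run++)
		{
			List<int> copy = new List<int>(storage);
			for (int i = 0; i < copy.Count; i++)
				copy[i] *= 2;
			storage = copy;
		}
		Console.WriteLine("{0}\tCopy / modify", watch.Elapsed);
	}
}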

Assess Recovery Options
If you write code that catches an exception, you are responsible for complete recovery from the root problem. Ask yourself three questions before you write each catch clause. First, can you fix the root problem completely? Second, can you remove any side effects? Third, can you roll back or cancel any in-progress operation?

The answer to the first question should be obvious when you write the code. The other two questions depend on properties of the libraries and services you use. If all the middle-layer code follows the strong exception guarantee, in-progress operations rarely leave side effects behind, and you can continue. Otherwise, recovery becomes much harder.

There are a handful of places where a general catch clause can create a more robust program. One such place is in top-level event handlers in Windows applications. If exceptions are thrown from those event handlers, the CLR tries to recover for you, but that often leaves your application in an unstable state. You're better off catching the exception, logging it, and calling System.Environment.FailFast when you can't recover.
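In a Windows Forms application, that might look roughly like this sketch (MainForm stands in for your application's main form, and the logging call is a placeholder for whatever mechanism you actually use):

using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
	[STAThread]
	static void Main()
	{
		Application.ThreadException += OnThreadException;
		Application.Run(new MainForm());
	}

	static void OnThreadException(object sender, ThreadExceptionEventArgs e)
	{
		// Log what you know about the failure.
		Console.WriteLine("Unhandled exception: {0}", e.Exception);

		// If there's no way to recover, terminate rather than limp along.
		Environment.FailFast("Unrecoverable error in the UI thread.");
	}
}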

You should catch general exceptions at thread boundaries. Exceptions cannot be thrown across thread boundaries. Exceptions that aren't caught by the thread procedure result in hanging threads. The System.ComponentModel.BackgroundWorker class has functionality to marshal the exception information across thread boundaries, but that is beyond the scope of this column.
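At its simplest, catching at the thread boundary means the thread procedure itself lets nothing escape; here's a minimal sketch (the Worker class is illustrative only):

using System;
using System.Threading;

class Worker
{
	private Exception error;

	// Exposed so the launching code can inspect any failure after Join().
	public Exception Error { get { return error; } }

	public void DoWork()
	{
		try
		{
			// the real work that might throw...
		}
		catch (Exception ex)
		{
			// Record the failure instead of letting it escape the thread.
			error = ex;
		}
	}
}

// Usage:
//	Worker worker = new Worker();
//	Thread thread = new Thread(worker.DoWork);
//	thread.Start();
//	thread.Join();
//	if (worker.Error != null) { /* report or rethrow on the calling thread */ }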

If you write a Windows Service, you should ensure that you don't let exceptions pass through your service methods. The CLR hosting the service will shut it down and refuse to restart it.
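One way to honor that rule is to wrap the body of each service method; this sketch logs the failure and then stops the service deliberately (the exact shutdown path you choose depends on your service):

using System;
using System.ServiceProcess;

public class MyService : ServiceBase
{
	protected override void OnStart(string[] args)
	{
		try
		{
			// start-up work that might throw...
		}
		catch (Exception ex)
		{
			// Log to the event log, then shut down on our own terms
			// instead of letting the exception escape OnStart.
			EventLog.WriteEntry("Service failed to start: " + ex.Message);
			ExitCode = 1;
			Stop();
		}
	}
}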

Note that you aren't fixing the problem in any of these examples. Rather, you're finding a more graceful way to notify the end user or administrator of the failure.

Bad things happen to all code, so it pays to use the most robust error-reporting mechanism available. In other words, it behooves you to use exceptions. With a little practice, the difficulties of working with exceptions are easily offset by the advantages of the rich reporting environment and the fact that exceptions simply can't be ignored. Of course, if you're still not convinced, the practical answer is even more compelling: The .NET Framework uses exceptions almost exclusively for its error reporting. Like it or not, you need to deal with exceptions.

About the Author

Bill Wagner, author of Effective C#, has been a commercial software developer for the past 20 years. He is a Microsoft Regional Director and a Visual C# MVP. His interests include the C# language, the .NET Framework and software design. Reach Bill at [email protected].
