C# Corner

Tips for Easier C# Unit Testing

C# Corner columnist Patrick Steele says writing unit tests can be a time-consuming chore. He looks at some approaches that can make writing unit tests easier and more efficient for C# programmers.

Many of the complaints about writing unit tests are that they take a long time to write or that they're simply "boring" to write. While I can't help conquer the "boring" part, this article looks at approaches that make writing unit tests easier so that you're more inclined to do it.

The examples here use the more common unit testing frameworks, MSTest and NUnit, but the concepts apply to most other frameworks as well.

Testing XML Serialization
The built-in XML serialization in the .NET Framework is pretty handy. It's great for storing simple configuration information or other small sets of data. Whenever you write a class that is designed to be serialized using XML serialization (or any serialization, for that matter), you should have some unit tests to ensure the data is persisted and retrieved as expected.

Some people say that testing XML serialization is a waste of time because you're really just testing the .NET Framework classes -- after all, if the XmlSerializer class itself has a bug, you can't fix it. But that's not why we test it. We test XML serialization to make sure our class is set up properly, so that the right data is saved during serialization and the right data is retrieved during deserialization.

The first step toward easier XML serialization testing is to make sure your application is flexible enough to read and write its data directly from a Stream. While most configuration data in a Windows application is saved to the local file system, a change in requirements could mean we need to save the data elsewhere. By working with the "least common denominator" -- the Stream -- we can support a much wider range of targets: file streams, HTTP streams, TCP/IP streams and so on.

Let's look at a simple configuration class that we'll want to save and load via XML serialization:

	public class ConfigData
	{
		public string CurrentOperation { get; set; }
		public int MaxRetries { get; set; }
		public string StorageDirectory { get; set; }
	}

I like to create an interface for loading and saving configuration data. This allows me to inject it into my components using Dependency Injection (See "Inversion of Control Patterns for the Microsoft .NET Framework"):

	public interface IConfigurationRepository
	{
		ConfigData Load();
		void Save(ConfigData configuration);
	}

Notice there is nothing in the contract above that stipulates where the data will be saved -- it's simply a contract for saving and loading configuration data. It's the code that implements this contract that will decide where the information is saved.

Since we decided we're going to save and load the XML data from the local file system, we need to implement an IConfigurationRepository that uses a file:

	public class ConfigDataFileRepository : IConfigurationRepository
	{
		private readonly string filename;

		public ConfigDataFileRepository(string filename)
		{
			this.filename = filename;
		}

		public ConfigData Load()
		{
			var serializer = new XmlSerializer(typeof (ConfigData));
			using(var fs = new FileStream(filename,FileMode.Open))
			{
				return (ConfigData) serializer.Deserialize(fs);
			}
		}

		public void Save(ConfigData configuration)
		{
			var serializer = new XmlSerializer(typeof(ConfigData));
			// FileMode.Create creates the file if it doesn't exist and overwrites it if it does
			using (var fs = new FileStream(filename, FileMode.Create))
			{
				serializer.Serialize(fs, configuration);
			}
		}
	}

You can see that this class requires a filename in the constructor; it then saves to and loads from that file. A simple test verifies that when we read back what we wrote out, we get the same data:

	[TestMethod]
	public void ConfigData_Roundtrips_Successfully()
	{
		var configData = new ConfigData
		{
			CurrentOperation = "test",
			MaxRetries = 55,
			StorageDirectory = @"C:\temp"
		};
		var repository = new ConfigDataFileRepository("sample.xml");
		repository.Save(configData);

		var loaded = repository.Load();

		Assert.AreEqual(configData.CurrentOperation, loaded.CurrentOperation);
		Assert.AreEqual(configData.MaxRetries, loaded.MaxRetries);
		Assert.AreEqual(configData.StorageDirectory, loaded.StorageDirectory);
	}

The problem with this approach is that we're hitting the physical file system. As noted earlier, if we do our operations on a Stream, we get more options for the saving and loading of the data as well as easier unit testing since we can use a MemoryStream instead of the file system. By removing the file system access from the test (or the database access or the Web site access or the network access, etc.), we truly have a "unit" test that tests only our code and not interaction with other components or systems.

A quick refactor of the ConfigDataFileRepository exposes some Stream-based methods for saving and loading, while still supporting the original file-based methods:

	public class ConfigDataFileRepository : IConfigurationRepository
	{
		private readonly string filename;

		public ConfigDataFileRepository(string filename)
		{
			this.filename = filename;
		}

		public ConfigData Load()
		{
			using(var fs = new FileStream(filename,FileMode.Open))
			{
				return LoadFromStream(fs);
			}
		}

		public void Save(ConfigData configuration)
		{
			// Again, FileMode.Create so the file is created (or overwritten) on save
			using (var fs = new FileStream(filename, FileMode.Create))
			{
				SaveToStream(configuration, fs);
			}
		}

		public ConfigData LoadFromStream(Stream stream)
		{
			var serializer = new XmlSerializer(typeof(ConfigData));
			return (ConfigData)serializer.Deserialize(stream);
		}

		public void SaveToStream(ConfigData configuration, Stream stream)
		{
			var serializer = new XmlSerializer(typeof(ConfigData));
			serializer.Serialize(stream, configuration);
		}
	}

Now we refactor our tests to use a MemoryStream, thus avoiding the file system:

	[TestMethod]
	public void ConfigData_Roundtrips_Successfully2()
	{
		var configData = new ConfigData
		{
			CurrentOperation = "test",
			MaxRetries = 55,
			StorageDirectory = @"C:\temp"
		};
		var repository = new ConfigDataFileRepository("sample.xml");
		var ms = new MemoryStream();
		repository.SaveToStream(configData, ms);

		ms.Position = 0;
		var loaded = repository.LoadFromStream(ms);

		Assert.AreEqual(configData.CurrentOperation, loaded.CurrentOperation);
		Assert.AreEqual(configData.MaxRetries, loaded.MaxRetries);
		Assert.AreEqual(configData.StorageDirectory, loaded.StorageDirectory);
	}

This new test passes and it doesn't need to hit the file system. But it's not complete yet -- I would add one more test that takes an XML string representing your serialized class and reads it back in (hint: convert the string to a byte[] and then create a MemoryStream over the byte[]). This type of check will catch cases where someone renames a property in the ConfigData class and doesn't realize they're obsoleting current customers' configuration data!
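
Here's a sketch of what that extra test might look like. The XML literal below is hypothetical -- in practice you'd capture the actual output of a known-good version of ConfigData -- and the filename passed to the repository is irrelevant, since we only call the Stream-based method:

	[TestMethod]
	public void ConfigData_Loads_From_Known_Good_Xml()
	{
		// Hypothetical XML captured from a previously released version of ConfigData.
		// If someone renames a property, this test fails even though the roundtrip test still passes.
		const string xml =
			@"<ConfigData>
				<CurrentOperation>test</CurrentOperation>
				<MaxRetries>55</MaxRetries>
				<StorageDirectory>C:\temp</StorageDirectory>
			</ConfigData>";

		var repository = new ConfigDataFileRepository("unused.xml");
		using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(xml)))
		{
			var loaded = repository.LoadFromStream(ms);

			Assert.AreEqual("test", loaded.CurrentOperation);
			Assert.AreEqual(55, loaded.MaxRetries);
			Assert.AreEqual(@"C:\temp", loaded.StorageDirectory);
		}
	}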

But wait -- doesn't the new test mean we're not actually calling the Save() and Load() methods? Yes, it does. But the only thing those methods do is open a file stream and pass that stream to the methods we do have a unit test for. If Save() or Load() fails, the problem lies outside our component -- perhaps a hard disk problem, or a permission/security configuration issue on the build machine where the tests run. In other words, these are things our component isn't designed to handle anyway, so we don't test for them.

While I approached the subject of Streams vs. the file system in the context of XML serialization, the same technique extends to any case where your code touches the file system.

Collection Verification
When a method returns a collection of items, you don't have to write a separate assert for each element. Suppose we have a method that generates even numbers (given a starting number and the count of items to return). We could check it in MSTest like this:

	[TestMethod]
	public void Validate_GetEvens()
	{
		var sample = new SampleComponent();
		var evens = sample.GetEvens(12, 4);

		Assert.AreEqual(12, evens[0]);
		Assert.AreEqual(14, evens[1]);
		Assert.AreEqual(16, evens[2]);
		Assert.AreEqual(18, evens[3]);
	}

But that's kind of a pain. Microsoft agrees. They have a special "CollectionAssert" class that can be used to validate that two collections are equal. The test above could be re-written as:

	[TestMethod]
	public void Validate_GetEvens()
	{
		var sample = new SampleComponent();
		var evens = sample.GetEvens(12, 4);

		CollectionAssert.AreEqual(new[] {12, 14, 16, 18}, evens);
	}
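
For reference, these tests assume a GetEvens method on SampleComponent that isn't shown here. A minimal sketch might look like this:

	public int[] GetEvens(int start, int count)
	{
		// Start at the first even number at or above 'start', then step by two
		var first = (start % 2 == 0) ? start : start + 1;
		return Enumerable.Range(0, count)
			.Select(i => first + (i * 2))
			.ToArray();
	}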

There's even an overload of the CollectionAssert.AreEqual method that allows you to provide a custom IComparer so you can change how "equality" is determined in your tests. This little utility assert can make collection compares much easier.
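
For example, here's a sketch of comparing string collections case-insensitively with that overload. GetNames is a hypothetical method returning a string[], and IComparer is the non-generic System.Collections.IComparer:

	// A hypothetical comparer that ignores case when comparing two strings
	private class OrdinalIgnoreCaseComparer : IComparer
	{
		public int Compare(object x, object y)
		{
			return string.Compare((string)x, (string)y, StringComparison.OrdinalIgnoreCase);
		}
	}

	[TestMethod]
	public void Validate_GetNames_Ignoring_Case()
	{
		var sample = new SampleComponent();
		var names = sample.GetNames();  // hypothetical method returning string[]

		CollectionAssert.AreEqual(
			new[] { "alpha", "beta", "gamma" }, names, new OrdinalIgnoreCaseComparer());
	}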

Use LINQ For Verification
LINQ doesn't have to be confined to application code. There are a number of places where LINQ can make your unit tests a little easier to write.

First
I've seen code call FirstOrDefault and then Assert that the returned value isn't null (or whatever the default value for the type is). Since First throws an exception -- and therefore fails your test -- when no matching item exists, I prefer to use First: it's only one line of code, and it makes clear that you expect to find at least one item. The same goes for the other paired methods -- Single/SingleOrDefault, Last/LastOrDefault and so on.
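
Here's a quick sketch of the difference, using the same hypothetical ComputeSamples method (returning reference-type items with a Temperature property) that appears in the next example:

	[TestMethod]
	public void First_Makes_The_Expectation_Explicit()
	{
		var sample = new SampleComponent();
		var results = sample.ComputeSamples();

		// The FirstOrDefault approach: two lines, and the intent is indirect
		var maybeHot = results.FirstOrDefault(s => s.Temperature > 100.0);
		Assert.IsNotNull(maybeHot);

		// The First approach: one line that fails the test if nothing matches
		var hot = results.First(s => s.Temperature > 100.0);
		Assert.IsTrue(hot.Temperature > 100.0);
	}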

Select/Where
Imagine you have a method that returns some enumerable list of items -- an array, a collection -- and you need to make sure that you've got at least 5 items with a temperature over 100 degrees. Before LINQ, our unit test would have to loop over all of the items and keep a counter going as it checked each temperature. LINQ can make this so much easier:

	[TestMethod]
	public void Find_Five_Temps_Over_100()
	{
		var sample = new SampleComponent();

		var results = sample.ComputeSamples();

		var query = from s in results
		            where s.Temperature > 100.0
		            select s;

		// Take(5).Count() only returns 5 if the query produced at least five matches
		Assert.AreEqual(5, query.Take(5).Count());
	}

And let's not forget LINQ's "orderby" clause. Let's say we need to verify the computed values of the four highest temperatures:

	[TestMethod]
	public void Find_Four_Highest_Temps()
	{
		var sample = new SampleComponent();

		var results = sample.ComputeHotSamples();
		var query = (from s in results
		             orderby s.Temperature descending
		             select s.Temperature).Take(4);
		CollectionAssert.AreEqual(
			new [] { 106.7, 106.2, 105.2, 103.9 }, query.ToArray());
	}

Don't forget to take advantage of LINQ's clear and concise query syntax for validating data in your unit tests.

Exception Handling in NUnit
Unfortunately, this section on exception handling does not apply to MSTest. I wanted to include it anyway, since it's a significant improvement to exception handling in unit tests.

In .NET, we don't use error codes or return values to signal failure anymore -- we throw exceptions. As part of any good unit test, you want to make sure that when certain conditions occur, the exceptions you expect are thrown. All of the popular frameworks support verifying that a specific type of exception is thrown during a unit test. Here's an example:

	[TestMethod]
	[ExpectedException(typeof(ArgumentNullException))]
	public void Capitalize_Throws_Exception_When_Argument_Is_Null()
	{
		var component = new SampleComponent();
		component.CapitalizeThis(null);
	}

This simple test checks that if we pass a null parameter to the CapitalizeThis method, an ArgumentNullException will be thrown.

The problem here is that we actually execute two lines of code in this test: the component creation as well as the method we want to test (and this is a small, simple test -- a "real" test would have much more code). The ExpectedException attribute applies to the entire test, so if any code in the test raises an ArgumentNullException, the test will pass. Suppose a bug in the SampleComponent constructor throws an ArgumentNullException -- this test still passes! That's not what we want in a robust unit test.

To counter this, the NUnit framework (along with some of the other frameworks) added an Assert.Throws method. This generic method takes the expected exception type as its type parameter and a lambda containing the code to run; only the code inside the lambda is checked for the expected exception. Here's how we'd rewrite the above test using Assert.Throws:

	[Test]
	public void Capitalize_Throws_Exception_When_Argument_Is_Null()
	{
		var component = new SampleComponent();

		Assert.Throws<ArgumentNullException>(
			() => component.CapitalizeThis(null)
			);
	}

As you can see, we no longer have an ExpectedException attribute on the entire test method. The only code that is checked for throwing an ArgumentNullException is the call to CapitalizeThis. If the constructor for SampleComponent were to throw an ArgumentNullException, the test runner would mark this test as failed.
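
As a bonus, Assert.Throws returns the caught exception, so you can make further assertions about it -- for example, checking which parameter was reported as null. The "value" parameter name below is hypothetical; match it to the actual signature of CapitalizeThis:

	[Test]
	public void Capitalize_Reports_Which_Argument_Was_Null()
	{
		var component = new SampleComponent();

		var ex = Assert.Throws<ArgumentNullException>(
			() => component.CapitalizeThis(null)
			);

		// "value" is a hypothetical parameter name for CapitalizeThis' argument
		Assert.AreEqual("value", ex.ParamName);
	}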

Conclusion
Unit testing can be tedious at times, but the benefits of a good set of unit tests far outweigh the tedium. I hope this article has given you a few ideas to make your tests easier to write, as well as some insight into how certain design choices can make your software more testable. If you have any tips or ideas that make writing unit tests easier, drop me a line.

About the Author

Patrick Steele is a senior .NET developer with Billhighway in Troy, Mich. A recognized expert on the Microsoft .NET Framework, he’s a former Microsoft MVP award winner and a presenter at conferences and user group meetings.
