Ask Kathleen
Fine-Tune Unit Testing in Visual Studio
With a few tweaks, you can turn Visual Studio's basic unit-testing capabilities into a powerful and extensible tool for improving code quality.
Q: Our group is trying to create a unit test suite for an application nearing completion. I need to run tests with several different users logged in and can't figure out how to do this from a single test class. I also have a number of tests that are almost identical, differing only in their data inputs and outputs. Can I reuse code between tests? Finally, we've only created 50 tests and I'm already lost about what conditions are being tested. How will my group ever keep this straight after we've created hundreds of tests? I've tried to read about unit testing online, but so much of that is focused on system architecture, which we can't change. I'd like to give up on unit testing, but we have a management requirement for complete code coverage.
A: Don't give up on unit testing!
Unit testing is still a very young discipline, regardless of the number of years it's been around. Because of this, there's a bit of opinion in my answer. I care about pragmatic unit testing that is applicable to all projects. If you're in a position to structure your system design around testing, you gain because there's a close parallel between easily tested designs and easily evolved designs. I'm just not willing to say you can't test unless you did test-driven development and have an architecture that allows mocking.
Testing existing architectures and systems requires devising a test approach that matches your system. Some designs, particularly those that place business logic within the user interface, are nearly untestable from a unit-test perspective and must rely on manual testing. Assuming your logic already lives in a business layer, the next testing problems come from granularity and database access. Direct database access means test performance slows drastically, changes to the database break tests along with your actual code and, of course, test data must be maintained.
I assume you're encountering all of these issues. The simplest solution to maintaining test data is to recreate your database on each run and populate it as part of your testing. Yes, this slows down testing, but I'm willing to relegate testing to nightly builds if the testing happens every night and it's not practical to retroactively add a mocking layer. I don't claim that it's ideal, but it gets you beyond stressing over testability and lets you focus on writing tests. You must commit to keeping all tests in sync with the database whenever the database changes. Test sets can fall into complete disuse in a matter of weeks during development unless these changes are made in real time.
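As a rough sketch of what that recreation step can look like, assuming a rebuild script of your own (the connection string and script path here are placeholders, and note that SqlCommand can't execute scripts containing GO batch separators):

using System.Data.SqlClient;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerRepositoryTests
{
    [ClassInitialize]
    public static void RecreateDatabase(TestContext context)
    {
        // RebuildTestDb.sql is a placeholder: a script of your own that
        // drops, recreates and seeds the test database.
        string script = File.ReadAllText(@"Scripts\RebuildTestDb.sql");
        using (var connection = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=True"))
        {
            connection.Open();
            using (var command = new SqlCommand(script, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }

    // ... tests that run against the freshly built database.
}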
It's easy to fall into thinking tests are special and follow different rules than the rest of your code, but tests are still code and deserve the kind of attention you would give any other type of code. Object design, reuse, meaningful naming and good code quality are important as you move into doing more unit testing.
The first questions to ask in exploring objects are: "What will this object do?" and, "Why is it a separate object?" The test file Visual Studio generates gives the entirely incorrect impression that the purpose of a test class is to test a single class under test. A quick review of the attributes used in testing shows the real purpose: set up conditions for a group of tests, run the tests and tear the conditions down. Attributes mark methods as ClassInitialize, TestInitialize, ClassCleanup, TestCleanup and TestMethod. Of course you can't easily test multiple users from a single class; the generated code may imply this, but the class simply wasn't designed to work that way. In your case, the simplest approach is a different class for each log-in scenario.
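Here's a skeletal example of how those attributes divide the work for one log-in scenario; the method bodies are left as comments because the log-in mechanics depend on your application:

[TestClass]
public class AdminUserTests
{
    [ClassInitialize]
    public static void LogInAdminUser(TestContext context)
    {
        // Runs once, before any test in this class: establish the admin session.
    }

    [TestInitialize]
    public void ResetPerTestState()
    {
        // Runs before every test in the class.
    }

    [TestMethod]
    public void Admin_can_create_users()
    {
        // One tested condition per method.
    }

    [TestCleanup]
    public void CleanUpPerTestState()
    {
        // Runs after every test.
    }

    [ClassCleanup]
    public static void LogOutAdminUser()
    {
        // Runs once, after all tests in this class: tear the session down.
    }
}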
Reuse is important in testing. Watch for opportunities to refactor your code to minimize redundancy, and feel free to use inheritance and helper methods when appropriate. For example, different log-in scenarios can share the same base class, as sketched below.
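One way that base class might look; Security.LogIn and Security.LogOut are placeholders for your application's own calls (MSTest runs TestInitialize and TestCleanup methods declared on base classes):

public abstract class LoggedInUserTestBase
{
    // Each derived class supplies the credentials for its scenario.
    protected abstract string UserName { get; }

    [TestInitialize]
    public void LogInScenarioUser()
    {
        Security.LogIn(UserName);
    }

    [TestCleanup]
    public void LogOutScenarioUser()
    {
        Security.LogOut();
    }
}

[TestClass]
public class AdminScenarioTests : LoggedInUserTestBase
{
    protected override string UserName { get { return "admin"; } }

    // Tests for the admin scenario go here.
}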
Another technique for maximizing code reuse during testing is data-driven testing. This technique has nothing to do with the database your application may use when it executes. It lets you supply input and output values from an external source, consolidating tests that differ only in those values. You can use any ODBC source, including SQL Server. I prefer Excel because it allows testers and support staff to easily add more test conditions. You'll need to access the TestContext, write your tests to use the test data and ensure deployment:
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AdditionTests
{
    // Populated by the test framework; provides access to the current data row.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DataSource("System.Data.OleDb",
        @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=TestData.xls;" +
        @"Extended Properties='Excel 8.0;HDR=Yes;IMEX=1';",
        "Sheet1$", DataAccessMethod.Sequential)]
    [DeploymentItem("TestData.xls")]
    public void AddValues_matches_expected_result()
    {
        // Each run of this method receives one row from Sheet1.
        int x = Convert.ToInt32(TestContext.DataRow["x"]);
        int y = Convert.ToInt32(TestContext.DataRow["y"]);
        int result = Convert.ToInt32(TestContext.DataRow["Result"]);
        Assert.AreEqual(result, Class1.AddValues(x, y), "Addition is not correct");
    }
}
The TestContext provides access to the test environment, including the data source. The data source points to an Excel spreadsheet named TestData, which contains a sheet named "Sheet1." The dollar sign suffix is how the OLE DB provider refers to an Excel worksheet. This sheet contains column headings x, y and Result, which are read to set variables prior to calling Assert.
Deployment of data can be a headache when doing data-driven testing. Visual Studio tests generally don't run from the build output; the test files are copied into a separate test directory. This allows insertion of code to measure code coverage and ensures local resources such as text files aren't altered during the test. Because your data must be available to the tests, it must also be deployed to this interim directory. The DeploymentItem attribute gives the names of files that should be deployed as part of the test environment. When filenames include a relative path, as in this example, the path is relative to the built executable. To place the spreadsheet in the executable directory, I included it in the project and set its Copy to Output Directory property to Copy Always. If you allow testers or support staff to edit the spreadsheet to create additional tests, be sure to protect the data file, probably by placing it in source control.
Organizing a large number of tests and relating them back to project expectations or requirements can be challenging. An important emerging technique follows naming patterns used by tools such as RSpec. This approach documents what scenarios are tested by explicitly stating the conditions. For example, classes might be:
[TestClass()]
public class When_an_admin_is_logged_in
...
[TestClass()]
public class When_a_normal_user_is_logged_in
...
[TestClass()]
public class When_any_user_is_logged_in
...
Class initialization then logs in the appropriate user and runs a series of authorization tests. The tests reflect what you're actually testing:
[TestMethod()]
public void User_class_CanCreate_property_is_true()
You can imagine a report based solely on reflection that lets you check the conditions and features you're testing. Depending on your authorization rules, this test may belong in either the admin or the any_user test class. This organization documents how your system actually works. You may need to be creative in naming to describe how your system is designed. If authorization is based on configuration, for example, the test might be ...property_matches_configuration in the any_user class. In many cases, you can improve reuse by deriving similar classes from the same base class, running the actual code from a protected method in that base and performing only the asserts in the terminal classes, as sketched below.
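A minimal sketch of that pattern might look like this; the User class and its CanCreate property stand in for your own code under test:

public abstract class UserAuthorizationTestBase
{
    protected bool GetCanCreate()
    {
        // Run the real code under test once, in one place.
        var user = new User();   // Stand-in for your actual class under test.
        return user.CanCreate;
    }
}

[TestClass]
public class When_an_admin_is_logged_in : UserAuthorizationTestBase
{
    [TestMethod]
    public void User_class_CanCreate_property_is_true()
    {
        Assert.IsTrue(GetCanCreate());
    }
}

[TestClass]
public class When_a_normal_user_is_logged_in : UserAuthorizationTestBase
{
    [TestMethod]
    public void User_class_CanCreate_property_is_false()
    {
        Assert.IsFalse(GetCanCreate());
    }
}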
It's easy to wince at the long names, but you'll never call these methods; you'll only read them in the context of test output and documentation. Here the extra clarity offered by full sentence structure is of great benefit. It helps to create standards so that, as much as possible, test names created by different people are identical.
You mention testing conditions and say management is interested in code coverage. Code coverage is a terrible metric, no better than counting lines of code as a measure of productivity. Code coverage can never determine whether the system is well tested, because it can't tell whether boundary conditions (such as null, negative and zero values) and other likely failure points were exercised. The only thing coverage indicates is what code hasn't been tested at all. The first thing to do with code that isn't being covered is to ensure it's actually being used. If the code is used, determine the corresponding scenario and add more tests.
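For instance, a single straight-line test of the AddValues method from the earlier example yields full coverage of that method, yet tells you nothing about its edges. Boundary tests like these are what coverage can't demand:

[TestMethod]
public void AddValues_handles_zero()
{
    Assert.AreEqual(5, Class1.AddValues(5, 0));
}

[TestMethod]
public void AddValues_handles_negative_values()
{
    Assert.AreEqual(-1, Class1.AddValues(2, -3));
}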
About the Author
Kathleen is a consultant, author, trainer and speaker. She’s been a Microsoft MVP for 10 years and is an active member of the INETA Speaker’s Bureau where she receives high marks for her talks. She wrote "Code Generation in Microsoft .NET" (Apress) and often speaks at industry conferences and local user groups around the U.S. Kathleen is the founder and principal of GenDotNet and continues to research code generation and metadata as well as leveraging new technologies springing forth in .NET 3.5. Her passion is helping programmers be smarter in how they develop and consume the range of new technologies, but at the end of the day, she’s a coder writing applications just like you. Reach her at [email protected].