Practical .NET

Organize Your Tests to Reduce Overhead

In test-driven development, you have to decide how you'll divide your test methods between your test classes. The best solution is the one that requires the least effort on your part and implements the Single Responsibility Principle for tests.

The key issue in deciding what tests go in which test Class isn't a technical issue: It's a productivity issue. Obviously, as you make changes to your code, you want to make sure that you can pass the test you wrote that proves that this latest change works (at least, as far as your definition of "works" goes). You'll run that test every time you think you're done with the change.

Once you do pass that test, though, you want to make sure your latest change hasn't introduced a bug that causes other tests to fail. That means, ideally, you want to run the relevant tests (ones that check for something you might have broken with your latest changes) and no irrelevant ones (ones that check for something you couldn't possibly have broken).

On the other hand, you don't want to obsess about this, either. Test-driven development (TDD) shouldn't be adding much (or any) overhead to your development process. The goal here isn't to be "ideal" but to be "good enough." Avoiding overhead means setting up your tests so that, if you do make a mistake in your test coverage, your error is on the side of running some irrelevant tests rather than skipping some relevant ones. After all, the typical runtime for a test is measured in milliseconds, so the cost of running an irrelevant test is small. The cost of missing a relevant test is people yelling at you later.

As a result, the simplest solution for organizing your tests is to put all the tests for a Controller or a Class in a single test Class. Not every test in that test Class will be relevant for every change to the target Class, but the cost of running those irrelevant tests should be small. In fact, this approach can surface a code smell: If your test Class has so many tests that those tests take a long time to run, then your class is probably violating the Single Responsibility Principle and could benefit from being refactored into a couple of smaller, better-focused classes.
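For example, a test Class that mirrors a single target Class might look like the sketch below. The Customer class and its rules are hypothetical, and the assertions are hand-rolled here so the sketch compiles without a test framework; in a real project each test would be a [Fact] (xUnit) or [TestMethod] (MSTest) method that Test Explorer discovers for you:

```csharp
using System;

// Hypothetical target class: one responsibility, one test class.
public class Customer
{
    public string Name { get; }

    public Customer(string name) =>
        Name = string.IsNullOrWhiteSpace(name)
            ? throw new ArgumentException("Name is required", nameof(name))
            : name;
}

// All the tests for Customer live in this one class. Some will be
// irrelevant to any given change, but they're cheap to run.
public class CustomerTests
{
    public void Constructor_SetsName()
    {
        var customer = new Customer("Acme");
        if (customer.Name != "Acme") throw new Exception("Name was not set");
    }

    public void Constructor_RejectsBlankName()
    {
        try
        {
            var _ = new Customer("   ");
            throw new Exception("Expected ArgumentException");
        }
        catch (ArgumentException) { /* expected */ }
    }

    // Convenience runner; a test framework would do this for you.
    public static void RunAll()
    {
        var tests = new CustomerTests();
        tests.Constructor_SetsName();
        tests.Constructor_RejectsBlankName();
    }
}
```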

Unit Tests vs. Integration Tests
Of course, a change you make in one class could result in some other class's test (or tests) failing. However, at the developer level, TDD is supposed to be all about unit testing. In fact, your unit tests should be isolating your target class from the classes it integrates with so that, when your test fails, you know it's the target class's fault (see my article on using Moq).
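That isolation usually comes from coding the target class against an interface and handing it a test double. Here's a minimal sketch using a hand-rolled stub for a hypothetical IInventory dependency (the names are mine, not from any real project); with Moq, the stub class disappears because Moq generates the equivalent object for you:

```csharp
using System;

// The dependency is an interface so the test can substitute a double.
public interface IInventory
{
    int QuantityOnHand(string sku);
}

// Hypothetical target class under test.
public class OrderProcessor
{
    private readonly IInventory inventory;

    public OrderProcessor(IInventory inventory) => this.inventory = inventory;

    public bool CanFill(string sku, int quantity) =>
        inventory.QuantityOnHand(sku) >= quantity;
}

// Hand-rolled stub standing in for the real inventory class. With Moq
// you'd write:
//   var stub = new Mock<IInventory>();
//   stub.Setup(i => i.QuantityOnHand("A1")).Returns(5);
// and pass stub.Object to OrderProcessor instead.
public class StubInventory : IInventory
{
    public int QuantityOnHand(string sku) => 5;
}

public class OrderProcessorTests
{
    public static void RunAll()
    {
        // If these fail, it's OrderProcessor's fault -- the stub can't be wrong.
        var target = new OrderProcessor(new StubInventory());
        if (!target.CanFill("A1", 3)) throw new Exception("Expected order to be fillable");
        if (target.CanFill("A1", 9)) throw new Exception("Expected order not to be fillable");
    }
}
```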

Don't get me wrong: I'm not saying that integration testing should be ignored; I am saying it's a separate problem that should be handled separately. I recommend setting up a test Project to hold each set of integration tests (a CustomerOrders project that tests integration between your Customer and Order classes, for example).
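In that separate project, the integration tests exercise the real classes together, with no test doubles. A minimal sketch, using hypothetical Customer and Order classes of my own invention:

```csharp
using System;
using System.Collections.Generic;

// Real classes, no doubles: the point is to test them working together.
public class Order
{
    public decimal Total { get; }

    public Order(decimal total) => Total = total;
}

public class Customer
{
    private readonly List<Order> orders = new List<Order>();

    public void Place(Order order) => orders.Add(order);

    public decimal OutstandingBalance()
    {
        decimal sum = 0;
        foreach (var order in orders) sum += order.Total;
        return sum;
    }
}

// This class lives in the CustomerOrders test project, not alongside
// either class's unit tests.
public class CustomerOrderIntegrationTests
{
    public static void RunAll()
    {
        var customer = new Customer();
        customer.Place(new Order(100m));
        customer.Place(new Order(50m));
        if (customer.OutstandingBalance() != 150m)
            throw new Exception("Expected an outstanding balance of 150");
    }
}
```

A failure here could be either class's fault, which is exactly why these tests are kept apart from the fast, isolated unit tests.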

You also don't want to defer running those integration tests for too long. You want to ensure that, when you do get around to running your integration tests, your latest changes are still fresh in your mind -- that lets you diagnose the problem quickly. With classes under your control, you can add your integration test Projects to your solution. That way, every once in a while (when you're on the phone or just want to take a break), you can go over to Test Explorer and select Run All. This will reassure you that a recent change you've made hasn't created an integration problem. When a red light does come up, your changes should be fresh enough in your mind that you'll almost instantly go "Right! Duh! Of course that would fail."

Changes that involve classes that aren't under your control will have to be deferred to a joint-integration testing plan that's outside your direct control. Those tests may only be run once a day or, perhaps, every time you check in your class. However, don't let those tests be deferred until close to a release date (unless you release every day). With that sort of schedule, every integration test that fails will be a mystery because you'll have forgotten what your changes were.

Putting it all together, I'm advocating for a Single Responsibility Principle for test classes: Each test class should test one thing and do it well. But, again, if you find yourself spending a lot of time organizing/managing your unit tests, I think you're missing the point.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at

