Practical .NET

The End of Integration Testing: If You've Passed All the Tests ...

Really, you only need to do two kinds of testing: unit testing (to make sure that your individual components work) and end-to-end testing (to make sure your application works). Anything else is just a waste of your time.

Let me be clear: What I mean by "integration testing" is the step that traditionally followed unit testing, where you bring together selected components of the application to see if they can actually work together. These days, provided your developers are doing unit testing, you should skip integration testing and go straight to end-to-end (E2E) testing.
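
To make the distinction concrete, here's a minimal sketch in xUnit. Everything in it is illustrative: the OrderParser class, the /orders endpoint, and the TEST_BASE_URL variable are stand-ins I've made up, not part of any real application.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Xunit;

// Trivial stand-in component so the unit test below is self-contained;
// in a real project this would be one of your application's classes.
public class Order { public int Quantity { get; set; } }

public class OrderParser
{
    public Order Parse(string raw) =>
        new Order { Quantity = int.Parse(raw.Split("quantity=")[1]) };
}

public class OrderTests
{
    // Unit test: exercises one component in isolation -- no network,
    // no database, no other teams' code.
    [Fact]
    public void Parser_ReadsQuantityFromRawOrder()
    {
        var order = new OrderParser().Parse("SKU-42,quantity=3");
        Assert.Equal(3, order.Quantity);
    }

    // E2E test: drives the deployed application the way a real client
    // would and asserts only on the externally visible result.
    [Fact]
    public async Task PostingAnOrder_ReturnsCreated()
    {
        var baseUrl = Environment.GetEnvironmentVariable("TEST_BASE_URL")
                      ?? "https://test.example.com";
        using var client = new HttpClient { BaseAddress = new Uri(baseUrl) };
        var response = await client.PostAsync("/orders",
            new StringContent("{\"sku\":\"SKU-42\",\"quantity\":3}",
                              Encoding.UTF8, "application/json"));
        Assert.Equal(HttpStatusCode.Created, response.StatusCode);
    }
}
```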

There are three lessons here:

  1. If your developers aren't doing unit testing then it really doesn't matter what you do after that
  2. Regardless of whether you're building the client, the service, or both, as soon as one transaction's path is complete, you should be doing E2E testing to prove that the application works
  3. Monitoring and logging must be built into the application from the start and incorporated into your testing framework (there's a sketch of what this looks like right after this list)
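
Here's a minimal sketch of what that third lesson can look like in code, using Microsoft.Extensions.Logging (the ShippingService class and its log messages are, of course, just stand-ins):

```csharp
using System;
using Microsoft.Extensions.Logging;

public class ShippingService
{
    private readonly ILogger<ShippingService> _log;

    public ShippingService(ILogger<ShippingService> log) => _log = log;

    public void Ship(string orderId)
    {
        // Structured logging from day one: OrderId is a named property,
        // so monitoring tools can filter and correlate on it later.
        _log.LogInformation("Shipping order {OrderId}", orderId);
        try
        {
            // ... the actual shipping work goes here ...
        }
        catch (Exception ex)
        {
            // The failure carries the same property, so a failed E2E test
            // (or a production incident) traces back to this transaction.
            _log.LogError(ex, "Shipping failed for order {OrderId}", orderId);
            throw;
        }
    }
}
```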

This requires a change to the teams' testing environment. It also means that teams have to stop thinking about just "their" tests. But I'll get to that.

Why Integration Testing Is Obsolete
In the bad old days, integration testing reflected the idea that it wasn't possible/affordable to re-test everything ... so why try? Typically, full regression testing was handled by releasing the application to production and waiting for error reports to come in. What you could do, what was called "integration testing," was thoroughly test some identified (and "risky") combinations of components.

But those restrictions aren't true any more: With the current crop of testing tools, it is possible to test everything ... and why wouldn't you?

There are objections to E2E testing, the chief being that if something doesn't work then you won't be able to identify the point of failure. This is a core issue in unit testing where we talk about isolating the "Component Under Test" (CUT). In unit testing, we isolate the CUT so that, if something goes wrong, we know the problem is because the CUT is broken and not, for example, because someone has screwed up the data in the test database.
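
If you want to see what that isolation looks like, here's a minimal sketch (all the names are illustrative): the CUT takes its dependency through an interface, and the test swaps in a stub so no database can screw anything up.

```csharp
using Xunit;

// The dependency the CUT needs, expressed as an interface
// so the test can swap in a stub.
public interface ICustomerRepository
{
    bool IsPreferred(int customerId);
}

// The Component Under Test.
public class DiscountCalculator
{
    private readonly ICustomerRepository _customers;
    public DiscountCalculator(ICustomerRepository customers) =>
        _customers = customers;

    public decimal DiscountFor(int customerId) =>
        _customers.IsPreferred(customerId) ? 0.10m : 0m;
}

// A hand-rolled stub: no database, no test data to get corrupted.
class AlwaysPreferredStub : ICustomerRepository
{
    public bool IsPreferred(int customerId) => true;
}

public class DiscountCalculatorTests
{
    [Fact]
    public void PreferredCustomers_GetTenPercent()
    {
        // If this fails, the only possible culprit is the CUT itself.
        var cut = new DiscountCalculator(new AlwaysPreferredStub());
        Assert.Equal(0.10m, cut.DiscountFor(customerId: 1));
    }
}
```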

It's true: E2E testing ignores that goal and ignores it in a very big way.

It would be petty of me to point out that integration testing, by definition, ignores that goal also.

But the objection misses the point: The reality is that things are going to go wrong in your production environment. When that happens, you'll need some combination of logging and monitoring tools to track down the problem. E2E testing not only tests your application but also identifies gaps in your monitoring and logging. Quite frankly, if you can't find your points of failure during E2E testing then you've revealed a new and more fatal problem -- you won't be able to find your problems in production, either.
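
As one example of building that findability in, here's a minimal sketch using .NET's System.Diagnostics activities, the basis for distributed tracing in .NET (the source name, operation, and tags are assumptions on my part):

```csharp
using System;
using System.Diagnostics;

public static class OrderTelemetry
{
    // One ActivitySource per service; a listener (an OpenTelemetry
    // exporter, for example) subscribes to it and ships the resulting
    // spans to whatever monitoring backend you use.
    private static readonly ActivitySource Source = new("MyCompany.Orders");

    public static void ProcessOrder(string orderId)
    {
        // Each step in the transaction becomes a span. If an E2E test
        // fails and this span doesn't show up in your tooling, you've
        // found a monitoring gap -- before production found it for you.
        using var activity = Source.StartActivity("ProcessOrder");
        activity?.SetTag("order.id", orderId);
        try
        {
            // ... call downstream services, write to queues, etc. ...
        }
        catch (Exception ex)
        {
            activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
            throw;
        }
    }
}
```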

Microservices: The Ignorance Problem
But the fundamental problem is that integration testing assumed that you knew all the paths through a transaction and could assemble some of them for testing purposes. In other words, integration testing assumed relatively simple applications in a well-defined world.

In an era of digital transformation, where everything in the business can potentially be turned into software (i.e., "software eats the world"), that assumption makes no sense. Teams building clients don't necessarily know what microservices are involved in processing their transactions, especially if a client writes to a queue or raises an event. Even if the team once knew what services were triggered by their client, those services could have changed radically since then.

On the microservice side, because microservices get requests from several sources (through HTTP, by reading from a queue, by responding to an event), they may not even have direct contact with their clients. In a microservice world, integration testing isn't just a waste of your time; it's actually impossible, because the team doesn't know who their clients are.

It can't be the microservice team's responsibility to determine that they can work with any particular client; it's the client team's responsibility to determine whether the service works for them. There's a reason the HTTP protocol has two-and-a-half times more client-side-error status codes than server-side-error status codes.
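
In practice, that client-side responsibility often takes the form of consumer-driven tests: the client team writes down its expectations of the service and runs them against the test environment. A minimal sketch (the endpoint, the SKUs, and the environment variable are all assumptions):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class InventoryServiceContractTests
{
    // The client team's expectations of the inventory service,
    // run against the shared test environment.
    private static readonly HttpClient Client = new()
    {
        BaseAddress = new Uri(
            Environment.GetEnvironmentVariable("INVENTORY_TEST_URL")
            ?? "https://inventory.test.example.com")
    };

    [Fact]
    public async Task KnownSku_ReturnsOk()
    {
        var response = await Client.GetAsync("/inventory/SKU-42");
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }

    [Fact]
    public async Task UnknownSku_ReturnsNotFound_NotServerError()
    {
        // A 404 means *we* asked for something that isn't there (a 4xx,
        // client-side error); a 5xx would mean the service is broken.
        var response = await Client.GetAsync("/inventory/NO-SUCH-SKU");
        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }
}
```

If the second test ever starts coming back as a 5xx, the service team owns the problem; if your client starts generating 4xxs, you do.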

Besides, there aren't a lot of parts to a microservice: There isn't much difference between integration testing and E2E testing (or, if there is, then what you have isn't actually a "micro" service).

The Required Changes
Because you need to start E2E testing early, developers need to be able to quickly access a test environment built from the production environment plus the developers' changes (immediate access is good; overnight delivery is probably the limit). Ideally, teams should have access to an E2E testing environment that consists of the production environment and just the team's latest changes (this is the closest E2E testing can get to the concept of isolating the CUT: isolating the team's changes).
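
One way to picture that environment, sketched as a hypothetical Docker Compose file (every service and image name here is made up): each service runs whatever production runs, except the one service the team has changed.

```yaml
# Hypothetical team-scoped E2E environment: every service is pinned to
# the image currently in production *except* the one this team changed,
# which is built from the team's latest source.
services:
  orders:          # the team's service -- built from today's changes
    build: ./src/OrdersService
  inventory:       # everything else -- exactly what production runs
    image: registry.example.com/inventory:prod-2024-06-01
  billing:
    image: registry.example.com/billing:prod-2024-06-01
```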

But developers have to stop looking just at the results of "their" tests. E2E testing necessarily crosses team boundaries: A change to a microservice (or a client) may cause a failure that seems, initially, to be completely unrelated to the change. Logging and monitoring tools that trace any error back to the start of the transaction, flagging all the clients and services involved, are essential to assigning responsibility for fixing the problem.
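
Here's a minimal sketch of the kind of correlation that makes that tracing possible: the client stamps each transaction with an ID, and every service logs it and forwards it. The X-Correlation-Id header is a common convention I'm assuming here, not a standard.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CorrelatedCalls
{
    // Stamp the transaction once, at its start, in the client...
    public static Task<HttpResponseMessage> StartTransactionAsync(
        HttpClient client, string url)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());
        return client.SendAsync(request);
    }

    // ...and, in every service, read it and forward it, so a failure
    // three services downstream still points back to the client call
    // that started the transaction.
    public static HttpRequestMessage ForwardCorrelation(
        HttpRequestMessage incoming, HttpRequestMessage outgoing)
    {
        if (incoming.Headers.TryGetValues("X-Correlation-Id", out var ids))
            outgoing.Headers.Add("X-Correlation-Id", ids);
        return outgoing;
    }
}
```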

On the other hand, you can take comfort in one truth: If you pass all the tests, then you've passed all the tests. And that means you're ready to release.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
