Practical .NET

Strategies for Isolating Applications

If you're not careful, you'll replace your huge, lumbering, unmaintainable enterprise applications with a web of applications that can't be changed without blowing each other up. But if you apply the same tools you use inside your applications to your application architecture, you can avoid that fate.

Building "enterprise-wide applications" is tough. From a design point of view, creating an application that genuinely supports the different needs of different parts of an organization often leads to "death-march projects." Maintaining those enterprise applications is harder yet, because those different parts of the organization all change at different rates. Often these enterprise applications are caught in what I call the CRAP cycle (Create, Repair, Abandon, Replace): they become too complicated to extend or maintain and have to be completely replaced.

But the reality is that meeting the real needs of users, customers/clients and business partners often crosses department boundaries. Rather than create a single application that tries to integrate diverse parts of the organization, a better approach is to create domain-specific applications for each part of the organization. These simpler, more-focused applications are easier to design, build, and maintain (and a better fit for Agile processes). To handle those needs that cross departmental boundaries, these domain-specific applications must communicate with each other, either through RESTful services (using the ASP.NET Web API) or WSDL/SOAP-based Web Services (using Windows Communication Foundation).

However, if you're not careful, these solutions won't get you out of the CRAP cycle. It turns out that it's easy to create a web of service relationships so tight that when you make a change to your domain-specific application, you suddenly start getting calls from people you've never heard of before, complaining that their domain-specific application has stopped working.

Politics of Isolating Applications
To truly break out of the CRAP cycle you need some level of loose coupling between these applications. Loose coupling enables you to make changes to your application without having to consult every other application team that might, potentially, be affected. And, of course, you'd also like to have your application be protected from random changes made in some other application.

The first step in implementing loosely coupled relationships is political: Where two applications talk to each other, one needs to be declared as the upstream application and one as the downstream application. This distinction is made to settle disputes: Changes to the upstream domain must be accepted by the downstream domain. The team responsible for the upstream domain has the biggest responsibility -- if they force too many changes on downstream applications, they'll find their releases held up waiting for downstream teams to adapt their applications. This model encourages upstream teams to isolate their applications to avoid having to wait on downstream teams.

Few applications will be purely upstream or downstream: Most applications are going to be upstream of some applications and downstream of others -- striking an "appropriate" balance among these demands will be one critical success factor for any application team.

There are two alternative strategies to the upstream/downstream distinction. Two applications might need to be allowed to force changes on each other (a partnership relationship); alternatively, two applications might share objects or related code (a shared kernel relationship). While these relationships are probably unavoidable among some applications, they're less desirable than a recognized upstream/downstream relationship because changes in one of these applications inevitably ripple through to the other application.

Isolating Applications
In an upstream/downstream relationship, changes to the upstream application can be hidden from the downstream application through effective API design in the upstream application (which also frees the upstream team from having to wait for the downstream team to adapt). Effectively the upstream and downstream applications cooperate to create an "anti-corruption layer" that isolates the two applications from each other.

When an upstream application exposes an API for a downstream application, it's tempting to design that API on the basis of "here's what the upstream application can do." However, on that basis, changes to the upstream application will necessarily change the API, forcing the downstream application to adapt. Instead, the upstream application should create APIs that expose only what the downstream client needs, through a set of adapters each designed to support a small number of downstream applications. With this design, changes to the upstream application require changes only to the adapters' internals, ensuring that, from the downstream application's point of view, the API never changes.

The APIs defined by these adapters focus on creating Data Transfer Objects (DTOs). An application that passes one of its own internal objects to a related domain is asking for trouble: Should the upstream application need to change one of those internal objects to meet its own needs, that change will ripple through to the downstream application. However, a DTO that's constructed in an adapter purely to move data to the downstream application isolates the downstream application from changes in the upstream application's internals. Again, changes to the upstream application's internals are handled in the adapter by changing the way the DTO is built -- the DTO itself doesn't change (this is referred to as the hexagonal pattern in DDD).
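The adapter-plus-DTO arrangement described above can be sketched as follows. All of the names here (`OrderRecord`, `OrderDto`, `OrderAdapter`) are hypothetical illustrations, not anything prescribed by the article:

```typescript
// The upstream application's internal object -- free to change as the
// upstream team's needs change.
interface OrderRecord {
  id: string;
  customerAccount: string; // suppose this was renamed from "customer" in a later release
  lineItems: { sku: string; qty: number }[];
}

// The DTO exposed to the downstream application. This shape never changes;
// only the mapping below does.
interface OrderDto {
  orderId: string;
  customerId: string;
  itemCount: number;
}

// The adapter is the only code that knows both shapes. When the internal
// object changes (such as the customerAccount rename), only this mapping
// is rewritten -- the DTO the downstream application sees stays the same.
class OrderAdapter {
  toDto(order: OrderRecord): OrderDto {
    return {
      orderId: order.id,
      customerId: order.customerAccount,
      itemCount: order.lineItems.length,
    };
  }
}
```

The design choice here is that neither application's internal types ever cross the boundary: the upstream internals stop at the adapter, and the downstream application codes against `OrderDto` alone.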

Upstream applications should also resist the desire to pass a lot of information to the downstream application, at least when both applications are internal to the organization. Instead, upstream application DTOs should consist primarily of key values that the downstream application will use to create its own internal objects. Similarly, downstream applications should make themselves dependent on as little of the DTO they receive from upstream applications as possible, to further isolate themselves from changes in those applications.
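A keys-only DTO might look like the following sketch (the names are hypothetical). The downstream side deliberately reads only the one key it needs and resolves everything else from its own data:

```typescript
// Hypothetical keys-only DTO: the upstream application sends just
// identifiers, not full objects.
interface ShipmentKeysDto {
  orderId: string;
  warehouseId: string;
}

// Downstream side: depend on as little of the DTO as possible. This code
// uses only orderId (ignoring warehouseId entirely) and builds its own
// internal object from the downstream application's own order lookup.
function toDownstreamShipment(
  dto: ShipmentKeysDto,
  lookupOrder: (orderId: string) => { priority: number }
): { orderId: string; priority: number } {
  const order = lookupOrder(dto.orderId); // downstream's own view of the order
  return { orderId: dto.orderId, priority: order.priority };
}
```

Because the downstream application touches only `orderId`, the upstream team can add, rename, or drop other DTO fields without breaking it.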

More sophisticated strategies for isolating applications are also possible. Downstream applications can send requests to upstream applications specifying what they need; upstream applications can dynamically configure the DTOs they return (probably as XML documents) based on those requests. Again, however, the focus is on the upstream applications sending a limited number of values that the downstream applications can use to create the objects they need.
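The request-driven approach can be sketched like this. The field names are hypothetical, and while the article suggests XML documents, the idea is the same for any serialization format:

```typescript
// The full set of values the upstream adapter could supply.
type CustomerFields = { customerId: string; region: string; creditLimit: number };

// Upstream adapter: given the downstream application's request -- a list of
// field names -- build a DTO containing only those fields. Anything not
// requested never crosses the boundary.
function buildDto(
  source: CustomerFields,
  requested: (keyof CustomerFields)[]
): Partial<CustomerFields> {
  const dto: Partial<CustomerFields> = {};
  for (const field of requested) {
    (dto as Record<string, unknown>)[field] = source[field];
  }
  return dto;
}
```

A downstream application that asks for `["customerId", "region"]` gets exactly those two values back; the upstream team can change how `creditLimit` is stored or computed without that client ever noticing.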

Developers have long recognized that the best strategy for creating maintainable applications is assembling those applications out of dedicated objects, programming against interfaces, and creating loosely coupled components. It turns out that applying those practices to the organization's enterprise architecture can provide a way out of the CRAP cycle.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
