Lhotka's Labyrinth

Is There a Universal Architecture?

Many people find an architecture that works well for them and then try to apply it to every subsequent project. I'm guilty of this myself. I always look first to see whether my CSLA .NET architectural model can be made to fit any project I work on. Usually it works well, but sometimes it just doesn't fit.

Which leads to the question: Can there be a single, universal architecture that will solve all problems? The obvious answer is no, clearly not. But what if we limit the discussion to business applications, leaving out software projects like operating systems, compilers, device drivers, hardware control, and so forth? I'm not so sure that we can't come up with some basic architecture, or at least some common concepts that cover nearly all business software requirements.

This is relevant because of the incredible rate at which Microsoft throws technology at us. Each new technology must fit into your overall architecture to be useful. And if we're to help each other figure out how and where each technology fits, then we need some commonality across the architectures we all apply.

Business software virtually always has three primary components: an external interface (user interface or API), some business logic, and data storage. As a result, most architectures over the past 15 to 20 years have focused on arranging these three concepts in some standard manner. While that's a good start, it's too high level to be useful on its own. Any useful architecture must dive deeper.

Today we have architectures that use various controller, presenter, and other models to describe how the external interface layer interacts with the business layer. Oddly, many of these architectures ignore the incredibly powerful data-binding capabilities Microsoft has built into Windows Forms, Web Forms, and Windows Presentation Foundation (WPF). Given the very tight integration of data binding in WPF, I think future architectures will be forced to consider ways to utilize data binding. I think that's the right approach. Why write a lot of code to do something Microsoft already does for you?
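
As a minimal sketch of that point, here is a Windows Forms example (the Customer class, its Name property, and the form itself are hypothetical, purely for illustration): a single DataBindings.Add call keeps the control and the object synchronized, replacing the copy-to/copy-from code you would otherwise write by hand.

    using System;
    using System.ComponentModel;
    using System.Windows.Forms;

    // A simple bindable object; change notification lets bound controls refresh themselves.
    public class Customer : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string _name = "Acme";
        public string Name
        {
            get { return _name; }
            set
            {
                _name = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }

    public class MainForm : Form
    {
        public MainForm()
        {
            Customer customer = new Customer();
            TextBox nameBox = new TextBox();
            nameBox.Dock = DockStyle.Top;
            Controls.Add(nameBox);

            // One binding statement replaces hand-written synchronization code.
            nameBox.DataBindings.Add("Text", customer, "Name",
                false, DataSourceUpdateMode.OnPropertyChanged);
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new MainForm());
        }
    }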

At the business layer, some architectures use a separation of data and logic, while others encapsulate data behind logic. These decisions cascade up to the external interface part of the framework, determining whether the interface can interact with data, business logic, or both. They also cascade down to the data storage part of the framework in a similar manner.

If your architecture is based around workflow or services, odds are good that you employ strong separation of data and logic. If you are pursuing a more object-oriented architecture, or are trying to create a highly interactive user experience, you are probably using a more encapsulated and abstract model. I think an ideal model uses some of each, because neither approach provides a comprehensive solution.
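
To make the contrast concrete, consider this rough sketch (the Invoice type, its fields, and the 10,000 approval limit are purely illustrative assumptions): the separated style keeps data in a passive structure and applies the rules from outside, while the encapsulated style puts the rules inside the object that owns the data.

    using System;

    // Separated style: a data-only structure, with the rule applied by external code.
    public class InvoiceData
    {
        public decimal Amount;
        public bool Approved;
    }

    public static class InvoiceRules
    {
        public static bool CanApprove(InvoiceData invoice)
        {
            return invoice.Amount <= 10000m;
        }
    }

    // Encapsulated style: the object owns its data and enforces its own rule.
    public class Invoice
    {
        private decimal _amount;
        private bool _approved;

        public Invoice(decimal amount)
        {
            _amount = amount;
        }

        public bool Approved
        {
            get { return _approved; }
        }

        public void Approve()
        {
            if (_amount > 10000m)
                throw new InvalidOperationException("Amount exceeds the approval limit.");
            _approved = true;
        }
    }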

Data storage and data access are becoming more standardized. Today, your architecture will take one of three approaches: use the DataSet, use ADO.NET directly, or use an object-relational mapping (ORM) tool. With LINQ and the ADO.NET Entity Framework looming in the near future, it's a good bet that most people will start leaning toward the ORM approach. It's important to remember that both the ORM and DataSet approaches incur overhead. Direct use of ADO.NET is almost always faster, though it often requires more code, so you need to evaluate your approach in terms of both developer productivity and performance to decide what is best for you.
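
In code, the trade-off looks roughly like this (the connection string, the Customers table, and its columns are hypothetical): the direct ADO.NET path is lean but hand-coded, while the DataSet path trades some overhead for convenience such as change tracking and easy binding; an ORM would generate or hide similar plumbing.

    using System.Data;
    using System.Data.SqlClient;

    public static class CustomerDataAccess
    {
        // Direct ADO.NET: minimal overhead, but more code to write and maintain.
        public static string GetCustomerName(string connectionString, int customerId)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                "SELECT Name FROM Customers WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", customerId);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }

        // DataSet: change tracking and easy binding, at the cost of extra overhead.
        public static DataSet GetCustomers(string connectionString)
        {
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT Id, Name FROM Customers", connectionString))
            {
                DataSet result = new DataSet();
                adapter.Fill(result, "Customers");
                return result;
            }
        }
    }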

In the end, it's hard to say if we'll ever have a universal architecture. But I do believe we can agree on a universal set of concepts and patterns. And that's important, because we all need to share knowledge and experience to help each other deal with the rising flood of technologies coming at us every day.

About the Author

Rockford Lhotka is the author of several books, including the Expert VB and C# 2005 Business Objects books and the related CSLA .NET framework. He is a Microsoft Regional Director, MVP, and INETA speaker. Rockford is the Principal Technology Evangelist for Magenic, a Microsoft Gold Certified Partner.
