Lhotka's Labyrinth

Is There a Universal Architecture?

Many people find an architecture that works well for them and then try to apply it to every subsequent project. I'm guilty of this myself: I always look first to see whether my CSLA .NET architectural model can be made to fit any project I work on. Usually it works well, but sometimes it just doesn't fit.

Which leads to the question: Can there be a single, universal architecture that will solve all problems? The obvious answer is no, clearly not. But what if we limit the discussion to business applications, leaving out software projects like operating systems, compilers, device drivers, hardware control, and so forth? I'm not so sure that we can't come up with some basic architecture, or at least some common concepts that cover nearly all business software requirements.

This is relevant because of the incredible rate at which Microsoft throws technology at us. Each new technology must fit into your overall architecture to be useful. And if we're to help each other figure out how and where each technology fits, then we need some commonality across the architectures we all apply.

Business software virtually always has three primary components: an external interface (user interface or API), some business logic, and data storage. As a result, most architectures over the past 15 to 20 years have focused on arranging these three concepts in some standard manner. That's a reasonable starting point, but it's too high level to be useful on its own. Any useful architecture must dive deeper.

Today we have architectures that use various controller, presenter, and other models to describe how the external interface layer interacts with the business layer. Oddly, many of these architectures ignore the incredibly powerful data-binding capabilities Microsoft has built into Windows Forms, Web Forms, and Windows Presentation Foundation (WPF). Given the very tight integration of data binding in WPF, I think future architectures will be forced to consider ways to utilize data binding, and that's the right approach. Why write a lot of code to do something Microsoft already does for you?
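To make that concrete, here's a minimal Windows Forms sketch (the Customer class is my own hypothetical example, not part of CSLA .NET or any other framework) showing how data binding removes the usual copy-to-control and copy-back plumbing:

using System;
using System.ComponentModel;
using System.Windows.Forms;

// Hypothetical business object; any class that exposes properties and
// raises INotifyPropertyChanged can participate in data binding.
public class Customer : INotifyPropertyChanged
{
    private string _name = "";
    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            // Notifying the UI is essentially the only "glue" code we write.
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

public class CustomerForm : Form
{
    public CustomerForm()
    {
        Customer customer = new Customer();
        customer.Name = "Acme Corp";

        TextBox textBox = new TextBox();
        textBox.Dock = DockStyle.Top;
        Controls.Add(textBox);

        // One binding statement replaces the hand-written code that would
        // otherwise copy values into the control and back into the object.
        textBox.DataBindings.Add("Text", customer, "Name", false,
            DataSourceUpdateMode.OnPropertyChanged);
    }
}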

At the business layer, some architectures use a separation of data and logic, while others encapsulate data behind logic. These decisions cascade up to the external interface part of the framework, determining whether the interface can interact with data, business logic, or both. They also cascade down to the data storage part of the framework in a similar manner.

If your architecture is based around workflow or services, odds are good that you employ strong separation of data and logic. If you are pursuing a more object-oriented architecture, or are trying to create a highly interactive user experience, you are probably using a more encapsulated and abstract model. I think an ideal model uses some of each, because neither approach provides a comprehensive solution.
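As a rough illustration (these Customer types are my own hypothetical examples, not drawn from any particular framework), the two styles look something like this:

using System;

// Style 1: data separated from logic -- common in service and workflow designs.
// The data class is a passive container; the rules live in a separate class.
public class CustomerData
{
    public string Name;
    public decimal CreditLimit;
}

public static class CustomerRules
{
    public static bool IsValid(CustomerData data)
    {
        return !string.IsNullOrEmpty(data.Name) && data.CreditLimit >= 0;
    }
}

// Style 2: data encapsulated behind logic -- common in object-oriented designs.
// The object enforces its own rules, so callers never see invalid state.
public class Customer
{
    private string _name = "";
    private decimal _creditLimit;

    public string Name
    {
        get { return _name; }
        set
        {
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Name is required");
            _name = value;
        }
    }

    public decimal CreditLimit
    {
        get { return _creditLimit; }
        set
        {
            if (value < 0)
                throw new ArgumentException("Credit limit cannot be negative");
            _creditLimit = value;
        }
    }
}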

Data storage and data access are becoming more standardized. Today, your architecture will take one of three approaches: use the DataSet, use ADO.NET directly, or use an object-relational mapping (ORM) tool. With LINQ and the ADO.NET Entity Framework looming in the near future, it's a good bet that most people will start leaning toward the ORM approach. It's important to remember that both the ORM and DataSet approaches incur overhead. Direct use of ADO.NET is almost always faster, though it often requires more code, so you need to evaluate your approach in terms of both developer productivity and performance to decide what is best for you.
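For comparison, here's a minimal sketch of the direct ADO.NET approach, assuming a hypothetical Customers table and connection string. It's fast and explicit, but notice how much hand-written code even one simple query requires -- work a DataSet or ORM would otherwise do for you:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class CustomerDataAccess
{
    public static List<string> GetActiveCustomerNames(string connectionString)
    {
        List<string> names = new List<string>();
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(
                "SELECT Name FROM Customers WHERE IsActive = 1", connection);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Manual mapping from the data reader into objects or values.
                while (reader.Read())
                    names.Add(reader.GetString(0));
            }
        }
        return names;
    }
}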

In the end, it's hard to say if we'll ever have a universal architecture. But I do believe we can agree on a universal set of concepts and patterns. And that's important, because we all need to share knowledge and experience to help each other deal with the rising flood of technologies coming at us every day.

About the Author

Rockford Lhotka is the author of several books, including the Expert VB and C# 2005 Business Objects books and related CSLA .NET framework. He is a Microsoft Regional Director, MVP and INETA speaker. Rockford is the Principal Technology Evangelist for Magenic, a Microsoft Gold Certified Partner.
