Practical .NET

Domain-Driven Development: Where Does the Code Go?

Rather than arbitrarily deciding what code goes into an application and what goes into your business objects, you can lean on the rules that experienced developers have learned to follow to make those crucial decisions.

One of the issues developers constantly wrestle with is how to distribute code across their system's architecture: What code belongs in individual applications, what goes into the middle-tier business objects and where, exactly, does the data access code go? Because I'm the kind of guy who likes firm guidelines rather than "making it up as I go along" (it avoids arguments with my co-workers), I appreciate having a framework to guide those decisions. Domain-driven design (DDD) provides exactly that kind of framework.

Don't get your hopes up, though: If you're reading this column with the impression that you'll get a paradigm-changing approach for deciding how to distribute your code … well, you're going to be disappointed. One of the things I like about DDD is that it codifies what experienced programmers already do. I believe those habits reflect a kind of collective wisdom -- the best architectural practices for solving real-world problems -- and by codifying them, DDD provides guidance in applying them consistently.

So what you'll see in this column won't be "Wow, I never thought of that before" but, instead, "Yeah, that's right." This isn't about a new vision so much as it's about clarity.

Distributing Code
In DDD the code for any requirement that applies to a single application should be in the application. If you're writing up requirements in use cases, this means that the code for a requirement that appears in a single use case goes into the relevant application.
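
For example (and this is only a sketch -- the class and property names here are hypothetical, not from any real project), a requirement that customers entered through a call-center application must include a phone number appears in just that one use case, so it's enforced in that application's code rather than in the shared Customer class:

using System;

public class Customer                     // shared domain object, used by several applications
{
    public string Name { get; set; }
    public string Phone { get; set; }
}

public class CallCenterCustomerForm       // application-level code
{
    public void Save(Customer customer)
    {
        // This rule exists only in the call-center application, so it lives here.
        if (string.IsNullOrWhiteSpace(customer.Phone))
            throw new InvalidOperationException(
                "The call-center application requires a phone number.");
        // ...hand the customer off to the domain layer for processing...
    }
}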

In an earlier column, I discussed how DDD recognizes that applications don't so much work with individual objects as with groups of objects that DDD calls aggregates (for example, a SalesOrder aggregate will be made up of SalesOrderHeader, SalesOrderDetail and Product objects). Any requirement that applies every time an aggregate is used goes into that aggregate.

Internally, an aggregate consists of three kinds of objects: the object that the application references directly (the aggregate root), updateable objects (called entity objects in DDD) and read-only objects (called value objects in DDD). Code responsible for extending/managing the aggregate goes into the aggregate root. After that, code for requirements that apply every time an object is used goes into that object.
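
To make those three roles concrete, here's a minimal sketch of the SalesOrder aggregate (the specific members -- AddDetail, ChangeQuantity and so on -- are my own assumptions about what such an aggregate might look like, not a prescribed design):

using System;
using System.Collections.Generic;

public class SalesOrder                   // aggregate root: the object applications reference
{
    private readonly List<SalesOrderDetail> details = new List<SalesOrderDetail>();
    public IReadOnlyList<SalesOrderDetail> Details => details;

    // Code that extends/manages the aggregate goes in the root.
    public SalesOrderDetail AddDetail(Product product, int quantity)
    {
        var detail = new SalesOrderDetail(product, quantity);
        details.Add(detail);
        return detail;
    }
}

public class SalesOrderDetail             // entity object: updateable
{
    public Product Product { get; }
    public int Quantity { get; private set; }

    public SalesOrderDetail(Product product, int quantity)
    {
        Product = product;
        ChangeQuantity(quantity);
    }

    // A rule that applies every time a detail is used goes in the detail itself.
    public void ChangeQuantity(int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        Quantity = quantity;
    }
}

public class Product                      // value object: read-only properties
{
    public string Code { get; }
    public decimal ListPrice { get; }

    public Product(string code, decimal listPrice)
    {
        Code = code;
        ListPrice = listPrice;
    }
}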

However, that description actually exaggerates how much code you'll have inside an aggregate. As I discussed in my column about value objects, in any aggregate, many of the objects will be value objects, which are primarily collections of read-only properties -- there's no real business logic in these objects at all.
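
If you're working in C# 9 or later, a record is a natural fit for these objects -- you get read-only properties and value-based equality with essentially no code (the Address name and its properties here are hypothetical):

public record Address(string Street, string City, string Region, string PostalCode);

Two Address instances holding the same values compare as equal, which is exactly the behavior you want from an object that's really just a bundle of read-only properties.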

Of course, that leaves a certain amount of code that's used in more than one application but isn't tied to every use of a particular object. In a domain-driven design, that code isn't put in the applications or the aggregates. Instead, it goes into a separate class called a "domain service."
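
As a sketch (with hypothetical names and placeholder rates), a shipping-cost calculation that several applications need, but that isn't required every time a SalesOrder or Customer is used, would end up in a domain service like this:

public class ShippingCostService
{
    // A cross-application business rule that doesn't belong to any one aggregate.
    // The rates here are placeholders, not real business logic.
    public decimal CalculateShippingCost(decimal orderWeightKg, bool isDomestic)
    {
        decimal baseRate = isDomestic ? 5m : 20m;
        return baseRate + (orderWeightKg * 0.50m);
    }
}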

The methods in the domain service classes will be organized in an almost arbitrary fashion: There really isn't a universal principle for deciding what should go into (or be left out of) any particular domain service class library. Obviously, methods that share helper routines should go into the same project, as should classes and methods that are frequently used together. Ultimately, whatever organizing principle makes sense to the developers using the services is a good basis for dividing code across domain services.

Data-Related Code
A special case of a "domain-service-like" project is a repository -- the class (or classes) responsible for updating/retrieving objects (aggregates are "storage agnostic" and have no idea how they're created or saved). To simplify the development of repositories, DDD offers two design guidelines.

The first guideline is to tie each aggregate to a single repository that's responsible for converting changes to that aggregate into updates to the database. There doesn't have to be a one-to-one relationship between aggregates and repositories, but dedicating a repository to a single aggregate keeps the code inside the repository simpler (and makes the repository easier to change, because any change affects only a single aggregate). While there may be significant code shared between repositories for different aggregates, in the .NET Framework most of that code will be generated for you by Entity Framework.
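
Here's what a repository dedicated to the SalesOrder aggregate might look like -- a minimal sketch assuming Entity Framework Core and a stripped-down, persistence-friendly version of the SalesOrder classes (the OrderContext and member names are hypothetical, and provider configuration is omitted):

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class SalesOrderRepository
{
    private readonly OrderContext context;

    public SalesOrderRepository(OrderContext context) => this.context = context;

    // Retrieve the whole aggregate: the root plus its child entities.
    public SalesOrder GetById(int id) =>
        context.SalesOrders
               .Include(o => o.Details)
               .FirstOrDefault(o => o.Id == id);

    public void Add(SalesOrder order) => context.SalesOrders.Add(order);

    // Entity Framework's change tracking turns whatever changed in the
    // aggregate into the corresponding inserts, updates and deletes.
    public void Save() => context.SaveChanges();
}

// Stripped-down persistence model for this sketch.
public class OrderContext : DbContext
{
    public DbSet<SalesOrder> SalesOrders { get; set; }
}

public class SalesOrder
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public List<SalesOrderDetail> Details { get; set; } = new List<SalesOrderDetail>();
}

public class SalesOrderDetail
{
    public int Id { get; set; }
    public int Quantity { get; set; }
}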

The second guideline is to divide data retrieval code from data update code: the Command Query Responsibility Segregation (CQRS) principle. CQRS recognizes that data retrieval is often messy, ugly and (in order to support real-world screens) usually doesn't return highly structured data. Creating general-purpose solutions that support every retrieval scenario often results in very complex code.

On the other hand, update code is organized around the few entity objects in a particular aggregate and handles just three tasks: Updates, inserts and deletes. In other words, the model for retrieving data doesn't look much like the model for updating data. It makes sense to me, therefore, to generate unique solutions for retrieving data while creating reusable solutions for updates -- even to the point of retrieving data from a non-normalized database while updating a highly normalized one. For me, CQRS is a solution that applies in many applications. Martin Fowler, on the other hand, disagrees, seeing CQRS as very much a niche solution that applies only in a few cases (primarily because he feels that, in the typical case, the data retrieval model looks very much like the update model).
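
To show what that split can look like in practice, here's a sketch that reuses the hypothetical OrderContext and SalesOrderRepository from the repository example above: the query side projects straight into a flat DTO shaped for one screen, while the command side loads the aggregate, changes it and saves it.

using System.Collections.Generic;
using System.Linq;

// Read side: screen-shaped, denormalized, no aggregate involved.
public class OrderSummaryDto
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public int LineCount { get; set; }
}

public class OrderSummaryQuery
{
    private readonly OrderContext context;
    public OrderSummaryQuery(OrderContext context) => this.context = context;

    public List<OrderSummaryDto> ForCustomer(string customerName) =>
        context.SalesOrders
               .Where(o => o.CustomerName == customerName)
               .Select(o => new OrderSummaryDto
               {
                   OrderId = o.Id,
                   CustomerName = o.CustomerName,
                   LineCount = o.Details.Count
               })
               .ToList();
}

// Write side: work through the aggregate and its repository.
public class ChangeDetailQuantityHandler
{
    private readonly SalesOrderRepository repository;
    public ChangeDetailQuantityHandler(SalesOrderRepository repository) =>
        this.repository = repository;

    public void Handle(int orderId, int detailId, int newQuantity)
    {
        var order = repository.GetById(orderId);
        order.Details.Single(d => d.Id == detailId).Quantity = newQuantity;
        repository.Save();
    }
}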

There's another reason for using a CQRS model -- it allows you to defer some of those update commands until later rather than doing them immediately, a DDD tactic that further simplifies your code. But that's a topic that I'll discuss at another time.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
