Practical .NET

Domain-Driven Design: Where Does the Code Go?

Rather than arbitrarily deciding what code goes into an application and what goes into your business objects, you can lean on the rules that experienced developers have learned to follow to make those crucial decisions.

One of the issues developers constantly face is how to distribute code across their system's architecture: What code belongs in individual applications, what goes into the middle-tier business objects and where, exactly, does the data access code go? Because I'm the kind of guy who likes firm guidelines rather than "making it up as I go along" (it avoids arguments with my co-workers), I appreciate having a framework to guide those choices. Domain-driven design (DDD) provides exactly that framework.

Don't get your hopes up, though: If you're reading this column with the impression that you'll get a paradigm-changing approach for deciding how to distribute your code … well, you're going to be disappointed. One of the things I like about DDD is that it often codifies the practices of experienced programmers. I believe those practices reflect a kind of collective wisdom, expressing the best architectural approaches for solving problems in the real world. By codifying that wisdom, DDD gives you concrete guidance on implementing those practices.

So what you'll see in this column won't be "Wow, I never thought of that before" but, instead, "Yeah, that's right." This isn't about a new vision so much as it's about clarity.

Distributing Code
In DDD the code for any requirement that applies to a single application should be in the application. If you're writing up requirements in use cases, this means that the code for a requirement that appears in a single use case goes into the relevant application.

In an earlier column, I discussed how DDD recognizes that applications don't so much work with individual objects, but with groups of objects that DDD calls aggregates (for example, a SalesOrder aggregate will be made up of SalesOrderHeader, SalesOrderDetail and Product objects). Any requirement that applies every time an aggregate is used goes into that aggregate.

Internally, an aggregate consists of three kinds of objects: the object that the application references directly (the aggregate root), updateable objects (called entity objects in DDD) and read-only objects (called value objects in DDD). Code responsible for extending/managing the aggregate goes into the aggregate root. After that, code for requirements that apply every time an object is used goes into that object.
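As a sketch of that structure (the SalesOrder aggregate comes from the article; the specific members are my own illustration, not anything DDD mandates), the three kinds of objects might look like this:

```csharp
// Aggregate root: the only object applications reference directly;
// it's responsible for extending/managing the aggregate
public class SalesOrder
{
    private readonly List<SalesOrderDetail> _details = new();
    public IReadOnlyList<SalesOrderDetail> Details => _details;

    // Only the root adds objects to the aggregate
    public void AddDetail(Product product, int quantity) =>
        _details.Add(new SalesOrderDetail(product, quantity));
}

// Entity object: updateable, reachable only through the root
public class SalesOrderDetail
{
    public SalesOrderDetail(Product product, int quantity) =>
        (Product, Quantity) = (product, quantity);

    public Product Product { get; }
    public int Quantity { get; set; }   // updateable
}

// Value object: read-only, no identity or behavior of its own
public record Product(string Code, string Name, decimal Price);
```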

However, that description actually exaggerates how much code you'll have inside an aggregate. As I discussed in my column about value objects, in any aggregate, many of the objects will be value objects, which are primarily collections of read-only properties -- there's no real business logic in these objects at all.
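A C# record is a natural fit for a value object like that (this Address type is my own example, not one from the article):

```csharp
// Records compare by value -- exactly the DDD value-object semantic
var a = new Address("1 Main St", "Toronto", "M5V 1A1");
var b = new Address("1 Main St", "Toronto", "M5V 1A1");
Console.WriteLine(a == b);  // True: same property values, "same" value object

// Value object: read-only properties, value-based equality, no business logic
public record Address(string Street, string City, string PostalCode);
```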

Of course, that leaves a certain amount of code that's used in more than one application but isn't tied to every use of a particular object. In a domain-driven design, that code isn't put in the applications or the aggregates. Instead that code is put into a separate class called a "domain service."
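A domain service, then, is just a class holding that cross-application logic (the service and its discount policy here are my own illustration):

```csharp
// A domain service: logic used by more than one application but not
// tied to every use of a particular aggregate
public class PricingService
{
    // A discount policy shared by, say, the order-entry and
    // quoting applications (hypothetical rule for illustration)
    public decimal ApplyVolumeDiscount(decimal subtotal, int totalUnits) =>
        totalUnits >= 100 ? subtotal * 0.90m : subtotal;
}
```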

The methods in the domain service classes will be organized in an almost arbitrary fashion: There really isn't any universal organization principle for deciding what should go into or be left out of any particular domain service class library. Obviously, methods that share helper routines should go into the same project, as should classes/methods that are frequently used together. Primarily, whatever organization principle makes sense to the developers using the services is a good basis for dividing code across domain services.

Data-Related Code
A special case of a "domain-service-like" project is a repository -- the class (or classes) responsible for updating/retrieving objects (aggregates are "storage-agnostic" and have no idea how they're created or saved). To simplify the development of repositories, DDD offers two design guidelines.

The first guideline is to tie each aggregate to a single repository that's responsible for converting changes to that aggregate into updates to the database. There doesn't have to be a one-to-one relationship between aggregates and repositories, but dedicating a repository to a single aggregate does make the code inside the repository simpler (and makes it easier to make changes to the repository because those changes only affect a single aggregate). While there may be significant code shared between repositories for different aggregates, in the .NET Framework most of that code will be generated for you by Entity Framework.
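In code, "one repository per aggregate" might look like this interface (the member names are my own sketch; in practice, Entity Framework would generate much of the implementation behind it):

```csharp
// A repository dedicated to the SalesOrder aggregate: callers get and
// save whole aggregates and never see the persistence details
public interface ISalesOrderRepository
{
    SalesOrder GetById(int salesOrderId);  // rehydrates the whole aggregate
    void Add(SalesOrder order);
    void Save(SalesOrder order);           // persists every changed object in the aggregate
}
```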

The second guideline is to divide data retrieval code from data update code, following the Command Query Responsibility Segregation (CQRS) principle. CQRS recognizes that data retrieval is often messy and ugly and (in order to support real-world screens) usually doesn't return highly structured data. Creating a general-purpose solution that supports every retrieval scenario often results in a very complex solution.

On the other hand, update code is organized around the few entity objects in a particular aggregate and handles just three tasks: updates, inserts and deletes. In other words, the model for retrieving data doesn't look much like the model for updating data. It makes sense to me, therefore, to create unique solutions for retrieving data while building reusable solutions for updates -- even to the point of retrieving data from a denormalized database while updating a highly normalized one. For me, CQRS is a solution that applies in many applications. Martin Fowler disagrees, seeing CQRS as very much a niche solution that applies only in a few cases (primarily because he feels that, in the typical case, the data retrieval model is very much like the update model).
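Sketched as interfaces (the names and the DTO are my own illustration), the separation looks like this: the command side mirrors the aggregate, while the query side returns whatever shape the screen needs:

```csharp
// Command side: organized around the aggregate and limited to the
// three update tasks
public interface ISalesOrderCommands
{
    void Insert(SalesOrder order);
    void Update(SalesOrder order);
    void Delete(int salesOrderId);
}

// Query side: free to return flattened, screen-shaped results that
// don't look like the update model at all
public interface ISalesOrderQueries
{
    IEnumerable<OrderSummary> GetOpenOrders(int customerId);
}

// A read-only DTO shaped for one screen, not for updating
public record OrderSummary(int OrderId, string CustomerName, decimal Total);
```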

There's another reason for using a CQRS model -- it allows you to defer some of those update commands until later rather than doing them immediately, a DDD tactic that further simplifies your code. But that's a topic that I'll discuss at another time.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at
