Practical ASP.NET

A Modest Proposal on Validation in the Middle Tier

Peter looks at a strategic issue: When to do validation? The answer isn't "Everywhere" but it could have been.

One of the standard questions that I get when designing n-tier applications is "Where should I validate my data?" The short answer is: At the database. The only way to ensure data integrity is to check as much as you can at the database because you can't guarantee that everyone writing code that updates your data will get the validation right.

The longer answer is: wherever possible. Allowing bad data to move from the client through your business objects and into your database is time-consuming: It doesn't make sense to make your users wait through a round trip to the database to discover that they've entered a date in the wrong format.

More importantly, doing all validation in the database server is expensive. Your servers are a shared resource: the more people using your application, the greater the demand on your servers, which limits the maximum number of users you can support. Checking data in the Web browser means that you're using the users' CPU cycles. Effectively, every user who shows up not only adds to your application's burden but also adds to the application's computing power.

If this suggests that you should validate data in the Web page (using JavaScript), in the code-behind file before handing the data over to the business objects, in the business objects themselves, and in the database... well, it does. Sadly, it's also a great way to run up the labor (and, as a result, the cost) of building your application.

Actually, as a consultant who's paid by the hour, I don't feel too bad about that.

But, in addition to being costly, embedding validation code in several places is an accident waiting to happen. The more times you implement the same validation logic, the more likely it is that you'll get it wrong someplace. (We will now skip over the time I inadvertently paid everyone in my organization twice because I got a "greater than/less than" test backwards in a business object.)
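As a purely hypothetical illustration of how duplicated checks drift apart (none of these names or numbers come from my payroll story), here's the same pay-cap rule written twice in C#: once the way a UI layer might check it and once as the business object's re-implementation, with the comparison flipped in the second copy.

    using System;

    // The same "no payment over the cap" rule, written twice.
    // All names and the cap amount are invented for this sketch.
    static class DuplicatedValidationDemo
    {
        const decimal MaxApprovedPay = 10000m;

        // The check as the UI layer wrote it: anything at or under the cap is fine.
        static bool IsValidInUi(decimal payAmount) => payAmount <= MaxApprovedPay;

        // The "same" check re-implemented in the business object, with the
        // comparison reversed: it now approves exactly the payments the UI
        // layer would have rejected.
        static bool IsValidInBusinessObject(decimal payAmount) => payAmount >= MaxApprovedPay;

        static void Main()
        {
            foreach (var amount in new[] { 500m, 10000m, 25000m })
            {
                Console.WriteLine(
                    $"{amount}: UI check says {IsValidInUi(amount)}, " +
                    $"business object says {IsValidInBusinessObject(amount)}");
            }
        }
    }

One character out of place and the two tiers are enforcing different rules, and nothing in either tier, taken on its own, will tell you which copy is the right one.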

The primary danger is that validation code early in the process (e.g. client-side code) ends up being stricter than validation code later in the process (e.g. the server-side code). If the client is incorrectly rejecting entries that the server is perfectly willing to accept, then the system is failing. The reverse (client-side code that's less strict than server-side code) is not a problem: Yes, bad data will slip through the browser and make its way to the server, but it will be rejected at the server. Granted, your server is working too hard (there's that shared resource problem again), but your data retains its integrity.

A Modest Proposal

Interestingly enough, then, it appears that the optimal strategy is to validate everything you can at the client (to reduce demand at the shared resource) and then check it again at the database (to protect yourself from incompetent -- or malicious -- front-end developers). If your business object encounters a problem that could have been checked by the client, it should blow up; if there's an issue that can be checked at the database, the middle tier shouldn't attempt to detect it. Only those checks that can't be easily/transparently implemented at the client or the database should be implemented in middle-tier business objects.

And, by "validating everything you can at the client," I mean exactly that. Calling a Web service to validate data is not validating at the client, because you're transferring the validation back to the server.

Obviously, removing all validation from your business objects and limiting validation to the client and the database has its issues. Response times increase as users wait through a round trip to the database to find out that they've got a problem; your database engine works harder because requests that would otherwise have been denied by middle-tier validation now reach the database.

However, if you believe that your database is your last defense against bad data, you must do as much validation there as possible. If you believe that scalability is important, then you want to do as much validation as possible at the client. Duplicating in the middle tier validation that's already performed in those other tiers just creates an opportunity for error.

Logically, then, validation should be performed in the middle tier only if it can't be done in the other two tiers, or if its absence can be shown to create a bottleneck.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
