Practical ASP.NET

A Modest Proposal on Validation in the Middle Tier

Peter looks at a strategic issue: When to do validation? The answer isn't "Everywhere" but it could have been.

One of the standard questions that I get when designing n-tier applications is "Where should I validate my data?" The short answer is: at the database. The only way to ensure data integrity is to check as much as you can at the database, because you can't guarantee that everyone writing code that updates your data will get the validation right.
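For example, here's a minimal sketch of what checking at the database looks like -- the table, column, and connection string are all hypothetical -- applying a CHECK constraint through ADO.NET. Once the constraint is in place, every application that touches the table has to pass the check:

    // A minimal sketch: the Orders table, Discount column, and connection
    // string are hypothetical. Once the constraint exists, every INSERT or
    // UPDATE -- from any application -- has to pass the check.
    using System.Data.SqlClient;

    class ConstraintSetup
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=Sales;Integrated Security=true"))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    @"ALTER TABLE Orders
                        ADD CONSTRAINT CK_Orders_Discount
                        CHECK (Discount >= 0 AND Discount <= 0.5)", conn);
                cmd.ExecuteNonQuery();
            }
        }
    }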

The longer answer is: wherever possible. Allowing bad data to move from the client through your business objects and into your database is time-consuming: it doesn't make sense to make your users wait through a round trip to the database to discover that they've entered a date in the wrong format.
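To make that concrete, here's a minimal sketch of catching that badly formatted date in the browser. The control name is hypothetical, and in a real page you'd more likely declare the validator in your markup; I'm building it in code-behind just to show the moving parts. With EnableClientScript turned on, ASP.NET emits the JavaScript for you and the check runs without ever leaving the user's machine:

    // A minimal sketch, assuming a TextBox named OrderDateTextBox already
    // exists on the page. EnableClientScript = true means the date check
    // runs in the browser -- no round trip to the server.
    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class OrderEntry : Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            var dateCheck = new CompareValidator
            {
                ControlToValidate = "OrderDateTextBox",
                Type = ValidationDataType.Date,
                Operator = ValidationCompareOperator.DataTypeCheck,
                ErrorMessage = "Please enter a valid date.",
                EnableClientScript = true
            };
            this.Form.Controls.Add(dateCheck);
        }
    }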

More importantly, doing all validation in the database server is expensive. Your servers are a shared resource: the more people using your application, the greater the demand on your servers, limiting the maximum number of users you can support. Checking data in the Web browser means that you're using the users' CPU cycles. Effectively, every user who shows up not only adds to your application's burden but also adds to the application's computing power.

If this suggests that you should validate data in the Web page (using JavaScript), in the code-behind file before handing the data over to the business objects, in the business objects themselves, and in the database... well, it does. Sadly, it's also a great way to run up the labor (and, as a result, the cost) of building your application.

Actually, as a consultant who's paid by the hour, I don't feel too bad about that.

But, in addition to being costly, embedding validation code in several places is an accident waiting to happen. The more times you implement the same validation logic, the more likely it is that you'll get it wrong someplace. (We will now skip over the time I inadvertently paid everyone in my organization twice because I got a "greater than/less than" test backwards in a business object.)

The primary danger is that validation code early in the process (e.g., client-side code) is stricter than validation code later in the process (e.g., the server-side code). If the client is incorrectly rejecting entries that the server is perfectly willing to accept, then the system is failing. The reverse (client-side code that's less strict than server-side code) is not a problem: yes, bad data will slip through the browser and make its way to the server, but it will be rejected at the server. Granted, your server is working too hard (there's that shared resource problem again), but your data retains its integrity.

A Modest Proposal

Interestingly enough, then, it appears that the optimal strategy is to validate everything you can at the client (to reduce demand at the shared resource) and then check it again at the database (to protect yourself from incompetent -- or malicious -- front-end developers). If your business object encounters a problem that could have been checked by the client, it should blow up; if there's an issue that can be checked at the database, the middle tier shouldn't attempt to detect it. Only those checks that can't be easily/transparently implemented at the client or the database should be implemented in middle-tier business objects.
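Here's a minimal sketch of what a business object written to that strategy might look like. The classes and the rule are hypothetical, but the shape is the point: no re-checking of formats the client already caught, no range checks the database's constraints will enforce -- just the one rule that doesn't fit comfortably at either end:

    // A hypothetical cross-object rule: whether a rush order needs a
    // credit check depends on the customer's history -- awkward to check
    // in the browser (the client doesn't have the data) and awkward in a
    // table constraint (the rule spans rows and tables).
    public class Customer
    {
        public decimal RushOrderLimit { get; set; }
    }

    public class Order
    {
        public decimal Total { get; set; }
        public bool IsRushOrder { get; set; }

        public void Save(Customer customer)
        {
            // Don't repeat the client's format checks or the database's
            // range checks -- if bad data got this far, the database
            // constraint will blow up, exactly as it should.
            if (IsRushOrder && Total > customer.RushOrderLimit)
                throw new InvalidOperationException(
                    "Rush orders over the customer's limit need a credit check.");

            // ...hand the order off to the data access layer here...
        }
    }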

And, by "validating everything you can at the client," I mean exactly that. Calling a Web service to validate data is not validating at the client, because you're transferring the validation back to the server.

Obviously, removing all validation from your business objects and limiting validation to the client and the database has its issues. Response times increase as users wait through a round trip to the database to find that they've got a problem; your database engine works harder because requests that would otherwise have been denied by middle-tier validation are reaching the database.
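You can soften that somewhat by translating the database's rejection into something the user can act on. Here's a minimal sketch (the table and message are hypothetical; 547 is the error number SQL Server uses for constraint violations):

    // A minimal sketch: let the database's CHECK constraint do the work,
    // then turn its rejection into a readable message. SQL Server reports
    // CHECK and foreign key violations as error number 547.
    using System.Data.SqlClient;

    public class OrderRepository
    {
        // Returns null on success, or a message suitable for the user.
        public string TrySave(SqlConnection conn, decimal discount)
        {
            var cmd = new SqlCommand(
                "INSERT INTO Orders (Discount) VALUES (@d)", conn);
            cmd.Parameters.AddWithValue("@d", discount);
            try
            {
                cmd.ExecuteNonQuery();
                return null;
            }
            catch (SqlException ex) when (ex.Number == 547)
            {
                // The round trip happened, but integrity held.
                return "The discount is outside the allowed range.";
            }
        }
    }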

However, if you believe that your database is your last defense against bad data, you must do as much validation there as possible. If you believe that scalability is important, then you want to do as much validation as possible at the client. Duplicating in the middle tier validation that's already performed in the other tiers only creates an opportunity for error.

Logically, then, validation should be performed in the middle tier only if it can't be done in the other two tiers, or if its absence demonstrably creates a bottleneck.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
