
.NET Survival Guide: Data Access Technologies


Data Access Technologies
The frequency with which Microsoft has changed database access technologies is remarkable, with each new scheme offering the promise that .NET developers have finally arrived at a long-term solution. The core problem with the .NET Framework was that although it included a strong database access technology in ADO.NET, it failed to address the key pain point for developers: object-relational mapping (ORM) -- specifically, how data represented in an object-oriented language such as C# or Visual Basic .NET maps into and out of a relational database. Although numerous solutions were available in 2007, none truly eliminated the need to hand-write SQL or kept the SQL schema from seeping into the client-side programming language.

Tool Box

  • ADO.NET
  • LINQ to SQL
  • Entity Framework
  • NHibernate
  • LLBLGen Pro

Visual Studio 2008 and the .NET Framework 3.5 changed this situation radically. For the first time, mainstream .NET development had a strong mechanism, in the form of LINQ, for bridging the object-relational impedance mismatch. Although Visual Studio 2008 presented two database LINQ solutions -- LINQ to SQL and the Entity Framework -- Visual Studio 2010 resolved that ambiguity with the Entity Framework 4, and Microsoft has essentially deprecated LINQ to SQL. Though prominent ORM alternatives exist in NHibernate and LLBLGen Pro, the Entity Framework 4 has emerged as the most logical data access target for .NET developers. This leaves developers with four prudent options when developing under the .NET Framework 4:

  • Leave existing development on LINQ to SQL but search for an opportunity to migrate to the Entity Framework.
  • Leave ADO.NET development in place, but capitalize on any opportunities to implement with the Entity Framework.
  • Commit to using the Entity Framework by default for database-access development going forward, outside of exceptional cases.
  • Consider widely deployed alternatives like NHibernate and LLBLGen Pro, which boast features and support that rival those of the ORM technologies coming from Microsoft.

Prior to the .NET Framework 4, gaps in the first version of the Entity Framework made it a less-than-perfect solution. Microsoft has resolved many of those shortcomings with the Entity Framework 4. Among the significant new features making the Entity Framework 4 a credible data access target (the sketches following the list illustrate several of them):
  • Foreign Key properties on entities expose foreign key values directly, rather than only through the related entity, simplifying work with relationships.
  • Plain Old CLR Object (POCO) entities can be generated, using online templates, so that no special Entity Framework base class or attributes are required.
  • Text Template Transformation Toolkit (T4) code generation enables deep customization of the entities generated from the database -- and even the ability to generate the database from a set of POCO classes.
  • Lazy Loading, enabled by default, defers loading an entity's related objects until the code actually accesses them.
  • Improved stored-procedure support, exposed through function imports on the context, allows for such features as returning complex types from stored procedures.
  • Enhanced support for entities that are disconnected from the Entity Framework context and possibly transferred across WCF boundaries -- even with support for change tracking within the entity.
  • Inclusion of interfaces such as IObjectSet<T> that enable the development of mock data layers for unit testing (see the second sketch below).
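
To make several of these features concrete, here is a minimal sketch built around a hypothetical Northwind-style model: hand-written POCO entities with a foreign key property and virtual navigation properties (so EF4 can supply lazy-loading proxies), plus a LINQ to Entities query. The NorthwindEntities class stands in for the context normally produced by the EF designer or T4 templates; all names here are illustrative, not prescribed by the Entity Framework.

using System;
using System.Collections.Generic;
using System.Data.Objects; // EF4's ObjectContext and ObjectSet<T>
using System.Linq;

// Hand-written POCO entities: no Entity Framework base class or attributes.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }

    // Declared virtual so the EF4 runtime can substitute a lazy-loading proxy.
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int OrderId { get; set; }

    // Foreign key property exposed directly on the entity (new in EF4).
    public int CustomerId { get; set; }
    public virtual Customer Customer { get; set; }
}

// Normally generated by the EF designer or POCO T4 templates; sketched by hand
// here only to show its shape. "NorthwindEntities" is a hypothetical model name,
// and "name=NorthwindEntities" refers to a connection string assumed to exist
// in the application's configuration file.
public class NorthwindEntities : ObjectContext
{
    private ObjectSet<Customer> _customers;
    private ObjectSet<Order> _orders;

    public NorthwindEntities()
        : base("name=NorthwindEntities", "NorthwindEntities")
    {
        ContextOptions.LazyLoadingEnabled = true; // the EF4 default for generated contexts
    }

    public ObjectSet<Customer> Customers
    {
        get { return _customers ?? (_customers = CreateObjectSet<Customer>()); }
    }

    public ObjectSet<Order> Orders
    {
        get { return _orders ?? (_orders = CreateObjectSet<Order>()); }
    }
}

public static class OrderReport
{
    public static void PrintOrders(int customerId)
    {
        using (NorthwindEntities context = new NorthwindEntities())
        {
            // The query is expressed against the object model, not SQL.
            var orders = from order in context.Orders
                         where order.CustomerId == customerId
                         orderby order.OrderId descending
                         select order;

            foreach (Order order in orders)
            {
                // order.Customer is fetched lazily the first time it is accessed.
                Console.WriteLine("{0}: {1}", order.OrderId, order.Customer.Name);
            }
        }
    }
}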
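
Because ObjectSet<T> implements IObjectSet<T>, data access code written against that interface can be exercised in unit tests without a database. The following is a minimal sketch of that idea using the same hypothetical model as above; INorthwindContext and FakeObjectSet<T> are illustrative names, not Entity Framework types.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Objects; // IObjectSet<T>
using System.Linq;
using System.Linq.Expressions;

// Abstraction over the context so tests can swap in an in-memory fake.
// Customer and Order are the POCO entities from the previous sketch.
public interface INorthwindContext
{
    IObjectSet<Customer> Customers { get; }
    IObjectSet<Order> Orders { get; }
}

// In-memory IObjectSet<T> intended only for unit tests.
public class FakeObjectSet<T> : IObjectSet<T> where T : class
{
    private readonly HashSet<T> _data = new HashSet<T>();
    private readonly IQueryable<T> _query;

    public FakeObjectSet()
    {
        _query = _data.AsQueryable();
    }

    // IObjectSet<T> members, backed by the in-memory collection.
    public void AddObject(T entity) { _data.Add(entity); }
    public void Attach(T entity) { _data.Add(entity); }
    public void DeleteObject(T entity) { _data.Remove(entity); }
    public void Detach(T entity) { _data.Remove(entity); }

    // IQueryable/IEnumerable members, delegated to the in-memory collection.
    public Type ElementType { get { return _query.ElementType; } }
    public Expression Expression { get { return _query.Expression; } }
    public IQueryProvider Provider { get { return _query.Provider; } }
    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
}

In production, the real context can satisfy such an interface by exposing its sets as IObjectSet<T> (for example, by adjusting the T4 template or adding a small partial class), while unit tests construct the fake and populate it in memory.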

These new features, all of which matter to the vast majority of database development scenarios, make a compelling case for the Entity Framework 4. In addition, Microsoft continues to provide interim community technology previews of future enhancements slated for the Entity Framework.

In Summary
After some fits and starts, Microsoft has landed on a viable target for .NET data access in the form of the Entity Framework 4.

About the Author

Mark Michaelis (http://IntelliTect.com/Mark) is the founder of IntelliTect and serves as its Chief Technical Architect and Trainer. Since 1996, he has been a Microsoft MVP for C#, Visual Studio Team System, and the Windows SDK, and in 2007 he was recognized as a Microsoft Regional Director. He also serves on several Microsoft software design review teams, including C#, the Connected Systems Division, and VSTS. Mark speaks at developer conferences and has written numerous articles and books; Essential C# 5.0 is his most recent. Mark holds a Bachelor of Arts in Philosophy from the University of Illinois and a Master's in Computer Science from the Illinois Institute of Technology. When not bonding with his computer, Mark is busy with his family or training for another triathlon (having completed the Ironman in 2008). Mark lives in Spokane, Washington, with his wife Elisabeth and three children, Benjamin, Hanna and Abigail.
