Controlling Model Binding in ASP.NET Core

It seems like magic to me when model binding takes data from the client and loads it correctly into the properties of the Customer object passed as the parameter to this method:

public ActionResult UpdateCustomer(Customer cust)
{

However, sometimes model binding doesn't do what I'd like. For example, let's say my Customer object looks like this:

public class Customer
{
  public int Id {get; set;}
  public string FirstName {get; set;}
  public string LastName {get; set;}
  public int TotalOrders {get; set;}
}

If model binding can't find any data from the client to put in the LastName property, it will just set the property to null. I'd prefer that model binding do a little more, because the typical first line of code in my method checks for problems in model binding using the ModelState's IsValid property:

public ActionResult UpdateCustomer(Customer cust)
{
  if (!ModelState.IsValid)
  {

With model binding's default behavior, IsValid won't be set to false when there's no data for LastName.

I can get that behavior by adding the Required attribute to my Customer class' LastName property. The problem is that Required is also used by Entity Framework in code-first mode to control how the LastName column in the Customer table is declared. That probably isn't a big deal to you (though I worry about it).

Things get messier with the TotalOrders property because, unlike string properties, properties declared as integers aren't nullable. With or without the Required attribute, non-nullable data types are set to their default values. This means that if no data comes from the browser for TotalOrders, it will be set to 0 ... and IsValid still won't be set to false. It's now hard to tell whether the customer has no orders or the data simply wasn't sent.

I could change the datatype on TotalOrders to a nullable type (that is, int?) and put the Required attribute on it ... but now I'll have to work with the TotalOrders' Value property to retrieve its data. It's all getting a little complicated.
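Here's roughly the extra ceremony I'm talking about once TotalOrders is declared as int? (just a sketch, with placeholder logic):

if (cust.TotalOrders.HasValue)
{
  // Data arrived from the client, so it's safe to read the Value property
  int orders = cust.TotalOrders.Value;
  // ...do something with orders
}
else
{
  // Nothing was bound -- now I have to decide what "no TotalOrders data" means
}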

When working with non-nullable types, I prefer using the BindRequired attribute instead of Required. BindRequired will cause model binding to set the IsValid property to false if no data comes from the client (and it will do that without affecting how my columns are declared in the database).

This is how I might declare my Customer class to get the IsValid property set when TotalOrders is missing, without BindRequired having any effect on how the TotalOrders column is declared in my database:

using System.ComponentModel.DataAnnotations;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class Customer
{
  public int? Id {get; set;}
  [Required]
  public string FirstName {get; set;}
  [Required]
  public string LastName {get; set;}
  [BindRequired]
  public int TotalOrders {get; set;}
}

If I wanted the FirstName and LastName columns to be nullable in the database, too, I'd swap the Required attributes on those properties for BindRequired, as in the version below.
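Just to spell that out, the all-BindRequired version might look something like this:

using Microsoft.AspNetCore.Mvc.ModelBinding;

public class Customer
{
  public int? Id {get; set;}
  [BindRequired]
  public string FirstName {get; set;}
  [BindRequired]
  public string LastName {get; set;}
  [BindRequired]
  public int TotalOrders {get; set;}
}

Binding still sets IsValid to false when any of these values are missing from the request, but Entity Framework sees nothing that changes the column definitions.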

Posted by Peter Vogel on 09/17/2018 at 9:23 AM


Adding Your Own Files to Your Visual Studio Solution

Despite the file extensions you see in the Add Existing Item dialog box, Visual Studio isn't limited to working with specific kinds of files. If you have some file that you want to include in your project, you can add it in Solution Explorer. If you want to be able to edit it in Visual Studio, you just need to associate its file extension with one of Visual Studio's editors.

To do that, go to Tools | Options | Text Editor | File Extension. Once there, type an extension in the Extension text box in the top left-hand corner, pick an existing editor from the dropdown list to the right of the text box, and click the Add button just a bit further to the right. Now, when you open a file with that extension, Visual Studio will open it using that editor.

You can also create a new custom editor for Visual Studio based on the Core Editor, using the Visual Studio SDK (though, in the most recent version of the SDK, you'll have to write your editor in C++ because C# and Visual Basic aren't supported any more).

Posted by Peter Vogel on 08/07/2018 at 10:17 AM


Switching Your Xamarin Project to Standard Class Projects

There are lots of differences between using a Standard Class Library/Portable Class Library (PCL) project and a Shared project in a Xamarin solution. However, the most obvious one appears when you open any XAML file in a Shared project: In a Standard Class Library you'll get IntelliSense support; in a Shared project you won't get any IntelliSense support, and virtually every element in your XAML file will be flagged as an error (though, fortunately, your solution will still compile).

Unless you have a compelling reason to go with a Shared project (for example, your version of Visual Studio doesn't support Standard Class Library projects), you'll want to use a PCL or a Standard Class Library project ... and a Standard Class Library project is your best choice going forward. In fact, if your version of Visual Studio doesn't support Standard Class Library projects and you want to work with Xamarin, it might be time to upgrade to a newer version of Visual Studio (remembering that, for example, Visual Studio 2017 Community Edition is free).

If you're not sure which kind of project your Solution is currently using, first look at the icon beside your common project in Solution Explorer: If it's two overlapping diamonds, you have a Shared project (bad); if you have a simple box with "C#" inside of it then you have a Standard Class Library (good) or a PCL (not so good) project. To distinguish between a PCL and a Standard Class Library, open the project's properties and see if you have a Library tab on the left. If you do, you have a PCL project (as I said: not so good).

To convert your solution to using a Standard Class Library project, first right-click on your Solution node in Solution Explorer and use Add | New Project | .NET Standard | Class Library (.NET Standard) to add a Standard Class Library project to your solution. Once the project is added, delete any default resources in the project (you won't need the Class1.cs file, for example). Then drag and drop any resources from your existing Shared/PCL project to your new Standard Class Library project. If you have an App.xaml file that marks the start point of your application in your old project, make sure you drag it to your new project.

Next, right-click on your new project and use Manage NuGet Packages to add the Xamarin.Forms package to your project. You'll need to add any other references or NuGet packages your original project was using.

Now do a rebuild on your new project. If you get some compile-time errors you haven't seen before, open your project's Properties and, on the Application tab, check that the Target framework dropdown list is set to the highest level (as I write this, that's .NET Standard 2.0). If it isn't, set the dropdown to the highest level and try another build. If you still have compile-time problems, then it's too early to move to a .NET Standard Class Library project and you'll have to live with your Shared or PCL project.

Now, the scary part: Right-click on your Shared or PCL project and pick Remove. Remind yourself that the project isn't gone; it's just no longer part of the solution. If it turns out you need something from it, you can use Add | Existing Item to pick up anything you've forgotten (you can also open the old project in Visual Studio to check any settings you might have missed).

If you don't yet have a XAML file (other than App.xaml) in your new project, right-click on your new project in Solution Explorer and select Add | New Item | Xamarin Forms | Content Page to add one. If you want this to be your start page, make sure this new Page's name matches the name in the App class's constructor in the App.xaml.cs file (you can either give your new XAML file a matching name or change the name in App.xaml.cs).
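For reference, that wiring in App.xaml.cs looks something like this (a minimal sketch; the MainPage name and the MyXamarinApp namespace are just placeholders for whatever your project actually uses):

using Xamarin.Forms;

namespace MyXamarinApp
{
  public partial class App : Application
  {
    public App()
    {
      InitializeComponent();
      // The page named here must exist in the new Standard Class Library project
      MainPage = new MainPage();
    }
  }
}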

Finally, in the other projects in your solution, use Add | New Reference to add a project reference to your new Standard Class Library project and do a rebuild of your solution to flag any namespace issues that you have to clean up.

Posted by Peter Vogel on 08/06/2018 at 10:42 AM


Organizing Test Cases

In addition to the TestInitialize and TestMethod attributes that you're used to using when creating automated tests, there's also a TestCategory attribute that you'll find useful as the number of your tests starts to get overwhelming.

Effectively, using TestCategory lets you create groups of tests using any arbitrary scheme you want. This allows you to create (and run!) groups of tests that you feel are related without having to run every test in a class, a project or a solution. These could be, for example, all the tests in all your test classes that involve your Customer factory, or all the tests that use your repository class.

To use TestCategory, you add it as a separate attribute to a test or combine it with your existing test attributes. These two snippets are equivalent, for example, and both assign GetAllTest to the Data category:

[TestMethod, TestCategory("Data")]
public void GetAllTest()

[TestMethod]
[TestCategory("Data")]
public void GetAllTest()

You can also assign a test to multiple categories. These examples tie the GetAllTest test to both the Data and the Customers categories:

[TestMethod, TestCategory("Data"), TestCategory("Customers")]
public void GetAllTest()

[TestMethod]
[TestCategory("Data")]
[TestCategory("Customers")]
public void GetAllTest()

You can run the tests in any particular category from Test Explorer. First, though, you must make sure that your tests are in List view: If your tests are grouped in any way other than Run | Not Run | Failed, then you're not in List view (List view still groups your tests, it just uses the default grouping of "by result"). The toggle that switches between List and Hierarchical view is the second button on the Test Explorer toolbar, just to the right of the Run Tests After Build toggle.

Once you're in List view, the Group By toggle (just to the right of the List View toggle) will be enabled. Click the down arrow on the right side of the Group By toggle and you'll get a list of all the ways you can group your tests. To group by category, pick Traits from this list. Not only will this list all the tests you've assigned to a category, it will also put any test to which you haven't assigned a category in a group called No Traits. Right-clicking on a category name will let you run all the tests in that category.

You can also run tests by category using the VsTest.Console or MSTest command-line tools. Those tools also give you an additional ability: You can combine categories with logical operators to either run only those tests that appear in the intersection of the categories you list or run all the tests from all of the categories you list.
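As a rough example (the assembly name is a placeholder, and you should check the filter syntax against your version of the tooling), running categories from vstest.console.exe looks something like this:

rem Run everything in the Data category
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Data"

rem Run only tests that are in both categories (the intersection)
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Data&TestCategory=Customers"

rem Run tests that are in either category
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Data|TestCategory=Customers"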

Posted by Peter Vogel on 08/01/2018 at 12:36 PM


Use JavaScript Code from One File in Another File with IntelliSense

If you have a JavaScript (*.js) file containing code, it's not unusual for your code to reference code held in another JavaScript file. If you're using more recent versions of Visual Studio, you'll find that the editor knows about all the JavaScript code in your project and will provide some IntelliSense support as you type in your JavaScript code (not as much support as you'd get with TypeScript, of course).

If your version of Visual Studio isn't doing that for you, you can still get that IntelliSense support in your code by adding a reference to that other JavaScript file. A typical reference to another JavaScript file (placed at the top of the file you're entering code into) looks like this:

/// <reference path="Utilities.js" />

Now, as you add JavaScript code to the file containing this reference, you'll get IntelliSense support for any functions and global variables declared in Utilities.js.

And you don't have to type that reference if you don't want to. Visual Studio will generate that reference for you if you just drag Utilities.js out of Solution Explorer and drop it into the file you're adding code to.

Posted by Peter Vogel on 07/23/2018 at 10:21 AM


Eliminate Code and Add Functionality with Fody Attributes

Fody is such a cool NuGet package that it's a shame it's only been mentioned on this site once and in passing. Fody handles the problem you have all the time: crosscutting concerns. A crosscutting concern is something that happens in many places in your application but not in every place.

The .NET Framework's attributes are probably the most common tool for handling crosscutting concerns. For example, security is a crosscutting concern: Many parts of your application should only be accessed by authorized users ... but not all parts (the login screen, for example, must be accessible to everyone). You can handle that crosscutting concern in ASP.NET by putting an Authorize attribute on those methods that you want to lock unauthorized users out of. Most attributes address issues important to users (security, for example). Most Fody attributes, on the other hand, handle those problems that annoy developers.

For example, the two Fody attributes I'm using the most right now (as part of building a Xamarin application) are NotifyFor (which eliminates the need to write code for the PropertyChanged event in a property) and AlsoNotifyFor (which fires a PropertyChanged event for a related property when a property changes value). All I have to do is put the attribute on my property and Fody takes care of the rest.
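To give you a sense of how little code is involved, here's a sketch using the AlsoNotifyFor attribute from the PropertyChanged.Fody add-in (the class and property names here are mine, not from a real project):

using System.ComponentModel;
using PropertyChanged;

public class CustomerViewModel : INotifyPropertyChanged
{
  // Fody weaves the code that raises this event into the property setters
  public event PropertyChangedEventHandler PropertyChanged;

  // When FirstName or LastName changes, also raise PropertyChanged for FullName
  [AlsoNotifyFor(nameof(FullName))]
  public string FirstName { get; set; }

  [AlsoNotifyFor(nameof(FullName))]
  public string LastName { get; set; }

  public string FullName => $"{FirstName} {LastName}";
}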

But there are dozens of useful Fody attributes, including ones to make your string comparisons caseless, allow you to specify the backing field for an auto-declared property, and check the syntax of your SQL queries during builds. There's also SexyProxy, which I've never needed but its name is so cute that I keep trying to find a use for it.

Posted by Peter Vogel on 07/23/2018 at 10:23 AM


The Simplest Way to Create an Asynchronous Method

You like the idea of using await and async to execute asynchronous methods, and you've got a method that you'd like to turn into an asynchronous method ... but you haven't called any native asynchronous methods within your method and you're not sure what to do to make your method "awaitable." There is an easy solution: Pass the result of your method to the Task object's static FromResult method, which will load the Task object's Result property (which, really, is what all that async/await processing depends on).

Here's an example of some code that creates a Customer object:

public Customer GetCustomer()
{
   Customer cust;
   cust = new Customer {Id = 1, FirstName = "Peter", LastName = "Vogel"};
   return cust;
}

Here's the same method in an awaitable version, taking advantage of the FromResult method:

public Task<Customer> GetCustomer()
{
  Customer cust;
  cust = new Customer { Id = 1, FirstName = "Peter", LastName = "Vogel" };
  return Task.FromResult<Customer>(cust);
}

You can now use the GetCustomer method like this:

public async Task<Customer> ProcessACustomer()
{
  Customer cust = await GetCustomer();
  //...do something with the Customer object asynchronously
  return cust;
}

Posted by Peter Vogel on 07/23/2018 at 8:30 AM


Overriding Controller Authorization in ASP.NET MVC

You have a Controller class called Administration that only admins should use. There are about a dozen Action methods in the Controller class, and they should all be accessed only by users in the Admin or SuperAdmin roles. Rather than put an Authorize attribute on each method, you can put just one on the Controller class, like this:

<Authorize(Roles:="Admin,SuperAdmin")>
Public Class AdministrationController

Did I say that all of your methods in this controller should be accessed only by the Admin and SuperAdmin users? I lied. There's one really annoying method that doesn't require this level of authorization (it just displays a list of administrators with their contact information). You could try moving it to another Controller, or you could put Authorize attributes on all the methods ... or you could use OverrideAuthorization.

The OverrideAuthorization attribute lets you discard the authorization set at the Controller level (Authorize is an authorization filter, so it's the authorization overrides you want here, not OverrideAuthentication, which targets authentication filters). You can then follow the OverrideAuthorization attribute with whatever Authorize attribute your method actually needs.

Here's an example that lets anyone in the User role use the ListAdmins method:

<OverrideAuthorization>
<Authorize(Roles:="User")>
Public Function ListAdmins() As ActionResult

There are four other Override* attributes including one called OverrideException that lets you discard HandleError attributes set at the Controller or Global Filters level.

Posted by Peter Vogel on 07/18/2018 at 8:59 AM


The New Cross-Platform Standard: Version 2.0

Microsoft has always had a plan to support cross-platform development using the .NET Framework. For the longest time, the plan was for you to create a Portable Class Library (PCL) -- any API you used from a PCL was supposed to work on any .NET supported platform.

If you did create a PCL project, you were given a list of platforms and asked to check off which ones you wanted to run on. After checking off your choices, what you could access in your library was the intersection of the .NET Framework APIs supported on each of those platforms. Fundamentally, that meant the more platforms you checked off, the less of the .NET Framework you got to use (in fact, some combinations would take out whole versions of Visual Studio).

The process wasn't dynamic, however. In reality, what you were picking was one of a set of predefined profiles that combined various .NET Framework APIs.

Microsoft has a new approach: Standard Class Library projects. A Standard Class Library consists of those APIs that "are intended to be available on all .NET implementations." The news here is that there is only one Standard and it supports all the .NET platforms -- no more profiles agglomerated into an arbitrary set of interfaces.

The catch here is that the Standard may not include something you want ... at least, not yet. With PCLs there was always the possibility that, if you dropped one of the platforms you wanted to support, you might pick up the API you wanted. That's not an option with the Standard, which is monolithic. In some ways it's like setting the version of the .NET Framework you want to support in your project's properties: The lower the version you pick, the less functionality you have.

Obviously, then, what matters in the .NET Standard is comprehensiveness. There have been several iterations of the .NET Standard specification, each of which includes more .NET Framework APIs. The latest version (as of June, 2018) is .NET Standard 2.0 and (like version 1.3 before it) it's a real watershed in terms of adding common functionality -- more than 5,000 APIs. With version 2.0 there's a very high likelihood that what you want to use is in the Standard.

You can check out the whole list here. The page also includes links to a list of namespaces and APIs added in any version of the .NET Standard. It's telling that the API list for version 2.0 is too big to display in anything but its raw format.

Posted by Peter Vogel on 06/26/2018 at 7:29 AM


Dealing with Read-Only Files

It's easy to miss that you've opened a read-only file in Visual Studio: When you open a file you can't change, a tiny little lock icon appears on the tab of the editor window to the right of the file's name. By default, Visual Studio won't even tell you that you can't change the file until -- after you've made all your changes, of course -- you try to save the file. Only then do you get the bad news with a dialog that gives you three choices:

  • Create a new file with your changes
  • Attempt to overwrite the file (that is, attempt to make the file writeable)
  • Cancel and go back to the file, which now holds a ton of changes you can't save

Notice the absence of an "Oh, just throw everything away" option.

If you'd prefer to know about this problem before you start making your changes, you just need to set an option in Visual Studio. Go to Tools | Options | Environment | Documents and uncheck the option called "Allow editing of read-only files; warn when attempt to save."

Now, when you start to make changes to a read-only file, you'll get that dialog box asking whether you want to create a new file, make the file writeable, or cancel. This time, the cancel option returns you to a file that you haven't invested any time in.

By the way, and for the record, the "make writeable" option never works. It's just there to give you hope ... and then crush it.

Posted by Peter Vogel on 06/20/2018 at 10:50 AM


Disposing of the DbContext Object

I have two separate styles for using the DbContext object. In one style I create the DbContext object when my class is instantiated, either as part of defining a field for my class:

Public Class CustomerRepository
  Dim db As New CustomerEntities

Or in my class's constructor:

Public Class CustomerRepository
  Dim db As CustomerEntities

  Public Sub New()
    db = New CustomerEntities
  End Sub

I use this style when all the methods in the class use the DbContext object and I expect the methods to be called independently of each other (and in a variety of different combinations). I shouldn't admit this, but I frequently forget to call the Dispose method at the end of those methods.

My other style is to leverage the Using keyword, like this:

Using db = New CustomerEntities
  '...use CustomerEntities
End Using

This is the style I follow when my Entity Framework code is integrated with other processing (typically, other EF code). The primary reason I use Using in this style is to ensure that the Dispose method is called -- the End Using statement that marks the end of the block will make sure that happens. Basically, I'm compensating for my failures in the previous style.

It turns out that I needn't have felt bad about those missing calls to Dispose. A few quick tests with performance monitor will show that it's difficult (I would say "impossible") to detect any difference between applications that call the DbContext's Dispose method and those that do not.

There is, as always, one exception: when you take control of opening and closing the Connection object available through the DbContext object. In that scenario, it's entirely possible that you may forget to close your open Connection, something that DbContext won't let happen if you leave control of the Connection up to it.

Leaving a Connection open is bad because it defeats connection pooling (an open Connection object ties up a connection at the database, forcing other applications to create new connections at the database). Calling the Dispose method ensures that your Connection is closed.

So, as long as you let DbContext manage your connections, feel free to ignore the Dispose method. On the other hand, if you're managing your Connections, the Dispose method may be your bestest friend.

Posted by Peter Vogel on 06/18/2018 at 3:15 PM


URL Rewriting vs. HTTP Redirects

As Philip Jaske mentioned in his interview with Becky Nagel, one of the cool things in ASP.NET Core is the ability to rewrite incoming URLs to "fix up" a request. There are lots of reasons to do this, the primary one being that it gives you the flexibility to move server-side resources to new URLs: You just rewrite incoming requests using the old URLs to point to your new URLs.

But it does raise the question of when you should use URL Rewriting instead of sending an HTTP redirect to the client.

On the face of it, HTTP redirects are inefficient because they require you to send a redirect response to the client (network latency) with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect (even more network latency).

If the HTTP redirect is implemented with a 301 code (indicating that the change in URLs is permanent), then the client should automatically replace the original URL with the new, redirected URL when a request to the old URL is issued. That should eliminate the network latency on any future requests ... but the reality is that most hand-crafted consumers used to call Web Services aren't set up to do that.

URL rewriting should be more efficient than redirects because the client makes a single trip to the server: All the redirection is handled on the server. In addition, if you're using URL rewriting to support moving services to new URLs, you don't have to leave anything at the old URL just to return redirects. As an added feature, the client never sees the rewritten URL (unlike a redirect, where the new URL is sent to the client), which may or may not be important to you. Effectively, the difference is similar to using a Server.Transfer versus a Response.Redirect in ASP.NET.

However, I say rewriting "should" be more efficient than redirects because Microsoft notes that if your rules get complex enough (or if you have "too many" rules), rewriting has the potential to slow performance on your site. If you use URL rewriting, then you'll want to monitor response times in case they start increasing.

You have other options. IIS has the URL Rewrite extension, which is tightly coupled with IIS and will give you better performance. However, the URL Rewrite extension requires you to set up rewrite rules in IIS Manager, moving rewriting out of your control as a developer and into the hands of your site administrator. ASP.NET Core's URL rewriting feature lets you keep your hands on the reins.
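For what it's worth, here's a minimal sketch of what keeping that control looks like in Startup.Configure, using the Microsoft.AspNetCore.Rewrite middleware (the URL patterns here are just placeholders):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
  var options = new RewriteOptions()
    // Permanent redirect: the client sees (and can cache) the new URL
    .AddRedirect("old-home", "new-home", 301)
    // Rewrite on the server: the client never sees the new URL
    .AddRewrite(@"^old-api/(.*)", "new-api/$1", skipRemainingRules: true);

  app.UseRewriter(options);

  // ...the rest of the pipeline (static files, MVC and so on)
}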

Posted by Peter Vogel on 06/18/2018 at 3:39 PM

