I didn't know I could do this until a few weeks ago: While still in Edit mode, you can right-click on a line of code and select Run to Cursor. Visual Studio will compile your application (if necessary), start your application in Debug mode and stop on the line you've selected. If your cursor is already on the line where you want to stop, you don't need to touch your mouse -- just press Ctrl+F10 to get the same result. Once Visual Studio stops on your line, you can set more permanent breakpoints by pressing F9.
In retrospect, this was obvious: I've used Run to Cursor while in Debug mode for years. And, to add insult to injury, every time I've right-clicked on a line of code in Edit mode, Run to Cursor has been right there on the shortcut menu. I just never thought to use it until recently.
Posted by Peter Vogel on 09/09/2014
One reader's comments in an article I wrote about Entity Framework's async programming turned into an interesting discussion on the role of asynchronous programming in the modern world (interesting for me, at any rate). Back in the day, I used to tell developers that the surest way to create a bug they'd never be able to track down was to write what we used to call "multi-threaded applications." I gave seminars on designing multi-threaded application where, perversely, I spent the first five minutes explaining why you shouldn't do multi-threading unless you absolutely had to. And then, of course, I'd go home and write multi-threaded applications for my clients: do as I say, not as I do.
Obviously, multi-core processors and the new async tools in the .NET Framework have changed the environment tremendously. But I discovered during the discussion that I still think of asynchronous programming as something you do when you have specific issues in your application that only asynchronous programming will solve. In my mental model, I would write my code synchronously, see where I had responsiveness issues, and then rewrite those parts of the application to use asynchronous code (which, on occasion, could trigger some architectural changes to the application). Only if I could see the responsiveness problems in advance would I start by writing asynchronous code. But I was always doing it as an alternative to my default mode: writing synchronous code.
The commenters challenged that, effectively saying that (in many cases) asynchronous programming should be the developer's default choice. As one reader pointed out, a blocked thread can take up to a megabyte of memory while it's idling. Integrating async programming can eliminate that loss.
Of course, there is a question of whether you care about that megabyte: for about $10,000, I can get a server with 256 gigabytes of RAM -- that's over a quarter of a million of those megabytes that we were worrying about saving. The fully loaded chargeback for a computer programmer would swallow that ten grand in a couple of days; so if the programmer is spending much "extra" time to save the odd megabyte, it doesn't make fiscal sense.
But here's the point: If the cost of writing the code asynchronously from the start is "trivial" (to quote another reader), shouldn't I be writing asynchronously from the beginning? You wouldn't need to buy the server, and while the incremental cost in developer time of writing async from the start might not be zero, it could easily be negligible.
It's a powerful argument. I don't think I'm there yet (I still see the async tools as making it easier to do asynchronous programming when I need to do it), but I may be coming around. I still worry about those bugs you'll never track down, though. The exception EF raises when executing multiple asynchronous operations on the same context seems to me to be the kind of problem that wouldn't occur during development, but would raise its ugly head in production, for example. But I may be worrying unnecessarily.
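To make that worry concrete, here's a sketch of the kind of code EF6 objects to: starting a second asynchronous operation on a context before the first completes. All the names here (NorthwindContext and its DbSets) are invented for illustration.

```vb
' NorthwindContext, Customers and Orders are hypothetical names.
Public Async Function LoadDashboardAsync(ctx As NorthwindContext) As Task
  ' Start the first query but don't await it yet...
  Dim customersTask = ctx.Customers.ToListAsync()

  ' ...so this second query begins while the first may still be
  ' running. EF6 detects that and throws ("A second operation
  ' started on this context before a previous asynchronous
  ' operation completed") -- but only if the first query hasn't
  ' already finished, which is why the bug can hide in
  ' development and surface in production.
  Dim ordersTask = ctx.Orders.ToListAsync()

  Await Task.WhenAll(customersTask, ordersTask)
End Function
```

The fix is to await each operation in turn, or to give each concurrent query its own context instance.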
Posted by Peter Vogel on 07/08/2014
If I'm writing a method that returns a collection, I can, of course, declare my method's return type using a class name, like this:
Public Function GetCustomersByCity(City As String) As List(Of Customer)
But by declaring my return type as a class in this way, I restrict my method to returning only that class (in this example, a List). I might eventually want to rewrite the method to return some other class, but my overly specific return type will prevent me from doing that. A much better practice is to specify an interface name when returning a collection. That allows me to return any class I want, provided I pick a class that implements the interface I choose.
You want to choose an interface that applies to the maximum number of classes (giving you maximum flexibility in deciding what class to use), while also exposing all the functionality that someone using your method will want to use (giving your clients exactly as much flexibility as you want). There's going to be some conflict here because, presumably, the most common interface is going to be the one with the least functionality. Microsoft gives you at least three choices: IQueryable, IList and IEnumerable.
From the point of view of supplying functionality, if you just want to give your users the ability to read the entries (i.e., loop through the collection with a For…Each loop or apply LINQ queries to it), any of these interfaces will do. If you want to give the application that's calling your method the ability to add or remove items from the collection, you'll want to return the IList interface (that does restrict your method to returning classes that support adding and removing items, which means, for example, that you won't be able to return an array from your method).
From the point of view of giving yourself maximum flexibility, IEnumerable is your best choice (both IList and IQueryable inherit from IEnumerable). A quick, non-exhaustive survey suggests to me that IQueryable is your most limiting choice (you can't return a List from a method with a return type of IQueryable, for example). But performance matters too: IQueryable is the right choice for LINQ queries running against Entity Framework, because operators applied to an IQueryable can be translated into SQL and run on the database server, while operators applied to an IEnumerable run in memory on the client.
Summing up, my current best advice is: Use IList if your clients need to change the collection; IQueryable if your method is returning an Entity Framework result; IEnumerable for everything else.
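As a sketch of what that advice looks like in practice, here's the method from the top of this post rewritten to return an interface (myCustomers is assumed to be some List(Of Customer) field; the names are for illustration only):

```vb
' Declaring the return type as IEnumerable(Of Customer) leaves me
' free to switch the internal collection class later -- a List,
' an array, or the result of a LINQ query all qualify.
Public Function GetCustomersByCity(City As String) As IEnumerable(Of Customer)
  Return myCustomers.Where(Function(c) c.City = City)
End Function
```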
I bet I'm going to get comments about this advice …
Posted by Peter Vogel on 06/28/2014
For all the complaints about the muted color scheme in Visual Studio 2012, it's actually pretty easy to change the colors. Just go to Tools | Options and you'll find that the Environment node includes a Visual Theme choice (there's no equivalent feature in Visual Studio 2010, but then, no one complains much about Visual Studio 2010's color scheme).
Even in Visual Studio 2012, though, the Visual Theme choice just gives you a list of pre-defined muted color schemes: What if you don't like any of them, either? It is possible to change the colors individually for Visual Studio's components through Tools | Options | Environment | Fonts and Colors -- but only in the same sense that, if you're trapped by yourself at the South Pole, it is possible to remove your own appendix. The good news here is that Microsoft has theme editors for Visual Studio 2010, 2012, and 2013 (the editors are also available as Visual Studio extensions from Visual Studio's Tools menu).
Download the editor, install it, restart Visual Studio and you're ready to go. In Visual Studio 2010 you'll find a new top-level menu called Theme; in Visual Studio 2012, you'll find a Change Color Theme choice on your Tools menu. From the dialog that either choice opens you can pick a pre-defined theme or modify an existing theme to create your own, custom theme. The editor for Visual Studio 2012 provides a much better experience for changing colors than the Visual Studio 2010 version (the Visual Studio 2012 version gives you more feedback on the results of your change) and throws in the ability to export your custom theme to share it among other copies of Visual Studio. But then, I guess, Visual Studio 2012 needs more help here.
Posted by Peter Vogel on 06/27/2014
I was writing a For…Each loop yesterday to create a comma-delimited string from the properties of objects in a collection (and, thanks to LINQ, I don't write many For…Each loops any more). As I was typing in my code I noticed that the String class's Join method now accepts a collection as one of its parameters. That let me delete my whole For…Each loop because, with this new version of Join, all I have to do is pass the Join method the separator I want between my strings (in this case, a comma) and a collection of strings.
Since the collection I wanted to process was a collection of Customer objects, I did need to include a lambda expression to specify which property on the Customer object I wanted to concatenate (I could also have overridden my Customer's ToString method to return the value I wanted to concatenate). So, in the end, all I needed was this:
Dim res = String.Join(",", db.Customers.Select(Function(c) c.FirstName))
I got back the string "Peter,Jan,Jason…" and so on.
One sad thing: this overload of Join is only available in .NET 4 and later. I'm not saying that, all by itself, this feature is a compelling reason to upgrade, but I am now lobbying a little bit harder to get my clients using older versions of .NET to upgrade.
Posted by Peter Vogel on 06/17/2014
According to the Time magazine article about the team that fixed the ObamaCare site, the first thing the team did was build in a cache (in fact, they were horrified to discover that the site was built without a server-side cache for frequently used data). If you're reading this Web site, you already knew about that and how easy it is to implement using either the ASP.NET Cache object on the server or local storage on the client. It could have been you on that team.
There's even better news: With .NET 4, a MemoryCache object is available in the System.Runtime.Caching library, and you can use it in non-ASP.NET applications.
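Here's a minimal sketch of using it in a console application (the cache key, the expiration policy, and the GetRates function are all invented for illustration; you'll need a project reference to the System.Runtime.Caching assembly):

```vb
Imports System.Runtime.Caching

Module CacheDemo
  Function GetRates() As List(Of Decimal)
    ' Stand-in for an expensive database or service call
    Return New List(Of Decimal) From {1.09D, 0.79D}
  End Function

  Sub Main()
    Dim cache As ObjectCache = MemoryCache.Default

    ' Check the cache first...
    Dim rates = TryCast(cache("ExchangeRates"), List(Of Decimal))
    If rates Is Nothing Then
      ' ...and only do the expensive fetch on a miss, keeping
      ' the result around for ten minutes.
      rates = GetRates()
      cache.Set("ExchangeRates", rates,
                New CacheItemPolicy With {
                  .AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)})
    End If
  End Sub
End Module
```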
Posted by Peter Vogel on 06/09/2014
Every once in a while I find myself writing one set of code to do something to the Customers collection and then writing almost identical code to work with the Orders collection. When I spot that duplicate code, I use the Set method on the DbContext object to find the collection of entities I want and write the code once. I just pass the Type of the object I want to the Set method.
For instance, I could write code like this to work with the Orders collection:
res = ctx.Orders
If I use the Set method, I just have to pass the type of the entity in the collection. For the Orders collection, that's the Order class:
res = ctx.Set(GetType(Order))
You can use the Set method to create general purpose methods in at least two ways. First, you can pass the type of the object to a method that accesses the DbContext object:
Public Sub MyMethod(EntityType As Type)
  Dim res = ctx.Set(EntityType)
  '...code to work with the collection
End Sub
Alternatively, you can write a generic method and pass the type when you call the method:
Public Sub MyMethod(Of T As Class)()
  Dim res = ctx.Set(GetType(T))
End Sub
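Calling the generic version just means supplying the entity class (Order here stands in for whatever entity class your model defines):

```vb
MyMethod(Of Order)()
```

One note: inside a generic method you can also call the generic overload, ctx.Set(Of T)(), which returns a typed DbSet(Of T) rather than the non-generic DbSet, so you get IntelliSense on the entity's members.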
Posted by Peter Vogel on 05/30/2014
Many developers are aware that they can write a method that works with many different kinds of objects without using the Object data type: Just write a generic class or method. For example, this method will work with any class, provided that the developer specifies the class when calling the method:
Public Sub MyMethod(Of T)()
  Dim im As T
  '...using the variable im
End Sub
To use this method with a Customer class, a developer would write code like this:
MyMethod(Of Customer)()
The problem is that you can't do much with the variable declared as T. You can't instantiate the class with code like this, for example:
Public Sub MyMethod(Of T)()
  Dim im As T
  im = New T
  '...using the variable im
End Sub
That is, unless you promise to .NET that the class will have a constructor (in Visual Basic, a New method) that accepts no parameters. A version of the method that would let you instantiate the class would look like this:
Public Sub MyMethod(Of T As New)()
  Dim im As T
  im = New T
  '...using the variable im
End Sub
Of course, you do have to specify a class that has a parameterless constructor when you call the method -- you've constrained the number of classes that will work with this method.
You can specify other constraints on the class: that the class must inherit from some other class and/or implement a specific interface, for example. If you do, you'll be able to use methods or properties defined in those classes or interfaces in your method. You can even combine multiple constraints by enclosing them in curly braces ({}).
The following example requires that the class used in the method inherit from Customer, implement the ICreditRating interface, and have a parameterless constructor. That doesn't leave a lot of classes that can be used with this method, but the trade-off is that you can do more with the class in the method:
Public Sub MyMethod(Of T As {New, Customer, ICreditRating})()
  Dim im As T
  im = New T
  im.FirstName = "Peter"
End Sub
Posted by Peter Vogel on 05/27/2014
It's not something database developers do much anymore, but it is possible to have a primary key in a table that isn't generated by the database. If you do have a key like that, you need to tell Entity Framework about it when writing the corresponding entity class: decorate the key property with the DatabaseGenerated attribute and specify a DatabaseGeneratedOption.
For a key not generated by the database, the option to pick is DatabaseGeneratedOption.None, like this:
Imports System.ComponentModel.DataAnnotations
Imports System.ComponentModel.DataAnnotations.Schema
Public Class SalesOrder
  <Key(), DatabaseGenerated(DatabaseGeneratedOption.None)>
  Public Property SalesOrderId As Integer
End Class
Posted by Peter Vogel on 05/19/2014
Before your first query against an Entity Framework (EF) model, EF gathers data about your database based on the entities and relationships you've defined to EF; then EF does it again before your first save. If you have lots of entities and relationships in your model, the information-gathering phase can take a while. Just to make it worse: doubling the number of entities in a model seems to more than double the time spent in the information-gathering phase. Let's not ask how I know this.
Entity Framework Power Tools can help here by gathering that information at design time, rather than at run time. Depending on how many entities and relationships you have, you can see order of magnitude improvements on your first query and save (literally: reducing the time to execute the first query or save by 10 to 100 times).
To generate this data at design time, first install Power Tools through Visual Studio's Tools | Extension Manager menu choice. With that installed, find the file containing your DbContext object, right-click it, then select Entity Framework | Generate Views. You'll end up with a new file with the same name as your context class file, but with "Views" inserted before the file type (e.g., NorthwindContext.Views.cs). Some caveats: Your DbContext class must have at least one constructor that doesn't require parameters; your project must compile; and your EF classes must be valid (check Visual Studio's Output window for messages from Power Tools if you don't get the Views file).
But nothing is free: If you make a change that affects how your application stores or updates tables -- either to your database schema or to your model's configuration -- your application will blow up because your design-time Views code no longer matches the reality of your database. To avoid this, you'll need to test your application whenever you make a change that might conflict with the code in your Views file. If it turns out that you have a problem, you must regenerate the Views file and redeploy your application. That's a pain, so you'll only want to use this technique if that first query and save is really hurting your performance.
Another option is to not generate the Views file at all. This step is only necessary if you have a lot of tables and relationships in your model, so a better solution is to not have a lot of tables and relationships in your model. Your first step should be to consider ways to limit the size of your EF model (see my earlier tip on bounded models). But if you can't do that and your performance is unacceptable, Views may give you the relief you need.
Posted by Peter Vogel on 05/01/2014
I recently got some feedback from readers about using Visual Basic rather than C# for the sample code in my Practical .NET column. Ironically, the feedback came just before I started doing a whole bunch of columns with code in C# (the columns are about Entity Framework 6 and I used Entity Framework Power Tools to generate my entity code; sadly, Power Tools for Entity Framework 6 is still in beta and only produces C# code).
But that feedback got me looking for a good conversion tool, and eventually I found one at Telerik. Copy your code, surf to the site, paste your code in the top window, set the direction of the conversion (the default is C# to VB), click the button, and copy your converted code from the bottom window. You're now a dual-language warrior.
But I didn't use that tool in my columns. Honest. I wrote all the code myself. At least, that's the basis I'm using for billing my time.
Posted by Peter Vogel on 04/25/2014
If you're not reading Julie Lerman's Data Points column over at MSDN Magazine, you're missing out (in fact, I just realized that my recent column and her column are almost the same, except that hers covers more stuff). I especially appreciated her columns on Domain-Driven Design. And those columns got me thinking about where I should put my Entity Framework (EF) code.
The problem with code-first EF is writing out all the properties in all the entity classes and then applying the customizations those classes need. It's a lot of work: you don't want to do that more than once for any entity. And once you've got the entity built, its code may well be useful in other applications: you want to reuse it. It's a tempting strategy to package up all of your entity code and your DbContext class in a class library that you can copy from project to project.
The problem with that strategy is that, over time, that class library is going to contain references to every table and relationship in your organization's database. Because of that, your program is going to stop dead the first time you issue a query or call SaveChanges, as EF gathers information about all those tables and relationships (I have a partial solution to that problem coming up in another tip).
But, as Julie suggests in her Domain-Driven Design columns, your applications should use bounded domains: your application should just be using the entities that your application needs. And that led me to think about how to share my EF code while avoiding the performance hit. I realized, for example, that the performance hit isn't related to how many entities I've defined -- it's related to how many entities I've referenced in my DbContext object.
It seems to me that the class library I'm copying from one project to another should contain just the code that defines my entities (i.e., I should think of it as a strategic resource). My DbContext object, on the other hand, should be defined in my application and just reference the entities my application needs (it should be a tactical resource).
There are obviously some wrinkles here. Navigation properties that reference other entities are one example: If you include one entity in your DbContext object, you'll need to include all of the entities its navigation properties reference… and pretty soon all your entities are back in your DbContext object. Another example: EF 6's custom configurations (see my column on using them to make Complex Types useful). Like your entities, those configurations are a lot of work and, because they're part of an entity's definition, should also be treated as a strategic resource.
For navigation properties, partial classes may be the solution. First, define the DbContext object in the application, referencing just the entities you need. Second, define the entities in partial classes in a class library whose source is shared among applications; where necessary, this class library will also hold methods to configure these entities (these methods will accept a DbModelBuilder object). Third, in your application, add partial classes for the entities you're using, but use these classes to define the navigation properties your application needs (and only those properties). Finally, also in your application, in your DbContext's OnModelCreating method, call out to those methods that configure the entities you need. You'll be able to reuse your entities with their configurations and still have a bounded domain that will load as fast as it can.
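Here's a hedged sketch of that arrangement (every class, property, and method name in it is invented). One caveat: partial classes can't span assemblies, so this only works if the shared library's source files are compiled into the application's project (linked source files, for example) rather than referenced as a compiled DLL:

```vb
' --- Shared source files, copied or linked into each application ---
Partial Public Class Customer
  Public Property Id As Integer
  Public Property FirstName As String

  ' Configuration travels with the entity definition
  Public Shared Sub Configure(mb As DbModelBuilder)
    mb.Entity(Of Customer)().Property(Function(c) c.FirstName).IsRequired()
  End Sub
End Class

' --- In the application ---
Partial Public Class Customer
  ' Only the navigation properties this application actually needs
  Public Overridable Property SalesOrders As ICollection(Of SalesOrder)
End Class

Public Class OrderingContext
  Inherits DbContext

  Public Property Customers As DbSet(Of Customer)
  Public Property SalesOrders As DbSet(Of SalesOrder)

  Protected Overrides Sub OnModelCreating(mb As DbModelBuilder)
    ' Call out to the configuration methods for just these entities
    Customer.Configure(mb)
  End Sub
End Class
```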
This sounds promising. I'll try it out in my next project (provided that I can get my client on board, of course. Or maybe not: what they don't know won't hurt me).
Posted by Peter Vogel on 04/21/2014