Control Class ToolTip During Debugging

You're debugging some code and you need to know the value of a string variable. You move your mouse over the variable and -- voila! -- a tooltip appears showing the value of the string.

But, when you think about that, things get complicated. After all, a string has many properties: How did Visual Studio know that the property you're interested in is the value of the string and not, for example, the string's length? And, more importantly, why don't you get that feature with your classes? When you hover the mouse over a variable pointing at one of your classes all that you get is your class' name: Distinctly unhelpful.

You can control what appears in the debugging tooltip in one of two ways. One way is to override your class' ToString method because Visual Studio defaults to calling ToString to generate the debugging message. However, using ToString to support debugging isn't necessarily an option.

For example, I often use ToString to supply the default representation of my class in my user interface (if I add a class to a dropdown list, the list will call my class' ToString method to get some text to display in the list). What I want displayed in my UI and what I want displayed when I'm debugging are often two different things.
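
For comparison, here's what the ToString approach looks like (a minimal sketch, using the same CustomerAddress class that appears below):

Friend Class CustomerAddress
    Public Property Address As String
    Public Property AddressType As String

    'The debugger -- and any UI list control -- will call this
    Public Overrides Function ToString() As String
        Return Address & " (" & AddressType & ")"
    End Function
End Class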

There's a better solution for controlling what appears in the debugging tooltip: the DebuggerDisplay attribute. Just decorate your class with the DebuggerDisplay attribute and pass the attribute a string with property names from the class enclosed in curly braces (you can also surround the property names with additional text if you want).

This example will cause the debugging tooltip to display the Address and Type properties from my CustomerAddress class along with some explanatory text:

Imports System.Diagnostics

<DebuggerDisplay("Address={Address}, Type={AddressType}")>
Friend Class CustomerAddress
    Public Property Address As String
    Public Property AddressType As String
End Class

Now, isn't that more informative?

Posted by Peter Vogel on 01/26/2015 at 10:26 AM


Inheriting Interfaces

It's no secret that I love interfaces (I did a whole column about them once). As you add more interfaces to your application you may find that you have several interfaces that look very much alike. These two, for example:

Public Interface ICustomer
    Property Id As Integer
    Property Name As String
    Sub Buy()
End Interface

Public Interface IVendor
    Property Id As Integer
    Property Name As String
    Sub Sell()
End Interface

But, looking at those interfaces, there's obviously the idea of a "business partner" buried in this design in the shared Id and Name properties. It wouldn't be surprising to find other interfaces in this application that share those Id and Name properties.

You can capture that "business partner" idea (and simplify maintenance of your application) by defining a business partner interface and then having the ICustomer and IVendor interfaces inherit from it. The result looks like this:

Public Interface IBusinessPartner
    Property Id As Integer
    Property Name As String
End Interface

Public Interface ICustomer
    Inherits IBusinessPartner
    Sub Buy()
End Interface

Public Interface IVendor
    Inherits IBusinessPartner
    Sub Sell()
End Interface

There are lots of benefits to building this inheritance structure: You can now extend both the IVendor and ICustomer interfaces with shared members by adding them to IBusinessPartner. If you ever need to add another "business partner" interface, all the common work is done for you: Your new interface just needs to inherit from IBusinessPartner.

Finally, a variable declared as IBusinessPartner will work with any class that implements ICustomer or IVendor, giving your application more flexibility.
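
As a sketch (the Customer class here is hypothetical), code that only cares about the shared members can be written entirely in terms of IBusinessPartner:

Public Class Customer
    Implements ICustomer

    Public Property Id As Integer Implements IBusinessPartner.Id
    Public Property Name As String Implements IBusinessPartner.Name

    Public Sub Buy() Implements ICustomer.Buy
        'buying logic goes here
    End Sub
End Class

'Code that only needs Id and Name can accept any kind of business partner
Dim bp As IBusinessPartner = New Customer With {.Id = 1, .Name = "Acme"}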

Posted by Peter Vogel on 01/22/2015 at 8:59 AM


Control XML Output with SaveOptions

As my column Creating Complex XML Documents with XML Literals indicates, I think your best choice for creating complex XML documents is to use XML Literals with the XElement object. As I note in that column, generating the XML document from an XElement object is easy: Just call the Save method, passing a file name. That gives you a beautiful XML document, with each nested element nicely indented and starting on a new line.

But, of course, if you're just going to pass that document to some other process, then all that "pretty printing" is a waste of time. You're better off passing SaveOptions.DisableFormatting as the second parameter to the Save method, which saves your XML without indentation:

'An XElement created with an XML literal (illustrative content)
Dim elm As XElement = <customer><name>Peter Vogel</name></customer>
elm.Save("c:\test.xml", SaveOptions.DisableFormatting)

There, that's much simpler.
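
By the way, if you want the XML as a string rather than a file, the XElement object's ToString method accepts the same SaveOptions parameter:

Dim xml As String = elm.ToString(SaveOptions.DisableFormatting)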

Posted by Peter Vogel on 01/16/2015 at 9:08 AM


Retrieve Multiple RecordSets in a Single Trip to the Database

I know that I keep going on about this, but the best way to speed up your application is to retrieve all the data you need on each trip to the database and make as few trips to your database as you can. One way to do that when retrieving rows is to retrieve multiple sets of rows on each trip.

This means that you can reduce trips to the database by returning multiple sets of rows from a single stored procedure with a single call. If you're using ADO.NET, you can also combine multiple Select statements in your Command object's CommandText property (just make sure you put a semicolon between the statements):

Dim cmd As New SqlCommand
cmd.Connection = conn  'conn: an already-open SqlConnection
cmd.CommandText = "Select * from Customers; Select * from Countries;"

When you call ExecuteReader to get your DataReader, the DataReader will be processing the first set of records returned from your stored procedure or from the first Select statement in your CommandText:

Dim rdr As SqlDataReader
'Processing customers
rdr = cmd.ExecuteReader()

You can process that DataReader or not -- your choice. When you're ready to process the next set of rows, just call DataReader's NextResult method. This command moves the DataReader to process the Countries that I retrieved:

rdr.NextResult()
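
Putting it all together, the whole pattern looks something like this (a sketch: conn is assumed to be an open SqlConnection, and the column names are hypothetical):

Using rdr As SqlDataReader = cmd.ExecuteReader()
    'First resultset: Customers
    While rdr.Read()
        Console.WriteLine(rdr("CompanyName"))
    End While

    'Second resultset: Countries
    rdr.NextResult()
    While rdr.Read()
        Console.WriteLine(rdr("CountryName"))
    End While
End Using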

Because the way that ADO.NET talks to your back-end database varies from one database engine to another, and depends on how much data you're retrieving, I can't guarantee that each NextResult won't trigger another trip to the database (ideally, all of the data will come down to the client in one trip). But you're guaranteed that you'll only make one trip to the database when you make the initial request, and that's a good thing.

And, as I mentioned in another tip, "Speed Up Your Application by Doubling Up on Database Access," if you want to mix some update commands in with your Select statements, you can do that, too -- saving you even more trips. I wouldn't suggest that combining these tips eliminates the need for stored procedures; I would, however, suggest that you only use stored procedures when you need some control logic mixed in with your SQL statements.

Posted by Peter Vogel on 01/13/2015 at 11:43 AM


The Power of Indexes

As I mentioned in a previous tip, Giving Your Database Updates Enough Time, I had a client contact me with a problem: The updates for an unusually large batch of data in their online application were taking so long that they were timing out. As a short-term fix, we increased the update timeout to just over two minutes, but we all recognized that the right, long-term solution was to reduce the time the updates were taking.

I am a developer, so we discussed some code-based solutions but, before I touched the keyboard, I looked at the database. I wanted to see if I could apply some indexes to speed up processing. I was somewhat surprised to discover that none of the tables had any indexes or primary keys on them (though the tables did ... usually ... have columns that would uniquely identify each row in a table). This wasn't a huge system, but some tables had as many as half a million rows in them.

Without indexes, every Join and every Where clause had to scan the whole table to find the rows it needed. It's a testament to SQL Server that the application ran as fast as it did (and it was certainly "fast enough" -- except for this problem update).

Rather than do any testing or analysis, I just went through the stored procedures involved in the update and added a primary key or an index to each table that was involved in a Join or a Where clause. If the Join or Where clause used two columns from the same table, I created a primary key or index that included both columns.

The results, as is usual with indexes, were miraculous. Well, it certainly looked like a miracle to my client: With no code changes, the update that had been taking over two minutes now took less than fifteen seconds -- almost an order of magnitude improvement. In addition, every other transaction that used those tables now executed faster.

Because the overall load on the database dropped, even transactions that didn't use those indexes were completing marginally faster.

Of course, adding indexes isn't free: Each index is, effectively, a table that needs to be updated. The index must always be updated when a row is inserted into or deleted from the parent table, but updates only affect the index if one of the indexed columns is changed.

Even if you're updating an indexed column, though, the net result for updates is usually positive because the update statement probably includes a Where clause that will run faster when there's an index in place (and, in my experience, the indexed columns are often the columns that are least likely to be updated). I find that I can add up to a half dozen indexes to a table before update performance starts to degrade. Certainly, my client didn't see any impact.

If you're not paying attention to the indexes on your tables, you're missing an opportunity.

Posted by Peter Vogel on 01/08/2015 at 9:58 AM


Giving Your Database Updates Enough Time

My client was having a problem processing some very large batches of data in their online application: Their SQL updates were timing out. I offered to look at ways of fixing the problem (more on that in a later tip) but, in the meantime, the client asked me to see about "fixing the timeout problem."

To give the updates more time to complete, my client had already tried playing with the ConnectionTimeout value in the connection string used to connect to the database. However, that value just controls how long ADO.NET will wait when opening a connection -- it has no effect on the time allowed for an update statement to complete.

To extend the time allowed for your updates, you need to set either the CommandTimeout property on the ADO.NET Command object or, if you're using Entity Framework and LINQ, the CommandTimeout property on the ObjectContext object.

In ADO.NET, the code looks like this:

Dim cmdUpdateStatus As New SqlCommand
cmdUpdateStatus.CommandTimeout = 120  'value is in seconds

With LINQ and Entity Framework using the ObjectContext, the code looks like this:

Dim doc As New MyObjectContext
doc.CommandTimeout = 120  'also in seconds

With LINQ and Entity Framework using the DbContext, the code is a little more complicated:

Dim dbc = New MyDbContext
'IObjectContextAdapter lives in System.Data.Entity.Infrastructure
Dim oc As ObjectContext = CType(dbc, IObjectContextAdapter).ObjectContext
oc.CommandTimeout = 120

Setting CommandTimeout to 0 will cause your application to wait until your update command completes, however long that takes. As tempting as that option is, you could wait forever, so I don't recommend it: set a value, however big, for your timeout so that you know it will, eventually, come to an end.

Posted by Peter Vogel on 12/18/2014 at 10:05 AM


Getting the Second Last Thing You Copied

You copy something in Visual Studio and, before you can paste it, you realize that you need to copy something else. Or, worse, you copy something, forget what you're doing, copy something else, go to paste ... and realize that you've lost that first thing you wanted.

Good news! The Visual Studio clipboard actually remembers the last 20 things you cut or copied. To access that history, instead of pressing Ctrl+V to paste your item, just press Ctrl+Shift+V. The first time you press that combination, you'll paste the last thing you cut or copied; the second time you press it, you'll paste the second last thing you copied right over top of the first item; the third time you press it ... you get the picture.

This feature even works if you press Ctrl+V the first time. So, if you paste something you don't want, just switch to Ctrl+Shift+V. The first press will get you what you just pasted, but your second press will start working you back through your "copy history."

So, go ahead and copy that other thing -- you'll be able to get back to the item currently sitting in the clipboard when you need it.

Posted by Peter Vogel on 12/11/2014 at 1:51 PM


What's Important in Coding Conventions

I was having an e-mail exchange with a reader who commented that he was having trouble following my coding conventions. Among other issues, I seemed to arbitrarily capitalize variable names. I explained that, in these columns, I deliberately vary my coding conventions from article to article, just to avoid looking like I'm endorsing any particular style (one of the reasons that I don't use C# in most of my columns is that I'm an "every curly brace on its own line" kind of guy and I don't want to get into an argument about it).

You can probably assume that anything that does turn up regularly in these columns is there because it's mandated by Visual Studio Magazine.

But there's another reason for the variation you see: Often the code in these columns is drawn from projects that I'm doing for a client and, since I've just copied the code and obfuscated its origin, the code reflects my client's coding conventions. And that made me notice something: As I move from client to client, I keep changing my coding conventions and, you know what? All the coding conventions I use seem good to me. And that, in turn, got me thinking: What does matter in a coding convention?

There's actually some research around this, if you're interested. The earliest work, I think, is in Gerald Weinberg's The Psychology of Computer Programming but the most complete review I know of is in Steve McConnell's Code Complete.

It turns out that the only thing that makes a difference is consistency (or, if you prefer, simplicity). The more special cases and exceptions a set of coding conventions includes, the more likely it is that the conventions won't be followed correctly by programmers writing the code or won't be understood by programmers reading the code.

The most successful coding conventions (the ones that programmers comply with, implement correctly, and find useful when reading code) are short and have no exceptions. As an example, the Microsoft C# Coding Conventions would probably cover three pages if printed out (and less than a page if you omitted the examples). Given the research available, that seems about right.

Posted by Peter Vogel on 12/09/2014 at 1:51 PM


Finding Where a Method, Property, or Variable Is Used

I think everybody knows that if you click on a variable, method, or property name and press F12 (or select Go To Definition from the pop-up menu) you'll be taken to the method or property's code or to the variable's declaration. But sometimes you want to see the reverse: All the places where a property, method, or variable is being used -- and not everyone seems to know about that.

That's too bad because it's a cool feature of Visual Studio: Just click on a method (or property or variable), press Shift+F12 (or select Find All References) and Visual Studio opens a new window below the editor window, listing all the places where the method is used. If you double-click on one of the items in the list, you'll be taken to the code. With that list, therefore, you can step through all the places in your application that you might want to look at to understand how the method is used (the definition of the method appears at the top of the list, by the way, so you can go there, also).

The window stays visible until you close it or do something that moves another window in the group to the top (building your application pops the Output window over the list, for example). The window is still there if you want to go back to it, however: It's called Find Symbol Results.

Posted by Peter Vogel on 12/04/2014 at 1:51 PM


Restricting Columns Retrieved in Entity Framework

A couple of months ago, I wrote a column on how to avoid downloading columns in a table that has hundreds of columns or columns containing large objects (or, at least, only downloading those columns when you want them). But that solution only makes sense when getting the columns you want is something that you'll be doing frequently.

If, on the other hand, you have exactly one place in your application where all you want to get is, for example, the Customer's first and last names, then there's a simpler solution: Just define a class that has the columns you want.

Two caveats: First, you can't do updates through the objects you've retrieved using this technique. Second, don't expect to get a huge performance gain from this unless you're avoiding retrieving many other columns or the columns you're avoiding are blob columns.

As an example, to get the Customer's first and last name columns I'd begin by defining a class, outside of my Entity Framework model, to hold just those columns:

Public Class CustFirstLastName
   Public Property FirstName As String
   Public Property LastName As String
End Class

Now, I write a LINQ query to retrieve just those two columns by instantiating the class in my LINQ statement's Select clause and setting its properties with values retrieved through Entity Framework. This code assumes that my DbContext object (in the db variable in this example) has a collection called Customers:

Dim lastFirstNames = From c In db.Customers
                     Select New CustFirstLastName With {
                         .FirstName = c.FirstName,
                         .LastName = c.LastName
                     }
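
The objects that come back are ordinary CustFirstLastName instances, so you can process them like any other collection:

For Each cn As CustFirstLastName In lastFirstNames
    Console.WriteLine(cn.LastName & ", " & cn.FirstName)
Next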

The SQL generated by Entity Framework to get the data from the database will just grab the FirstName and LastName columns because that's all that's been used in the Select statement.

If you're new to Entity Framework, you probably consider this obvious -- that's what EF should do. But, in the early days of EF, this wasn't the behavior you got: EF always retrieved all the columns specified in the entity class (in this case, whatever class makes up that Customers collection). EF's gotten smarter since then and you can take advantage of it.

But, as I said, my CustFirstLastName class is not part of my entity model. If I make changes to the CustFirstLastName object's properties and call my DbContext object's SaveChanges method, those changes will not be transferred back to the database. To update the database, I need to make my changes to whatever objects are in the Customers collection in my sample code.

Posted by Peter Vogel on 12/02/2014 at 1:51 PM


Passing Exception Information

In the bad old days, when an application threw an exception, we frequently extracted the system-generated message and put it on the screen for the user to read. Often it included information that we'd prefer not to share with the outside world (table names and details of the connection string, for instance).

A better practice is to generate an application-specific message that reveals just what you want. And, unlike most system messages that describe what's wrong, your message could tell the user something useful: what to do to solve the problem. A unique message will also help you identify where things have gone wrong in your application. The right answer is to create your own Exception object with a unique message:

Try
  ...code...
Catch Ex As Exception
  Throw New Exception("Something has gone horribly wrong")
End Try

However, when you're debugging, the information you need to prevent the exception from happening again is in the original exception object.

As some readers pointed out to me in comments to an earlier tip, the right answer is to pass the original exception as the second parameter to the Exception object's constructor. Enhancing my earlier code, the result looks like this:

Try
  ...code...
Catch Ex As Exception
  Throw New Exception("Something has gone horribly wrong", Ex)
End Try

The Exception object you pass as the second parameter will show up in the InnerException property of the Exception object you're creating.
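
Further up the call stack (or in the debugger), that property gets you back to the original exception. A minimal sketch, with a hypothetical UpdateCustomer method:

Try
    UpdateCustomer()  'hypothetical method that throws the wrapped exception
Catch Ex As Exception
    Console.WriteLine(Ex.Message)  'your application-specific message
    If Ex.InnerException IsNot Nothing Then
        Console.WriteLine(Ex.InnerException.Message)  'the original system message
    End If
End Try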

Posted by Peter Vogel on 11/13/2014 at 1:51 PM

