Handle Conflicting Namespaces with Aliases

Every once in a long while, I run into a class name that appears in two different namespaces (there are other kinds of namespace confusion I can run into, but this is the most common).

If I use ConfigurationManager, for example, the compiler insists that I either qualify the class name with the namespace (as Configuration.ConfigurationManager) or qualify a bunch of other ConfigurationManager classes with their namespace. I end up with this code:

  Configuration.ConfigurationManager.ConnectionStrings(...

Tacking the whole namespace onto the front of the class name is just the most obvious way to solve the problem. If I'm only using the class once, that's not a problem ... but if I have to repeatedly type in the namespace-plus-class-name, things get very boring very fast.

If you find yourself in that spot, you can assign your namespace a short alias and use it in place of the namespace. Here's what assigning an alias looks like in Visual Basic:

Imports sc = System.Configuration

And in C#:

using sc = System.Configuration;

Now, you can use your alias wherever you had to type in the namespace to save yourself some typing. My code could now look like this:

sc.ConfigurationManager.ConnectionStrings(...
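
And if it helps to see the alias in context, here's a minimal, self-contained sketch (the "Northwind" connection string name is just a made-up example):

Imports sc = System.Configuration

Module AliasDemo
    Sub Main()
        ' The alias stands in for System.Configuration throughout this file
        Dim setting As sc.ConnectionStringSettings =
            sc.ConfigurationManager.ConnectionStrings("Northwind")
        Console.WriteLine(setting.ConnectionString)
    End Sub
End Module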

I hope that clears things up a bit.

Posted by Peter Vogel on 02/18/2015


Simple ForEach Processing on Lists

If you want to process all the items in a list, you can write a For Each loop ... or you can just call the List's ForEach method and pass it a lambda expression containing the processing you want. This code, for example, sets the OrderStatus property on a list of StatusChange objects to Ordered:

  Dim statuses As New List(Of StatusChange)
  statuses.Add(New StatusChange)
  statuses.Add(New StatusChange)
  statuses.ForEach(Sub(s) s.OrderStatus = "Ordered")

The ForEach method is a nice feature. It's a shame more collections don't have it.
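
As a small aside (my addition, not part of the original tip): arrays don't have an instance ForEach, but the Array class has a shared equivalent that works the same way:

  ' Shared version of ForEach for arrays, reusing the statuses list from above
  Dim changes() As StatusChange = statuses.ToArray()
  Array.ForEach(changes, Sub(s) s.OrderStatus = "Ordered")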

Posted by Peter Vogel on 02/16/2015


Cut or Copy in One Keystroke

This could have been the simplest tip I've ever written: In Visual Studio, if you just want to cut or copy one line, you don't have to select the line. All you have to do is put your cursor on the line and press Control+X or Control+C. Visual Studio will cut or copy the whole line, including the carriage return.

Here's why this isn't the simplest tip I've ever written: There's a downside to this feature. In any other application you must select something before cutting or copying. If you haven't selected anything and accidentally press Ctrl+X or Ctrl+C then nothing happens. Critically, this means that an accidental cut or copy won't cause you to lose what's on the clipboard: No harm, no foul.

That's not what happens in Visual Studio: Visual Studio will always cut or copy something when you press Ctrl+X or Ctrl+C (even if you're on a blank line, Visual Studio will cut or copy the carriage return for the line). This means that if you do an inadvertent cut or copy then you're going to lose whatever you had on the clipboard. When you do make an inadvertent cut or copy and lose what's on the clipboard, you can get back to it by using Shift+Ctrl+V when you paste (the subject of an earlier tip).

You can't completely turn this feature off, but you can ameliorate the impact of an inadvertent cut or copy by telling Visual Studio not to cut or copy blank lines. Go to Tools | Options | Text Editor | All Languages and uncheck "Apply Cut or Copy commands to blank lines when there is no selection."

Posted by Peter Vogel on 02/12/2015


Fill a String with Characters

Sometimes you need a string that's filled with a specific number of characters. There are lots of ways to do that but the easiest is to use the New keyword with the String class because the New keyword gives you access to the String object's constructors.

In fact, the String class has three constructors. The first one accepts a Char array and initializes the string with the characters in the array. This one initializes the string to four equals signs:

  x = New String("====")

Of course, that's not much of an improvement over what you'd do normally:

  X = "===="

But the second constructor is more useful: It accepts a character and an integer, and repeats the character the number of times specified by the integer. This example initializes the string with however many equals signs are specified by initCount:

  x = New String("=", initCount)

The third constructor is the most interesting, though I doubt that I'll ever use it. The third constructor lets you initialize the string with a set of characters from a Char array beginning at some point in the array and for some number of characters. This example initializes the string with the digits from 123456789, starting at the position specified in initStart and for the length specified in initLength:

  x = New String("123456789", initStart, initLength)

If initStart was set to 2 and initLength was set to 4, then x would be set to "3456" (the start position is zero-based).
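
Here are all three constructors in one quick sketch (the variable names are mine), with the resulting values shown in the comments:

  Dim bar As New String("====".ToCharArray())                ' "===="
  Dim rule As New String("="c, 10)                           ' "=========="
  Dim slice As New String("123456789".ToCharArray(), 2, 4)   ' "3456"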

Posted by Peter Vogel on 02/09/2015


Support Remote Clients with a Custom Exception Object

The custom Exception class I described in a column earlier this month will work fine … as long as the .NET Framework doesn't need to serialize your Exception object to return it to a remote client. If you want to make the extra effort, you can add serialization support to your custom Exception class.

To support serialization you first need to decorate your Exception class with the Serializable attribute. If you haven't added any custom properties to your Exception class, that attribute (plus the deserialization constructor described at the end of this tip) is all you need.

But if you do have custom properties on your Exception class, then you must do two things to get them serialized. First, you must override the base Exception object's GetObjectData method. In that method you should call the base version of the method to ensure that the default serialization is performed.

After that, you need to add the values of any custom properties to the SerializationInfo parameter passed to the method, saving each value under some name you make up. Here's a version of the method that adds a value called BadOption from a variable in the class to the serialized version of the object:

Private _Option As String
Public Overrides Sub GetObjectData(info As SerializationInfo, 
                                   context As StreamingContext)
  MyBase.GetObjectData(info, context)

  If info IsNot Nothing Then
    info.AddValue("BadOption", Me._Option)
  End If

End Sub

You also need to add a constructor that the .NET Framework will call during the deserialization process. In that constructor you need to extract your value from the SerializationInfo, using the name you saved the value under. Once you've retrieved the value you can then update your custom property with it. This example retrieves and updates my BadOption value:

Protected Sub New(SerializationInfo As SerializationInfo,
                  StreamingContext As StreamingContext)
  MyBase.New(SerializationInfo, StreamingContext)

  If SerializationInfo IsNot Nothing Then
    Me._Option = SerializationInfo.GetString("BadOption")
  End If

End Sub
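
Putting the pieces together, the skeleton of such a class looks something like this (the class name and the public constructor are my own sketch of how the parts fit, not code from the earlier column):

Imports System.Runtime.Serialization

<Serializable()>
Public Class BadOptionException
    Inherits Exception

    Private _Option As String

    Public Sub New(message As String, badOption As String)
        MyBase.New(message)
        Me._Option = badOption
    End Sub

    ' Called when the Framework serializes the exception for a remote client
    Public Overrides Sub GetObjectData(info As SerializationInfo,
                                       context As StreamingContext)
        MyBase.GetObjectData(info, context)
        If info IsNot Nothing Then
            info.AddValue("BadOption", Me._Option)
        End If
    End Sub

    ' Called by the Framework during deserialization
    Protected Sub New(info As SerializationInfo, context As StreamingContext)
        MyBase.New(info, context)
        If info IsNot Nothing Then
            Me._Option = info.GetString("BadOption")
        End If
    End Sub
End Class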

Let me know how helpful this is in the comment section, or send me e-mail!

Posted by Peter Vogel on 02/06/2015


Reformat Your Whole Code File in One Step

Often after I've cut or pasted some text, I find that my code isn't formatted correctly any more. As long as your code is syntactically correct (i.e. no stray brackets or End Ifs without matching Ifs), Visual Studio will reformat your whole file with one key chord: Hold down the Control key and then press K, followed by D.

Boom! Everything looks pretty again.

Posted by Peter Vogel on 01/30/2015


Control Class ToolTip During Debugging

You're debugging some code and you need to know the value of a string variable. You move your mouse over the variable and -- voila! -- a tooltip appears showing the value of the string.

But, when you think about that, things get complicated. After all, a string has many properties: How did Visual Studio know that the property you're interested in is the value of the string and not, for example, the string's length? And, more importantly, why don't you get that feature with your classes? When you hover the mouse over a variable pointing at one of your classes all that you get is your class' name: Distinctly unhelpful.

You can control what appears in the debugging tooltip in one of two ways. One way is to override your class' ToString method because Visual Studio defaults to calling ToString to generate the debugging message. However, using ToString to support debugging isn't necessarily an option.

For example, I often use ToString to supply the default representation of my class in my user interface (if I add a class to a dropdown list, the list will call my class' ToString method to get some text to display in the list). What I want displayed in my UI and what I want displayed when I'm debugging are often two different things.
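
For what it's worth, the ToString approach looks something like this (a minimal sketch using the same CustomerAddress class that appears in the next example):

Friend Class CustomerAddress
    Public Property Address As String
    Public Property AddressType As String

    ' Whatever ToString returns is what the debugger (and any bound UI list) displays
    Public Overrides Function ToString() As String
        Return Address & " (" & AddressType & ")"
    End Function
End Class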

There's a better solution for controlling what appears in the debugging tooltip: the DebuggerDisplay attribute. Just decorate your class with the DebuggerDisplay attribute and pass the attribute a string with property names from the class enclosed in curly braces (you can also surround the property names with additional text if you want).

This example will cause the debugging tooltip to display the Address and Type properties from my CustomerAddress class along with some explanatory text:

Imports System.Diagnostics

<DebuggerDisplay("Address={Address},Type={AddressType}")>
Friend Class CustomerAddress
    Public Property Address As String
    Public Property AddressType As String
End Class

Now, isn't that more informative?

Posted by Peter Vogel on 01/26/2015


Inheriting Interfaces

It's no secret that I love interfaces (I did a whole column about them once). As you add more interfaces to your application you may find that you have several interfaces that look very much alike. These two, for example:

Public Interface ICustomer
    Property Id As Integer
    Property Name As String
    Sub Buy()
End Interface

Public Interface IVendor
    Property Id As Integer
    Property Name As String
    Sub Sell()
End Interface

But, looking at those interfaces, there's obviously the idea of a "business partner" buried in this interface design, in the shared Id and Name properties. It wouldn't be surprising to find that there are other interfaces in this application that share those Id and Name properties.

You can implement that "business partner" (and simplify maintenance of your application) by defining the business partner interface and then having the ICustomer and IVendor interfaces inherit from it. The result would look like this:

Public Interface IBusinessPartner
    Property Id As Integer
    Property Name As String
End Interface

Public Interface ICustomer
    Inherits IBusinessPartner
    Sub Buy()
End Interface

Public Interface IVendor
    Inherits IBusinessPartner
    Sub Sell()
End Interface

There are lots of benefits to building this inheritance structure: You can now extend both the IVendor and ICustomer interfaces with shared members by adding them to IBusinessPartner. If you ever need to add another "business partner" interface, all the common work is done for you: Your new interface just needs to inherit from IBusinessPartner.

Finally, a variable declared as IBusinessPartner will work with any class that implements ICustomer or IVendor, giving your application more flexibility.
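
To make that concrete, here's a minimal sketch (the Customer class and the sample values are mine, not part of the original interfaces):

Public Class Customer
    Implements ICustomer

    Public Property Id As Integer Implements IBusinessPartner.Id
    Public Property Name As String Implements IBusinessPartner.Name

    Public Sub Buy() Implements ICustomer.Buy
        ' buying logic goes here
    End Sub
End Class

Public Module PartnerDemo
    ' This method doesn't care whether it's passed a customer or a vendor
    Sub ShowPartner(bp As IBusinessPartner)
        Console.WriteLine(bp.Id & ": " & bp.Name)
    End Sub

    Sub Main()
        ShowPartner(New Customer With {.Id = 1, .Name = "Acme"})
    End Sub
End Module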

Posted by Peter Vogel on 01/22/2015


Control XML Output with SaveOptions

As my column Creating Complex XML Documents with XML Literals indicates, I think your best choice for creating complex XML documents is to use XML Literals with the XElement object. As I note in that column, generating the XML document from an XElement object is easy: Just call the Save method, passing a file name. That gives you a beautiful XML document, with each nested element nicely indented and starting on a new line.

But, of course, if you're just going to pass that document to some other process, then all that "pretty printing" is a waste of time. You're better off passing SaveOptions.DisableFormatting as the second parameter to the Save method, which saves your XML without indentation:

' Initialize elm with any XML literal (this one is just an example) ...
Dim elm As XElement = <customer><name>Acme</name></customer>
' ... then save it without indentation:
elm.Save("c:\test.xml", SaveOptions.DisableFormatting)

There, that's much simpler.

Posted by Peter Vogel on 01/16/2015


Retrieve Multiple RecordSets in a Single Trip to the Database

I know that I keep going on about this, but the best way to speed up your application is to retrieve all the data you need on each trip to the database and make as few trips to your database as you can. One way to do that when retrieving rows is to retrieve multiple sets of rows on each trip.

This means you can reduce trips to the database by returning multiple sets of rows from a single stored procedure with a single call. If you're using ADO.NET, you can also combine multiple Select statements in your Command object's CommandText property (just make sure you put a semicolon between the statements):

Dim cmd As New SqlCommand
cmd.CommandText = "Select * from Customers; Select * from Countries;"

When you call ExecuteReader to get your DataReader, the DataReader will be processing the first set of records returned from your stored procedure or from the first Select statement in your CommandText:

Dim rdr As SqlDataReader
'Processing customers
rdr = cmd.ExecuteReader()

You can process that DataReader or not -- your choice. When you're ready to process the next set of rows, just call DataReader's NextResult method. This command moves the DataReader to process the Countries that I retrieved:

rdr.NextResult()
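
If it helps to see the whole pattern in one place, here's a hedged sketch (the connection string variable and the row processing are placeholders) that walks both result sets:

' Requires Imports System.Data.SqlClient
Using conn As New SqlConnection(connectionString)
    Using cmd As New SqlCommand(
        "Select * from Customers; Select * from Countries;", conn)
        conn.Open()
        Using rdr As SqlDataReader = cmd.ExecuteReader()
            While rdr.Read()
                ' process a Customers row here
            End While

            rdr.NextResult() ' move on to the Countries rows

            While rdr.Read()
                ' process a Countries row here
            End While
        End Using
    End Using
End Using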

Because the way ADO.NET talks to your back-end database varies from one database engine to another, and depends on how much data you're retrieving, I can't guarantee that each NextResult won't trigger another trip to the database (ideally, all of the data will come down to the client in one trip). But you're guaranteed that you'll only make one trip to the database when you make the initial request, and that's a good thing.

And, as I mentioned in another tip, "Speed Up Your Application by Doubling Up on Database Access," if you want to mix some update commands in with your Select statements, you can do that, too -- saving you even more trips. I wouldn't suggest that combining these tips eliminates the need for stored procedures; I would, however, suggest that you only use stored procedures when you need some control logic mixed in with your SQL statements.

Posted by Peter Vogel on 01/13/2015


The Power of Indexes

As I mentioned in a previous tip, Giving Your Database Updates Enough Time, I had a client contact me with a problem: The updates for an unusually large batch of data in their online application were taking so long that they were timing out. As a short-term fix, we increased the update timeout to just over two minutes, but we all recognized that the right, long-term solution was to reduce the time the updates were taking.

I am a developer, so we discussed some code-based solutions, but before I touched the keyboard, I looked at the database. I wanted to see if I could apply some indexes to speed up processing. I was somewhat surprised to discover that none of the tables had any indexes or primary keys on them (though the tables did ... usually ... have columns that would uniquely identify each row). This wasn't a huge system, but some tables had as many as half a million rows in them.

Without indexes, every Join and every Where clause had to scan the whole table to find the rows it needed. It's a testament to SQL Server that the application ran as fast as it did (and it was certainly "fast enough" -- except for this problem update).

Rather than do any testing or analysis, I just went through the stored procedures involved in the update and added a primary key or an index to each table that was involved in a Join or a Where clause. If the Join or Where clause used two columns from the same table, I created a primary key or index that included both columns.
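
If you'd rather script that kind of change than click through SQL Server Management Studio, here's a hedged sketch of adding one such index from VB (the table, column, and connection string names are made up for illustration):

' Requires Imports System.Data.SqlClient; the names below are hypothetical
Using conn As New SqlConnection(connectionString)
    Using cmd As New SqlCommand(
        "CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId);", conn)
        conn.Open()
        cmd.ExecuteNonQuery() ' one-time DDL change; run once, not per request
    End Using
End Using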

The results, as is usual with indexes, were miraculous. Well, it certainly looked like a miracle to my client: With no code changes, the update that had been taking over two minutes now took less than fifteen seconds -- almost an order of magnitude faster. In addition, every other transaction that used those tables now executed faster.

Because the overall load on the database dropped, even transactions that didn't use those indexes were completing marginally faster.

Of course, adding indexes isn't free: Each index is, effectively, a table that needs to be maintained. The index must be updated whenever a row is inserted into or deleted from the parent table, but updates only affect the index if one of the indexed columns is changed.

Even if you're updating an indexed column, though, the net result for updates is usually positive because the update statement probably includes a Where clause that will run faster when there's an index in place (and, in my experience, the indexed columns are often the columns that are least likely to be updated). I find that I can add up to a half dozen indexes to a table before update performance starts to degrade. Certainly, my client didn't see any impact.

If you're not paying attention to the indexes on your tables, you're missing an opportunity.

Posted by Peter Vogel on 01/08/2015


Giving Your Database Updates Enough Time

My client was having a problem processing some very large batches of data in their online application: Their SQL updates were timing out. I offered to look at ways of fixing the problem (more on that in a later tip) but, in the meantime, the client asked me to see about "fixing the timeout problem."

To give the updates more time to complete, my client had already tried playing with the ConnectionTimeout value in the connection string used to connect to the database. However, that value just controls how long ADO.NET will wait when opening a connection -- it has no effect on the time allowed for an update statement to complete.

To give your update statements more time, you need to set either the CommandTimeout property on the ADO.NET Command object or, if you're using Entity Framework and LINQ, the CommandTimeout property on the ObjectContext object.

In ADO.NET, the code looks like this:

Dim cmdUpdateStatus As New SqlCommand
cmdUpdateStatus.CommandTimeout = 120

With LINQ and Entity Framework with the ObjectContext, the code looks like this:

Dim doc As New MyObjectContext
doc.CommandTimeout = 120

With LINQ and Entity Framework with the DbContext, the code is a little more complicated:

Dim dbc = New MyDbContext
Dim oc As ObjectContext = CType(dbc, IObjectContextAdapter).ObjectContext
oc.CommandTimeout = 120

Setting CommandTimeout to 0 will cause your application to wait until your update command completes, however long that takes. As tempting as that option is, you could wait forever, so I don't recommend it: set a value, however big, for your timeout so that you know it will, eventually, come to an end.

Posted by Peter Vogel on 12/18/2014

