This could have been the simplest tip I've ever written: In Visual Studio, if you just want to cut or copy one line, you don't have to select the line. All you have to do is put your cursor on the line and press Ctrl+X or Ctrl+C. Visual Studio will cut or copy the whole line, including the carriage return.
Here's why this isn't the simplest tip I've ever written: There's a downside to this feature. In any other application you must select something before cutting or copying. If you haven't selected anything and accidentally press Ctrl+X or Ctrl+C then nothing happens. Critically, this means that an accidental cut or copy won't cause you to lose what's on the clipboard: No harm, no foul.
That's not what happens in Visual Studio: Visual Studio will always cut or copy something when you press Ctrl+X or Ctrl+C (even if you're on a blank line, Visual Studio will cut or copy the carriage return for the line). This means that if you do an inadvertent cut or copy then you're going to lose whatever you had on the clipboard. When you do make an inadvertent cut or copy and lose what's on the clipboard, you can get back to it by using Shift+Ctrl+V when you paste (the subject of an earlier tip).
You can't completely turn this feature off, but you can ameliorate the impact of the inadvertent cut or copy by telling Visual Studio not to cut or copy blank lines. Go to Tools | Options | Text Editor | All Languages and uncheck "Apply Cut or Copy commands to blank lines when there is no selection."
Posted by Peter Vogel on 02/12/2015 at 10:09 AM
Sometimes you need a string that's filled with a specific number of characters. There are lots of ways to do that but the easiest is to use the New keyword with the String class because the New keyword gives you access to the String object's constructors.
In fact, the String class has several constructors; here are the three most useful. The first one initializes the string from an array of characters. This one initializes the string to four equals signs:
x = New String("====".ToCharArray())
Of course, that's not much of an improvement over what you'd do normally:
x = "===="
But the second constructor is more useful because it accepts a character and an integer, and then repeats the character the number of times specified by the integer. This example initializes the string with however many equals signs are specified by initCount:
x = New String("="c, initCount)
The third constructor is the most interesting, though I doubt that I'll ever use it. The third constructor lets you initialize the string with a set of characters from a Char array beginning at some point in the array and for some number of characters. This example initializes the string with the digits from 123456789, starting at the position specified in initStart and for the length specified in initLength:
x = New String("123456789".ToCharArray(), initStart, initLength)
If initStart were set to 2 and initLength were set to 4, x would be set to "3456" (the start position is zero-based).
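Put together, a minimal sketch of all three overloads (the variable names here are mine, and the explicit Char conversions keep the code happy under Option Strict On):

```vbnet
' The Char-array, repeat-count, and slice constructors:
Dim bar As New String("====".ToCharArray())              ' "===="
Dim pad As New String("="c, 5)                           ' "====="
Dim slice As New String("123456789".ToCharArray(), 2, 4) ' "3456"
```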
Posted by Peter Vogel on 02/09/2015 at 7:44 PM
The custom Exception class I described in a column earlier this month will work fine … as long as the .NET Framework doesn't need to serialize your Exception object to return it to a remote client. If you want to make the extra effort, you can add serialization support to your custom Exception class.
To support serialization you first need to decorate your Exception class with the Serializable attribute. And, if you haven't added any custom properties to your Exception class, that's all you need to do.
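As a minimal sketch (the OrderException name is my own, not from the earlier column), decorating the class is all it takes when there are no custom properties:

```vbnet
<Serializable()>
Public Class OrderException
    Inherits Exception

    Public Sub New(message As String)
        MyBase.New(message)
    End Sub
End Class
```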
But if you do have custom properties on your Exception class, then you must do two things. First, you must override the base Exception object's GetObjectData method. In that method you should call the base version of the method to ensure that the default serialization is performed.
After that, you need to add the values of any custom properties to the SerializationInfo parameter passed to the method, saving the parameters under some name you make up. Here's a version of the method that adds a value called BadOption from a variable in the class to the serialized version of the object:
Private _Option As String

Public Overrides Sub GetObjectData(info As SerializationInfo,
                                   context As StreamingContext)
  If info IsNot Nothing Then
    info.AddValue("BadOption", Me._Option)
  End If
  MyBase.GetObjectData(info, context)
End Sub
You also need to add a constructor that the .NET Framework will call during the deserialization process. In that constructor you need to extract your value from the SerializationInfo, using the name you saved the value under. Once you've retrieved the value you can then update your custom property with it. This example retrieves and updates my BadOption value:
Protected Sub New(info As SerializationInfo,
                  context As StreamingContext)
  MyBase.New(info, context)
  If info IsNot Nothing Then
    Me._Option = info.GetString("BadOption")
  End If
End Sub
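To convince yourself the plumbing works, you can round-trip the exception through a formatter. A quick sketch (the OrderException class name is illustrative; it's assumed to be the Serializable class carrying the _Option field and the two members above):

```vbnet
Imports System.IO
Imports System.Runtime.Serialization.Formatters.Binary

' Serialize the exception to a stream and read it back:
Dim fmt As New BinaryFormatter()
Using ms As New MemoryStream()
    fmt.Serialize(ms, New OrderException("Something went wrong"))
    ms.Position = 0
    Dim ex = CType(fmt.Deserialize(ms), OrderException)
    ' ex's custom BadOption value survived the round trip
End Using
```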
Let me know how helpful this is in the comment section, or send me e-mail!
Posted by Peter Vogel on 02/06/2015 at 7:44 PM
Often after I've cut or pasted some text, I find that my code isn't formatted correctly any more. As long as your code is syntactically correct (i.e. no stray brackets or End Ifs without matching Ifs), Visual Studio will reformat your whole file with one key chord: Hold down the Control key and then press K, followed by D.
Boom! Everything looks pretty again.
Posted by Peter Vogel on 01/30/2015 at 10:18 AM
You're debugging some code and you need to know the value of a string variable. You move your mouse over the variable and -- voila! -- a tooltip appears showing the value of the string.
But, when you think about that, things get complicated. After all, a string has many properties: How did Visual Studio know that the property you're interested in is the value of the string and not, for example, the string's length? And, more importantly, why don't you get that feature with your classes? When you hover the mouse over a variable pointing at one of your classes all that you get is your class' name: Distinctly unhelpful.
You can control what appears in the debugging tooltip in one of two ways. One way is to override your class' ToString method because Visual Studio defaults to calling ToString to generate the debugging message. However, using ToString to support debugging isn't necessarily an option.
For example, I often use ToString to supply the default representation of my class in my user interface (if I add a class to a dropdown list, the list will call my class' ToString method to get some text to display in the list). What I want displayed in my UI and what I want displayed when I'm debugging are often two different things.
There's a better solution for controlling what appears in the debugging tooltip: the DebuggerDisplay attribute. Just decorate your class with the DebuggerDisplay attribute and pass the attribute a string with property names from the class enclosed in curly braces (you can also surround the property names with additional text if you want).
This example will cause the debugging tooltip to display the Address and AddressType properties from my CustomerAddress class along with some explanatory text:
<DebuggerDisplay("Address: {Address}, Type: {AddressType}")>
Friend Class CustomerAddress
  Public Property Address As String
  Public Property AddressType As String
End Class
Now, isn't that more informative?
Posted by Peter Vogel on 01/26/2015 at 10:26 AM
It's no secret that I love interfaces (I did a whole column about them once). As you add more interfaces to your application you may find that you have several interfaces that look very much alike. These two, for example:
Public Interface ICustomer
  Property Id As Integer
  Property Name As String
End Interface

Public Interface IVendor
  Property Id As Integer
  Property Name As String
End Interface
But, looking at those interfaces, there's obviously the idea of a "business partner" buried in this interface design in the shared Id and Name properties. It wouldn't be surprising to find that there are other interfaces in this application that share those Id and Name properties.
You can implement that "business partner" (and simplify maintenance of your application) by defining the business partner interface and then having the ICustomer and IVendor interfaces inherit from it. The result would look like this:
Public Interface IBusinessPartner
  Property Id As Integer
  Property Name As String
End Interface

Public Interface ICustomer
  Inherits IBusinessPartner
End Interface

Public Interface IVendor
  Inherits IBusinessPartner
End Interface
There are lots of benefits to building this inheritance structure: You can now extend both the IVendor and ICustomer interfaces with shared members by adding them to IBusinessPartner. If you ever need to add another "business partner" interface, all the common work is done for you: Your new interface just needs to inherit from IBusinessPartner.
Finally, a variable declared as IBusinessPartner will work with any class that implements ICustomer or IVendor, giving your application more flexibility.
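A class picks up the inherited members automatically; here's a minimal sketch (the Customer class is illustrative):

```vbnet
Public Class Customer
    Implements ICustomer

    ' Id and Name are declared in IBusinessPartner, so the
    ' Implements clauses point there:
    Public Property Id As Integer Implements IBusinessPartner.Id
    Public Property Name As String Implements IBusinessPartner.Name
End Class
```

A variable declared as IBusinessPartner can then hold that Customer: Dim bp As IBusinessPartner = New Customer With {.Id = 1, .Name = "Acme"}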
Posted by Peter Vogel on 01/22/2015 at 8:59 AM
As my column Creating Complex XML Documents with XML Literals indicates, I think your best choice for creating complex XML documents is to use XML Literals with the XElement object. As I note in that column, generating the XML document from an XElement object is easy: Just call the Save method, passing a file name. That gives you a beautiful XML document, with each nested element nicely indented and starting on a new line.
But, of course, if you're just going to pass that document to some other process then all that "pretty printing" is a waste of time. You're better off passing SaveOptions.DisableFormatting as the second parameter to the Save method, which saves your XML without indentation:
Dim elm As XElement
' ...build the document with XML literals...
elm.Save("customers.xml", SaveOptions.DisableFormatting)
There, that's much simpler.
Posted by Peter Vogel on 01/16/2015 at 9:08 AM
I know that I keep going on about this, but the best way to speed up your application is to retrieve all the data you need on each trip to the database and make as few trips to your database as you can. One way to do that when retrieving rows is to retrieve multiple sets of rows on each trip.
This means you can reduce trips to the database by returning multiple sets of rows from a single call to a single stored procedure. Alternatively, in ADO.NET you can combine multiple Select statements in your Command object's CommandText property (just make sure you put a semicolon between the statements):
Dim cmd As New SqlCommand
cmd.CommandText = "Select * from Customers; Select * from Countries;"
When you call ExecuteReader to get your DataReader, the DataReader will be processing the first set of records returned from your stored procedure or from the first Select statement in your CommandText:
Dim rdr As SqlDataReader
rdr = cmd.ExecuteReader()
You can process that DataReader or not -- your choice. When you're ready to process the next set of rows, just call the DataReader's NextResult method. This command moves the DataReader to the Countries rows that I retrieved:
rdr.NextResult()
Because the way that ADO.NET talks to your back-end database varies from one database engine to another, and depends on how much data you're retrieving, I can't guarantee that each NextResult won't trigger another trip to the database (ideally, all of the data will come down to the client in one trip). But you're guaranteed that you'll only make one trip to the database when you make the initial request, and that's a good thing.
And, as I mentioned in another tip, "Speed Up Your Application by Doubling Up on Database Access," if you want to mix some update commands in with your Select statements, you can do that, too -- saving you even more trips. I wouldn't suggest that combining these tips eliminates the need for stored procedures; I would, however, suggest that you only use stored procedures when you need some control logic mixed in with your SQL statements.
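Putting the whole pattern together, here's a minimal sketch (the connection string is a placeholder; the table names come from the example above):

```vbnet
Using con As New SqlConnection("...your connection string...")
    Using cmd As New SqlCommand(
        "Select * from Customers; Select * from Countries;", con)

        con.Open()
        Using rdr As SqlDataReader = cmd.ExecuteReader()
            While rdr.Read()    ' first result set: Customers
                ' ...process a Customer row...
            End While

            rdr.NextResult()    ' move to the second result set: Countries
            While rdr.Read()
                ' ...process a Country row...
            End While
        End Using
    End Using
End Using
```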
Posted by Peter Vogel on 01/13/2015 at 11:43 AM
As I mentioned in a previous tip, Giving Your Database Updates Enough Time, I had a client contact me with a problem: The updates for an unusually large batch of data in their online application were taking so long that they were timing out. As a short-term fix, we increased the update time to just over two minutes, but we all recognized the right, long-term solution was to reduce the time the updates were taking.
I'm a developer, so we discussed some code-based solutions but, before I touched the keyboard, I looked at the database. I wanted to see if I could apply some indexes to speed up processing. I was somewhat surprised to discover that none of the tables had any indexes or primary keys on them (though the tables did ... usually ... have columns that would uniquely identify each row in a table). This wasn't a huge system but some tables had as many as half-a-million rows in them.
Without indexes, every Join and every Where clause had to scan the whole table to find the rows it needed. It's a testament to SQL Server that the application ran as fast as it did (and it was certainly "fast enough" -- except for this problem update).
Rather than do any testing or analysis, I just went through the stored procedures involved in the update and added a primary key or an index to each table that was involved in a Join or a Where clause. If the Join or Where clause used two columns from the same table, I created a primary key or index that included both columns.
The results, as is usual with indexes, were miraculous. Well, it certainly looked like a miracle to my client: With no code changes, the update that was taking over two minutes now took less than fifteen seconds -- almost an order of magnitude speed improvement. In addition, every other transaction that used those tables now executed faster.
Because the overall load on the database dropped, even transactions that didn't use those indexes were completing marginally faster.
Of course, adding indexes isn't free: Each index is, effectively, a table that needs to be updated. The index must always be updated when a row is inserted or deleted from the parent table, but updates only affect the index if one of the indexed columns is changed.
Even if you're updating an indexed column, though, the net result for updates is usually positive because the update statement probably includes a Where clause that will run faster when there's an index in place (and, in my experience, the indexed columns are often the columns that are least likely to be updated). I find that I can add up to a half dozen indexes to a table before update performance starts to degrade. Certainly, my client didn't see any impact.
If you're not paying attention to the indexes on your tables, you're missing an opportunity.
Posted by Peter Vogel on 01/08/2015 at 9:58 AM
My client was having a problem processing some very large batches of data in their online application: Their SQL updates were timing out. I offered to look at ways of fixing the problem (more on that in a later tip) but, in the meantime, the client asked me to see about "fixing the timeout problem."
To give the updates more time to complete, my client had already tried playing with the ConnectionTimeout value in the connection string used to connect to the database. However, that value just controls how long ADO.NET will wait when opening a connection -- it has no effect on the time allowed for an update statement to complete.
To give an update statement more time to complete, you need to set either the CommandTimeout property on the ADO.NET Command object or, if you're using Entity Framework and LINQ, the CommandTimeout property on the ObjectContext object.
In ADO.NET, the code looks like this:
Dim cmdUpdateStatus As New SqlCommand
cmdUpdateStatus.CommandTimeout = 120
With LINQ and Entity Framework with the ObjectContext, the code looks like this:
Dim doc As New MyObjectContext
doc.CommandTimeout = 120
With LINQ and Entity Framework with the DbContext, the code is a little more complicated:
Dim dbc = New MyDbContext
Dim oc As ObjectContext = CType(dbc, IObjectContextAdapter).ObjectContext
oc.CommandTimeout = 120
Setting CommandTimeout to 0 will cause your application to wait until your update command completes, however long that takes. As tempting as that option is, you could wait forever, so I don't recommend it: set a value, however big, for your timeout so that you know it will, eventually, come to an end.
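In context, the ADO.NET version looks something like this (the Update statement and connection string are placeholders):

```vbnet
Using con As New SqlConnection("...your connection string...")
    Using cmdUpdateStatus As New SqlCommand(
        "Update Orders Set Status = 1", con)

        cmdUpdateStatus.CommandTimeout = 120   ' seconds, not milliseconds
        con.Open()
        cmdUpdateStatus.ExecuteNonQuery()
    End Using
End Using
```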
Posted by Peter Vogel on 12/18/2014 at 10:05 AM
You copy something in Visual Studio and, before you can paste it, you realize that you need to copy something else. Or, worse, you copy something, forget what you're doing, copy something else, go to paste ... and realize that you've lost that first thing you wanted.
Good news! The Visual Studio clipboard actually remembers the last 20 things you cut or copied. To access that history, instead of pressing Ctrl+V to paste your item, just press Shift+Ctrl+V. The first time you press that combination you'll paste the last thing you cut or copied; the second time you press it, you'll paste the second-to-last thing you copied right over top of the first item; the third time you press it ... you get the picture.
This feature even works if you press Ctrl+V the first time. So, if you paste something you don't want, just switch to Shift+Ctrl+V. The first press will get you what you just pasted but your second press will start working you back through your "copy history."
So, go ahead and copy that other thing -- you'll be able to get back to the item currently sitting in the clipboard when you need it.
Posted by Peter Vogel on 12/11/2014 at 1:51 PM
I was having an e-mail exchange with a reader who commented that he was having trouble following my coding conventions. Among other issues, I seemed to arbitrarily capitalize variable names. I explained that, in these columns, I specifically varied my coding conventions from article to article, just to avoid looking like I'm endorsing any particular style (one of the reasons that I don't use C# in most of my columns is I'm a "every curly brace on its own line" kind of guy and I don't want to get into an argument about it).
You can probably assume that anything that does turn up regularly in these columns is there because I think it's mandated by Visual Studio Magazine.
But there's another reason for the variation you see: Often the code in these columns is drawn from projects that I'm doing for a client and, since I've just copied the code and obfuscated its origin, the code reflects my client's coding conventions. And that made me notice something: As I move from client to client, I keep changing my coding conventions and, you know what? All the coding conventions I use seem good to me. And that, in turn, got me thinking: What does matter in a coding convention?
There's actually some research around this, if you're interested. The earliest work, I think, is in Gerald Weinberg's The Psychology of Computer Programming but the most complete review I know of is in Steve McConnell's Code Complete.
It turns out that the only thing that makes a difference is consistency (or, if you prefer, simplicity). The more special cases and exceptions that a set of coding conventions include, the more likely it is that the conventions won't be followed correctly by programmers writing the code or won't be understood by programmers reading the code.
The most successful coding conventions (the ones that programmers comply with, implement correctly, and find useful when reading code) are short and have no exceptions. As an example, the Microsoft C# Coding Conventions would probably cover three pages if printed out (and less than a page if you omitted the examples). Given the research available, that seems about right.
Posted by Peter Vogel on 12/09/2014 at 1:51 PM