Practical .NET

Creating Useful Naming Conventions: Technical Considerations

Naming conventions are obviously a good thing, right? Not necessarily -- they're only worthwhile if you understand the problem they solve.

Whenever I teach Learning Tree's course on Design Patterns and Best Practices, I begin the class by asking for some examples of best practices. "Naming conventions" comes up every time. It's obviously important to people. And naming conventions must be a good thing -- after all, there are so many of them.

However, naming conventions aren't free; developers have to learn them and think about them when coding. On occasion, developers have to revise existing, working code to bring that code in line with the conventions. In addition, someone has to draw up the conventions, deal with the objections, train new hands, distribute documentation, check that the conventions are being followed and enforce them. These costs aren't high: Assuming that the conventions are stable over time and that a shop's personnel doesn't turn over too frequently, the cost probably drops to something close to zero in a short period of time. But all that means is that following a naming convention is just a "harmless indulgence." Presumably, if everyone wants one, there have to be some benefits.

Before talking about what a "good" naming convention would be, we need to talk about why any naming convention would be worthwhile. There are two categories of benefits: One set is technical and relates to the way we write code, and the other set reflects the business and relates to the way we understand code. This column is about the technical benefits that you might get from a naming convention.

Why Naming Conventions Have Lost Value
Back in the days of Hungarian notation, naming conventions were used to embed the datatype of a variable in the variable's name. The goal was to help programmers ensure that their code wouldn't fail because of conversion issues. This code, using the "str" prefix for strings and the "int" prefix for integers, was obviously an accident waiting to happen:

intId = strId
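Spelled out in full (the declarations here are hypothetical), the problem those prefixes were flagging looks something like this:

```vbnet
' Hypothetical Hungarian-style declarations. With Option Strict Off,
' the assignment compiles -- and throws at run time when strId isn't numeric.
Dim strId As String = "A100"
Dim intId As Integer
intId = strId   ' the accident waiting to happen
```

With Option Strict On, of course, the compiler rejects the implicit narrowing conversion outright -- which is part of why the prefixes stopped earning their keep.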

But three changes in programming have made those benefits go away. The first change was the move to object-oriented programming and the resulting proliferation of datatypes. The never-ending number of datatypes made it impossible to invent or remember conventions for all the existing and potential datatypes.

The other two changes were related to improvements in development tools: better support for determining datatypes (IntelliSense, for instance) and compile-as-you-type so that you get instant feedback on whether your code has datatype-related issues.

In fact, in .NET, using implicitly declared variables means that the developer doesn't need to datatype a variable as long as they set it to some value:

Dim CustOrds = From c In db.Custs
               Select c.OrderNum

Under the hood, the compiler still assigns a datatype to the variable even though the developer doesn't specify one. Developers who make use of implicitly declared variables are, effectively, indicating that they don't really care what the datatype of a particular variable is -- they just care what they can do with the variable, which is exactly what its IntelliSense list shows them. In this scenario, a naming convention that specifies the datatype isn't going to be valuable.
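As a minimal sketch (the names here are invented for illustration), inference is what makes a type prefix redundant -- the compiler already knows:

```vbnet
' Each variable's type is inferred from its initializer; no prefix needed.
Dim custName = "Acme"       ' inferred as String
Dim orderTotal = 19.95D     ' inferred as Decimal
Dim orderNums = New List(Of Integer) From {101, 102}  ' inferred as List(Of Integer)
```

Hovering over any of these names in Visual Studio shows the inferred type, so the name itself doesn't have to carry that information.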

Categorizing and Organizing
But just because we've lost interest in embedding datatype information into our variable and member names, it doesn't mean that we're not interested in embedding some information. When working with "code-behind" technologies like Windows Forms, ASP.NET Web Forms or WPF, I often used a form of Hungarian notation for my on-screen controls: Button names began with "btn," textboxes with "txt" and so on. I didn't do this to keep track of the control's datatype -- I did it because I have no memory. When entering code that worked with these controls, I wouldn't be able to remember if a textbox on the form was called CustomerName, CustName or CName. However, I would remember that whatever the name was, it was a textbox and its name began with txt. Typing this.txt or Me.txt would give me a shortlist of items to choose from.

With the latest versions of IntelliSense, typing Me.cust/this.cust gives an equivalent shortlist (all the controls with "cust" somewhere in their name). As a result, I've stopped prefixing my on-screen controls.

So here, at least, is one place where naming conventions serve a useful purpose: organizing variables into categories relevant to the programmer. For instance, many developers find it useful to distinguish between fields (variables accessible from multiple methods and properties), parameters and local variables. Some developers prefix field variables with an underscore (_CustName), have parameters begin with a lowercase letter (custName) and have local variables begin with an uppercase letter (CustName). There's a benefit to putting these kinds of markers at the start of the variable name, where they're easy to spot when scanning a list.
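Put together in one (hypothetical) class, that convention looks like this:

```vbnet
Public Class CustomerRepository
    ' Field: underscore prefix -- accessible from any method in the class
    Private _CustName As String

    ' Parameter: initial lowercase; local variable: initial uppercase
    Public Sub Update(custName As String)
        Dim CleanName As String = custName.Trim()
        _CustName = CleanName
    End Sub
End Class
```

At a glance inside Update, the prefix and casing tell you which of the three names has which scope -- information the name itself carries without any help from the IDE.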

So, from the programmer's point of view, if you want to develop a naming convention, start by watching yourself code. What distinctions among member and variable names are valuable to you? What distinctions, if overlooked, lead to programming errors? What doesn't the IDE provide (or provide in a timely, convenient way)? Those are what you should build your naming convention around.

And it's entirely possible that there are no distinctions that meet these criteria; if so, you don't need a naming convention -- at least, from a technical point of view. But if, for instance, you're writing stored procedures in SQL Server where the IDE doesn't do very much for you, a naming convention might be very useful. However, technical considerations are only one category of benefits that a naming convention could, potentially, provide. Next month, I'll look at the other category.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
