Practical .NET

Creating Useful Naming Conventions: Technical Considerations

Naming conventions are obviously a good thing, right? Not necessarily -- they're only worthwhile if you understand the problem they solve.

Whenever I teach Learning Tree's course on Design Patterns and Best Practices, I begin the class by asking for some examples of best practices. "Naming conventions" comes up every time. It's obviously important to people. And naming conventions must be a good thing -- after all, there are so many of them.

However, naming conventions aren't free; developers have to learn them and think about them when coding. On occasion, developers have to revise existing, working code to bring that code in line with the conventions. In addition, someone has to draw up the conventions, deal with the objections, train new hires, distribute documentation, check that the conventions are being followed and enforce them. These costs aren't high: Assuming that the conventions are stable over time and that a shop's personnel doesn't turn over too frequently, the cost probably drops to something close to zero in a short period of time. But all that means is that following a naming convention is just a "harmless indulgence." Presumably, if everyone wants one, there must be some benefits.

Before talking about what a "good" naming convention would be, we need to talk about why any naming convention would be worthwhile. There are two categories of benefits: One set is technical and relates to the way we write code, and the other set reflects the business and relates to the way we understand code. This column is about the technical benefits that you might get from a naming convention.

Why Naming Conventions Have Lost Value
Back in the days of Hungarian notation, naming conventions were used to embed the datatype of a variable in the variable's name. The goal was to help programmers ensure that their code wouldn't fail because of conversion issues. This code, using the "str" prefix for strings and the "int" prefix for integers, was obviously an accident waiting to happen:

intId = strId

But three changes in programming have made those benefits go away. The first change was the move to object-oriented programming and the resulting proliferation of datatypes. The ever-growing number of datatypes made it impossible to invent (or remember) prefixes for all the existing and potential datatypes.

The other two changes were related to improvements in development tools: better support for determining datatypes (IntelliSense, for instance) and compile-as-you-type so that you get instant feedback on whether your code has datatype-related issues.

In fact, in .NET, using implicitly typed variables means that the developer doesn't have to specify a variable's datatype at all, as long as the variable is set to some value:

Dim CustOrds = From c In db.Custs
               Select c.OrderNum

Under the hood, the compiler still assigns a datatype to the variable even though the developer doesn't specify one. Developers who make use of implicitly typed variables are, effectively, indicating that they don't really care what the datatype of a particular variable is -- they just want to know what's in the variable's IntelliSense list. Those developers care only about what they can do with the variable. In this scenario, a naming convention that specifies the datatype isn't going to be valuable.

Categorizing and Organizing
But just because we've lost interest in embedding datatype information into our variable and member names, it doesn't mean that we're not interested in embedding some information. When working with "code-behind" technologies like Windows Forms, ASP.NET Web Forms or WPF, I often used a form of Hungarian notation for my on-screen controls: Button names began with "btn," textboxes with "txt" and so on. I didn't do this to keep track of the control's datatype -- I did it because I have no memory. When entering code that worked with these controls, I wouldn't be able to remember if a textbox on the form was called CustomerName, CustName or CName. However, I would remember that whatever the name was, it was a textbox and its name began with txt. Typing this.txt or Me.txt would give me a shortlist of items to choose from.

With the latest versions of IntelliSense, typing Me.cust/this.cust gives an equivalent shortlist (all the controls with "cust" somewhere in their name). As a result, I've stopped prefixing my on-screen controls.

So here, at least, is one place where naming conventions serve a useful purpose: organizing variables into categories relevant to the programmer. For instance, many developers find it useful to distinguish between fields (variables accessible from multiple methods and properties), parameters and local variables. Some developers prefix field variables with an underscore (_CustName), have parameters begin with a lowercase letter (custName) and have local variables begin with an uppercase letter (CustName). Putting these kinds of markers at the start of the variable name has a benefit: They're easier to spot when scanning a list.
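As a sketch of what that scope-based convention looks like in practice (the class and member names here are invented purely for illustration), in C#:

```csharp
public class CustomerRecord
{
    // Field: the underscore prefix flags it as accessible from every method
    private string _CustName = "";

    // Read-only view of the field (added here just so the sketch is usable)
    public string CustName => _CustName;

    // Parameter: leading lowercase letter
    public void SetName(string custName)
    {
        // Local variable: leading uppercase letter
        string CleanedName = custName.Trim();
        _CustName = CleanedName;
    }
}
```

At a glance, _CustName, custName and CleanedName each announce their scope before you've read the rest of the name.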

So, from the programmer's point of view, if you want to develop a naming convention, start by watching yourself code. What distinctions among member and variable names are valuable to you? What distinctions, if you overlook them, lead to programming errors? What doesn't the IDE provide (or provide in a timely, convenient way)? Those are the things you should build your naming convention around.

And it's entirely possible that there are no distinctions that meet these criteria; if so, you don't need a naming convention -- at least, from a technical point of view. But if, for instance, you're writing stored procedures in SQL Server where the IDE doesn't do very much for you, a naming convention might be very useful. However, technical considerations are only one category of benefits that a naming convention could, potentially, provide. Next month, I'll look at the other category.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
