Developer Product Briefs

Reduce Your Code Vulnerability

Take advantage of these eight simple tips to reduce your code's vulnerability to attacks ranging from buffer overflows to SQL injection to decompilation.

by Josh Holmes and Gabriel Torok

February 21, 2006

Technology Toolbox: Visual Basic, C#, SQL Server

Creating secure, reliable applications is an important aspect of any developer's job.

The hard part, of course, is learning what practices lead to building secure, reliable applications from the outset, as opposed to attempting to graft security onto existing applications. This aspect can be especially daunting because not everyone agrees on what practices create the most secure, reliable applications possible. That said, there is much you can do to reduce the vulnerability of any applications you create, on a variety of levels, from how you design to how you code your applications.

Implementing the eight steps described in this article will go a long way toward helping you not only reduce your code's vulnerability to attacks, but also assess your level of vulnerability to potential attacks—whether from buffer overflows, SQL injections, or something as basic as someone running a decompiler on your application's binary. Each of these attacks has potentially serious implications for a given application, as well as for your company. The good news is that following some simple, practical steps and guidelines can help you address many of the common mistakes developers make that leave their applications and companies open to attacks from a wide variety of sources.

Note that you as a developer don't work in a vacuum, but in conjunction with one or more designers and testers. It is often your job to integrate efforts laid out by your designers and testers, whose concerns are often slightly different from your own. For example, a designer typically concentrates on making sure that threat models are built, that the product has layered defenses, that security failures are logged, and so on. A tester worries about testing against attacks that have worked in the past and against known exploits. A developer has a broad set of concerns that include protecting the database, protecting data, and protecting the application. All three entities have important roles to play in your company's security. As a developer, you often find yourself in the role of helping the designer and tester with their own roles.

Once upon a time, in the not too distant past, security for applications was typically the concern of the developer only. But with the advent of the Web and increased connectivity forming the basis of many business partnerships, your security isn't just the business of your company, but of everyone that does business with your company. It's important to get the details—large and small—exactly right. These tips will help you do that.

1. Create Your Applications So They Require the Least Amount of Privilege Necessary.
Most of your users are non-administrators, so they don't need admin-level privileges. Using limited accounts helps your users and your larger organization protect both your users' data, as well as the company's larger interests. For this reason, most good network administrators restrict their users severely.

For example, limited users can't install ActiveX controls or shell add-ins, so using this option alone can help you eliminate most spyware. Limited users can't access other users' data folders, making it impossible for them to delete, manipulate, or read other users' sensitive data.

But working in an environment where users have limited privileges puts extra pressure on you to make sure your users can use all your application's capabilities in the intended manner. Little will frustrate a user more than using an application whose features are crippled by their level of network access.

Unfortunately, sometimes you don't learn what won't work under your network's limited access settings until you get your application to a test lab. Worse, running back and forth from your development environment to the test lab to see whether you've addressed the issues limited access presents wastes too many cycles. One way to solve this issue is to run tests of your own code as a non-admin, so you find out early in the process what parts of your application don't work when run with limited privileges.

No question, this is a tough practice to accept for many developers. It's convenient having admin-level privileges, and you can run into quite a few gotchas working in this manner. For example, you can't write to a config file in the program files area, nor can you read certain areas on the machine. Similarly, you don't have the power to change the firewall (which your code should never do, anyway). You also can't start or stop Windows Services. If your users can't do it, then your application shouldn't require it.

There are many occasions where you need these administrative privileges as a developer. In these cases, there are a lot of scripts out there that will help you temporarily acquire and subsequently drop extra privileges (see Additional Resources). Yes, such kludges are inconvenient sometimes, but working this way pays off in a big way when it's time to deploy your application, while also making your users, your company, and even your own system more secure.
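
One simple way to catch over-privileged assumptions early is to check, perhaps in a debug build, whether your application is running with administrative rights. Here's a minimal sketch using WindowsPrincipal; the method name and where you call it from are up to you:

using System.Security.Principal;

public static bool IsRunningAsAdministrator()
{
    // True if the current Windows identity belongs to the built-in
    // Administrators group -- which your users' identities shouldn't.
    WindowsIdentity identity = WindowsIdentity.GetCurrent();
    WindowsPrincipal principal = new WindowsPrincipal(identity);
    return principal.IsInRole(WindowsBuiltInRole.Administrator);
}

If this check returns true on your development box, you're not seeing what your limited users will see.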

2. Handle Exceptions Appropriately.
Exceptions can give away vital information about the internal workings of your program. Proper exception handling adds a layer of security to your application without revealing the location of possible holes in your application. Proper exception handling also results in a more stable system that your customers will find more enjoyable to use.

You need to account for proper exception handling in several areas of your code. First, you need to ensure your exception-handling code has bulletproof entry and exit points. This doesn't simply mean that you make the Main method secure, but that you secure all application boundaries and layers, such as the event handlers in a user interface, Web service methods, page rendering in ASP.NET applications, and all other inter-process entry and exit points. If you do not catch an exception on these boundaries, exceptions thrown show the user what went wrong, where it went wrong, and more. A clever user can learn much about your application by reading these exceptions, including where the vulnerabilities are. A clever and malicious user might even take advantage of them.

You can often react to an exception so that the user never learns the internal details of what went wrong, perhaps by prompting the user to fix the issue. For example, assume a FileNotFoundException is thrown. You could auto-create the file needed, then prompt the user to pick the location to place the file. Or, you might fall back to some set of defaults, depending on what information is in the file and the expected result of the file operation.
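
Here's a minimal sketch of that idea; the settings path and default content are hypothetical, but the pattern of recovering quietly instead of surfacing the raw exception is the point:

using System.IO;

private string LoadSettings()
{
    string settingsPath = @"C:\MyApp\settings.xml";   // hypothetical location
    try
    {
        return File.ReadAllText(settingsPath);
    }
    catch (FileNotFoundException)
    {
        // Fall back to defaults and recreate the missing file silently,
        // rather than showing the user (or an attacker) a stack trace.
        File.WriteAllText(settingsPath, "<settings />");
        return "<settings />";
    }
}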

When you catch an exception on the external boundaries, sometimes the best move is to re-throw the exception. For example, assume a Web service needs a given database to operate, and it's offline for some reason. You should wrap that exception in a custom exception type that lets the calling application know how to react, but doesn't provide so much information that it exposes critical information about your application.
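
A minimal sketch of that wrapping pattern might look like the following in a Web service method; LookUpOrders, LogException, and ServiceUnavailableException are assumed helpers and a hypothetical custom exception type, not framework classes:

using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

[WebMethod]
public DataSet GetOrders(int customerId)
{
    try
    {
        return LookUpOrders(customerId);   // assumed data-access helper
    }
    catch (SqlException ex)
    {
        // Log full details on the server, where they're safe to keep...
        LogException(ex);                  // assumed logging helper

        // ...but hand the caller only a generic, wrapped exception that
        // reveals nothing about connection strings, schema, or stack traces.
        throw new ServiceUnavailableException(
            "The service is temporarily unavailable. Please try again later.");
    }
}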

This tip puts the emphasis on exceptions at the boundaries, because this is a critical area when handling exceptions from a security standpoint. However, you shouldn't neglect exceptions in other areas of your application. You need an overall exception-handling strategy that covers logging, how to handle certain types of exceptions, how to recover from exceptions, and which types of exceptions are fatal. (See the Additional Resources box for more information on how to handle these and other kinds of exceptions.)

3. Hide the Internals of Your Programs.
Your company's source code contains vital data: information about databases, critical algorithms, and the workings of internal systems. In a well-controlled environment, only developers should have access to this source code, while you give the end-users binaries to run.

The managed nature of .NET assemblies means that distributing unprotected or unobfuscated binaries is tantamount to distributing source code. As it stands now, someone using a free decompiler can recreate source code from an executable easily, unless you take steps to prevent it.

One way to help protect your applications from reverse engineering is to use an obfuscator. Fortunately, Microsoft includes a lightweight obfuscator with Visual Studio, available from the Tools menu. Otherwise, your software licensing code, copy protection mechanisms, and proprietary business logic are largely available for all to see—regardless of whether it's legal for others to look at them. Without the obfuscator, others can search for security flaws to exploit, steal unique ideas, crack programs, identify where key information resides, and so on. You should use an obfuscator even with ASP.NET apps, because your business logic can be compromised if your Web server is compromised.

4. Apply Appropriate Code Access Security.
The field of code-access security (CAS) is gaining momentum, not least because it is based on a trust level, or sandbox, principle. A sandbox is a set of privileges assigned to all of the applications that run within it. Code run from the Internet is in the lowest-level sandbox. For example, code run from the Internet can't open connections to the database directly, so it should use a Web service to access it. Such code can open a read-only file, write to its own storage (defaulted to 512 MB of private space), access its own clipboard but not the user's clipboard, perform limited kinds of printing—and not much else.

As a developer, you need to understand the different available sandboxes and make sure your code runs in the appropriate sandbox for your kind of application. This issue is less important to you if you install your app on the user's machine only. It matters more if you install your code from a local intranet, which means working with different kinds of restrictions. If you expect or want others to access your code from anywhere, then you need to understand and run within the rules of the Internet sandbox.
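
If you're not sure whether an operation is allowed in the sandbox your code runs in, you can ask explicitly. This minimal sketch (the method name is ours) demands file-write permission and reports whether the current trust level grants it:

using System.Security;
using System.Security.Permissions;

public bool CanWriteToPath(string path)
{
    try
    {
        // Demand throws a SecurityException if the current sandbox
        // (the Internet zone, for example) doesn't grant write access.
        new FileIOPermission(FileIOPermissionAccess.Write, path).Demand();
        return true;
    }
    catch (SecurityException)
    {
        return false;
    }
}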

5. Validate All Input.
The scariest part of any application is accepting data from other sources. You need to check every piece of data that originates outside your own application, whether it comes from the UI, Web services, databases, file input, or some other source. Every input point is a possible point of attack. These attacks can come in the form of SQL injection attacks, buffer overflows, or cross-site scripting attacks, to name a few. This means you must check every field thoroughly before accepting it.

You have several options for checking your input. One basic technique is to check the length of the data to make sure that it is not zero length and that it fits within your fields. This is especially useful when you need to save a name field that's set to 100 characters and someone tries to put in 200 characters. This isn't as important as it was in the "old days," because buffer overruns are harder with .NET. More elaborate checks involve checking the content for specific characters or ranges of characters and either stripping them out or escaping them. For example, some tips for preventing SQL injection attacks include stripping out ;, =, <, >, or, and, select, insert, update, drop, xp, and exec at a minimum. Do this either by using regular expressions, or by using the string's replace function on each of these characters/phrases. You can learn more about using regular expressions by visiting www.regexlib.com.
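
For instance, a whitelist check is often the simplest first line of defense. This minimal sketch (the pattern and the 100-character limit are assumptions for a name field) rejects anything but letters, spaces, apostrophes, and hyphens:

using System.Text.RegularExpressions;

public bool IsValidName(string input)
{
    // Accept only letters, spaces, apostrophes, and hyphens,
    // between 1 and 100 characters long.
    return Regex.IsMatch(input, @"^[a-zA-Z '\-]{1,100}$");
}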

The following code scrubs out the dangerous items from any string that's passed to it. You can use this kind of utility function to help you validate all of your input:

using System.Text.RegularExpressions;

public string ScrubSQLStringInput(string input)
{
    // Strip out characters and keywords commonly used in SQL injection attacks
    Regex expression = new Regex(
        ";|=|<|>| or | and |select |insert |update |drop |xp_|--|exec");
    string result = expression.Replace(
        input, new MatchEvaluator(MatchEvaluatorHandler));
    return result;
}

public string MatchEvaluatorHandler(Match match)
{
    // Replace the matched items with a blank string of equal length
    return new string(' ', match.Length);
}

Note that you cannot assume data is clean before using it. It doesn't take much time or effort to validate your data if you build a library of utility functions.
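
For example, you might run every value that comes from the UI through the scrubbing function before it gets anywhere near a SQL statement (the text box and query here are hypothetical):

string lastName = ScrubSQLStringInput(txtLastName.Text);
string sql =
    "SELECT CustomerID, LastName FROM Customers " +
    "WHERE LastName = '" + lastName + "'";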

6. Validate All Users.
A client told us recently that he didn't have to worry about a particular section of his site, because the only way that someone could get there was to log in and go through the proper pages. The site generated a unique URL for each user that didn't really exist anywhere on the site.

So, imagine his shock when he saw that someone had copied the URL into an IM or e-mail and reused it later without having to authenticate. That area of the site had no other security, so the person copying and pasting the URL could suddenly bypass all of that wonderful security through obscurity achieved with the non-existent URL.

Similarly, some developers store validation information in a config file or another similar location that anyone can access with copy-and-paste. For example, assume there is a utility that calls a Web service and reads the unique information in the config file, then makes some assumptions about who the user is based on this registration. You can patch this vulnerability by passing a username and password with each of the requests.
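
Here's a minimal sketch of that fix, assuming a generated proxy class named InventoryService with a GetItems method (both hypothetical): send explicit credentials with every call instead of trusting whatever happens to be in the config file.

using System.Data;
using System.Net;

InventoryService proxy = new InventoryService();           // hypothetical proxy
proxy.Credentials = new NetworkCredential(userName, password);
proxy.PreAuthenticate = true;
DataSet items = proxy.GetItems();                           // hypothetical method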

Both vulnerabilities are examples of developers attempting to make the lives of their users easier by not requiring them to login to multiple sites. Unfortunately, these approaches open gaping security holes the developer could plug easily. The fact that you make it difficult to navigate a site directly without coming in the front door doesn't mean that's the path the user will take.

The fix for this kind of issue is simple: make your user log in again whenever you're not positive about the user's credentials, such as when the session has timed out, when this is the first access from the utility app in this session, and so on. It's annoying to check the user's credentials continually, but the security implications of not doing so are too significant to ignore.
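
In an ASP.NET application, that check can be as simple as the following sketch on each protected page (the page class name is hypothetical, and it assumes Forms authentication is in use):

using System;
using System.Web.Security;
using System.Web.UI;

public partial class AccountDetails : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Never rely on an obscure URL: verify the caller's identity
        // on every request to this page.
        if (!User.Identity.IsAuthenticated)
        {
            FormsAuthentication.RedirectToLoginPage();
            return;
        }
    }
}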

7. Enforce Security With FxCop.
FxCop is a code analysis tool that verifies that .NET managed code conforms to the Microsoft .NET Framework Design Guidelines (see Figure 1). FxCop can catch many issues—security and otherwise—that developers overlook on a day-to-day basis.

The tool uses reflection, MSIL parsing, and call-graph analysis to compare assemblies against certain rules. A rule is managed code that can analyze targets and return a message about its findings. Rule messages identify any relevant programming and design issues and, when possible, supply information on how to fix the target. FxCop checks against several libraries of rules, including: COM Rules, Design Rules, Globalization Rules, Naming Rules, Performance Rules, Usage Rules, and Security Rules. Security Rules focus on detecting programming elements that leave your assemblies vulnerable to malicious users or code. There are more than 25 standard Security Rules, and you can create new rules specific to your own organization.

FxCop has been integrated into the Team System Developer SKU of VS2005, and there is also a free, stand-alone version. You should use whichever version is available to you, but you can integrate the Team System version directly into your code check-in procedures. This lets you enforce that the code you check into source control passes the chosen FxCop rules (see Additional Resources).

8. Use Visual Studio's Built-in Cryptography.
Sometimes it's safer to do less work on your security code. Many shops believe they can create safer code by writing their own cryptography routines rather than using the built-in cryptography available from Visual Studio, on the theory that a would-be hacker must figure out what the cryptography routine is before breaking it. This is a complex and difficult-to-build form of security through obscurity. The reality is that it creates a brittle component that everything else in your application relies on.

Some of the best and brightest minds in the mathematical fields have worked for many years to produce the algorithms implemented in the Win32 CryptoAPI and the System.Security.Cryptography namespace. These implementations have been thoroughly vetted and proven over time against thousands of attacks. You won't have this luxury with your own code, so the chances of getting it right and being able to trust it are remote in comparison.
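
For example, rather than inventing a hashing scheme, lean on the framework's vetted implementations. This minimal sketch (the helper method names are ours) generates a random salt with RNGCryptoServiceProvider and hashes a password with SHA256Managed:

using System;
using System.Security.Cryptography;
using System.Text;

public static byte[] CreateSalt(int length)
{
    // Cryptographically strong random bytes, unlike System.Random
    byte[] salt = new byte[length];
    new RNGCryptoServiceProvider().GetBytes(salt);
    return salt;
}

public static string HashPassword(string password, byte[] salt)
{
    // Combine the salt and password bytes, then hash the result
    byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
    byte[] salted = new byte[salt.Length + passwordBytes.Length];
    Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
    Buffer.BlockCopy(passwordBytes, 0, salted, salt.Length, passwordBytes.Length);

    using (SHA256 sha = new SHA256Managed())
    {
        return Convert.ToBase64String(sha.ComputeHash(salted));
    }
}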

About the Authors
Josh Holmes is a principal of SRT Solutions as well as a Microsoft MVP and INETA Speaker Bureau member. He helps his clients—ranging from the Fortune 500 to small firms—to understand and implement an array of software technology, including .NET.

Gabriel Torok is president of PreEmptive Solutions, Inc. and a book author and national conference speaker. PreEmptive Solutions produces code security tools for Java and .NET. A lite version of PreEmptive's Dotfuscator tool is bundled with Microsoft's Visual Studio.NET 2005. He may be reached at [email protected].
