In-Depth

Take Charge of Your Own Security

Microsoft has improved security for its overall platform in several key areas, but holes remain, most notably in its developer tools.

Technology Toolbox: VB.NET, C#

In early 2001, Microsoft announced the Secure Windows Initiative, which laid out a set of activities designed to inject security into all development practices at Microsoft.

From the outside looking in, this initiative appeared to be a concerted response to a multitude of software security threats. Development teams at Microsoft "stood down" to focus on identifying and fixing security holes in existing code and on writing more secure code in the future. At the same time, the company attempted to rein in its haphazard processes and become more proactive in establishing the cause of known vulnerabilities and releasing patches for them.

Four years have passed, so now is as good a time as any to see how well Microsoft has accomplished the goals it set for itself. Is the battle almost won, has it just been engaged, or is it being largely ignored beyond the rhetoric of allegiance to the concept?

Answering these questions isn't easy, but it's important not to lose sight of one crucial fact when evaluating Microsoft's security policies: You bear ultimate responsibility for your own security. So, I won't just look at how Microsoft is doing, but I'll also point out what you can do to take advantage of the security that's built into the Windows platform. In many cases, the security is actually built in but underutilized. Microsoft bears some responsibility for educating its user base, but its user base is not excused from meeting Microsoft halfway.

One reason it's difficult to evaluate Microsoft on security is that the platform extends across several major components, including the OS, the database, and the development tools. But a bigger issue is determining the standard Microsoft should be held to when evaluating its efforts to become more security-conscious. All software vendors face security issues, but other vendors get much less attention than Microsoft does (see the sidebar, "How Java Compares on Security"). This might seem unfair on the surface, but a vulnerability in one of Microsoft's key products has the potential to affect far more people and can cost far more to monitor and fix.

So I hold Microsoft to a higher standard than others, but I also acknowledge actions it has taken to address its issues, even if some of those actions haven't yet borne fruit (see Table 1).

Because of its ubiquity, Windows (and Internet Explorer) gets the most negative attention on the subject of security. Hardly a week passes without a report of a critical security flaw or a virus attack that exploits an already known one.

Much of that attention comes from the sheer number of Windows installations. If a virus infects a tiny percentage of Windows systems, that can still be millions of computers. And from that perspective, it really doesn't matter whether Windows is more or less secure than other operating systems, because the impact of any flaw is correspondingly greater.

That is a tough standard to live up to, and Microsoft is attempting to do so through a regimented patch program. Microsoft releases patches on the second Tuesday of each month; occasionally there are no patches to release, and occasionally there are emergency patches, usually driven by critical security issues.

This procedure makes patching more predictable for IT staff charged with ensuring the security and reliable operation of data center servers, but it fully satisfies no one. To Microsoft, the patches are a regular acknowledgement that Windows lacks the level of security the company would like it to have. To IT professionals, testing and applying patches simply adds to the maintenance workload.

Microsoft has automated some of this process, through the use of Windows Automatic Updates, so for many the workload has at least become manageable. But this approach fails in some contexts. For example, home users are a highly diverse group. Some have Automatic Updates turned on; others visit Microsoft's Windows Update Web site and meticulously analyze their risks and patch appropriately. But most home users simply ignore security vulnerabilities until their systems become unusable.

Small business users also face issues. Small businesses often lack the resources and skills to test and apply patches manually, and they can be more dependent on Automatic Updates than larger shops. Yet reports surfaced earlier this year of disastrous results when Windows Server 2003 SP1 showed up in Automatic Updates on Windows Small Business Server and some small businesses installed it.

You can take at least two lessons from this experience. First, you can't consider any Microsoft process infallible, not even against such an obvious error. Second, users serious about the security of their servers or desktop systems must invest some time themselves to ensure that the OS is up to date and configured to deter attacks.

At the end of 2003, Microsoft formed the Core Operating Systems Division to focus on what are considered the core components of Windows: the kernel, the I/O system, core devices, setup, and all the build properties. This group has a broader view across Windows, and one of the key purposes of that view is to analyze and improve OS security. Among its duties is to establish quality metrics, including security metrics, at each stage of the development process.

I know some people argue that Windows security should be much better than it is, given the time and effort Microsoft has invested in it. There might be some truth in that assertion, but Microsoft faces a complex problem that it spent more than a decade building. Overnight solutions simply aren't feasible, and it will take a significant amount of time for Microsoft to iron everything out. More significant than what needs to be done is the fact that Microsoft appears to be dedicated to fixing security over the long term.

Watch Out for Hidden Risks
One of the significant advantages of the Windows platform is the extremely broad array of device drivers available for peripherals. Microsoft encourages this by providing a comprehensive driver development kit (DDK) with libraries, tools, and copious examples. At one time, you could download this kit freely from Microsoft's Web site; it's still available today, but downloading it now requires an MSDN subscription. In the arcane world of driver development, where drivers are written hurriedly just before the hardware is ready to ship, many developers base their drivers on the examples provided in the DDK.

The support Microsoft gives driver developers is laudable, but the DDK represents another area of vulnerability. The examples are freely available and were developed for clarity rather than security, and the kit contains a wealth of detailed, high-quality information that hackers and virus writers can use to stage attacks on various kinds of systems.

This is a particularly nasty security risk because device drivers run in kernel mode. Any vulnerability here that enables rogue code to execute practically guarantees full access to the system. Kernel code is one of the few types of code that can bring down or corrupt the operating system itself.

You can achieve a better level of security by paying attention to whether a driver is Windows Hardware Quality Labs (WHQL) tested and certified. WHQL testing was originally intended to improve driver quality, but it also has the effect of establishing the driver's traceability and improving its security. WHQL-certified drivers are digitally signed, which Windows notes during installation. Prior to Windows XP SP2, the default behavior was to go ahead and install drivers that weren't digitally signed; today, Windows doesn't install unsigned drivers by default.
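
If you want a quick scripted first pass over drivers already on a system, the .NET Framework can pull the Authenticode publisher certificate out of a signed file. Here's a minimal C# sketch; the driver path is just a hypothetical example, and extracting the certificate is not the same as validating its full trust chain (Windows performs that validation itself at install time):

    using System;
    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;

    class DriverSignatureCheck
    {
        static void Main(string[] args)
        {
            // Hypothetical path; point this at the driver you're about to trust.
            string driverPath = args.Length > 0
                ? args[0]
                : @"C:\Windows\System32\drivers\disk.sys";

            try
            {
                // Extracts the Authenticode publisher certificate embedded in
                // the file. This shows the file carries a signature and who
                // signed it, but it does not validate the full trust chain.
                X509Certificate cert =
                    X509Certificate.CreateFromSignedFile(driverPath);
                Console.WriteLine("Signed by: " + cert.Subject);
            }
            catch (CryptographicException)
            {
                Console.WriteLine(
                    "No Authenticode signature found; investigate before installing.");
            }
        }
    }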

WHQL was never intended as a security solution, so it's not completely satisfactory as-is. Even though Microsoft has been encouraging driver developers to have their drivers tested, many essential drivers aren't digitally signed. Installing such a driver should make you pause, but you shouldn't preclude its use automatically. Until Microsoft is able to enforce traceability on all device drivers through certification or other means, vulnerabilities in drivers remain a particularly dangerous security problem. The good thing is that the risk is relatively small because drivers require administrative privileges to install—a level that many attacks can't achieve.

Security for drivers is important, and you need to be careful how you approach installing them. But a more significant issue for most rank-and-file corporate developers is security inside Visual Studio. Microsoft has rarely put security at the top of the list for Visual Studio developers, and the IDE has often been a blank slate in that regard. For developers who already knew how to write secure code, Visual Studio assisted in that process by being comprehensive and easy to use. But it proved no help to developers who lacked the knowledge and skills to assess code security and close security holes.

.NET Improves Application Security
The transition to managed code has done a great deal to improve security, with the elimination of potential holes such as buffer overflows and underflows. Yes, managed code can have its own vulnerabilities, but it eliminates many of the code constructs that are notorious for exposing security holes (see Figures 1 and 2).
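
To see the difference concretely, consider the classic overflow scenario: attacker-sized input copied into a fixed-size buffer. In unmanaged C or C++ the copy silently tramples adjacent memory; under the CLR, the same mistake surfaces as a catchable exception. A minimal C# illustration:

    using System;

    class BoundsCheckDemo
    {
        static void Main()
        {
            byte[] buffer = new byte[16];   // fixed-size destination
            byte[] input = new byte[32];    // oversized, attacker-controlled data

            try
            {
                // In unmanaged C/C++, this copy would silently overwrite the
                // memory adjacent to the buffer, the classic buffer overflow.
                // The CLR bounds-checks every element access instead.
                for (int i = 0; i < input.Length; i++)
                    buffer[i] = input[i];
            }
            catch (IndexOutOfRangeException)
            {
                Console.WriteLine("The CLR stopped the out-of-bounds write at runtime.");
            }
        }
    }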

Even so, the lack of built-in features for helping developers find and assess any vulnerabilities in their own code remains a large issue when creating applications with Visual Studio. Visual Studio 2005 and Team System in particular address many of the concerns, but not all of them. By enabling the application architect to drop a design into a model of the production infrastructure, Team System makes it possible to ensure the deployment environment will support the application. In many cases, this means you can now confirm in advance that security settings for ports, account user privileges, and so on are compatible with the application architecture.

But achieving these benefits means that development teams must use Team System, Visual Studio 2005, and the application lifecycle processes defined by Microsoft. It will probably be some time before even established Microsoft shops fully adopt Team System features that can improve security. If you want to go beyond the support offered in Team System, you must investigate solutions from ISVs.

Also, many developers have yet to make the transition to managed code. Whether for reasons of legacy code, language skills, or tradition, it will be many years before C and C++ code becomes fully marginalized. That puts additional burdens on those developers, including those who work at Microsoft, to write code that minimizes the potential of successful attacks.

PREfast, available in Visual Studio 2005, will help find vulnerabilities in C and C++ code, and FxCop includes rules that cover security issues in .NET code. (FxCop has been a free download since 2003.) These are small steps, however, toward a comprehensive security solution for application developers.
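
For example, FxCop's security rules flag dynamically constructed SQL (rule CA2100, "Review SQL queries for security vulnerabilities"). This hypothetical snippet, with table and parameter names invented for illustration, contrasts a method the rule flags with its parameterized replacement:

    using System.Data.SqlClient;

    class QuerySafety
    {
        // FxCop's security rules flag dynamically constructed SQL like this,
        // because a crafted "name" value can rewrite the query (SQL injection).
        public static SqlCommand Unsafe(SqlConnection conn, string name)
        {
            return new SqlCommand(
                "SELECT * FROM Users WHERE Name = '" + name + "'", conn);
        }

        // The parameterized form passes the rule and defeats SQL injection:
        // the input travels as data, never as executable query text.
        public static SqlCommand Safe(SqlConnection conn, string name)
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT * FROM Users WHERE Name = @name", conn);
            cmd.Parameters.AddWithValue("@name", name);
            return cmd;
        }
    }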

At this time, Microsoft is largely leaving security development and testing practices to ISVs. Microsoft will no doubt start incorporating its own features into Visual Studio, but it's also late to the game in doing so. There's a measure of truth to the adage that secure applications won't help unless the OS is also secure, but there's no reason your attack perimeter should be any larger than necessary. It's critical for developers to learn and apply security tools to application development to minimize the surface area for potential attacks. Every day Visual Studio goes without such built-in tools represents that much more time for hackers to exploit known application vulnerabilities.
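
One concrete technique .NET developers can apply today is declarative code access security. As a minimal sketch, assuming your assembly genuinely never needs these rights, you can have the CLR refuse whole permission classes at load time, so even a compromised code path inside the assembly can't use them:

    using System.Security.Permissions;

    // Declarative code access security (a .NET 1.x/2.0-era feature): the
    // assembly tells the CLR up front which permissions it will never need.
    [assembly: SecurityPermission(SecurityAction.RequestRefuse,
        UnmanagedCode = true)]      // never call into native code
    [assembly: FileIOPermission(SecurityAction.RequestRefuse,
        Unrestricted = true)]       // never take blanket file-system access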

The issues described so far are significant. But perhaps Microsoft's most intractable obstacle is that security and usability often sit at opposite ends of a continuum. The most secure systems tend to be the least user-friendly, while user-friendly systems tend to have significant weaknesses that the bad guys can exploit.

Microsoft has long opted for user-friendliness over security at most decision points—a strategy that has served it well in building market share and becoming the ubiquitous desktop OS. For example, for many years Windows systems were easy to set up, with system defaults that made them open and widely accessible. This spread the use of the platform because it didn't require specialized expertise to do productive things. But the long-term downside of that strategy is that deployed systems can be exploited more easily.

But things are changing. For individual desktops, Windows XP SP2 turns on the firewall by default. Microsoft has also fine-tuned user privileges so that users need access to fewer system and network resources to do useful work.

Finding that ideal balance between security and usability is elusive, and neither the platform nor the development tools are there yet. And it's not as though Microsoft starts over with each new version of Windows. Some of the code base is more than a decade old, with all of the warts you expect to find in legacy code. The full code base might never get fully scrubbed.

The definition of a vulnerability changes over time, too. Talented and dedicated hackers can expose a code construct, data structure, or algorithm that looked perfectly secure at the time it was conceived and implemented. An attack can come years after the code enters widespread use. Even software that looks perfect today might eventually fall victim to new attack technologies.

Take Security Into Your Own Hands
On many parts of the platform, such as the operating system, Microsoft isn't yet where it should be in terms of security. In other areas, such as individual applications, you are at the mercy of the application provider or the internal development team.

But you do have decent control of your own platform security. At least one major end-user enterprise I've spoken to requires all third-party applications to undergo an internal security review before the application is ever purchased. A failing grade means that the application isn't purchased or that the supplier must modify it suitably before it's reconsidered.

While few enterprises have that kind of power over software suppliers, one answer is to do your homework and "just say no" to applications that require open ports or fail tests for vulnerabilities.
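
Part of that homework can be automated. Here's a minimal C# sketch using the .NET 2.0 networking API to list every TCP port the machine is listening on; run it before and after installing a candidate application to see exactly which ports the application opened:

    using System;
    using System.Net;
    using System.Net.NetworkInformation;

    class OpenPortAudit
    {
        static void Main()
        {
            // Enumerate every TCP port the local machine is listening on.
            // Comparing snapshots taken before and after an installation
            // reveals the ports the new application opened.
            IPGlobalProperties props = IPGlobalProperties.GetIPGlobalProperties();
            foreach (IPEndPoint listener in props.GetActiveTcpListeners())
                Console.WriteLine("Listening on {0}:{1}",
                    listener.Address, listener.Port);
        }
    }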

As for the OS and your own software, it is long past time for all users to take charge of their own security. Taking security into your own hands is the surest path to protecting yourself over the long term. Begin by determining your desired level of security and how much confidence you want in meeting that level. If asked, many IT professionals would claim to want security guarantees, something that isn't possible now and likely never will be.

Next, develop and apply a security checklist for all packaged applications, and don't make exceptions; set the same standards for applications you buy as for those you build. Also, move to managed code if you haven't already. Managed code is the future of application development, and it produces higher-quality code; high-quality code tends to be more secure code. You should also schedule time to research and apply patches to the OS and to any application that issues them. This is neither a waste of your time nor a failure of Microsoft or the vendor who provided the software in the first place.

You should also set aside a small test bed to learn the behavior of viruses, worms, and individual attacks. You'll learn how to recognize attacks by seeing how they affect systems and software.

In the end, the most important grade Microsoft's security efforts can receive is for providing the tools development and IT professionals need to identify security holes, to patch or otherwise address those holes, and to detect intrusions.

There are certainly gaps in the OS, in packaged applications, and especially in the development tools. But Microsoft is providing more tools for you to assess and manage your own security. Using them effectively will do more for your security than Microsoft could ever hope to implement across the entire platform.
