DevSmart

Diving into the Virtual Pool

Virtualization is enhancing the software development life cycle. An overview of what you should consider.

It starts with server consolidation, and before you know it, it's virtualization this and virtualization that. Once the server farm is turned into a seamless pool of computing resources, that oh-so-useful layer of abstraction between your computer hardware and the software running on it just seems to go with everything: storage systems, network resources, application development and testing.

Interestingly, it is this last area -- dev-test -- that has proved to be the most common toe-in-the-water enterprise entry point for this technology. Over the last few years, VMware, VMLogix, Virtual Iron, Surgient, Microsoft and others have been driving virtualization deeper into the software development life cycle with tools designed to allow developers working on a single hunk of hardware to write code that runs in a cornucopia of environments, and to test that code before it's deployed.

Configuration Library
Virtualization software lets developers store standard x86-based configurations as virtual machines (VMs) loaded with operating systems, browsers and applications. The VM container has emerged as a better way to capture a multitude of software configurations. VMs are highly portable; in virtualized environments, it's just easier to move things around, to encapsulate, to archive and to optimize. Consequently, geographically dispersed dev and test teams can exchange complete environments with each other.
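
To make that exchange reliable, a team might publish a simple manifest alongside its image library. Here's a minimal sketch in Python; the library path, file extension and JSON layout are chosen purely for illustration and aren't tied to any particular virtualization product.

    # Minimal sketch: build a manifest for a shared VM image library so
    # distributed teams can verify the environments they exchange.
    # The paths, file extension and JSON layout are illustrative assumptions.
    import hashlib
    import json
    from pathlib import Path

    LIBRARY_DIR = Path("/srv/vm-library")       # assumed location of shared images
    MANIFEST = LIBRARY_DIR / "manifest.json"

    def checksum(path: Path, chunk_size: int = 1 << 20) -> str:
        """Return the SHA-256 digest of a (potentially large) VM disk file."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_manifest() -> None:
        entries = []
        for image in sorted(LIBRARY_DIR.glob("*.vmdk")):    # or .vhd, .qcow2, etc.
            entries.append({
                "name": image.name,
                "bytes": image.stat().st_size,
                "sha256": checksum(image),
            })
        MANIFEST.write_text(json.dumps(entries, indent=2))

    if __name__ == "__main__":
        build_manifest()

A team receiving an archived environment can recompute the checksums and compare them against the manifest before loading the VM.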

Virtualization also provides a means of testing applications across a heterogeneous enterprise -- different server and client operating systems, databases, middleware and browsers. Testers can share complex configuration and bug scenarios with enormous accuracy. And because the latest generation of virtualization tools allows developers to pool resources and allocate them on an as-needed basis, development managers are increasingly spared the complex, inefficient and costly task of setting up and tearing down physical development and test environments.
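
In practice, that heterogeneity boils down to a test matrix. The sketch below enumerates the combinations and hands each one off for on-demand provisioning; the template names are hypothetical, and provision_and_test() is a stand-in for whatever lab-management tool a shop actually uses, not any vendor's API.

    # Minimal sketch: enumerate a heterogeneous test matrix and map each
    # combination to a VM template that could be cloned on demand.
    # provision_and_test() is a hypothetical stand-in for a real
    # lab-management tool's provisioning call.
    from itertools import product

    OPERATING_SYSTEMS = ["windows-server", "ubuntu-lts", "rhel"]
    DATABASES = ["postgres", "sqlserver"]
    BROWSERS = ["chrome", "firefox", "edge"]

    def provision_and_test(os_name: str, db: str, browser: str) -> None:
        """Placeholder: clone the matching template, run the suite, tear it down."""
        template = f"tmpl-{os_name}-{db}"
        print(f"would provision {template} and run the suite against {browser}")

    def run_matrix() -> None:
        for os_name, db, browser in product(OPERATING_SYSTEMS, DATABASES, BROWSERS):
            provision_and_test(os_name, db, browser)

    if __name__ == "__main__":
        run_matrix()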

Virtual Wisdom
All of these capabilities are making development managers' lives easier. But they still have to run the virtualized show. Here are a few of the things that should be on their to-do lists:

Virtual machines should be managed in the same manner as physical systems. These VM containers may be just a bunch of files, but if you want your developers working in relevant environments, the VMs need maintenance. Apply all your standard corporate maintenance practices. Update the operating systems and deploy patches on schedule.
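
A small script can help keep that maintenance honest. The sketch below assumes a hand-rolled inventory recording when each library image last received OS updates; the image names and the 30-day threshold are illustrative, not a recommendation from any vendor mentioned here.

    # Minimal sketch: flag library VMs whose guest OS hasn't been patched
    # recently. The inventory (name -> date of last patch run) and the
    # threshold are assumptions for illustration.
    from datetime import date, timedelta

    INVENTORY = {
        "win-iis-base": date.today() - timedelta(days=12),
        "ubuntu-lamp-base": date.today() - timedelta(days=45),
        "rhel-jboss-base": date.today() - timedelta(days=90),
    }
    MAX_AGE = timedelta(days=30)

    def stale_images(today: date) -> list[str]:
        """Return the names of images overdue for OS updates and patches."""
        return [name for name, patched in INVENTORY.items()
                if today - patched > MAX_AGE]

    if __name__ == "__main__":
        for name in stale_images(date.today()):
            print(f"{name}: overdue for scheduled updates")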

Apply best practices for security. Gartner Inc. analysts expect VMs to become a target of new security threats, because these systems use a privileged layer of software that, if compromised, places all consolidated workloads at risk. The process of securing VMs, Gartner says, must start before these systems are deployed -- even before a vendor or technology is selected -- so that security can be factored into the evaluation process.

Virtualization solutions come with, or allow a company to build, libraries of pre-loaded virtual system environments, which can be shared among dev and test teams. These shared libraries can be configured in just about any combination of hardware, operating system, service pack, and applications imaginable. But managers will want to establish a base library of VM images, versioned VMs and templates. Among other things, this will help with troubleshooting, disaster recovery and rollbacks.
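
One lightweight way to keep those base images versioned is a shared registry file that records every template revision. The sketch below assumes a JSON registry at a made-up path; lab-management products handle this bookkeeping internally, so treat it as an illustration of the idea rather than a replacement for those tools.

    # Minimal sketch: version a base library of VM templates so teams can
    # roll back to a known-good image. The registry path, template names
    # and on-disk layout are hypothetical.
    import json
    from pathlib import Path

    REGISTRY = Path("/srv/vm-library/registry.json")    # assumed location

    def register(template: str, version: str, image_path: str) -> None:
        """Record a new version of a template in the shared registry."""
        data = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
        data.setdefault(template, []).append(
            {"version": version, "image": image_path})
        REGISTRY.write_text(json.dumps(data, indent=2))

    def latest(template: str) -> dict:
        """Return the most recently registered version of a template."""
        data = json.loads(REGISTRY.read_text())
        return data[template][-1]

    if __name__ == "__main__":
        register("win-iis-base", "1.4", "/srv/vm-library/win-iis-base-1.4.vmdk")
        print(latest("win-iis-base"))

Rolling back is then a matter of picking an earlier entry for the same template.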

Just like a physical dev-test lab, a virtual development and testing environment must be kept separate from the production environment.
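
That separation can even be spot-checked automatically. As a rough illustration only, with the lab and production subnets standing in for a shop's real addressing plan, the sketch below verifies that a list of lab VM addresses stays inside the lab network.

    # Minimal sketch: confirm that lab VMs sit on the isolated dev-test
    # subnet rather than the production network. The subnets and addresses
    # are placeholders.
    from ipaddress import ip_address, ip_network

    LAB_NET = ip_network("10.10.0.0/16")         # assumed dev-test segment
    PROD_NET = ip_network("10.20.0.0/16")        # assumed production segment
    LAB_VM_ADDRESSES = ["10.10.4.21", "10.10.7.3", "10.20.1.9"]

    def misplaced(addresses: list[str]) -> list[str]:
        """Return any address outside the lab segment or inside production."""
        bad = []
        for addr in addresses:
            ip = ip_address(addr)
            if ip not in LAB_NET or ip in PROD_NET:
                bad.append(addr)
        return bad

    if __name__ == "__main__":
        for addr in misplaced(LAB_VM_ADDRESSES):
            print(f"{addr}: not on the isolated dev-test network")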

Different Architectures
Finally, when people talk about the different types of "virtualization," they're usually referring to the implementation of that abstraction layer across servers, storage and network systems. But development managers should also be aware of the differences among basic virtualization architectures:

  • Single operating system image: User processes are grouped into resource containers, and the shared kernel manages their access to physical resources. This type of virtualization scales well, but maintaining isolation among the different containers is tricky.
  • Full virtualization: The entire operating system and its applications are virtualized as a guest OS running on top of the host OS, which allows a single host to run many different guest operating systems. This approach presents an abstraction layer that intercepts all calls to physical resources (a quick way to check a host for hardware virtualization support appears in the sketch after this list).
  • Para-virtualization: The entire OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The downside: the guest operating system's kernel must be modified to accommodate this close interaction with the hypervisor.
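
As a small, concrete aside on the full-virtualization case, the sketch below (Linux-only, and independent of any of the vendors above) checks whether a host CPU advertises the hardware virtualization extensions -- "vmx" on Intel processors, "svm" on AMD -- that modern full-virtualization platforms commonly lean on.

    # Minimal sketch: check whether a Linux host's CPU advertises hardware
    # virtualization extensions (Intel VT-x reports "vmx", AMD-V reports
    # "svm") by reading /proc/cpuinfo. Linux-only.
    from pathlib import Path

    def hw_virt_flags() -> set[str]:
        """Return whichever of the vmx/svm CPU flags the host reports."""
        cpuinfo = Path("/proc/cpuinfo").read_text()
        flags: set[str] = set()
        for line in cpuinfo.splitlines():
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
        return flags & {"vmx", "svm"}

    if __name__ == "__main__":
        found = hw_virt_flags()
        print("hardware virtualization support:",
              ", ".join(sorted(found)) if found else "none detected")
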
The advent of tools like VMware's Lab Manager, VMLogix's LabManager, and Virtual Iron's Virtualization Manager, among others, has led industry watchers to predict a confluence of the virtualization and application lifecycle management (ALM) markets. The trend "bodes well," writes IDC analyst and RDN columnist Melinda-Carol Ballou, "for users seeking to gain virtualization benefits when building software and managing software development."

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
