Long ago, when servers still came one to a box, "sysadmins" spent all their time running from one machine to another, with boxes of tools and utilities designed to squeeze every bit of performance and stability out of physical servers. Now, virtual servers outnumber physical ones in most data centres. And neither budgets nor toolboxes are over-provisioned with resources for fine-tuning virtual infrastructures.
Companies expanding beyond the pilot phase into large-scale server or desktop virtualisation need to realise that utilities from third parties, not just from the platform vendors, are what will help them make virtual infrastructures as stable as physical ones, some analysts say.
"One of the most common misconceptions among the non-technologist crowd is that once you buy your virtualisation platform, you are done with your software purchases," warns Greg Shields, who writes extensively in books and blogs on the details of virtual infrastructure implementations and serves as partner and principal technologist at Concentratedtech.com.
"When you virtualise an environment you add in dependencies no single human can do a truly good job of understanding," Shields says. "On a physical server a network problem is probably related to the card. On a virtual server, the card is virtual, so the problem could be it's not getting enough processor power, or the storage performance. You need very broad-based tools that will address those metrics."
The fight between HP and Dell to acquire virtual storage optimisation vendor 3Par shows how important management and optimisation products are in keeping virtualised infrastructures running, according to Chris Wolf, senior analyst at Burton Group.
"This year especially we saw a lot of big organisations virtualising serious enterprise applications," Wolf says. "When you have mission critical apps virtualised or in the cloud, diagnosing application problems and optimising performance in the virtual environment becomes very important."
It's impossible to say which utility or ISV offers the best tool for every environment, Shields says. But five types in particular are key to getting virtual infrastructures humming right now.
1. Capacity management
"Virtualisation is taking what became sort of an also-ran activity, capacity management, and showing why it's really a critical step," Shields says.
Multi-processor, multi-core servers and acres of RAM made planning for server capacity almost moot, Shields says. With virtual servers, however, the question isn't the power of the server; it's how that capacity is doled out to specific workloads on specific virtual machines, and whether monitoring shows that all the resource demands of those VMs are being satisfied.
"It goes beyond not being able to automate anything until you know what you have," Wolf says. "Without capacity management you don't know what a particular service is costing the organisation and that makes it harder to build out your infrastructure as a service."
VMware's vCenter CapacityIQ is effective at identifying utilisation gaps, Shields says, but there are plenty of other options. These range from old school IT favourites retooled to cover virtual as well as physical, such as BMC's Capacity Management and HP's Insight Dynamics, to purpose-built virtualisation management tools from VKernel, VMTurbo and Embotics.
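The core of what these capacity tools do can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not drawn from any of the products named above): it computes the vCPU-to-physical-core overcommit ratio per host and flags hosts that exceed a chosen threshold; the host names, core counts, and the 4:1 threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    physical_cores: int
    vcpus_allocated: int  # sum of vCPUs across all VMs placed on this host

def overcommit_ratio(host: Host) -> float:
    """vCPU-to-physical-core ratio: a rough signal of capacity pressure."""
    return host.vcpus_allocated / host.physical_cores

def flag_overcommitted(hosts: list[Host], threshold: float = 4.0) -> list[str]:
    """Return names of hosts whose overcommit ratio exceeds the threshold."""
    return [h.name for h in hosts if overcommit_ratio(h) > threshold]

hosts = [
    Host("esx-01", physical_cores=16, vcpus_allocated=48),  # 3:1, fine
    Host("esx-02", physical_cores=16, vcpus_allocated=80),  # 5:1, flagged
]
print(flag_overcommitted(hosts))  # → ['esx-02']
```

A real product would pull the allocation figures from the platform's inventory API and track trends over time rather than a single snapshot, but the underlying question is the same one Shields describes: is the carved-up capacity keeping pace with what the workloads demand?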
2. Performance optimisation
Performance problems in physical servers are relatively easy to spot because most functions are associated with a specific component. Swap it out and you're good to go.
"On a virtual server a performance issue could be related to spindle contention in storage, an oversubscription of RAM, an undersubscription of RAM, under- or over-subscription of processors, bandwidth utilisation; a whole series of dependencies that make it hard to put your finger on the problem without a deep analysis of what's going on inside," Shields says.
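The "broad-based" checking Shields describes can be illustrated with a toy example. Everything below is hypothetical: the VM names, metric values, and thresholds are invented, and in practice the numbers would come from a platform API (for instance, hypervisor performance counters) rather than a hard-coded dictionary. The sketch simply checks several virtual-layer metrics at once, since no single one pinpoints the problem.

```python
# Invented per-VM metrics for illustration only.
SAMPLE_METRICS = {
    "web-01": {"cpu_ready_pct": 12.0, "balloon_mb": 0,   "disk_latency_ms": 4},
    "db-01":  {"cpu_ready_pct": 2.0,  "balloon_mb": 512, "disk_latency_ms": 35},
}

# Example thresholds; real tuning depends on the environment.
THRESHOLDS = {
    "cpu_ready_pct": 5.0,   # VM waiting on a physical CPU -> processor contention
    "balloon_mb": 0,        # host reclaiming guest memory -> RAM oversubscription
    "disk_latency_ms": 20,  # slow I/O -> possible spindle contention in storage
}

def diagnose(metrics: dict) -> dict:
    """For each VM, list every metric that crosses its threshold."""
    report = {}
    for vm, values in metrics.items():
        issues = [name for name, limit in THRESHOLDS.items()
                  if values[name] > limit]
        if issues:
            report[vm] = issues
    return report

print(diagnose(SAMPLE_METRICS))
# → {'web-01': ['cpu_ready_pct'], 'db-01': ['balloon_mb', 'disk_latency_ms']}
```

The point of the sketch is the shape of the problem, not the specific cut-offs: a "network" symptom on db-01 might actually trace back to memory ballooning and storage latency, which is exactly why single-component troubleshooting breaks down in a virtual environment.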