With a virtualised server, you run the proposed combination of VMs and applications on a physical server and monitor the results using monitoring tools from a third party or the virtualisation supplier, rather than a tool like Intel's Iometer, which works through the operating system. Remember that in products like VMware and Xen, the operating system itself runs on a virtual machine, so you need something that can see through the hypervisor layer to the underlying hardware. Either way, you use a suite of test data, preferably drawn from your own enterprise.
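The gap between what a guest operating system reports and what the hardware is actually doing is the whole reason hypervisor-level monitoring matters. As a purely illustrative sketch (the VM names, the `guest_cpu_pct`/`host_cpu_pct` fields and the threshold are invented, not taken from any monitoring product), a simple check might flag VMs whose guest-reported utilisation diverges from what the hypervisor sees:

```python
# Hypothetical sketch: compare guest-reported CPU use with what the
# hypervisor sees for the same VM. In practice the host-side figures
# would come from a tool that reads the hypervisor layer, not the guest.

def divergent_vms(samples, threshold=15.0):
    """Return names of VMs whose guest-reported CPU percentage differs
    from the hypervisor-reported figure by more than `threshold` points."""
    flagged = []
    for vm in samples:
        gap = abs(vm["guest_cpu_pct"] - vm["host_cpu_pct"])
        if gap > threshold:
            flagged.append(vm["name"])
    return flagged

# Invented sample data: vm2's guest thinks it is mostly idle, but the
# hypervisor is doing heavy work on its behalf (e.g. I/O emulation).
samples = [
    {"name": "vm1", "guest_cpu_pct": 40.0, "host_cpu_pct": 45.0},
    {"name": "vm2", "guest_cpu_pct": 20.0, "host_cpu_pct": 70.0},
]
print(divergent_vms(samples))
```

Here vm2 would be flagged: the guest sees 20 per cent utilisation while the hypervisor sees 70 per cent, a gap invisible to any tool that works only through the guest operating system.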
One major difference comes when it's time to go live. Rather than installing the operating system and applications individually on the new server, you simply clone the whole stack over - a process that usually takes a matter of minutes.
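At the file level, cloning the whole stack is essentially copying the VM's disk image rather than re-running operating system and application installers. A minimal sketch using only the Python standard library (the file names are invented, and a throwaway file stands in for a real disk image; real tools add sparse-copy and metadata handling on top of this core step):

```python
import os
import shutil
import tempfile

def clone_vm_image(src_image, dest_image):
    """'Clone' a virtual machine by copying its disk image file.
    Returns the size of the new image so the caller can sanity-check it."""
    shutil.copyfile(src_image, dest_image)
    return os.path.getsize(dest_image)

# Demo with a small throwaway file standing in for a VM disk image.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "golden.img")
with open(src, "wb") as f:
    f.write(b"\0" * 1024)  # stand-in for a real multi-gigabyte image
size = clone_vm_image(src, os.path.join(workdir, "newserver.img"))
print(size)
```

The copy time scales with image size and link speed, which is why the go-live step is minutes rather than the hours an install-from-scratch would take.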
This concern for interactions applies once the server is in production, too. Enomaly's Anderson cites the example of a DBMS and a Web server on the same physical platform. As long as the database can keep all its tables in RAM, it has relatively little effect on a disk-intensive application like a Web server, Anderson says. But when a table grows large enough that the DBMS has to start paging it out to virtual memory, it can have a big impact on the Web server - and both applications' performance can suddenly go sideways.
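Anderson's DBMS-plus-Web-server example boils down to a crude placement check. The sketch below is illustrative only (the RAM figure and working-set sizes are invented): it flags a physical host where the combined working sets would exceed physical RAM, which is the point at which paging begins and the disk-bound neighbour starts to suffer.

```python
def ram_overcommitted(host_ram_gb, vm_working_sets_gb):
    """True if the VMs' combined working sets exceed physical RAM -
    the point at which paging to disk starts and a disk-intensive
    neighbour on the same box begins to feel it."""
    return sum(vm_working_sets_gb) > host_ram_gb

# Invented figures: a DBMS whose tables have grown, plus a Web server.
print(ram_overcommitted(32, [28.0, 6.0]))  # combined 34 GB > 32 GB: paging
print(ram_overcommitted(32, [20.0, 6.0]))  # combined 26 GB fits: no paging
```

The non-linearity in Anderson's example is exactly this threshold: nothing changes gradually as the table grows, until the day the sum crosses the physical limit.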
Keeping it all together
While you don't want virtual servers with the same resource demands on the same physical box, you probably want to keep at least some of your physical servers in close physical and network proximity.
The reason: to ease switching virtual servers among physical servers should the need arise. If a group of physical servers shares network connections and other resources, the time to switch a virtual server from one physical server to another can be greatly reduced, as can the configuration effort.
"If you have a bunch of servers in a rack, you can turn off the virtual machine [on one server] and come up on a server on the same switch," says Anderson, "and the time it takes is literally the amount of time it takes to copy the hard drive."
On the other hand, he says, if the machine you're transferring to "is on the other side of the data centre, plugged into different switches and different subnets, then the time to do the reconfiguration could be an extra five or 10 minutes. Or if there's other stuff, then that could take a lot longer.
"It's just in the way you laid out your data centre," he adds.
Of course, there are limits to proximity, whether physical or network. You don't want the servers to overload shared network connections, for example, and you may want to keep some physical servers a distance away -- or even in a different state -- for disaster recovery. It's a balancing act.
Keeping track of hardware
Non-linear behaviour of this sort - applications that coexist happily until one crosses a threshold - complicates another aspect of managing virtual servers. Administrators need to closely monitor resource demands on each physical server.
This is not the same as the demands reported by the operating systems on the virtual machines. Administrators need to look at what's happening on the actual hardware as well, and keep an eye on trends to avoid sudden resource starvation as applications' resource profiles change.
What's more, tracking the physical hardware has to be done in detail. Because the various virtualised applications place different loads on the different kinds of resources the server supplies, things like RAM, processor cycles and I/O bandwidth have to be tracked separately.
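The per-resource tracking described above might look like the following sketch. All names, figures and thresholds here are invented; the point is simply that RAM, CPU and I/O are separate time series per physical host, and that a rising trend in any one of them - not just its current value - is what warns of approaching starvation:

```python
from collections import defaultdict

class HostResourceTracker:
    """Track RAM, CPU and I/O utilisation (as percentages) separately
    per physical host, and flag any resource trending towards saturation."""

    def __init__(self):
        # (host, resource) -> list of utilisation samples, oldest first
        self.history = defaultdict(list)

    def record(self, host, resource, pct):
        self.history[(host, resource)].append(pct)

    def at_risk(self, host, limit=85.0, window=3):
        """Resources whose last `window` samples are strictly rising
        and whose latest sample is at or above `limit`."""
        risky = []
        for (h, res), samples in self.history.items():
            if h != host or len(samples) < window:
                continue
            recent = samples[-window:]
            rising = all(a < b for a, b in zip(recent, recent[1:]))
            if rising and recent[-1] >= limit:
                risky.append(res)
        return sorted(risky)

tracker = HostResourceTracker()
for pct in (60, 75, 90):      # RAM climbing steadily towards saturation
    tracker.record("host1", "ram", pct)
for pct in (40, 35, 38):      # CPU fluctuating, not trending upwards
    tracker.record("host1", "cpu", pct)
print(tracker.at_risk("host1"))
```

With these invented samples, only RAM is flagged: CPU use bounces around harmlessly, while RAM's steady climb past the threshold is the early warning the trend-watching is meant to catch.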