In a “traditional” IT setup, one machine performs only one function, for example acting as an application server. With virtualisation, numerous “virtual machines” can run on a single physical one, each performing a separate task.

Fewer physical machines mean greater energy efficiency and less use of space. Yet all of this potential can come with a cost.

In terms of maturity, despite its clear strength as a new IT platform, virtualisation remains a relatively young technology, particularly within businesses. Users are still not fully aware that virtual environments work on very different principles from “traditional” physical platforms. For instance, while a physical server can take hours or days to build and deploy, a virtual server takes only minutes.

The only way to address these differences is to use techniques and tools designed specifically with virtualisation in mind. However, too many organisations are still either throwing away well-thought-out management policies from the physical world or relying on approaches that no longer apply in the virtual one.

The results can counteract any benefits virtualisation brings. For example, because virtual machines are so easy to create, many administrators are not applying the standard policies for building and deploying a machine.

These include recording when it was built, what it is for and where it sits on the network. Left unmanaged, this can result in “virtual sprawl”, where machines proliferate in no particular order and nobody knows where in the system vital data or applications might be.
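The record-keeping described above can be as simple as a small inventory entry captured at provisioning time. The sketch below is a minimal illustration in Python; the field names and the in-memory registry are assumptions for illustration, not any particular product's schema (a real deployment would feed a CMDB or the hypervisor's tagging API).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VMRecord:
    """Minimal inventory entry for a newly provisioned virtual machine."""
    name: str             # hostname of the VM
    purpose: str          # what the VM is for
    owner: str            # who requested / is responsible for it
    network_segment: str  # where it sits on the network
    built_on: date = field(default_factory=date.today)  # when it was built

# Toy in-memory registry keyed by VM name.
registry: dict[str, VMRecord] = {}

def register_vm(record: VMRecord) -> None:
    """Refuse to provision a VM that has no inventory entry or a duplicate name."""
    if record.name in registry:
        raise ValueError(f"VM {record.name!r} already registered")
    registry[record.name] = record

register_vm(VMRecord("app-srv-01", "application server", "ops-team", "dmz-10"))
```

The point is not the data structure itself but the discipline: if creating a VM requires filling in this record first, sprawl cannot happen silently.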

Proper planning and management should be part of any IT project, but with virtualisation the principles and risks involved make it even more important. As noted above, with virtualisation it is possible to create huge IT infrastructures in a fraction of the time physical systems require.

Consequently, there is a real risk of creating an unwieldy environment that ends up devouring hours of precious management time. Therefore, start with the end in mind. If, as in many organisations today, you are using virtualisation to support a disaster recovery system, ensure that it adheres to the real-world policies you have spent years developing. Otherwise, finding and restoring a critical application or file will be akin to searching for a needle in a haystack.

Once a firm plan is in place, organisations should decide how to use the management tools and techniques available to their best advantage. For example, disaster recovery and high availability implementation can benefit greatly from virtualisation, not least because it becomes far easier and more cost effective to implement such an environment.

On the other hand, the backup of virtual machines requires a different approach to that taken with physical machines. To “traditional” physical tools, a virtual machine's entire disk appears as one “big” file. As a result, where only one or two individual files might need restoring, the job can take hours instead of minutes. Because of this, organisations need to look for specialist management and backup tools designed with virtualisation in mind.
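The difference between the two approaches can be shown with a toy model. The sketch below is purely illustrative, not any vendor's actual mechanism: the backup is modelled as one image blob, and a virtualisation-aware tool differs simply in being able to address individual files inside it rather than pulling back the whole disk.

```python
# Toy model of a VM backup: the whole virtual disk stored as one "big" file.
# Structure and names are illustrative assumptions, not a real backup format.
image = {
    "path": "backup/app-srv-01.img",
    "files": {
        "report.doc": b"quarterly figures",
        "config.ini": b"host=db1",
        "log.txt": b"...",
    },
}

def restore_whole_image(img: dict) -> dict:
    """Traditional approach: recover the entire disk to get at anything."""
    return dict(img["files"])  # every file comes back, needed or not

def restore_single_file(img: dict, name: str) -> bytes:
    """Virtualisation-aware approach: pull back only the requested file."""
    return img["files"][name]
```

In the toy model the cost difference is invisible, but on a real multi-gigabyte disk image the first function moves the whole image across the network while the second moves a few kilobytes; that is the gap between hours and minutes.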

Virtualisation presents a huge opportunity for organisations, and time must be taken to ensure it fulfils its promise. Once a decision is taken to employ virtualisation, that decision needs to be followed through entirely to its logical conclusion.

This means understanding the technology, using the appropriate tools and techniques, and carefully considering every step of the process. Without this, any dreams of virtualisation revolutionising the business will remain just that.

Ratmir Timashev is President and CEO of Veeam Software