Virtualisation represents a sea change in IT practices. Bound for years by the "one application, one server" rule, IT infrastructure was over-provisioned, underused and not cost-effective.
With the advent of virtualisation and the associated move to hosting multiple virtual machines on a single server, many of these problems disappeared.
Because multiple virtual machines can be placed on a single server, IT organisations can ensure that the machine's processing power is portioned out to many applications. Utilisation, often measured in single digits, can be increased to 70 percent or more, ensuring that far less capital is wasted on high-cost, little-used servers.
It's also no secret that the movement toward virtualisation has experienced what is sometimes referred to as "virtualisation stall." This refers to the fact that many organisations get around 25 percent of their total server population virtualised, and then progress stops.
When you look into why this happens, you usually find that the organisation has virtualised all of the easy servers (for example, dev machines and low-risk internal IT applications like DNS) but has failed to virtualise its production applications.
There are many reasons for this stall, but an important one is security. Essentially, security groups are unsure how to apply practices designed for a physical environment to a virtualised one. Despite this confusion, the direction is clear: Security practices must be updated to break the logjam of virtualisation stall.
Here are three of the most common issues confronted by security organisations as they move toward a virtualised future:
Lack of visibility into network traffic
Many security organisations monitor network traffic to identify and block malicious traffic and penetration attempts. Vendors have delivered specialised appliances that perform monitoring to ease the headaches of installation and configuration. These appliances can be installed on the network just like another server, and they can be up and running in hours or days. The appliance approach has simplified security practices and been an enormous boon to hard-pressed security groups and IT operations.
There's one problem with this approach, though, in a virtualised world. Virtual machines on the same server communicate via the hypervisor's internal networking, with no packets crossing the physical network where the security appliance sits ready to sniff them. Of course, if the virtual machines (VMs) reside on different servers, inter-VM traffic will run across the network and be available for inspection. For performance reasons, however, virtual machines associated with the same application (for example, an application's Web server and database server) are often on the same physical server.
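The blind spot can be illustrated with a toy model. Everything here (the class names, the host name, the frame format) is invented for illustration, not any hypervisor's actual API; the point is simply that same-host delivery never reaches the physical wire where an appliance sits:

```python
# Toy model of why an on-the-wire appliance misses same-host VM traffic.
# All names (Hypervisor, VM names, etc.) are illustrative, not a vendor API.

class Hypervisor:
    """Hosts VMs and switches frames between them in software."""
    def __init__(self, name):
        self.name = name
        self.vms = {}           # vm_name -> list of received frames
        self.wire_frames = []   # frames that actually left via the physical NIC

    def add_vm(self, vm_name):
        self.vms[vm_name] = []

    def send(self, src, dst, payload, remote_host=None):
        if dst in self.vms:
            # Same-host delivery: handled entirely by the virtual switch,
            # so a physical-network appliance never sees this frame.
            self.vms[dst].append((src, payload))
        else:
            # Cross-host delivery: the frame crosses the physical network
            # and is visible to an inline monitoring appliance.
            self.wire_frames.append((src, dst, payload))
            if remote_host is not None:
                remote_host.vms[dst].append((src, payload))

host = Hypervisor("esx-01")
host.add_vm("web")
host.add_vm("db")
host.send("web", "db", "SELECT * FROM orders")  # stays inside the host
print(len(host.wire_frames))  # 0: nothing on the wire for the appliance to sniff
```

Place the web and database VMs on different hosts and the same `send` call would cross the wire, which is exactly why co-locating an application's tiers for performance also hides their traffic from inspection.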
Fortunately, vendors have stepped forward to address this. Virtualisation vendors have provided hooks into their hypervisors that network vendors such as Cisco and Arista have used to integrate with virtual switches that, in turn, enable traffic inspection. So this problem is not insurmountable, though it does require an upgrade to the current method of network switching and the use of security products integrated with the newer model. You can translate this as a need for more financial investment. But lack of visibility alone is no reason for organisations to put off virtualising production applications.
Performance-sapping security overhead
The benefits of supporting multiple virtual machines on a single server have become obvious to the server manufacturers themselves, and they have modified their server designs accordingly. Unlike yesterday's pizza-box 1U machine that could support perhaps five virtual machines, today's 4U blade servers come stuffed with hundreds of gigabytes of memory and numerous network interface cards. As a result, servers can now commonly support 25 or 50 virtual machines. Cost-effectiveness and utilisation are high, but hosting so many VMs on a single box can cause other issues.
One common problem is the result of each server managing its own security products. A prime example is antivirus. In many IT organisations, every server updates its antivirus signature files at the same time every day, resulting in 25 or 50 virtual machines launching the same activity all at once. This bogs down the server, resulting in lower throughput.
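One common mitigation, short of re-architecting, is to stagger the update schedule so co-resident VMs don't all fire at once. The sketch below is a hypothetical illustration of that idea (the window sizes and VM names are made up); each VM derives a stable, spread-out update slot from its own name:

```python
# Hypothetical mitigation sketch: spread antivirus signature updates across a
# jitter window so 25-50 co-resident VMs don't all hit disk and network at the
# same moment. Window sizes and VM names are invented for illustration.
import random

BASE_UPDATE_MINUTE = 3 * 60   # nominal 03:00 update window
JITTER_MINUTES = 120          # spread updates across two hours

def scheduled_update_minute(vm_name):
    """Assign each VM a stable offset within the jitter window.

    Seeding random.Random with the VM's name makes the slot deterministic
    for that VM across runs, while spreading different VMs apart.
    """
    rng = random.Random(vm_name)
    return BASE_UPDATE_MINUTE + rng.randrange(JITTER_MINUTES)

vms = [f"vm-{i:02d}" for i in range(25)]
schedule = {vm: scheduled_update_minute(vm) for vm in vms}

# How many VMs share the single busiest minute? With 25 VMs spread over a
# 120-minute window it stays far below the 25-at-once storm.
slots = list(schedule.values())
busiest_minute = max(slots.count(m) for m in set(slots))
print(busiest_minute)
```

The same jitter idea applies to any synchronised per-VM activity (backups, log rotation, patch checks), though as the following paragraphs describe, the more fundamental fix is to stop running the duplicated work on every VM at all.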
Fortunately, new technical solutions are available. First, just as the virtualisation vendors opened up APIs to allow network vendors to integrate into the hypervisor, they now have also opened up APIs to allow security companies to deliver new products that do not need to be installed on every virtual machine. Instead, the products themselves are virtual machines.
When the hypervisor recognises traffic that requires, say, calling an antivirus program (for example, an access call for a document that must be scanned before opening), it forwards the call to the antivirus software on the virtual machine, and the VM performs the scan. Instead of 25 machines all running their own antivirus, one virtual machine runs antivirus on behalf of all 25 - obviously a better approach.
The second approach is, as you might guess, cloud-based. For something like the repetitive antivirus scanning of documents, which requires the distribution of hundreds of thousands (perhaps even millions) of copies of antivirus signature files, why not have the millions of end points call one centrally located, cloud-based solution? The vendor can ensure it has sufficient resources to handle all traffic, and the user avoids performance issues and doesn't have to invest more capital in security software. This approach offers significant benefits, and we'll be hearing more about cloud-based approaches to security in the near future.
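The cloud variant replaces the local signature file entirely: the endpoint hashes a file and asks a central service for a verdict, caching the answer. The sketch below invents the service and its data for illustration; a real deployment would call a vendor's file-reputation API rather than this stand-in:

```python
# Minimal sketch of the cloud-lookup idea: endpoints hash each file and query
# one central verdict service instead of each holding its own signature files.
# The service and its known-bad table are invented for illustration.
import hashlib

class CloudVerdictService:
    """Central service: one authoritative table of known-bad file hashes."""
    KNOWN_BAD = {hashlib.sha256(b"malware-sample").hexdigest()}

    def lookup(self, digest: str) -> str:
        return "malicious" if digest in self.KNOWN_BAD else "clean"

class Endpoint:
    """Endpoint keeps only a small verdict cache, not full signature files."""
    def __init__(self, service):
        self.service = service
        self.cache = {}

    def check(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.cache:           # one cheap network round-trip
            self.cache[digest] = self.service.lookup(digest)
        return self.cache[digest]

svc = CloudVerdictService()
endpoint = Endpoint(svc)
print(endpoint.check(b"quarterly report"))  # clean
print(endpoint.check(b"malware-sample"))    # malicious
```

Because only hashes cross the network, the vendor can update its verdict table once, centrally, instead of pushing signature files to millions of endpoints.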
The perimeter is breached
In January I attended the inaugural Security Threats Conference in Washington, DC. One recurring theme was that it is foolish to believe your perimeter is impenetrable. The rise of organised criminal enterprises and the emergence of state-sponsored hackers mean that extremely sophisticated attacks are being marshalled against interesting targets.
Larry Clinton, CEO of the Internet Security Alliance, provided some frightening statistics about current security threats and their effect on today's practices. In short, today's security approaches are inadequate. Malevolent actors will get onto your network if they turn their gaze to your organisation. They can set up long-lived bots that sift through your servers to identify and steal important data. These actors go under the rubric "advanced persistent threats," or APT for short.
What to do?
One approach, of course, is to integrate a new layer of security products designed to address APT. There are old and new vendors ready to sell you products targeted at APT. I won't dismiss this approach, but my take on this type of threat is that it increases the importance of security practices at the individual server or VM level - in other words, security at the instance level. You should definitely be running integrity monitoring and using an on-board intrusion-prevention system.
Putting these products on each virtual machine clashes with the "move security off the VM" approach, of course, but here's a better way to think about it: security that can only be executed on the machine should be on the machine, while security that can be shared across several machines should be migrated to a central location. There is no perfect answer, but security has always been a balancing act, right?
The economics of virtualisation mean that this model of computing is likely to become widespread. Trying to ward off this spread just because current security practices don't yet support it is as futile as trying to hold back the tide.