Gannett is the largest newspaper publisher in the United States, with 85 daily papers including USA Today and nearly 1,000 non-daily publications. The company also operates 23 US television stations and a large number of websites affiliated with its various properties. As you might expect, all that content creates a heavy demand on the company's IT infrastructure, which supports nearly 50,000 employees at about 200 locations.

To help it keep up with demand without breaking the bank, in 2002 the company began exploring virtualisation technology. It hoped to improve its x86 servers' utilisation rates, which at the time averaged no more than 10%.

Today the company has well over 1,000 virtual machines running on more than 50 VMware hosts, says Eric Kuzmack, IT architect at Gannett. Virtualisation has been a big success, delivering ROI numbers that “nobody would believe”, Kuzmack says, but adding that it's not for every application and there is no shortage of enhancements he'd like to see, especially in terms of management and accounting tools.

What kinds of applications are you supporting using virtualisation?

Kuzmack: All kinds. Our general philosophy when deploying new applications is to virtualise them unless the application owner or the vendor we purchase from has a good reason not to.

We've come across a few application types that tend not to be great candidates for virtualisation, such as large databases and those that do a lot of polling, like network monitoring applications. But we're virtualising most other kinds of workloads: intranet web servers, database servers, various application servers, Active Directory and portions of Exchange. Microsoft has taken a very hard stance against virtualising Exchange 2007, so we're not virtualising our Exchange 2007 mailbox servers, but we are using virtualisation for some of the other components of Exchange, as well as for disaster recovery components. And for the most part, we really haven't had problems at all.

You set out to improve server utilisation rates. What have you achieved?

Kuzmack: When we start approaching 60% to 70% processor utilisation we'll add servers to our farms. We like to leave some headroom to handle spikes. Generally we'll go up to eight physical servers and then start a new farm. Or when there's a generation change in the processor, we're essentially forced to start a new farm because you can't use VMware's VMotion technology across two Intel processors of different families. (Ed. note: VMotion makes it possible to move a running virtual machine from one physical server to another without disruption.) Intel is introducing some features in its new chips that are supposed to help moving [virtual machines] between processor families, so that won't be as big of a deal.
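Purely as an illustration (this is not Gannett's actual tooling, and the function and threshold names are invented here), the policy Kuzmack describes could be sketched as:

```python
# Illustrative sketch of the capacity policy described above: add a host
# when average CPU utilisation approaches the 60-70% band, and start a
# new farm once a farm reaches eight hosts. Names and exact thresholds
# are assumptions for the sake of the example.

HEADROOM_THRESHOLD = 0.60   # act before spikes exhaust the headroom
MAX_HOSTS_PER_FARM = 8      # farm size cap mentioned in the interview

def next_action(host_utilisations):
    """Return 'ok', 'add_host' or 'new_farm' for one farm, given each
    host's CPU utilisation as a fraction between 0.0 and 1.0."""
    avg = sum(host_utilisations) / len(host_utilisations)
    if avg < HEADROOM_THRESHOLD:
        return "ok"
    if len(host_utilisations) < MAX_HOSTS_PER_FARM:
        return "add_host"
    return "new_farm"
```

A lightly loaded farm stays as-is; a busy farm under eight hosts grows; a busy eight-host farm triggers a new farm, mirroring the process described above.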

Management was a big concern for you early on. How would you assess the general state of the tech today?

Kuzmack: There's still a wide disparity between what the various vendors offer. There's a lot of talk. Microsoft, XenSource (recently acquired by Citrix Systems), Virtual Iron and everybody else is coming up with their own management tools. What we don't really have yet is a good, proven story on taking a Xen virtual machine from anybody's hypervisor and running it on somebody else's hypervisor. Or having a Microsoft hypervisor in the same pool as a Virtual Iron server and being able to move a [virtual machine] from one to the other. So, at the industry level, there's still a long way to go. VMware is certainly well beyond anybody else in the market [in terms of] management.

What kinds of things can you do with VMware that you can't with some of the others?

Kuzmack: At a very basic level, it's easy and flawless with VMotion. I right-click on a server, click migrate, hit enter a couple of times and I'm done. The other vendors in the market are coming out with [similar technology], but it's still a ways away. And once they do come out with it, how stable is it going to be? We've been using VMotion since 2003, which is a very long time.

Why is that capability so important to you?

Kuzmack: VMotion was really the feature that cemented our decision to go down the virtualisation road. The biggest concern management had when we started looking at virtualisation was the 'too many eggs in one basket' problem [and VMotion solves that].

We didn't want to have 10, 15 or 20 applications go down because of a hardware problem or because we needed to do maintenance. So, when VMotion came out and we started working with it (we were one of the two non-VMware entities that beta tested VMotion), it dawned on us how important it was. Other vendors have kind of dismissed VMotion as a curiosity, but they're plainly wrong. Very shortly after we set it up, we had several cases where we used it to the company's benefit. And it's very easy to set up.

Are there other virtualisation management challenges that have not yet been met?

Kuzmack: How much time do you have? For one, no one's quite gotten to cost accounting yet. There are two pieces to this. We don't do internal chargebacks, but in general it's important for us to understand [virtual machines] aren't free. One of the downsides of virtualisation is a lot of folks say, 'Oh, we'll just spin up another [virtual machine]'.

So having tools to identify how much a particular farm costs, including the servers and the disks and everything, and how it's being utilised and at what percentages, would enable you to come up with a cost of ownership for a particular [virtual machine].

And another challenge is growth prediction, where if you've got a set number of [virtual machines], being able to look at how those are being utilised and, based on that, project how many additional [virtual machines] of similar characteristics you could put in a given environment before you'll run out of resources. Those kinds of things are critical.
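As a back-of-the-envelope sketch of the growth projection Kuzmack is asking for (hypothetical; no such tool or function is named in the interview, and the equal-contribution assumption is a simplification), it might look like:

```python
# Hypothetical growth-projection sketch: if N similar VMs currently drive
# a farm to a given utilisation, estimate how many more similar VMs fit
# before the farm reaches its headroom ceiling. Assumes, simplistically,
# that each VM contributes equally to utilisation.

def additional_vm_capacity(current_vms, current_utilisation, ceiling=0.65):
    """Project how many more similar VMs fit before average utilisation
    reaches the ceiling (all values are fractions of total capacity)."""
    per_vm = current_utilisation / current_vms
    return int((ceiling - current_utilisation) / per_vm)
```

So a farm whose 20 VMs drive 40% utilisation could, under these assumptions, absorb roughly 12 more similar VMs before hitting a 65% ceiling. A real tool would also have to weigh disk, memory and I/O, which is exactly the gap Kuzmack describes.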

Today a new application comes in, and it's purely a guess as to whether or not the amount of virtual resources you have will fit the application, which in a sense is similar to the physical world. Except people are a lot more comfortable with the physical world and, generally speaking, you either pick a two-, four- or eight-processor box. You don't have a whole lot of tiers in there. But in the virtual world, we're able to nuance our resources much more efficiently than in the physical world. The flip side is that you can't necessarily fix everything with brute-force performance: 'Oh, this application is slow. Instead of troubleshooting the application, just put it on a faster box.'

Also, the management tools out there are great at managing two, three or four host servers, but when you start getting into 50 or 100 hosts spread across multiple divisions or subsidiaries, all of the tools still have a fairly long way to go.

So our subsidiaries that have a large number of hosts have their own instance of the management tools. Some of the smaller environments that have two, four or five servers are on our central management system. But the management software is fairly pricey, and we prefer not to have to buy multiple instances of it.

What have been the biggest challenges to implementing virtualisation?

Kuzmack: Honestly, there really weren't many. We've only run into one or two bugs of any substance since we started. And the issues we had were not technical. They were what we like to call the 'eighth layer' of the OSI model, the political layer. People want to have their own servers. Or if you're sharing a resource and you run out, then some little application may come along that has to bear the expense of a new physical piece of hardware. So, how do you account for the fact that one little application costs the company $1,000 (£500) and another little application costs the company $12,000 (£6,000)? So, things related to capital allocations were sticking points.

Another issue was trusting that the environment works - the 'all my eggs in one basket' issue again. On the technical side we had training issues involved with troubleshooting performance problems; it's different in a virtual environment. People had to understand that hitting the old power switch has a very different meaning when you've got 25 virtual servers running on a box.

And you can get yourself into trouble if you don't pay attention to the infrastructure you're running on. If you typically buy very inexpensive servers without a lot of redundancy, that may be okay for an environment where, if you lose a server, you lose one application. But if you use the same kind of servers in a virtual environment and you lose that server, maybe you take down 10 applications. It's a much larger business impact. So early on we made sure we bought Tier 1 vendor hardware, with all the right redundancy components built in, fully redundant storage networks and that sort of thing, because we do run mission critical applications on virtual infrastructure.

Has there been any user reaction to virtualisation?

Kuzmack: The end users have no concept of virtualisation. But the business owners of the application have seen our ability to deploy more quickly, whether test, development or production servers. Our ability to react to change is faster. When all of a sudden we need four more web servers to do X, we can deploy them in minutes instead of days or weeks. Business owners also see substantially reduced costs because they don't need to purchase test and development hardware. They may need to contribute some capital funding to the overall virtual hardware, but typically it tends to be much less expensive than having to buy individual servers for all the components of their various applications.

Have you tried to calculate your ROI?

Kuzmack: When we started our virtualisation efforts back in 2002, we built a very strong ROI purely on the reduced number of servers that we had to purchase. We came up with an ROI that was so high we knew nobody would believe it. We had to cut things back, but we know it's saving the company hard dollars. It's the soft dollars that are much harder to quantify. We know we're saving a lot of time and effort in terms of deploying applications, as well as in the overall flexibility and time to market for various applications. Time equals money.

Aside from savings, what other kinds of benefits have you realised?

Kuzmack: A couple of years ago we did some testing where we VMotion-ed a virtual machine from one location to another 100 miles away. We lost just one packet. Now, the plumbing required to actually do that for real wasn't there yet. But as pipes get bigger, as VMware and other companies continue to build in disaster recovery, we're going to see the capability to do things like VMotion-ing between datacentres.

A variety of people have already done it in one way or another. With things like that, virtualisation is going to change the way we do things on a large scale. Disaster recovery, business continuity, those kinds of things are pretty key in our virtualisation strategy. We don't have to do cold spares anymore for most kinds of environments. If we're having problems with a particular virtual server, we just take a snapshot of it. We let the production system continue to run and we can give the actual server that's having trouble over to the developer to troubleshoot what the problem is.

Also, building a development lab is never easy, and they are never anything like real life. Well, in our environment they are. We take a snapshot of real life [virtual machines] and pull them off into an isolated environment. Then we have a development environment that actually matches production, because it was production an hour earlier.

What does that do for you?

Kuzmack: The first things to get cut when doing development projects are test and development environments, because generally speaking, you can't afford to buy three of the same system. In a virtual environment, we don't have to worry about that as much. And when you want to roll out a new version of the application a year later, you can just take another copy of the current production environment to create a fresh development environment, as opposed to using the year-old one.

So, would you say you're getting better applications as a result?

Kuzmack: Yes. And we also get better deployments of things like patches. There have been cases where we deployed patches but were unsure of exactly what was going to happen. Now we can take a snapshot of the [virtual machine], deploy the patch and, if things go poorly, just revert with a couple of mouse clicks.

What would you say has been the most pleasant surprise for you with respect to virtualisation?

Kuzmack: From a VMware perspective, how easy it's been. Generally speaking, the virtual infrastructure stuff is pretty easy to install, especially if it's a small environment with two or three hosts. It's easy to install, easy to run and it's rock solid, very much one of those things you just don't need to worry about.

What's your biggest disappointment?

Kuzmack: 'Disappointment' may not be the right word, but the software vendors have been slow to adopt support policies for virtual environments. Licensing policies for virtual environments are all over the place. Be it Microsoft, Oracle, IBM, whoever - they're all over the map. Even the vendors themselves don't have consistent policies, and when they do, their sales forces don't necessarily know what they are.

One salesperson will say, 'Oh, yes, sure, you can do it that way.' And then you actually go and look at the license and find, no, you can't. We're large enough that if a salesperson makes a promise, we're generally able to get the vendor to live up to that promise. But for your average [small to midsize] business, they don't have that kind of dollar baseball bat to go after a vendor.