Question: Can you elaborate more on these "green IT" issues that keep coming up? Is this the tail wagging the dog, or is there something real going on here?

Answer: Oh, it is real, my friend -- a little too real in some cases. First, don't let my answer make you think I've gone all hippie all of a sudden, as nothing could be further from the truth. I remain one of the people who is an eco-problem generator vs. a problem solver. I don't do it intentionally; I just have one of those lives: 87 kids = big giant vehicles and a big house with lots of lights left on constantly. I make myself feel better by separating the cardboard and paper from the trash. You know the type.

Here are some frightening realities: You can't buy any more power in the cities of Boston or Houston, and other cities are either on the tapped-out list or about to be. It doesn't matter if you are Warren Buffett or Bill Gates, you can't buy any more. That, my friends, is a pretty harsh reality. We all know that every data centre in a restricted environment, such as the uber-expensive Canary Wharf in London, is facing very real limits and will have to do something radical and most assuredly expensive to deal with those restrictions.

It isn't as simple as just packing up and moving down the road. A new data centre costs hundreds of millions of dollars and takes years to build. Cutting over to that new data centre can take just as long. Meanwhile, the demands placed on the data centre and its operations aren't staying flat, they are growing. I was recently told that the power company in Boston will write you a $4 million cheque if you break ground for a new data centre -- outside the city.

Green isn't just about building disk arrays out of reusable tin foil or switches that contain 30% animal waste. Green is about how efficiently IT gear uses space, directs air flow, is managed, etc. It's a measure of "usability" if nothing else. If you have no more room, you need to rip out big things and put in small things. If you have no more power, you need to rip out inefficient things and replace them with more efficient things -- remembering that whatever metric you are measuring (such as I/O, server cores, ports or throughput) is growing, not shrinking.
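
To put some rough numbers on that -- all of them my own illustrative assumptions, nobody's audited figures -- here's a quick Python sketch of how a one-time efficiency win gets eaten by growth:

    # Replacing inefficient gear buys headroom, but the workload keeps growing.
    # The 30% efficiency gain and 25% annual growth below are assumed figures.
    power_budget_kw = 1000          # the hard cap: the utility has no more to sell you
    current_draw_kw = 900           # assumed draw before the refresh
    efficiency_gain = 0.30          # assume new gear does the same work on 30% less power
    annual_growth = 0.25            # assume demand (I/O, cores, ports) grows 25% a year

    draw_kw = current_draw_kw * (1 - efficiency_gain)   # the one-time win
    years = 0
    while draw_kw <= power_budget_kw:
        years += 1
        draw_kw *= 1 + annual_growth

    print(f"The refresh drops draw to {current_draw_kw * (1 - efficiency_gain):.0f} kW,")
    print(f"but at {annual_growth:.0%} growth you hit the {power_budget_kw} kW cap again in about {years} years.")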

Ripping out 100 square feet of servers and replacing them with four square feet of blade servers with equivalent processing power is great unless the heat generated in the micro package causes a meltdown in that area of the data centre. I have had conversations with real people who won't buy blade servers for just that reason. Density equals heat, and heat equals hot spots, which require cooling and humidification and airflow consideration.
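
A back-of-the-envelope sketch of that density problem (the footprints and wattages below are numbers I made up for illustration, not measurements from any real data centre):

    # Blades shrink the floor space but concentrate the heat into one hot spot.
    # All footprint and wattage figures are illustrative assumptions.
    old_footprint_sqft = 100        # traditional rack servers
    old_power_watts = 40_000        # assumed draw for that footprint

    new_footprint_sqft = 4          # equivalent processing power in blade chassis
    new_power_watts = 30_000        # assume the blades are somewhat more efficient overall

    old_density = old_power_watts / old_footprint_sqft
    new_density = new_power_watts / new_footprint_sqft

    print(f"Old power density: {old_density:,.0f} W per sq ft")
    print(f"New power density: {new_density:,.0f} W per sq ft")
    print(f"Heat concentration in that spot went up about {new_density / old_density:.0f}x")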

It also doesn't make sense to replace a giant disk array that is an inefficient consumer of power with a more efficient array of similar capacity if the new one doesn't perform to the levels required, because you'll just end up buying twice as much of the new stuff and land right back at the same problem.

All this leads me to the next issue. In my quest for understanding all things green and IT, I have discovered that, lo and behold, technology products might be the worst overall abusers of energy. Tech is all about making things smaller and therefore faster, and jamming smaller, faster things into ever-shrinking packages. Each time we do that, we make things that suck more power per square inch and generate more heat.

Coal burning might be bad for the ozone, but at least the process extracts and redirects as much of the energy as possible. We burn coal to create heat, which is used to generate electricity. The by-products of that process -- steam and heat -- head up the chimney as emissions, but while the steam is on its way out, the residual heat is used to generate yet more electricity in another way. At the end of the process, the emissions (while arguably dirty and bad) are at least much cooler, because the heat generated by the whole thing has been turned into energy.

In our data centre (or living room), the gizmos that suck power perform their designed task of processing, but the heat that is released during that process is dissipated into the air, and we don't get any value out of that at all. As a matter of fact, we experience a negative impact -- we have to buy more power to cool things and redirect air.
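
The industry talks about this penalty as a ratio of total facility power to IT power (PUE is the usual label). Here's a sketch with an assumed ratio of 2.0 and an assumed 500 kW load -- figures picked for illustration, not from any survey:

    # Every watt the gizmos draw drags along more watts for cooling and air handling.
    # The 2.0 overhead ratio (akin to PUE) and the 500 kW load are assumed figures.
    it_load_kw = 500                # assumed power drawn by servers, storage and switches
    overhead_ratio = 2.0            # assumed total-facility-to-IT power ratio

    total_facility_kw = it_load_kw * overhead_ratio
    cooling_and_losses_kw = total_facility_kw - it_load_kw

    print(f"IT gear draws {it_load_kw} kW; the facility draws {total_facility_kw:.0f} kW")
    print(f"{cooling_and_losses_kw:.0f} kW goes to cooling, fans and conversion losses --")
    print("power we buy purely to get rid of heat we already paid to create")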

Another interesting point to contemplate: We all run around thinking power costs us 8 cents or so per kilowatt-hour. That is true if you live in my house, but not necessarily true for a data centre. By the time you account for multiple concurrent power grids running in parallel into the shop, heavy power-conditioning systems and multiple diesel-powered backup generation systems, that 8 cents might now be $10. That's a big spread.
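
Here's a sketch of how those overheads compound; every multiplier is a hypothetical I made up to show the shape of the problem, and the loadings behind a figure like $10 would be heavier still:

    # How the retail rate balloons once redundant feeds, conditioning, backup
    # generation and the cooling they all need are layered on top of it.
    # Every multiplier here is a hypothetical assumption, not audited data.
    retail_rate = 0.08              # $ per kWh at the meter

    overheads = {
        "redundant parallel grid feeds": 2.0,
        "power conditioning and UPS losses": 1.3,
        "diesel backup generation, amortised per kWh": 1.5,
        "cooling the heat all of the above creates": 2.0,
    }

    effective_rate = retail_rate
    for item, multiplier in overheads.items():
        effective_rate *= multiplier
        print(f"after {item}: ${effective_rate:.2f} per kWh")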

I contend that the overwhelming majority of data centres contain infrastructure products that were all designed for a different era. Commercial computing has its roots in transactional systems, and those systems have been where all the "green" is spent. The business was run on systems designed to process, house, and protect ever-increasing transaction rates.

Big iron is where we came from, and today big iron is still at the design core of what we buy. It is only logical that vendors that won with big iron would try to repackage its elements as the world slowly changed. Mainframes became modular Unix boxes. Single-frame arrays became modular arrays. Vendors cut up their big boxes into cheaper smaller boxes because that's what we wanted -- and now we are stuffed to the gills with them.

There is no difference between a "monolithic" disk system and a "modular" disk system -- other than that the add-on components and cost of entry are smaller. Both have a beginning and an end. The primary difference is marketing. What we need is smarter packaging that deals with the new realities. The overwhelming majority of data we create and have today is not dynamic; it's fixed. It doesn't change. It isn't transactional.

We don't need the same levels of performance, availability, etc. This is why technologies such as MAID (massive array of idle disks) and data de-duplication are inevitable rather than flashes in the pan -- they help the green factor, but more importantly, they address the business-reality factor.
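
A sketch of why, using disk sizes, wattages and a dedup ratio I've assumed purely for illustration (these are not vendor numbers):

    # De-duplication means fewer spindles; MAID means the spindles you keep
    # spend most of their lives spun down. All figures are illustrative assumptions.
    raw_data_tb = 500
    dedup_ratio = 5.0               # assume 5:1 reduction on mostly fixed data
    tb_per_disk = 0.5               # assumed usable capacity per disk
    watts_per_active_disk = 12      # assumed draw while spinning
    idle_fraction = 0.7             # assume MAID keeps 70% of disks spun down
    idle_power_fraction = 0.2       # assume an idle disk draws 20% of active power

    disks_before = raw_data_tb / tb_per_disk
    disks_after = (raw_data_tb / dedup_ratio) / tb_per_disk

    power_before_kw = disks_before * watts_per_active_disk / 1000
    power_after_kw = disks_after * watts_per_active_disk * (
        (1 - idle_fraction) + idle_fraction * idle_power_fraction
    ) / 1000

    print(f"Without dedup and MAID: {disks_before:.0f} disks, {power_before_kw:.1f} kW")
    print(f"With dedup and MAID:    {disks_after:.0f} disks, {power_after_kw:.1f} kW")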

If we didn't shove all our data into transactional systems, we wouldn't need to keep adding bigger and bigger hardware, which leads to higher and higher software costs. If we used things more efficiently and matched the realities of our new world to the infrastructure that is available to us, we would never make the decisions we do. This is why VMware is kicking butt -- not because you can get electricity company credits for using it (you can, I swear), but because the reality of IT is that we don't need a million new boxes; we need to act like we have a million boxes but manage a hundred. It's simple business.
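
The consolidation arithmetic is simple enough to sketch; the utilisation and wattage figures below are mine, assumed for the example, not anything VMware publishes:

    # Lots of lightly-used physical boxes collapse onto a few well-used hosts.
    # Utilisation and wattage figures are assumptions for illustration only.
    import math

    physical_servers = 1000
    avg_utilisation = 0.08          # assume each box idles along at roughly 8% busy
    watts_per_server = 400          # assumed draw per physical box

    target_host_utilisation = 0.60  # assume consolidated hosts can run at 60% busy
    watts_per_host = 600            # assume the virtualisation hosts are beefier

    hosts_needed = math.ceil(physical_servers * avg_utilisation / target_host_utilisation)

    print(f"{physical_servers} physical boxes collapse onto {hosts_needed} virtualisation hosts")
    print(f"Power: {physical_servers * watts_per_server / 1000:.0f} kW "
          f"down to {hosts_needed * watts_per_host / 1000:.0f} kW")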

For the next five to 10 years, this is going to be a very, very big topic because of the true green motivations: the business implications, not the environment. Sure, folks will say they care about the trees and the spotted owls, but they really care about the bucks. Up until this point, as long as you could throw money at the problem, you didn't have to concern yourself with actually managing things. Now there is no one to take your money.

The inability to buy additional power is hard to deal with. Now folks are going to have to demand more of the IT industry, or they are going to run out of room to do business. This could well be the biggest inflection point in technology history -- a perfect storm where we can't solve our problems by throwing money at them, which means there is a big opportunity for new, forward-thinking folks to completely turn a long-standing industry right on its head.

Send me your questions -- about anything, really -- to [email protected]

Steve Duplessie founded Enterprise Strategy Group in 1999 and has become one of the most recognised voices in the IT world.