Power consumption matters. MAID suppliers Copan and Fujitsu Siemens Computers look pretty smart right now.

Q: Steve, everyone is talking about power consumption and “green” infrastructure these days. Is it more marketing hype or do I need to pay attention? -- D.G., Tempe, Ariz.

A: I’m not sure how old you are, but I remember the gas lines of 1972. Sure, I was six, but I watched TV. No one expected the oil crisis then, though in retrospect we were not only naive, but downright stupid. Our European and Asian counterparts were paying more than twice what we were, and I guess we just figured our friends down in Texas or Oklahoma would keep on pumping out the juice so that we could keep bombing down the road in our gigantic Impalas every time we needed to go 45 feet. All good things come to an end eventually; it’s just a matter of time.

Did we learn our lesson then? Absolutely not -- we stayed the course and demanded the government figure out a way to get fuel prices back down so that we could justify buying vehicles larger than a Frankfurt hotel room.

Side note: I got my license in 1980, and the first of my father’s vehicles I slowly destroyed was a Ford LTD – curb weight of 11,000 lbs or so. After a few years he showed up with a 1976 Bonneville – an absolutely magnificent vehicle, so big and plush it was like driving a luxury condo. You could run a Volkswagen over and not even know it. (I remember having 28 of my friends in the car, and collecting $4 in gas money barely moved the needle off E.)

Sorry for the trip down foggy memory lane, but the answer is yes, you need to care. It isn’t even about power costs per se; it’s about the whole enchilada.

Here are the things you need to consider when buying any computer gear for a business, whether it’s installed in a closet or filling up a huge data center -- and they are not mutually exclusive:

The cost to power up and run a gizmo may or may not be overtly relevant. If it costs $50,000 vs. $40,000 to power and run a giant storage array for three years, that difference alone isn’t going to be enough to justify picking one over the other, since the acquisition cost of the system is most likely a million dollars or more. On a percentage basis, however, there is an enormous impact on the total cost of gear that isn’t expensive, comparatively. For example, if a server blade costs $2,000 from vendor X or vendor Y, but it has a three-year power and cooling cost of $1,500 vs. $800 respectively, vendor Y ends up much cheaper once you look at the real total -- as the quick math below shows.
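If you want to sanity-check that, the math is trivial. Here’s a minimal sketch in Python (the dollar figures are the illustrative ones from the example above, not real vendor quotes):

    # Back-of-the-envelope 3-year cost comparison for the blade example above.
    # All figures are the illustrative numbers from the text, not real quotes.
    blade_price = 2000        # acquisition cost, same for both vendors ($)
    power_cooling_x = 1500    # 3-year power + cooling, vendor X ($)
    power_cooling_y = 800     # 3-year power + cooling, vendor Y ($)

    total_x = blade_price + power_cooling_x   # $3,500
    total_y = blade_price + power_cooling_y   # $2,800
    savings_pct = 100 * (total_x - total_y) / total_x
    print(f"Vendor X: ${total_x:,}  Vendor Y: ${total_y:,}  Y saves {savings_pct:.0f}%")

Multiply that 20% across a few hundred blades and suddenly the “cheap” line item isn’t.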

To figure the power costs, take the maximum wattage of the gizmo and divide by 1,000 to convert it to kilowatts, then multiply by the cost of electricity, which averages about $.10 per kilowatt-hour right now in the USA.

Then multiply that by 24 hours, times 365 days, times three years. Then take that number and multiply it by two to give you an approximate cost of cooling the gear.
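If napkins aren’t your thing, here’s the same rule of thumb as a few lines of Python (the 500-watt device is a made-up example; plug in your own gizmo’s nameplate wattage and your local utility rate):

    # 3-year power + cooling estimate using the rule of thumb above.
    max_watts = 500                 # hypothetical example device
    cost_per_kwh = 0.10             # rough U.S. average ($ per kilowatt-hour)

    kilowatts = max_watts / 1000    # convert watts to kilowatts
    hours = 24 * 365 * 3            # three years of round-the-clock operation

    power_cost = kilowatts * hours * cost_per_kwh   # about $1,314
    total_cost = power_cost * 2                     # double it to approximate cooling
    print(f"Power: ${power_cost:,.0f}  Power + cooling: ${total_cost:,.0f}")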

Consider the cost per square foot, or meter, of floor space for the gear. How much stuff (servers, ports, capacity, etc.) can fit per unit of measure? It may not matter if you are sitting on an extra 10,000 square feet of data center space, but if you are in Manhattan or some other densely populated area, that can be a huge cost.

Use the metric you actually care about, per square foot or meter, as a guide in comparing products. I’ll be giving away modeling tools to help you with this shortly, but think about real total cost of ownership when comparing products -- at least up front. How many I/Os per second per square foot can you get out of the gizmo, or how many servers per square meter, or whatever metric you really care about? Always compare equipment at its maximum physical capacity, as that is the best cost per square foot the vendor can do. That way you know when you are going to get a spike in costs by running out of vertical real estate on that floor tile. Then you can compare the ability and cost at the amount you really need. A sketch of that kind of comparison follows below.
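Until those modeling tools show up, here’s what such a density comparison looks like in Python (the two arrays and all their figures are made up for illustration):

    # Compare gear on the metric you care about, per square foot.
    # Both "arrays" and all their figures are hypothetical.
    def iops_per_sqft(max_iops: int, footprint_sqft: float) -> float:
        """Density at maximum physical capacity -- the vendor's best case."""
        return max_iops / footprint_sqft

    array_a = iops_per_sqft(max_iops=200_000, footprint_sqft=12)  # ~16,667
    array_b = iops_per_sqft(max_iops=150_000, footprint_sqft=6)   # 25,000
    print(f"Array A: {array_a:,.0f} IOPS/sq ft")
    print(f"Array B: {array_b:,.0f} IOPS/sq ft")

The smaller box wins on density even though the big one wins on raw capacity -- which matters when floor tiles are what you’re short of.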

Consider how and where hot air flows out of the stuff you already have. Cooling is hard enough; the problem is multiplied by localized hot spots, which tend to cause equipment meltdowns. If you have a few racks of blades that blow air out the back, next to a box of switches that blow air out the top, next to some arrays that blow air out the bottom front, you might be fine -- or you might not. The heat generated by some heavy-duty gear can cause huge problems for the neighboring gear. There are NO standards for how to exhaust heat from infrastructure, and everyone does it differently.

Speaking of doing things differently, there are also no standards on what power gear makers put into their products. It can be as varied as everything else, which makes things not only harder to compare, but it can also be an “I always wanted to fix a transmission” moment for your electricians. As a general rule of thumb, remember that every time power gets converted on its way down to a form the components inside can use, you will have “leakage.” I don’t think I need to tell you that leakage is never good. Anywhere.
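To see why that leakage adds up, multiply the efficiencies of each conversion stage together. The stages and percentages below are my own illustrative assumptions, not anyone’s spec sheet:

    # Each power conversion stage loses some energy as heat ("leakage").
    # Stage names and efficiencies are illustrative assumptions.
    stages = {
        "UPS": 0.94,
        "power distribution": 0.97,
        "server power supply (AC to DC)": 0.85,
        "on-board voltage regulators": 0.90,
    }

    delivered = 1.0
    for name, efficiency in stages.items():
        delivered *= efficiency

    print(f"Fraction of wall power reaching the chips: {delivered:.2f}")  # ~0.70

Roughly 30 cents of every power dollar never does useful work -- and every bit of it comes back out as heat you then pay again to remove.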

Our European friends have gone so far as to legislate that gear sold there must contain recycled materials, and be made from materials that can be recycled.

Green is “in” over there, but Asia and North America are woefully behind. Fellow ESG employee Brian Garrett spent time in a double-secret co-location facility in Canada a while back, which was one of the first I’d heard of to charge clients based exclusively on power consumption, with no regard to footprint. As a matter of fact, they encouraged their clients to spread out and leave open rack space so that they could best manage the cooling of the facility. That’s when I knew this stuff really mattered.

The industry is listening but, like always, is slow to react until the money gets bigger. Guys like Egenera started building blade servers with management as their claim to fame, but now the phone rings because of their packaging. Verari Systems of San Diego (nice town, but a football team that wears baby blue can’t possibly beat my Patriots, no matter how much more talent it has. Can it?) packs more servers and storage into a square foot than anyone I’ve ever seen, though I just started paying attention myself.

I thought guys like Copan Systems were smoking a little sunshine when they first told me their idea was giant-density arrays that spin down the drives when they’re not being accessed. They look pretty darn smart right now. Fujitsu Siemens Computers has followed suit and added that spin-down feature to its integrated three-tier array, where the bottom tier powers down as well. If you can do this kind of stuff, and the price is right, why would you ever take anything off your arrays? Storing everything, forever, has suddenly become realistic. The switch guys don’t suck a lot of juice, but with so many things to connect, density matters.
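The idea behind that spin-down trick is simple enough to sketch: a drive that sits idle past some threshold gets spun down. This toy model is my own illustration, not any vendor’s actual firmware:

    import time

    IDLE_TIMEOUT = 30 * 60   # spin down after 30 idle minutes (hypothetical)

    class Drive:
        """Toy model of a MAID-style spin-down policy."""
        def __init__(self) -> None:
            self.spinning = True
            self.last_access = time.monotonic()

        def access(self) -> None:
            if not self.spinning:
                self.spinning = True          # spin-up costs seconds of latency
            self.last_access = time.monotonic()

        def tick(self) -> None:
            # Called periodically by the array controller.
            idle = time.monotonic() - self.last_access
            if self.spinning and idle > IDLE_TIMEOUT:
                self.spinning = False         # idle drive draws almost no power

The trade-off is that the first access to a sleeping drive waits for spin-up, which is why this suits archive tiers rather than hot data.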

So the big guys dabble, the little guys innovate, and then the big guys come in and take the money once you start to spend it. It’s the circle of IT life.

Send me your questions -- about anything, really, to

Steve Duplessie founded Enterprise Strategy Group Inc. in 1999 and has become one of the most recognized voices in the IT world. He is a regularly featured speaker at shows such as Storage Networking World, where he takes on what's good, bad -- and more importantly -- what's next. For more of Steve's insights, read his blogs.