In one sense, IT organisations have been preparing for a downturn for some time, given the considerable pressure over the past several years to curb the rate of IT spending.

Consolidation efforts have become commonplace – datacentre consolidation initiatives are occurring in most large organisations, while server consolidation through virtualisation and blade technologies seems to top almost everyone's to-do list.

Green initiatives within datacentres represent another dimension of the ongoing effort to drive efficiency.

So when surveying the IT infrastructure landscape for consolidation or efficiency opportunities, what else might catch an IT executive's eye? Storage certainly has to come to mind. Given data growth rates and the inherent storage multiplier effect (every byte of new data typically spawns 10 to 50 bytes of copies across backups, replicas, snapshots and test instances), the question is not whether storage can be consolidated, but by how much.
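As a back-of-the-envelope illustration of that multiplier, consider how a single terabyte of new data fans out. The per-copy factors below are illustrative assumptions, not measured figures; real numbers vary widely by environment:

```python
# Back-of-the-envelope sketch of the storage multiplier effect.
# Every factor below is an illustrative assumption, not survey data.
new_data_tb = 1.0  # one terabyte of genuinely new data

copies = {
    "primary (RAID/protection overhead)": 1.3,
    "local snapshots": 2.0,
    "remote replica for disaster recovery": 1.3,
    "backup generations retained": 4.0,
    "dev/test instances": 2.0,
    "archive": 1.0,
}

total_tb = new_data_tb * sum(copies.values())
print(f"{new_data_tb:.1f} TB of new data -> ~{total_tb:.1f} TB total footprint")
for reason, factor in copies.items():
    print(f"  {reason}: {factor:.1f}x")
```

Even with these conservative factors the footprint lands above 11 times the original byte, at the low end of the commonly cited range.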

Are there ways to readily gauge the storage-consolidation potential within an organisation? Here are some basic factors to consider in weighing consolidation or efficiency improvement potential.

Utilisation: This is an obvious but important starting point. Utilisation metrics, particularly when analysed in combination with configuration and allocation data, begin to paint a picture of overall storage efficiency. They also indicate the effectiveness of capacity planning and provisioning processes.
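A minimal sketch of the kind of roll-up involved might look like the following. The array names and capacity figures are hypothetical; in practice the data would come from array management tools or a storage resource management product:

```python
# Hypothetical capacity figures (TB) per array.
arrays = [
    {"name": "array-01", "raw": 100.0, "allocated": 80.0, "used": 30.0},
    {"name": "array-02", "raw": 60.0,  "allocated": 25.0, "used": 20.0},
]

for a in arrays:
    alloc_pct = 100.0 * a["allocated"] / a["raw"]       # provisioning efficiency
    used_pct = 100.0 * a["used"] / a["allocated"]       # actual consumption
    print(f'{a["name"]}: {alloc_pct:.0f}% of raw allocated, '
          f'{used_pct:.0f}% of allocated actually used')
```

A high allocated-to-raw ratio paired with a low used-to-allocated ratio is the classic signature of over-provisioning.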

Tiered storage distribution: Assuming that a tiered storage architecture is in place, the distribution of capacity across the various tiers can indicate the level of efficiency. Ideally, one would expect a pyramid model, with the greatest capacity in the lowest, cheapest tier. An inverse pyramid, with the preponderance of storage in the top tier, represents an opportunity for improvement.
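The inverse-pyramid check itself is simple arithmetic. Here is a minimal sketch with hypothetical tier capacities, deliberately chosen to show the problem pattern:

```python
# Capacity (TB) per tier, listed top (fastest, most expensive) to bottom.
# In a healthy pyramid, capacity grows as you move down the tiers.
# These hypothetical figures deliberately show an inverse pyramid.
tiers = {"tier-1": 400.0, "tier-2": 250.0, "tier-3": 100.0}

capacities = list(tiers.values())
if any(upper > lower for upper, lower in zip(capacities, capacities[1:])):
    print("Inverse pyramid: candidate data should be demoted to cheaper tiers")
else:
    print("Distribution follows the expected pyramid shape")
```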

Allocation: How and where storage gets allocated can offer insights as well. For example, is storage for development and test instances regularly allocated from the same tier as production?
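A quick pass over volume records can surface exactly that misplacement. The records below are hypothetical; real ones would again come from an SRM tool or array reports:

```python
# Flag non-production volumes provisioned on the top tier.
# Volume records are hypothetical examples.
volumes = [
    {"name": "erp-prod-01", "env": "production",  "tier": 1},
    {"name": "erp-test-01", "env": "test",        "tier": 1},
    {"name": "erp-dev-01",  "env": "development", "tier": 3},
]

for v in volumes:
    if v["env"] != "production" and v["tier"] == 1:
        print(f'{v["name"]}: {v["env"]} data on tier 1 - candidate to demote')
```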

Complexity: Complexity doesn't always mean inefficiency, but overcomplexity can be a contributing factor. So what represents overcomplexity? One quick indicator is the number of different technology platforms and management tools that exist within the storage infrastructure.

SAN: The design of the SAN infrastructure and related port-usage data are also helpful efficiency indicators. Host-to-target port and host-to-interswitch-link port ratios, combined with port-utilisation metrics, can point to aggregation opportunities (or, conversely, to oversubscription-related bottlenecks).
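The ratio arithmetic is straightforward; judging the result is the harder part. In the rough sketch below, both the port counts and the acceptable-ratio band are assumptions for illustration only, since appropriate fan-in depends heavily on workload and link speeds:

```python
# Rough fan-in check for one fabric. Port counts and the 6:1-12:1
# comfort band are illustrative assumptions, not vendor guidance.
host_ports, target_ports, isl_ports = 240, 24, 16

fan_in = host_ports / target_ports     # host-to-target port ratio
host_per_isl = host_ports / isl_ports  # host-to-ISL port ratio

print(f"host:target fan-in = {fan_in:.0f}:1, host:ISL = {host_per_isl:.0f}:1")
if fan_in < 6:
    print("Low fan-in - target ports may be underused; aggregation candidate")
elif fan_in > 12:
    print("High fan-in - check port utilisation for oversubscription")
else:
    print("Fan-in within the assumed comfortable band")
```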

Of course, just identifying inefficiency is not enough. The big challenge in storage is actually realising those improvements. Service-disruption concerns and operational hurdles often mean that improvements are deferred until the next technology refresh.

The fiscal constraints a downturn brings only heighten the need for storage managers to ensure that limited efficiency-improvement dollars are well spent.

Jim Damoulakis is chief technology officer at GlassHouse Technologies, a leading provider of independent storage services.