Tightening regulation and growing volumes of data are driving institutions to invest heavily in IT and datacentre capacity: according to Digital Realty Trust, corporate datacentre requirements have grown by over 20 per cent during the past year alone.

Following the Digital Britain report in June 2009, Lord Carter stated that the datacentre sector must strive to build more data storage facilities in the UK if it is to meet growing demand.

This underlines a growing realisation among the business community that datacentre space is increasingly sought after. Wherever they can, therefore, CEOs need to do more with less, accommodating more applications and data without compromising the resilience of the business.

In the pursuit of maintaining a lean balance sheet with optimum cash flow, boards are paring back on new capital investments (CapEx), opting wherever possible to fund projects from operating expenditure (OpEx) instead, and acquiring new capabilities as “managed services”.

In the context of office space, company cars, even software, this “lease-based” approach is nothing new. It offers businesses the flexibility to scale upwards or downwards with ease, and enables them to adopt the latest technologies cost-effectively.

Such an approach can also be brought to bear on the datacentre, where the benefits are no less tangible. Indeed, when one considers that datacentre investments are generally written down over ten years – an eternity in technology terms – the managed services approach makes compelling business sense.

Perhaps the greatest value lies in the improved use of resources such an approach can offer. Financial services organisations generally operate datacentres in pairs, in a "live-live" redundancy configuration.

If one facility goes offline, the other functions alone, so end users experience no impairment to Information Availability. Outsourcing one of the "live" facilities to a managed service provider – or selected operations within it, such as storage environments – frees up space and resources, and could remove the need to lobby for CapEx in these difficult times.
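The "live-live" pair described above can be pictured as follows: both sites serve traffic, and if one goes offline the survivor carries the load alone. This is a minimal illustrative sketch; the site names and routing logic are invented for the example, not any provider's actual implementation.

```python
# Minimal sketch of a "live-live" datacentre pair: both sites are active,
# and requests are served by whichever site is still online.
# Site names and logic are illustrative assumptions only.

class Site:
    def __init__(self, name: str):
        self.name = name
        self.online = True

def route(request: str, sites: list) -> str:
    """Serve the request from any online site; fail only if all are down."""
    for site in sites:
        if site.online:
            return f"{request} served by {site.name}"
    raise RuntimeError("all sites offline: availability impaired")

pair = [Site("DC-A"), Site("DC-B")]
print(route("trade#1", pair))   # both live: first site answers
pair[0].online = False          # one facility goes offline...
print(route("trade#2", pair))   # ...the other functions alone
```

The point of the pattern is that end users never see the failover: the routing layer simply stops offering requests to the failed site.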

Such a move would also enable IT managers to set levels of availability for the different services supported within it, application by application. Whilst highly visible production systems such as trading environments will require immediate failover and synchronous replication, other, less sensitive systems, such as a “work share” intranet, may not need immediate recovery.
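Setting availability application by application amounts to maintaining a tiered service catalogue. The sketch below shows one way this might look; the application names, tier labels and recovery targets are illustrative assumptions, not any institution's actual policy.

```python
# Illustrative sketch of per-application availability tiers, as described
# above: a trading platform gets synchronous replication and immediate
# failover, while a "work share" intranet tolerates slower recovery.
# All names and figures are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class AvailabilityTier:
    name: str
    replication: str          # "synchronous" or "asynchronous"
    rto_minutes: int          # recovery time objective
    immediate_failover: bool

TIERS = {
    "trading_platform":   AvailabilityTier("mission-critical", "synchronous", 0, True),
    "customer_portal":    AvailabilityTier("business-critical", "asynchronous", 60, False),
    "workshare_intranet": AvailabilityTier("standard", "asynchronous", 480, False),
}

def requires_immediate_failover(app: str) -> bool:
    """Return True if the application's tier mandates instant failover."""
    return TIERS[app].immediate_failover

print(requires_immediate_failover("trading_platform"))    # True
print(requires_immediate_failover("workshare_intranet"))  # False
```

Matching spend to each application's tier, rather than provisioning everything for immediate failover, is where the savings come from.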

The managed services approach extends beyond the simple provision of real-estate, however. As part of the outsourcing arrangement, the managed service provider should be able to provide teams of engineers who man the facilities 24/7, monitor the hardware and carry out basic maintenance tasks.

These “intelligent hands” perform a vital role. Not only do they ensure the systems function at optimum levels, they also allow CEOs to keep their own teams focused on the services and systems the institution manages for itself, with no dilution of resources.

There has been much talk of a 'datacentre tipping point' in terms of the amount of data we now produce, with many hardware vendors trying to reduce the footprint per cabinet of servers. Ultimately, there is no 'silver bullet': business decision makers must simply manage demand as best they can with the technology currently available.

Some have talked of ‘the cube’ (a smaller, higher density datacentre operation) as a solution, but this still uses a lot of cooling power and therefore a lot of energy.

Additionally, there is the possibility of operating datacentres in colder climates, reducing the need for cooling. Whilst good from a power usage effectiveness (PUE) perspective, there is always the nagging question of connectivity. To make the business as resilient as possible to extreme heat and any other cause of downtime, it is better to be part of an integrated datacentre network than to rely on an isolated datacentre situated in Scotland or even Iceland.
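Power usage effectiveness, mentioned above, is simply the ratio of total facility power to the power drawn by the IT equipment alone; an ideal facility scores 1.0. A quick illustrative calculation (all figures are invented for the example):

```python
# Illustrative sketch of power usage effectiveness (PUE):
#   PUE = total facility power / IT equipment power
# An ideal facility scores 1.0; cooling overhead pushes it higher.
# All figures below are invented for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return PUE: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A warm-climate site spending heavily on chillers...
warm_site = pue(total_facility_kw=2000.0, it_equipment_kw=1000.0)  # 2.0
# ...versus a cold-climate site relying largely on free-air cooling.
cold_site = pue(total_facility_kw=1300.0, it_equipment_kw=1000.0)  # 1.3

print(warm_site, cold_site)
```

This is why cold-climate sites look attractive on paper: every kilowatt not spent on chillers lowers the ratio, even though, as noted above, connectivity and isolation remain the harder problems.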

Outsourcing datacentre capabilities, as opposed to running an in-house facility, also cuts staff and maintenance costs. These costs are especially pronounced in the summer, when the additional load on the chiller units makes them more likely to clog up and run less efficiently.

Trying to save money by keeping the operation in-house can also be something of a false economy. The operational expenditure of a managed service may seem quite an investment, but if the business suffers an outage and has to hire temporary chiller units to cope, that hire cost can soon escalate.

This could leave the decision makers wishing they had left the running of the datacentre to the experience of a managed services provider.

In the wake of the Digital Britain report, many businesses will be debating the merits of outsourcing their datacentre operation amid budget constraints. The report should serve as a reminder that businesses need to assess whether they have the core competencies and capabilities to cope with the pressures of a modern datacentre workload.

If not, then it may be prudent to outsource to a specialist and focus on the main area of their business: their customers. Analyst Ovum recently reported that the top ten UK outsourcers saw the total contract value of their deals swell by 31 per cent year on year – surely testament to the growing conviction among CEOs that outsourcing to a specialist managed services provider is more efficient in terms of both cost and capabilities.

The rapid pace of technological change, and the prospect of ever tighter regulations, means that businesses’ datacentre requirements are unlikely to evaporate any time soon. By selectively outsourcing key functions to a managed service provider, CEOs can make far more effective use of the facilities their organisations already own, whilst growing their datacentre footprint sustainably and cost-effectively.

Keith Tilley is UK MD and SVP Europe, SunGard Availability Services