Tightening regulation and growing volumes of data are driving institutions to invest significant sums in IT and datacentre capacity: according to Digital Realty Trust, corporate datacentre requirements have grown by more than 20 per cent during the past year alone.
Following the Digital Britain report in June 2009, Lord Carter stated that the datacentre sector must strive to build more data storage facilities in the UK if it is to meet growing demand.
This reflects a growing realisation within the business community that datacentre space is increasingly sought after. Wherever they can, therefore, CEOs need to do more with less: accommodating more applications and data without compromising the resilience of the business.
In the pursuit of maintaining a lean balance sheet with optimum cash flow, boards are paring back on new capital investments (CapEx), opting wherever possible to fund projects from operating expenditure (OpEx) instead, and acquiring new capabilities as “managed services”.
In the context of office space, company cars, even software, this “lease-based” approach is nothing new. It offers businesses the flexibility to scale upwards or downwards with ease, and enables them to adopt the latest technologies cost-effectively.
Such an approach can also be brought to bear on the datacentre, where the benefits are no less tangible. Indeed, when one considers that datacentre investments are generally written down over ten years – an eternity in technology terms – the managed services approach makes compelling business sense.
Perhaps the greatest value lies in the improved use of resources that such an approach offers. Financial services organisations generally operate datacentres in pairs, in a “live:live” redundancy configuration.
If one of them goes offline, the other functions alone, so end users do not experience any impairment to Information Availability. Outsourcing one of the “live” facilities – or selective operations within it, for example storage environments – to a managed service provider frees up space and resources, and could remove the need to lobby for CapEx in these difficult times.
Such a move would also enable IT managers to set levels of availability for the different services the facility supports, application by application. Whilst highly visible production systems such as trading environments will require immediate failover and synchronous replication, other, less sensitive systems, such as a “work share” intranet, may not need immediate recovery.
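To make the tiering idea concrete, the sketch below shows one way such an application-by-application availability policy might be recorded. The application names, tier labels, replication modes and recovery time objectives (RTOs) are all illustrative assumptions, not taken from any specific institution or provider.

```python
# Illustrative sketch only: application names, tiers and RTO figures are
# hypothetical examples of a per-application availability policy.

# Each tier dictates a replication mode and a recovery time objective.
TIERS = {
    "tier1": {"replication": "synchronous",    "rto_minutes": 0},     # immediate failover
    "tier2": {"replication": "asynchronous",   "rto_minutes": 60},    # near-line recovery
    "tier3": {"replication": "nightly-backup", "rto_minutes": 1440},  # next-day restore
}

# Classification set application by application.
APPLICATIONS = {
    "trading-platform":   "tier1",  # highly visible production system
    "risk-reporting":     "tier2",
    "workshare-intranet": "tier3",  # does not need immediate recovery
}

def recovery_plan(app: str) -> dict:
    """Return the replication mode and RTO agreed for an application."""
    tier = APPLICATIONS[app]
    return {"application": app, "tier": tier, **TIERS[tier]}

print(recovery_plan("trading-platform"))
print(recovery_plan("workshare-intranet"))
```

The point of writing the policy down this way is that the service levels bought from the managed service provider can be matched, and priced, tier by tier rather than applying the most expensive level of protection to everything.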
The managed services approach extends beyond the simple provision of real-estate, however. As part of the outsourcing arrangement, the managed service provider should be able to provide teams of engineers who man the facilities 24/7, monitor the hardware and carry out basic maintenance tasks.
These “intelligent hands” perform a vital role. Not only do they ensure the systems function at optimum levels, they also allow CEOs to keep their own teams focused on the services and systems the institution manages for itself, with no dilution of resources.
There has been much talk of a ‘datacentre tipping point’ in terms of the volume of data we now produce, with many hardware vendors trying to reduce the footprint per cabinet of servers. Ultimately, there is no ‘silver bullet’, and business decision makers must simply look to manage the gap between demand and the capabilities of the technology currently available.
Some have talked of ‘the cube’ (a smaller, higher-density datacentre operation) as a solution, but this still requires a great deal of cooling power and therefore a great deal of energy.