If you're nervous about running your business applications on a public cloud, many experts recommend that you take a spin around a private cloud first.
But building and managing a cloud within your data centre is not just another infrastructure project, says Joe Tobolski, director of cloud computing at Accenture.
"A number of technology companies are portraying this as something you can go out and buy – sprinkle a little cloud-ulator powder on your data centre and you have an internal cloud," he says. "That couldn't be further from the truth."
An internal, on-premises private cloud is what leading IT organisations have been working toward for years. It begins with data centre consolidation; rationalisation of OS, hardware and software platforms; and virtualisation up and down the stack – servers, storage and network, Tobolski says.
Elasticity and pay-as-you-go pricing are guiding principles, which imply standardisation, automation and commoditisation of IT, he adds.
And it goes well beyond infrastructure and provisioning resources, Tobolski adds. "It's about the application build and the user's experience with IT, too."
Despite all the hype, we're at a very early stage when it comes to internal clouds. According to Forrester Research, only 5% of large enterprises globally are even capable of running an internal cloud, with maybe half of those actually having one, says James Staten, principal analyst with the firm.
But if you're interested in exploring private cloud computing, here's what you need to know.
First steps: Standardisation, automation, shared resources
Forrester's three tenets for building an internal cloud are similar to Accenture's precepts for next-generation IT.
To build an on-premises cloud, you must have standardised – and documented – procedures for operating, deploying and maintaining that cloud environment, Staten says.
Most enterprises are not nearly standardised enough, although companies moving down the IT Infrastructure Library (ITIL) path for IT service management are closer to this objective than others, he adds.
Standardised operating procedures that allow efficiency and consistency are critical for the next foundational layer, which is automation. "You have to be trusting of and a big-time user of automation technology," Staten says. "That's usually a big hurdle for most companies."
Automating deployment is probably the best place to start because that enables self-service capabilities. And for a private cloud, this isn't Amazon-style, in which any developer can deploy virtual machines (VMs) at will. "That's chaos in a corporation and completely unrealistic," Staten says.
Rather, for a private cloud, self-service means that an enterprise has established an automated workflow whereby resource requests go through an approvals process.
Once approved, the cloud platform automatically deploys the specified environment. Most often, private cloud self-service is about developers asking for "three VMs of this size, a storage volume of this size and this much bandwidth," Staten says. Self-service for end users seeking resources from the internal company cloud would be "I need a SharePoint volume or a file share."
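The request-approval-deployment loop Staten describes can be sketched in a few lines of code. This is a toy illustration, not any vendor's product: the class names, the auto-approval policy and the request fields are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    requester: str
    vms: int          # number of VMs requested
    storage_gb: int   # storage volume size
    approved: bool = False

class SelfServicePortal:
    """Toy model of an approval-gated self-service workflow."""

    def __init__(self, approver):
        self.approver = approver  # callable that decides approval
        self.deployed = []        # environments provisioned so far

    def submit(self, request):
        # Step 1: route the request through the approvals process.
        request.approved = self.approver(request)
        # Step 2: only approved requests trigger automated deployment.
        if request.approved:
            self.deployed.append(request)
            return f"deployed {request.vms} VMs for {request.requester}"
        return "request pending review"

# Illustrative policy: auto-approve small requests, escalate large ones.
portal = SelfServicePortal(approver=lambda r: r.vms <= 3)
print(portal.submit(ResourceRequest("dev-team", vms=3, storage_gb=100)))
print(portal.submit(ResourceRequest("dev-team", vms=50, storage_gb=9000)))
```

The point is the shape of the workflow, not the policy: the approval step sits between the request and the automated provisioning, which is what distinguishes corporate self-service from deploy-at-will.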
Thirdly, building an internal cloud means sharing resources – "and that usually knocks the rest of the companies off the list," he says.
This is not about technology. "It's organisational – marketing doesn't want to share servers with HR, and finance won't share with anybody. When you're of that mindset, it's hard to operate a cloud. Clouds are highly inefficient when resources aren't shared," Staten says.
Faced with that challenge, IT Director Marcos Athanasoulis has come up with a creative way to get participants comfortable with the idea of sharing resources on the Linux-based cloud infrastructure he oversees at Harvard Medical School (HMS) in Boston. It's a contributed hardware approach, he says.
At HMS, which Athanasoulis calls the land of 1,000 CIOs, IT faces an unusual challenge: it doesn't have the authority to tell a lab what technology to use. Some constraints are in place, but if a lab wants to deploy its own infrastructure, it can. So when HMS approached the cloud concept four years ago, it wanted "a model where we could have capacity available in a shared way that the school paid for and subsidised so that folks with small needs could come in and get what they needed to get their research done but also be attractive to those labs that would have wanted to build their own high-performance computing or cloud environments if we didn't offer a suitable alternative."
With this approach, if a lab bought 100 nodes in the cloud, it got guaranteed access to that capacity. But if that capacity was idle, others' workloads could run on it, Athanasoulis says.
"We told them – you own this hardware but if you let us integrate into the cloud, we'll manage it for you and keep it updated and patched. But if you don't like how this cloud is working, you can take it away." He adds, "That turned out to be a good selling point, and not once [in four years] has anybody left the cloud."
To support the contributed hardware approach, HMS uses Platform Computing's Platform LSF workload automation software, Athanasoulis says. "The tool gives us the ability to set up queues and suspend jobs that are on the contributed hardware nodes, so that the people who own the hardware get guaranteed access and that suspended jobs get restored."
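The guarantee Athanasoulis describes – owners always get their nodes back, while guest jobs are suspended and later restored rather than killed – can be sketched as a simple preemption rule. This is a conceptual illustration of the idea, not Platform LSF itself; the class and queue logic are assumptions made for the example.

```python
class Node:
    """One contributed node: the owner's jobs preempt guest jobs."""

    def __init__(self, owner):
        self.owner = owner
        self.running = None    # (submitter, job) currently on the node
        self.suspended = []    # guest jobs suspended, awaiting resume

    def submit(self, submitter, job):
        if self.running is None:
            # Idle contributed capacity: anyone's job may run.
            self.running = (submitter, job)
        elif submitter == self.owner and self.running[0] != self.owner:
            # Guaranteed access: suspend the guest job, don't kill it.
            self.suspended.append(self.running)
            self.running = (submitter, job)
        # Otherwise the job would wait in a queue (omitted here).

    def finish(self):
        # Owner's job completes; restore the suspended guest job.
        self.running = self.suspended.pop() if self.suspended else None

node = Node(owner="lab-A")
node.submit("lab-B", "guest-analysis")  # guest uses idle capacity
node.submit("lab-A", "owner-run")       # owner reclaims the node
print(node.running)                     # ('lab-A', 'owner-run')
node.finish()
print(node.running)                     # ('lab-B', 'guest-analysis')
```

Suspend-and-resume, rather than kill-and-requeue, is what makes the guarantee palatable to guests: their work survives the owner reclaiming the hardware.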
Don't proceed until you understand your services
If clouds are inefficient when resources aren't shared, they can be outright pointless if the services themselves aren't considered first. IBM, for example, begins every potential cloud engagement with an assessment of the different types of workloads and the risk, benefit and cost of moving each to different cloud models, says Fausto Bernardini, director of IT strategy and architecture, cloud portfolio services, at IBM.
Whether a workload has affinity with a private, public or hybrid model depends on a number of attributes, including such key ones as compliance and security but others, too, such as latency and interdependencies of components in applications, he says.
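An assessment like the one Bernardini describes can be imagined as a simple scoring exercise over those attributes. The weights, thresholds and attribute names below are purely illustrative assumptions for the sketch – they are not IBM's methodology.

```python
def recommend_model(workload):
    """Score a workload's affinity for private vs. public deployment.

    All weights and cut-offs are hypothetical, chosen only to show
    how compliance, security, latency and interdependencies might
    each push a workload toward one cloud model.
    """
    score = 0
    if workload["regulated_data"]:
        score += 3  # compliance strongly favours a private cloud
    if workload["security_sensitive"]:
        score += 2
    if workload["latency_budget_ms"] < 10:
        score += 2  # tight latency favours on-premises placement
    # Each tightly coupled dependency makes migration riskier.
    score += workload["tightly_coupled_dependencies"]

    if score >= 5:
        return "private"
    if score >= 3:
        return "hybrid"
    return "public"

batch_analytics = {
    "regulated_data": False,
    "security_sensitive": False,
    "latency_budget_ms": 500,
    "tightly_coupled_dependencies": 1,
}
print(recommend_model(batch_analytics))  # prints "public"
```

The value of such an exercise is less the final label than the discipline of enumerating each workload's attributes before choosing a cloud model.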