Cloud, latency and your budget

I was recently a panellist at analyst firm Ovum's forum on the Cloud, where we received a couple of interesting questions from the audience. The first was: "What's the difference between virtualisation and the cloud?"

As a panel we answered the question, explaining that a) virtualisation is not mandatory in the cloud, and b) the cloud could even sit in an on-site datacentre (private cloud only). In a nutshell it boils down to this: the minimum requirement for a cloud is a utility computing model, i.e. the provider owns the infrastructure and sells it to you by day, week or month, and by usage.

Indeed, this form of cloud is taking off: SaaS offerings such as accounting packages, Google Apps and CRM tools like Salesforce are all available and priced in this utility manner. Another form is "your application on my infrastructure" (IaaS), also with a utility billing model (e.g. Amazon EC2).

Whether we engage these services will depend on whether we like the pricing model, the facilities on offer and whether we are comfortable with the provider's security.

Laurent Lachal (Ovum's Principal Cloud Analyst) admitted that, in practice, the utility billing requirement is often secondary for customers, especially when they build their own (private) cloud. For them, the main aspects of cloud are consolidation, virtualisation and delivery over a network, and that is where the second panel question arose: "What's the impact of the network? I'm buying more and more network pipe without making the applications perform much better, and it's not cheap!"

No indeed, it isn't cheap, as available network infrastructure is currently lagging behind demand. That isn't always the case, though: raw network circuits, i.e. megabits per second (Mbps) of bandwidth, are in many ways almost commodity priced, until you ask for non-bandwidth features (QoS, low latency, etc.) in the network.

The problem here is that even after spending a fortune on bandwidth, application performance across the network often does not improve. One of the prime reasons for this is latency (often visualised as ping time), which can dominate bandwidth: once the round-trip time is high, adding more bandwidth does little for chatty or window-limited applications.
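To see why latency can swamp bandwidth, here is a minimal back-of-the-envelope sketch in Python (the window size, link speeds and round-trip times are assumed example figures, not measurements from the forum): a single TCP flow can never move data faster than its window size divided by the round-trip time, however big the circuit.

```python
# Rough illustration: one TCP connection is capped at (window size / RTT),
# no matter how much bandwidth the circuit offers.
# The window size, link speeds and RTTs below are assumed example figures.

def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP flow's throughput, in megabits per second."""
    rtt_seconds = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_seconds / 1_000_000

WINDOW = 64 * 1024  # a common default 64 KB TCP window (assumed)

for link_mbps, rtt_ms in [(100, 5), (100, 50), (1000, 50)]:
    ceiling = tcp_throughput_ceiling_mbps(WINDOW, rtt_ms)
    achievable = min(link_mbps, ceiling)
    print(f"{link_mbps:>5} Mbps link, {rtt_ms:>3} ms RTT -> "
          f"at most {achievable:.1f} Mbps for one flow")

# At 50 ms RTT a 64 KB window caps a flow at roughly 10 Mbps, so upgrading
# the circuit from 100 Mbps to 1000 Mbps changes nothing for that flow.
```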

Whichever direction you are looking in, one thing is absolutely certain: unless you have a private cloud with end users located "nearby", your applications will be delivered to you across a network. This may be a public network, or fixed circuits joining your corporate network (which may already be under stress) to the cloud datacentre(s).

Without these networks no services are possible. In IaaS, cloud infrastructure vendors have concentrated their efforts on providing tools to answer questions such as: is there enough computing power? Enough memory? Disk? I/O? That's absolutely vital. But the delivery mechanism itself, the network, although equally vital to cloud delivery, is often treated as secondary.
Make no mistake: for most cloud implementations the network (and thus latency) will be a major factor in cloud application performance, because latency is determined by distance, the nature of the network being used and what is happening on that network.
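To make the distance point concrete, here is another hedged sketch (the city pairs and the 30% route-length overhead are illustrative assumptions): light in fibre covers roughly 200 km per millisecond, so distance alone sets a floor on round-trip time that no bandwidth upgrade can remove.

```python
# Back-of-the-envelope minimum round-trip times from distance alone.
# Signals travel through optical fibre at roughly two-thirds the speed of
# light; the city pairs and 30% route-length overhead are assumed examples,
# and real networks add queuing and equipment delay on top of these floors.

SPEED_IN_FIBRE_KM_PER_MS = 200   # ~2/3 of c, expressed in km per millisecond
ROUTE_OVERHEAD = 1.3             # fibre rarely follows a straight line (assumed)

def min_rtt_ms(straight_line_km: float) -> float:
    """Lower bound on round-trip time to a datacentre that far away."""
    one_way_ms = (straight_line_km * ROUTE_OVERHEAD) / SPEED_IN_FIBRE_KM_PER_MS
    return 2 * one_way_ms

for city_pair, km in [("London - Dublin", 460),
                      ("London - New York", 5570),
                      ("London - Sydney", 17000)]:
    print(f"{city_pair:<20} >= {min_rtt_ms(km):5.1f} ms RTT")

# No amount of extra bandwidth lowers these floors: the only remedies are
# moving the service closer to its users or making the application less chatty.
```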

My next entries will look at other cloud issues, including the network and what we can do to make sure it will work.

Posted by Frank Puranik

Frank is Product Director at iTrinegy. With more than 30 years in the computing industry, he is an expert in the performance issues of applications across the world's most complex networks.
