How to optimise public cloud applications

Moving to the public cloud can be as easy as migrating your applications and paying your monthly bill. But are you spending more than you should? These five public cloud application optimisation techniques will help you save money and improve performance to boot.



1. Refactor code to address cloud service providers' billing patterns

AWS charges not only for the compute, storage and network bandwidth you use - it also charges every time your application reads from or writes to storage. As a result, you may want to gather up reads and writes in your application and batch them into single operations wherever possible. That way, you pay for one bulk operation instead of incurring a separate charge for every individual read or write.
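The batching idea can be sketched as a small buffered writer. This is an illustration, not a particular cloud SDK: `put_items` stands in for whatever bulk API your provider offers (for example, a batch write call on a cloud database), and the batch size of 25 is an arbitrary assumption.

```python
# Sketch: buffer small writes and flush them as one bulk call,
# cutting the number of billable storage requests.

class BufferedWriter:
    def __init__(self, store, batch_size=25):
        self.store = store          # any object exposing a bulk put_items() call
        self.batch_size = batch_size
        self.buffer = []

    def write(self, item):
        self.buffer.append(item)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.store.put_items(self.buffer)  # one billable request per batch
            self.buffer = []

# Stand-in store that counts billable requests for the demonstration.
class CountingStore:
    def __init__(self):
        self.requests = 0
        self.items = []

    def put_items(self, items):
        self.requests += 1
        self.items.extend(items)

store = CountingStore()
writer = BufferedWriter(store, batch_size=25)
for i in range(100):
    writer.write({"id": i})
writer.flush()
print(store.requests)  # 4 bulk requests instead of 100 individual writes
```

With per-request billing, 100 individual writes become four billable batch calls; the right batch size depends on your provider's limits and how much write latency your application can tolerate.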

The overall effect of this cloud optimisation technique depends on the pricing model of the public cloud service provider (CSP) you sign up with. Whichever CSP you choose, however, refactoring is also an opportunity to improve application performance.

2. Optimise chosen default cloud instances

When setting up instances with EC2, you can choose among various levels of compute, memory and storage. In addition, EC2 offers Spot Instances: spare capacity, available at any given time, offered at lower prices than standard rates.

It pays to spend some time experimenting with your application to determine the optimal level of compute, memory and storage you need. This ensures you do not overspend on capacity or configuration, and it helps you decide whether Spot Instances (or the equivalent offering from another CSP) are worth considering.
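One way to structure that experiment is to benchmark the application on each candidate instance type and rank the types by cost per unit of work rather than raw hourly price. The sketch below uses made-up instance names, prices and throughput figures purely for illustration:

```python
# Sketch: rank instance types by cost per unit of work, using
# hypothetical hourly prices and measured benchmark throughput.

instance_types = {
    # name: (hourly_price_usd, requests_handled_per_hour) -- illustrative numbers
    "small":  (0.023, 40_000),
    "medium": (0.046, 90_000),
    "large":  (0.092, 150_000),
}

def cost_per_million_requests(price, throughput):
    """Normalise hourly price by measured throughput."""
    return price / throughput * 1_000_000

ranked = sorted(
    instance_types.items(),
    key=lambda kv: cost_per_million_requests(*kv[1]),
)
best = ranked[0][0]
print(best)  # "medium" -- cheapest per request in this made-up data
```

Note that the cheapest instance per hour is not necessarily the cheapest per request served; in this made-up data, the mid-sized instance wins because its throughput more than keeps pace with its price.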

3. Balance service levels needed with default cloud instances

Every application has its own service level profile, that is, its general purpose and function. Your customer-facing e-commerce site has a different service level than, say, your internal employee portal. Evaluating the costs of public cloud instances against the service levels each application needs can help you optimise your public cloud costs.


Think back to the 29 June Netflix outage. Given the storage- and bandwidth-intensive nature of the video streaming service, pressing another of Amazon's data centres elsewhere in the country into action may not have been feasible. Less intensive, and more mission-critical, services can, however, be optimised to be served out of alternative data centres if necessary, making them immune to such outages.

4. Fine-tune auto scaling rules

Applications that automatically scale the number of server instances, both up and down, offer a great opportunity for optimisation. For example, you may have one auto scaling rule that spawns a new instance once CPU utilisation reaches 80 percent across all current instances and another that terminates an instance once average CPU utilisation drops to 40 percent.

How do you know that 80 percent and 40 percent are the right numbers? Why not 85 percent and 35 percent? With the latter thresholds, you would spawn fewer instances and lower your costs.

In addition, applications have varying compute, storage and bandwidth needs. Your rules, then, may need to be based on a combination of these factors and not just on CPU utilisation. You may want to experiment with combinations that look logical for your public cloud applications and the service levels they need, then optimise these percentages over a period of time.
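A rule that combines the three factors could look like the sketch below. The thresholds and the "busiest resource drives the decision" policy are illustrative assumptions, not a recommended production policy:

```python
# Sketch: a scaling decision driven by CPU, memory and bandwidth together,
# rather than CPU utilisation alone. Thresholds are starting points to tune.

SCALE_UP_THRESHOLD = 0.80
SCALE_DOWN_THRESHOLD = 0.40

def scaling_decision(cpu, memory, bandwidth):
    """Return 'up', 'down' or 'hold' from average utilisation values (0.0-1.0)."""
    # The busiest resource triggers scale-up; every resource must be
    # quiet before an instance is terminated.
    peak = max(cpu, memory, bandwidth)
    if peak >= SCALE_UP_THRESHOLD:
        return "up"
    if peak <= SCALE_DOWN_THRESHOLD:
        return "down"
    return "hold"

print(scaling_decision(0.85, 0.30, 0.20))  # up: CPU-bound
print(scaling_decision(0.35, 0.90, 0.20))  # up: memory-bound despite idle CPU
print(scaling_decision(0.30, 0.25, 0.10))  # down: everything is quiet
```

The point of the combined rule is visible in the second case: a CPU-only rule would never scale up a memory-bound workload, while this one does.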

5. Database row optimisation

Applications such as Netflix have a localised usage pattern, meaning that, most of the time, customers access only the data that pertains to them. Netflix uses AWS Regions and Availability Zones to host servers that serve customers who live near those data centres.

This is possible thanks to database sharding, which lets you partition the rows in your database and store the different partitions in databases that reside in various data centres. The same applies to applications such as credit card processing, since sharding can exploit localised patterns of use, such as looking up one card owner's transactions or one merchant's transactions.

You don't need to store all database rows in all database instances. If you can partition your database rows and store them in database shards in different instances, you can take advantage of the locality of usage patterns. This will reduce the number of server instances you need and, hence, the cost of your public cloud service.
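At its simplest, sharding comes down to a routing function that maps each row's partition key to one database instance. The shard names below are hypothetical; the stable-hash technique itself is standard:

```python
# Sketch: route each row to a shard by hashing its partition key, so one
# customer's rows live in a single (ideally nearby) database instance.

import hashlib

SHARDS = ["db-us-east", "db-us-west", "db-eu-west"]  # hypothetical instance names

def shard_for(customer_id: str) -> str:
    # A stable cryptographic hash: the same customer always maps to
    # the same shard, across processes and restarts.
    digest = hashlib.sha256(customer_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

# Queries for one customer only ever touch that customer's shard.
print(shard_for("alice") == shard_for("alice"))  # True: routing is deterministic
```

A deliberately stable hash is used rather than Python's built-in `hash()`, which is randomised per process; note also that modulo routing reshuffles keys whenever the shard count changes, which is why production systems often prefer consistent hashing.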

When you move your application to the public cloud, it may work very well as it is, without any changes. However, if you pay attention to how your CSP charges you and put that in the context of your application's pattern of compute, memory, storage and network bandwidth usage, you can easily reduce your public cloud charges. Optimising the application itself with some refactoring may improve its performance and lengthen its life, while experimenting with and fine-tuning your default instances and auto scaling rules may help you further lower CSP costs.
